We study the Bayesian inverse problem of inferring the log-normal slowness function of the eikonal equation from noisy observations of its solution at a set of spatial points. We consider a Gaussian prior probability measure for the log-slowness, which is expressed as a countable linear expansion in mutually independent normal random variables. The well-posedness of the inverse problem is established using the variational formulation of the eikonal equation. We approximate the posterior by finitely truncating the expansion of the log-slowness, with an explicit error estimate in the Hellinger metric with respect to the truncation level. Solving the truncated eikonal equation by the Fast Marching Method, we obtain an approximation of the posterior in terms of the truncation level and the grid size of the Fast Marching Method. Using this result, we develop a Multilevel Markov Chain Monte Carlo (MLMCMC) method and justify its convergence. In comparison to the case of a forward log-normal elliptic equation, proving the error estimate for the MLMCMC method is technically more involved, as the available error bound for the Fast Marching Method holds only when the grid size does not exceed a threshold, which is not uniform over the realizations of the log-normal slowness. Using the heap sort procedure for the Fast Marching Method, our MLMCMC method achieves a prescribed level of accuracy for approximating the posterior expectation of quantities of interest at an essentially optimal level of complexity, equivalent to that of the forward solver. This reduces the computational complexity drastically in comparison to the plain Monte Carlo method, where a large number of realizations of the forward equation are solved with equally high accuracy. Numerical examples confirm the theoretical results on the convergence rate of the method and its optimal complexity. This is joint work with Zhan Fei Yeo.
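As an illustration of the forward solver mentioned above, the following is a minimal sketch of the Fast Marching Method with a binary heap for a two-dimensional eikonal equation |∇u| = s on a uniform grid. The function name, argument layout, and first-order upwind discretization are illustrative choices, not the authors' implementation; in the work described, this solver would be applied to each realization of the truncated log-normal slowness.

```python
import heapq
import math


def fast_marching(slowness, h, sources):
    """Solve |grad u| = slowness on a uniform 2-D grid by the Fast Marching
    Method, maintaining the narrow band in a binary heap.

    slowness: 2-D nested list of positive values s(x) at grid nodes
    h:        grid spacing
    sources:  list of (i, j) nodes where u = 0
    """
    n, m = len(slowness), len(slowness[0])
    INF = float("inf")
    u = [[INF] * m for _ in range(n)]
    accepted = [[False] * m for _ in range(n)]
    heap = []
    for (i, j) in sources:
        u[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # Smallest tentative value along each grid axis (upwind neighbours).
        a = min(u[i - 1][j] if i > 0 else INF, u[i + 1][j] if i < n - 1 else INF)
        b = min(u[i][j - 1] if j > 0 else INF, u[i][j + 1] if j < m - 1 else INF)
        if a > b:
            a, b = b, a
        f = slowness[i][j] * h
        if b - a >= f:
            # One-sided update: the quadratic would violate causality.
            return a + f
        # Two-sided update: root of (u-a)^2 + (u-b)^2 = f^2 with u > b.
        return 0.5 * (a + b + math.sqrt(2.0 * f * f - (b - a) ** 2))

    while heap:
        val, i, j = heapq.heappop(heap)
        if accepted[i][j]:
            continue  # stale heap entry; node was already finalized
        accepted[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m and not accepted[ni][nj]:
                new = update(ni, nj)
                if new < u[ni][nj]:
                    u[ni][nj] = new
                    heapq.heappush(heap, (new, ni, nj))
    return u
```

Each node is accepted exactly once, so with N grid nodes the heap operations give the O(N log N) complexity that underlies the cost model of the multilevel analysis.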