Steady-State Diffusion Approximations of Markov Chains: Error Analysis
Diffusion approximations of Markov chain models are prominent in many fields, including queueing theory, inventory management, and Markov chain Monte Carlo. The approximations usually offer better computational tractability than their Markov chain counterparts, and at times they also yield analytical insights not available in the original model. This talk focuses on quantifying the diffusion approximation error in steady state. The last five years have seen a number of papers using Stein's method to address this question. In Stein's method, the approximation error is tied to the derivatives of the solution to the Poisson equation of the diffusion process, and bounding those derivatives is key to any application of the method. However, this problem is difficult because it is equivalent to bounding the derivatives of the solution to a second-order partial differential equation (PDE). This technical bottleneck prevents wider adoption of Stein's method as the go-to tool for studying approximation errors. I will present a new spin on the traditional Stein approach that offers an alternative to bounding PDE solution derivatives by linking the approximation error to the sensitivity of the Markov chain to its initial condition. By the end of the talk, audience members will have an intuitive way to check whether this methodology applies to their setting.
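The Stein argument alluded to above can be sketched in generic notation; this is a standard textbook-style outline, not the talk's own derivation, and the symbols (generators G_X, G_Y, stationary distributions nu, pi, test function h) are placeholders introduced here for illustration.

```latex
% Let G_Y be the generator of the approximating diffusion, with
% stationary distribution \pi. For a test function h, the Poisson
% equation for the diffusion reads
\[
  G_Y f_h(x) = \pi(h) - h(x).
\]
% If \nu is the stationary distribution of the Markov chain with
% generator G_X, then \nu(G_X f_h) = 0, so taking \nu-expectations
% of the Poisson equation yields
\[
  \nu(h) - \pi(h) = \nu\bigl((G_X - G_Y) f_h\bigr).
\]
% Taylor expansion reduces the right-hand side to terms involving the
% derivatives of f_h, which is why bounding those derivatives -- or
% finding a way around doing so -- is the crux of the method.
```

The last display makes the bottleneck concrete: the approximation error is controlled by how the two generators differ when applied to the Poisson-equation solution, and quantifying that difference requires control of the derivatives of f_h.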