Estimate the Markov chain associated with a time series \(X = (X_0, \ldots, X_{T-1})\) assuming that the state space is the finite set \(\{X_0, \ldots, X_{T-1}\}\) (duplicates removed). The estimation is by maximum likelihood. The estimated transition probabilities are given by the matrix \(P\) such that \(P[i, j] = N_{ij} / N_i\), where \(N_{ij} = \sum_{t=0}^{T-2} 1_{\{X_t=s_i, X_{t+1}=s_j\}}\), the number of transitions from state \(s_i\) to state \(s_j\), while \(N_i = \sum_j N_{ij}\), the total number of transitions out of \(s_i\). The result is returned as a MarkovChain instance.


A time series of state values, from which the transition matrix will be estimated, where X[t] contains the t-th observation.


A MarkovChain instance where mc.P is a stochastic matrix estimated from the data X and mc.state_values is an array of values that appear in X (sorted in ascending order).
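The maximum-likelihood estimate above can be sketched directly in NumPy. The helper below is hypothetical (not the library's implementation) and assumes every observed state has at least one outgoing transition, so no row of counts is all zeros:

```python
import numpy as np

def estimate_transition_matrix(X):
    """Hypothetical sketch of ML estimation: P[i, j] = N_ij / N_i,
    where N_ij counts transitions s_i -> s_j and N_i = sum_j N_ij."""
    X = np.asarray(X)
    states = np.unique(X)                       # sorted distinct state values
    index = {s: i for i, s in enumerate(states)}
    n = len(states)
    counts = np.zeros((n, n))
    for a, b in zip(X[:-1], X[1:]):             # tally each transition N_ij
        counts[index[a], index[b]] += 1
    # normalize each row by N_i (assumes every state is left at least once)
    P = counts / counts.sum(axis=1, keepdims=True)
    return states, P

states, P = estimate_transition_matrix([0, 1, 1, 0, 1])
# transitions observed: 0->1, 1->1, 1->0, 0->1
```

Here state 0 always moves to 1, while state 1 splits its two exits evenly between 0 and 1, so the estimated rows are [0, 1] and [0.5, 0.5].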

quantecon.markov.estimate.fit_discrete_mc(X, grids, order='C')

Function that takes an arbitrary time series \((X_t)_{t=0}^{T-1}\) in \(\mathbb{R}^n\), plus a set of grid points in each dimension, and converts it to a MarkovChain by first discretizing the observations onto the grid and then estimating the Markov chain from the discretized series.

X: array_like(ndim=2)

Time series such that the t-th row is \(x_t\); it should have shape T x n, where n is the number of dimensions.

grids: array_like(array_like(ndim=1))

Array of n sorted arrays, giving the set of grid points in each dimension.

mc: MarkovChain

An instance of the MarkovChain class constructed after discretization onto the grid.


>>> import numpy as np
>>> from quantecon.markov.estimate import fit_discrete_mc
>>> grids = (np.arange(3), np.arange(2))
>>> X = [(-0.1, 1.2), (2, 0), (0.6, 0.4), (1.0, 0.1)]
>>> mc = fit_discrete_mc(X, grids)
>>> mc.state_values
array([[0, 1],
       [1, 0],
       [2, 0]])
>>> mc.P
array([[0., 0., 1.],
       [0., 1., 0.],
       [0., 1., 0.]])
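The discretization step in the example can be sketched as mapping each observation to the nearest grid point, dimension by dimension. The helper below is hypothetical (an assumed nearest-point rule, not the library's internal code):

```python
import numpy as np

def discretize(X, grids):
    """Hypothetical sketch: snap each observation to the nearest
    grid point in every dimension (assumed discretization rule)."""
    X = np.asarray(X, dtype=float)
    cols = []
    for d, grid in enumerate(grids):
        grid = np.asarray(grid, dtype=float)
        # index of the nearest grid point for each observation in dimension d
        idx = np.abs(X[:, d][:, None] - grid[None, :]).argmin(axis=1)
        cols.append(grid[idx])
    return np.column_stack(cols)

grids = (np.arange(3), np.arange(2))
X = [(-0.1, 1.2), (2, 0), (0.6, 0.4), (1.0, 0.1)]
D = discretize(X, grids)
```

Under this rule the four observations map to (0, 1), (2, 0), (1, 0), (1, 0), whose sorted unique values match `mc.state_values` in the example above.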