lss
Filename: lss.py
Reference: http://quantecon.net/py/linear_models.html
Computes quantities associated with the Gaussian linear state space model.

class quantecon.lss.LinearStateSpace(A, C, G, H=None, mu_0=None, Sigma_0=None)
Bases: object
A class that describes a Gaussian linear state space model of the form:
x_{t+1} = A x_t + C w_{t+1}
y_t = G x_t + H v_t
where {w_t} and {v_t} are independent and standard normal with dimensions m and l respectively. The initial conditions are mu_0 and Sigma_0 for x_0 ~ N(mu_0, Sigma_0). When Sigma_0 = 0, the draw of x_0 is exactly mu_0.
Parameters: A : array_like or scalar(float)
Part of the state transition equation. It should be n x n
C : array_like or scalar(float)
Part of the state transition equation. It should be n x m
G : array_like or scalar(float)
Part of the observation equation. It should be k x n
H : array_like or scalar(float), optional(default=None)
Part of the observation equation. It should be k x l
mu_0 : array_like or scalar(float), optional(default=None)
This is the mean of the initial draw and is n x 1
Sigma_0 : array_like or scalar(float), optional(default=None)
This is the variance of the initial draw; it is n x n and should be symmetric and positive definite
Attributes
A, C, G, H, mu_0, Sigma_0 : see Parameters
n, k, m, l : scalar(int)
The dimensions of x_t, y_t, w_t and v_t respectively

Methods
convert(x)
Convert array_like objects (lists of lists, floats, etc.) into well-formed 2D NumPy arrays
geometric_sums(beta, x_t)
Forecast the geometric sums
impulse_response([j])
Pulls off the impulse response coefficients to a shock
moment_sequence()
Create a generator to calculate the population mean and variance-covariance matrix for both x_t and y_t, starting at the initial condition (self.mu_0, self.Sigma_0)
replicate([T, num_reps])
Simulate num_reps observations of x_T and y_T given x_0 ~ N(mu_0, Sigma_0)
simulate([ts_length])
Simulate a time series of length ts_length, first drawing x_0 ~ N(mu_0, Sigma_0)
stationary_distributions([max_iter, tol])
Compute the moments of the stationary distributions of x_t and y_t if possible
convert(x)
Convert array_like objects (lists of lists, floats, etc.) into well-formed 2D NumPy arrays

geometric_sums(beta, x_t)
Forecast the geometric sums

S_x := E [ sum_{j=0}^{infty} beta^j x_{t+j} | x_t ]
S_y := E [ sum_{j=0}^{infty} beta^j y_{t+j} | x_t ]
Parameters: beta : scalar(float)
Discount factor, in [0, 1)
x_t : array_like(float)
The term x_t for conditioning
Returns: S_x : array_like(float)
Geometric sum as defined above
S_y : array_like(float)
Geometric sum as defined above
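For a stable system with beta times the spectral radius of A below one, these sums have the closed forms S_x = (I - beta A)^{-1} x_t and S_y = G S_x. A minimal sketch of that calculation, using made-up 1-D parameters (A, G, beta and x_t below are illustrative, not defaults):

```python
import numpy as np

# Illustrative 1-D model: x_{t+1} = 0.9 x_t + 0.5 w_{t+1}, y_t = 2 x_t
A, G = np.array([[0.9]]), np.array([[2.0]])
beta, x_t = 0.96, np.array([[1.0]])

# Closed forms, valid when beta * spectral radius of A < 1:
#   S_x = (I - beta A)^{-1} x_t  and  S_y = G S_x
I = np.eye(A.shape[0])
S_x = np.linalg.solve(I - beta * A, x_t)
S_y = G @ S_x
```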

impulse_response(j=5)
Pulls off the impulse response coefficients to a shock in w_{t} for x and y
Note: shocks to v are ignored by this method
- x coefficients are C, AC, A^2 C, ...
- y coefficients are GC, GAC, GA^2 C, ...
Parameters: j : scalar(int)
Number of coefficients that we want
Returns: xcoef : list(array_like(float, 2))
The coefficients for x
ycoef : list(array_like(float, 2))
The coefficients for y
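The listed coefficients can be generated directly by powering up A. A sketch with made-up 1-D matrices (the exact number of coefficients returned for a given j is an implementation detail; here the first j + 1 are built):

```python
import numpy as np

A, C, G = np.array([[0.9]]), np.array([[1.0]]), np.array([[2.0]])  # made-up values
j = 3

xcoef, ycoef = [], []
Apower = np.eye(A.shape[0])
for _ in range(j + 1):
    xcoef.append(Apower @ C)      # C, AC, A^2 C, ...
    ycoef.append(G @ Apower @ C)  # GC, GAC, GA^2 C, ...
    Apower = A @ Apower
```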

moment_sequence()
Create a generator to calculate the population mean and variance-covariance matrix for both x_t and y_t, starting at the initial condition (self.mu_0, self.Sigma_0). Each iteration produces a 4-tuple of items (mu_x, mu_y, Sigma_x, Sigma_y) for the next period.
Yields: mu_x : array_like(float)
An n x 1 array representing the population mean of x_t
mu_y : array_like(float)
A k x 1 array representing the population mean of y_t
Sigma_x : array_like(float)
An n x n array representing the variance-covariance matrix of x_t
Sigma_y : array_like(float)
A k x k array representing the variance-covariance matrix of y_t
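Each step presumably applies the standard moment recursion mu_x <- A mu_x and Sigma_x <- A Sigma_x A' + C C', with mu_y = G mu_x and Sigma_y = G Sigma_x G' (plus an H H' term when H is given). A sketch with made-up 1-D parameters and H omitted:

```python
import numpy as np

A, C, G = np.array([[0.9]]), np.array([[0.5]]), np.array([[2.0]])  # made-up values
mu_x, Sigma_x = np.zeros((1, 1)), np.zeros((1, 1))  # initial conditions

for t in range(3):
    mu_y = G @ mu_x              # population mean of y_t
    Sigma_y = G @ Sigma_x @ G.T  # no H H' term since H is omitted here
    # advance the state moments to the next period
    mu_x = A @ mu_x
    Sigma_x = A @ Sigma_x @ A.T + C @ C.T
```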

replicate(T=10, num_reps=100)
Simulate num_reps observations of x_T and y_T given x_0 ~ N(mu_0, Sigma_0).
Parameters: T : scalar(int), optional(default=10)
The period that we want to replicate values for
num_reps : scalar(int), optional(default=100)
The number of replications that we want
Returns: x : array_like(float)
An n x num_reps array, where the jth column is the jth observation of x_T
y : array_like(float)
A k x num_reps array, where the jth column is the jth observation of y_T
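Functionally, this amounts to drawing num_reps independent initial conditions and pushing each through the state equation T times. A sketch with made-up 1-D parameters (H omitted, so y has no measurement noise):

```python
import numpy as np

rng = np.random.default_rng(0)
A, C, G = np.array([[0.9]]), np.array([[0.5]]), np.array([[2.0]])  # made-up values
mu_0, Sigma_0 = np.zeros(1), np.eye(1)
T, num_reps = 10, 100

x = np.empty((A.shape[0], num_reps))
for rep in range(num_reps):
    x_t = rng.multivariate_normal(mu_0, Sigma_0)  # x_0 ~ N(mu_0, Sigma_0)
    for _ in range(T):
        w = rng.standard_normal(C.shape[1])       # w_{t+1} ~ N(0, I)
        x_t = A @ x_t + C @ w
    x[:, rep] = x_t
y = G @ x  # the corresponding num_reps observations of y_T
```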

simulate(ts_length=100)
Simulate a time series of length ts_length, first drawing
x_0 ~ N(mu_0, Sigma_0)
Parameters: ts_length : scalar(int), optional(default=100)
The length of the simulation
Returns: x : array_like(float)
An n x ts_length array, where the tth column is x_t
y : array_like(float)
A k x ts_length array, where the tth column is y_t
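A sketch of the corresponding simulation loop, with made-up 1-D parameters and no measurement noise:

```python
import numpy as np

rng = np.random.default_rng(0)
A, C, G = np.array([[0.9]]), np.array([[0.5]]), np.array([[2.0]])  # made-up values
mu_0, Sigma_0 = np.zeros(1), np.eye(1)
ts_length = 100

x = np.empty((A.shape[0], ts_length))
x[:, 0] = rng.multivariate_normal(mu_0, Sigma_0)  # x_0 ~ N(mu_0, Sigma_0)
for t in range(1, ts_length):
    w = rng.standard_normal(C.shape[1])
    x[:, t] = A @ x[:, t - 1] + C @ w
y = G @ x  # y_t = G x_t (no H v_t term in this sketch)
```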

stationary_distributions(max_iter=200, tol=1e-05)
Compute the moments of the stationary distributions of x_t and y_t if possible. Computation is by iteration, starting from the initial conditions self.mu_0 and self.Sigma_0
Parameters: max_iter : scalar(int), optional(default=200)
The maximum number of iterations allowed
tol : scalar(float), optional(default=1e-5)
The tolerance level that one wishes to achieve
Returns: mu_x_star : array_like(float)
An n x 1 array representing the stationary mean of x_t
mu_y_star : array_like(float)
A k x 1 array representing the stationary mean of y_t
Sigma_x_star : array_like(float)
An n x n array representing the stationary variance-covariance matrix of x_t
Sigma_y_star : array_like(float)
A k x k array representing the stationary variance-covariance matrix of y_t
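The iteration presumably repeats the moment recursion until successive moments change by less than tol; for a stable A the stationary variance solves the discrete Lyapunov equation Sigma = A Sigma A' + C C'. A sketch with made-up 1-D values:

```python
import numpy as np

A, C = np.array([[0.9]]), np.array([[0.5]])  # made-up stable system
mu_x, Sigma_x = np.zeros((1, 1)), np.zeros((1, 1))
max_iter, tol = 200, 1e-5

for _ in range(max_iter):
    mu_new = A @ mu_x
    Sigma_new = A @ Sigma_x @ A.T + C @ C.T
    err = max(np.max(np.abs(mu_new - mu_x)), np.max(np.abs(Sigma_new - Sigma_x)))
    mu_x, Sigma_x = mu_new, Sigma_new
    if err < tol:
        break

# Sigma_x now approximates the solution of Sigma = A Sigma A' + C C',
# which for these values is 0.25 / (1 - 0.81)
```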


quantecon.lss.multivariate_normal(mean, cov[, size])
Draw random samples from a multivariate normal distribution.
The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal distribution to higher dimensions. Such a distribution is specified by its mean and covariance matrix. These parameters are analogous to the mean (average or “center”) and variance (standard deviation, or “width,” squared) of the one-dimensional normal distribution.
Parameters: mean : 1-D array_like, of length N
Mean of the N-dimensional distribution.
cov : 2-D array_like, of shape (N, N)
Covariance matrix of the distribution. It must be symmetric and positive-semidefinite for proper sampling.
size : int or tuple of ints, optional
Given a shape of, for example, (m,n,k), m*n*k samples are generated, and packed in an m-by-n-by-k arrangement. Because each sample is N-dimensional, the output shape is (m,n,k,N). If no shape is specified, a single (N-D) sample is returned.
Returns: out : ndarray
The drawn samples, of shape size, if that was provided. If not, the shape is (N,). In other words, each entry out[i,j,...,:] is an N-dimensional value drawn from the distribution.
Notes
The mean is a coordinate in N-dimensional space, which represents the location where samples are most likely to be generated. This is analogous to the peak of the bell curve for the one-dimensional or univariate normal distribution.
Covariance indicates the level to which two variables vary together. From the multivariate normal distribution, we draw N-dimensional samples, \(X = [x_1, x_2, ... x_N]\). The covariance matrix element \(C_{ij}\) is the covariance of \(x_i\) and \(x_j\). The element \(C_{ii}\) is the variance of \(x_i\) (i.e. its “spread”).
Instead of specifying the full covariance matrix, popular approximations include:
- Spherical covariance (cov is a multiple of the identity matrix)
- Diagonal covariance (cov has non-negative elements, and only on the diagonal)
This geometrical property can be seen in two dimensions by plotting generated data points:
>>> mean = [0, 0]
>>> cov = [[1, 0], [0, 100]]  # diagonal covariance
Diagonal covariance means that points are oriented along the x- or y-axis:
>>> import matplotlib.pyplot as plt
>>> x, y = np.random.multivariate_normal(mean, cov, 5000).T
>>> plt.plot(x, y, 'x')
>>> plt.axis('equal')
>>> plt.show()
Note that the covariance matrix must be positive semidefinite (a.k.a. non-negative-definite). Otherwise, the behavior of this method is undefined and backwards compatibility is not guaranteed.
Examples
>>> mean = (1, 2)
>>> cov = [[1, 0], [0, 1]]
>>> x = np.random.multivariate_normal(mean, cov, (3, 3))
>>> x.shape
(3, 3, 2)
The following is probably true, given that 0.6 is roughly twice the standard deviation:
>>> list((x[0,0,:] - mean) < 0.6)
[True, True]

quantecon.lss.simulate_linear_model
This is a separate function for simulating a vector linear system of the form
x_{t+1} = A x_t + v_t, given x_0 = x0
Here x_t and v_t are both n x 1 and A is n x n.
The purpose of separating this functionality out is to target it for optimization by Numba. For the same reason, matrix multiplication is broken down into for loops.
Parameters: A : array_like or scalar(float)
Should be n x n
x0 : array_like
Should be n x 1. Initial condition
v : np.ndarray
Should be n x (ts_length - 1). Its tth column is used as the time t shock v_t
ts_length : int
The length of the time series
Returns: x : np.ndarray
Time series with ts_length columns, the tth column being x_t
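A plain-Python sketch of that loop structure, with the matrix-vector product written as explicit loops in the Numba-friendly style the description mentions (the values below are made up, and the shocks are set to zero so the result is easy to check by hand):

```python
import numpy as np

A = np.array([[0.9, 0.0], [0.1, 0.8]])  # made-up 2 x 2 transition matrix
x0 = np.array([1.0, 0.0])
ts_length = 5
v = np.zeros((2, ts_length - 1))  # zero shocks for a deterministic check

n = A.shape[0]
x = np.empty((n, ts_length))
x[:, 0] = x0
for t in range(ts_length - 1):
    # x_{t+1} = A x_t + v_t, computed element by element
    for i in range(n):
        x[i, t + 1] = v[i, t]
        for j in range(n):
            x[i, t + 1] += A[i, j] * x[j, t]
```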