lqnash

quantecon.lqnash.nnash(A, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2, beta=1.0, tol=1e-08, max_iter=1000, random_state=None)

Compute the limit of a Nash linear quadratic dynamic game. In this problem, player i minimizes

$\sum_{t=0}^{\infty} \beta^t \left\{ x_t' r_i x_t + 2 x_t' w_i u_{it} + u_{it}' q_i u_{it} + u_{jt}' s_i u_{jt} + 2 u_{jt}' m_i u_{it} \right\}$

subject to the law of motion

$x_{t+1} = A x_t + b_1 u_{1t} + b_2 u_{2t}$

and a perceived control law $u_{jt} = -f_j x_t$ for the other player.

The solution computed by this routine is the $f_i$ and $p_i$ of the associated double optimal linear regulator problem.
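For reference, player 1's best response given $f_2$ takes the following form (a sketch of the standard Markov-perfect-equilibrium derivation, using the notation of the objective above; player 2's conditions are symmetric, with indices swapped):

$f_1 = (q_1 + \beta b_1' p_1 b_1)^{-1} (\beta b_1' p_1 \Lambda_1 + \Gamma_1)$

$p_1 = \Pi_1 - (\beta b_1' p_1 \Lambda_1 + \Gamma_1)' (q_1 + \beta b_1' p_1 b_1)^{-1} (\beta b_1' p_1 \Lambda_1 + \Gamma_1) + \beta \Lambda_1' p_1 \Lambda_1$

where $\Lambda_1 = A - b_2 f_2$, $\Pi_1 = r_1 + f_2' s_1 f_2$, and $\Gamma_1 = w_1' - m_1' f_2$. The routine iterates on the two coupled best responses until $f_1$ and $f_2$ converge (to within tol, in at most max_iter iterations).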

Parameters:
A : scalar(float) or array_like(float)

Corresponds to $A$ in the law of motion above; should be of size (n, n)

B1 : scalar(float) or array_like(float)

As above, size (n, k_1)

B2 : scalar(float) or array_like(float)

As above, size (n, k_2)

R1 : scalar(float) or array_like(float)

As above, size (n, n)

R2 : scalar(float) or array_like(float)

As above, size (n, n)

Q1 : scalar(float) or array_like(float)

As above, size (k_1, k_1)

Q2 : scalar(float) or array_like(float)

As above, size (k_2, k_2)

S1 : scalar(float) or array_like(float)

As above, size (k_1, k_1)

S2 : scalar(float) or array_like(float)

As above, size (k_2, k_2)

W1 : scalar(float) or array_like(float)

As above, size (n, k_1)

W2 : scalar(float) or array_like(float)

As above, size (n, k_2)

M1 : scalar(float) or array_like(float)

As above, size (k_2, k_1)

M2 : scalar(float) or array_like(float)

As above, size (k_1, k_2)

beta : scalar(float), optional(default=1.0)

Discount factor

tol : scalar(float), optional(default=1e-8)

Tolerance level for convergence

max_iter : scalar(int), optional(default=1000)

Maximum number of iterations allowed

random_state : int or np.random.RandomState/Generator, optional

Random seed (integer) or np.random.RandomState or Generator instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used.

Returns:
F1 : array_like, dtype=float, shape=(k_1, n)

Feedback law for agent 1

F2 : array_like, dtype=float, shape=(k_2, n)

Feedback law for agent 2

P1 : array_like, dtype=float, shape=(n, n)

The steady-state solution to the associated discrete matrix Riccati equation for agent 1

P2 : array_like, dtype=float, shape=(n, n)

The steady-state solution to the associated discrete matrix Riccati equation for agent 2