lqnash

quantecon.lqnash.nnash(A, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2, beta=1.0, tol=1e-08, max_iter=1000)

Compute the limit of a Nash linear quadratic dynamic game. In this problem, player i minimizes

\[\sum_{t=0}^{\infty} \beta^t \left\{ x_t' r_i x_t + 2 x_t' w_i u_{it} + u_{it}' q_i u_{it} + u_{jt}' s_i u_{jt} + 2 u_{jt}' m_i u_{it} \right\}\]

subject to the law of motion

\[x_{t+1} = A x_t + b_1 u_{1t} + b_2 u_{2t}\]

and a perceived control law \(u_{jt} = - f_j x_t\) for the other player \(j\).

The solution computed by this routine is the \(f_i\) and \(p_i\) of the associated double optimal linear regulator problem. The lowercase symbols \(r_i, w_i, q_i, s_i, m_i\) in the objective correspond to the uppercase arguments R1, R2, W1, W2, Q1, Q2, S1, S2, M1, M2 below.
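Concretely, \((f_1, f_2, p_1, p_2)\) form a fixed point of the following pair of coupled equations, one pair for each player \(i\) with \(j \neq i\) (this is the standard Markov perfect equilibrium formulation; the shorthand \(\Lambda_i = A - b_j f_j\), \(\Pi_i = r_i + f_j' s_i f_j\), and \(\Gamma_i = w_i' - m_i' f_j\) is introduced here for exposition):

\[f_i = (q_i + \beta b_i' p_i b_i)^{-1} (\beta b_i' p_i \Lambda_i + \Gamma_i)\]

\[p_i = \Pi_i - (\beta b_i' p_i \Lambda_i + \Gamma_i)' (q_i + \beta b_i' p_i b_i)^{-1} (\beta b_i' p_i \Lambda_i + \Gamma_i) + \beta \Lambda_i' p_i \Lambda_i\]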

Parameters:

A : scalar(float) or array_like(float)

Corresponds to \(A\) in the law of motion above, should be of size (n, n)

B1 : scalar(float) or array_like(float)

As above, size (n, k_1)

B2 : scalar(float) or array_like(float)

As above, size (n, k_2)

R1 : scalar(float) or array_like(float)

As above, size (n, n)

R2 : scalar(float) or array_like(float)

As above, size (n, n)

Q1 : scalar(float) or array_like(float)

As above, size (k_1, k_1)

Q2 : scalar(float) or array_like(float)

As above, size (k_2, k_2)

S1 : scalar(float) or array_like(float)

As above, size (k_1, k_1)

S2 : scalar(float) or array_like(float)

As above, size (k_2, k_2)

W1 : scalar(float) or array_like(float)

As above, size (n, k_1)

W2 : scalar(float) or array_like(float)

As above, size (n, k_2)

M1 : scalar(float) or array_like(float)

As above, size (k_2, k_1)

M2 : scalar(float) or array_like(float)

As above, size (k_1, k_2)

beta : scalar(float), optional(default=1.0)

Discount factor

tol : scalar(float), optional(default=1e-8)

Tolerance level for convergence of the iterative algorithm

max_iter : scalar(int), optional(default=1000)

Maximum number of iterations allowed

Returns:

F1 : array_like, dtype=float, shape=(k_1, n)

Feedback law for agent 1

F2 : array_like, dtype=float, shape=(k_2, n)

Feedback law for agent 2

P1 : array_like, dtype=float, shape=(n, n)

The steady-state solution to the associated discrete matrix Riccati equation for agent 1

P2 : array_like, dtype=float, shape=(n, n)

The steady-state solution to the associated discrete matrix Riccati equation for agent 2