normal_form_game¶
Tools for normal form games.
Definitions and Basic Concepts¶
An \(N\)-player normal form game \(g = (I, (A_i)_{i \in I}, (u_i)_{i \in I})\) consists of
- the set of players \(I = \{0, \ldots, N-1\}\),
- the set of actions \(A_i = \{0, \ldots, n_i-1\}\) for each player \(i \in I\), and
- the payoff function \(u_i \colon A_i \times A_{i+1} \times \cdots \times A_{i+N-1} \to \mathbb{R}\) for each player \(i \in I\),
where \(i+j\) is understood modulo \(N\). Note that we adopt the convention that the 0th argument of the payoff function \(u_i\) is player \(i\)'s own action and the \(j\)th argument is player \(i+j\)'s action (modulo \(N\)). A mixed action for player \(i\) is a probability distribution on \(A_i\) (while an element of \(A_i\) is referred to as a pure action). A pure action \(a_i \in A_i\) is identified with the mixed action that assigns probability one to \(a_i\). Denote the set of mixed actions of player \(i\) by \(X_i\). We also denote \(A_{-i} = A_{i+1} \times \cdots \times A_{i+N-1}\) and \(X_{-i} = X_{i+1} \times \cdots \times X_{i+N-1}\).
The (pure-action) best response correspondence \(b_i \colon X_{-i} \to A_i\) for each player \(i\) is defined by

\[b_i(x_{-i}) = \{a_i \in A_i \mid u_i(a_i, x_{-i}) \geq u_i(a_i', x_{-i}) \text{ for all } a_i' \in A_i\},\]

where \(u_i(a_i, x_{-i}) = \sum_{a_{-i} \in A_{-i}} u_i(a_i, a_{-i}) \prod_{j=1}^{N-1} x_{i+j}(a_{i+j})\) is the expected payoff to action \(a_i\) against the mixed actions \(x_{-i}\). A profile of mixed actions \(x^* = (x_0^*, \ldots, x_{N-1}^*) \in X_0 \times \cdots \times X_{N-1}\) is a Nash equilibrium if for all \(i \in I\) and \(a_i \in A_i\),

\[x_i^*(a_i) > 0 \Rightarrow a_i \in b_i(x_{-i}^*),\]

or equivalently, \(x_i^* \cdot v_i(x_{-i}^*) \geq x_i \cdot v_i(x_{-i}^*)\) for all \(x_i \in X_i\), where \(v_i(x_{-i})\) is the vector of player \(i\)'s payoffs to the pure actions when the opponent players play the mixed actions \(x_{-i}\).
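The Nash condition above can be checked directly with plain NumPy. A minimal sketch (an illustration of the definitions, not the library API), using 2-player matching pennies and the candidate profile where both players mix uniformly:

```python
import numpy as np

# Matching pennies: u_0[a_0, a_1]; the game is zero-sum,
# so u_1[a_1, a_0] = -u_0[a_0, a_1].
u0 = np.array([[1., -1.], [-1., 1.]])
u1 = -u0.T

x0 = np.array([0.5, 0.5])  # candidate mixed actions
x1 = np.array([0.5, 0.5])

v0 = u0 @ x1  # v_0(x_1): expected payoff to each of player 0's pure actions
v1 = u1 @ x0  # v_1(x_0)

# Nash condition: x_i^* . v_i(x_{-i}^*) >= payoff of every pure action
is_nash = (x0 @ v0 >= v0.max() - 1e-8) and (x1 @ v1 >= v1.max() - 1e-8)
```

Here `is_nash` is `True`: against a uniform opponent every pure action of matching pennies earns expected payoff 0, so the uniform mix is a best response for both players.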
Creating a NormalFormGame¶
There are three ways to construct a NormalFormGame instance.
The first is to pass an array of payoffs for all the players:
>>> matching_pennies_bimatrix = [[(1, -1), (-1, 1)], [(-1, 1), (1, -1)]]
>>> g = NormalFormGame(matching_pennies_bimatrix)
>>> print(g.players[0])
Player in a 2-player normal form game with payoff array:
[[ 1, -1],
 [-1,  1]]
>>> print(g.players[1])
Player in a 2-player normal form game with payoff array:
[[-1,  1],
 [ 1, -1]]
If a square matrix (2-dimensional array) is given, then it is considered to be a symmetric two-player game:
>>> coordination_game_matrix = [[4, 0], [3, 2]]
>>> g = NormalFormGame(coordination_game_matrix)
>>> print(g)
2-player NormalFormGame with payoff profile array:
[[[4, 4], [0, 3]],
 [[3, 0], [2, 2]]]
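The expansion of a square matrix into a symmetric two-player payoff profile array can be sketched in plain NumPy (illustration only, not the library code): the profile at `(a0, a1)` is `(A[a0, a1], A[a1, a0])`.

```python
import numpy as np

# Square matrix interpreted as a symmetric 2-player game.
A = np.array([[4, 0], [3, 2]])
n = A.shape[0]

# profile[a0, a1] = (payoff to the row player, payoff to the column player)
profile = np.empty((n, n, 2), dtype=A.dtype)
for a0 in range(n):
    for a1 in range(n):
        profile[a0, a1] = (A[a0, a1], A[a1, a0])
```

Running this reproduces the payoff profile array printed above for the coordination game.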
The second is to specify the sizes of the players' action sets, which creates a NormalFormGame instance with all payoffs set to zero, and then assign payoff values entry by entry:
>>> g = NormalFormGame((2, 2))
>>> print(g)
2-player NormalFormGame with payoff profile array:
[[[ 0.,  0.],  [ 0.,  0.]],
 [[ 0.,  0.],  [ 0.,  0.]]]
>>> g[0, 0] = 1, 1
>>> g[0, 1] = 2, 3
>>> g[1, 0] = 3, 2
>>> print(g)
2-player NormalFormGame with payoff profile array:
[[[ 1.,  1.],  [ 2.,  3.]],
 [[ 3.,  2.],  [ 0.,  0.]]]
The third is to pass an array of Player instances, as explained in the next section.
Creating a Player¶
A Player instance is created by passing a payoff array:
>>> player0 = Player([[3, 1], [0, 2]])
>>> player0.payoff_array
array([[3, 1],
[0, 2]])
Passing an array of Player instances is the third way to create a NormalFormGame instance.
>>> player1 = Player([[2, 0], [1, 3]])
>>> player1.payoff_array
array([[2, 0],
[1, 3]])
>>> g = NormalFormGame((player0, player1))
>>> print(g)
2-player NormalFormGame with payoff profile array:
[[[3, 2], [1, 1]],
 [[0, 0], [2, 3]]]
Beware that in payoff_array[h, k], h refers to the player’s own action, while k refers to the opponent player’s action.
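A quick NumPy check of this indexing convention (illustration only), using player 0's payoff array from above:

```python
import numpy as np

payoff0 = np.array([[3, 1], [0, 2]])  # player 0's payoff array

# payoff0[h, k]: h is player 0's own action, k is the opponent's action.
# Payoff to player 0 from playing action 0 against the opponent's action 1:
own_action, opp_action = 0, 1
payoff = payoff0[own_action, opp_action]
```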
class quantecon.game_theory.normal_form_game.NormalFormGame(data, dtype=None)[source]¶
Bases: object

Class representing an N-player normal form game.
Parameters:
data : array_like of Player, int (ndim=1), or float (ndim=2 or N+1)
  Data to initialize a NormalFormGame. data may be an array of Players, in which case the shapes of the Players' payoff arrays must be consistent. If data is an array of N integers, then these integers are treated as the numbers of actions of the N players and a NormalFormGame is created consisting of payoffs all 0 with data[i] actions for each player i. data may also be an (N+1)-dimensional array representing payoff profiles. If data is a square matrix (2-dimensional array), then the game will be a symmetric two-player game where the payoff matrix of each player is given by the input matrix.
dtype : data-type, optional(default=None)
  Relevant only when data is an array of integers. Data type of the players' payoff arrays. If not supplied, defaults to numpy.float64.
Attributes:
players : tuple(Player)
  Tuple of the Player instances of the game.
N : scalar(int)
  The number of players.
nums_actions : tuple(int)
  Tuple of the numbers of actions, one for each player.
payoff_arrays : tuple(ndarray(float, ndim=N))
  Tuple of the payoff arrays, one for each player.
Methods
delete_action(player_idx, action)
  Return a new NormalFormGame instance with the action(s) specified by action deleted from the action set of the player specified by player_idx.
is_nash(action_profile[, tol])
  Return True if action_profile is a Nash equilibrium.
delete_action(player_idx, action)[source]¶
Return a new NormalFormGame instance with the action(s) specified by action deleted from the action set of the player specified by player_idx. Deletion is not performed in place.
Parameters:
player_idx : scalar(int)
  Index of the player to delete action(s) for.
action : scalar(int) or array_like(int)
  Integer or array-like of integers representing the action(s) to be deleted.
Returns:
NormalFormGame
  Copy of self with the action(s) deleted as specified.
Examples
>>> g = NormalFormGame(
...     [[(3, 0), (0, 1)], [(0, 0), (3, 1)], [(1, 1), (1, 0)]]
... )
>>> print(g)
2-player NormalFormGame with payoff profile array:
[[[3, 0], [0, 1]],
 [[0, 0], [3, 1]],
 [[1, 1], [1, 0]]]
Delete player 0’s action 2 from g:
>>> g1 = g.delete_action(0, 2)
>>> print(g1)
2-player NormalFormGame with payoff profile array:
[[[3, 0], [0, 1]],
 [[0, 0], [3, 1]]]
Then delete player 1’s action 0 from g1:
>>> g2 = g1.delete_action(1, 0)
>>> print(g2)
2-player NormalFormGame with payoff profile array:
[[[0, 1]],
 [[3, 1]]]
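Under the hood, deleting an action amounts to removing the corresponding slice from each player's payoff array. A plain-NumPy sketch of that behavior for the example above (an assumption about the semantics, not the library's implementation):

```python
import numpy as np

# Payoff arrays of the 3x2 game above.
payoff0 = np.array([[3, 0], [0, 3], [1, 1]])  # player 0: 3 actions
payoff1 = np.array([[0, 0, 1], [1, 1, 0]])    # player 1: 2 actions

# Deleting player 0's action 2: drop row 2 of player 0's own array,
# and column 2 of player 1's array (axis 1 indexes the opponent there).
new0 = np.delete(payoff0, 2, axis=0)
new1 = np.delete(payoff1, 2, axis=1)
```

The resulting pair of arrays matches the 2x2 game printed for `g1`.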
is_nash(action_profile, tol=None)[source]¶
Return True if action_profile is a Nash equilibrium.
Parameters:
action_profile : array_like(int or array_like(float))
  An array of N objects, where each object must be an integer (pure action) or an array of floats (mixed action).
tol : scalar(float)
  Tolerance level used in determining best responses. If None, defaults to each player's tol attribute value.
Returns:
bool
  True if action_profile is a Nash equilibrium; False otherwise.
payoff_profile_array¶
class quantecon.game_theory.normal_form_game.Player(payoff_array)[source]¶
Bases: object

Class representing a player in an N-player normal form game.
Parameters:
payoff_array : array_like(float)
  Array representing the player's payoff function, where payoff_array[a_0, a_1, ..., a_{N-1}] is the payoff to the player when the player plays action a_0 while his N-1 opponents play actions a_1, ..., a_{N-1}, respectively.
Attributes:
payoff_array : ndarray(float, ndim=N)
  See Parameters.
num_actions : scalar(int)
  The number of actions available to the player.
num_opponents : scalar(int)
  The number of opponent players.
dtype : dtype
  Data type of the elements of payoff_array.
tol : scalar(float), default=1e-8
  Default tolerance value used in determining best responses.
Methods
best_response(opponents_actions[, …])
  Return the best response action(s) to opponents_actions.
delete_action(action[, player_idx])
  Return a new Player instance with the action(s) specified by action deleted from the action set of the player specified by player_idx.
dominated_actions([tol, method])
  Return a list of actions that are strictly dominated by some mixed actions.
is_best_response(own_action, opponents_actions)
  Return True if own_action is a best response to opponents_actions.
is_dominated(action[, tol, method])
  Determine whether action is strictly dominated by some mixed action.
payoff_vector(opponents_actions)
  Return an array of payoff values, one for each own action, given a profile of the opponents' actions.
random_choice([actions, random_state])
  Return a pure action chosen randomly from actions.
best_response(opponents_actions, tie_breaking='smallest', payoff_perturbation=None, tol=None, random_state=None)[source]¶
Return the best response action(s) to opponents_actions.
Parameters:
opponents_actions : scalar(int) or array_like
  A profile of N-1 opponents' actions, represented by either scalar(int), array_like(float), array_like(int), or array_like(array_like(float)). If N=2, then it must be a scalar integer (in which case it is treated as the opponent's pure action) or a 1-dimensional array of floats (in which case it is treated as the opponent's mixed action). If N>2, then it must be an array of N-1 objects, where each object must be an integer (pure action) or an array of floats (mixed action).
tie_breaking : str, optional(default='smallest')
  str in {'smallest', 'random', False}. Controls how, or whether, to break a tie (see Returns for details).
payoff_perturbation : array_like(float), optional(default=None)
  Array of length equal to the number of actions of the player containing the values ("noises") to be added to the payoffs in determining the best response.
tol : scalar(float), optional(default=None)
  Tolerance level used in determining best responses. If None, defaults to the value of the tol attribute.
random_state : int or np.random.RandomState, optional
  Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Relevant only when tie_breaking='random'.
Returns:
scalar(int) or ndarray(int, ndim=1)
  If tie_breaking=False, returns an array containing all the best response pure actions. If tie_breaking='smallest', returns the best response action with the smallest index; if tie_breaking='random', returns an action randomly chosen from the best response actions.
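The tie-breaking semantics can be sketched in plain NumPy as follows. This is an illustration under stated assumptions, not the library's implementation; `payoffs` stands for the player's payoff vector against the opponents' actions, and the sketch uses `numpy.random.default_rng` rather than the `RandomState` object the library accepts.

```python
import numpy as np

def best_response_sketch(payoffs, tie_breaking='smallest', tol=1e-8,
                         random_state=None):
    # All actions whose payoff is within tol of the maximum.
    best = np.where(payoffs >= payoffs.max() - tol)[0]
    if tie_breaking is False:
        return best                   # every best response action
    if tie_breaking == 'smallest':
        return int(best[0])           # best response with the smallest index
    if tie_breaking == 'random':
        rng = np.random.default_rng(random_state)
        return int(rng.choice(best))  # uniform over the best responses
    raise ValueError("invalid tie_breaking: %s" % tie_breaking)
```

For instance, `best_response_sketch(np.array([2., 3., 3.]), tie_breaking=False)` returns the array `[1, 2]`, while the default `'smallest'` returns `1`.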
delete_action(action, player_idx=0)[source]¶
Return a new Player instance with the action(s) specified by action deleted from the action set of the player specified by player_idx. Deletion is not performed in place.
Parameters:
action : scalar(int) or array_like(int)
  Integer or array-like of integers representing the action(s) to be deleted.
player_idx : scalar(int), optional(default=0)
  Index of the player to delete action(s) for.
Returns:
Player
  Copy of self with the action(s) deleted as specified.
Examples
>>> player = Player([[3, 0], [0, 3], [1, 1]])
>>> player
Player([[3, 0], [0, 3], [1, 1]])
>>> player.delete_action(2)
Player([[3, 0], [0, 3]])
>>> player.delete_action(0, player_idx=1)
Player([[0], [3], [1]])
dominated_actions(tol=None, method=None)[source]¶
Return a list of actions that are strictly dominated by some mixed actions.
Parameters:
tol : scalar(float), optional(default=None)
  Tolerance level used in determining domination. If None, defaults to the value of the tol attribute.
method : str, optional(default=None)
  If None, minmax from quantecon.optimize is used. If method is set to 'simplex', 'interior-point', or 'revised simplex', then scipy.optimize.linprog is used with the method as specified by method.
Returns:
list(int)
  List of integers representing pure actions, each of which is strictly dominated by some mixed action.
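The linear program behind this check can be sketched with scipy.optimize.linprog for the two-player case: action `a` is strictly dominated iff some mixed action `y` earns strictly more than `a` against every opponent action, i.e. the maximized slack `t` is positive. This is a sketch of the standard LP formulation, not the library's implementation (which by default uses minmax from quantecon.optimize).

```python
import numpy as np
from scipy.optimize import linprog

def is_dominated_sketch(payoff_matrix, a, tol=1e-8):
    """2-player sketch: maximize t subject to (y @ U)[j] >= U[a, j] + t
    for every opponent action j, with y a probability vector.
    Action a is strictly dominated iff the optimal t exceeds tol."""
    U = np.asarray(payoff_matrix, dtype=float)
    m, n = U.shape
    # Decision variables: y_0, ..., y_{m-1}, t.  linprog minimizes, so use -t.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # Row j: U[a, j] - sum_k y_k U[k, j] + t <= 0
    A_ub = np.hstack([-U.T, np.ones((n, 1))])
    b_ub = -U[a]
    # Probabilities sum to one; t is free.
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.success and -res.fun > tol
```

In the 3x2 game with payoff matrix `[[3, 0], [0, 3], [1, 1]]`, action 2 is strictly dominated (by mixing actions 0 and 1 equally, which yields 1.5 against either opponent action), while actions 0 and 1 are not.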
is_best_response(own_action, opponents_actions, tol=None)[source]¶
Return True if own_action is a best response to opponents_actions.
Parameters:
own_action : scalar(int) or array_like(float, ndim=1)
  An integer representing a pure action, or an array of floats representing a mixed action.
opponents_actions : see best_response
tol : scalar(float), optional(default=None)
  Tolerance level used in determining best responses. If None, defaults to the value of the tol attribute.
Returns:
bool
  True if own_action is a best response to opponents_actions; False otherwise.
is_dominated(action, tol=None, method=None)[source]¶
Determine whether action is strictly dominated by some mixed action.
Parameters:
action : scalar(int)
  Integer representing a pure action.
tol : scalar(float), optional(default=None)
  Tolerance level used in determining domination. If None, defaults to the value of the tol attribute.
method : str, optional(default=None)
  If None, minmax from quantecon.optimize is used. If method is set to 'simplex', 'interior-point', or 'revised simplex', then scipy.optimize.linprog is used with the method as specified by method.
Returns:
bool
  True if action is strictly dominated by some mixed action; False otherwise.
payoff_vector(opponents_actions)[source]¶
Return an array of payoff values, one for each own action, given a profile of the opponents' actions.
Parameters:
opponents_actions : see best_response
Returns:
payoff_vector : ndarray(float, ndim=1)
  An array representing the player's payoff vector given the profile of the opponents' actions.
random_choice(actions=None, random_state=None)[source]¶
Return a pure action chosen randomly from actions.
Parameters:
actions : array_like(int), optional(default=None)
  An array of integers representing pure actions.
random_state : int or np.random.RandomState, optional
  Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used.
Returns:
scalar(int)
  If actions is given, returns an integer representing a pure action chosen randomly from actions; if not, an action is chosen randomly from all of the player's actions.
quantecon.game_theory.normal_form_game.best_response_2p[source]¶
Numba-optimized version of Player.best_response compiled in nopython mode, specialized for 2-player games (where there is only one opponent).
Return the best response action (the one with the smallest index if there is more than one) to opponent_mixed_action under payoff_matrix.
Parameters:
payoff_matrix : ndarray(float, ndim=2)
  Payoff matrix.
opponent_mixed_action : ndarray(float, ndim=1)
  Opponent's mixed action. Its length must be equal to payoff_matrix.shape[1].
tol : scalar(float), optional(default=None)
  Tolerance level used in determining best responses.
Returns:
scalar(int)
  Best response action.
quantecon.game_theory.normal_form_game.pure2mixed(num_actions, action)[source]¶
Convert a pure action to the corresponding mixed action.
Parameters:
num_actions : scalar(int)
  The number of pure actions (= the length of a mixed action).
action : scalar(int)
  The pure action to convert to the corresponding mixed action.
Returns:
ndarray(float, ndim=1)
  The mixed action representation of the given pure action.
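The conversion itself is a one-step construction: the degenerate distribution placing probability one on the given action. A plain-NumPy sketch (illustration only, not the library code):

```python
import numpy as np

def pure2mixed_sketch(num_actions, action):
    # Degenerate distribution: probability one on `action`, zero elsewhere.
    x = np.zeros(num_actions)
    x[action] = 1.0
    return x
```

For example, `pure2mixed_sketch(3, 1)` gives the array `[0., 1., 0.]`.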