2 Complete-Information Static Games
2.1 Intuition
Proceed in two steps:
- Players simultaneously and independently choose their strategies. This means that each player plays without observing the strategies chosen by the other players.
- Conditional on the players’ strategies, payoffs are distributed to all players.
Complete information means that the following is common knowledge among players:
- all possible strategies of all players,
- what payoff is assigned to each combination of strategies.
Definition 2.1 (Common knowledge) A fact \(E\) is common knowledge among players \(\left\{1, \dots, n\right\}\) if for every sequence \(i_1, \dots, i_k \in \left\{1, \dots, n\right\}\) we have that \(i_1\) knows that \(i_2\) knows that \(\dots\) \(i_{k-1}\) knows that \(i_k\) knows \(E\).
The goal of each player is to maximize his payoff (and this fact is itself common knowledge).
2.2 Strategic-Form games
To formally represent static games of complete information, we define strategic-form games.
Definition 2.2 (Strategic-form games) A game in strategic-form (or normal-form) is an ordered triple \(G = (N, (S_i)_{i \in N}, (u_i)_{i \in N})\), in which:
\(N = \left\{1, \dots, n\right\}\) is a finite set of players;
\(S_i\) is a set of (pure) strategies of player \(i\), for every \(i \in N\).
A strategy profile \(\boldsymbol{s}\) is a vector of strategies of all players \(\boldsymbol{s} = (s_1, \dots, s_n) \in S_1 \times \cdots \times S_n\).
We denote the set of all strategy profiles by \(S = S_1 \times \cdots \times S_n\);
\(u_i : S \to \mathbb{R}\) is a function associating each strategy profile \(\boldsymbol{s} = (s_1, \dots, s_n) \in S\) with the payoff \(u_i(\boldsymbol{s})\) to player \(i\), for every player \(i \in N\).
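The triple \((N, (S_i), (u_i))\) can be encoded directly. The following is a minimal sketch in Python; the concrete game shown (a Prisoners'-dilemma-style payoff table) and all variable names are illustrative choices, not taken from the definition above.

```python
# Strategic-form game as plain data structures (illustrative sketch).
N = [1, 2]                             # finite set of players
S = {1: ["C", "D"], 2: ["C", "D"]}     # pure-strategy sets S_1, S_2

# u[i][(s1, s2)] is player i's payoff u_i(s) at profile s = (s1, s2).
# Payoffs below are a standard Prisoners'-dilemma pattern (an assumption,
# not the text's numbers): years in prison, negated.
u = {
    1: {("C", "C"): -1, ("C", "D"): -4, ("D", "C"): 0, ("D", "D"): -3},
    2: {("C", "C"): -1, ("C", "D"): 0, ("D", "C"): -4, ("D", "D"): -3},
}

profile = ("D", "D")                   # a strategy profile s in S
print(u[1][profile], u[2][profile])    # payoffs u_1(s), u_2(s): -3 -3
```

Any finite strategic-form game fits this shape: a list of players, a strategy set per player, and one payoff dictionary per player keyed by strategy profiles.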
Definition 2.3 (Zero-sum games) A zero-sum game \(G\) is one in which for all \(\boldsymbol{s} = (s_1, \dots, s_n) \in S\) we have \(\sum_{i \in N} u_i(\boldsymbol{s}) = 0\).
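The zero-sum condition is easy to check mechanically: sum the payoffs over all players at every strategy profile. Below is a hedged sketch using Matching Pennies, a standard zero-sum game chosen here for illustration (it is not one of the text's examples).

```python
from itertools import product

# Matching Pennies: player 1 wins (+1) if the coins match, loses (-1)
# otherwise; player 2's payoffs are the exact negation.
S = {1: ["H", "T"], 2: ["H", "T"]}
u = {
    1: {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1},
    2: {("H", "H"): -1, ("H", "T"): 1, ("T", "H"): 1, ("T", "T"): -1},
}

def is_zero_sum(S, u):
    # Definition 2.3: payoffs sum to zero at every strategy profile.
    return all(sum(u[i][s] for i in u) == 0 for s in product(*S.values()))

print(is_zero_sum(S, u))  # True
```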
Two examples are provided in the slides: the Prisoners' Dilemma and the Cournot Duopoly.
2.3 Solution Concepts
A solution concept is a method of analyzing games with the objective of restricting the set of all possible outcomes to those that are more reasonable than others. We will use the term equilibrium for any one of the strategy profiles that emerge as one of the solution concepts’ predictions.
We follow the approach of Steven Tadelis here, even though it is not completely standard.
Nash equilibrium is a solution concept. That is, we “solve” games by finding Nash equilibria and declaring them to be reasonable outcomes.
2.4 Assumptions
Throughout the lecture, we assume that:
- Players are rational: a rational player is one who chooses his strategy to maximize his payoff.
- Players are intelligent: an intelligent player knows everything about the game (actions and payoffs) and can make any inferences about the situation that we can make.
- Common knowledge: The fact that players are rational and intelligent is common knowledge among them.
- Self-enforcement: Any prediction (or equilibrium) of a solution concept must be self-enforcing.
The fourth assumption implies non-cooperative game theory: each player is in control of his own actions, and he will stick to an action only if he finds it to be in his best interest.
2.5 Evaluating Solution Concepts
In order to evaluate our theory as a methodological tool we use the following criteria:
- Existence (i.e., how often does it apply?): The solution concept should apply to a wide variety of games.
- E.g. We shall see that mixed Nash equilibria exist in all two-player finite strategic-form games.
- Uniqueness (i.e., how much does it restrict behavior?): We want the solution concept to restrict behavior as much as possible.
- E.g. So-called strictly dominant strategy equilibria are always unique, as opposed to Nash equilibria.
2.5.1 Pure Strategies
We will consider the following solution concepts:
- strictly dominant strategy equilibrium;
- iterated elimination of strictly dominated strategies (IESDS);
- rationalizability;
- Nash equilibria.
These concepts will first be discussed in terms of pure strategies only.
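To preview one of these concepts concretely, the IESDS procedure for two players can be sketched as a short loop: repeatedly delete any pure strategy that some other strategy strictly beats against every remaining opponent strategy. This is an illustrative sketch, not the lecture's implementation; the function name and the example payoffs (a Prisoners'-dilemma pattern) are assumptions.

```python
def iesds(S1, S2, u1, u2):
    """Iterated elimination of strictly dominated strategies
    (two players, pure strategies only). u1, u2 map profiles
    (s1, s2) to payoffs."""
    changed = True
    while changed:
        changed = False
        # Remove player 1's strategies strictly dominated by some other
        # strategy t against every surviving strategy of player 2.
        for s in list(S1):
            if any(all(u1[(t, s2)] > u1[(s, s2)] for s2 in S2)
                   for t in S1 if t != s):
                S1.remove(s)
                changed = True
        # Symmetrically for player 2.
        for s in list(S2):
            if any(all(u2[(s1, t)] > u2[(s1, s)] for s1 in S1)
                   for t in S2 if t != s):
                S2.remove(s)
                changed = True
    return S1, S2

# Prisoners'-dilemma payoffs: D strictly dominates C for both players,
# so IESDS leaves only the profile (D, D).
u1 = {("C", "C"): -1, ("C", "D"): -4, ("D", "C"): 0, ("D", "D"): -3}
u2 = {("C", "C"): -1, ("C", "D"): 0, ("D", "C"): -4, ("D", "D"): -3}
print(iesds(["C", "D"], ["C", "D"], u1, u2))  # (['D'], ['D'])
```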