# Game theory and game dynamics

Game theory was introduced by Von Neumann and Morgenstern [VON 44] in order to model the essentials of decision-making in interactive situations. In its normal-form representation, a game consists of a finite set of players $I = \{1, \dots, n\}$, a set of pure strategies for each player $S_i = \{s_1, \dots, s_m\}$, and a utility function $\pi : S_1 \times \dots \times S_n \to \mathbb{R}$, which associates strategies with payoffs. Each player adopts a strategy in order to play the game, and the payoff obtained depends on the combination of strategies played at the same time by the players (the strategy profile). In non-cooperative games, the players choose their strategies independently, each considering what the other players can play and trying to find the best strategy to employ in the game.

The players can play mixed strategies, which are probability distributions over pure strategies. A mixed strategy can be defined as a vector $x = (x^1, \dots, x^m)$, where $m$ is the number of pure strategies and each component $x^h$ denotes the probability that the player chooses its $h$-th pure strategy. Each mixed strategy corresponds to a point on the simplex, whose corners correspond to the pure strategies.
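The simplex constraint described above can be sketched in a few lines of numpy: a mixed strategy is any non-negative vector summing to one, and a pure strategy is a corner of that simplex (a one-hot vector). The helper name below is an illustrative assumption, not from the source.

```python
import numpy as np

def is_mixed_strategy(x, tol=1e-9):
    """Check that x lies on the standard simplex:
    non-negative entries that sum to 1."""
    x = np.asarray(x, dtype=float)
    return bool(np.all(x >= -tol) and abs(x.sum() - 1.0) <= tol)

# A pure strategy is a corner of the simplex (a one-hot vector);
# a mixed strategy is any other point of the simplex.
pure = np.array([0.0, 1.0, 0.0])   # always play the 2nd pure strategy
mixed = np.array([0.5, 0.3, 0.2])  # randomize over three pure strategies

print(is_mixed_strategy(pure))   # True
print(is_mixed_strategy(mixed))  # True
```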

A strategy profile can be defined as a pair $(p, q)$, where $p \in \Delta_i$ and $q \in \Delta_j$. The expected payoffs for this strategy profile are computed as:

$$u_i(p, q) = p^T A_i q$$

and

$$u_j(p, q) = q^T A_j p$$

where $A_i$ and $A_j$ are the payoff matrices of players $i$ and $j$, respectively.
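The expected payoff of a strategy profile is just a bilinear form, which numpy computes directly. The payoff matrices below are illustrative assumptions (a simple symmetric coordination game), not taken from the source.

```python
import numpy as np

# Hypothetical 2x2 payoff matrices for players i and j:
# a coordination game where matching on strategy 1 pays 2, on strategy 2 pays 1.
A_i = np.array([[2.0, 0.0],
                [0.0, 1.0]])
A_j = A_i.T  # player j's payoff matrix; the game here is symmetric

p = np.array([0.7, 0.3])  # mixed strategy of player i
q = np.array([0.4, 0.6])  # mixed strategy of player j

u_i = p @ A_i @ q  # expected payoff of player i: p^T A_i q
u_j = q @ A_j @ p  # expected payoff of player j: q^T A_j p
print(u_i, u_j)    # 0.74 for both, since the game is symmetric
```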

In evolutionary game theory, we have a population of agents that play games repeatedly with their neighbors and update their beliefs on the state of the system, choosing their strategies according to which actions have been effective and which have not in previous games, until the system converges. The strategy space of each player $i$ is defined as a mixed strategy $x_i$, as defined above. The payoff corresponding to a single pure strategy can be computed as:

$$u_i(e_i^h) = \sum_{j=1}^{n} (A_{ij} x_j)_h$$

and the average payoff is:

$$u_i(x) = \sum_{j=1}^{n} x_i^T A_{ij} x_j$$

where $n$ is the number of players with whom the games are played and $A_{ij}$ is the payoff matrix between players $i$ and $j$. The replicator dynamics equation [TAY 78] is used to find those states of the system that correspond to the Nash equilibria of the games:

$$x_i^h(t+1) = x_i^h(t) \, \frac{u_i(e_i^h)}{u_i(x(t))}$$

This equation allows better-than-average strategies to grow at each iteration. Each iteration of the dynamics can be considered as an instance of an inductive learning process, in which the players learn from the others how to play their best strategy in a given context. When equilibrium is reached, we assign to each player the label corresponding to the strategy with the highest value, which is computed with the following equation:
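The replicator update can be sketched as follows. For simplicity this is the single-population symmetric version (one payoff matrix $A$ and one strategy vector $x$), rather than the multi-player setting with per-pair matrices $A_{ij}$ described above; the function name, the payoff matrix, and the starting point are all illustrative assumptions. Note that the multiplicative update assumes positive payoffs.

```python
import numpy as np

def replicator_dynamics(A, x_init, max_iter=1000, tol=1e-8):
    """Discrete-time replicator dynamics for a single population playing a
    symmetric game with payoff matrix A. At each step the probability of each
    pure strategy is rescaled by its payoff relative to the average payoff,
    so better-than-average strategies grow."""
    x = np.asarray(x_init, dtype=float)
    for _ in range(max_iter):
        payoffs = A @ x            # u(e^h): payoff of each pure strategy h
        avg = x @ payoffs          # u(x): average payoff x^T A x
        x_new = x * payoffs / avg  # replicator update
        if np.abs(x_new - x).sum() < tol:
            return x_new           # converged to (near) equilibrium
        x = x_new
    return x

# Hypothetical coordination game: matching on strategy 1 pays 2, on strategy 2 pays 1.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x = replicator_dynamics(A, [0.6, 0.4])
label = int(np.argmax(x))  # assign the label of the highest-probability strategy
print(x, label)            # x converges toward the corner [1, 0], so label 0
```

Starting from the interior of the simplex, the dynamics drive the population toward a corner, which is why the converged components are close to 1, as the text notes below.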

$$\phi_i = \arg\max_{h = 1, \dots, m} x_i^h$$

Experimentally, we noticed that the selected values are always close to 1.