# Estimation

As mentioned previously, the inferential goal is to center the distribution of statistics over those of the observed network, thereby fitting a model that gives maximal support to the data. The distribution is centered on the observed values when, on average, the statistics drawn from the distribution equal the observed ones. Formally, we want the expected value of the statistics, $E_g(z(X))$, to equal the observed statistics (i.e., $E_g(z(X)) = z(x_{obs})$), where $x_{obs}$ is the observed graph. Equivalently, $E_g(z(X)) - z(x_{obs}) = 0$, which is known as the moment equation. Solving the moment equation for $g$ means finding the parameter values that provide maximal support to the data. For most models, we cannot solve the moment equation analytically, so we have to rely on simulation.
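The simulation-based solution of the moment equation can be sketched with a toy one-parameter model. The sketch below is illustrative only, not the estimation routine used for full ERGMs: the model, function names, and Robbins-Monro step sizes are our own choices. It fits a Bernoulli graph whose single statistic is the edge count, repeatedly simulating $z(X)$ at the current $g$ and nudging $g$ until the expected statistic matches the observed one.

```python
import math
import random

random.seed(0)

def simulate_edges(g, m):
    # Bernoulli graph: each of the m dyads is a tie independently
    # with probability exp(g) / (1 + exp(g)).
    p = 1.0 / (1.0 + math.exp(-g))
    return sum(random.random() < p for _ in range(m))

def solve_moment_equation(z_obs, m, g=0.0, steps=5000, a=8.0):
    # Robbins-Monro stochastic approximation: simulate z(X) at the
    # current g and move g toward satisfying E_g(z(X)) = z_obs,
    # with step sizes a/t shrinking over iterations.
    for t in range(1, steps + 1):
        z_sim = simulate_edges(g, m)
        g += (a / t) * (z_obs - z_sim) / m
    return g

# Toy data: 10 actors -> m = 45 dyads, with 15 observed ties.
m, z_obs = 45, 15
g_hat = solve_moment_equation(z_obs, m)

# For this one-statistic model the moment equation has the closed
# form g = log(z_obs / (m - z_obs)), so the answer can be checked.
g_exact = math.log(z_obs / (m - z_obs))
print(round(g_hat, 2), round(g_exact, 2))
```

For richer ERGM statistics (stars, triangles) no closed form exists and the simulation step requires MCMC over graphs, but the centering logic is the same.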

## Maximum Likelihood Principle

The MLE of a parameter for a given model and observed data is the value of $g$ that makes the observed data most likely: we want to find the vector $g$ that makes the probability $P_g(x_{obs})$ as large as possible. It can be shown that this value of $g$ is the same as the solution of the moment equation,[1] so that centering the distribution is equivalent to finding the MLE of $g$. (This holds for all exponential family distributions, cf. Lehmann, 1983; for further details of statistical inference for ERGMs, see Corander, Dahmstrom, and Dahmstrom (1998, 2002); Crouch, Wasserman, and Trachtenberg (1998); Dahmstrom and Dahmstrom (1993); Handcock (2003); Hunter and Handcock (2006); Snijders (2002); Strauss and Ikeda (1990).)

- [1] Differentiating the logarithm of $P_g(x_{obs})$ with respect to $g$ gives $\frac{\partial}{\partial g} \log P_g(x_{obs}) = z(x_{obs}) - \frac{\partial}{\partial g} \log \sum_{x \in \mathcal{X}} e^{g_1 z_1(x) + \cdots + g_p z_p(x)}$, which reduces to $z(x_{obs}) - \sum_{x \in \mathcal{X}} z(x) P_g(x)$. Setting this derivative to zero is exactly the moment equation.
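The equivalence between the MLE and the moment equation can be checked numerically on a graph space small enough to enumerate. The sketch below is a hypothetical illustration (3 nodes, the edge count as the only statistic, a grid-search MLE): the $g$ that maximizes $\log P_g(x_{obs})$ is also the $g$ at which $E_g(z(X))$ equals $z(x_{obs})$.

```python
import itertools
import math

# All undirected graphs on 3 nodes: 3 possible ties, hence 8 graphs.
graphs = list(itertools.product([0, 1], repeat=3))

def z(x):
    # Single statistic: the number of ties in the graph.
    return sum(x)

def log_lik(g, z_obs):
    # log P_g(x_obs) = g * z_obs - log kappa(g), where kappa(g) is
    # the normalizing sum over the whole graph space.
    log_kappa = math.log(sum(math.exp(g * z(x)) for x in graphs))
    return g * z_obs - log_kappa

def expected_z(g):
    # E_g(z(X)) computed exactly by enumerating the graph space.
    kappa = sum(math.exp(g * z(x)) for x in graphs)
    return sum(z(x) * math.exp(g * z(x)) for x in graphs) / kappa

z_obs = 1  # suppose the observed 3-node graph has a single tie

# Locate the MLE with a fine grid search over g.
g_mle = max((i * 0.001 for i in range(-5000, 5001)),
            key=lambda g: log_lik(g, z_obs))

# At the MLE, the expected statistic matches the observed one.
print(round(g_mle, 3), round(expected_z(g_mle), 3))
```

Enumeration is only feasible for tiny graph spaces; for realistic networks the expectation must be approximated by simulation, which is why the estimation methods cited above rely on MCMC.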