Phase Transition as Seen in Monte Carlo Simulations
We perform a Monte Carlo simulation for a system of linear size L at various temperatures as described in the previous section. We then examine the curves of the average values of different physical quantities versus T. A phase transition is recognized by anomalies in these quantities at some temperature.
In a second-order transition, the internal energy changes its curvature at the transition temperature Tc, the heat capacity and the susceptibility show a peak at Tc, and the magnetization falls to zero, with a small tail above Tc due to finite-size effects. These behaviors are shown schematically in Fig. 6.3.
In a first-order transition, for a sufficiently large L, the energy and the magnetization are discontinuous at the transition; the heat capacity and the susceptibility are therefore not defined at the transition. We schematically show E and M in Fig. 6.4.
Figure 6.3 Physical quantities of a second-order phase transition as seen in a simulation with a linear size L: energy E, heat capacity $C_v$, magnetization M, susceptibility $\chi$.
Figure 6.4 First-order transition as seen in a simulation for a large size L: Discontinuities of E and M are observed at the transition (dashed vertical lines).
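To make the procedure concrete, here is a minimal Python sketch (not from the text; it assumes the ferromagnetic Ising model with J = k_B = 1) that measures E, M, $C_v$ and $\chi$ at one temperature from the fluctuation relations $C_v = (\langle E^2\rangle - \langle E\rangle^2)/(N T^2)$ and $\chi = (\langle M^2\rangle - \langle M\rangle^2)/(N T)$:

```python
import numpy as np

def ising_metropolis(L, T, n_eq=2000, n_meas=4000, seed=0):
    """Metropolis simulation of the 2D Ising model (J = k_B = 1) on an
    L x L lattice with periodic boundary conditions.  Returns the
    per-spin averages <e>, <|m|> and the fluctuation estimators
    c_v = (<E^2>-<E>^2)/(N T^2) and chi = (<M^2>-<M>^2)/(N T)."""
    rng = np.random.default_rng(seed)
    N = L * L
    s = rng.choice([-1, 1], size=(L, L))
    E_sum = E2_sum = M_sum = M2_sum = 0.0
    for sweep in range(n_eq + n_meas):
        for _ in range(N):                      # one Monte Carlo sweep
            i, j = rng.integers(L), rng.integers(L)
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2.0 * s[i, j] * nb             # energy cost of flipping s[i,j]
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]
        if sweep >= n_eq:                       # measure after equilibration
            E = -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))
            M = abs(np.sum(s))
            E_sum += E; E2_sum += E * E; M_sum += M; M2_sum += M * M
    n = n_meas
    E_avg, E2_avg, M_avg, M2_avg = E_sum / n, E2_sum / n, M_sum / n, M2_sum / n
    c_v = (E2_avg - E_avg ** 2) / (N * T * T)
    chi = (M2_avg - M_avg ** 2) / (N * T)
    return E_avg / N, M_avg / N, c_v, chi
```

Scanning T with this routine and plotting the four returned quantities reproduces, for the Ising case, the behaviors sketched in Fig. 6.3.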
Finite-Size Scaling Laws
Finite-size effects on thermodynamic quantities and on the spin-spin correlation have been established by renormalization group analysis. The calculations are too complicated to be reproduced here; we simply recall some results which are very helpful when analyzing data obtained from Monte Carlo simulations.
Let L be the linear dimension of the system under consideration. We distinguish two cases:
Second-Order Phase Transition
• The height of the maximum of the heat capacity depends on L as follows:
$$C_v^{\max}(L) \propto L^{\alpha/\nu},$$
where $\alpha$ and $\nu$ are the critical exponents of the heat capacity and of the correlation length, respectively.
• The height of the maximum of the susceptibility behaves as
$$\chi^{\max}(L) \propto L^{\gamma/\nu}.$$
• The magnetization at the transition depends on L through
$$M(T_c) \propto L^{-\beta/\nu}.$$
• The moment of n-th order is defined as
$$V_n = \frac{\partial \ln \langle M^n \rangle}{\partial \beta},$$
where $\beta = 1/(k_B T)$. We can show that
$$V_n = \langle E \rangle - \frac{\langle M^n E \rangle}{\langle M^n \rangle}.$$
The finite-size effects on the maxima of these moments are given by
$$V_n^{\max} \propto L^{1/\nu}.$$
• The Binder cumulant $U_E$ is defined by
$$U_E = 1 - \frac{\langle E^4 \rangle}{3 \langle E^2 \rangle^2}.$$
In a second-order phase transition, we have
$$U_E(L) = U_E(\infty) + A L^{-\theta},$$
where $U_E(\infty) = 2/3$ and $\theta < d$.
• The critical temperature depends on L through the relation
$$T_c(L) = T_c(\infty) + C L^{-1/\nu}.$$
We use the above relations to determine the critical exponents by performing simulations with many sizes L. Note that L has to be chosen so that the system sizes differ by at least two orders of magnitude.
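As an illustration of how such a relation is exploited, the following sketch extracts $\gamma/\nu$ from the heights of the susceptibility maxima by a linear fit in log-log scale. The peak values here are synthetic (the amplitude and noise level are arbitrary assumptions); in a real study they come from the simulations described above:

```python
import numpy as np

# Hypothetical peak susceptibilities chi_max measured at several sizes L.
L_vals = np.array([8, 16, 32, 64, 128], dtype=float)
gamma_over_nu = 1.75                      # 2D Ising value, used to fake the data
noise = 1.0 + 0.02 * np.random.default_rng(1).standard_normal(5)
chi_max = 0.05 * L_vals ** gamma_over_nu * noise

# chi_max ~ L^(gamma/nu)  =>  log chi_max = (gamma/nu) log L + const,
# so the exponent ratio is the slope of a straight line in log-log scale.
slope, intercept = np.polyfit(np.log(L_vals), np.log(chi_max), 1)
print(f"estimated gamma/nu = {slope:.3f}")   # slope ~ gamma/nu
```

The same log-log fit applied to $C_v^{\max}(L)$ or to $M(T_c, L)$ yields $\alpha/\nu$ and $-\beta/\nu$, respectively.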
First-Order Phase Transition
When the system size is not sufficiently large, a first-order phase transition can show aspects of a second-order one, namely E and M are continuous at the transition, with maxima of $C_v$ and $\chi$. In the case of a first-order transition, we should have
• The heights of the maxima of $C_v$ and $\chi$ are proportional to the system volume $L^d$:
$$C_v^{\max} \propto L^d, \qquad \chi^{\max} \propto L^d.$$
• The Binder cumulant depends on L as
$$U_E(L) = U_E(\infty) + B L^{-d},$$
where $U_E(\infty) \neq 2/3$.
Note that finite-size effects are seen in the transition region around Tc because it is only in this region that the correlation length may become as large as the system size. This is why the size effects depend on the critical exponents; they therefore act in different manners on different physical quantities. Note also that at a finite size, the "pseudo" transition temperature at which $C_v$ is maximum is not the one at which $\chi$ is maximum. Only in the infinite-size limit do these maxima occur at the same temperature, which is the "real" critical temperature. This is schematically shown in Fig. 6.5.
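The extrapolation sketched in Fig. 6.5 can be done numerically. In this hypothetical example the pseudo-transition temperatures are generated from the relation $T_c(L) = T_c(\infty) + C L^{-1/\nu}$ itself (the amplitudes and the value of $\nu$ are assumptions for illustration only), and a linear fit in the variable $x = L^{-1/\nu}$ recovers the common intercept $T_c(\infty)$:

```python
import numpy as np

# Hypothetical pseudo-transition temperatures (locations of the C_v and chi
# maxima) for several sizes; in practice these are read off simulation data.
L_vals = np.array([8, 16, 32, 64, 128], dtype=float)
nu = 1.0                                   # 2D Ising correlation-length exponent
Tc_inf, C_cv, C_chi = 2.269, 0.9, 1.6      # assumed values, illustration only
T_cv = Tc_inf + C_cv * L_vals ** (-1.0 / nu)
T_chi = Tc_inf + C_chi * L_vals ** (-1.0 / nu)

# T_c(L) = T_c(inf) + C L^{-1/nu}: a linear fit in x = L^{-1/nu}
# extrapolates both sequences to x -> 0, i.e. L -> infinity.
x = L_vals ** (-1.0 / nu)
tc_from_cv = np.polyfit(x, T_cv, 1)[1]     # intercept = T_c(inf)
tc_from_chi = np.polyfit(x, T_chi, 1)[1]
print(tc_from_cv, tc_from_chi)             # the two intercepts agree
```

The two sequences of maxima approach each other only through this extrapolation, exactly as in Fig. 6.5.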
One of the problems encountered in Monte Carlo simulations is the estimation of errors due to a number of causes: simulation time, system size, artificial procedures used to accelerate the convergence toward equilibrium, etc. There are two principal types of errors: (i) statistical errors due to autocorrelation, and finite-size effects on these errors; (ii) errors due to the fitting of raw data with some laws. Errors of the second type are directly given by the computer according to the chosen fitting procedure (least squares, for example). Errors of the first type become less and less important because computers are more and more powerful and rapid in execution, with huge available memories. Simulation time nowadays is much longer than the autocorrelation time and the relaxation time in most
Figure 6.5 Finite-size effects on the temperatures corresponding to the maxima of $C_v$ (black circles) and $\chi$ (white circles). These temperatures coincide only when L → ∞ (extrapolated by dashed lines).
systems. Therefore, errors are extremely small. Most Monte Carlo works at the present time no longer show errors, since these are often smaller than the size of the presented data points. We nevertheless give in the following some notions on error sources and show how to calculate errors.
The autocorrelation function of a quantity A at the time t with itself at t = 0 is defined by
$$\phi(t) = \frac{\langle A(0) A(t) \rangle - \langle A \rangle^2}{\langle A^2 \rangle - \langle A \rangle^2}, \quad (6.31)$$
where $\langle \cdots \rangle$ is the thermal average taken between t = 0 and t, and A(t) is the instantaneous value of A. By definition, we have $\phi(0) = 1$ and $\phi(\infty) = 0$. In a simulation, we can calculate $\phi(t)$ and obtain the integrated autocorrelation time $\tau_{\rm int}$ by
$$\tau_{\rm int} = \int_0^\infty \phi(t)\, dt. \quad (6.32)$$
The autocorrelation time $\tau$ is defined by
$$\phi(t) = e^{-t/\tau}. \quad (6.33)$$
We then have
$$\int_0^\infty \phi(t)\, dt = \int_0^\infty e^{-t/\tau}\, dt = \tau. \quad (6.34)$$
Comparing to (6.32), we get
$$\tau = \tau_{\rm int}, \quad (6.35)$$
so that $\tau$ can be obtained from the measured $\phi(t)$.
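In practice $\phi(t)$ and $\tau$ are estimated from the recorded time series. A minimal sketch follows; the AR(1) test series is a synthetic illustration (not a physical model), chosen because its exact integrated autocorrelation time is known:

```python
import numpy as np

def integrated_autocorr_time(a, t_max):
    """Estimate phi(t) = (<A(0)A(t)> - <A>^2)/(<A^2> - <A>^2) from a time
    series and return the integrated autocorrelation time
    tau = 1/2 + sum_{t>=1} phi(t), a discrete analogue of (6.32)."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    n = len(a)
    var = np.dot(a, a) / n
    phi = np.array([np.dot(a[:n - t], a[t:]) / ((n - t) * var)
                    for t in range(t_max)])   # t_max cuts off the noisy tail
    return 0.5 + phi[1:].sum(), phi

# Synthetic correlated series: an AR(1) process a_t = r*a_{t-1} + noise,
# whose exact integrated autocorrelation time is (1 + r) / (2 (1 - r)).
rng = np.random.default_rng(0)
r, n = 0.9, 100_000
a = np.empty(n)
a[0] = rng.standard_normal()
for t in range(1, n):
    a[t] = r * a[t - 1] + rng.standard_normal()
tau, phi = integrated_autocorr_time(a, t_max=150)
print(tau)   # exact value for r = 0.9 is 1.9/0.2 = 9.5
```

The cutoff `t_max` should be a few times $\tau$: beyond that, $\phi(t)$ is dominated by statistical noise and summing it only degrades the estimate.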
The error on A is given by its variance
$$\langle (\delta A)^2 \rangle = \Big\langle \Big[\frac{1}{N}\sum_{n=1}^{N} A(t_n) - \langle A \rangle\Big]^2 \Big\rangle = \frac{1}{N}\big[\langle A^2\rangle - \langle A\rangle^2\big]\Big[1 + \frac{2}{\Delta t}\int_0^{t_N}\phi(t)\Big(1 - \frac{t}{t_N}\Big)\,dt\Big],$$
where N is the total number of measures and $A(t_n)$ the instantaneous value of A measured at $t_n$. The second equality has been obtained by expanding the square of the first equality and then replacing the sum by an integral using (6.31). $\Delta t$ is the interval between two measures. To de-correlate successive values $A(t_n)$, we should not measure A at each Monte Carlo step. We can, for example, take a measure once every ten steps, namely $\Delta t = 10$. Since
$\phi(t)$ is negligible for $t \gg \tau$, we replace the upper limit $t_N$ by $\infty$. In addition, we can neglect $t/t_N$ with respect to 1. We then obtain
$$\langle (\delta A)^2 \rangle = \frac{1}{N}\big[\langle A^2 \rangle - \langle A \rangle^2\big]\Big(1 + \frac{2\tau}{\Delta t}\Big), \quad (6.39)$$
where $N \Delta t = N_{\rm MC}$ is the total number of Monte Carlo steps used in the simulation.
The number of independent measures among the N measures is $N/(1 + 2\tau/\Delta t)$. The relative error on A can be estimated by
$$\rho = \frac{\sqrt{\langle (\delta A)^2 \rangle}}{\langle A \rangle} \simeq \frac{1}{\langle A \rangle}\sqrt{\frac{2\tau\,\big[\langle A^2\rangle - \langle A\rangle^2\big]}{N\,\Delta t}}, \quad (6.40)$$
where we have neglected $\Delta t$ compared to $2\tau$. Since $\tau$ is given by (6.35), and $\langle A^2 \rangle$ and $\langle A \rangle$ are known from the simulation, we obtain $\rho$ from the above formula.
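A sketch of this error estimate follows. To check the formula, we again use a synthetic AR(1) series (an illustration, not a physical model) whose exact $\tau$ and variance are known, and compare the predicted error bar with the actual spread of the means of many independent runs:

```python
import numpy as np

def correlated_error(a, tau, dt=1.0):
    """Error bar on the mean of a correlated series, following (6.39):
    <(dA)^2> = (<A^2> - <A>^2)(1 + 2 tau/dt)/N."""
    a = np.asarray(a, dtype=float)
    return np.sqrt(a.var() * (1.0 + 2.0 * tau / dt) / len(a))

# Many independent AR(1) runs: the spread of the run means should match
# the error bar predicted with the exact tau = (1+r)/(2(1-r)).
rng = np.random.default_rng(1)
r, n, reps = 0.9, 10_000, 100
tau = (1 + r) / (2 * (1 - r))              # exact integrated time for AR(1)
means = []
for _ in range(reps):
    a = np.empty(n)
    a[0] = rng.standard_normal()
    for t in range(1, n):
        a[t] = r * a[t - 1] + rng.standard_normal()
    means.append(np.mean(a))
predicted = correlated_error(a, tau)       # error bar from the last run
print(np.std(means), predicted)            # comparable values
```

A naive error bar computed without the $2\tau$ factor would be smaller by roughly $\sqrt{1 + 2\tau}$, i.e. badly underestimated for strongly correlated data.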
Size Effects on Errors
In the vicinity of the transition, the relaxation time $\tau$ is very long: spins are correlated even at very long distances. This phenomenon is called "critical slowing-down." The first error due to the finite system size comes from the error on the relaxation time. We have
$$\tau \propto \xi^z,$$
where $\xi$ is the correlation length and z the dynamic exponent (see previous sections). It has been shown that with the Metropolis algorithm (single-spin flip) we have $z \simeq 2$ for many systems. In addition, $\xi$ theoretically diverges at a second-order transition, but in a simulation the transition takes place when $\xi \sim L$, which plays the role of the infinite limit due to the periodic boundary conditions. For this reason, in a second-order transition, $\tau$ depends on the size L through
$$\tau(L) \propto L^z. \quad (6.41)$$
In a first-order transition, we have [21, 33]
$$\tau(L) \propto \exp(a L^{d-1}), \quad (6.42)$$
where a depends on the algorithm: values $a \simeq 2$ and $a \simeq 1.5$ have been obtained, the latter by the Swendsen-Wang algorithm (see description in Section 6.5.1) for the same model.
Replacing $\tau(L)$ of (6.41) and (6.42) in (6.39), we obtain for a second-order transition
$$\langle (\delta A)^2 \rangle \propto \frac{L^z}{N_{\rm MC}} \big[\langle A^2 \rangle - \langle A \rangle^2\big],$$
and for a first-order transition, we have
$$\langle (\delta A)^2 \rangle \propto \frac{\exp(a L^{d-1})}{N_{\rm MC}} \big[\langle A^2 \rangle - \langle A \rangle^2\big],$$
where a is a fitting parameter.
The systematic error, namely the bias coming from the finite length of the simulation, is of order $\tau/t_N$. The relative error on the variance $\langle (\delta A)^2 \rangle$ behaves as
$$\epsilon \simeq \frac{2\tau}{t_N}.$$
For an error of $\epsilon$ = 1%, we see that the simulation time should be about 200 times $\tau$.
In simulations, we have to solve two practical problems: (i) to shorten the waiting time, (ii) to find better algorithms to study physical phenomena which cannot be treated with precision by the standard Metropolis algorithm.
To solve the first kind of problem, we mainly have to find ways to accelerate the convergence to equilibrium, so that the equilibrating time is shortened and the quality of the physical quantities averaged during a given time is better. Among the best methods proposed so far are the cluster-flip methods due to Wolff and to Swendsen and Wang, described below. For the second kind of problem, we can mention the difficulties encountered by the Metropolis algorithm in the calculation of critical exponents and in the detection of extremely weak first-order transitions. To deal with such difficulties, histogram and multiple-histogram techniques as well as flat-histogram methods have been proposed in the literature [110, 351]. We will describe them below.
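As a preview of the cluster-flip idea, here is a minimal sketch of one Wolff single-cluster update for the Ising model (J = k_B = 1 assumed); this is only an illustration, the full description being given below:

```python
import numpy as np

def wolff_step(s, T, rng):
    """One Wolff single-cluster update for the 2D Ising model (J = k_B = 1):
    grow a cluster of equal spins with bond probability p = 1 - exp(-2/T)
    and flip the whole cluster at once.  Returns the cluster size."""
    L = s.shape[0]
    p_add = 1.0 - np.exp(-2.0 / T)
    i, j = int(rng.integers(L)), int(rng.integers(L))
    seed_spin = s[i, j]
    s[i, j] = -seed_spin              # flipping on visit also marks the site
    stack = [(i, j)]
    size = 1
    while stack:
        x, y = stack.pop()
        for nx, ny in (((x + 1) % L, y), ((x - 1) % L, y),
                       (x, (y + 1) % L), (x, (y - 1) % L)):
            if s[nx, ny] == seed_spin and rng.random() < p_add:
                s[nx, ny] = -seed_spin
                stack.append((nx, ny))
                size += 1
    return size
```

Because entire correlated regions are flipped in a single step, updates of this type largely suppress the critical slowing-down of the single-spin-flip Metropolis algorithm near a second-order transition.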