780.20: 1094 Session 12
Handouts: Excerpt from Binder/Heermann,
printouts of sampling_test.cpp and Ising model codes
In this session, we'll extend our study of Monte Carlo
methods.
We'll compare importance sampling to ordinary random sampling and then
explore a standard example, the two-dimensional Ising
model.
Please read the background notes first for an introduction
and then use the Binder/Heermann handout as you go.
Monte Carlo Sampling
In the sampling_test.cpp code, three distributions of energy for a
one-dimensional Ising model [equation (2.1.1) in the handout] are
generated. The first is the exact distribution at temperature kT in
the canonical ensemble;
that is, the
distribution of energies considering every possible configuration of
spins weighted by a normalized Boltzmann factor.
The second is the energy distribution if a large number of spin
configurations are chosen at random. The third is the energy
distribution from a Metropolis Markov process.
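The third distribution is built from a Metropolis Markov chain. As a reminder of how a single Metropolis update works, here is a minimal sketch for a 1D chain of ±1 spins with periodic boundaries; the function names are illustrative and need not match those in sampling_test.cpp (which also uses a different random number generator).

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Energy of a 1D Ising chain with periodic boundaries: E = -J * sum_i s_i s_{i+1}
double chain_energy(const std::vector<int>& s, double J) {
  double E = 0.;
  int N = (int)s.size();
  for (int i = 0; i < N; ++i) E -= J * s[i] * s[(i + 1) % N];
  return E;
}

// One Metropolis step: flip a random spin, then accept the new
// configuration with probability min(1, exp(-dE/kT)).
void metropolis_step(std::vector<int>& s, double J, double kT) {
  int i = std::rand() % (int)s.size();
  double E_old = chain_energy(s, J);
  s[i] = -s[i];                                  // trial flip
  double dE = chain_energy(s, J) - E_old;
  if (dE > 0. && std::rand() / (RAND_MAX + 1.) >= std::exp(-dE / kT))
    s[i] = -s[i];                                // reject: undo the flip
}
```

(Recomputing the full energy for every trial flip is wasteful; the Efficiency section below asks you to do better.)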
 Here are some questions to get you familiar with the model
and its implementation in the code sampling_test.cpp:
 How does the "J" in the Ising model determine whether we
will have a ferromagnet or an antiferromagnet? Which
one does the code start with? What does the ground state
(lowest energy state) of a ferromagnet look like?
[Bonus: What is the physical origin of J for a ferromagnet?]
 How would you add an external magnetic field H_ext
to the code?
[See Eq.(12.6) in the notes for the Hamiltonian.]
 How many total "microstates" are there for a 1D Ising
model with the number of sites in the code?
(I.e., how many possible "configurations" of spins are there?)
If calculating the energy of each configuration takes the same
amount of time, how much longer would it take to calculate a
lattice with twice the number of sites?
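The count behind this question: each of the N spins is independently up or down, so there are 2^N microstates, and doubling N multiplies the count (and the brute-force enumeration time) by another factor of 2^N. As a sketch:

```cpp
#include <cstdint>

// Number of Ising microstates for N sites: 2^N, since each spin is
// independently up or down.  Brute-force enumeration time grows the
// same way, so doubling N squares the work rather than doubling it.
std::uint64_t n_microstates(int N) { return std::uint64_t(1) << N; }
```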
 Given a number of lattice sites,
what are the values of the minimum and maximum energies?
How many different energy values are possible?
(Hint:
Can you have any integer energy between the minimum and maximum?)
 What are the boundary conditions for the line of
spins?
Find where they are applied in the code.
Can you think of a different set of boundary
conditions that could be used?
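One common choice of boundary conditions is periodic ("ring") boundaries, which can be coded as simple index wrapping; this sketch shows the idea (open or antiperiodic ends would be alternatives, and the actual code may implement the wrap differently):

```cpp
// Periodic boundary conditions for a chain of N spins: the neighbor to
// the right of site N-1 wraps around to site 0, and the neighbor to the
// left of site 0 wraps to site N-1.
int right_neighbor(int i, int N) { return (i + 1) % N; }
int left_neighbor(int i, int N) { return (i - 1 + N) % N; }
```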
 Find the calculation of the exact partition function in the
code. How does this give the same result as (2.1.5) in the
Binder/Heermann notes? [Hint: You can add
exp(-E_{i}/kT) for each spin configuration separately, or
group all those with the same energy together and multiply by
how many have that energy.]
Can you explain how the complete set of configurations is
constructed in the code using the "next_configuration"
function?
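One way such an enumerator can work (which may or may not match the implementation in sampling_test.cpp) is to treat the spin array as a binary counter, with -1 playing the role of 0 and +1 the role of 1:

```cpp
#include <vector>

// Advance a configuration of ±1 spins like a binary counter: add one
// with carry, treating -1 as the digit 0 and +1 as the digit 1.
// Returns false once the last configuration (all spins up) rolls over,
// so a while loop visits each of the 2^N configurations exactly once.
bool next_configuration(std::vector<int>& s) {
  for (std::size_t i = 0; i < s.size(); ++i) {
    if (s[i] == -1) { s[i] = 1; return true; }  // no carry: done
    s[i] = -1;                                  // carry to the next site
  }
  return false;  // wrapped around past the last configuration
}
```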
 Make a sampling_test project,
then compile and run it to see what the output looks like.
Did you get the correct
answer above for the number of configurations and
the possible energies? (If not, rethink!)
 Look through the code and try to understand how each
distribution is generated.
 Generate and attach
a gnuplot graph of the probability of energy E [called
P(E)]
vs. the energy (this is what is output to the screen)
for kT = 10. and kT = 1.
[You'll need to get the results into data files somehow.
There is a plot file sampling_test.plt
to help with the plot, but you have to put the
results in files with these names.]
 Why is the distribution for ordinary random sampling centered
at E=0? Based on this
distribution,
to what temperature does ordinary random sampling
correspond? [Hint: What Boltzmann factor would result
in a distribution like this?]
 Compare the exact P(E) for a canonical distribution to those
of random sampling and the Metropolis algorithm. Will the
Metropolis algorithm be effective in
doing importance sampling (which concentrates
points where the integrand is large)?
Why will the random sampling
be a problem for evaluating thermal averages [see figure 2.3 and
the discussion after equation (2.1.33) in Binder/Heermann]?
 Verify that the transition probabilities in (2.1.39a) and
(2.1.39b) both satisfy the condition (2.1.38). Which one
is implemented in the code?
Bonus: Switch to the other
and check that it works.
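The detailed-balance condition (2.1.38) requires the ratio of forward to reverse transition probabilities to equal the Boltzmann ratio exp(-dE/kT). A numerical check of the two standard choices, the Metropolis form and the symmetric heat-bath form (a sketch, with energies measured in units of kT):

```cpp
#include <cmath>

// Two standard transition probabilities for an energy change dE
// (in units of kT): Metropolis and heat-bath.
double w_metropolis(double dE) { return dE <= 0. ? 1. : std::exp(-dE); }
double w_heatbath(double dE)   { return std::exp(-dE) / (1. + std::exp(-dE)); }

// Detailed balance requires W(dE)/W(-dE) = exp(-dE) for every dE.
bool satisfies_detailed_balance(double (*w)(double), double dE) {
  return std::fabs(w(dE) / w(-dE) - std::exp(-dE)) < 1e-12;
}
```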

How do you calculate the average energy at a given temperature?
Modify the code to calculate the average energy at kT = 1.
and kT = 10. using the Metropolis sampling method and compare to the exact
average energy (according to the canonical Boltzmann distribution).
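For the exact comparison, the canonical average is <E> = sum_i E_i exp(-E_i/kT) / Z, summed over every configuration. A self-contained sketch for a small periodic 1D chain (using bit patterns to enumerate configurations, which the course code may do differently):

```cpp
#include <cmath>

// Exact canonical <E> for an N-site periodic 1D Ising chain by direct
// summation over all 2^N configurations, each weighted by exp(-E/kT).
double exact_average_energy(int N, double J, double kT) {
  double Z = 0., E_sum = 0.;
  for (unsigned c = 0; c < (1u << N); ++c) {   // bits of c encode the spins
    double E = 0.;
    for (int i = 0; i < N; ++i) {
      int si = (c >> i & 1) ? 1 : -1;
      int sj = (c >> ((i + 1) % N) & 1) ? 1 : -1;
      E -= J * si * sj;                        // one bond per site (periodic)
    }
    double weight = std::exp(-E / kT);
    Z += weight;
    E_sum += E * weight;
  }
  return E_sum / Z;
}
```

Sanity checks: at very high kT every configuration is equally weighted and <E> approaches 0; at very low kT the ground state dominates and <E> approaches -N J.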
The Two-Dimensional Ising Model
In this section, we explore some aspects of Monte Carlo simulations that
are discussed in section 2.2 of the handout.
We use the two-dimensional Ising model first with a ferromagnetic
interaction (J > 0) and then with an "antiferromagnetic"
interaction as our example.
 Take a look at ising_model.cpp and note that one can
switch between the 1D and 2D Ising
models by commenting and uncommenting some lines. The default
is the 2D model.
What are the differences between the 1D and 2D versions?
 Equilibration. Create and compile an ising_model project
including ising_model.cpp and random_seed.cpp
and run for several temperatures.

Make a gnuplot
graph of the energy vs. time for kT = 2.0, 1.0, and 0.5 (all on the
same graph, so rename your files appropriately).
Run three times at each temperature. Do you always
get the same result at large time?
 Look at the small time region.
How long does it (apparently)
take at each temperature to reach "equilibrium"? If you were going
to use the configurations generated here to calculate thermal
averages, would you want to use the
ones at the beginning? How can you deal with this?
 BONUS: Modify the code so that the output file names automatically
have the temperature in their names. (Recall filename_test.cpp from
Session 9.)
 Modify the code to change it from ferromagnetic to
antiferromagnetic (only one line needs to be changed!).
What did you change?
 Cooling. At present, the code starts from a random
configuration. Modify the code to implement "cooling" by looping
through temperatures kT = 2.0, 1.0, then 0.5 but start the simulation
at each successive temperature using the final configuration of the
higher temperature as the initial configuration of the lower
temperature. Generate a gnuplot graph of the energy vs. time
for each of the temperatures and compare to your previous results.
Conclusions?
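The cooling loop can be sketched as follows, here for a 1D chain with a single-spin-flip Metropolis sweep so that the example is self-contained; the point is only that the spin configuration carries over from one temperature to the next rather than being re-randomized (the 2D loop in ising_model.cpp differs in detail):

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Energy of a periodic 1D Ising chain: E = -J * sum_i s_i s_{i+1}
double chain_energy(const std::vector<int>& s, double J) {
  double E = 0.;
  int N = (int)s.size();
  for (int i = 0; i < N; ++i) E -= J * s[i] * s[(i + 1) % N];
  return E;
}

// One Metropolis sweep: N trial single-spin flips, using the local
// energy change dE = 2*J*s_i*(s_{i-1} + s_{i+1}).
void metropolis_sweep(std::vector<int>& s, double J, double kT) {
  int N = (int)s.size();
  for (int n = 0; n < N; ++n) {
    int i = std::rand() % N;
    double dE = 2. * J * s[i] * (s[(i + 1) % N] + s[(i - 1 + N) % N]);
    if (dE <= 0. || std::rand() / (RAND_MAX + 1.) < std::exp(-dE / kT))
      s[i] = -s[i];
  }
}

// Cooling schedule: the spins vector is NOT re-randomized between
// temperatures, so each kT starts from the previous final configuration.
void cool(std::vector<int>& s, double J) {
  for (double kT : {2.0, 1.0, 0.5})
    for (int sweep = 0; sweep < 500; ++sweep)
      metropolis_sweep(s, J, kT);
}
```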
 Efficiency. The code at present has several
inefficiencies. Here are some ways to speed it up:
 The current approach compares energies of
new and old configurations by calculating the full energy of each and
subtracting. Can you devise a (much) more efficient
approach?
(You don't need to implement it here.)
 During one MCS (one Monte Carlo sweep through the lattice), we can update each spin sequentially, which
saves random number calls but also leads to shorter equilibration
times.
 We can reduce the random number calls when deciding if a spin
flips or not.
 We can use a table of the nearest neighbors of each spin to save
calculation time.
These optimizations speed up the code by a factor of about six!
A new version is ising_opt.cpp (with make_ising_opt).
Look at how the optimizations were implemented.
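Two of these optimizations can be sketched as follows; the names are illustrative and need not match those in ising_opt.cpp:

```cpp
#include <vector>

// (1) Local energy difference: flipping spin (i,j) on an L x L lattice
//     (periodic boundaries) changes only its four bonds, so
//     dE = 2*J*s(i,j)*(sum of the four neighbors); no full-lattice
//     energy evaluation is needed for the Metropolis test.
double delta_E(const std::vector<std::vector<int>>& s, int i, int j, double J) {
  int L = (int)s.size();
  int nb = s[(i + 1) % L][j] + s[(i - 1 + L) % L][j]
         + s[i][(j + 1) % L] + s[i][(j - 1 + L) % L];
  return 2. * J * s[i][j] * nb;
}

// (2) Neighbor table: precompute (i+1) mod L and (i-1) mod L once, so
//     the inner Monte Carlo loop does table lookups instead of
//     modular arithmetic.
struct NeighborTable {
  std::vector<int> plus, minus;
  explicit NeighborTable(int L) : plus(L), minus(L) {
    for (int i = 0; i < L; ++i) {
      plus[i]  = (i + 1) % L;
      minus[i] = (i - 1 + L) % L;
    }
  }
};
```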
 Phase Transition
 Modify the optimized code ising_opt.cpp to
calculate the absolute value of the magnetization of the
system (with linear size L=5) for various
temperatures. Starting at kT = 4.0, cool
the system in steps of Delta kT = 0.2 down to kT = 1.0. What do you observe about the
behavior of the magnetization? (Make a plot.)
 When we calculate the magnetization, why do we
use its absolute value? It may help to look at the time dependence of
the magnetization around kT=2 with size L=5 or less.
[Bonus: Can you explain
your observation in terms of "spontaneous symmetry breaking"?]
Will this problem get better or worse when you increase the system size?
[Hint: Would it be possible for an infinite system to change from a
state with all up spins to one with all down spins?]
 Now change the system size (try L=10, 20, 40).
By changing the system size, can you observe a change in the behavior of
magnetization? If so, can you make an argument why this happens?
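For the magnetization measurements above, a minimal sketch of the quantity being computed (the function name is illustrative, not the one in ising_opt.cpp):

```cpp
#include <cmath>
#include <vector>

// Absolute magnetization per spin: |sum of all spins| / N.  Without the
// absolute value, a finite lattice that flips between the all-up and
// all-down ordered states during a run would average to M near 0 even
// below the transition temperature.
double abs_magnetization(const std::vector<std::vector<int>>& s) {
  long sum = 0;
  for (const std::vector<int>& row : s)
    for (int spin : row) sum += spin;
  double N = double(s.size()) * double(s[0].size());
  return std::fabs(double(sum)) / N;
}
```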
Last modified: 11:08 am, February 21, 2007.
furnstahl.1@osu.edu