# DWA Tutorial 1: Studying the temperature dependence of energy density at fixed chemical potential

## Command prompt

Write a parameter file parm, illustrated as follows.

LATTICE="square lattice"
MODEL="boson Hubbard"

L=20
Nmax=20

t=1.
U=16.
mu=32.

SWEEPS=100000
SKIP=400

{T=0.7}
{T=0.8}
{T=0.9}
{T=1.0}
{T=1.1}
{T=1.2}
{T=1.3}
{T=1.4}
{T=1.5}
{T=1.6}
{T=1.7}
{T=1.8}
{T=1.9}
{T=2.0}
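
In such a parameter file, the assignments outside braces act as global defaults, and each {...} block defines one task that supplements them, so the file above produces 14 tasks, one per temperature. As a rough illustration of this expansion (a toy parser written for this tutorial, not the actual ALPS one), consider:

```python
import re

def expand_parm(text):
    """Toy expansion of an ALPS-style parameter file:
    global KEY=VALUE lines plus one {...} block per task."""
    globals_, tasks = {}, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        m = re.fullmatch(r'\{(.*)\}', line)
        if m:                                  # one task per {...} block
            task = dict(globals_)
            for item in m.group(1).split(';'):
                k, v = item.split('=')
                task[k.strip()] = v.strip()
            tasks.append(task)
        elif '=' in line:                      # global default
            k, v = line.split('=', 1)
            globals_[k.strip()] = v.strip().strip('"')
    return tasks

parm = '''
LATTICE="square lattice"
L=20
t=1.
{T=0.7}
{T=0.8}
'''
tasks = expand_parm(parm)
```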


# DWA Tutorial 1: Single simulation (Fully manual mode)

This tutorial shows the user how to take full control of the simulation. Such a fully manual mode allows the user to understand the entire workflow.

Interested readers are kindly requested to forward further enquiries to the ALPS mailing list.

## Importing modules in Python

The following modules have to be imported prior to the simulation(s).

import pyalps;
import pyalps.dwa;


## Starting from scratch

This section illustrates how to start a DWA simulation from scratch. As an example, we shall simulate the boson Hubbard model on a 20 × 20 square lattice. In the Python interface, the parameters for the simulation can be initialized as follows.

params=[]
params.append(

{ 'LATTICE'         : 'square lattice'   # Refer to <lattice.xml> from ALPS Lattice Library
, 'MODEL'           : 'boson Hubbard'    # Refer to <model.xml>   from ALPS Model Library

, 'L'               : 20                 # Length aspect of lattice
, 'Nmax'            : 20                 # Maximum number of bosons on each site

, 't'               : 1.                 # Hopping
, 'U'               : 16.                # Onsite Interaction
, 'mu'              : 32.                # Chemical potential
, 'T'               : 1.                 # Temperature

, 'SWEEPS'          : 100000             # Total number of sweeps
, 'SKIP'            : 400                # Number of sweeps before measurement (You don't need to measure too often!)
}

)

h5_infiles = pyalps.writeInputH5Files("parm9a",params);


Here, the above Python script

1. writes the parameters to parm9a.task1.in.h5, and
2. returns the list of filenames ['parm9a.task1.in.h5'] to h5_infiles.

Next, the simulation can easily be started in the Python interface

pyalps.runApplication("dwa", h5_infiles);


or in the command prompt

dwa parm9a.task1.in.h5


where the output file defaults to parm9a.task1.out.h5.

A time limit of 27200 seconds (roughly 7.5 hours), for instance, can additionally be specified in the Python interface as

pyalps.runApplication("dwa", h5_infiles, T=27200);


or in the command prompt as

dwa -T 27200 parm9a.task1.in.h5


## Starting from last configuration (discarding all measurements)

Very often, an existing DWA simulation has not (yet) reached thermalization. In such cases, it is useful to start a new DWA simulation from the last saved worldlines configuration. We can easily do this in the Python interface:

pyalps.dwa.extract_worldlines(infile="parm9a.task1.out.h5", outfile="parm9a.task1.in.h5");


In the above script, pyalps.dwa.extract_worldlines extracts the existing worldlines configuration from the HDF5 file parm9a.task1.out.h5 and saves it into parm9a.task1.in.h5.

## Starting from last configuration (keeping all measurements)

The following command starts a new simulation from the last saved worldlines configuration while keeping all measurements, for the purpose of obtaining more measurements for better statistics. In Python interface,

pyalps.runApplication("dwa", "parm9a.task1.out.h5");


or in command prompt

dwa parm9a.task1.out.h5


where the output file again defaults to parm9a.task1.out.h5.

## Checking thermalization

The following command checks whether the measurement observable "Total Particle Number" has reached thermalization in the Monte Carlo simulation.

pyalps.dwa.thermalized("parm9a.task1.out.h5", "Total Particle Number");


To check for thermalization of a set of measurement observables, the following is an easy example.

pyalps.dwa.thermalized("parm9a.task1.out.h5", ["Total Particle Number", "Energy"]);


A simplified boolean indicator for thermalization can be obtained via

pyalps.dwa.thermalized("parm9a.task1.out.h5", ["Total Particle Number", "Energy"], simplified=True);


To see a more complete log, one has to set the argument variable includeLog to True.

pyalps.dwa.thermalized("parm9a.task1.out.h5", ["Total Particle Number", "Energy"], includeLog=True);


Note:

1. Only measurement observables that support detailed binning analysis can be checked for thermalization.
2. The user can easily replace the default thermalization script for his own purpose.
3. Thermalization can be improved with a bigger value of "SWEEPS".
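
The idea behind such a thermalization check can be sketched with a toy stand-in (this is not the pyalps.dwa implementation): compare the mean of an early stretch of the time series with the mean of a late stretch, in units of the late stretch's standard error.

```python
import numpy

def toy_thermalized(series, frac=0.25, nsigma=3.0):
    """Crude thermalization heuristic: the mean of the first `frac`
    of the series should agree with the mean of the last `frac`
    within `nsigma` standard errors of the latter."""
    series = numpy.asarray(series, dtype=float)
    n = max(1, int(frac * len(series)))
    head, tail = series[:n], series[-n:]
    stderr = tail.std(ddof=1) / numpy.sqrt(n)
    return abs(head.mean() - tail.mean()) <= nsigma * stderr

# A series fluctuating about its equilibrium value ...
equilibrated = 400.0 + numpy.sin(numpy.arange(4000))
# ... versus one that is still relaxing towards it
drifting = numpy.linspace(0.0, 400.0, 4000)
```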

## Checking autocorrelations — Are you measuring too frequently?

Checking autocorrelations is necessary for strategic purposes. Although autocorrelations do not bias the Monte Carlo averages, they do drastically affect the speed of convergence. The autocorrelation time $\tau$ of the measurement observable "Total Particle Number", for example, can be easily obtained via:

pyalps.dwa.tau("parm9a.task1.out.h5", "Total Particle Number");


or some set of measurement observables via:

pyalps.dwa.tau("parm9a.task1.out.h5", ["Total Particle Number", "Energy"]);


The knowledge of $\tau$ can help the user modify the value of "SKIP" accordingly.
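
To make the meaning of $\tau$ concrete, here is a self-contained sketch (not the pyalps.dwa estimator) of the integrated autocorrelation time, summing the normalized autocorrelation function until it first turns negative:

```python
import numpy

def integrated_tau(series):
    """Integrated autocorrelation time:
    tau = sum_{t>=1} C(t)/C(0), summed until the estimate of C(t)
    first drops below zero (a simple noise cutoff)."""
    x = numpy.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    c0 = numpy.dot(x, x) / n
    tau = 0.0
    for t in range(1, n // 2):
        ct = numpy.dot(x[:-t], x[t:]) / n
        if ct <= 0.0:
            break
        tau += ct / c0
    return tau

# An AR(1) process x_{k+1} = a*x_k + noise has tau ~ a/(1-a) = 4 for a = 0.8
rng = numpy.random.default_rng(42)
a = 0.8
x = numpy.empty(200000)
x[0] = 0.0
for k in range(1, len(x)):
    x[k] = a * x[k - 1] + rng.normal()
```

If $\tau$, measured in units of recorded measurements, is much smaller than 1, successive measurements are essentially independent and "SKIP" could be lowered; if it is large, "SKIP" should be raised by roughly that factor.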

## Checking convergence

The following command checks whether the measurement observable "Total Particle Number" has reached convergence in the Monte Carlo simulation.

pyalps.dwa.converged("parm9a.task1.out.h5", "Total Particle Number");


To check for convergence of a set of measurement observables, the following is an easy example.

pyalps.dwa.converged("parm9a.task1.out.h5", ["Total Particle Number", "Energy"]);


A simplified boolean indicator for convergence can be obtained via

pyalps.dwa.converged("parm9a.task1.out.h5", ["Total Particle Number", "Energy"], simplified=True);


To see a more complete log, one has to set the argument variable includeLog to True.

pyalps.dwa.converged("parm9a.task1.out.h5", ["Total Particle Number", "Energy"], includeLog=True);


Note:

1. Thermalization should be checked prior to checking convergence.
2. The following are some reasons why Monte Carlo statistics may not have converged:
    1. There are too few measurements. The user has to continue the simulation and obtain more measurements.
    2. There are too many correlated measurements. The user has to check autocorrelations and increase the value of "SKIP".
    3. The simulation has not thermalized. If thermalization had indeed once been "reached" but no longer holds, the user has to increase the value of "SWEEPS" to stabilize thermalization.
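
A rule of thumb behind the first two points: correlated measurements reduce the effective number of independent samples to $N_{\rm eff} = N/(1+2\tau)$, so the true error bar exceeds the naive one by a factor $\sqrt{1+2\tau}$. A minimal sketch:

```python
import math

def effective_samples(n_measurements, tau):
    """Effective number of independent samples for a series of
    n_measurements with integrated autocorrelation time tau."""
    return n_measurements / (1.0 + 2.0 * tau)

def true_errorbar(naive_errorbar, tau):
    """Inflate a naive (uncorrelated) error bar by sqrt(1 + 2*tau)."""
    return naive_errorbar * math.sqrt(1.0 + 2.0 * tau)
```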

# DWA Tutorial 2: Chaining simulations (Laptop mode)

## Designing recursions

Using the same example in tutorial 1, i.e.

import pyalps;
import pyalps.dwa;

params=[]
params.append(

{ 'LATTICE'         : 'square lattice'   # Refer to <lattice.xml> from ALPS Lattice Library
, 'MODEL'           : 'boson Hubbard'    # Refer to <model.xml>   from ALPS Model Library

, 'L'               : 20                 # Length aspect of lattice
, 'Nmax'            : 20                 # Maximum number of bosons on each site

, 't'               : 1.                 # Hopping
, 'U'               : 16.                # Onsite Interaction
, 'mu'              : 32.                # Chemical potential
, 'T'               : 1.                 # Temperature

, 'SWEEPS'          : 100000             # Total number of sweeps
, 'SKIP'            : 400                # Number of sweeps before measurement (You don't need to measure too often!)
}

)

h5_infiles = pyalps.writeInputH5Files("parm9a",params);


### Chaining n recursive simulations during thermalization

The user can easily design a simulation chain of, for instance, 3 recursions during the thermalization process:

pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', h5_infiles)" ,  cmd_lang = 'python' ,
follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(h5_infiles), outfile=h5_infiles)" ,
n = 3,
write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles), 'Thermalizing')" ,
loc = locals()
);


### Chaining simulations recursively until reaching thermalization

The user can also recursively run the simulation until thermalization is reached. (The thermalization condition is up to the user; the following is an example.)

pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', h5_infiles)" ,  cmd_lang = 'python' ,
follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(h5_infiles), outfile=h5_infiles)" ,
break_if = "pyalps.dwa.thermalized(pyalps.input2output(h5_infiles), 'Total Particle Number', simplified=True)" ,
write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles), 'Thermalizing')" ,
loc = locals()
);
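
The control flow behind these chains can be pictured with a pure-Python sketch (a hypothetical stand-in, with callables in place of the string scripts that pyalps.dwa.recursiveRun evaluates): run the command, run the follow-up script, and repeat until break_if is satisfied or n recursions have elapsed.

```python
def recursive_run(run, follow_up=None, break_if=None, n=None, write_status=None):
    """Toy model of a recursive simulation chain (not the pyalps API).
    All arguments except n are callables."""
    count = 0
    while True:
        if write_status:
            write_status()        # archive a status marker
        run()                     # one simulation run
        if follow_up:
            follow_up()           # e.g. extract the last worldlines
        count += 1
        if break_if and break_if():
            break                 # e.g. thermalization reached
        if n is not None and count >= n:
            break                 # fixed number of recursions
    return count

log = []
runs = recursive_run(
    run=lambda: log.append('run'),
    follow_up=lambda: log.append('extract worldlines'),
    break_if=lambda: len(log) >= 6,   # stand-in for a thermalization check
)
```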


### Chaining simulations recursively until convergence

The following is an example of a recursive chain of simulations until convergence.

pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', pyalps.input2output(h5_infiles))" ,  cmd_lang = 'python' ,
break_if = "pyalps.dwa.converged(pyalps.input2output(h5_infiles), 'Total Particle Number', simplified=True)" ,
write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles), 'Converging')" ,
loc = locals()
);


### Chaining more simulations for better statistics after convergence

Very often, the user wants to improve the quality of Monte Carlo statistics beyond convergence. The following is an example of designating 10 more recursions.

pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', pyalps.input2output(h5_infiles))" ,  cmd_lang = 'python' ,
n = 10 ,
write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles), 'Converging')" ,
loc = locals()
);


### Simulation status

By default, only two statuses, namely Thermalizing and Converging, are archived in the output files. Following the above examples, the most recent simulation status can be read from

pyalps.dwa.status('parm9a.task1.out.h5');


and the entire simulation status history from

pyalps.dwa.status('parm9a.task1.out.h5', includeLog=True);


The user is free to define custom statuses as needed. The following shows how to assign the status Zurich to the output file:

pyalps.dwa.write_status('parm9a.task1.out.h5', 'Zurich');


Knowing the simulation status history is often very useful in practice to keep track of

1. unstabilized thermalization;
2. overly long simulations (a strong hint that you should measure more or less frequently);
3. possible errors due to the machine/cluster.
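
The status bookkeeping can be pictured as an append-only log attached to each output file; here is a toy stand-in for the write_status/status pair (the real implementation archives the statuses in the HDF5 output file):

```python
class StatusLog:
    """Toy append-only status archive, mimicking the semantics of
    pyalps.dwa.write_status / pyalps.dwa.status."""
    def __init__(self):
        self._log = []

    def write_status(self, status):
        self._log.append(status)

    def status(self, includeLog=False):
        # most recent status, or the whole history with includeLog=True
        return list(self._log) if includeLog else self._log[-1]

log = StatusLog()
log.write_status('Thermalizing')
log.write_status('Thermalizing')
log.write_status('Converging')
```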

### Summary report of measurements

One can conveniently see the summary report of measurements stored in the file parm9a.task1.out.h5 via:

pyalps.dwa.summaryReport('parm9a.task1.out.h5');


### Multi-chaining automation

Assuming the user knows exactly what he/she is doing, fully automated scripts can be designed. An example is given as follows.

def serial_automated_processing(taskfile):
    while True:
        pyalps.dwa.recursiveRun(
            "pyalps.runApplication('dwa', taskfile)" ,  cmd_lang = 'python' ,
            break_if = "pyalps.dwa.thermalized(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True)" ,
            loc = locals()
        );

        pyalps.dwa.recursiveRun(
            "pyalps.runApplication('dwa', pyalps.input2output(taskfile))" ,  cmd_lang = 'python' ,
            break_if = "pyalps.dwa.converged(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True)" ,
            break_elseif = "not pyalps.dwa.thermalized(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True)" ,
            loc = locals()
        );

        if pyalps.dwa.thermalized(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True):
            break;

serial_automated_processing(h5_infiles);


## Resetting parameters

Sometimes, for reasons such as those listed below, parameters have to be reset. This section illustrates how this can easily be done.

### Example: Measuring too few

A close inspection of autocorrelations via

>>> pyalps.dwa.tau(pyalps.input2output(h5_infiles), 'Total Particle Number')
[0.01291737172549734]


signals that one can measure more frequently, thus speeding up convergence. The following example shows how to reset some of the parameters and write them to a new file parm9b.task1.in.h5.

params = pyalps.getParameters(h5_infiles);

params.update(

{ 'SWEEPS'          : 100000
, 'SKIP'            : 400
}

)

h5_infiles = pyalps.writeInputH5Files("parm9b",params);


The following easily starts the new simulation and runs it to completion.

pyalps.dwa.extract_worldlines(infile="parm9a.task1.in.h5", outfile="parm9b.task1.in.h5");  # You don't want to thermalize again, right?
pyalps.runApplication('dwa', h5_infiles);
pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', pyalps.input2output(h5_infiles))" ,  cmd_lang = 'python' ,
break_if = "pyalps.dwa.converged(pyalps.input2output(h5_infiles), 'Total Particle Number')" ,
write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles), 'Converging')" ,
loc = locals()
);


Indeed, the autocorrelation time (measured in units of recorded measurements) increases almost proportionally as the interval between measurements is reduced.

>>> pyalps.dwa.tau(pyalps.input2output(h5_infiles), 'Total Particle Number')
[0.15614752133389365]


### Example: Unstable automated processes

In the previous examples, the automated processes are stable, as evident from the simulation status history, i.e.

>>> pyalps.dwa.status('parm9a.task1.out.h5',includeLog=True)
['Thermalizing', 'Thermalizing', 'Converging', 'Converging', 'Converging', 'Converging']


However, this is not guaranteed, and the chain may end up in an endless simulation loop. When this happens, increase SWEEPS by an order of magnitude.

### Example: Including more measurement observables

Very often, the user decides at the very last moment to perform alternative measurements. The following example illustrates how to do this easily.

Step 1: Prepare a newly modified parameter file

params = pyalps.getParameters(h5_infiles);
params.update( { 'MEASURE[Local Density]' : 1 } );
h5_infiles = pyalps.writeInputH5Files("parm9c",params);


Step 2: Collect statistics for new observable

pyalps.dwa.extract_worldlines(infile='parm9b.task1.in.h5', outfile='parm9c.task1.in.h5');
pyalps.runApplication('dwa', h5_infiles);
pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', pyalps.input2output(h5_infiles))" ,  cmd_lang = 'python' ,
n = 10 ,
loc = locals()
);


Side remark: Only simple binning analysis is supported for the measurement observable Local Density, so as not to blow up the memory. There is therefore no guarantee that the individual error bars converge.

## Performing simulations over a range of temperatures

Suppose we want to perform simulations across a range of temperatures, as illustrated in the following.

import numpy;
import pyalps;
import pyalps.dwa;

Ts = numpy.arange(0.5, 5.0, 0.1);   # Ts = 0.5, 0.6, 0.7, ..., 4.7, 4.8, 4.9

params=[]
for T in Ts:
    params.append(
        { 'LATTICE'         : 'square lattice'   # Refer to <lattice.xml> from ALPS Lattice Library
        , 'MODEL'           : 'boson Hubbard'    # Refer to <model.xml>   from ALPS Model Library

        , 'L'               : 20                 # Length aspect of lattice
        , 'Nmax'            : 20                 # Maximum number of bosons on each site

        , 't'               : 1.                 # Hopping
        , 'U'               : 20.                # Onsite Interaction
        , 'mu'              : 15.                # Chemical potential
        , 'T'               : T                  # Temperature

        , 'SWEEPS'          : 100000             # Total number of sweeps
        , 'SKIP'            : 400                # Number of sweeps before measurement (You don't need to measure too often!)
        }
    )

h5_infiles = pyalps.writeInputH5Files("parm9d",params);


The following is how one could (undesirably, on a laptop) perform the above-mentioned multi-tasking simulation chains. (The Python function serial_automated_processing is defined in the Multi-chaining automation subsection above.)

for h5_infile in h5_infiles:
    serial_automated_processing(h5_infile);


# DWA tutorial 3: Chaining simulations (Cluster mode)

While tutorial 2 illustrates the concept of simulation chaining, it is only suitable for personal laptops. For heavier or more massive simulations, the simulation chaining (or job chaining) has to be performed on clusters. This section illustrates how this is done.

## Submit a batch job to cluster

The usual way to submit a batch job to a Unix cluster is through a bash script. Using the Brutus cluster of ETH Zurich as an example, the command to request 1 processor for 10 minutes is

bsub -n1 -W10 dwa parm9d.task1.in.h5


However, we want to do everything in the Python interface for convenience. To do this, the equivalent Python command is

import pyalps;
import pyalps.dwa;

pyalps.dwa.recursiveRun(
batch_submit = True ,
batch_cmd_prefix = 'bsub -n1 -W10'
);


This may seem cumbersome at first. However, its convenience shows up very shortly in the coming subsections.

## Recursive chaining

Yeah! The usual recursive chaining can now very easily be carried over to cluster mode (thanks to tremendous effort by the coder).

import pyalps;
import pyalps.dwa;

h5_infiles = pyalps.getInputH5Files(prefix="parm9d")


For thermalization, we have

pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', h5_infiles)" ,  cmd_lang = 'python' ,
follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(h5_infiles), outfile=h5_infiles)" ,
break_if = "pyalps.dwa.thermalized(pyalps.input2output(h5_infiles), 'Total Particle Number', simplified=True)" ,
write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles), 'Thermalizing')" ,
loc = locals() ,
batch_submit = True ,
batch_cmd_prefix = 'bsub -n1 -W10' ,
batch_run_script = h5_infiles + '.thermalize.script'
);


For convergence, we have

pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', pyalps.input2output(h5_infiles))" ,  cmd_lang = 'python' ,
break_if = "pyalps.dwa.converged(pyalps.input2output(h5_infiles), 'Total Particle Number', simplified=True)" ,
write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles), 'Converging')" ,
loc = locals() ,
batch_submit = True ,
batch_cmd_prefix = 'bsub -n1 -W10' ,
batch_run_script = h5_infiles + '.converge.script'
);


For better statistics, we can similarly do, say, 10 more recursions as follows.

pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', pyalps.input2output(h5_infiles))" ,  cmd_lang = 'python' ,
n = 10 ,
loc = locals() ,
batch_submit = True ,
batch_cmd_prefix = 'bsub -n1 -W10' ,
batch_run_script = h5_infiles + '.statistics.script'
);


## Multi recursive chaining

The user can also flexibly build up multiple layers of recursive chaining for strategic purposes. The following code shows how easily this can be done.

import pyalps;
import pyalps.dwa;

h5_infiles = pyalps.getInputH5Files(prefix="parm9d")

def parallel_automated_processing(taskfile):
    pyalps.dwa.recursiveRun(
        "pyalps.runApplication('dwa', taskfile)" ,  cmd_lang = 'python' ,
        break_if = "pyalps.dwa.thermalized(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True)" ,
        loc = locals() ,
        batch_submit = True ,
        batch_cmd_prefix = 'bsub -n1 -W10' ,
        batch_run_script = taskfile + '.thermalize.script' ,
        batch_next_run_script = taskfile + '.converge.script' ,
        batch_noRun = True
    );

    pyalps.dwa.recursiveRun(
        "pyalps.runApplication('dwa', pyalps.input2output(taskfile))" ,  cmd_lang = 'python' ,
        break_if = "pyalps.dwa.converged(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True)" ,
        loc = locals() ,
        batch_submit = True ,
        batch_cmd_prefix = 'bsub -n1 -W10' ,
        batch_run_script = taskfile + '.converge.script' ,
        batch_noRun = True
    );

    pyalps.dwa.startRunScript(
        batch_cmd_prefix = 'bsub -n1 -W10' ,
        loc = locals()
    );

for h5_infile in h5_infiles:
    parallel_automated_processing(h5_infile);


## Handling measurements (Example: Temperature dependence of density at fixed chemical potential)

This subsection illustrates how one can handle measurements with ease.

import pyalps;


### Entire information

Sometimes we have many queries regarding the raw measurement data and want to inspect every single detail more closely. This can easily be done. Say we want to know every detail regarding the measurement observable Density in the output file parm9d.task1.out.h5:

pyalps.getMeasurements('parm9d.task1.out.h5', observable='Density');


Not to forget, the parameters can be obtained by

pyalps.getParameters('parm9d.task1.out.h5');


### Convenient tools for handling measurements

The list of all result files with prefix parm9d can be obtained by:

resultFiles = pyalps.getResultFiles(prefix='parm9d');


To load all data regarding the measurement observable Density:

data = pyalps.loadMeasurements(resultFiles, 'Density');


or regarding an additional measurement observable Energy Density

data = pyalps.loadMeasurements(resultFiles, ['Density', 'Energy Density']);


We can next easily collect and group measurements accordingly, say:

T_Density = pyalps.collectXY(data, x='T', y='Density');
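
Conceptually, collectXY walks through the loaded datasets, takes x from each dataset's parameters and y from its measured mean, and groups the points into one curve per combination of the foreach parameters. A toy version over plain dictionaries (a hypothetical data layout, not the actual pyalps DataSet class):

```python
def collect_xy(datasets, x, y, foreach=()):
    """Group measurement records into curves: one sorted (x, y) list
    per distinct combination of the `foreach` parameters."""
    curves = {}
    for d in datasets:
        key = tuple(d['params'][p] for p in foreach)
        curves.setdefault(key, []).append((d['params'][x], d['means'][y]))
    for pts in curves.values():
        pts.sort()
    return curves

# hypothetical records, one per finished task
datasets = [
    {'params': {'T': 1.0, 'mu': 32.0}, 'means': {'Density': 0.98}},
    {'params': {'T': 0.7, 'mu': 32.0}, 'means': {'Density': 1.01}},
]
curves = collect_xy(datasets, x='T', y='Density')
```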


### Plotting the measurements

One can prepare the plot, say in JPG format:

import matplotlib;
matplotlib.use('Agg');
import matplotlib.pyplot;
import pyalps.plot;

fig = matplotlib.pyplot.figure();
pyalps.plot.plot(T_Density);
fig.savefig('parm9d.T_Density.jpg');



### Email the plot to yourself

You can then easily send the beautiful plot to your (my) email:

pyalps.sendmail( 'pingnang@phys.ethz.ch'    # email address of recipients
, attachment='parm9d.T_Density.jpg'
, subject='Picture: parm9d.T_Density.jpg'
);


## Submitting jobs to PBS cluster

When a shell script is compulsory for job submission, say to a PBS cluster, it can be really irritating. But ALPS makes auto-recursion easily possible there too.

import pyalps;
import pyalps.dwa;


One can, for instance, submit a job (automatically, from within Python) that runs dwa parm9d.task1.in.h5 as follows.

pyalps.dwa.recursiveRun(
batch_submit = True ,
batch_cmd_prefix = 'qsub -l nodes=1:ppn=1 -l walltime=0:10:00'
);


The following are the usual steps.

For thermalization, we have

pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', h5_infiles)" ,  cmd_lang = 'python' ,
follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(h5_infiles), outfile=h5_infiles)" ,
break_if = "pyalps.dwa.thermalized(pyalps.input2output(h5_infiles), 'Total Particle Number', simplified=True)" ,
write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles), 'Thermalizing')" ,
loc = locals() ,
batch_submit = True ,
batch_cmd_prefix = 'qsub -l nodes=1:ppn=1 -l walltime=0:10:00' ,
batch_run_script = h5_infiles + '.thermalize.script'
);


For convergence, we have

pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', pyalps.input2output(h5_infiles))" ,  cmd_lang = 'python' ,
break_if = "pyalps.dwa.converged(pyalps.input2output(h5_infiles), 'Total Particle Number', simplified=True)" ,
write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles), 'Converging')" ,
loc = locals() ,
batch_submit = True ,
batch_cmd_prefix =  'qsub -l nodes=1:ppn=1 -l walltime=0:10:00' ,
batch_run_script = h5_infiles + '.converge.script'
);


For better statistics, we can similarly do, say, 10 more recursions as follows.

pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', pyalps.input2output(h5_infiles))" ,  cmd_lang = 'python' ,
n = 10 ,
loc = locals() ,
batch_submit = True ,
batch_cmd_prefix =  'qsub -l nodes=1:ppn=1 -l walltime=0:10:00' ,
batch_run_script = h5_infiles + '.statistics.script'
);


Here, I want to specially thank Mr. Francisco Cordobes Aguilar (University of London) for testing this functionality.

## Example: Temperature dependence of energy density at fixed density

This is very simple.

### Chemical potential meshing

In this example, let us fix the density at 1 for the following parameters:

import numpy;
import pyalps;
import pyalps.dwa;

Ts  = numpy.arange(0.5, 5.0, 0.1);   # Ts = 0.5, 0.6, 0.7, ..., 4.7, 4.8, 4.9
mus = numpy.arange(0.0, 6.0, 1.0);   # mus = 0.0, 1.0, 2.0, 3.0, 4.0, 5.0

params=[]
for T in Ts:
    for mu in mus:
        params.append(
            { 'LATTICE'         : 'square lattice'        # Refer to <lattice.xml> from ALPS Lattice Library
            , 'MODEL'           : 'boson Hubbard'         # Refer to <model.xml>   from ALPS Model Library

            , 'L'               : 20                 # Length aspect of lattice
            , 'Nmax'            : 20                 # Maximum number of bosons on each site

            , 't'               : 1.                 # Hopping
            , 'U'               : 10.                # Onsite Interaction
            , 'mu'              : mu                 # Chemical potential
            , 'T'               : T                  # Temperature

            , 'SWEEPS'          : 100000             # Total number of sweeps
            , 'SKIP'            : 400                # Number of sweeps before measurement (You don't need to measure too often!)
            }
        )

h5_infiles = pyalps.writeInputH5Files("parm9e",params);


The following will give us some insight into the physics:

for h5_infile in h5_infiles:
    parallel_automated_processing(h5_infile);


### Observe from the plot

Next, we want to prepare the following plot:

resultFiles = pyalps.getResultFiles(prefix='parm9e');
data = pyalps.loadMeasurements(resultFiles, 'Density');
density = pyalps.collectXY(data, x='T', y='Density', foreach=['mu']);

import matplotlib;
matplotlib.use('Agg');
import matplotlib.pyplot;
import pyalps.plot;

fig = matplotlib.pyplot.figure();
pyalps.plot.plot(density);
fig.savefig('parm9e.density.jpg');


Send the beautiful plot to your (my) email:

pyalps.sendmail( 'pingnang@phys.ethz.ch'    # email address of recipients
, attachment='parm9e.density.jpg'
, subject='Picture: parm9e.density.jpg'
);


This should look like the following: 2D boson Hubbard model of size 20 × 20 at U/t=10: Density vs temperature at fixed chemical potential.

Note that the target density of 1 is bracketed from below as well as from above, so we are ready to refine the estimates in the next subsection.

### Fixing the density

First, we obtain a set of new parameters estimated to yield the fixed density of 1:

resultFiles = pyalps.getResultFiles(prefix='parm9e');
data = pyalps.loadMeasurements(resultFiles, 'Density');
params = pyalps.paramsAtFixedY(data, x='mu', y='Density', fixedY=1., foreach=['T']);
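
This refinement works because, at each temperature, the density is monotonic in mu over the meshed range, so the mu yielding density 1 can be estimated by interpolating between the two bracketing points. A minimal sketch of the idea (a hypothetical helper, not the actual pyalps.paramsAtFixedY implementation):

```python
def mu_at_fixed_density(mu_density_pairs, target=1.0):
    """Linearly interpolate a list of (mu, density) points to estimate
    the mu at which the density crosses `target`."""
    pts = sorted(mu_density_pairs)
    for (mu0, d0), (mu1, d1) in zip(pts, pts[1:]):
        if min(d0, d1) <= target <= max(d0, d1):
            if d1 == d0:
                return mu0
            return mu0 + (target - d0) * (mu1 - mu0) / (d1 - d0)
    raise ValueError('target density is not bracketed by the data')

# e.g. hypothetical densities measured at mu = 2, 3, 4 for one temperature
mu_star = mu_at_fixed_density([(2.0, 0.8), (3.0, 0.95), (4.0, 1.10)], target=1.0)
```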


Then write into the new h5 parameter files:

h5_infiles = pyalps.writeInputH5Files("parm9e.1",params);


Run them:

for h5_infile in h5_infiles:
    parallel_automated_processing(h5_infile);


Repeat the plotting procedure to see if you are satisfied with the new estimates. If not, repeat this subsection to produce parm9e.2, parm9e.3, and so on, until you are satisfied.

### Energy density at fixed density

Suppose you are satisfied with parm9e.2; then your task is completed by the following procedure:

resultFiles = pyalps.getResultFiles(prefix='parm9e.2');
data = pyalps.loadMeasurements(resultFiles, 'Energy Density');
energy_density = pyalps.collectXY(data, x='T', y='Energy Density');

import matplotlib;
matplotlib.use('Agg');
import matplotlib.pyplot;
import pyalps.plot;

fig = matplotlib.pyplot.figure();
pyalps.plot.plot(energy_density);
fig.savefig('parm9e.energy_density.jpg');


Send the beautiful plot to your (my) email:

pyalps.sendmail( 'pingnang@phys.ethz.ch'    # email address of recipients
, attachment='parm9e.energy_density.jpg'
, subject='Picture: parm9e.energy_density.jpg'
);


This should look like the following: 2D boson Hubbard model of size 20 × 20 at U/t=10: Energy density vs temperature at fixed density of 1.

# DWA tutorial 4: Extracting worldlines configuration

In reality, it is impossible to support every single measurement observable that users may desire. However, when it comes to diagonal measurements, for instance the density-density correlations, there is a simple trick to obtain them.

This section illustrates how this can be easily done.

## Worldlines configuration

import numpy;
import pyalps;
import pyalps.dwa;
import pyalps.ngs;


Using the same example as parm9a, i.e.:

params=[]
params.append(

{ 'LATTICE'         : 'square lattice'   # Refer to <lattice.xml> from ALPS Lattice Library
, 'MODEL'           : 'boson Hubbard'    # Refer to <model.xml>   from ALPS Model Library

, 'L'               : 20                 # Length aspect of lattice
, 'Nmax'            : 20                 # Maximum number of bosons on each site

, 't'               : 1.                 # Hopping
, 'U'               : 16.                # Onsite Interaction
, 'mu'              : 32.                # Chemical potential
, 'T'               : 1.                 # Temperature

, 'SWEEPS'          : 100000             # Total number of sweeps
, 'SKIP'            : 400                # Number of sweeps before measurement (You don't need to measure too often!)
}

)

h5_infiles = pyalps.writeInputH5Files("parm9a",params);


h5_infiles = pyalps.getInputH5Files(prefix='parm9a');


Run until thermalization: (The user is free to modify the following for cluster mode.)

pyalps.dwa.recursiveRun(
"pyalps.runApplication('dwa', h5_infiles)" ,  cmd_lang = 'python' ,
follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(h5_infiles), outfile=h5_infiles)" ,
break_if = "pyalps.dwa.thermalized(pyalps.input2output(h5_infiles), 'Total Particle Number', simplified=True)" ,
write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles), 'Thermalizing')" ,
loc = locals()
);


The entire worldlines configuration object can be obtained via:

wl = pyalps.dwa.extract_worldlines(pyalps.input2output(h5_infiles));


A single-shot measurement of the occupation state(s) is obtained via:

states = numpy.array(wl.states()).reshape([20,20])


A single-shot measurement of the density-density correlation is then naturally:

rho = numpy.zeros((20,20));
for i in range(20):
    for j in range(20):
        rho[i][j] = (states * pyalps.dwa.cyclic_shift(states, by=(i,j))).mean()
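
The same single-shot correlation can be written without the pyalps helper; here numpy.roll stands in for pyalps.dwa.cyclic_shift (assuming cyclic_shift shifts the array periodically by (i, j)), and the loops are kept for clarity:

```python
import numpy

def density_density(states):
    """Single-shot density-density correlation
    rho[i, j] = < n(r) n(r + (i, j)) > with periodic boundaries.
    numpy.roll plays the role of the periodic shift; its shift
    direction may differ from cyclic_shift by a sign convention,
    which does not affect the correlation by translation symmetry."""
    L = states.shape[0]
    rho = numpy.zeros((L, L))
    for i in range(L):
        for j in range(L):
            shifted = numpy.roll(numpy.roll(states, i, axis=0), j, axis=1)
            rho[i, j] = (states * shifted).mean()
    return rho

states = numpy.arange(16, dtype=float).reshape(4, 4)   # toy occupation snapshot
rho = density_density(states)
```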


## Visualizing the worldlines configuration

The user can easily visualize the worldlines configuration for himself/herself.

Suppose, for instance, we want to look at the worldlines configuration along the x-direction at y=5 from the task parm9a.task1.out.h5. This can easily be done as follows:

wl = pyalps.dwa.extract_worldlines('parm9a.task1.out.h5')
pyalps.dwa.show_worldlines(wl, reshape = [20,20], at = '[:,5]');


This will look something like this:

It is also possible to easily sketch your own (extended) worldlines configuration using Python, but that will not be discussed here. In practice, you can even produce your own movie that illustrates how the worm propagates from one worldlines configuration to another.

Interested people are kindly requested to forward further enquiries to the ALPS mailing list.

## Example: Density-density correlation

After thermalization, we shall now turn off all intrinsic measurements:

params = pyalps.getParameters(h5_infiles);
params.update({ 'MEASURE': 0 , 'SWEEPS': 400 });
h5_infiles = pyalps.writeInputH5Files("parm9a.1",params);


and, not forgetting to copy over the thermalized worldlines configuration, run once:

pyalps.dwa.extract_worldlines('parm9a.task1.in.h5','parm9a.1.task1.in.h5');
pyalps.runApplication('dwa',h5_infiles);


Next, we write our own script to collect the statistics for density-density correlations:

def my_own_private_python_function_to_collect_statistics(taskfile):
    wl = pyalps.dwa.extract_worldlines(taskfile);
    states = numpy.array(wl.states()).reshape([20,20]);

    rho = numpy.zeros((20,20));
    for i in range(20):
        for j in range(20):
            rho[i][j] = (states * pyalps.dwa.cyclic_shift(states, by=(i,j))).mean();
    return rho;



Finally, we can very easily gear up our own implementation of single tasking like this:

def my_own_private_simulation(taskfile, loc):
    locals().update(loc);
    pyalps.dwa.recursiveRun(
        "pyalps.runApplication('dwa', pyalps.input2output(taskfile))" ,  cmd_lang = 'python' ,
        n = 64 ,
        loc = locals()
    );


A simple check can be done as the following:

my_own_private_simulation(h5_infiles, loc=locals());


Next, we want to automate matters again:

def my_own_private_serial_automated_processing(taskfile, loc):
    locals().update(loc);
    pyalps.dwa.recursiveRun(
        break_if = "my/own/private/python/function/to/check/for/convergence" ,
        loc = locals()
    );

my_own_private_serial_automated_processing(h5_infiles, loc=locals());


Very easy.

Step 1: Ensure all tasks have been thermalized. Refer to the previous sections to see how this is done.

Step 2: Collect statistics until convergence. To do this

in laptop mode:

for h5_infile in h5_infiles:
    my_own_private_serial_automated_processing(h5_infile, loc=locals());


or in cluster mode:

def my_own_private_parallel_automated_processing(taskfile):
    pyalps.dwa.recursiveRun(
        break_if = "my/own/private/python/function/to/check/for/convergence" ,
        loc = locals() ,
        batch_submit = True ,
        batch_cmd_prefix = 'bsub -n1 -W60'
    );

for h5_infile in h5_infiles:
    my_own_private_parallel_automated_processing(h5_infile);


# DWA tutorial 5: Trapped bosons in actual-size 3D optical lattice

The DWA code is capable of handling very large, experiment-scale simulations. As a first example, we look at a specific parameter input that resembles actual experiments.

Step 1: Importing the modules

import pyalps;
import pyalps.dwa;


Step 2: Preparing the parameter file

tof_phase = pyalps.dwa.tofPhase(time_of_flight=15.5, wavelength=[843,765,765], mass=86.99)

params=[]
params.append(

{ 'LATTICE'         : 'inhomogeneous simple cubic lattice'   # Refer to <lattice.xml> from ALPS Lattice Library
, 'MODEL'           : 'boson Hubbard'                        # Refer to <model.xml>   from ALPS Model Library

, 'L'               : 100                # Length aspect of lattice
, 'Nmax'            : 20                 # Maximum number of bosons on each site

, 't'               : 1.                 # Hopping
, 'U'               : 8.11               # Onsite Interaction
, 'T'               : 1.                 # Temperature
, 'mu_homogeneous'  : 4.05               # Chemical potential (homogeneous)
, 'mu'              : 'mu_homogeneous - (0.0073752*(x-(L-1)/2.)*(x-(L-1)/2.) + 0.0036849*(y-(L-1)/2.)*(y-(L-1)/2.) + 0.0039068155*(z-(L-1)/2.)*(z-(L-1)/2.))'

, 'tof_phase'       :  str(tof_phase)

, 'SWEEPS'          : 100000             # Total number of sweeps
, 'SKIP'            : 100                # Number of sweeps before measurement (You don't need to measure too often!)
}

)

h5_infiles = pyalps.writeInputH5Files("parm9f",params);


or, if the input files already exist, simply:

h5_infiles = pyalps.getInputH5Files(prefix='parm9f');


Have a preliminary taste:

pyalps.runApplication('dwa', h5_infiles);


Detailed step by step instruction for running this example is illustrated here.
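The site-dependent chemical potential defined by the 'mu' expression above can be inspected directly. A minimal sketch (coefficients copied from the parameter file; not part of the DWA code) evaluating the local chemical potential over the 100^3 lattice:

```python
import numpy as np

L   = 100
mu0 = 4.05                                            # homogeneous chemical potential
K   = np.array([0.0073752, 0.0036849, 0.0039068155])  # trap curvatures from the 'mu' expression

x  = np.arange(L) - (L - 1) / 2.                      # coordinates relative to the trap center
mu = ( mu0
     - K[0] * x[:, None, None]**2
     - K[1] * x[None, :, None]**2
     - K[2] * x[None, None, :]**2 )                   # local chemical potential on the 100^3 lattice
```

The local chemical potential is maximal near the trap center and turns strongly negative towards the lattice edges, so the density vanishes well before the boundary.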

## Density profile

The derivation is found here.

The cross-section density profile at z=0:

and the column-integrated density profile:

## Time-of-flight image

The derivation is found here.

The time-of-flight image:

# Documentation of DWA code

## Introduction to the physics

### Optical lattice

To begin, we consider the simplest case: a single particle of mass $m$ which experiences a periodic potential $V(\vec{r})$, where $V(\vec{r}) = \sum_{x_\alpha = x,y,z} V_0^{x_\alpha} \sin^2 (\pi x_\alpha)$

in the units of recoil energy $E_r^\alpha = \frac{\hbar^2}{2m} \left( \frac{2\pi}{\lambda_\alpha} \right)^2$ and lattice spacing $\frac{\lambda_\alpha}{2}$.
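As a quick numerical check of these units, the recoil energy can be evaluated with standard physical constants; the wavelength and mass below anticipate the band-structure example later in this section, and the result matches the `Er2nK` line printed there:

```python
import math

hbar = 1.054571817e-34    # reduced Planck constant [J s]
kB   = 1.380649e-23       # Boltzmann constant [J/K]
amu  = 1.66053906660e-27  # atomic mass unit [kg]

wavelength = 843e-9       # lattice laser wavelength [m]
mass       = 86.99 * amu  # boson mass [kg]

k     = 2 * math.pi / wavelength      # lattice laser wavevector
Er    = hbar**2 * k**2 / (2 * mass)   # recoil energy [J]
Er_nK = Er / kB * 1e9                 # recoil energy in nanokelvin, about 154.89
```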

The quantum mechanical behaviour of the single particle follows $\left[\frac{1}{\pi^2} \left( -i \nabla + 2\pi \vec{k} \right)^2 + \sum_{x_\alpha = x,y,z} V_0^{x_\alpha} \sin^2 (\pi x_\alpha) \right] u_k (\vec{r}) = \epsilon_k u_k(\vec{r})$

which is clearly separable; the x-component, for example, reads: $\left[\frac{1}{\pi^2} \left( -i \partial_x + 2\pi k_x \right)^2 + V_0^{x} \sin^2 (\pi x) \right] u_{k_x} (x) = \epsilon_{k_x} u_{k_x}(x) \,\,\,\,\, ,\,\,\,\,\, k_x = 0, \frac{1}{L_x} ,\cdots \frac{L_x-1}{L_x}$

In the plane wave basis, $u_{k_x} (x) = \frac{1}{\sqrt{L_x}} \sum_{m \in \mathbf{Z}} c_m^{(k_x)} e^{i2m\pi x}$

we arrive at a tridiagonal diagonalization problem: $\left[ 4(m + k_x)^2 + \frac{V_0^x}{2} \right] c_m^{(k_x)} - \frac{V_0^x}{4} c_{m-1}^{(k_x)} - \frac{V_0^x}{4} c_{m+1}^{(k_x)} = \epsilon_{k_x} c_m^{(k_x)}$

The Wannier function is defined as: $w(x) = \frac{1}{\sqrt{L_x}} \sum_{k_x} u_{k_x} (x) e^{i 2\pi k_x x} = \frac{1}{L_x} \sum_{k_x} \sum_{m \in \mathbf{Z}} c_m^{(k_x)} e^{i 2\pi (m+k_x) x}$

and from there, one can calculate the onsite interaction: $U = g \int | w(x) |^4 dx = \frac{4 \pi a_s \hbar^2}{m} \int | w(x) |^4 dx$

After a little bit of algebra, we arrive at the hopping strength: $t = -\frac{1}{L_x} \sum_{k_x} \epsilon_{k_x} e^{-i2\pi k_x}$
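The equations above can be sketched numerically. The following minimal Python (an illustration, not the DWA implementation) builds the truncated tridiagonal plane-wave matrix, extracts the lowest band $\epsilon_{k_x}$, and evaluates the hopping formula; for $V_0 = 8\,E_r$ it should reproduce $t/E_r \approx 0.0308$, i.e. the $t = 4.77$ nK quoted in the band-structure output below ($E_r = 154.89$ nK):

```python
import numpy as np

V0 = 8.0    # lattice depth in recoil energies E_r
L  = 50     # number of quasimomenta k_x = 0, 1/L, ..., (L-1)/L
M  = 10     # plane-wave cutoff: m = -M, ..., M

ks  = np.arange(L) / float(L)
eps = np.empty(L)
for idx, kx in enumerate(ks):
    m = np.arange(-M, M + 1)
    H = np.diag(4.0 * (m + kx)**2 + V0 / 2.0)    # diagonal: 4(m + k_x)^2 + V0/2
    H -= np.diag(np.full(2 * M, V0 / 4.0), 1)    # upper off-diagonal: -V0/4
    H -= np.diag(np.full(2 * M, V0 / 4.0), -1)   # lower off-diagonal: -V0/4
    eps[idx] = np.linalg.eigvalsh(H)[0]          # lowest-band energy eps_{k_x}

# hopping from the Fourier projection of the lowest band (in units of E_r)
t = -(np.sum(eps * np.exp(-2j * np.pi * ks)) / L).real
```

The constant $V_0/2$ on the diagonal drops out of the hopping, since $\sum_{k_x} e^{-i2\pi k_x} = 0$.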

Finally, the Fourier transform of the Wannier function is: $\tilde{w}(q_x) = \frac{1}{\sqrt{L_x}} \int w(x) e^{-i2\pi q_x x} dx = \frac{1}{\sqrt{L_x}} \sum_{k_x} \sum_{m \in \mathbf{Z}} c_m^{(k_x)} \delta_{q_x, k_x+m}$

These can easily be evaluated as follows:

import numpy;
import pyalps.dwa;

V0   = numpy.array([8. , 8. , 8.]);      # in recoil energies
wlen = numpy.array([843., 843., 843.]);  # in nanometer
a    = 114.8;                            # s-wave scattering length in bohr radius
m    = 86.99;                            # mass in atomic mass unit
L    = 200;                              # lattice size (along 1 direction)

band = pyalps.dwa.bandstructure(V0, wlen, a, m, L);

>>> band

Optical lattice:
================
V0    [Er] = 8	8	8
lamda [nm] = 843	843	843
Er2nK      = 154.89	154.89	154.89
L          = 200
g          = 5.68473

Band structure:
===============
t [nK] : 4.77051	4.77051	4.77051
U [nK] : 38.7018
U/t    : 8.11272	8.11272	8.11272

wk2[0 ,0 ,0 ] : 5.81884e-08
wk2[pi,pi,pi] : 1.39558e-08


### Model Hamiltonian

The DWA code simulates the single band boson Hubbard model $\hat{H} = -t \sum_{\langle i,j \rangle} \hat{b}_i^+ \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i (\hat{n}_i - 1) - \sum_i ( \mu - V_T ( \vec{r}_i) ) \hat{n}_i$

with hopping strength $t$, onsite interaction strength $U$, and chemical potential $\mu$ at finite temperature $T$, via quantum Monte Carlo implemented in the directed worm algorithm. Here, $\hat{b}_i$ ( $\hat{b}_i^+$ ) is the annihilation (creation) operator, and $\hat{n}_i$ is the number operator at site $i$. Bosons in an optical lattice are confined, say in a 3D parabolic trapping potential $V_T (\vec{r}_i) = K_x x_i^2 + K_y y_i^2 + K_z z_i^2$, which arises from the Gaussian beam waists as well as other sources of trapping.

At finite temperature $T$, the physics is essentially captured by the partition function $Z = \mathrm{Tr} \, \exp \left(-\beta \hat{H} \right)$

and physical quantities such as the local density $\langle n_i \rangle = \frac{1}{Z} \, \mathrm{Tr} \, \hat{n}_i \exp \left(-\beta \hat{H} \right) = \frac{1}{Z} \sum_{\mathcal{C}} n_i (\mathcal{C}) Z(\mathcal{C})$

for some configuration $\mathcal{C}$ in the complete configuration space, with inverse temperature $\beta = 1/T$. Here, the units will be cleverly normalized later on.

### QMC Directed Worm Algorithm

The Quantum Monte Carlo simulation is in fact a Markov chain random walk in the (worldlines) configuration space, importance sampled by the configuration weight $Z(\mathcal{C})$ which is just a positive number assigned to some particular configuration $\mathcal{C}$ for instance shown here. How $Z(\mathcal{C})$ is being assigned depends on the model Hamiltonian as well as the ergodic algorithm that satisfies detailed balance.

For the directed worm algorithm, the configuration is updated by a worm traversing to and from the extended configuration space, which ensures ergodicity. Here, $n_i(\mathcal{C})$ is the number of particles (the state) at site $i$ at time 0.

Each configuration update is known as a Monte Carlo sweep.

The complete step-by-step description of the directed worm algorithm can be found here, and the code implementation here.

The following table summarizes the basic options for the DWA simulation.

| Option | Default | Remark |
|--------|---------|--------|
| SWEEPS | 1000000 | total number of Monte Carlo configuration updates (sweeps) |
| t | 1. | hopping strength $t$ |
| U | 0. | onsite interaction strength $U$ |
| mu | 0. | chemical potential $\mu$ |
| K | 0. | parabolic trapping strength $K$ |
| T | (none) | temperature $T$ |

### Momentum distribution and time-of-flight images

The momentum distribution $\langle n(\vec{k})\rangle$ of bosons in an optical lattice at equilibrium can be expressed as: $\langle \hat{\psi}^+ (\vec{k}) \, \hat{\psi} (\vec{k}) \rangle = \frac{1}{N} \int d\vec{r} \, d\vec{r}' \langle \hat{\psi}^+ (\vec{r}) \, \hat{\psi} (\vec{r}') \rangle \, e^{i \vec{k} \cdot (\vec{r} - \vec{r}')} = \frac{1}{N} \sum_{i,j} \int d\vec{r} \, d\vec{r}' \, w(\vec{r} - \vec{r}_i) e^{i\vec{k}\cdot(\vec{r} - \vec{r}_i)} \, w(\vec{r}' - \vec{r}_j) e^{-i\vec{k}\cdot(\vec{r}' - \vec{r}_j)} \langle \hat{b}^+_i \hat{b}_j \rangle \, e^{i \vec{k} \cdot (\vec{r}_i - \vec{r}_j)}$

Expressing the fourier transform of the wannier function $w(\vec{r})$ as $\tilde{w}(\vec{k}) = \frac{1}{\sqrt{N}} \int d\vec{r} \, w(\vec{r}) e^{-i\vec{k}\cdot\vec{r}} \,\, ,$

and the interference term: $S(\vec{k}) = \sum_{i,j} \langle \hat{b}^+_i \hat{b}_j \rangle \, e^{i \vec{k} \cdot (\vec{r}_i - \vec{r}_j)} \,\, ,$

the momentum distribution reduces to the following form: $\langle n(\vec{k})\rangle = \left| \tilde{w}(\vec{k})\right|^2 S(\vec{k}) \,\, .$
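As a toy illustration of the interference term (hypothetical numbers, not DWA output): for a fully condensed 1D chain with uniform correlations $\langle \hat{b}^+_i \hat{b}_j \rangle = 1$ for all $i,j$, $S(\vec{k})$ develops a sharp peak of height $N^2$ at $\vec{k}=0$:

```python
import numpy as np

N = 16
r = np.arange(N)        # 1D lattice site positions
G = np.ones((N, N))     # toy correlations <b^+_i b_j> = 1 (fully condensed chain)

def S(k):
    # Interference term S(k) = sum_{i,j} <b^+_i b_j> e^{i k (r_i - r_j)}
    phase = np.exp(1j * k * (r[:, None] - r[None, :]))
    return (G * phase).sum().real

# S(0) = N^2 = 256, while at k = 2*pi/N the phases cancel and S(k) vanishes.
```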

To measure it in experiments, the optical lattice is momentarily switched off and the bosons are assumed to expand freely, carrying the momentum they previously acquired in the lattice. Measurements are performed in our classical world, so a semiclassical treatment is sufficient, i.e. $\hbar \vec{k} = m \left( \frac{\vec{r} }{t_f} \right)$

where $t_f$ is the time of flight taken by the bosons to move from the origin (experiment) to the detector probe at position $\vec{r}$. Here, we assume the simplest picture, i.e. (1) no interactions, (2) no collisions, and (3) uniform motion.

Of course, after the bosons have travelled for a long time, the time-of-flight image will capture a density of $\langle n_\mathrm{tof} (\vec{r} ) \rangle = \left( \frac{m}{\hbar t_f} \right)^3 \left| \tilde{w}\left( \frac{m}{\hbar t_f} \vec{r} \right) \right|^2 S(\vec{k})$ with $\vec{k} = \frac{m}{\hbar t_f} \vec{r}$,

which indirectly probes the original momentum distribution of the lattice bosons. This form is only correct in the far-field limit or in the limit of infinite time-of-flight (tof).

To improve upon the accuracy, we shall correct it with semiclassical dynamics during the time of flight. For the bosons that originate from lattice site at $\vec{r}_j$ to the detector probe at $\vec{r}$, their time-dependent wavefunction must be: $\Psi(\vec{r},t) = \left( \frac{m}{\hbar t_f} \right)^{3/2} \tilde{w}\left( \frac{m}{\hbar t_f} (\vec{r} - \vec{r}_j) \right) \, \exp \left( \frac{i \epsilon_k t_f}{\hbar} \right)$

where the kinetic energy of the free bosons must be: $\epsilon_k = \frac{\hbar^2 k^2}{2m} = \frac{\hbar^2}{2m} \left( \frac{m}{\hbar t_f} \right)^2 (\vec{r} - \vec{r}_j )^2$

Therefore, the corrected time-of-flight image must be: $\langle n_\mathrm{tof} (\vec{r} ) \rangle \approx \left( \frac{m}{\hbar t_f} \right)^3 \left| \tilde{w}\left( \frac{m}{\hbar t_f} \vec{r} \right) \right|^2 \sum_{i,j} \, \langle \hat{b}^+_i \hat{b}_j \rangle \exp \left( - \frac{i}{\hbar} \frac{\hbar^2}{2m} \left( \frac{m}{\hbar t_f} \right)^2 (\vec{r} - \vec{r}_i )^2 t_f \right) \, \exp \left( \frac{i}{\hbar} \frac{\hbar^2}{2m} \left( \frac{m}{\hbar t_f} \right)^2 (\vec{r} - \vec{r}_j )^2 t_f \right)$

where the dependence of the Wannier envelope function on the initial site is neglected. Finally, we arrive at: $\langle n_\mathrm{tof} (\vec{k} ) \rangle \approx \left| \tilde{w} ( \vec{k} ) \right|^2 \sum_{i,j} \, \langle \hat{b}^+_i \hat{b}_j \rangle \, e^{i \vec{k} \cdot (\vec{r}_i - \vec{r}_j) - i \gamma_f (\vec{r}_i^2 - \vec{r}_j^2) } \,\, ,$

with time-of-flight phase $\gamma_f = \frac{m}{2\hbar t_f}$.
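The phase formula can be checked directly against the `pyalps.dwa.tofPhase` values below. A sketch using standard physical constants, with lengths measured in units of the lattice spacing $a = \lambda/2$ (so that $\gamma$ is dimensionless):

```python
import math

hbar = 1.054571817e-34    # reduced Planck constant [J s]
amu  = 1.66053906660e-27  # atomic mass unit [kg]

mass = 86.99 * amu        # boson mass [kg]
tf   = 15.5e-3            # time of flight [s]
a    = 765e-9 / 2         # lattice spacing a = wavelength/2 [m]

# gamma_f = m / (2 hbar t_f), with lengths in units of the lattice spacing a
gamma = mass * a**2 / (2 * hbar * tf)   # about 0.0064646
```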

For example, the time-of-flight phase $\gamma$ for Rb-87 bosons (mass 86.99 a.m.u.) with a time-of-flight of 15.5 ms

a. expanded from an isotropic optical lattice of wavelength 765 nm:

>>> pyalps.dwa.tofPhase(time_of_flight=15.5, wavelength=765, mass=86.99)
0.006464602556863682


b. expanded from an optical lattice of wavelength 843 nm along x-direction and 765 nm along y- and z- directions:

>>> pyalps.dwa.tofPhase(time_of_flight=15.5, wavelength=[843,765,765], mass=86.99)
[0.007850080468935228, 0.006464602556863682, 0.006464602556863682]


The following table summarizes the basic option for turning on time-of-flight imaging.

| Option | Default | Remark |
|--------|---------|--------|
| tof_phase | 0. | time-of-flight phase $\gamma$ |

### Symmetry leads to further simplification

Symmetries: $\langle n_\mathrm{tof} (\vec{k} ) \rangle = \langle n_\mathrm{tof} (-\vec{k} ) \rangle$ $\langle \hat{b}^+_i \hat{b}_j \rangle = \langle \hat{b}^+_j \hat{b}_i \rangle$

Therefore: $\langle n_\mathrm{tof} (\vec{k} ) \rangle \approx \left| \tilde{w} ( \vec{k} ) \right|^2 \sum_{i,j} \, \langle \hat{b}^+_i \hat{b}_j \rangle \, \cos( \vec{k} \cdot (\vec{r}_i - \vec{r}_j) ) \, \cos(\gamma_f (r_i^2 - r_j^2)) \,\, .$

Define the Green function: $g_f(i,j) = \langle \hat{b}^+_i \hat{b}_j \rangle \, \cos(\gamma_f (r_i^2 - r_j^2))$

In reduced coordinates $\vec{r}_\alpha = \vec{r}_i - \vec{r}_j$, we redefine the Green function: $g_f(\alpha) = \sum_{i,j : \, \vec{r}_i - \vec{r}_j \, = \, \pm \vec{r}_\alpha} \langle \hat{b}^+_i \hat{b}_j \rangle \, \cos(\gamma_f (r_i^2 - r_j^2))$

and therefore the momentum distribution/ time-of-flight image: $\langle n_\mathrm{tof} (\vec{k} ) \rangle \approx \left| \tilde{w} ( \vec{k} ) \right|^2 \sum_{\alpha : \, |r_\alpha| \geq 0} \, g_f (\alpha) \cos( \vec{k} \cdot (\vec{r}_\alpha) ) \,\, .$

Here, we distinguish the momentum distribution from the interference term $\langle n_k (\vec{k} ) \rangle = \sum_{\alpha : \, |r_\alpha| \geq 0} \, g_f (\alpha) \cos( \vec{k} \cdot (\vec{r}_\alpha) ) \,\, .$

Please refer to the list of measurement observables provided by the DWA code.

## Measurements

### Basic options

The following table summarizes the basic options for measurements available to the user of our DWA code. When unspecified in the parameter list, they assume the default values.

| Option | Default | Remark |
|--------|---------|--------|
| SKIP | 1 | number of Monte Carlo configuration updates (sweeps) per measurement |
| MEASURE | true | turns measurements on/off |
| MEASURE[Simulation Speed] | true | observe time taken per checkpoint |

### Measurement observables

When the measurement mode is turned on, the following is a list of common observables available to the user.

| Observable | Boolean control | Binning analysis | Remark |
|------------|-----------------|------------------|--------|
| Total Particle Number $\langle N \rangle$ | | detailed | measured always |
| Energy $\langle E \rangle$ | | detailed | measured always |
| Energy:Vertex $\langle E_v \rangle$ | | detailed | measured always |
| Energy:Onsite $\langle E_o \rangle$ | | detailed | measured always |
| Density $\langle n \rangle$ | | detailed | measured if lattice is homogeneous |
| Energy Density $\langle \epsilon \rangle$ | | detailed | measured if lattice is homogeneous |
| Energy Density:Vertex $\langle E_v \rangle$ | | detailed | measured if lattice is homogeneous |
| Energy Density:Onsite $\langle E_o \rangle$ | | detailed | measured if lattice is homogeneous |
| Total Particle Number^2 $\langle N^2 \rangle$ | measure_number2_ | detailed | |
| Energy^2 $\langle E^2 \rangle$ | measure_energy2_ | detailed | |
| Density^2 $\langle N^2 \rangle$ | measure_density2_ | detailed | measured if lattice is homogeneous |
| Energy Density^2 $\langle E^2 \rangle$ | measure_energy_density2_ | detailed | measured if lattice is homogeneous |
| Winding Number^2 $\langle W_\alpha^2 \rangle$ | measure_winding_number2_ | simple | measured if lattice is periodic; $\alpha=x,y,z$ |
| Local Kink:Number $\langle n_i^r \rangle$ | measure_local_num_kinks_ | simple | |
| Local Density $\langle n_i \rangle$ | measure_local_density_ | simple | |
| Local Density^2 $\langle n_i^2 \rangle$ | measure_local_density2_ | simple | |
| Green Function:0 $g_f \left(\alpha=0\right)$ | | detailed | measured always |
| Green Function:1 $\sum_{i=x,y,z} g_f \left(\alpha_i =1\right)$ | | detailed | measured always |
| Green Function $g_f \left(\alpha ; \gamma = 0 \right)$ | measure_green_function_ | simple | |
| Green Function:TOF $g_f \left(\alpha \right)$ | measure_green_function_ | simple | measured if tof_phase != 0 |
| Momentum Distribution:0 $\langle n_k \left( 0 ; \gamma = 0 \right) \rangle$ | | detailed | measured if tof_phase == 0 |
| Momentum Distribution:TOF:0 $\langle n_k \left( 0 \right) \rangle$ | | detailed | measured if tof_phase != 0 |

### More options

The following table summarizes more options for measurements available to the user of our DWA code. When unspecified in the parameter list, they assume the default values.

| Option | Default | Boolean control |
|--------|---------|-----------------|
| MEASURE[Total Particle Number^2] | false | measure_number2_ |
| MEASURE[Energy^2] | false | measure_energy2_ |
| MEASURE[Density^2] | false | measure_density2_ |
| MEASURE[Energy Density^2] | false | measure_energy_density2_ |
| MEASURE[Local Kink: Number] | false | measure_local_num_kinks_ |
| MEASURE[Winding Number] | false | measure_winding_number_ |
| MEASURE[Local Density] | false | measure_local_density_ |
| MEASURE[Local Density^2] | false | measure_local_density2_ |
| MEASURE[Green Function] | false | measure_green_function_ |

## Thermalization

### Description

1. During thermalization, measurements are not to be trusted.
2. Only the trend in the timeseries measurements is used to determine whether thermalization has been reached.
3. After thermalization, the worldlines configuration is retained, but the measurements are to be discarded.
4. For advanced users, measurements could be turned off for better code performance by setting "MEASURE" to "False".
5. If thermalization turns out to be unstable, increase "SWEEPS" by an order of magnitude.

### Default thermalization script

The following is the default thermalization script provided by us, which can easily be replaced up to the choice of the user.

def thermalized(h5_outfile, observables, tolerance=0.01, simplified=False, includeLog=False):
    if isinstance(observables, str):
        observables = [observables];

    results = [];
    for observable in observables:
        # `timeseries` is the measurement timeseries of `observable` read from h5_outfile
        mean = timeseries.mean();

        index = scipy.linspace(0, timeseries.size-1, timeseries.size);
        timeseries = scipy.polyval(scipy.polyfit(index, timeseries, 1), index);  # timeseries gets linearly fitted

        percentage_increment = (timeseries[-1] - timeseries[0])/mean;
        result = abs(percentage_increment) < tolerance;

        if not includeLog:
            results.append(result);
        else:
            results.append({'observable': observable, 'percentage_increment' : percentage_increment, 'thermalized': result});

    if includeLog or not simplified:
        return results;
    else:
        return reduce(lambda x,y: x*y, results);


## Autocorrelations

1. It is wasteful to measure too frequently, because only uncorrelated measurements contribute to accurate statistics.
2. Unnecessary measurements may sometimes slow down the code by a few orders of magnitude, so measure wisely!
3. Statistically, the autocorrelation time $\tau$ is the number of MC measurements for 63% of the memory to be lost on average.
4. To lose 95% (99%) of memory on average, $3\tau$ ( $5\tau$) consecutive MC measurements are needed.
5. Therefore, $\tau \approx 0.2$ is nearly ideal: the obtained measurements can then be regarded as uncorrelated.
6. The user can increase (decrease) the value of "SKIP" to decrease (increase) the value of $\tau$ statistically.
7. The value of $\tau$ will not affect the accuracy of Monte Carlo statistics, i.e. the MC error is intrinsically corrected by $\tau$. However, it will affect the speed of MC convergence.
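The binning idea behind these statements can be sketched as follows. This is a generic illustration (not the ALPS ALEA implementation): for a synthetic AR(1) timeseries, whose exact integrated autocorrelation time is $\tau = \phi/(1-\phi)$, comparing the naive error with the error of bin averages recovers $\tau$ via $\mathrm{err}^2_\mathrm{binned}/\mathrm{err}^2_\mathrm{naive} \to 1 + 2\tau$:

```python
import numpy as np

rng = np.random.RandomState(42)

# Synthetic correlated timeseries: an AR(1) process with coefficient phi
# has integrated autocorrelation time tau = phi / (1 - phi), about 2.33 for phi = 0.7.
phi, N = 0.7, 1 << 17
x = np.empty(N)
x[0] = rng.randn()
for i in range(1, N):
    x[i] = phi * x[i - 1] + rng.randn()

def binning_tau(series, bin_size=1024):
    # Compare the naive error with the error of bin averages:
    # err_binned^2 / err_naive^2 -> 1 + 2*tau for bins much longer than tau.
    nbins = len(series) // bin_size
    bins = series[:nbins * bin_size].reshape(nbins, bin_size).mean(axis=1)
    naive2  = series.var() / len(series)
    binned2 = bins.var() / nbins
    return 0.5 * (binned2 / naive2 - 1.0)

tau = binning_tau(x)
```

Increasing "SKIP" thins the series, which shrinks the $\tau$ measured in units of measurements, exactly as stated above.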

## Convergence

### Description

1. The ALPS ALEA Library automatically checks for the convergence of MC statistics sampled by detailed binning.
2. Convergence here means convergence of MC error.
3. Convergence is achievable with more good (uncorrelated) measurements.

### Default convergence script

The following is the default convergence script provided by us, which can easily be replaced up to the choice of the user.

def converged(h5_outfile, observables, simplified=False, includeLog=False):
    if isinstance(observables, str):
        observables = [observables];

    results = [];
    for observable in observables:
        # `measurements` holds the binning analysis of `observable` read from h5_outfile
        result = (measurements['mean']['error_convergence'] == 0);

        if not includeLog:
            results.append(result);
        else:
            mean  = measurements['mean']['value'];
            error = measurements['mean']['error'];
            tau   = measurements['tau']['value'];
            count = measurements['count'];
            results.append({'observable': observable, 'converged': result, 'mean': mean, 'error': error, 'tau': tau, 'count': count});

    if includeLog or not simplified:
        return results;
    else:
        return reduce(lambda x,y: x*y, results);


© 2013 by Ping Nang Ma (Tama Ma), Lode Pollet, and Matthias Troyer