ALPS 2 Tutorials:MC-09 Directed Worm Algorithm
Contents
- 1 DWA Tutorial 1: Studying the temperature dependence of energy density at fixed chemical potential
- 2 DWA Tutorial 1: Single simulation (Fully manual mode)
- 2.1 Importing modules in Python
- 2.2 Starting from scratch
- 2.3 Starting from last configuration (discarding all measurements)
- 2.4 Starting from last configuration (keeping all measurements)
- 2.5 Checking thermalization
- 2.6 Checking autocorrelations — Are you measuring too frequently?
- 2.7 Checking convergence
- 3 DWA Tutorial 2: Chaining simulations (Laptop mode)
- 3.1 Designing recursions
- 3.1.1 Chaining n recursive simulations during thermalization
- 3.1.2 Chaining simulations recursively until reaching thermalization
- 3.1.3 Chaining simulations recursively until convergence
- 3.1.4 Chaining more simulations for better statistics after convergence
- 3.1.5 Simulation status
- 3.1.6 Summary report of measurements
- 3.1.7 Multi-chaining automation
- 3.2 Resetting parameters
- 3.3 Multi-tasking
- 4 DWA tutorial 3: Chaining simulations (Cluster mode)
- 5 DWA tutorial 4: Extracting worldlines configuration
- 6 DWA tutorial 5: Trapped bosons in actual-size 3D optical lattice
- 7 Documentation of DWA code
DWA Tutorial 1: Studying the temperature dependence of energy density at fixed chemical potential
Command prompt
Write a parameter script file parm, which is illustrated as follows.
LATTICE="square lattice"
MODEL="boson Hubbard"
L=20
Nmax=20
t=1.
U=16.
mu=32.
SWEEPS=100000
SKIP=400
{T=0.7}
{T=0.8}
{T=0.9}
{T=1.0}
{T=1.1}
{T=1.2}
{T=1.3}
{T=1.4}
{T=1.5}
{T=1.6}
{T=1.7}
{T=1.8}
{T=1.9}
{T=2.0}
DWA Tutorial 1: Single simulation (Fully manual mode)
This tutorial guides the user through controlling a simulation fully by hand. Such a fully manual mode allows the user to understand the entire workflow.
Interested readers are kindly requested to forward further enquiries to the ALPS mailing list.
Importing modules in Python
The following modules have to be imported prior to running the simulation(s).
import pyalps; import pyalps.dwa;
Starting from scratch
This section illustrates how to start a DWA simulation from scratch. As an example, we shall simulate the boson Hubbard model on a 20^{2} square lattice. In the Python interface, the parameters for the simulation can be initialized as follows.
params = []
params.append({
    'LATTICE' : 'square lattice',   # Refer to <lattice.xml> from ALPS Lattice Library
    'MODEL'   : 'boson Hubbard',    # Refer to <model.xml> from ALPS Model Library
    'L'       : 20,                 # Length aspect of lattice
    'Nmax'    : 20,                 # Maximum number of bosons on each site
    't'       : 1.,                 # Hopping
    'U'       : 16.,                # Onsite Interaction
    'mu'      : 32.,                # Chemical potential
    'T'       : 1.,                 # Temperature
    'SWEEPS'  : 100000,             # Total number of sweeps
    'SKIP'    : 400                 # Number of sweeps before measurement (You don't need to measure too often!)
})
h5_infiles = pyalps.writeInputH5Files("parm9a", params);
Here, the above python script
- writes the parameters accordingly to parm9a.task1.in.h5, and
- returns a list of python string ['parm9a.task1.in.h5'] to h5_infiles.
Next, the simulation can easily be started in the Python interface
pyalps.runApplication("dwa", h5_infiles);
or in the command prompt
dwa parm9a.task1.in.h5
where the output file defaults to parm9a.task1.out.h5.
A time limit of 27200 seconds (roughly 7.5 hours), for instance, can additionally be specified in the Python interface as
pyalps.runApplication("dwa", h5_infiles, T=27200);
or in the command prompt as
dwa -T 27200 parm9a.task1.in.h5
Starting from last configuration (discarding all measurements)
Very often, an existing DWA simulation has not yet reached thermalization. In such cases, it is useful to start a new DWA simulation from the last saved worldlines configuration. We can easily do this in the Python interface:
pyalps.dwa.extract_worldlines(infile="parm9a.task1.out.h5", outfile="parm9a.task1.in.h5");
pyalps.runApplication("dwa", "parm9a.task1.in.h5");
In the above script, pyalps.dwa.extract_worldlines extracts the existing worldlines configuration from the h5 file parm9a.task1.out.h5 and saves it into parm9a.task1.in.h5.
Starting from last configuration (keeping all measurements)
The following command starts a new simulation from the last saved worldlines configuration while keeping all measurements, in order to gather more measurements for better statistics. In the Python interface,
pyalps.runApplication("dwa", "parm9a.task1.out.h5");
or in command prompt
dwa parm9a.task1.out.h5
where the output file again defaults to parm9a.task1.out.h5.
Checking thermalization
The following command checks whether the measurement observable "Total Particle Number" has reached thermalization in the Monte Carlo simulation.
pyalps.dwa.thermalized("parm9a.task1.out.h5", "Total Particle Number");
To check for thermalization of a set of measurement observables, the following is an easy example.
pyalps.dwa.thermalized("parm9a.task1.out.h5", ["Total Particle Number", "Energy"]);
A simplified boolean indicator for thermalization can be obtained via
pyalps.dwa.thermalized("parm9a.task1.out.h5", ["Total Particle Number", "Energy"], simplified=True);
To see a more complete log, set the argument includeLog to True.
pyalps.dwa.thermalized("parm9a.task1.out.h5", ["Total Particle Number", "Energy"], includeLog=True);
Note:
- Only measurement observables that support detailed binning analysis can be checked for thermalization.
- The user can easily replace the default thermalization script for his own purpose.
- Thermalization can be improved with a bigger value of "SWEEPS".
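For intuition, the flavour of such a check can be mimicked with a much cruder heuristic: compare the mean of the first half of a time series against the second half. The helper below is only an illustrative sketch and is not part of pyalps, which performs a detailed binning analysis instead:

```python
import math

def crude_thermalization_check(series, tol=2.0):
    # Split the series in half; for thermalized data, the two halves should
    # have means that agree within `tol` combined standard errors.
    # Crude illustrative stand-in, NOT the pyalps.dwa algorithm.
    n = len(series) // 2
    def mean_and_error(xs):
        m = sum(xs) / len(xs)
        var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        return m, math.sqrt(var / len(xs))
    m1, e1 = mean_and_error(series[:n])
    m2, e2 = mean_and_error(series[n:])
    return abs(m1 - m2) <= tol * math.sqrt(e1 ** 2 + e2 ** 2)
```

A steadily drifting series (still thermalizing) fails this check, while a stationary noisy one passes.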
Checking autocorrelations — Are you measuring too frequently?
Checking autocorrelations is strategically important: although autocorrelations do not affect the correctness of the Monte Carlo statistics, they do drastically affect the speed of convergence. The autocorrelation time of the measurement observable "Total Particle Number", for example, can easily be obtained via:
pyalps.dwa.tau("parm9a.task1.out.h5", "Total Particle Number");
or some set of measurement observables via:
pyalps.dwa.tau("parm9a.task1.out.h5", ["Total Particle Number", "Energy"]);
Knowing the autocorrelation time τ can help the user modify the value of "SKIP" accordingly.
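As a rough illustration of how τ might guide the choice of "SKIP", consider the following sketch. The helper function and the target value of τ are assumptions for illustration only, not a pyalps prescription:

```python
def suggest_skip(current_skip, tau, target_tau=0.5):
    # Rescale SKIP so that the autocorrelation time (in units of
    # measurements) lands near target_tau: a tiny tau means nearly
    # independent measurements, so SKIP can safely be reduced.
    # Hypothetical helper, not part of the pyalps API.
    if tau <= 0:
        return current_skip
    return max(1, int(round(current_skip * tau / target_tau)))
```

For example, suggest_skip(400, 0.0129) returns 10 under these assumptions, i.e. one could measure far more often.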
Checking convergence
The following command checks whether the measurement observable "Total Particle Number" has reached convergence in the Monte Carlo simulation.
pyalps.dwa.converged("parm9a.task1.out.h5", "Total Particle Number");
To check for convergence of a set of measurement observables, the following is an easy example.
pyalps.dwa.converged("parm9a.task1.out.h5", ["Total Particle Number", "Energy"]);
A simplified boolean indicator for convergence can be obtained via
pyalps.dwa.converged("parm9a.task1.out.h5", ["Total Particle Number", "Energy"], simplified=True);
To see a more complete log, one has to set the argument variable includeLog to True.
pyalps.dwa.converged("parm9a.task1.out.h5", ["Total Particle Number", "Energy"], includeLog=True);
Note:
- Thermalization should be checked prior to checking convergence.
- The following are some reasons why some Monte Carlo statistics have not converged:
- There are too few measurements. The user has to continue the simulation and obtain more measurements.
- There are too many correlated measurements. The user has to check autocorrelations and increase the value of "SKIP".
- The simulation has not thermalized. If thermalization was once reached but is no longer detected, the user has to increase the value of "SWEEPS" to stabilize thermalization.
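The continue-until-converged logic described in these notes can be sketched as a generic driver loop. Here run_once, is_thermalized and is_converged are caller-supplied callables (for instance thin wrappers around pyalps.runApplication, pyalps.dwa.thermalized and pyalps.dwa.converged); this is an illustrative sketch, not part of pyalps:

```python
def run_until_converged(run_once, is_thermalized, is_converged, max_rounds=20):
    # Repeat: run one chunk of simulation; once thermalized, keep running
    # until the statistics have converged. Returns True on success,
    # False if max_rounds chunks were not enough.
    for _ in range(max_rounds):
        run_once()
        if not is_thermalized():
            continue    # still thermalizing: run more before checking statistics
        if is_converged():
            return True
    return False
```

The recursiveRun facility shown in Tutorial 2 plays exactly this driver role for the real application.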
DWA Tutorial 2: Chaining simulations (Laptop mode)
Designing recursions
Using the same example as in Tutorial 1, i.e.
import pyalps; import pyalps.dwa;
params = []
params.append({
    'LATTICE' : 'square lattice',   # Refer to <lattice.xml> from ALPS Lattice Library
    'MODEL'   : 'boson Hubbard',    # Refer to <model.xml> from ALPS Model Library
    'L'       : 20,                 # Length aspect of lattice
    'Nmax'    : 20,                 # Maximum number of bosons on each site
    't'       : 1.,                 # Hopping
    'U'       : 16.,                # Onsite Interaction
    'mu'      : 32.,                # Chemical potential
    'T'       : 1.,                 # Temperature
    'SWEEPS'  : 100000,             # Total number of sweeps
    'SKIP'    : 400                 # Number of sweeps before measurement (You don't need to measure too often!)
})
h5_infiles = pyalps.writeInputH5Files("parm9a", params);
Chaining n recursive simulations during thermalization
The user can easily design a simulation chain of, for instance, 3 recursions during the thermalization process:
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', h5_infiles[0])",
    cmd_lang = 'python',
    follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(h5_infiles[0]), outfile=h5_infiles[0])",
    n = 3,
    write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles[0]), 'Thermalizing')",
    loc = locals()
);
Chaining simulations recursively until reaching thermalization
The user can also run the simulation recursively until thermalization is reached. (The thermalization condition is decided by the user; the following is an example.)
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', h5_infiles[0])",
    cmd_lang = 'python',
    follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(h5_infiles[0]), outfile=h5_infiles[0])",
    break_if = "pyalps.dwa.thermalized(pyalps.input2output(h5_infiles[0]), 'Total Particle Number', simplified=True)",
    write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles[0]), 'Thermalizing')",
    loc = locals()
);
Chaining simulations recursively until convergence
The following is an example of a recursive chain of simulations until convergence.
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', pyalps.input2output(h5_infiles[0]))",
    cmd_lang = 'python',
    break_if = "pyalps.dwa.converged(pyalps.input2output(h5_infiles[0]), 'Total Particle Number', simplified=True)",
    write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles[0]), 'Converging')",
    loc = locals()
);
Chaining more simulations for better statistics after convergence
Very often, the user wants to improve the quality of Monte Carlo statistics beyond convergence. The following is an example of designating 10 more recursions.
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', pyalps.input2output(h5_infiles[0]))",
    cmd_lang = 'python',
    n = 10,
    write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles[0]), 'Converging')",
    loc = locals()
);
Simulation status
By default, only two statuses, namely Thermalizing and Converging, are archived in the output files. Following the above examples, the most recent simulation status can be read from
pyalps.dwa.status('parm9a.task1.out.h5');
and the entire simulation status history from
pyalps.dwa.status('parm9a.task1.out.h5', includeLog=True);
The user is free to define custom statuses based on his/her own needs. The following shows how one can assign the status Zurich to the output file:
pyalps.dwa.write_status('parm9a.task1.out.h5', 'Zurich');
Knowing the simulation status history is often very useful in practice to keep track of
- unstabilized thermalization;
- overlengthy simulations (a strong hint that you should measure more or less frequently);
- possible errors due to the machine/cluster.
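As an illustration of how the status history could be put to use, here is a hypothetical helper that scans the history for the warning signs listed above (the helper and its thresholds are illustrative assumptions, not part of pyalps):

```python
def diagnose_status_history(history, max_therm=5, max_conv=20):
    # history: list of status strings, as returned by
    # pyalps.dwa.status(..., includeLog=True). Thresholds are arbitrary.
    notes = []
    if history.count('Thermalizing') > max_therm:
        notes.append('thermalization looks unstable; consider increasing SWEEPS')
    if history.count('Converging') > max_conv:
        notes.append('convergence is slow; check autocorrelations and adjust SKIP')
    return notes
```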
Summary report of measurements
One can conveniently see the summary report of measurements stored in the file parm9a.task1.out.h5 via:
pyalps.dwa.summaryReport('parm9a.task1.out.h5');
Multi-chaining automation
Assuming that the user knows exactly what he/she is doing, fully automated scripts can be designed. An example is given as follows.
def serial_automated_processing(taskfile):
    while True:
        # chain runs until thermalization is reached
        pyalps.dwa.recursiveRun(
            "pyalps.runApplication('dwa', taskfile)",
            cmd_lang = 'python',
            follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(taskfile), outfile=taskfile)",
            break_if = "pyalps.dwa.thermalized(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True)",
            write_status = "pyalps.dwa.write_status(pyalps.input2output(taskfile), 'Thermalizing')",
            loc = locals()
        );
        # chain runs until convergence (or bail out if thermalization is lost)
        pyalps.dwa.recursiveRun(
            "pyalps.runApplication('dwa', pyalps.input2output(taskfile))",
            cmd_lang = 'python',
            break_if = "pyalps.dwa.converged(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True)",
            break_elseif = "not pyalps.dwa.thermalized(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True)",
            write_status = "pyalps.dwa.write_status(pyalps.input2output(taskfile), 'Converging')",
            loc = locals()
        );
        if pyalps.dwa.thermalized(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True):
            break;
        else:
            pyalps.dwa.extract_worldlines(infile=pyalps.input2output(taskfile), outfile=taskfile);
serial_automated_processing(h5_infiles[0]);
Resetting parameters
Sometimes, for reasons such as those listed below, parameters have to be reset. This section illustrates how this can easily be done.
Example: Measuring too infrequently
A close inspection of autocorrelations via
>>> pyalps.dwa.tau(pyalps.input2output(h5_infiles[0]), 'Total Particle Number')
[0.01291737172549734]
signals that one can measure more frequently, thus speeding up convergence. The following is an example of how to reset some of the parameters and write them to a new file parm9b.task1.in.h5.
params = pyalps.getParameters(h5_infiles[0]);
params[0].update({
    'SWEEPS' : 100000,
    'SKIP'   : 400
});
h5_infiles = pyalps.writeInputH5Files("parm9b", params);
The following easily starts the new simulation and runs it to completion.
pyalps.dwa.extract_worldlines(infile="parm9a.task1.in.h5", outfile="parm9b.task1.in.h5");   # You don't want to thermalize again, right?
pyalps.runApplication('dwa', h5_infiles[0]);
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', pyalps.input2output(h5_infiles[0]))",
    cmd_lang = 'python',
    break_if = "pyalps.dwa.converged(pyalps.input2output(h5_infiles[0]), 'Total Particle Number')",
    write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles[0]), 'Converging')",
    loc = locals()
);
Indeed, the autocorrelation time increases almost proportionally as the interval between measurements is decreased.
>>> pyalps.dwa.tau(pyalps.input2output(h5_infiles[0]), 'Total Particle Number')
[0.15614752133389365]
Example: Unstable automated processes
In the last examples, the automated processes are stable, as is evident in the simulation status history:
>>> pyalps.dwa.status('parm9a.task1.out.h5', includeLog=True)
['Thermalizing', 'Thermalizing', 'Converging', 'Converging', 'Converging', 'Converging']
However, this is not always the case, and the chain may end up in an endless simulation loop. When this happens, increase SWEEPS by an order of magnitude.
Example: Including more measurement observables
Very often, the user decides at the very last moment to perform alternative measurements. The following example illustrates how to do this easily.
Step 1: Prepare a newly modified parameter file
params = pyalps.getParameters(h5_infiles[0]);
params[0].update({ 'MEASURE[Local Density]' : 1 });
h5_infiles = pyalps.writeInputH5Files("parm9c", params);
Step 2: Collect statistics for new observable
pyalps.dwa.extract_worldlines(infile='parm9b.task1.in.h5', outfile='parm9c.task1.in.h5');
pyalps.runApplication('dwa', h5_infiles[0]);
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', pyalps.input2output(h5_infiles[0]))",
    cmd_lang = 'python',
    n = 10,
    loc = locals()
);
Side remark: Only simple binning analysis is supported for the measurement observable Local Density, in order not to blow up memory. There is therefore no guarantee that the individual error bars converge.
Multi-tasking
Suppose we want to perform simulations across a range of temperatures, as illustrated in the following.
import numpy; import pyalps; import pyalps.dwa;
Ts = numpy.arange(0.5, 5.0, 0.1);   # Ts = 0.5, 0.6, 0.7, ..., 4.7, 4.8, 4.9

params = []
for T in Ts:
    params.append({
        'LATTICE' : 'square lattice',   # Refer to <lattice.xml> from ALPS Lattice Library
        'MODEL'   : 'boson Hubbard',    # Refer to <model.xml> from ALPS Model Library
        'L'       : 20,                 # Length aspect of lattice
        'Nmax'    : 20,                 # Maximum number of bosons on each site
        't'       : 1.,                 # Hopping
        'U'       : 20.,                # Onsite Interaction
        'mu'      : 15.,                # Chemical potential
        'T'       : T,                  # Temperature
        'SWEEPS'  : 100000,             # Total number of sweeps
        'SKIP'    : 400                 # Number of sweeps before measurement (You don't need to measure too often!)
    })

h5_infiles = pyalps.writeInputH5Files("parm9d", params);
The following is how one could (undesirably, in serial) perform the above-mentioned multi-tasking simulation chains. (The Python function serial_automated_processing is defined in the Multi-chaining automation subsection above.)
for h5_infile in h5_infiles:
    serial_automated_processing(h5_infile);
DWA tutorial 3: Chaining simulations (Cluster mode)
While Tutorial 2 illustrates the concept of simulation chaining, it is only suitable for your personal laptop. For heavier or more massive simulations, the simulation chaining (or job chaining) has to be performed on clusters. This section illustrates how this is done.
Submit a batch job to cluster
The usual way to submit a batch job to a Unix cluster is through a bash script. Using the Brutus cluster of ETH Zurich as an example, the command to request one processor for 10 minutes is
bsub -n1 -W10 dwa parm9d.task1.in.h5
However, for convenience we want to do everything in the Python interface. The equivalent Python command is
import pyalps; import pyalps.dwa;

pyalps.dwa.recursiveRun(
    "dwa parm9d.task1.in.h5",
    batch_submit = True,
    batch_cmd_prefix = 'bsub -n1 -W10'
);
This may seem cumbersome at first. However, its convenience shows up very shortly in the coming subsections.
Recursive chaining
Yeah! The usual recursive chaining can now very easily be carried over to cluster mode (thanks to the tremendous effort of the coder).
First, the usual preliminaries:
import pyalps; import pyalps.dwa;

h5_infiles = pyalps.getInputH5Files(prefix="parm9d")
For thermalization, we have
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', h5_infiles[0])",
    cmd_lang = 'python',
    follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(h5_infiles[0]), outfile=h5_infiles[0])",
    break_if = "pyalps.dwa.thermalized(pyalps.input2output(h5_infiles[0]), 'Total Particle Number', simplified=True)",
    write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles[0]), 'Thermalizing')",
    loc = locals(),
    batch_submit = True,
    batch_cmd_prefix = 'bsub -n1 -W10',
    batch_run_script = h5_infiles[0] + '.thermalize.script'
);
For convergence, we have
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', pyalps.input2output(h5_infiles[0]))",
    cmd_lang = 'python',
    break_if = "pyalps.dwa.converged(pyalps.input2output(h5_infiles[0]), 'Total Particle Number', simplified=True)",
    write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles[0]), 'Converging')",
    loc = locals(),
    batch_submit = True,
    batch_cmd_prefix = 'bsub -n1 -W10',
    batch_run_script = h5_infiles[0] + '.converge.script'
);
For better statistics, we can similarly chain further runs; say, 10 more recursions:
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', pyalps.input2output(h5_infiles[0]))",
    cmd_lang = 'python',
    n = 10,
    loc = locals(),
    batch_submit = True,
    batch_cmd_prefix = 'bsub -n1 -W10',
    batch_run_script = h5_infiles[0] + '.statistics.script'
);
Multi recursive chaining
The user can also flexibly build up multi layers of recursive chaining for strategic purposes. The following code shows how easily this could be!
Again, the usual preliminaries:
import pyalps; import pyalps.dwa;

h5_infiles = pyalps.getInputH5Files(prefix="parm9d")
def parallel_automated_processing(taskfile):
    # prepare the thermalization chain (not started yet: batch_noRun = True)
    pyalps.dwa.recursiveRun(
        "pyalps.runApplication('dwa', taskfile)",
        cmd_lang = 'python',
        follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(taskfile), outfile=taskfile)",
        break_if = "pyalps.dwa.thermalized(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True)",
        write_status = "pyalps.dwa.write_status(pyalps.input2output(taskfile), 'Thermalizing')",
        loc = locals(),
        batch_submit = True,
        batch_cmd_prefix = 'bsub -n1 -W10',
        batch_run_script = taskfile + '.thermalize.script',
        batch_next_run_script = taskfile + '.converge.script',
        batch_noRun = True
    );
    # prepare the convergence chain (not started yet)
    pyalps.dwa.recursiveRun(
        "pyalps.runApplication('dwa', pyalps.input2output(taskfile))",
        cmd_lang = 'python',
        break_if = "pyalps.dwa.converged(pyalps.input2output(taskfile), 'Total Particle Number', simplified=True)",
        write_status = "pyalps.dwa.write_status(pyalps.input2output(taskfile), 'Converging')",
        loc = locals(),
        batch_submit = True,
        batch_cmd_prefix = 'bsub -n1 -W10',
        batch_run_script = taskfile + '.converge.script',
        batch_noRun = True
    );
    # submit the thermalization script; the convergence script follows it
    pyalps.dwa.startRunScript(
        batch_run_script = taskfile + '.thermalize.script',
        batch_cmd_prefix = 'bsub -n1 -W10',
        loc = locals()
    );
for h5_infile in h5_infiles:
    parallel_automated_processing(h5_infile);
Handling measurements (Example: Temperature dependence of density at fixed chemical potential)
This subsection illustrates how one can handle measurements with ease.
Of course the header:
import pyalps;
Entire information
Sometimes, we have many queries regarding the raw measurement data, and we want to inspect every single detail more closely. This can easily be done. Say we want to know every detail of the measurement observable Density in the output file parm9d.task1.out.h5:
pyalps.getMeasurements('parm9d.task1.out.h5', observable='Density');
Not to forget, the parameters can be obtained by
pyalps.getParameters('parm9d.task1.out.h5');
Convenient tools for handling measurements
The list of all result files with prefix parm9d can be obtained by:
resultFiles = pyalps.getResultFiles(prefix='parm9d');
To load all data regarding the measurement observable Density:
data = pyalps.loadMeasurements(resultFiles, 'Density');
or regarding an additional measurement observable Energy Density
data = pyalps.loadMeasurements(resultFiles, ['Density', 'Energy Density']);
We can next easily collect and group measurements accordingly, say:
T_Density = pyalps.collectXY(data, x='T', y='Density');
Plotting the measurements
One can prepare the plot, say in JPEG format:
import matplotlib; matplotlib.use('Agg');
import matplotlib.pyplot;
import pyalps.plot;

fig = matplotlib.pyplot.figure();
pyalps.plot.plot(T_Density);
fig.savefig('parm9d.T_Density.jpg');
Email the plot to yourself
You can then easily send the beautiful plot to your (my) email:
pyalps.sendmail(
    'pingnang@phys.ethz.ch',    # email address of recipient
    attachment = 'parm9d.T_Density.jpg',
    subject    = 'Picture: parm9d.T_Density.jpg'
);
Submitting jobs to PBS cluster
When a shell script is compulsory for job submission, say to a PBS cluster, this can be really irritating. But ALPS makes auto-recursion easily possible here too.
As usual the header:
import pyalps; import pyalps.dwa;
One can, for instance, submit a job (automatically, from within Python) that runs dwa parm9d.task1.in.h5 as follows.
pyalps.dwa.recursiveRun(
    "dwa parm9d.task1.in.h5",
    batch_submit = True,
    batch_cmd_prefix = 'qsub -l nodes=1:ppn=1 -l walltime=0:10:00'
);
The rest proceeds as usual.
For thermalization, we have
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', h5_infiles[0])",
    cmd_lang = 'python',
    follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(h5_infiles[0]), outfile=h5_infiles[0])",
    break_if = "pyalps.dwa.thermalized(pyalps.input2output(h5_infiles[0]), 'Total Particle Number', simplified=True)",
    write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles[0]), 'Thermalizing')",
    loc = locals(),
    batch_submit = True,
    batch_cmd_prefix = 'qsub -l nodes=1:ppn=1 -l walltime=0:10:00',
    batch_run_script = h5_infiles[0] + '.thermalize.script'
);
For convergence, we have
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', pyalps.input2output(h5_infiles[0]))",
    cmd_lang = 'python',
    break_if = "pyalps.dwa.converged(pyalps.input2output(h5_infiles[0]), 'Total Particle Number', simplified=True)",
    write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles[0]), 'Converging')",
    loc = locals(),
    batch_submit = True,
    batch_cmd_prefix = 'qsub -l nodes=1:ppn=1 -l walltime=0:10:00',
    batch_run_script = h5_infiles[0] + '.converge.script'
);
For better statistics, we can similarly chain further runs; say, 10 more recursions:
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', pyalps.input2output(h5_infiles[0]))",
    cmd_lang = 'python',
    n = 10,
    loc = locals(),
    batch_submit = True,
    batch_cmd_prefix = 'qsub -l nodes=1:ppn=1 -l walltime=0:10:00',
    batch_run_script = h5_infiles[0] + '.statistics.script'
);
Here, I want to specially thank Mr. Francisco Cordobes Aguilar (University of London) for testing this functionality.
Example: Temperature dependence of energy density at fixed density
This is very simple.
Chemical potential meshing
In this example, let us fix the density at 1 for the following parameters:
import numpy; import pyalps; import pyalps.dwa;

Ts  = numpy.arange(0.5, 5.0, 0.1);   # Ts  = 0.5, 0.6, 0.7, ..., 4.7, 4.8, 4.9
mus = numpy.arange(0.0, 6.0, 1.0);   # mus = 0.0, 1.0, 2.0, 3.0, 4.0, 5.0

params = []
for T in Ts:
    for mu in mus:
        params.append({
            'LATTICE' : 'square lattice',   # Refer to <lattice.xml> from ALPS Lattice Library
            'MODEL'   : 'boson Hubbard',    # Refer to <model.xml> from ALPS Model Library
            'L'       : 20,                 # Length aspect of lattice
            'Nmax'    : 20,                 # Maximum number of bosons on each site
            't'       : 1.,                 # Hopping
            'U'       : 10.,                # Onsite Interaction
            'mu'      : mu,                 # Chemical potential
            'T'       : T,                  # Temperature
            'SWEEPS'  : 100000,             # Total number of sweeps
            'SKIP'    : 400                 # Number of sweeps before measurement (You don't need to measure too often!)
        })

h5_infiles = pyalps.writeInputH5Files("parm9e", params);
The following will give us some insight into the physics:
for h5_infile in h5_infiles:
    parallel_automated_processing(h5_infile);
Observe from the plot
Next, we want to prepare the following plot:
resultFiles = pyalps.getResultFiles(prefix='parm9e');
data = pyalps.loadMeasurements(resultFiles, 'Density');
density = pyalps.collectXY(data, x='T', y='Density', foreach=['mu']);
import matplotlib; matplotlib.use('Agg');
import matplotlib.pyplot;
import pyalps.plot;

fig = matplotlib.pyplot.figure();
pyalps.plot.plot(density);
fig.savefig('parm9e.density.jpg');
Send the beautiful plot to your (my) email:
pyalps.sendmail(
    'pingnang@phys.ethz.ch',    # email address of recipient
    attachment = 'parm9e.density.jpg',
    subject    = 'Picture: parm9e.density.jpg'
);
This should look like the following:
Note that the target density of 1 is bracketed from below and above by the measured curves, so we are ready to do refinements in the next subsection.
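The refinement performed by pyalps.paramsAtFixedY can be pictured as a linear interpolation between the two chemical potentials whose measured densities bracket the target. The following is an illustrative stand-in for that idea, not the actual pyalps implementation:

```python
def mu_at_fixed_density(mus, densities, target=1.0):
    # Scan consecutive (mu, density) points; where the measured density
    # brackets the target, linearly interpolate the chemical potential.
    # Illustrative sketch only, NOT pyalps.paramsAtFixedY itself.
    pairs = list(zip(mus, densities))
    for (mu0, n0), (mu1, n1) in zip(pairs, pairs[1:]):
        if (n0 - target) * (n1 - target) <= 0:
            return mu0 + (target - n0) * (mu1 - mu0) / (n1 - n0)
    raise ValueError('target density is not bracketed by the given data')
```

This is why the bracketing observed in the plot matters: without it, no interpolation point exists.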
Fixing the density
First, we obtain a set of new parameters estimated to yield the fixed density of 1:
resultFiles = pyalps.getResultFiles(prefix='parm9e');
data = pyalps.loadMeasurements(resultFiles, 'Density');
params = pyalps.paramsAtFixedY(data, x='mu', y='Density', fixedY=1., foreach=['T']);
Then write into the new h5 parameter files:
h5_infiles = pyalps.writeInputH5Files("parm9e.1",params);
Run them:
for h5_infile in h5_infiles:
    parallel_automated_processing(h5_infile);
Repeat the plotting procedure to see if you are satisfied with the new estimates. Otherwise, repeat this subsection to produce parm9e.2, parm9e.3, and so on, until you are satisfied.
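This repeat-until-satisfied cycle can be sketched as a small driver. Here run_round and is_satisfied are caller-supplied callables wrapping the paramsAtFixedY / writeInputH5Files / run steps above; the helper itself is hypothetical, not part of the pyalps API:

```python
def refine_until_satisfied(run_round, is_satisfied, base='parm9e', max_rounds=5):
    # Drive the refinement cycle over prefixes parm9e.1, parm9e.2, ...
    # until the caller-supplied check passes; returns the accepted prefix,
    # or None if max_rounds refinements were not enough.
    for k in range(1, max_rounds + 1):
        prefix = '{}.{}'.format(base, k)
        run_round(prefix)
        if is_satisfied(prefix):
            return prefix
    return None
```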
Energy density at fixed density
Suppose you are satisfied with parm9e.2, then your task is done with the following procedure:
resultFiles = pyalps.getResultFiles(prefix='parm9e.2');
data = pyalps.loadMeasurements(resultFiles, 'Energy Density');
energy_density = pyalps.collectXY(data, x='T', y='Energy Density');
import matplotlib; matplotlib.use('Agg');
import matplotlib.pyplot;
import pyalps.plot;

fig = matplotlib.pyplot.figure();
pyalps.plot.plot(energy_density);
fig.savefig('parm9e.energy_density.jpg');
Send the beautiful plot to your (my) email:
pyalps.sendmail(
    'pingnang@phys.ethz.ch',    # email address of recipient
    attachment = 'parm9e.energy_density.jpg',
    subject    = 'Picture: parm9e.energy_density.jpg'
);
This should look like the following:
DWA tutorial 4: Extracting worldlines configuration
In reality, it is impossible to cater for every measurement observable a user might desire. However, when it comes to diagonal measurements, for instance density-density correlations, there is a simple trick.
This section illustrates how this can be easily done.
Worldlines configuration
Of course, the header:
import numpy;
import pyalps;
import pyalps.dwa;
import pyalps.ngs;
Using the same example as parm9a, i.e.:
params = []
params.append({
    'LATTICE' : 'square lattice',   # Refer to <lattice.xml> from ALPS Lattice Library
    'MODEL'   : 'boson Hubbard',    # Refer to <model.xml> from ALPS Model Library
    'L'       : 20,                 # Length aspect of lattice
    'Nmax'    : 20,                 # Maximum number of bosons on each site
    't'       : 1.,                 # Hopping
    'U'       : 16.,                # Onsite Interaction
    'mu'      : 32.,                # Chemical potential
    'T'       : 1.,                 # Temperature
    'SWEEPS'  : 100000,             # Total number of sweeps
    'SKIP'    : 400                 # Number of sweeps before measurement (You don't need to measure too often!)
})
h5_infiles = pyalps.writeInputH5Files("parm9a", params);
or if that already exists,
h5_infiles = pyalps.getInputH5Files(prefix='parm9a');
Run until thermalization. (The user is free to modify the following for cluster mode.)
pyalps.dwa.recursiveRun(
    "pyalps.runApplication('dwa', h5_infiles[0])",
    cmd_lang = 'python',
    follow_up_script = "pyalps.dwa.extract_worldlines(infile=pyalps.input2output(h5_infiles[0]), outfile=h5_infiles[0])",
    break_if = "pyalps.dwa.thermalized(pyalps.input2output(h5_infiles[0]), 'Total Particle Number', simplified=True)",
    write_status = "pyalps.dwa.write_status(pyalps.input2output(h5_infiles[0]), 'Thermalizing')",
    loc = locals()
);
The entire worldlines configuration object can be obtained via:
wl = pyalps.dwa.extract_worldlines(pyalps.input2output(h5_infiles[0]));
A single-shot measurement of the site occupations is obtained via:
states = numpy.array(wl.states()).reshape([20,20])
A single-shot measurement of the density-density correlation is then naturally:
rho = numpy.zeros((20,20));
for i in range(20):
    for j in range(20):
        rho[i][j] = (states * pyalps.dwa.cyclic_shift(states, by=(i,j))).mean()
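For readers without pyalps at hand, the same single-shot estimate can be reproduced with plain NumPy, using numpy.roll in place of pyalps.dwa.cyclic_shift (a sketch under that assumption):

```python
import numpy

def density_density_correlation(states):
    # states: 2D array of site occupations from one single-shot configuration.
    # numpy.roll stands in here for pyalps.dwa.cyclic_shift(states, by=(i, j)).
    L0, L1 = states.shape
    rho = numpy.zeros((L0, L1))
    for i in range(L0):
        for j in range(L1):
            shifted = numpy.roll(numpy.roll(states, i, axis=0), j, axis=1)
            rho[i, j] = (states * shifted).mean()
    return rho
```

On a checkerboard occupation pattern, for instance, the zero-distance correlation equals the mean occupation while the nearest-neighbour correlation vanishes, as expected.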
Visualizing the worldlines configuration
The user can easily visualize the worldlines configuration for himself/herself.
Suppose, for instance, we want to look at the worldlines configuration along the x-direction at y=5 from the task parm9a.task1.out.h5. This can easily be done as follows:
wl = pyalps.dwa.extract_worldlines('parm9a.task1.out.h5')
pyalps.dwa.show_worldlines(wl, reshape = [20,20], at = '[:,5]');
This will look something like this:
It is also possible to easily sketch your own (extended) worldlines configuration using Python, but that will not be discussed here. In practice, you can even produce your own movie that illustrates how the worm propagates from one worldlines configuration to another.
Interested people are kindly requested to forward further enquiries to the ALPS mailing list via here.
Example: Density-density correlation
Single tasking (tailor-made to the needs of the user)
After thermalization, we shall now turn off all intrinsic measurements:
params = pyalps.getParameters(h5_infiles[0]);
params[0].update({
    'MEASURE' : 0,
    'SWEEPS'  : 400
});
h5_infiles = pyalps.writeInputH5Files("parm9a.1", params);
and, not forgetting to copy the thermalized worldlines configuration, run once:
pyalps.dwa.extract_worldlines('parm9a.task1.in.h5', 'parm9a.1.task1.in.h5');
pyalps.runApplication('dwa', h5_infiles[0]);
Next, we write our own script to collect the statistics for density-density correlations:
def my_own_private_python_function_to_collect_statistics(taskfile):
    wl = pyalps.dwa.worldlines();
    wl.load(pyalps.input2output(taskfile));
    states = numpy.array(wl.states()).reshape([20,20])
    rho = numpy.zeros((20,20));
    for i in range(20):
        for j in range(20):
            rho[i][j] = (states * pyalps.dwa.cyclic_shift(states, by=(i,j))).mean();
    pyalps.dwa.addToObservable(
        pyalps.input2output(taskfile),
        RealVectorObservable = 'Density Density Correlation',
        measurement = rho.flatten()
    );
Finally, we can very easily gear up our own implementation of single tasking like this:
def my_own_private_simulation(taskfile, loc):
    locals().update(loc);
    pyalps.dwa.recursiveRun(
        "pyalps.runApplication('dwa', pyalps.input2output(taskfile))",
        cmd_lang = 'python',
        follow_up_script = "my_own_private_python_function_to_collect_statistics(taskfile)",
        n = 64,
        loc = locals()
    );
A simple check can be done as follows:
my_own_private_simulation(h5_infiles[0], loc=locals());
Next, we want to automate matters again:
def my_own_private_serial_automated_processing(taskfile, loc):
    locals().update(loc);
    pyalps.dwa.recursiveRun(
        "my_own_private_simulation(taskfile, loc=locals())",
        cmd_lang = 'python',
        break_if = "my/own/private/python/function/to/check/for/convergence",
        loc = locals()
    );
my_own_private_serial_automated_processing(h5_infiles[0], loc=locals());
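The chaining logic itself is simple: run a step, then test a break condition, and repeat until it holds. The following plain-Python sketch (a hypothetical stand-in for pyalps.dwa.recursiveRun, not its actual implementation) illustrates the pattern:

```python
def recursive_run(step, break_if, max_n=100):
    """Repeat step() until break_if() returns True; return the number
    of iterations performed (capped at max_n)."""
    for n in range(1, max_n + 1):
        step()
        if break_if():
            return n
    return max_n

# Toy usage: the "simulation" increments a sweep counter,
# and "convergence" is declared after 5 steps
state = {'sweeps': 0}
n = recursive_run(lambda: state.update(sweeps=state['sweeps'] + 1),
                  lambda: state['sweeps'] >= 5)
```

In the real recursiveRun, the step and break condition are passed as Python source strings evaluated against `loc`, which is why the snippets above call `locals().update(loc)` first.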
Multi-tasking
Very easy.
Step 1: Ensure all tasks have been thermalized. Refer to the previous sections to see how this is done.
Step 2: Collect statistics until convergence. To do this
in laptop mode:
for h5_infile in h5_infiles:
    my_own_private_serial_automated_processing(h5_infile, loc=locals());
or in cluster mode:
def my_own_private_parallel_automated_processing(taskfile):
    pyalps.dwa.recursiveRun(
        "my_own_private_simulation(taskfile, loc=locals())",
        cmd_lang = 'python',
        break_if = "my/own/private/python/function/to/check/for/convergence",
        loc = locals(),
        batch_submit = True,
        batch_cmd_prefix = 'bsub -n1 -W60',
        batch_run_script = taskfile + '.converge.script'
    );
for h5_infile in h5_infiles:
    my_own_private_parallel_automated_processing(h5_infile);
DWA tutorial 5: Trapped bosons in actual-size 3D optical lattice
The usual business
The DWA code is robust and able to handle very large simulation sizes. As a first example, we look at one specific parameter input that resembles actual experiments.
Step 1: The usual business
import pyalps;
import pyalps.dwa;
Step 2: Preparing the parameter file
tof_phase = pyalps.dwa.tofPhase(time_of_flight=15.5, wavelength=[843,765,765], mass=86.99)

params = []
params.append(
  { 'LATTICE'         : 'inhomogeneous simple cubic lattice'   # Refer to <lattice.xml> from ALPS Lattice Library
  , 'MODEL'           : 'boson Hubbard'                        # Refer to <model.xml> from ALPS Model Library
  , 'L'               : 100        # Length aspect of lattice
  , 'Nmax'            : 20         # Maximum number of bosons on each site
  , 't'               : 1.         # Hopping
  , 'U'               : 8.11       # Onsite Interaction
  , 'T'               : 1.         # Temperature
  , 'mu_homogeneous'  : 4.05       # Chemical potential (homogeneous)
  , 'mu'              : 'mu_homogeneous - (0.0073752*(x-(L-1)/2.)*(x-(L-1)/2.) + 0.0036849*(y-(L-1)/2.)*(y-(L-1)/2.) + 0.0039068155*(z-(L-1)/2.)*(z-(L-1)/2.))'
  , 'tof_phase'       : str(tof_phase)
  , 'SWEEPS'          : 100000     # Total number of sweeps
  , 'SKIP'            : 100        # Number of sweeps before measurement (you don't need to measure too often!)
  } )
h5_infiles = pyalps.writeInputH5Files("parm9f",params);
or simply, if they already exist:
h5_infiles = pyalps.getInputH5Files(prefix='parm9f');
Have a preliminary taste:
pyalps.runApplication('dwa', h5_infiles[0]);
Detailed step by step instruction for running this example is illustrated here.
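Before launching such a large simulation, it can help to sanity-check the inhomogeneous chemical potential expression with plain Python, evaluated independently of ALPS (the trap curvatures below are copied from the parameter file above):

```python
def mu(x, y, z, L=100, mu0=4.05):
    """Site-dependent chemical potential, as specified in the 'mu' parameter string."""
    c = (0.0073752, 0.0036849, 0.0039068155)   # trap curvatures from the parameter file
    ctr = (L - 1) / 2.
    return mu0 - (c[0]*(x-ctr)**2 + c[1]*(y-ctr)**2 + c[2]*(z-ctr)**2)

# At the trap centre the chemical potential equals mu_homogeneous = 4.05;
# towards the lattice corner the parabolic trap pushes it strongly negative,
# so the outer region of the 100^3 lattice is essentially vacuum
center = mu(49.5, 49.5, 49.5)
corner = mu(0., 0., 0.)
```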
Density profile
The derivation is found here.
The cross-section density profile at z=0:
and the column-integrated density profile:
Time-of-flight image
The derivation is found here.
The time-of-flight image:
More intensive examples
Documentation of DWA code
Introduction to the physics
Optical lattice
We first look at the simplest case, i.e. a single particle of mass m which experiences a periodic potential

<math>V(\mathbf{r}) = \sum_{\alpha=x,y,z} V_{0\alpha} \sin^2(\pi r_\alpha) ,</math>

where the depths <math>V_{0\alpha}</math> and all lengths are given in the units of recoil energy <math>E_R = h^2/2m\lambda^2</math> and lattice spacing <math>a = \lambda/2</math> respectively.

The quantum mechanical behaviour of the single particle follows the Schrödinger equation

<math>\left[ -\frac{1}{\pi^2}\nabla^2 + V(\mathbf{r}) \right] \psi(\mathbf{r}) = E\, \psi(\mathbf{r}) ,</math>

which is clearly separable, to say the x-component:

<math>\left[ -\frac{1}{\pi^2}\frac{d^2}{dx^2} + V_{0x}\sin^2(\pi x) \right] \phi_q(x) = E_q\, \phi_q(x) .</math>

In the plane wave basis,

<math>\phi_q(x) = \sum_n c_n^{(q)}\, e^{i(q+2\pi n)x} ,</math>

we arrive at a tridiagonal diagonalization problem:

<math>\left[ \left(\frac{q}{\pi}+2n\right)^2 + \frac{V_{0x}}{2} \right] c_n^{(q)} - \frac{V_{0x}}{4}\left( c_{n-1}^{(q)} + c_{n+1}^{(q)} \right) = E_q\, c_n^{(q)} .</math>

The wannier function (localized at site <math>x_i</math>) is defined as:

<math>w(x - x_i) = \frac{1}{\sqrt{L}} \sum_q e^{-iq x_i}\, \phi_q(x) ,</math>

and from there, one can calculate the onsite interaction:

<math>U = g \int d^3r\, |w(\mathbf{r})|^4 , \qquad g = \frac{4\pi\hbar^2 a_s}{m} .</math>

After a little bit of algebra, we arrive at the hopping strength:

<math>t = -\int dx\, w^*(x) \left[ -\frac{1}{\pi^2}\frac{d^2}{dx^2} + V_{0x}\sin^2(\pi x) \right] w(x-1) .</math>

Finally, the Fourier transform of the wannier function is:

<math>w(k) = \int dx\, e^{-ikx}\, w(x) .</math>

These can be easily evaluated as follows:
import numpy;
import pyalps.dwa;

V0   = numpy.array([8. , 8. , 8.]);       # in recoil energies
wlen = numpy.array([843., 843., 843.]);   # in nanometers
a    = 114.8;                             # s-wave scattering length in Bohr radii
m    = 86.99;                             # mass in atomic mass units
L    = 200;                               # lattice size (along one direction)

band = pyalps.dwa.bandstructure(V0, wlen, a, m, L);
>>> band

Optical lattice:
================
V0    [Er] = 8         8         8
lamda [nm] = 843       843       843
Er2nK      = 154.89    154.89    154.89
L          = 200
g          = 5.68473

Band structure:
===============
t [nK] : 4.77051   4.77051   4.77051
U [nK] : 38.7018
U/t    : 8.11272   8.11272   8.11272

wk2[0 ,0 ,0 ] : 5.81884e-08
wk2[pi,pi,pi] : 1.39558e-08
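The Er2nK line in the output is just the recoil energy <math>E_R = h^2/2m\lambda^2</math> expressed in nanokelvin; it can be reproduced with a few lines of plain Python (physical constants hard-coded here, independent of pyalps):

```python
h   = 6.62607015e-34     # Planck constant [J s]
kB  = 1.380649e-23       # Boltzmann constant [J/K]
amu = 1.66053906660e-27  # atomic mass unit [kg]

m    = 86.99 * amu       # atomic mass used throughout this tutorial
wlen = 843e-9            # lattice laser wavelength [m]

Er_nK = h**2 / (2. * m * wlen**2) / kB * 1e9   # recoil energy in nK
# Er_nK comes out at about 154.89, matching the Er2nK entry above
```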
Model hamiltonian
The DWA code simulates the single band boson Hubbard model

<math>H = -t \sum_{\langle i,j \rangle} \left( b_i^\dagger b_j + b_j^\dagger b_i \right) + \frac{U}{2} \sum_i n_i (n_i - 1) - \sum_i \mu_i\, n_i</math>

with hopping strength <math>t</math>, onsite interaction strength <math>U</math>, and chemical potential <math>\mu</math> at finite temperature, via quantum Monte Carlo implemented in the directed worm algorithm. Here, <math>b_i</math> (<math>b_i^\dagger</math>) is the annihilation (creation) operator, and <math>n_i = b_i^\dagger b_i</math> is the number operator at site i. Bosons in an optical lattice are confined, say in a 3D parabolic trapping potential, i.e.

<math>\mu_i = \mu - K\, |\mathbf{r}_i - \mathbf{r}_0|^2 ,</math>

due to the gaussian beam waists as well as other sources of trapping, with <math>\mathbf{r}_0</math> the trap centre. At finite temperature T, the physics is essentially captured by the partition function

<math>Z = \mathrm{Tr}\, e^{-\beta H} = \sum_{\mathcal{C}} W(\mathcal{C})</math>

and physical quantities such as the local density

<math>\langle n_i \rangle = \frac{1}{Z} \sum_{\mathcal{C}} n_i(\mathcal{C})\, W(\mathcal{C})</math>

for some configuration <math>\mathcal{C}</math> in the complete configuration space, with inverse temperature <math>\beta = 1/T</math>. Here, the units will be cleverly normalized later on.
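As a minimal illustration of such thermal averages (a sketch only, in the atomic limit t = 0, where each site decouples and the configuration sum reduces to a sum over occupation numbers), the local density at the tutorial-1 parameters U = 16, mu = 32, T = 1 can be evaluated exactly:

```python
import math

U, mu, T = 16., 32., 1.
beta = 1. / T
Nmax = 20   # occupation cutoff, as in the tutorial parameter files

# Single-site energies E(n) = U/2 n(n-1) - mu n  (atomic limit, t = 0)
E = [0.5 * U * n * (n - 1) - mu * n for n in range(Nmax + 1)]

Z = sum(math.exp(-beta * En) for En in E)                               # partition function
density = sum(n * math.exp(-beta * E[n]) for n in range(Nmax + 1)) / Z  # local density <n>
# E(2) = E(3) = -48 are degenerate minima, so the density sits at 2.5
```

This degeneracy between n = 2 and n = 3 is why mu = 32 is an interesting operating point in DWA Tutorial 1.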
QMC Directed Worm Algorithm
The quantum Monte Carlo simulation is in fact a Markov chain random walk in the (worldlines) configuration space, importance-sampled by the configuration weight <math>W(\mathcal{C})</math>, which is just a positive number assigned to some particular configuration <math>\mathcal{C}</math>, for instance shown here. How <math>W(\mathcal{C})</math> is assigned depends on the model Hamiltonian as well as on the ergodic algorithm that satisfies detailed balance.
For the directed worm algorithm, the configuration is updated with the worm traversing to and from the extended configuration space to ensure ergodicity.
In addition, <math>n_i(\mathcal{C})</math> is the number of particles (or state) at site i at imaginary time 0 in configuration <math>\mathcal{C}</math>.
Each configuration update is known as a Monte Carlo sweep.
The complete step-by-step description of the directed worm algorithm can be found here, and the code implementation here.
The following table summarizes the basic options for the DWA simulation.
Option | Default | Remark |
---|---|---|
SWEEPS | 1000000 | total number of Monte Carlo configuration updates (sweeps) |
t | 1. | hopping strength |
U | 0. | onsite interaction strength |
mu | 0. | chemical potential |
K | 0. | parabolic trapping strength |
T | — | temperature |
Momentum distribution and time-of-flight images
The momentum distribution of bosons in an optical lattice at equilibrium can be expressed as:

<math>n(\mathbf{k}) = |w(\mathbf{k})|^2 \sum_{i,j} e^{i\mathbf{k}\cdot(\mathbf{r}_i-\mathbf{r}_j)} \langle b_i^\dagger b_j \rangle .</math>

Expressing the Fourier transform of the wannier function as

<math>w(\mathbf{k}) = \int d^3r\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, w(\mathbf{r})</math>

and the interference term as:

<math>S(\mathbf{k}) = \sum_{i,j} e^{i\mathbf{k}\cdot(\mathbf{r}_i-\mathbf{r}_j)} \langle b_i^\dagger b_j \rangle ,</math>

the momentum distribution reduces to the following form:

<math>n(\mathbf{k}) = |w(\mathbf{k})|^2\, S(\mathbf{k}) .</math>

To measure it in experiments, the optical lattice is momentarily switched off and the bosons expand freely, by assumption with the momentum previously gained from the lattice.
Measurements are performed in our classical world, and therefore a semiclassical treatment is already sufficient, i.e.

<math>\hbar\mathbf{k} = \frac{m\mathbf{r}}{t_{\rm tof}} ,</math>

where <math>t_{\rm tof}</math> is the time-of-flight taken by the bosons to move from the origin (experiment) to the detector probe at position <math>\mathbf{r}</math>. Here, we assume the simplest picture, i.e. 1) no interaction, 2) no collision, and 3) uniform motion.
Of course, at equilibrium, i.e. "after the bosons have travelled for a long time", the time-of-flight image will capture a density of

<math>n_{\rm tof}(\mathbf{r}) \propto n\!\left(\mathbf{k} = \frac{m\mathbf{r}}{\hbar t_{\rm tof}}\right) ,</math>

which indirectly probes the original momentum distribution of the lattice bosons. This form is only correct in the far-field limit, or in the limit of infinite time-of-flight (tof).
To improve upon the accuracy, we shall correct it with semiclassical dynamics during the time of flight.
For the bosons that originate from lattice site i at <math>\mathbf{r}_i</math> and reach the detector probe at <math>\mathbf{r}</math>, their time-dependent wavefunction must be:

<math>\psi_i(\mathbf{r}, t_{\rm tof}) \propto \exp\!\left( i\, \frac{m\, |\mathbf{r}-\mathbf{r}_i|^2}{2\hbar\, t_{\rm tof}} \right) ,</math>

where the kinetic energy of the free bosons must be:

<math>E_i = \frac{m\, |\mathbf{r}-\mathbf{r}_i|^2}{2\, t_{\rm tof}^2} .</math>

Therefore, the corrected time-of-flight image must be:

<math>n_{\rm tof}(\mathbf{r}) \propto |w(\mathbf{k})|^2 \sum_{i,j} e^{i\mathbf{k}\cdot(\mathbf{r}_i-\mathbf{r}_j)}\, e^{i\phi\,(r_i^2 - r_j^2)}\, \langle b_i^\dagger b_j \rangle , \qquad \mathbf{k} = \frac{m\mathbf{r}}{\hbar t_{\rm tof}} ,</math>

where the dependence of the initial site on the wannier envelope function is neglected. Finally, we arrive at:

<math>n_{\rm tof}(\mathbf{r}) \propto |w(\mathbf{k})|^2\, S_{\rm tof}(\mathbf{k}) , \qquad S_{\rm tof}(\mathbf{k}) = \sum_{i,j} e^{i\mathbf{k}\cdot(\mathbf{r}_i-\mathbf{r}_j)}\, e^{i\phi\,(r_i^2 - r_j^2)}\, \langle b_i^\dagger b_j \rangle ,</math>

with time-of-flight phase <math>\phi_\alpha = \frac{m\, a_\alpha^2}{2\hbar\, t_{\rm tof}}</math> per direction, the site coordinates being measured in units of the lattice spacing <math>a_\alpha = \lambda_\alpha/2</math>.
For example, the time-of-flight phase for Rb-87 bosons (mass 86.99 a.m.u.) with a time-of-flight of 15.5 ms
a. expanded from an isotropic optical lattice of wavelength 765 nm:
>>> pyalps.dwa.tofPhase(time_of_flight=15.5, wavelength=765, mass=86.99) 0.006464602556863682
b. expanded from an optical lattice of wavelength 843 nm along x-direction and 765 nm along y- and z- directions:
>>> pyalps.dwa.tofPhase(time_of_flight=15.5, wavelength=[843,765,765], mass=86.99) [0.007850080468935228, 0.006464602556863682, 0.006464602556863682]
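The returned values are consistent with the formula <math>\phi = m a^2 / 2\hbar t_{\rm tof}</math> with lattice spacing a = λ/2; this can be checked with plain Python (constants hard-coded, independent of pyalps):

```python
hbar = 1.054571817e-34   # reduced Planck constant [J s]
amu  = 1.66053906660e-27 # atomic mass unit [kg]

def tof_phase(time_of_flight_ms, wavelength_nm, mass_amu):
    """Time-of-flight phase  phi = m a^2 / (2 hbar t),  with  a = lambda/2."""
    t = time_of_flight_ms * 1e-3
    a = wavelength_nm * 1e-9 / 2.
    return mass_amu * amu * a**2 / (2. * hbar * t)

phi_765 = tof_phase(15.5, 765, 86.99)   # ~0.0064646, as returned by pyalps.dwa.tofPhase
phi_843 = tof_phase(15.5, 843, 86.99)   # ~0.0078500
```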
The following table summarizes the basic option to turn on time-of-flight imaging options.
Option | Default | Remark |
---|---|---|
tof_phase | 0. | time-of-flight phase |
Symmetry leads to further simplification
Symmetries:
Therefore:
Define green function:
In reduced coordinates, , we redefine green function:
and therefore the momentum distribution/ time-of-flight image:
Here, we distinguish the momentum distribution from the interference
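For a homogeneous, periodic lattice, the interference term is just the lattice Fourier transform of the reduced green function. A numpy sketch in 1D with a model exponentially decaying G (purely illustrative synthetic data, not DWA output):

```python
import numpy as np

L = 32
R = np.arange(L)
dist = np.minimum(R, L - R)        # periodic distance on the ring
G = 0.5 * np.exp(-dist / 2.0)      # model green function G(R), symmetric under R -> L-R

S = np.fft.fft(G)                  # interference term S(k) = sum_R e^{ikR} G(R)

# The symmetry G(R) = G(L-R) makes S(k) purely real,
# and S(k) peaks at k = 0 (the condensate-like coherence peak)
```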
Please refer to the list of measurement observables provided by the DWA code.
Measurements
Basic options
The following table summarizes the basic options for measurements available to the user of our DWA code. When unspecified in the parameter list, they assume the default values.
Option | Default | Remark |
---|---|---|
SKIP | 1 | number of Monte Carlo configuration updates (sweeps) per measurement |
MEASURE | true | turns on/ off measurements |
MEASURE[Simulation Speed] | true | observe time taken per checkpoint |
Measurement observables
When the measurement mode is turned on, the following is a list of common observables available to the user.
Observable | Boolean control | Binning analysis | Remark | |
---|---|---|---|---|
Total Particle Number | — | detailed | measure always | |
Energy | — | detailed | measure always | |
Energy:Vertex | — | detailed | measure always | |
Energy:Onsite | — | detailed | measure always | |
Density | — | detailed | measure if lattice is homogeneous | |
Energy Density | — | detailed | measure if lattice is homogeneous | |
Energy Density:Vertex | — | detailed | measure if lattice is homogeneous | |
Energy Density:Onsite | — | detailed | measure if lattice is homogeneous | |
Total Particle Number^2 | measure_number2_ | detailed | — | |
Energy^2 | measure_energy2_ | detailed | — | |
Density^2 | measure_density2_ | detailed | measure if lattice is homogeneous | |
Energy Density^2 | measure_energy_density2_ | detailed | measure if lattice is homogeneous | |
Winding Number^2 | measure_winding_number2_ | simple | measure if lattice is periodic | |
Local Kink:Number | measure_local_num_kinks_ | simple | — | |
Local Density | measure_local_density_ | simple | — | |
Local Density^2 | measure_local_density2_ | simple | — | |
Green Function:0 | — | detailed | measure always | |
Green Function:1 | — | detailed | measure always | |
Green Function | measure_green_function_ | simple | — | |
Green Function:TOF | measure_green_function_ | simple | measure if tof_phase != 0 | |
Momentum Distribution:0 | — | detailed | measure if tof_phase == 0 | |
Momentum Distribution:TOF:0 | — | detailed | measure if tof_phase != 0 |
More options
The following table summarizes more options for measurements available to the user of our DWA code. When unspecified in the parameter list, they assume the default values.
Option | Default | Boolean control |
---|---|---|
MEASURE[Total Particle Number^2] | false | measure_number2_ |
MEASURE[Energy^2] | false | measure_energy2_ |
MEASURE[Density^2] | false | measure_density2_ |
MEASURE[Energy Density^2] | false | measure_energy_density2_ |
MEASURE[Local Kink: Number] | false | measure_local_num_kinks_ |
MEASURE[Winding Number] | false | measure_winding_number_ |
MEASURE[Local Density] | false | measure_local_density_ |
MEASURE[Local Density^2] | false | measure_local_density2_ |
MEASURE[Green Function] | false | measure_green_function_ |
Thermalization
Description
- During thermalization, measurements are not to be trusted.
- Only the trend in the timeseries measurements is used to determine whether thermalization has been reached.
- After thermalization, the worldlines configuration is retained, but the measurements are to be discarded.
- For advanced users, measurements could be turned off for better code performance by setting "MEASURE" to "False".
- If thermalization turns out to be unstable, increase "SWEEPS" by an order of magnitude.
Default thermalization script
The following is the default thermalization script provided by us, which the user can easily replace with their own.
import scipy;
import pyalps;
import pyalps.hdf5;

def thermalized(h5_outfile, observables, tolerance=0.01, simplified=False, includeLog=False):
    if isinstance(observables, str):
        observables = [observables];
    results = [];
    for observable in observables:
        timeseries = pyalps.hdf5.iArchive(h5_outfile).read("/simulation/results/" + observable)['timeseries']['data'];
        mean  = timeseries.mean();
        index = scipy.linspace(0, timeseries.size-1, timeseries.size);
        timeseries = scipy.polyval(scipy.polyfit(index, timeseries, 1), index);   # timeseries gets fitted linearly
        percentage_increment = (timeseries[-1] - timeseries[0])/mean;
        result = abs(percentage_increment) < tolerance;
        if not includeLog:
            results.append(result);
        else:
            results.append({'observable': observable, 'percentage_increment': percentage_increment, 'thermalized': result});
    if includeLog or not simplified:
        return results;
    return reduce(lambda x,y: x*y, results);
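The linear-trend test above can be exercised on synthetic data with numpy alone (numpy.polyfit in place of the scipy calls; same logic, no ALPS installation needed):

```python
import numpy as np

def trend_thermalized(timeseries, tolerance=0.01):
    """Fit a straight line; declare thermalized if the fitted drift across
    the series is below `tolerance` relative to the mean (as in the ALPS script)."""
    index = np.arange(timeseries.size)
    fit = np.polyval(np.polyfit(index, timeseries, 1), index)
    return abs((fit[-1] - fit[0]) / timeseries.mean()) < tolerance

rng = np.random.default_rng(0)
drifting = np.linspace(1.0, 2.0, 1000) + 0.01 * rng.normal(size=1000)  # still warming up
flat     = 1.5 + 0.01 * rng.normal(size=1000)                          # thermalized
```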
Autocorrelations
- It is wasteful to measure too frequently, because only uncorrelated measurements are useful for obtaining accurate statistics.
- Unnecessary measurements may sometimes slow down the code by a few orders of magnitude, so measure wisely!
- Statistically, the autocorrelation time <math>\tau</math> is the number of MC measurements for 63% of the memory to be lost on average.
- To lose 95% (99%) of the memory on average, <math>3\tau</math> (<math>5\tau</math>) consecutive MC measurements are needed.
- Therefore, <math>\tau \approx 0</math> would be nearly perfect to say that the obtained measurements are uncorrelated.
- The user can increase (decrease) the value of "SKIP" to decrease (increase) the measured value of <math>\tau</math> statistically.
- The value of <math>\tau</math> will not affect the accuracy of the Monte Carlo statistics, i.e. the MC error is intrinsically corrected by the factor <math>\sqrt{1+2\tau}</math>. However, it will affect the speed of MC convergence.
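The effect of <math>\tau</math> on the error bar can be demonstrated with a numpy-only binning analysis on synthetic correlated data (a sketch of the idea behind the ALPS ALEA binning analysis, not the library itself):

```python
import numpy as np

def binned_error(data, binsize):
    """Error of the mean estimated from means of consecutive bins."""
    nbins = len(data) // binsize
    bins = data[:nbins * binsize].reshape(nbins, binsize).mean(axis=1)
    return bins.std(ddof=1) / np.sqrt(nbins)

# AR(1) series with autocorrelation time tau ~ alpha/(1-alpha) = 9
rng = np.random.default_rng(1)
alpha, n = 0.9, 1 << 14
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(size=n)
for i in range(1, n):
    x[i] = alpha * x[i - 1] + noise[i]

naive  = binned_error(x, 1)     # pretends all measurements are independent
honest = binned_error(x, 512)   # bins much longer than tau: converged error bar
# honest exceeds naive by roughly sqrt(1 + 2*tau) ~ 4 for this series
```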
Convergence
Description
- The ALPS ALEA Library automatically checks for the convergence of MC statistics sampled by detailed binning.
- Convergence here means convergence of MC error.
- Convergence is achievable with more good (uncorrelated) measurements.
Default convergence script
The following is the default convergence script provided by us, which the user can easily replace with their own.
import pyalps;
import pyalps.hdf5;

def converged(h5_outfile, observables, simplified=False, includeLog=False):
    if isinstance(observables, str):
        observables = [observables];
    results = [];
    for observable in observables:
        measurements = pyalps.hdf5.iArchive(h5_outfile).read("/simulation/results/" + observable);
        result = (measurements['mean']['error_convergence'] == 0);
        if not includeLog:
            results.append(result);
        else:
            mean  = measurements['mean']['value'];
            error = measurements['mean']['error'];
            tau   = measurements['tau']['value'];
            count = measurements['count'];
            results.append({'observable': observable, 'converged': result,
                            'mean': mean, 'error': error, 'tau': tau, 'count': count});
    if includeLog or not simplified:
        return results;
    return reduce(lambda x,y: x*y, results);
© 2013 by Ping Nang Ma (Tama Ma), Lode Pollet, and Matthias Troyer