In this tutorial, we will introduce the tools used to perform time evolution simulations using the time-dependent variational principle (TDVP) algorithm for matrix product states. The main advantage of TDVP over the time-evolving block decimation (TEBD) algorithm is that TDVP works with Hamiltonians with arbitrary interaction lengths, while TEBD can normally only handle nearest-neighbour interactions.

For our example calculation, we will look at a chain of spin-1/2s, where we have a domain wall of spins pointing up and down in the {$z$} direction. {$$|\Psi(t=0)\rangle = |\cdots\uparrow\uparrow\uparrow\uparrow\uparrow\downarrow\downarrow\downarrow\downarrow\downarrow\cdots\rangle.$$} We then time evolve this state under the XX Hamiltonian, which is a version of the anisotropic Heisenberg model where the spin coupling in the {$z$} dimension is zero, and the couplings in the {$x$} and {$y$} dimensions have equal magnitude, {$$H_\text{XX} = \sum_j (S^x_j S^x_{j+1} + S^y_j S^y_{j+1}) = \frac{1}{2} \sum_j (S^-_j S^+_{j+1} + S^+_j S^-_{j+1}).$$} From the second form of this Hamiltonian, we can see that its effect is to swap neighbouring up and down spins (this model is mathematically equivalent to a free fermion model on a lattice, where up and down spins map to occupied and unoccupied sites respectively). In fact, the dynamics of the magnetisation of this state under this Hamiltonian are known exactly [1]: placing the domain wall between sites {$-1$} and {$0$}, we have {$$S^z(t, x) = -\frac{1}{2} \sum_{n=-x}^x J^2_n(t), \qquad x \geq 0,$$} {$$S^z(t, x) = -S^z(t, -x-1), \qquad x < 0,$$} where {$J_n(t)$} is a Bessel function of the first kind.
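As a reference for the simulations below, this exact profile is easy to evaluate numerically. Here is a minimal Python sketch using SciPy's Bessel functions (the function name sz_exact is our own, not part of the toolkit):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_n(t)

def sz_exact(t, x):
    """Exact magnetisation S^z(t, x) for the domain-wall quench [1].

    The domain wall sits between sites x = -1 and x = 0, with spin-down
    sites at x >= 0 and spin-up sites at x < 0."""
    if x < 0:
        return -sz_exact(t, -x - 1)
    n = np.arange(-x, x + 1)
    return -0.5 * np.sum(jv(n, t) ** 2)

# At t = 0 the profile is the initial state: -1/2 on the right, +1/2 on the left.
print(sz_exact(0.0, 0), sz_exact(0.0, -1))  # prints: -0.5 0.5
```

In particular, sz_exact(t, -1) gives {$\frac{1}{2} J^2_0(t)$}, the curve for the site just to the left of the domain wall that we compare against below.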

For this simulation, we use the spin chain lattice file with {$U(1)$} symmetry (the Hamiltonian and initial state break the full {$SU(2)$} symmetry, so we have to use a smaller symmetry group), and we can generate the initial state for a lattice of 20 sites using mp-construct:

spinchain-u1 -o lattice
mp-construct -l lattice -o psi.t0.0 --finite 0.5:0.5:0.5:0.5:0.5:0.5:0.5:0.5:0.5:0.5:-0.5:-0.5:-0.5:-0.5:-0.5:-0.5:-0.5:-0.5:-0.5:-0.5

Now we are ready to perform time evolution on our state using mp-tdvp:

mp-tdvp -H lattice:H_J1t -w psi.t0.0 -o psi -t 0.1 -n 100

Here, the -H (--Hamiltonian) flag sets the Hamiltonian to use for the time evolution; for the XX Hamiltonian we use lattice:H_J1t (the other operators available can be seen using the mp-lattice-info command). The -w (--wavefunction) flag specifies the initial state, -o (--output) specifies the prefix for the output files, which will have filenames of the form <prefix>.t<time>, -t (--timestep) specifies the size of the timestep, and -n (--num-timesteps) the number of timesteps.

Now we want to calculate the results of our time evolution simulation: for simplicity, we will first only examine the {$z$}-component of the spin just to the left of the domain wall. To calculate expectation values from the full time evolution simulation, we could use a bash loop, such as

for t in $(seq 0.0 0.1 10.0); do echo -n "$t "; mp-expectation psi.t$t lattice:"Sz(9)" -r; done

however, if we have access to multiple CPU cores, then we can speed this up by calculating each expectation value in parallel using GNU Parallel:

parallel -k 'echo -n "{} "; mp-expectation psi.t{} lattice:"Sz(9)" -r' ::: $(seq 0.0 0.1 10.0)

The -k (--keep-order) flag ensures the output is printed in order.

It is quite common to expand the bond dimension very rapidly at the beginning of the calculation until it reaches some predetermined maximum, and then keep it fixed throughout the entire simulation, but this is ineffective compared to using a slower, adaptive expansion.

The main reason is that the error in a fixed-bond-dimension simulation generally grows over time, until it becomes quite considerable, whereas the error in an adaptively expanding simulation is fairly constant, although the computation time per timestep will increase until it becomes too expensive to continue. This means that in an adaptive expansion simulation, the results will have roughly the same accuracy for the whole simulation, with some limitation in time depending on how quickly entanglement builds up, whereas the fixed bond dimension simulations can go on for as long as you please, although the accuracy will deteriorate as you go on, with no indication of when the errors become too significant to continue.

Now, assuming you have run the above commands correctly, you should get an expectation value of 0.5 for each timestep. Why is this? The TDVP algorithm calculates the optimal time evolution for a fixed bond dimension. We started with a product state, which has bond dimension 1, and so by default the program performs the time evolution with bond dimension 1, where the optimal result is to not change the wavefunction at all! If the initial wavefunction has a bond dimension much larger than 1, time evolution with a fixed bond dimension can sometimes work fine. In general, however, the best solution is to use an adaptively expanding bond dimension: the entanglement in the state generally increases during time evolution, and we need to increase the bond dimension to compensate for this.
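To see why bond dimension 1 cannot capture the evolved state, recall that the bond dimension needed across a cut equals the Schmidt rank of the state there. A small NumPy illustration (independent of the toolkit):

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# A two-spin product state |up,down>: its coefficient matrix has rank 1,
# so an MPS only needs bond dimension 1 across the cut.
product = np.outer(up, down)
print(np.linalg.matrix_rank(product))  # prints: 1

# The swap terms S^-S^+ + S^+S^- generate superpositions such as
# (|up,down> + |down,up>)/sqrt(2), whose coefficient matrix has rank 2,
# so representing it faithfully requires bond dimension 2.
entangled = (np.outer(up, down) + np.outer(down, up)) / np.sqrt(2)
print(np.linalg.matrix_rank(entangled))  # prints: 2
```

A bond-dimension-1 simulation has no room for the second kind of state, so the best it can do is leave the product state unchanged.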

To enable the use of an adaptively expanding bond dimension, we can use the -d (--eigen-cutoff) flag to specify the cutoff value for the singular values of the expansion matrix. Decreasing the value of this parameter increases the rate at which the bond dimension will grow during the simulation, thus increasing the accuracy at the cost of longer computation time. Here, we will try using a cutoff of {$10^{-6}$}:

mp-tdvp -H lattice:H_J1t -w psi.t0.0 -o psi -t 0.1 -n 100 -d 1e-6

Here we can see that the maximum bond dimension in the chain (shown as the MaxStates number in the output) increases during the simulation, and the program slows down as MaxStates increases. We can then calculate the expectation values as above and redirect them into an output file, by appending > data.dat to the end of the line containing the for loop or parallel command, and then plot the results using gnuplot (or your favourite plotting program):

plot "data.dat", 0.5*besj0(x)**2

As we can see, the numerical results lie right on top of the exact results.

Another thing to note is that the energy (shown as E in the output) deviates only very slightly from its initial value of zero. This is due to a property of TDVP: it conserves the energy of a time-independent Hamiltonian, as well as any constants of motion that can be written as a sum of single-site operators. While this is a neat property, it can sometimes be misleading, since other time evolution methods (such as TEBD) will show much greater changes in the energy over time due to truncating the bond dimension, but this does not necessarily make a TDVP simulation more accurate than a TEBD simulation. The first TDVP simulation we ran in this tutorial is a good example: it conserves the energy exactly for the whole evolution, but it does so by not changing the state at all, which leads to other quantities, such as the expectation value of {$S^z$}, being completely wrong. Furthermore, for a method like TEBD, the deviation of the energy from its initial value can be used to quickly assess the accuracy of the time evolution simulation, which you cannot do for TDVP (unless the energy does deviate, in which case something has gone very wrong!).

To calculate the expectation values for the full chain, we can use a double loop, which is easy to do with GNU Parallel:

parallel -k 'echo -n "{1} {2} "; mp-expectation psi.t{1} lattice:"Sz({2})" -r' ::: $(seq 0.0 0.1 10.0) ::: $(seq 0 19) > data-full.dat

We can then plot this in gnuplot using

plot "data-full.dat" using 2:1:3 with image

If we continued to evolve to later times, the wavefront would reflect off the boundary, which would interfere with the results. Hence, if we wanted to look at the evolution at later times, we would need to redo the time evolution with a larger system size. A better alternative is to use infinite boundary condition (IBC) wavefunctions, where we work with an infinite-size system but only perturb a finite-sized region, which can be dynamically expanded during the time evolution simulation as the wavefront spreads outwards.

Infinite TDVP

We can also consider a similar quench with a translation-invariant initial state {$$|\Psi(t=0)\rangle = |\cdots\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow\cdots\rangle.$$} In this case, placing an up spin on site {$0$}, the expectation value of {$S^z$} is given by [2] {$$S^z(t, x) = \frac{(-1)^x}{2} J_0(2t).$$} We can generate this state as an iMPS with a two-site unit cell using mp-construct

mp-construct -l lattice -o psi-inf.t0.0 0.5:-0.5

We can then evolve this state using mp-itdvp and calculate the expectation values:

mp-itdvp -H lattice:H_J1t -w psi-inf.t0.0 -o psi-inf -t 0.1 -n 80 -d 1e-6
parallel -k 'echo -n "{} "; mp-expectation psi-inf.t{} lattice:"Sz(0)" -r' ::: $(seq 0.0 0.1 8.0) > data-inf.dat

We can plot the results in gnuplot with

plot "data-inf.dat", 0.5*besj0(2*x)

Something to note here is that the growth of the entanglement is much greater in the translation-invariant state than in the domain-wall state. This is because the domain wall is the only part of the latter state which is nonstationary under the time evolution, while far away from the domain wall wavefront the state is close to an eigenstate. The translation-invariant state, on the other hand, essentially has a domain wall between every pair of nearest neighbours, so the state is nonstationary at every point of the lattice right from the beginning, and the entanglement increases even more as the domain wall wavefronts move through the chain. As a result, the domain-wall state has a linear growth of bond dimension during the time evolution, while the translation-invariant state has an exponential growth of bond dimension.


  1. We can plot the absolute error of the expectation value of {$S^z$} in the infinite simulation in gnuplot using
set logscale y
plot "data-inf.dat" using 1:(abs($2-0.5*besj0(2*$1))) with lines
The two main contributions to the error are the finite bond dimension and the size of the timestep. Try to assess how the error changes as you vary the bond expansion cutoff (-d) and the timestep (-t). When is the error dominated by the timestep, and when is it dominated by the finite bond dimension?
(You may also want to decrease the number of timesteps (-n) as you adjust these parameters, to stop the calculation from taking too long. Alternatively, you can set the number of timesteps to be quite large and abort the simulation (by pressing Ctrl+C) once it starts to take too long to perform a timestep, or once the tool has been running for a certain period of time, e.g. one minute.)
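If you prefer Python to gnuplot for this error analysis, here is a sketch of the same comparison (assuming the two-column t, {$\langle S^z(0)\rangle$} format of data-inf.dat produced above; the file name and function name are our own):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

def neel_abs_error(t, sz):
    """Absolute error of a measured S^z(0) against the exact result
    (1/2) J_0(2t) for the spin-up sublattice of the Neel state [2]."""
    t = np.asarray(t, dtype=float)
    sz = np.asarray(sz, dtype=float)
    return np.abs(sz - 0.5 * jv(0, 2.0 * t))

# Typical usage, assuming data-inf.dat has columns t and <S^z(0)>:
# t, sz = np.loadtxt("data-inf.dat", unpack=True)
# err = neel_abs_error(t, sz)
```

The error array can then be plotted on a log scale, just as with the gnuplot command above.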


[1] T. Antal, Z. Rácz, A. Rákos and G. M. Schütz, Transport in the XX chain at zero temperature: Emergence of flat magnetization profiles, Phys. Rev. E 59, 4912 (1999), doi:10.1103/PhysRevE.59.4912.
[2] P. Barmettler, M. Punk, V. Gritsev, E. Demler and E. Altman, Quantum quenches in the anisotropic spin-{$\frac{1}{2}$} Heisenberg chain: different approaches to many-body dynamics far from equilibrium, New J. Phys. 12, 055017 (2010), doi:10.1088/1367-2630/12/5/055017.
Page last modified on March 28, 2024, at 09:47 AM