Please remember that you have access to all of the source code. If an error
occurs, you can inspect the code to understand in detail what has
caused the program to stop. The scripts mcg and mcb
are very helpful for finding your way through the code, and the script
phelp explains the meaning of the variables.
If you really cannot track down the error, you may also file a support request to one of the developers. In this case please use the script mreport. mreport compiles a .tgz file of all relevant files in a name directory, which helps us to see your setup. Please attach this file to your email. Note that mreport also collects some information about your computer and installed software.
This error message occurs if the program tries to allocate more memory than your computer has. Inspect the log file to see how much memory is needed. Remember that there are usually four steps when running MCTDH: building the DVRs, building the operator, building the initial wavefunction, and running the propagation. Only the propagation step is expected to consume a large amount of memory.
Note that you can use at most 2 GB on a 32-bit machine. On 64-bit
architectures you may use up to 8 GB in
mctdh/propagation and in potfit. For all other programs of the
package, the 2 GB limit still exists, because of the use of
integer*4.
To check how much memory is available on your computer, run
mmemtest84.
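mmemtest84 probes directly how large an array can be allocated. As a quick cross-check with standard tools, one can also inspect the per-process limits and the installed RAM; a small sketch, assuming a Linux system (these commands are not part of the MCTDH package):

```shell
# Per-process resource limits; 'unlimited' (or a large value) for the
# virtual-memory limit is what the propagation step needs:
ulimit -v
ulimit -a

# Total RAM installed (Linux only):
grep MemTotal /proc/meminfo
```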
***** End of file found! ***** last line no.: ...
This error message occurs if the program tries to read beyond
the end of an input file because the input file is buggy. A
common cause is that the end-input (or
end-operator) line is missing or misspelled. A curious
error occurred once when sending input files via ftp: somehow the
last newline character -- the character that terminates a
line -- got lost, and the end-input line could not be read.
The solution is simple, just add a blank line after
end-input.
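Whether an input file ends with a proper newline can be checked with standard shell tools; a small sketch (the file name buggy.inp and its contents are placeholders):

```shell
# Create an input file whose end-input line lacks the terminating newline:
printf 'RUN-SECTION\nend-run-section\nend-input' > buggy.inp

# Show the last byte of the file; here it is the 't' of "end-input",
# not a newline, so the end-input line could not be read:
tail -c 1 buggy.inp | od -An -c

# The fix: append a final newline (i.e. a blank line after end-input):
printf '\n' >> buggy.inp
tail -c 1 buggy.inp | od -An -c    # now shows \n
```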
Redimension ... in ....inc. Minimum: ...
Increase parameter ...
MCTDH allocates most of its memory dynamically. This is done
by employing C routines. (See also Allocation error in ...). However, there are
some large arrays with fixed dimensions, because they appear in
one of the common blocks (which in turn are in the include
files). The error message appears if you run out of the
dimension of one of those arrays. The solution is simple: edit
the include file and modify the parameter statement which
defines the dimension. If you do not know where to find the
parameter statement that needs to be modified, use the script
mcg (MCTDH code grep) to find out (try mcg -h). Type
mcg -wi <variable-name-to-be-altered> include
If there is too much output, try
mcg -ui 'parameter.*<variable-name-to-be-altered>' include
There are some parameters which are not defined in an include
file. In such a case one must drop "include" from the above
command, so that mcg searches through the whole MCTDH directory
rather than through the include files only.
After one has altered the parameter(s), one must of course
re-compile the program in question (compile mctdh or
compile potfit or compile overlap or ...) or
the whole package (compile all).
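The workflow can be mimicked on a toy include file with plain grep (the file demo.inc below is made up for illustration; inside the real MCTDH tree one would use mcg as described above):

```shell
# A mock include file with a fixed-dimension parameter statement:
cat > demo.inc <<'EOF'
      integer mbaspar
      parameter (mbaspar = 100)
EOF

# Locate the parameter statement -- this is what
# mcg -ui 'parameter.*mbaspar' include
# does inside the MCTDH source tree:
grep -in 'parameter.*mbaspar' demo.inc

# After raising the value in an editor, re-compile, e.g.: compile mctdh
```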
ERROR in subroutine rddvrdef : Increase MBASPAR in griddat.inc. Minimum: *****
If this message appears, one should not increase the parameter mbaspar. This error is most likely caused by one of the following two reasons:
Cannot open file <....> message appears, although the file exists.
First check by running "ls <....>" whether the file <....> exists; one may have given a wrong path. Note that in MCTDH, relative paths are always relative to the location of the input file. If the file exists, the problem is likely due to the fact that FORTRAN cannot open a file twice. Hence it may become necessary to copy a file so that the same information can be read via two different file paths. Such problems may arise when using block-SPF and block-A, or when using orthogonalise.
Stack size problems / mysterious segmentation faults
A few of the older analyse programs allocate big arrays of memory statically. Since such memory is allocated on the stack, you can run into problems if the maximum stack size is limited (this can be checked with "ulimit -a"). This problem manifests itself in segmentation violations that occur immediately after program startup. To work around it, you can disable the limit on the stack size with "ulimit -s unlimited". However, you might not be allowed to do this on your system; in this case, try talking to your system administrator. If that does not help, you can decrease the needed stack size by changing the appropriate parameters in the affected programs.

To avoid redundant configurations the numbers of SPFs, nk, must satisfy the relation

nk^2 ≤ Product(l=1..f) nl
If there are only two particles (f=2), this implies that the
(two) numbers of SPFs must be identical. In almost all other
cases the relation above does not impose a restriction.
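The condition can be checked with a few lines of shell arithmetic; a small illustrative helper, not part of the MCTDH package:

```shell
# Check the redundancy condition nk^2 <= n1*n2*...*nf for a set of
# SPF numbers, given as one integer per mode.
check_spf() {
  prod=1
  for n in "$@"; do prod=$((prod * n)); done
  for n in "$@"; do
    if [ $((n * n)) -gt "$prod" ]; then
      echo "violated: nk=$n, nk^2=$((n * n)) > product=$prod"
      return 1
    fi
  done
  echo "ok: product=$prod"
}

check_spf 4 4              # f=2: equal numbers    -> ok
check_spf 3 4 5            # f=3: 25 <= 60         -> ok
check_spf 10 2 3 || true   # 100 > 60              -> violated
```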
If a unit operator appears in the product form of a
Hamiltonian, it will simply be ignored if the Hamiltonian is
built with usediag. However, if the bra and ket
wavefunctions of a matrix element are different and have
different sets of SPFs, the unit operator is turned into a
non-unit overlap matrix and cannot be ignored. In such a case
nodiag must be set. Note:
usediag is used for the system Hamiltonian, eigenf,
meigenf, expect, pexpect, and for separable operators for
flux.
nodiag is used for operate, fmat, and flux.
The MCTDH program tries to determine the usediag/nodiag type automatically. If one needs a usediag operator for some analyse routine, usediag has to be specified explicitly. nodiag is the default, unless MCTDH recognizes that the operator should be usediag (i.e. for system, eigenf, meigenf, and expect).
In rare cases one may have to define an operator twice, once as nodiag and once as usediag. Such a situation occurs if one wants to use the same operator for, say, operate and eigenf.
Autocorrelation function is wrong.
If the file auto shows data which does not make sense,
one has probably used the auto keyword in a situation where this is
not allowed.
If one sets the auto keyword, the following warning is written to
the log file:
WARNING in subroutine Chkprop:
Symmetric Hamiltonian and real initial state assumed when calculating
autocorrelation function.
Use "cross" if these conditions are not
fulfilled.
To understand why auto can be used only for symmetric
Hamiltonians and real initial states, please see the MCTDH review
(2000), Eqs. (164-167), or the MCTDH lecture notes,
Eqs. (1.16)-(1.21). Both can be downloaded from the MCTDH web site.
If the above conditions are not fulfilled, one has to
use the keyword cross instead. (See the HTML docu for cross.) As
the so-called t/2-trick is then no longer used, the autocorrelation
function generated by cross is only as long as the propagation
time, not twice as long.
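In sketch form (notation follows the review: H complex symmetric, so that (e^{-iHt})^T = e^{-iHt}, and Ψ(0) real), the t/2-trick rests on the identity:

```latex
a(2t) = \langle \Psi(0) \,|\, e^{-2iHt} \,|\, \Psi(0) \rangle
      = \langle \Psi^{*}(t) \,|\, \Psi(t) \rangle ,
\qquad \Psi(t) = e^{-iHt}\,\Psi(0) .
```

Hence one propagation up to time T yields the autocorrelation function up to 2T. With cross, bra and ket wavefunctions differ and this identity is not available, so the autocorrelation function is only as long as the propagation itself.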
Slow propagation. The integrators take very small step sizes
This problem occurs if the Hamiltonian has very large
eigenvalues. These very large (absolute) eigenvalues are almost
always due to large potential values. One has to cut the
potential at some reasonable value. This is legitimate, because
the wavepacket avoids regions of high potential energy.
The cutting of the potential is most conveniently done by setting
vcut < ... in the Operator-Section of the potfit
input file. (Note that the v < ... statement of
the Correlated-Weights-Section must refer to a lower value than the
cut.) In the case of an "exact" calculation, a cut may be applied in
the Operator-Section of the MCTDH input. In the general case, a
sensible cut may be difficult to implement. The step functions
step and rstep may be used in the Hamiltonian
section to serve this purpose.
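For illustration, the relevant part of a potfit Operator-Section with a cut at 2 eV might look roughly as follows (a sketch; the value is a placeholder and the surrounding sections are omitted, see the HTML documentation of potfit for the full syntax):

```
OPERATOR-SECTION
    vcut < 2.0, ev
end-operator-section
```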
Time, fs, time-not-fs, and energy-not-ev
The MCTDH package uses atomic units (au) throughout. To simplify life, the user may use other units when specifying parameters. In this case the unit has to be appended to the value, e.g. eshift = 1.5,ev. With times, the situation is more subtle. (This is because units were introduced after we had chosen to use fs for times.) In the input file (*.inp) all times are assumed to be in fs and a unit must not be given. In the operator file (*.op), however, all variables are in au, and if one wants to input a time in fs, one has to add the unit fs. This also holds for a Parameter-Section or an alter-parameter block of the input file.
When studying a model problem in dimensionless coordinates an automatic conversion, fs -> au, may not be wanted. In such a case one may give the keyword "time-not-fs" in the Run-Section. This keyword inhibits the conversion and all times in in- and output are in atomic units (or in the dimensionless units one has adopted).
Similarly there is the keyword "energy-not-ev" which inhibits the conversion to eV of the energies printed to the output file. The energies printed to the output file still carry the label "eV", but − when "energy-not-ev" is given − they actually are in au (or in the dimensionless units one has adopted).
Some pl-scripts, like plgpop, plnat, plqdq, and plstate, require that the option -n is set when "time-not-fs" was set during the MCTDH run. For plspec (and autospec84) one must give "no" as the unit argument.
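The fs-to-au conversion applied to, e.g., a time parameter in an .op file can be verified with a one-liner (1 au of time is approximately 0.024188843 fs):

```shell
# Number of atomic units of time in one femtosecond:
awk 'BEGIN { printf "%.4f\n", 1 / 0.024188843 }'   # -> 41.3414
```

So a parameter given as 10.0,fs in an operator file corresponds internally to about 413.414 au.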
Parameter already assigned : ...
Label already assigned : ...
Parameters and operator labels cannot be re-assigned. When one tries to assign a value to an already existing parameter, the message:
Parameter already assigned : <parameter name>
is printed to the log file, and the re-assignment is simply ignored. A similar rule applies to labels. This is a very sensible rule, as it allows us to overwrite parameters or labels which are predefined in an operator file. The MCTDH program first evaluates the command line options, then a Parameter-Section in the input file (if such a section exists), then an alter-parameter block in the Operator-Section of the input file, and finally the Parameter-Section of the operator file. Because of this "first come, first served" rule, one may define a parameter in, e.g., an alter-parameter block, and the definition of this parameter in the operator file is then ignored (and similarly for labels). However, this behaviour may lead to strange results if one erroneously defines a parameter (or label) twice: the second definition will simply be ignored (except for a message printed to the log file). For example, if there is an alter-labels block like:
alter-labels
CAP_x = CAP [ -9.0  0.03  3  -1 ]
CAP_x = CAP [  9.0  0.03  3  +1 ]
end-alter-labels
then the second (right hand) CAP will simply be ignored. The solution is simple, just assign different labels to the CAPs, e.g. CAP1_x and CAP2_x, or CAPleft_x and CAPright_x.
The following error message appears if the pthread library was not linked when MCTDH was compiled.

###########################################
###  pthread library not linked!       ###
###  If your compiler supports pthread,###
###  modify the script compile.cnf     ###
###########################################

If your compiler supports pthreads, the script compile.cnf has to be modified. One should also edit compile.cnf_be and compile.cnf_le, because compile.cnf is overwritten by one of these files when install_mctdh is executed. The option -lpthread must be added to the line MCTDH_ADD_LIBS, and the option -pthread must be added to the line MCTDH_CFLAGS, in the section of the compiler that is used. Then MCTDH must be compiled again.
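The resulting lines in compile.cnf (and in compile.cnf_be / compile.cnf_le) would then look roughly like this; the placeholders stand for whatever the variables already contain, which must of course be kept:

```
# In the section of the compiler that is used:
MCTDH_ADD_LIBS="<existing libraries> -lpthread"
MCTDH_CFLAGS="<existing flags> -pthread"
```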