Monte-Carlo CPD/CANDECOMP Documentation

General Remarks

Monte-Carlo CPD/CANDECOMP (or MC-CPD) can be used to create an approximation of a given potential in the form of a Canonical Polyadic Decomposition (CPD), also known in the literature as CANDECOMP or PARAFAC.

The program starts with an initial guess of the CPD and improves it iteratively using the alternating least squares (ALS) method. As traditional ALS requires multiple integrals over the complete primitive grid, these integrals are replaced by Monte-Carlo integrals. On the one hand this enables the use of larger potentials; on the other hand it introduces a statistical error into the resulting potential representation. This error can, however, be estimated, again by statistical methods.
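The underlying idea can be illustrated with a minimal Python sketch (illustrative only, not MCTDH code; the toy grid and potential are made up): a full-grid average is replaced by an average over randomly drawn grid points, and the statistical error is estimated from the sample variance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Full primitive grid and a toy "potential" on it.
grid = np.linspace(-1.0, 1.0, 10001)
v = grid**2

# Monte-Carlo estimate of the grid average using n random points
# instead of all 10001 grid points.
n = 5000
sample = rng.choice(v, size=n, replace=True)
mc_mean = sample.mean()
mc_err = sample.std(ddof=1) / np.sqrt(n)  # standard error of the mean
```

The exact grid average is recovered within a statistical error bar that shrinks as 1/sqrt(n), which is the same trade-off MC-CPD makes for the ALS integrals.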

A great advantage of using Monte-Carlo techniques is that correlated weights are implemented straightforwardly by means of the sampling density, i.e., the distribution of the sampling points. At present, a uniform distribution and a Boltzmann distribution generated with a Metropolis algorithm are implemented. Furthermore, external samplings can be read in and used for generating cpd files.

MC-CPD is developed along the lines of MC-Potfit and requires a very similar input file.


MC-CPD supports both shared- and distributed-memory parallelization, using OpenMP and MPI respectively, as well as hybrid parallelization schemes. See below.

Furthermore, the program makes heavy use of BLAS/LAPACK libraries and therefore profits a lot from linking to an optimized BLAS/LAPACK library.
The use of BLAS/LAPACK libraries can be reduced by issuing the -noblas command line flag. This can lead to a performance improvement when the program is linked to a non-parallel standard BLAS/LAPACK library while using OpenMP at the same time. It usually makes no sense to use -noblas in a pure MPI or a serial run.

Usage and command line options

Monte-Carlo CPD is called as other programs of the package from the command line. With the command
mccpd86 -h
a help text is printed:

   Purpose: creates a canonical polyadic fit.
     Usage:  mccpd [-h|-?]  [-ver -rd -w -c -D name] inpf
      vers:  program version (e.g. 86 for version 8.6)
         d:  indicates debug version
      inpf:  input file (with or without extension ".inp")
      optn: -h   : print this help-text.
            -?   : print this help-text.
            -c   : continue (optimization).
            -mnd : Make name directory.
            -ver : Version information.
            -rd  : Reading of the dvr-file is enforced.
            -w   : overwrite enabled.
          -D name: 'name' denotes the directory where files are
                   written to, (name in ~.inp file ignored).
         -up iup : The integer parameter 'iup' is passed to userpot where
                   it may be used to select between PES and DMS components.

         -noblas : When compiled with -P option (OpenMP parallelization)
                   this program relies heavily on being linked to an
                   optimized and OpenMP-parallel BLAS/LAPACK installation.
                   Use this flag if you are using OpenMP parallelization
                   but a serial (!) BLAS/LAPACK library is linked.
                   (The default BLAS/LAPACK library is serial.)

        The order of the options does not matter, but "inpf"
        must be the last argument.


General Remarks

As in other programs of the Heidelberg MCTDH package, the input file is organized in sections, where the section XYZ starts with the keyword xyz-section and ends with end-xyz-section. At present, the MC-CPD input may contain four to six sections, as listed below. Those with a STATUS of C are compulsory, O marks optional sections.

XYZ                      Status  Description
RUN-SECTION              C       Defines general parameters.
SAMPLING-SECTION         C       Defines the Monte-Carlo method, No. of points, etc.
OPERATOR-SECTION         C       Which surface is to be used, energy cut-offs.
PRIMITIVE-BASIS-SECTION  C/O     Definition of primitive basis. Optional if an external DVR file is specified in the RUN-SECTION.
MODES-SECTION            C       Definition of mode combinations.
SYMMETRY-SECTION         O       Defines symmetries of the PES fit.

Below, tables of keywords are given for the input sections.

The following tables describe the keywords. The number and type of arguments is specified. The type is S for a character string, R for a real number, and I for an integer. For instance, 'keyword = I,R[,S]' indicates that the keyword takes two or optionally three arguments where the optional argument is indicated by square brackets. Here, the first argument is an integer, the second a real number, and the optional third one a character string. The status column indicates whether the keyword is compulsory, C, or optional, O.


Keyword Status Description
name = S
The output files will be written to the directory S. Note: If the -D name option is used (see To Run the Program), the name-string given in the input file is ignored.
title = S
The string S is taken as the title of the run and printed to the log and output files. Note: everything that follows the equal sign '=' up to the end of the line is taken as the title!
readdvr (= S)
The DVR information will be taken from the DVR file in directory S. Note that the primitive-basis-section of the input file is completely ignored if this keyword is given. A DVR file residing in directory S will NEVER be overwritten or deleted, even if the keywords deldvr and/or gendvr have been specified. If S is not given, name-directory/dvr is assumed.
The DVR file will be generated unconditionally. (This is the default)
The DVR file will be deleted at the end of the calculation.
Any files already in name directory may be overwritten. (It is safer not to use overwrite but the option -w).
The program generates the dvr file and then stops.
Program timing information will be written to the file name/timing   (default).
The timing file is not opened.
If OpenMP parallelization is used, with this flag no calls to the PES routine are performed in parallel. This is useful if the PES routine is not thread-safe.
invert-method = S (,R)
Method used to solve the system AX=B where A is the overlap matrix of the CPD terms, and B is the overlap of the potential with a CPD term. X are the coefficients of the CPD-fit.
S can be one of direct or eigen.
When direct is selected, the overlap matrix is inverted using the LAPACK DPOSV (or ScaLAPACK PDPOSV) routine which uses a Cholesky decomposition. This is default.
If eigen is selected, the matrix is diagonalized and inverted using LAPACK DSYEV (or ScaLAPACK PDSYEV)

Note that regularization (R) is essential when solving the linear system to prevent diverging CPD terms. By default R is set to the square root of machine precision. If S=direct is chosen, the diagonal elements of the matrix A are multiplied by (1+R). If S=eigen is chosen, En → En + R tr{A} * exp(En/(R tr{A})) is used as regularization. S=eigen is numerically more costly, but usually (not always!) yields better results.
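The 'direct' branch can be sketched in Python (a sketch under the stated defaults, not the actual MCTDH routine; the function name is hypothetical): the diagonal of A is scaled by (1+R) before a Cholesky-based solve, which is the job LAPACK DPOSV performs internally.

```python
import numpy as np

def solve_direct(A, B, R=None):
    # Sketch of invert-method = direct: multiply the diagonal of the
    # overlap matrix A by (1+R), then solve A X = B via a Cholesky
    # factorization.
    if R is None:
        R = np.sqrt(np.finfo(float).eps)  # default: sqrt of machine precision
    A_reg = A + R * np.diag(np.diag(A))   # diagonal scaled by (1+R)
    L = np.linalg.cholesky(A_reg)         # A_reg = L L^T
    Y = np.linalg.solve(L, B)             # forward substitution
    return np.linalg.solve(L.T, Y)        # back substitution
```

For a well-conditioned overlap matrix the tiny diagonal shift is negligible; its purpose is to keep the solve stable when A is nearly singular.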

load-cuts (= S)
Load the PES-cuts that have been created in an earlier run. Note that the cuts must match the samplings defined in the SAMPLING-SECTION. It is recommended to load the samplings from file as well (see readidx in the SAMPLING-SECTION). The optional S can be a directory in which the program will look for cut-files. If omitted, S defaults to the name-directory.
Try reusing existing PES-cuts in the name-directory that have been created in an earlier run. This is useful if a run crashed during the cut-calculations and some files have already been created. Note that the cuts must match the samplings defined in the SAMPLING-SECTION. It is recommended to load the samplings from file as well.
Do not create a cpd file but only calculate the sampling points and then exit.
Do not create a cpd file but only calculate the PES-cuts from a given sampling and then exit.
Write out the energies that correspond to the sampling points. This is the default for Metropolis sampling.
iseed = I
Calculates the initial random seed from I. If not set, the random seed is calculated from the current system time. In case of MPI, the rank number is added in each task and the system time used is the one from rank 0.
If symmetry operations are present (see SYMMETRY-SECTION below) that do not stay on the DVR grid but lead to grid points outside the primitive grid defined in the PRIMITIVE-BASIS-SECTION, then, if auto-extgrid is set, extended mode-grids are automatically generated such that the symmetry operations stay on an extended primitive grid and the usual grid-based symmetrization can be performed. The additional grid points are subsequently removed such that the original grid is restored. If not set, an error message is produced if any symmetry operation leads outside the primitive grid. Default: not set.
initguess = S(,S1)
If S is set to "random", the initial guess basis functions are created from a random-number generator; if S = "cuts", the basis functions are created from the 1-mode cuts through the PES. Otherwise S is interpreted as a path to a cpd file that is used as initial guess. In the latter case the keyword "override-initrank" can be used to override the initial rank of the "grow-rank" option; the rank of the cpd file will then be the start rank. Default: "random".
save-initguess
The initial guess that is fed into the ALS algorithm is saved to file. Note that setting save-initguess triggers the calculation of the CPD coefficients and is hence numerically approximately as costly as one ALS optimization of one mode. If not set, the CPD coefficients are initialized with zero; they do not enter the numerics.
maxiter = I
I is the maximum number of iterations the ALS will perform. Default: I = 1000.
rank = I
I is the maximum rank of the CPD. If grow-rank (see below) is not used, I is also the initial and final rank of the CPD. 'rank = I' can be omitted if the initial guess is read from file or for continuation runs. If grow-rank is used, 'rank = I' must be set.
grow-rank = I1, I2, I3
When optimizing a CPD it has proved very beneficial to first optimize a small CPD tensor and grow it sequentially, i.e., to repeatedly add terms after a number of ALS iterations. This can be done with the grow-rank option, where:
I1: initial rank of the small CPD that is optimized; I2: delta-rank, i.e., the number of terms added after every I3 iterations of the ALS algorithm.
Adding additional I2 terms is done every I3 iterations of the ALS algorithm until the total rank is equal to the value given in the 'rank = I' option above (or until the number of ALS iterations exceeds the 'maxiter' option). 'rank = I' is therefore mandatory if 'grow-rank' is used.

Added terms with the 'grow-rank' option will be initialized with zero coefficients but random basis functions.
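The resulting rank schedule can be written down in a few lines of Python (a hypothetical helper for illustration; the function name and signature are not part of MC-CPD):

```python
def current_rank(iteration, init_rank, delta, every, max_rank):
    # grow-rank schedule: start with init_rank terms (I1) and add
    # delta terms (I2) after each block of 'every' ALS iterations (I3),
    # capped at max_rank (the value of 'rank = I').
    return min(init_rank + delta * (iteration // every), max_rank)
```

For example, with 'rank = 24' and 'grow-rank = 8, 4, 25' the rank is 8 for the first 25 iterations, 12 for the next 25, and so on until 24 is reached.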

finalize-coeff = S
The coefficients that come out of the ALS iterations are not the optimal ones: they minimize the CPD on the SPP sampling points where one mode is replaced by unity. With the keyword finalize-coeff one can specify if and how the optimal coefficients shall be computed after the ALS has been completed.
If finalize-coeff = no is given, finalization is not performed and the coefficients are used as they come out of the ALS routine.
If finalize-coeff = monte-carlo is set, the coeff-sampling given in the SAMPLING-SECTION is used to re-calculate the coefficients alone (this is the default). If finalize-coeff = cuts is set, the coefficients are re-calculated on all cuts that have been loaded or computed. This requires sampling-coeff = sampling-spp in the SAMPLING-SECTION.


The SAMPLING-SECTION defines how to obtain the sampling points using different methods. It has the same structure as in MC-Potfit; however, there are subtle differences in the meaning of the tasks.

Options in the Sampling-Section are the following

sampling[-S1] = S2, I1 [,I2, I3, R [,S3]]
The sampling method and the number of sampling points to use. See the remarks below.

Sampling methods

Monte-Carlo samplings are used for three different purposes or tasks in MC-CPD:

  1. generation of 1-D cuts through the PES on which the CPD is optimized.
  2. (optional) subsequent re-calculation of the coefficient vector,
  3. and finally testing the cpd file.
The sampling[-S1] keyword can be used to assign different sampling methods and sampling sizes to these different tasks. The suffix S1 is an optional string that specifies the particular task a sampling method is used for. S1 can be omitted, in which case the same sampling method and ensemble size are used for all three tasks. Note that newly generated sets of sampling points will be different for the three tasks even if the specification of the task is omitted (see the exception below). The following specifications are possible:
sampling-spp: sampling for generating the cuts on which the basis functions are optimized.
sampling-coeff: sampling for finalizing the coefficient vector once the basis has been generated.
sampling-test: sampling to calculate mean and RMS values of the CPD error.
sampling: sampling for all of the above tasks with the same sampling method.
The total set of sampling points that is used for a specific task can be composed of chunks of points generated with different methods. To this end one can specify multiple (up to 10) sampling sources for each of the sampling tasks (SPP, coeff, or test), such as
    sampling-spp = <method 1>
    sampling-spp = <method 2>
    sampling-coeff = <method n>
    sampling-coeff = <method n+1>
    sampling-test = <method N>
This will create several sub-sets of sampling points (see examples below) and subsequently combine them into one large set to perform the calculations. This can for instance be used to add small fractions of uniform sampling to a metropolis sampling or to combine sets of metropolis samplings with different temperatures. In addition, if there are two or more test-sets, detailed statistics will be written in a separate 'cpd-statistics' file for each subset in the test-trajectory.

If the calculation is a new run, either sampling or sampling-spp and sampling-coeff must be given. The test sampling is optional and only used to estimate error measures of the fit. A test is in any case performed with the sampling used for calculating the coefficients or SPP. A test with an independent set of sampling points is, however, highly recommended. Comparison of a test-set with the generating-set is a good measure for the convergence in terms of numbers of sampling points (if the sampling methods are the same).

Note: if PES-cuts from a previous calculation are used via the 'load-cuts' keyword, then the SPP-sampling must match the sampling with which the cuts have been created.

The first argument of the sampling[-S1] keyword is the sampling method, followed by a set of parameters. Possible sampling methods - with parameters - are:

uniform, I
Uniform sampling: each grid point is chosen as a sampling point with the same probability. The integer I defines the total number of sampling points
metropolis, I1, I2, I3, R [,S3]
A Metropolis algorithm is used to generate I1 sampling points. The sampling points are selected from a random walk on the total grid. The initial position of the walker is a random point on the primitive grid. The first I2 steps of the random walk are skipped to equilibrate the walker before any sampling points are recorded (burn-in). The integer I3 is a step width: only every I3th accepted step of the random walk is actually used as a sampling point, that is, I3-1 accepted Metropolis steps are discarded before the next accepted step is recorded as a sampling point. This reduces the correlation between subsequent sampling points. If I3=1, all accepted Metropolis steps (after burn-in) are used. R defines the temperature (times the Boltzmann constant, so R is actually an energy) used to create a Boltzmann distribution. It may bear a unit defined by S3; the default unit is au.
Note: If metropolis sampling is used in an MPI run, each of the MPI tasks will create a subset of the total trajectory. All MPI tasks will do the burn-in and have the same step-width, etc. but will start from a different initial position and with a different random-seed (which is calculated from iseed if given in the RUN-SECTION, otherwise the system time of rank 0, and the MPI rank number). The trajectories from the different ranks are then combined into one.
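The scheme can be sketched in Python (a simplified toy, not the MCTDH implementation; the nearest-neighbour proposal, the energy callback, and all names here are assumptions made for illustration):

```python
import numpy as np

def metropolis_grid(energy, shape, npoints, nburn, stride, kT, seed=1):
    # Random walk on the primitive grid: propose a +-1 move in one
    # randomly chosen DOF, accept with the Metropolis criterion
    # min(1, exp(-(E_new - E_old)/kT)), skip the first nburn steps
    # (burn-in), then record every stride-th accepted step.
    rng = np.random.default_rng(seed)
    idx = np.array([rng.integers(n) for n in shape])
    e_old = energy(idx)
    steps, accepted, samples = 0, 0, []
    while len(samples) < npoints:
        steps += 1
        trial = idx.copy()
        d = rng.integers(len(shape))
        trial[d] = (trial[d] + rng.choice((-1, 1))) % shape[d]
        e_new = energy(trial)
        if e_new <= e_old or rng.random() < np.exp(-(e_new - e_old) / kT):
            idx, e_old = trial, e_new
            if steps > nburn:
                accepted += 1
                if accepted % stride == 0:
                    samples.append(idx.copy())
    return np.array(samples)
```

Run on a 1-D harmonic toy potential this produces index samples that cluster around the potential minimum, i.e., a Boltzmann-distributed sampling density.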
readidx{/path/to/indexfile} [,I1 [,I2 [,I3]]]
The sampling points are read from file. The file can be either ASCII or binary (deprecated). It must contain the indices of the primitive grid points in the order given in the PRIMITIVE-BASIS-SECTION, one sampling point with all indices per line (ASCII) or record (binary). Optionally, only the first I1 data points in the file are used. If I2 is given, the first I2 sampling points are skipped before I1 sampling points are read. If I3 is also given, then I2 sampling points are skipped and of the remaining sampling points only every I3th point is used until I1 sampling points are read in. If I3 == 1 no sampling points are skipped, in which case I3 might as well be omitted. If no integers are given, all data points from the file are read in.
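The skip/stride selection logic can be summarized in a short Python sketch (a hypothetical helper, not the MC-CPD reader; the stride is assumed to start at the first point after the skipped block):

```python
def select_points(points, n_use=None, n_skip=0, stride=1):
    # readidx selection rules (sketch): skip the first n_skip sampling
    # points (I2), keep every stride-th of the remainder (I3), and stop
    # after n_use points (I1); keep everything if n_use is None.
    kept = points[n_skip::stride]
    return kept if n_use is None else kept[:n_use]
```

For instance, with I1=5, I2=10, I3=3 the points with (zero-based) file positions 10, 13, 16, 19, 22 would be used.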
Note: By default, even if the same sampling method with the same parameters is used for all of the sub-tasks, an independent set of sampling points is created for each sub-task. There is one exception, however: often one wishes to use the same set of points to calculate the SPP and the coefficients. In fact, this is recommended for mccpd. One can simply create the set of sampling methods for the SPP and then assign it to the coefficients (or the other way round):
    sampling-coeff = <method 1>
    sampling-coeff = <method 2>
    sampling-spp =  sampling-coeff

    sampling-test = <method N>
The following examples demonstrate the use of the sampling[-S1] keyword.

Example 1

  1. Uniform sampling with 10000 points for generating the SPP
  2. Metropolis sampling with 10000 points created with a temperature of 400 wave numbers. Furthermore, the first 1000 sampling points are skipped and then only every 10th sampling point is used for the trajectory. Altogether 101000 points are generated, of which 10000 are used for calculating the coefficients.
  3. For testing the cpd fit, an external file containing the dvr-indices is read in. From this file, only the first 15000 data sets are used.
     sampling-spp   = uniform, 10000
     sampling-coeff = metropolis, 10000, 1000, 10, 400.0, cm-1
     sampling-test  = readidx{/path/to/indexfile}, 15000
Example 2

Uniform sampling with 10000 points for generating the SPP, the coefficients, and also for testing the cpd fit. The sampling points will be different for the three individual tasks; only the method of their generation is the same.

     sampling = uniform, 10000
Example 3

Metropolis sampling for the coefficients plus an additional small fraction of uniform sampling. For the SPP the same sampling points as for the coefficients are used. The test is done with an independent metropolis sampling.

     sampling-coeff = metropolis, 100000, 1000, 10, 400.0, cm-1
     sampling-coeff = uniform, 1000
     sampling-spp = sampling-coeff
     sampling-test = metropolis, 100000, 1000, 10, 400.0, cm-1


The MODES-SECTION is used to specify mode combinations, i.e., logical coordinates which are used to generate the PES fit. In general, this is very similar to the NATPOT-BASIS-SECTION within the Potfit or MC-Potfit programs. Within the MODES-SECTION one particle is specified per line by a comma-separated listing of coordinate labels as specified in the PRIMITIVE-BASIS-SECTION.


     Q1, Q2           # mode 1
     Q3               # mode 2
     Q5, Q4           # mode 3
     Q6               # mode 4

The internal numbering of the modes is simply done according to the line in which the mode is specified and is mostly arbitrary. There is one important exception: if symmetry operations are used and modes are swapped (for instance, mode 1 and 3 could swap upon some symmetry operation in the example above), then the last mode must be one that maps only onto itself under all symmetry operations. It must not be one that swaps.
The reason is that during the ALS iterations the coefficients of the CPD are not obtained correctly when modes are swapped. This is, however, automatically 'healed' when the ALS algorithm optimizes the next non-swapping mode. Therefore one non-swapping mode must always be the last one.
(If there are only swapping modes in the system, then there is no choice and the optimization is performed anyway, but the error estimates in the 'iterations' file will be inaccurate. As the coefficients do not enter the working equations, this only affects the output and not the numerics. The correct coefficients are then, for performance reasons, only calculated once at the very end, after the ALS iterations have been completed.)

The MODES-SECTION is ignored in continuation runs or if the initial guess is loaded from file.


Keyword Status Description
pes = S {pesopts}
S can either be the keyword "usersurf", in which case the program assumes that the potential energy routine has been implemented in the file $MCTDH_DIR/source/mcpotfit/userpot.F90, or it can be the name of a potential energy surface which has been encoded in the MCTDH package.
A third option is pes = pes-file{path/to/pes} in which case an existing operator file is re-fitted.
The first option has been provided to allow easy implementation of user-provided routines using a Fortran 90 interface. Note that options from the input file are not supported in this case, i.e., there must not be any {pesopts}.

If a potential energy surface from the MCTDH library is used, the name is interpreted in a case-sensitive manner. If a surface depends on additional parameters, one can change the encoded default values by adding those parameters via the string '{pesopts}'. Parameter names specified within the braces '{}' are processed in a case-sensitive manner. The selected surface determines which parameter names are possible. For a description of the available surfaces see Hamiltonian Documentation -- Available Surfaces. Note that fitting 'vpot' files is not supported at present.

For the lsth surface use 'pes = lsth {tfac = 0.5d0}'. tfac is a mass-dependent parameter which is needed to transform between binding and Jacobian coordinates. Let r1, r2, and r3 denote the binding and rd, rv, and theta the Jacobian coordinates. Then the transformation rules are as follows:


For a homo-nuclear diatomic molecule tfac=0.5d0, in general tfac=m1/(m1+m2). rv denotes the distance between the two atoms of the diatom, rd the distance between the third atom and the center of mass of the diatom, and theta the angle between rd and rv.

Note: tfac is not used if binding coordinates are selected for a calculation.

vcut < R (,S)
Energy cut-off for exact potential energy surface. All potential energy values greater than R are set to R. S denotes the unit of R, default is au.
vcut > R (,S)
Energy cut-off for exact potential energy surface. All potential energy values less than R are set to R. S denotes the unit of R, default is au.

Primitive-Basis-Section. Ordering of arguments

The primitive basis is defined in the same way as in the input file of an MCTDH run (see the MCTDH Primitive-Basis-Section). As for the traditional Potfit, there is, however, one important difference: the order of the degrees of freedom defines the ordering of the arguments passed to the potential routine. To see how a particular pes from the MCTDH library is implemented, see the file funcsrf.F (just type   mcb funcsrf.F ). Also inspect the log file, log, which is created when running mccpd. There one finds a message like:

BKMP2 PES : 3D model in Jacobian Coordinates
Dissociative Coordinate : rd
Vibrational Coordinate  : rv
Angular Coordinate      : theta
A line like Dissociative Coordinate : theta would hint to a wrong ordering of the degrees of freedom in the Primitive-Basis-Section.
The arguments passed to the potential routine may be reordered with the aid of the order keyword of the Operator-Section.
NB: In MCTDH the ordering of the degrees of freedom in a HAMILTONIAN-SECTION (i.e. the ordering of the mode-line) must be consistent with the potential definition, whereas the ordering within the Primitive-Basis-Section is arbitrary.

Molecular symmetries

In the traditional POTFIT, symmetries of the potential are usually preserved in a natural way if one chooses the primitive grids accordingly. In MC-CPD, as in MC-Potfit, this is not the case because of the random choice of the Monte-Carlo integration points. Hence, symmetries need to be restored. In MC-CPD this is done by enforcing symmetry relations between the R terms of the tensor in such a way that applying a symmetry operation to one product term results in another product term. So in essence, applying a symmetry operation to the tensor results in a permutation of its terms. This permutation rule must be preserved and enforced during the ALS iterations, as the random character of the sampling would otherwise destroy it. To accomplish this, the molecular symmetries must be explicitly specified in the SYMMETRY-SECTION.

The symmetries are given as a table of symmetry operations where each column of the table implements one symmetry operation. The first line of the symmetry table contains the labels by which the symmetry operations are addressed. The label can be chosen freely but must be a single word. The only exception is the first label of the symmetry table which cannot be chosen freely. It must always contain the E operation, the identity. The first column E serves as a reference for all further symmetries.

The columns of the symmetry table under the other symmetry labels implement the symmetry operations. In the column under the label E one must arrange all DOF labels from the Primitive-Basis-Section. The order can be different from the PBASIS-SECTION. If the order of DOFs in a second column is different from the order in the E-column, this is interpreted as an exchange of coordinates within this symmetry operation.

If symmetries are specified one should carefully check the sym.log file. It contains detailed information on the internal representation and mappings of the symmetry operations. A number of tests are also performed and logged there to detect whether the symmetries are implemented correctly and whether the final cpd file is symmetrized correctly.

Grid-based symmetry operations

To implement a symmetry operation one can choose between a grid-based and a coordinate-based operation. A grid-based operation exchanges the primitive grid points of two DOFs, where additionally the order of the grid points can be reversed. The following example shows a grid-based symmetry operation:

Example: grid-based symmetry operation (1)

   E        SXY1
   x        y
   y        x
The symmetry operation 'SXY1' interchanges the grid points of DOF 'x' and 'y'. This exchange does not adhere to any coordinate ranges: if the coordinate 'x' has grid points in the interval, say, [-1,1], and 'y' in the interval [3,5], it will interchange the first grid point (x1 = -1) of 'x' with the first grid point (y1 = 3) of 'y' (and vice versa), the second grid point of 'x' with the second grid point of 'y', and so forth.

Optionally one may invert the grids. Using the example above with 'x' defined in [-1,1] and 'y' in [3,5], the following symmetry operation 'SXY2' will map the first grid point (y1 = 3) of 'y' to the first grid point (x1 = -1) of 'x', but the last grid point (xn = +1) of 'x' to the first grid point (y1 = 3) of 'y', and so forth. That is, the 'x' grid is reversed before mapping to 'y'. This again does not adhere to any coordinate ranges.

Example: grid-based symmetry operation (2)

Inversion of the DOF grid in a symmetry operation:
   E        SXY2
   x         y
   y        -x
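In terms of grid-point indices, the two operations above can be sketched as follows (illustrative Python with hypothetical 5-point grids; the index arrays are not part of the MC-CPD input):

```python
import numpy as np

n = 5
i = np.arange(n)                 # grid-point indices of DOF 'x'
x = np.linspace(-1.0, 1.0, n)    # x grid in [-1, 1]
y = np.linspace(3.0, 5.0, n)     # y grid in [3, 5]

# SXY1: plain index exchange, x_i <-> y_i (coordinate values ignored)
sxy1 = i                         # x_i maps to y_i
# SXY2 ('-x'): the x grid is reversed before mapping onto y
sxy2 = n - 1 - i                 # x_i maps to y_(n-1-i)
```

Note that the last x point (index n-1) lands on the first y point (index 0) under SXY2, exactly as described in the text.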

Coordinate-based symmetry operations

An alternative way to implement symmetries is by arithmetic expressions. Arithmetic expressions are indicated by curly brackets: the opening bracket must be followed by a coordinate label, which may optionally bear a sign (scaling of the coordinate is not supported), followed by a constant shift which may be an arithmetic expression of real numbers. Besides real numbers, the symbols pi and 2pi can also be used.

Unlike grid-based operations, coordinate-based operations do adhere to coordinate values: the grid points of the respective DVR are transformed to coordinate values, altered through the expression in the curly brackets, and transformed back to grid points, which are ultimately used to perform the symmetrization. Unless one uses extended grids (see below), one therefore needs to make sure that the expression in curly brackets, applied to a primitive grid point, again leads to a primitive grid point.

The following examples implement the same symmetry operations as SXY1 and SXY2 from the examples above in coordinate-based form.

Example: coordinate-based symmetry operation (1)

   E        SXY3   # same as SXY1
   x        {y-4}
   y        {x+4}

Example: coordinate-based symmetry operation (2)

   E        SXY4   # same as SXY2
   x        {y-4}
   y       {-x+4}
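Because coordinate-based operations act on coordinate values, each transformed value must again be a grid point of the target DOF. For the grids assumed in the text (x in [-1,1], y in [3,5], equal numbers of points), this can be checked numerically:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 5)   # hypothetical grid of 'x'
y = np.linspace(3.0, 5.0, 5)    # hypothetical grid of 'y'

# SXY3: {y-4} must land on the x grid and {x+4} on the y grid.
onto_x = y - 4.0
onto_y = x + 4.0
# SXY4: {-x+4} gives the y grid in reversed order (grid inversion).
onto_y_rev = -x + 4.0
```

If any of the transformed values missed the target grid, the symmetrization would fail (unless auto-extgrid is used, see below).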

Periodic boundary conditions

DVRs with periodic boundary conditions behave a bit differently from other DVRs. There is usually no natural zero with respect to which an inversion should occur. One can therefore specify coordinate labels for which periodic boundary conditions should be observed. If a coordinate is set to "periodic", one must use coordinate-based expressions for it. In this case, symmetry operations that lead to values outside the periodic interval are mapped back into it.

One can define one or more DOFs as periodic with the "periodic" keyword:



    periodic = DOF1 [,DOF2 [,DOF3,...]]
The following example shows the use of the "periodic" keyword:

Example: periodic boundary conditions

     E       C2           C4           C4^3
     x       -x            y             -y
     y       -y           -x              x
   phi  {phi+pi}   {phi-pi/2}     {phi+pi/2}
 periodic = phi
Note: Since periodic DOFs typically use exponential or FFT DVRs, mccpd will issue a warning if a DOF with another DVR type is marked as periodic, but it will continue anyway and apply periodic boundary conditions.
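The wrap-around for a periodic DOF can be illustrated in Python (a sketch assuming a hypothetical 8-point equidistant phi grid on [0, 2*pi), as an exponential DVR would use):

```python
import numpy as np

# Hypothetical periodic grid for 'phi': 8 equidistant points in [0, 2*pi)
phi = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)

# {phi+pi} leaves the interval for half of the points; with
# 'periodic = phi' the values are wrapped back into [0, 2*pi):
phi_shifted = np.mod(phi + np.pi, 2.0 * np.pi)
```

After wrapping, the shifted values form exactly the original grid again, so the operation stays on the primitive grid.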

User-implemented symmetry operations

Sometimes, more complicated symmetry operations are needed, for instance when the transformation of one coordinate depends on another. To this end we introduced the keyword usersym. This allows the user to implement their own symmetry operations in the file $MCTDH_DIR/source/mcpotfit/usersym.F90. The input of the routine is a vector of coordinate values from which the equivalent coordinates must be computed. See $MCTDH_DIR/source/mcpotfit/usersym.F90 or the example file $MCTDH_DIR/source/mcpotfit/usersym.F90.zundel and the instructions within these files on how to do this. The file $MCTDH_DIR/source/mcpotfit/usersym.F90.zundel implements the symmetry operations from the example in the following paragraph.

The following rules apply for user-implemented symmetries:

  1. If usersym is given for one coordinate in one symmetry operation in the SYMMETRY-SECTION, it must also be given (and implemented) for all other DOFs of the combined mode and for all other symmetry operations to guarantee complete mappings.
  2. User-implemented symmetry operations must either stay within one mode or map all DOF of one mode m1 into another mode m2. There must not be any mode-mixing.

Example: symmetry operations for the Zundel cation (D2d)

All symmetry operations are implemented with respect to column one, E. The symmetry operations S4, S4^3, C2A, and C2B, however, do not stay on the primitive grid because of the exchange of the two water molecules. The x,y-plane is fixed to one of the waters so that, if they are exchanged, it must be rotated to account for the relative angle alpha (='a' below) between the two waters. Hence, user-defined symmetry operations had to be implemented. They were used together with the auto-extgrid keyword in the RUN-SECTION to automatically extend the mode grid by additional points which are generated by the rotation of the x,y-plane. For this, the three coordinates x, y, and a had to be combined in one mode.
 E        C2        SigmaA      SigmaB     S4        S4^3        C2A        C2B
 z        z          z          z         -z        -z          -z         -z
 R        R          R          R          R         R           R          R
 x        usersym    usersym    usersym    usersym   usersym     usersym    usersym
 y        usersym    usersym    usersym    usersym   usersym     usersym    usersym
 a        usersym    usersym    usersym    usersym   usersym     usersym    usersym
 la       {-la+2pi}  {-la+2pi}  {la}       {-lb+pi}  {lb+pi}     {lb+pi}    {-lb+pi}
 lb       {-lb}      {lb}       {-lb}      {la-pi}   {-la+pi}    {la-pi}    {-la+pi}
 ua      -ua         ua        -ua        -ub        ub         -ub         ub
 ub      -ub        -ub         ub         ua       -ua         -ua         ua
 r1a      r1a        r1a        r1a        r1b       r1b         r1b        r1b
 r2a      r2a        r2a        r2a        r2b       r2b         r2b        r2b
 va      -va         va        -va         vb       -vb          vb        -vb
 r1b      r1b        r1b        r1b        r1a       r1a         r1a        r1a
 r2b      r2b        r2b        r2b        r2a       r2a         r2a        r2a
 vb      -vb        -vb         vb        -va        va          va        -va

Extended grids

As mentioned in the section on user-implemented symmetry operations, sometimes symmetry operations do not stay on the original primitive grid. The new points can originate from a usersym operation or from a coordinate-based operation in curly brackets.

To still be able to perform the symmetry operations, one can use the keyword auto-extgrid in the RUN-SECTION. If auto-extgrid is set and it is detected that any symmetry operation leads to points outside the original primitive grid, these extra points are automatically added to the respective mode grids.
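For example, the keyword is simply placed in the RUN-SECTION. Only the auto-extgrid keyword below is taken from this document; the other lines are placeholders in the usual MCTDH section layout:

```
RUN-SECTION
    ...                  # other run options
    auto-extgrid         # add out-of-grid symmetry points to the mode grids
end-run-section
```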

When extended grids are used, it is important that all symmetry operations of a point group are implemented, so that the set of symmetry operations is closed. Otherwise, the forward-backward mapping of extended grid points would be incomplete, and symmetry operations applied twice would again lead to points outside the grid.

Extended grids can be used for all modes.


A number of rules apply for the implementation of the symmetries.
  1. Any DOFs that are interchanged must have the same number of primitive grid points. Different DVR types will not lead to an error but only to a warning; coordinate ranges and other DVR parameters are not checked.
  2. When DOF are interchanged there must not be any mixing of combined modes: DOFs can either be re-arranged within one mode, or all DOF from one mode m1 must map into the same other mode m2 (with arbitrary arrangement within m2 as long as the DVR grids are compatible, see previous point).
  3. All symmetry operations of a point group must be implemented. Completeness of the group is mandatory.
  4. If usersym is set for one DOF, it must also be set for all other DOF of that mode and for all symmetries except E.
  5. If a coordinate-based operation is used for a DOF in one symmetry operation, it must also be used in all other symmetry operations except E.
  6. Symmetry operations for DOFs with the periodic flag must be implemented with coordinate-based operations.
  7. DOFs with the periodic flag can only be interchanged with other periodic DOF.
  8. For external DVRs one must always use usersym.

Example input
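A skeleton of an mccpd input file is sketched below. The section names and keywords are those referenced elsewhere in this document; the overall layout follows the usual MCTDH/MC-Potfit input convention, and all details should be checked against the MC-Potfit documentation:

```
RUN-SECTION
    name = mccpd_test          # hypothetical run name
    auto-extgrid               # only needed if symmetry operations leave the grid
    save-initguess             # also write the initial-guess fit
end-run-section

PRIMITIVE-BASIS-SECTION
    ...                        # DVR definitions for all DOF
end-primitive-basis-section

SAMPLING-SECTION
    ...                        # sampling sets (SPP, coefficients, test)
end-sampling-section

SYMMETRY-SECTION
    ...                        # optional: symmetry operations, see above
end-symmetry-section
```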

Output files

File containing the potential fit in MCTDH compatible format. See also Hamiltonian documentation of MCTDH.

File containing the potential fit that was used as initial guess. Only written if 'save-initguess' is set in RUN-SECTION

File containing detailed statistics of the various test-trajectories. Only written if there are two or more test-trajectories.

Error estimates as a function of ALS iterations.

See MCTDH output documentation

This is a copy of the input file used with the program version progver and the list of options (if any given) added.

This file contains information about the type of calculation performed, the potential energy surface used, memory sizes, arrays that have been allocated, opened data files, and error messages (if any). Most importantly, one should check in this file that the labeling of the coordinate-system parameters is consistent with the arguments of the potential energy surface used.

If a SYMMETRY-SECTION is present in the input, this file contains some details about the internal representation of the symmetry operations as well as the results of symmetry tests on a number of random grid points. The symmetries are tested against the PES routine, i.e., it is tested whether the symmetry operations indeed lead to equivalent points, and also whether the final natpot has the same symmetries.

If an extended grid has been created for mode I, the complete extended grid of the mode is stored in this file in terms of coordinate values of the primitive coordinates of the mode: N columns, where N is the number of DOF of the mode, in the same order as defined in the NATPOT-BASIS section.

This file contains various timing statistics of the potential fitting run. For each subroutine, the number of times the routine is called, the total CPU time (user and system) spent in the routine and in all subroutines below it are listed, as well as the CPU time per function call.

One-mode cuts through Monte-Carlo points in SPP-sampling set J. One record contains the energies on all DVR points in mode I; all other coordinates are fixed at the Monte-Carlo point.

This file contains the integer indices of the DVR grid points that were used as sampling points to generate the SPPs. Each line in this file represents one sampling point. The indices are given in the same order as the DOF in the PRIMITIVE-BASIS-SECTION. If more than one trajectory is used, the files are numbered as they appear in the SAMPLING-SECTION.

Same as for dvrindex-spp[-I], but contains the sampling used for obtaining the coefficients.

Same as for dvrindex-spp[-I], but contains the sampling used for testing the natpot against the PES routine.

This file contains the single-point energies obtained from the sampling stored in dvrindex-spp[-I]. One energy per line obtained from the PES routine.

Same as energies-spp[-I], but for the coefficients.

Same as energies-spp[-I] but for the set that was used for testing the natpot against the PES routine.

If invert-method=eigen is used in the RUN-SECTION, this file contains the eigenvalues of the Monte-Carlo overlap matrix of the basis functions without mode I, in ascending order. This file is only written in the last iteration step.

If invert-method=eigen is used in the RUN-SECTION, this file contains the eigenvalues of the Monte-Carlo overlap matrix of the basis functions, in ascending order. This file is only written if the coefficients are finalized after the last iteration step (i.e., if sampling-coeff is set in the SAMPLING-SECTION).

When symmetrization is used, this file contains the mapping of mode grid-points. One symmetry operation per column in the same order as specified in the SYMMETRY-SECTION. Also includes extended grid points if present.

ASCII file containing the (V-Vcpd)^2 part of the mode-functional. The first column contains the iteration number; column m+1 contains the functional evaluated on the cuts through mode m after mode m has been optimized.
ASCII file containing the regularization part of the mode-functional. The first column contains the iteration number; column m+1 contains the functional evaluated on the cuts through mode m after mode m has been optimized.
ASCII file containing the total mode-functional. The first column contains the iteration number; column m+1 contains the total functional (the sum of the data in the two files above).


Compiling works just as for mctdh:
    compile [options] mccpd

OpenMP support

MC-CPD supports OpenMP. To enable it, use the -P option of the compile command
    compile -P mccpd
This will result in the executable mccpd86P.
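At run time, the number of threads and related settings are controlled by the usual OpenMP environment variables. A typical setup might look as follows; the values are illustrative and the input file name is hypothetical:

```shell
export OMP_NUM_THREADS=8     # number of OpenMP threads
export OMP_SCHEDULE=static   # static scheduling usually works well
export OMP_STACKSIZE=512M    # larger per-thread stacks help avoid segfaults
# then launch the OpenMP executable, e.g.:
#   mccpd86P input.inp
```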

MPI support

MPI support is enabled either by the -m compile option or by specifying an MPI compiler with the -c option.
    compile -m mccpd
The resulting executable will be mccpd86.mpi-compiler. MPI support can, of course, be combined with OpenMP.

ScaLAPACK support

Since a possibly large system of linear equations needs to be solved, MC-CPD can be linked to the ScaLAPACK library. The library must be available on the system; it does not come with the MCTDH package. Linking to it is partly automated by the -S option of the compile command.
    compile -S -m mccpd
Note that ScaLAPACK requires MPI. If the -S flag is used, an 'S' is added to the executable name.
With the default setup the linker will look in the standard system search path for the file 'libscalapack.a'. If the library is not in the search path or has another name, one can adjust the variable MCTDH_SCALAPACK_LIBS for the compiler in use in the compile.cnf file. Note that ScaLAPACK uses BLACS for communication, which is usually shipped with the ScaLAPACK package and compiled into the ScaLAPACK library. On some older systems, especially if ScaLAPACK is not self-compiled but installed from the system's package manager, BLACS may be installed in a separate library, usually called libblacs.a. If this is the case, one also needs to add the BLACS library in the compile.cnf file.
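If BLACS comes as a separate library, the corresponding variable in compile.cnf might be extended along the following lines (the variable name is taken from the text above; the exact existing contents depend on the compiler entry in use):

```
MCTDH_SCALAPACK_LIBS="-lscalapack -lblacs"
```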

External libraries

More sophisticated (possibly vendor-specific) linear algebra packages are available which may be very beneficial to use with mccpd. These are packages like ATLAS, Intel's MKL, AMD's ACML, the Cray Scientific Library, etc., which implement the LAPACK (some also the ScaLAPACK) API. To use these instead of the Netlib implementation, one needs to edit the compile.cnf file in $MCTDH_DIR/install and set up the environment accordingly. See also the -x option of the compile script.

We highly recommend linking against such an optimized LAPACK package when building mccpd.


  1. When using OpenMP parallelization there is a segmentation fault.
    Try setting and/or increasing the OMP_STACKSIZE environment variable.

  2. When using OpenMP parallelization I see much less CPU usage than theoretically possible.
    Did you use the correct executable (it ends with "P": "mccpd86P")?
    Try setting or changing the OMP_NUM_THREADS and OMP_SCHEDULE environment variables. Usually OMP_SCHEDULE=static should give good results. Also, try pinning the threads to specific cores; see, for instance, the OMP_PROC_BIND and OMP_PLACES environment variables.

  3. My results are garbage when using parallelization.
    Is the PES routine thread-safe? Try the no-OMPpotential keyword in the RUN-SECTION.

  4. I get different results for different runs with the same input.
    This can have different reasons, for instance the random nature of the Monte-Carlo sampling (different random seeds yield slightly different sampling sets) or, in parallel runs, a varying summation order of the floating-point operations.

  5. MCCPD won't compile with scalapack support.
    Make sure the ScaLAPACK libraries are present on the system. Depending on the installation, ScaLAPACK may come in two libraries, libblacs.a and libscalapack.a, or just in one library, libscalapack.a. If BLACS is linked separately, this library has to be added with -lblacs to the MCTDH_SCALAPACK_LIBS variable in compile.cnf. Also, make sure the $LIBRARY_PATH environment variable (or whatever search path the linker uses) is set correctly and the -DSCALAPACK compile flag is set.
    If special (tuned) libraries are used please refer to the manual of those libraries on how to link.

  6. ALS was converging at first but after a number of iterations the error grows again.
    This can happen if the Monte-Carlo overlap matrix of the CPD basis functions is singular or almost singular. Try more sampling points, a higher temperature, or adding a few percent of uniform sampling.