3.4. QM Management

3.4.1. Scientific parameters

GlobalParameter.redo_QM

Type [string]

If set to 'redo', the QM calculation inputs are written in the first part for every molecule that needs a QM calculation. The QM calculations should then be performed in the second part, to be read in the third part. This is the default.

If set to 'do_not_redo', Frog will check whether the QM calculation for a molecule has already been performed. For each molecule whose QM calculation is required, it tries to open the expected Dalton result file and checks whether the result is readable – meaning the QM calculation ended properly. If it is, the QM input for this configuration is not written, and the QM calculation is considered already performed. The results read in the third part are the ones written in this file.

Warning

Frog does not check whether the QM parameters for the target molecule (like the functional used) or the neighborhood are the same in the already available file and in the current input parameters. If you have a doubt, you should use redo_QM = 'redo'.

Example

GP.redo_QM = 'redo'
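
Or, to reuse QM results already available on disk:

GP.redo_QM = 'do_not_redo'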

GlobalParameter.preference_functional

Type [list]

Defines which functional should be used in the case where several molecules, possibly of different MTs, are merged into one QM calculation.

Example

GP.preference_functional = ['FunctionalA', 'FunctionalB']

In this case, if one molecule has 'FunctionalA' as its functional and the other has 'FunctionalB' in its QMParameter, the functional used for the merged QM calculation of the two molecules will be 'FunctionalA', since it comes first in the preference list.


3.4.2. Numerical parameters

You can change the following parameters from one Frog run to another to deal with the submission of QM calculations safely:

GlobalParameter.file_template_script_run_QM

Type [str]

The file used to create every submission script. The lines relative to the QM run are added at the end of this script. Please note that the file name is updated with GP.general_path. The created submission files are written in the GP.dir_submission_file directory.
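
Example

A minimal sketch; the file name 'template_run_QM.sh' is illustrative and should point to your own cluster submission template:

GP.file_template_script_run_QM = 'template_run_QM.sh'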


GlobalParameter.scratch_dir

Type [str]

Defines the directory where the temporary Dalton files will be written. Note that this directory is NOT the one where the results of the QM simulations are stored. By default, if the QM simulation ended without an error, these temporary files are deleted.

However, if many QM calculations are running at the same time, a large amount of disk space can be used. Therefore, we recommend using a /tmp or a /scratch directory to perform these calculations.

Today, the chosen implementation is to write in the submission script the line: 'export DALTON_TMPDIR=' + scratch_dir.

Therefore, we recommend using scratch_dir = '$SCRATCH_DIR', and defining the variable SCRATCH_DIR in the GP.file_template_script_run_QM template. This way, you can define very precisely where the temporary files should be written within your (cluster) submission file. You can for instance define an automatic selection within the submission file to choose which scratch directory to use depending on the cluster/node it is run on.
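
Example

Following the recommendation above; this assumes the variable SCRATCH_DIR is defined in the GP.file_template_script_run_QM template:

GP.scratch_dir = '$SCRATCH_DIR'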


GlobalParameter.max_submission_QM

Type [int]

The number of jobs that can be prepared by the software to perform the QM simulations. This option has been made to avoid trouble when submitting a lot of jobs to a cluster – for instance sending 100 000 jobs at once…

Using this option, you might not be able to perform all the QM simulations at the same time. If this is the case, you should wait until the QM calculations already sent have ended, then re-run the programme to treat the rest of the QM simulations and resubmit them to the cluster.

Example

GP.max_submission_QM = 100

GlobalParameter.nbr_job_parr_QM

Type [int]

The number of QM simulations that run at the same time on a server.

For example, set it to 1 to have one QM simulation running for every job submitted to the cluster – for single-core CPUs.

Set it to 8 to have 8 QM simulations running for every job submitted to the cluster – designed to run on multi-core CPUs. Note that no memory sharing is needed to perform several QM simulations on a single server (no OpenMPI mandatory) since every QM simulation can be performed independently.

The maximum number of QM simulations which can be launched simultaneously is: nbr_job_parr_QM * max_submission_QM.
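
For instance, with GP.max_submission_QM = 100 and GP.nbr_job_parr_QM = 8, at most 100 jobs are submitted to the cluster, each running 8 QM simulations: at most 800 QM simulations run at the same time.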

Note

Please be aware that the memory needed for every QM simulation can be large: check the RAM available and where you write the temporary Dalton files. For instance, if 100 QM simulations are running at the same time on the cluster, a lot of file reading/writing will occur (temporary Dalton files) and may slow down the cluster. A /tmp or a scratch directory should be used for these temporary files, see the scratch_dir variable.

Note

The template used to define how to send jobs to the cluster (GP.file_template_script_run_QM) must be consistent with nbr_job_parr_QM. If the template asks for only 4 cores while nbr_job_parr_QM = 8, you may have some trouble. See the Tutorials.

Example

GP.nbr_job_parr_QM = 16

GlobalParameter.command_launch_job

Type [str]

The command used to launch a submission script into your cluster's waiting queue, for instance 'sbatch' or 'qsub'.
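
Example

Assuming a SLURM scheduler; use 'qsub' or your own scheduler's command where appropriate:

GP.command_launch_job = 'sbatch'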