Installation

Requirements

Quick setup

Start by cloning the git repository

cd ~/path_to_somewhere
git clone git@gitlab.com:materials-modeling/storq.git

Next, you can either carry out a standard install via pip

pip install ~/path_to_somewhere/storq

or manually add the paths to storq and its associated binaries to your .bashrc file

export PYTHONPATH=$PYTHONPATH:~/path_to_somewhere/storq
export PATH=$PATH:~/path_to_somewhere/storq/bin

Next, you need to generate a configuration file. storq comes with a command-line interface that can take care of the bulk of the configuration for you. To initiate the automatic configuration, run

storq configure --auto

The configuration process will alert you to any settings that could not be detected automatically and hence need to be provided manually. Some settings are optional, in which case only a warning is issued, while others are critical for the operation of storq and are therefore marked as “fatal”. Note that it is not uncommon for the automatic setup to fail to find the VASP binaries and POTCAR files, since it can be hard to search for many different names across potentially more than one file system. For the POTCAR files in particular, storq requires a folder that contains subdirectories named potpaw_<XC>, where <XC> can be PBE, LDA, or GGA, and each subdirectory must contain the corresponding POTCAR files. Naturally, if you wish to use PBE setups exclusively, only potpaw_PBE needs to exist.
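
For illustration, a hypothetical potential folder (here ~/path_to_potentials, i.e. the path that the vasp_potentials option described further below would point to) might be laid out as follows; the element subfolders correspond to the standard layout of the VASP pseudopotential distribution:

~/path_to_potentials/
    potpaw_PBE/
        Si/POTCAR
        C/POTCAR
        ...
    potpaw_LDA/
        ...
    potpaw_GGA/
        ...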

Once you have completed the automatic configuration (and addressed any of its complaints), it might be necessary to set your mpirun command in order to be able to run calculations. This is the case if your supercomputing resource uses a wrapped version of the default mpirun command (e.g., mpprun on NSC’s Triolith or aprun on PDC’s Beskow). To change this option, open the JSON configuration file with

nano ~/.config/storq/vasp.json

or alternatively,

storq configure --f vasp

and change the value of the mpi_command field as needed. It can furthermore be convenient to add a default allocation for your jobs to run on (you can override this later from your ASE script). To select an allocation, set the batch_account field to the name of your account (e.g., snic20XX-X-XX on a SNIC resource).
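
As a rough sketch, the relevant part of vasp.json might then read as follows; the values shown (a wrapped mpprun command and a SNIC-style account name) are placeholders, and the actual file contains further fields not shown here:

{
  "mpi_command": "mpprun",
  "batch_account": "snic20XX-X-XX"
}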

More about configuring storq

This section contains a more thorough description of the configuration process and the available options.

The VASP configuration file and its options

The automatic configuration process will generate two configuration files named vasp.json and site.json and place them under ~/.config/storq. You can access and edit these files at any time using the commands below (note that the files will be opened in the editor pointed to by your EDITOR environment variable; if it is not set, the default is vi)

storq configure --f vasp
storq configure --f site
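
If you prefer a different editor, you can set the EDITOR variable accordingly, for example in your .bashrc (nano is used here purely as an illustration):

export EDITOR=nano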

The generation of the site.json file should be completely automatic. The file itself contains information about the cluster environment (e.g., the number of cores per node). The vasp.json file holds information on where to find VASP binaries, pseudopotentials etc. and controls the general behavior of the calculator. All settings found in this file can be overridden from Python, but in practice only a few settings (walltime, number of nodes etc.) need to be changed from script to script. To access and change the storq configuration from within Python, use the Vasp.configure(key=value) method.
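
As a minimal sketch, a per-script override might look as follows; note that the import path is an assumption and should be adjusted to however the Vasp calculator is exposed in your installation:

# NOTE: the import below is an assumption; adjust it to the location of the
# Vasp calculator class in your storq installation.
from storq import Vasp

# Override a few per-job settings from the script instead of editing vasp.json.
Vasp.configure(batch_walltime="02:00:00", batch_nodes=2)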

Below follows a list of all currently available options for vasp.json; an example file is sketched after the list.

  • batch_account: allocation on supercomputing resource to be used

  • batch_options: space-separated string of additional options passed to the sbatch command

  • batch_walltime: walltime of job passed as a string in the format HH:MM:SS

  • batch_nodes: number of nodes to be used for the calculation

  • mpi_command: command used to launch VASP in parallel (e.g., mpirun, or a wrapped variant such as mpprun or aprun; see above)

  • mpi_options: additional options passed to the MPI command

  • user: user name on the supercomputing resource

  • vasp_convergence: strictness level of the calculator’s convergence checks; possible values: basic, strict

  • vasp_executable_std: path to the standard VASP executable

  • vasp_executable_gam: path to the Gamma-point-only VASP executable

  • vasp_executable_ncl: path to the non-collinear VASP executable

  • vasp_mode: “queue” or “run”; determines whether jobs are submitted via the queue or run directly

  • vasp_potentials: path to the folder containing the potpaw_<XC> subdirectories with POTCAR files, where <XC> is PBE, LDA, or GGA

  • vasp_restart_on: possible values: convergence

  • vasp_stdout: name of file to which the stdout from VASP is written; default: vasp.out

  • vasp_validate: if true, keywords passed to the calculator will be type checked

  • vasp_vdw_kernel: path to the vdw_kernel.bindat file needed for calculations using vdW-DF functionals
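
To tie the options together, a hypothetical vasp.json might look roughly as follows; all paths and values are placeholders and only a subset of the fields is shown:

{
  "batch_account": "snic20XX-X-XX",
  "batch_walltime": "02:00:00",
  "batch_nodes": 1,
  "mpi_command": "mpirun",
  "vasp_convergence": "basic",
  "vasp_executable_std": "/path/to/vasp_std",
  "vasp_mode": "queue",
  "vasp_potentials": "/path/to/potentials",
  "vasp_stdout": "vasp.out",
  "vasp_validate": true
}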

A note on working with multiple file systems

Many supercomputing resources feature multiple file systems. A common setup is having one distributed system and one parallel system (e.g., Lustre). In this case one needs to be careful when setting up storq to ensure that the necessary files are visible across file systems or, alternatively, located on the same file system.

As a concrete example, consider PDC’s Beskow system, which uses a distributed file system called AFS as well as a parallel Lustre system. Software such as VASP must be installed on Lustre in order to run, while the user home directories and configuration files reside on AFS. Even if storq is installed on Lustre, the automatic configuration process will place the configuration files under $HOME on AFS. To resolve this situation, one can move the .config folder to Lustre and place a symbolic link in AFS that points to the new location. Arguably, a better option in this case is to keep the .config folder on AFS but move it from $HOME to a subfolder $HOME/Public that is set to be visible from Lustre, and then place a symbolic link in $HOME.
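
A minimal sketch of the latter approach, assuming that $HOME/Public already exists and is readable from Lustre, could look like this:

# Keep the configuration on AFS, but move it out of $HOME into a folder
# that is visible from the Lustre file system.
mv ~/.config ~/Public/

# Leave a symbolic link behind so that storq still finds ~/.config.
ln -s ~/Public/.config ~/.config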