
Comsol

Comsol is a multiphysics simulation software.

Comsol on Euler

On Euler the following versions are available via modules:

Version   Module command
6.4       module load comsol/6.4
6.3       module load comsol/6.3
6.2       module load comsol/6.2

Comsol licenses can be obtained through the IT shop of ETH.

Interactive session

You can start the graphical user interface (GUI) of COMSOL in an interactive job with X11 forwarding:

module load comsol/6.3
srun --pty --x11 [slurm options] bash

Once the interactive job has started, you can launch the COMSOL GUI:

comsol -np 1

If the COMSOL GUI runs slowly, please have a look at the troubleshooting section of this wiki page. Don't start the COMSOL GUI on a login node, as login nodes have few resources and are shared by many users.

How to submit a serial/parallel job

You need to submit your COMSOL jobs through the batch system. For a single processor calculation a typical command could look like:

sbatch [slurm options] --wrap="comsol batch -inputfile infile.mph -outputfile outfile.mph"

Here you need to replace [slurm options] with Slurm parameters for the resource requirements of the job. Please find documentation about the parameters of sbatch on the wiki page about the batch system.

Please note that if you don't specify an output file, the results are written to the .mph file given via the -inputfile parameter of COMSOL.

For every COMSOL license, ETH also has access to a COMSOLBATCH license. To use the COMSOLBATCH licenses, you need to add the -usebatchlic option:

sbatch [slurm options] --wrap="comsol batch -usebatchlic -inputfile infile.mph -outputfile outfile.mph"

Parallel jobs using shared memory

For parallel jobs using shared memory, you can specify the number of cores to be used with the -np option of COMSOL:

sbatch --ntasks=1 --cpus-per-task=4 [slurm options] --wrap="comsol batch -np 4 -inputfile infile.mph -outputfile outfile.mph"

Please make sure that the value of the -np option of COMSOL always has the same value as the product of --ntasks and --cpus-per-task. Please be careful to not use the -clustersimple option for shared memory jobs as this would make COMSOL start MPI processes instead of using threads.
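As a sketch, the product of --ntasks and --cpus-per-task can also be derived inside a job script from the environment variables Slurm sets, so the -np value does not need to be hard-coded (the ":-1" fallbacks are only a safety net for when the variables are unset):

```shell
# Derive the value for COMSOL's -np option from Slurm's environment
# (SLURM_NTASKS and SLURM_CPUS_PER_TASK are set by Slurm inside a job):
np=$(( ${SLURM_NTASKS:-1} * ${SLURM_CPUS_PER_TASK:-1} ))
echo "comsol batch -np $np -inputfile infile.mph -outputfile outfile.mph"
```

This keeps the COMSOL thread count consistent with the Slurm request even when you change the sbatch options later.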

If you plan to run parallel COMSOL jobs, we strongly recommend first doing a small scaling study to find the sweet spot for the parallel efficiency of your model. Since COMSOL can be strongly memory bound, the sweet spot might be as low as 4 cores (see the scaling study below).

Parallel jobs using distributed memory

If you would like to run COMSOL in distributed memory mode, then you need to add the -mpibootstrap slurm option to your COMSOL command:

sbatch --ntasks=4 [slurm options] --wrap="comsol batch -mpibootstrap slurm -inputfile infile.mph -outputfile outfile.mph"

Please be careful not to mix the -mpibootstrap slurm option with the -np option, as this would start too many threads. The -mpibootstrap slurm option tells COMSOL to get the required MPI information (hostnames etc.) directly from the batch system.

Parallel scaling

To evaluate the parallel efficiency of COMSOL, we performed a scaling study with a model of 150K mesh elements. The results show, at least for this model that requires a large amount of memory to run, that COMSOL simulations can be heavily memory bound and that in certain cases going above 4 cores can be counterproductive.

For this study, we performed the simulation on 1, 2, 4, 6, 8, 12 and 24 cores. All runs were performed twice. The table below shows the run times and the speedup.

Cores   Run 1 (s)   Speedup 1   Run 2 (s)   Speedup 2
1       26779       -           26811       -
2       15237       1.76        15193       1.76
4       6743        3.97        6719        3.99
6       11815       2.27        11752       2.27
8       13435       1.99        13421       2.00
12      19369       1.38        19365       1.38
24      18761       1.43        18667       1.44

In this case, the scaling is proportional to the number of memory channels of the CPU (and not to the number of CPU cores, as one might expect). When using 4 cores, we see an almost linear scaling, which is very close to the ideal case. Using more than 4 cores then increases the run time substantially. Therefore, it is very important to perform a small scaling study (you could try 1, 2, 4 and 8 cores) to find the sweet spot in terms of the number of cores for your COMSOL job.
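The speedup values in the table are simply T(1 core)/T(n cores). As a quick sanity check, they can be recomputed from the Run 1 times, for example with awk:

```shell
# Recompute the Run 1 speedups from the table above: speedup = T(1)/T(n)
t1=26779
for pair in "2 15237" "4 6743" "8 13435"; do
  set -- $pair
  awk -v t1="$t1" -v tn="$2" -v n="$1" \
    'BEGIN { printf "%d cores: speedup %.2f\n", n, t1/tn }'
done
```

The same ratio applied to your own timings tells you whether adding cores is still paying off.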

Troubleshooting

Opening a support request

Euler users can create an account on the COMSOL support website and mention that they are from ETH. The account will then be linked to the COMSOL license of ETH, and you will receive direct support from COMSOL.

Problems with the COMSOL GUI

We have noticed that in some cases the COMSOL GUI updates very slowly when started on Euler, or sometimes even crashes. If you experience this kind of problem, try starting the COMSOL GUI with software rendering:

comsol -np 1 -3drend sw

Parameters for parallel jobs

Please do NOT use the COMSOL parameters -mpmode, -nnhost and -mpi...

Theoretically you can combine the -np and -nn parameters, but unless you REALLY know what you are doing, we suggest avoiding it.

Finding the results of a simulation

When you run simulations solely through the COMSOL GUI, the results show up in the Model Builder tab under Results and Datasets. When you run the simulation on Euler, this is not the case. To load the results, click on Results in the upper menu bar, press the More Datasets button and choose Solution under Base Datasets; the results should then appear.

Managing Comsol files in the home directory

COMSOL creates various temporary and configuration files that can quickly consume the home directory quota. Here are the main file types and how to manage them:

Temp directory

COMSOL stores temporary data in /tmp by default, which should not be used on the cluster. The -tmpdir option specifies a different path; it should be set to $TMPDIR, which is created by the batch system for each job. If $TMPDIR is used, some scratch space needs to be reserved for the job (--tmp=YYY). Please find below an example of the usage of $TMPDIR:

sbatch -n 4 --time=36:00:00 --mem-per-cpu=2048 --tmp=20000 --wrap="comsol batch -tmpdir \$TMPDIR -inputfile infile.mph -outputfile outfile.mph"

Note: The entire COMSOL command is enclosed in quotes, and the $ needs to be escaped with a \ so that $TMPDIR is expanded on the compute node rather than at submission time.
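The effect of the backslash can be seen without sbatch. The first assignment expands $TMPDIR immediately in the submitting shell; the escaped form keeps the string literal so it is only expanded later, on the compute node:

```shell
# Without the backslash, $TMPDIR is expanded already in the shell where
# you run sbatch; with \$ the literal string is passed on to the job:
unescaped="comsol batch -tmpdir $TMPDIR"    # expands now (submission shell)
escaped="comsol batch -tmpdir \$TMPDIR"     # stays literal until the job runs
echo "$escaped"
```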

Configuration directory

COMSOL stores configuration information in the directory configuration located in $HOME/.comsol. It creates a huge number of directories for each COMSOL job, which can eventually exceed the file/directory quota of 100'000 for home directories. You can change the location where the configuration files are saved with the command line option -configuration. Please find below an example where these files are written to $TMPDIR:
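To check how close COMSOL has brought you to the file/directory quota, you can count the entries under $HOME/.comsol (this is a generic find invocation, not a COMSOL command):

```shell
# Count all files and directories under ~/.comsol; prints 0 if the
# directory does not exist (the home directory quota is 100'000 entries)
find "$HOME/.comsol" -mindepth 1 2>/dev/null | wc -l
```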

sbatch -n 4 --time=36:00:00 --mem-per-cpu=2048 --tmp=20000 --wrap="comsol batch -configuration \$TMPDIR/configuration/comsol_@process.id -inputfile infile.mph -outputfile outfile.mph"

Workspace directory

While a COMSOL job is running, it saves temporary files to the workspace directory located in $HOME/.comsol. This can be avoided by redirecting these files, for instance to $TMPDIR. Please find below an example:

sbatch -n 4 --time=36:00:00 --mem-per-cpu=2048 --tmp=20000 --wrap="comsol batch -data \$TMPDIR/data/comsol_@process.id -inputfile infile.mph -outputfile outfile.mph"

Recovery directory

COMSOL saves recovery files in the hidden directory $HOME/.comsol/recovery. Because your home directory is subject to a quota, we recommend either changing the location where the recovery files are saved or disabling this feature entirely. The options for the recovery files can be changed either in the COMSOL preferences window (permanently) or on the command line when submitting a job (only for that particular job).

To permanently disable the recovery feature, uncheck the "Save recovery file" check box in the COMSOL preferences window. If you would like to disable the recovery option just for a particular job, add the -autosave off option to your COMSOL command:

sbatch -n 4 --time=36:00:00 --mem-per-cpu=2048 --tmp=20000 --wrap="comsol batch -autosave off -inputfile infile.mph -outputfile outfile.mph"

If the recovery files should only be stored temporarily, saving them in your personal scratch directory is an option:

sbatch -n 4 --time=36:00:00 --mem-per-cpu=2048 --tmp=20000 --wrap="comsol batch -recoverydir /cluster/scratch/\$USER -inputfile infile.mph -outputfile outfile.mph"

Redirecting almost all files from $HOME/.comsol to $TMPDIR

We strongly recommend redirecting all files and directories from $HOME/.comsol to a different location (e.g., $TMPDIR). The COMSOL part of the command should in this case look as follows (note that -prefsdir is used here instead of -recoverydir):

"comsol batch -tmpdir \$TMPDIR -configuration \$TMPDIR/configuration/comsol_@process.id -data \$TMPDIR/data/comsol_@process.id -prefsdir \$TMPDIR/preferences/comsol_@process.id -inputfile infile.mph -outputfile outfile.mph"

COMSOL/MATLAB LiveLink

COMSOL Multiphysics can be integrated with MATLAB to extend its modeling with scripting in the MATLAB environment. LiveLink for MATLAB allows you to utilize the full power of MATLAB and its toolboxes for preprocessing, model manipulation, and postprocessing.

Preparation

In order to prepare your setup for LiveLink jobs, you first need to make sure that COMSOL knows which MATLAB version it is supposed to use:

  1. Start up the COMSOL GUI on one of the login nodes of the cluster.
  2. Go to the Options menu at the top of the window and choose Preferences.
  3. In the Preferences window, there is a navigation section on the left.
  4. Choose the last entry, LiveLink products.
  5. Make sure that the MATLAB path is set to the MATLAB version you would like to use together with COMSOL.

As the COMSOL preferences are saved, this step only needs to be done once. Please be aware that only certain combinations of COMSOL and MATLAB are officially supported, for instance:

  • COMSOL 6.2 and MATLAB R2023b

Other combinations might also work, but they are not officially supported by COMSOL. The path for MATLAB R2023b is:

/cluster/software/commercial/matlab/R2023b

You can check supported combinations on the COMSOL system requirements page.

Afterwards, you need to start the COMSOL server once in interactive mode on one of the login nodes in order to specify a username and password. With the default settings, you will not be asked for this password again, so you can specify anything for the username and password.

module load stack/2024-06 comsol/6.2
comsol server

After specifying the username and password, your setup is ready for COMSOL/MATLAB LiveLink jobs.

Running a LiveLink job

In a LiveLink job, you first need to start the COMSOL server and afterwards MATLAB. The best solution for running such a job is a runscript (for instance run.sh) that takes care of the multiple steps of the workflow.

run.sh could look like:

#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --mem-per-cpu=3000
#SBATCH -n 1

comsol server -silent -port 12345 -tmpdir $TMPDIR -login never &
sleep 10
matlab -nodesktop -nodisplay -singleCompThread -r my_matlab_script

wait

Here, the so-called Slurm pragmas (#SBATCH) are used to specify the resource requirements of the job inside the shell script. Then the COMSOL server is started in the background. With the -port option you can specify the port used by the COMSOL server to communicate with MATLAB. After starting up the COMSOL server, we recommend running a sleep command, as the server needs some time before it is ready to communicate with MATLAB. In the last step, MATLAB is started.
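The background/wait pattern of the runscript can be illustrated with plain shell commands; here, sleep and echo merely stand in for the COMSOL server and MATLAB:

```shell
# Minimal illustration of the pattern used in run.sh: start a long-running
# background process (standing in for "comsol server ... &"), do the
# foreground work (standing in for matlab), then wait for everything
sleep 2 &
server_pid=$!
echo "foreground work running"
wait "$server_pid"    # the script only ends once the background process exits
echo "all done"
```

The final wait in run.sh serves the same purpose: it keeps the job alive until the backgrounded COMSOL server has shut down.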

MATLAB script

The MATLAB script used in the LiveLink job also needs to contain certain information: it needs to know where to find COMSOL and through which port the communication should take place. You can find a minimal hello world example below:

addpath('/cluster/software/commercial/comsol/6.2/x86_64/mli');
mphstart(12345);
disp("hello world")
exit;

Please make sure that you use the correct path to the COMSOL version that you would like to use in the LiveLink job and that the port number is the same as in the runscript that is described above.

Since the COMSOL server is running in the background, it would otherwise continue to run until the run time limit specified to Slurm is reached. It is therefore important that the MATLAB script has an exit command at the end, as this causes the COMSOL server to stop and the job to finish.

Running multiple COMSOL servers in one job

In the following example, three COMSOL mphservers are started, using different ports. If multiple COMSOL instances are running on the same compute node, only one license is checked out. This setup therefore helps to reduce the number of licenses used when running multiple simulations.

#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --mem-per-cpu=3000
#SBATCH -n 12

comsol mphserver -np 4 -silent -port 2036 -tmpdir $TMPDIR -login never &
comsol mphserver -np 4 -silent -port 2037 -tmpdir $TMPDIR -login never &
comsol mphserver -np 4 -silent -port 2038 -tmpdir $TMPDIR -login never &
sleep 15

cd stage1       
matlab -nodesktop -nosplash -singleCompThread -r my_matlab_script1 -logfile log < /dev/null &
cd ..

cd stage2            
matlab -nodesktop -nosplash -singleCompThread -r my_matlab_script2 -logfile log < /dev/null &
cd ..

cd stage3           
matlab -nodesktop -nosplash -singleCompThread -r my_matlab_script3 -logfile log < /dev/null &

wait

Please note that each of the MATLAB scripts needs to use the port specified for the corresponding comsol mphserver instance.

Submit the job

In order to submit such a job using the runscript run.sh, use the following command:

sbatch < run.sh