General FAQs

The Command string can specify any command or set of commands that you need to run the software you have selected. The commands you specify may include scripts that you have uploaded as Input Files for the job. Multiple commands should be separated by semicolons. Your Command string or script may call your chosen software multiple times, and using a script allows you to specify a more complicated workflow.
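As a minimal illustration of a semicolon-separated command string, the echo commands below stand in for hypothetical pre-processing, solve, and post-processing steps; each runs in sequence:

```shell
# Three stand-in steps separated by semicolons; each runs in order
echo "preprocess"; echo "solve"; echo "postprocess"
```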

This Basic Job Example shows a simple use of a script in the command line. This STAR-CCM+ Example calls the simulation software starccm+ directly. This LS-DYNA Example uses the Rescale ls-dyna wrapper script to simplify the command line for launching LS-DYNA; the ls-dyna script is described in detail on FAQ: LS-Dyna.

A Command Template contains suggestions to help you form your Command string and run the software you have selected. Templates may be used to invoke:

  • the Standard or serial version of an application
  • the SMP command
  • the MPI command

Codes that support hybrid parallelization can use the settings from the MPI Command Template.

The Command Template itself consists of either:

a) one of the standard commands used to run the software you have selected, typically including the command-line options that dictate the number of threads or processes to be used. The COMSOL Multiphysics template is an example:

comsol -clustersimple -f $HOME/machinefile batch -inputfile <input-file>

b) a wrapper script command. Wrapper scripts are provided by Rescale to simplify the task of running the software you have selected in a way that helps you make efficient use of the hardware you have chosen. An example of a wrapper script template is LS-DYNA:

ls-dyna -n <mpi-ranks> -s <smp-ranks> -i <input> -p <precision> -a <other args>

We use the < > notation to show fields that you would typically be expected to replace with a real value before submitting a job. You do not always need to use all of the fields: if one is irrelevant to your situation, you can delete it. For instance, a code may support hybrid mode, but if you plan to use MPI alone, you can omit the SMP field.
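For example, an MPI-only run of the LS-DYNA template above might be filled in like this (the rank count, input file name, and precision are illustrative), with the <smp-ranks> and <other args> fields deleted entirely:

```
ls-dyna -n 16 -i input.k -p double
```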

On Rescale, if the standard command line for a software package is particularly complex, we provide a wrapper script to allow you to access the correct executable more easily. One example of this is LS-DYNA, where the name of the executable you need to call depends on the software version, whether you choose to use single or double precision, and whether you use SMP, hybrid or MPP parallelization. Each wrapper script typically takes a number of simple command line options intended to help you call the exact executable you need and to utilize the hardware you have selected.

Use of wrapper scripts is entirely optional. If you like, you can log in to a cluster and view a wrapper script's content by following the instructions at Connecting to Your Cluster. View your software's wrapper script source by using the which command to locate the script, and the cat command to display its contents.
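As a sketch of the which/cat approach, the snippet below creates a stand-in script so it is self-contained; on Rescale you would use the real wrapper name (for example, ls-dyna) in place of my-wrapper:

```shell
# Create a stand-in wrapper script on $PATH (illustrative only)
demo_dir=$(mktemp -d)
printf '#!/bin/bash\necho "wrapper ran"\n' > "$demo_dir/my-wrapper"
chmod +x "$demo_dir/my-wrapper"
export PATH="$demo_dir:$PATH"

which my-wrapper            # locate the wrapper script
cat "$(which my-wrapper)"   # display its source
```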

Command Templates are presented on each job's Software Settings page. The Command Template presents the basic options available for Standard, SMP, and MPI modes where applicable. Wrapper scripts that offer additional options, such as those for Converge and LS-DYNA, are described in more detail in the Frequently Asked Questions section of our resources page.

Yes, you can. Most of the time, the executable you need will already be on the $PATH environment variable when your command line or script runs. If necessary, you can log in to a cluster to locate an executable not listed in that software's Command Template by following the instructions at Connecting to Your Cluster, or contact us directly at Rescale Support.

At runtime, host and machine files in the most common formats are located in the $HOME folder of your workspace. If your command line needs a host file as input data, you can specify it by name along with the $HOME prefix. For instance:

--hostfile $HOME/machinefile.openmpi

The **Command Template** for the software you have selected will normally refer to the exact host file you will need. If you list the contents of $HOME you will find the following host files:

hosts, mpd.hosts, machinefile, machinefile.gpu, machinefile.openmpi,
rhosts, mpd.hosts.string, PCF.xml

Some software requires a special file or format for host information. Wherever possible, Rescale creates environment variables and/or specific files as necessary for such software. For example, when you run Abaqus, Rescale places the host information in the correct format in the abaqus_v6.env file for you. When you run ANSYS, the $MACHINES environment variable contains the hosts information in the necessary format.

Sometimes software requires that the full path to a host (or other) file be placed inside one of your input files. On Rescale, you won't know what the path will be in advance, but you will have access to environment variables representing it. The Shipflow example shows how you might use a sed command to substitute the string

parallel( nprocesses=2, nthreads=1, hostfile="$HOME/machinefile.openmpi" )

in the file hamb_def with its replacement

parallel( nprocesses=2, nthreads=1, hostfile="/enc/uprod_mtmHV/machinefile.openmpi" )

at runtime.
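A self-contained sketch of that substitution, using a throwaway copy of the input file (hamb_def and the parallel(...) line are taken from the Shipflow example above):

```shell
cd "$(mktemp -d)"
# Recreate the relevant line of the input file; single quotes keep $HOME literal
echo 'parallel( nprocesses=2, nthreads=1, hostfile="$HOME/machinefile.openmpi" )' > hamb_def

# Replace the literal $HOME token with the expanded home-directory path.
# '|' is used as the sed delimiter so '/' in the path needs no escaping.
sed -i "s|\$HOME|$HOME|g" hamb_def

cat hamb_def    # the hostfile path is now absolute
```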

No. GPUs have hundreds of cores. Our pricing for a cluster with GPUs is based on the number of associated CPU cores on the cluster, not the number of GPU cores. A Rescale cluster with GPUs consists of computing nodes with multiple CPUs (typically 8) and one or more GPUs (CUDA-capable devices) per node. There is one GPU per node in the case of Kepler, and two for the Tesla (M2050).

The standard Rescale cluster lifecycle is described in Connecting to Your Cluster. If you have chosen not to terminate the cluster automatically when your job completes, then only the master node will be kept alive for you. The slaves will be disconnected from your cluster, and only the master node listed in the host files will be reachable. If you try to ssh from the master to a slave node at this stage, you may see the message "no route to host". This is the standard and expected behavior once your multi-node cluster command line and its child processes are complete.

If you want to keep all nodes available for the duration of your login session, you can modify your command script using wait and standard Bash job control features to prevent your command line script from completing and triggering slave node shutdown until you give it a signal from your SSH login session. Contact Rescale Support for more details.
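A minimal sketch of the job-control idea, with sleep standing in for your real multi-node command (the hold-open mechanism you choose, such as waiting for a signal or for a sentinel file, is up to you):

```shell
sleep 1 &            # stand-in for your real multi-node command
SOLVER_PID=$!
wait "$SOLVER_PID"   # the script blocks here until the solver exits
echo "solver finished"
# ...insert your own hold-open loop here, e.g. wait for a signal or for
# a sentinel file to be removed from your SSH login session
```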

Rescale "cores", with the exception of the "Marble" and "Jade" core types, map to physical cores on the compute node. CPU usage is calculated over the total number of logical cores on the system, so if you are distributing your processes only on physical cores, you will see a maximum of 50% utilization.

Under most circumstances, running a number of tasks equal to the number of physical cores is the most efficient choice. Some software can make efficient use of the logical cores, in which case running twice as many tasks can be useful. Making use of these extra logical cores may speed up or slow down your computation; run small test cases before running production jobs to find the best settings for your code. The following commands report the two core counts:

# Count logical cores: lscpu -p prints one line per logical CPU
LOGICAL_CPU_COUNT=$(lscpu -p | egrep -v '^#' | wc -l)
# Count physical cores: de-duplicate on the core, socket, and node fields
PHYSICAL_CPU_COUNT=$(lscpu -p | egrep -v '^#' | sort -u -t, -k 2,4 | wc -l)
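Putting the two counts to use, a sketch that defaults the task count to the number of physical cores, per the guidance above (my_solver and the launch line are placeholders, not a Rescale-provided command):

```shell
LOGICAL=$(lscpu -p | egrep -v '^#' | wc -l)
PHYSICAL=$(lscpu -p | egrep -v '^#' | sort -u -t, -k 2,4 | wc -l)
echo "logical=$LOGICAL physical=$PHYSICAL"

NP=$PHYSICAL                      # one task per physical core by default
# mpirun -np "$NP" ./my_solver    # placeholder launch line
```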

The Linux command lscpu is available on Rescale compute nodes. This command gathers CPU architecture information such as the number of CPUs, threads, cores, sockets, NUMA nodes, information about CPU caches, CPU family, model, bogoMIPS, byte order and stepping from sysfs and /proc/cpuinfo, and prints it in a human-readable format. It supports both online and offline CPUs. It can also print out in a parsable format, including how different caches are shared by different CPUs, which can be fed to other programs.

For example, the Rescale Hardware Settings below are for a compute node with one Rescale core of type "Nickel".


For this hardware selection, lscpu will report (amongst other information):

    CPU(s):                2
    On-line CPU(s) list:   0,1
    Thread(s) per core:    2
    Core(s) per socket:    1
    Socket(s):             1
    NUMA node(s):          1

This is equivalent to one physical core and two logical cores. If your code benefits, you may choose to run with two threads.