SHPC Condo PBS Resource Queues

The Atmospheric Radiation Measurement (ARM) group in the CADES Open SHPC Condo uses Moab/Torque to schedule jobs. The other SHPC Condo groups are in a separate Slurm partition and have different login nodes and batch directives.

This page describes PBS resource queues in the CADES SHPC ARM Condo environment. The hardware in each queue, queue access policies, quality of service specifications, and PBS directives required to access each queue are described. The two tables below list the technical specifications of each resource queue. For more information on submitting a job to a resource queue, view the Execute a Job page. The PBS directives required to submit to each queue are listed in the PBS Directives section near the bottom of the page.

PBS Queues

Name          # Nodes  Cores  Microarch.  RAM   Local Scratch  GPU             GPU Details
gpu_ssd       2        36     Broadwell   250G  1.8T           1x K80 (GK210)  2x 12G GDDR5, Kepler
arm_high_mem  28       36     Broadwell   250G  1.8T           N/A             N/A
Total: 30

Node Distribution

Group      Nodes
cades-arm  gpu_ssd: 2, arm_high_mem: 28

PBS Resource Queue Access

Scheduled nodes are reserved exclusively for a single user.

PBS Quality of Service (QoS) Specification

There are three Quality of Service levels that can be specified for a PBS job: standard (std), long, and development (devel). The table below shows the differences in maximum walltime and priority between these service levels:

ARM Condo PBS Partition

QoS Name  Priority  Max Walltime (DD:HH:MM:SS)
devel     4000      00:04:00:00
std       2000      02:00:00:00
long      1500      14:00:00:00
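As a hedged example (the exact walltime syntax accepted by this Torque installation is an assumption), a job could request the `long` QoS together with a walltime inside its 14-day limit:

```shell
#PBS -l qos=long
#PBS -l walltime=168:00:00   # 7 days (168 hours), within the 14-day limit for long
```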

Resource Queue PBS Directives

The PBS directives required to submit jobs to each resource queue for ARM are listed below.

📝 Note: Bracketed syntax such as `[ option_a | option_b | ... ]` indicates that you should pick exactly one of the options inside the brackets. Lines without brackets can be copied without any changes.

Atmospheric Radiation Measurement (ARM)

Standard PBS directives:

#PBS -W group_list=cades-arm
#PBS -A arm
#PBS -q [batch|gpu_ssd|arm_high_mem]
#PBS -l qos=[std|long|devel]
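Put together, a complete job script might look like the following sketch. The job name, node/core request, and walltime are illustrative assumptions; adjust them for your workload (see the Execute a Job page for full examples).

```shell
#!/bin/bash
# Example ARM Condo job script (hypothetical values where noted).
#PBS -N arm_example           # job name (example)
#PBS -W group_list=cades-arm
#PBS -A arm
#PBS -q arm_high_mem          # pick one: batch | gpu_ssd | arm_high_mem
#PBS -l qos=std
#PBS -l nodes=1:ppn=36        # one full 36-core node (example request)
#PBS -l walltime=04:00:00     # 4 hours (example)

cd "$PBS_O_WORKDIR"           # start in the directory the job was submitted from
echo "Running on $(hostname)"
```

A script like this is submitted with `qsub`, e.g. `qsub job.pbs` (the filename is arbitrary).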

📝 Note: If you do not see your group listed, it may be in the Slurm partition. Please contact the CADES team and include:
* UCAMS ID or XCAMS ID
* Contact information
* Reason for requesting an SHPC Condo allocation
* Name of your directorate and division

Slurm Transition

Moab has gone out of support, so all of the condo groups will eventually be transitioned to the Slurm partition. To prepare, ARM has Slurm nodes available so users can ready their codes for that transition.

ARM Condo Slurm Testbed

Slurm is available in the CADES Open SHPC Condo through load-balanced login nodes. These nodes can be reached via SSH directly from the ORNL network:

$ ssh <uid>

Use the following group-specific #SBATCH directives in your batch script. Full batch script examples can be found in Execute a Slurm Job.

#SBATCH -A arm
#SBATCH -p testing
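A minimal testbed script might look like the following sketch; the job name, node count, and walltime are illustrative assumptions, not required values.

```shell
#!/bin/bash
# Example ARM Slurm testbed script (hypothetical values where noted).
#SBATCH -A arm
#SBATCH -p testing
#SBATCH -J arm_slurm_example   # job name (example)
#SBATCH -N 1                   # one node (example request)
#SBATCH -t 01:00:00            # 1 hour walltime (example)

echo "Running on $(hostname)"
```

Submit the script with `sbatch`, e.g. `sbatch job.slurm` (the filename is arbitrary).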