|OSU Tier 3|
Connect to the Cluster
Setting Up Your Environment
Submit a CRAB job to the OSU cluster
Naming Conventions & Storage
For additional information about working on a T3, visit the UMD T3 User Guide
To get an account on OSUT3, email Andrew Hart. Include the username you would like to have on the T3.
After you've received an account, you must first log in to the head node before doing anything else:
$ ssh -l <username> cmshead.mps.ohio-state.edu
where <username> is the username you have been assigned for the Tier 3. You will be prompted to create a new password; please do so. Then you will be automatically logged off.
Next, log in one more time to cmshead. This time, you will be prompted to generate a public/private rsa key pair. If you do not wish to use this functionality (we encourage you not to use it), simply hit return three times (for the file, password, and password confirmation).
From this point on, only access cmshead to manage your user account. All analysis should be performed on cms-in0. NEVER RUN RESOURCE INTENSIVE JOBS (such as cmsRun) ON cmshead.
To log into the interactive node (where all analysis is done), type:
$ ssh -l <username> cms-in0.mps.ohio-state.edu
Now that your account has been created, you have two spaces in which to save files:
The first is simply your home directory. The second is a directory hosted on disk-0-1. Currently, neither is limited by user quotas, so please be courteous to the other users of these disks. The home directory is backed up once a week, but the data on /store is never backed up. You are responsible for backing up any critical files on /store off-site.
|Connect to the cluster
From a Linux machine:
$ ssh -X <username>@cms-in0.mps.ohio-state.edu
From a Windows machine:
PuTTY provides an ssh client for Windows machines. Configuring PuTTY after installation varies a bit from version to version, but the most important setting is the host name: cms-in0.mps.ohio-state.edu. You will also want to turn X11 forwarding on, typically under Connection->SSH->Tunnels and you may want to set your username, typically under Connection. Your settings can be saved for future sessions.
Xming is a light-weight X11 emulator that works with PuTTY, and it comes bundled with some versions of PuTTY. It is needed for any software running on the cluster that pops up windows on your machine, such as ROOT. If you plan to run emacs, installing the additional Xming font executable is strongly recommended. If you plan to use Fireworks or other sophisticated graphics software, installing Xming-Mesa is required.
|Setting Up Your Environment
Currently (as of early 2011), the CMS community is "straddling" the 32-bit/64-bit divide: all of the software, data, and MC from 2010 is designed for 32-bit architecture, while all of the software, data, and MC from 2011 is 64-bit.
Unfortunately, this means that two different architectures must be installed side by side, so things aren't as automated as they have been (and will be again once all of the 32-bit code goes away).
For users on the T3, this simply means you must manually select which architecture you want to use. This is done by running one of these scripts each time you log in:
To obtain 32-bit CMSSW environment:
$ source /usr/local/bin/32bit.sh (for bash)
$ source /usr/local/bin/32bit.csh (for tcsh)
To obtain 64-bit CMSSW environment:
$ source /usr/local/bin/64bit.sh (for bash)
$ source /usr/local/bin/64bit.csh (for tcsh)
Until one of these scripts is run, your $PATH is missing important locations, so most scripts and applications will not run.
If you ever need to switch between architectures, simply run the other script. These instructions are also displayed in the welcome message when you first log in.
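The choice among the four scripts above is mechanical, so it can be wrapped in a small helper. The function below is purely a hypothetical convenience (it is not installed on the T3): it maps a desired architecture and shell to the corresponding script path listed above.

```shell
# Hypothetical helper (not provided on the cluster): print the setup
# script path for a given architecture ("32" or "64") and shell
# ("bash" or "tcsh").
arch_script() {
  local bits=$1 shell=$2 ext
  case "$shell" in
    bash) ext=sh ;;
    tcsh) ext=csh ;;
    *) echo "unsupported shell: $shell" >&2; return 1 ;;
  esac
  echo "/usr/local/bin/${bits}bit.${ext}"
}

arch_script 64 bash
# On the cluster you would then run: source "$(arch_script 64 bash)"
```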
A CMSSW working environment is created and jobs are executed on the T3 just as on any other cluster:
$ scram list CMSSW
$ cmsrel CMSSW_X_Y_Z
$ cd CMSSW_X_Y_Z/src/subdir
$ cmsenv
$ cmsRun yourConfig_cfg.py
where yourConfig_cfg.py is the CMSSW configuration file you would like to run. Further details on CMSSW can be found in the Workbook and in the tutorials.
NOTE: It is necessary to complete the steps up to and including cmsenv before ROOT will be placed on your path and execute correctly.
|Submit a CRAB job to the OSU cluster
If you're located remotely, you may want to submit jobs to the OSU T3 cluster via CRAB rather than Condor. CRAB jobs can be submitted from any computer that has the grid client tools, including CRAB, installed. Consult your site admin if you do not know the appropriate commands to set up the grid and CRAB environment. You need to set two parameters in your crab.cfg file:
se_white_list = T3_US_OSU
ce_white_list = T3_US_OSU
Sometimes syntax for ce_white_list changes. Other common styles are:
ce_white_list = osu.edu, T3_US_OSU
ce_white_list = osu.edu
If CRAB jobs claim they cannot match, try modifying your white list syntax.
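Putting the whitelist settings together, the relevant fragment of crab.cfg might look like the following. This is a sketch of only the grid-related section; the rest of the file (dataset, CMSSW config, output handling) is whatever you normally use:

```
[GRID]
se_white_list = T3_US_OSU
ce_white_list = T3_US_OSU
```

If jobs fail to match with this form, substitute one of the alternate ce_white_list styles shown above.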
The OSU T3 cluster must have the version of CMSSW that you are using installed and must be hosting the data you are attempting to run over (production jobs require no input data). You may request to have a CMSSW version installed by contacting Andrew Hart and may bring DBS-registered data to the cluster by submitting a PhEDEx request (see Naming Conventions & Storage).
|Naming Conventions & Storage
Any dataset that may be useful to other members in the group should be stored on the T3 at
Similarly, any MC set is stored at
Files in these directories are named using the conventions given here.
For other operations, such as running analysis, copying files to and from OSU T3, using CVS, or installing grid certificates, follow the instructions at:
and change any instances of the server name from "hepcms.umd.edu" to "cms-in0.mps.ohio-state.edu".
Our Tier 3 is modeled closely after the Tier 3 at UMD so after logging in & setting up your environment, everything else should generally work the same way. If you find that something isn't working as expected, email the author of this OSU tutorial, Marissa Rodenburg.
|OSUT3 Administrator: ahart at cern dot ch
|Web Administrator: mlr at mps dot ohio-state dot edu|
|The Ohio State University|