Vital-IT

How to run jobs on Vital-IT: hints and good practice

When you have a large number of jobs to run, running them on the Vital-IT cluster is often better than running them on shoshana or maya.

Running 300+ jobs on a 16-processor machine makes your jobs compete with each other: each job will not get 100% of a processor, but will share the resources with the others.

If your jobs take only a few minutes to complete, this might not be an issue.

Any job that does not require a massive amount of memory (more than 7-8 Gb) can easily be run on the Vital-IT machines. For huge-memory jobs a few machines are available, although only one (rserv) is comparable to shoshana or maya.

Prerequisites

Before working on (or crashing) Vital-IT, you will need an account.
You can ask for one here [1].

Ways to submit jobs

You can submit jobs through:

  • a web interface [2]
  • a python script (wsub.py), with documentation available at wsub-python [3]
  • the bsub command, after logging on to a front-end node (dev.vital-it.ch or prd.vital-it.ch) [4]

Being nice

PLEASE DO NOT RUN ANY COMPUTATION ON THE FRONT-END NODES (dev, prd) !!!
These front-end nodes are only for submitting jobs and do not have the resources to let you run your jobs interactively.
For interactive and/or heavy computation, you can log on to rserv.vital-it.ch or noko01.vital-it.ch.
The jobs on these machines will share the resources (RAM, CPU, I/O) with all other users' jobs.

Installed software

Various bioinformatics software packages are installed on Vital-IT; check the list here [5]. These include:

  • R ( /mnt/common/R-BioC/install/Linux/x86_64/R-2.8.0/bin/R or /mnt/common/R-BioC/install/Linux/ia64/R-2.8.0/bin/R )
  • Plink, EigenStrat, Merlin ...
  • Raxml, Phylip, phyloBayes, phyml, treefinder ...
  • Emboss
  • lots of sequence analysis tools ( t-coffee, paralign, hmmer, pftools, clustalw, blast, ssaha, blat, fasta, tagger ...)

Matlab is not yet installed (mainly due to a licensing problem). One alternative is to compile the Matlab code on shoshana/maya and run the compiled version on rserv.vital-it.ch.
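
A rough sketch of that workflow, assuming the MATLAB Compiler (mcc) is licensed on shoshana/maya and a matching MATLAB Component Runtime is installed on rserv; the script name and paths are hypothetical:

# on shoshana/maya: compile myanalysis.m into a standalone executable
mcc -m myanalysis.m

# copy the executable and its generated wrapper over to Vital-IT
scp myanalysis run_myanalysis.sh rserv.vital-it.ch:bin/

# on rserv: run it via the wrapper, giving the path of the installed runtime
./run_myanalysis.sh /path/to/MCR input.txt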

Bsub in a nutshell

Submitting a simple job

bsub "sh myscript.sh > mylog"
Job <903956> is submitted to default queue <normal>.

This submits the job to the cluster and returns its job id.

Here all output is redirected to mylog, but you can also send STDOUT and STDERR to separate files with:

bsub -e myerrorfile -o myoutputfile "sh myscript.sh"
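
For reference, the myscript.sh used in the examples on this page can be any ordinary shell script. A purely hypothetical sketch (the blast call and file names are placeholders; blast itself is listed among the installed software):

#!/bin/bash
# myscript.sh -- a hypothetical example; replace the body with your real commands
set -e                                       # abort on the first failing command
INPUT=${1:-input.txt}                        # optional input file argument (used by the job array example further down)
blastall -p blastp -d mydb -i "$INPUT" -o "$INPUT.blast"    # e.g. a protein blast against a database called mydb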

Submitting a job to a queue

You can assign a job to a specific queue with the -q option:

bsub -q normal "sh myscript.sh"      # only for jobs needing less than 24h 
bsub -q long "sh myscript.sh"        # for long jobs

By default, each Vital-IT job is submitted to the normal queue, which has a run-time limit of 24 hours; after 24h the job is killed automatically. For longer jobs, you can submit to the long queue, which has no time limit but a lower priority. This priority score (known as LSF shares) defines how soon a submitted job will start running: the more jobs you submit and the more CPU time you have already used, the lower your priority score becomes.
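
To see which queues exist and what their limits are, the standard LSF bqueues command can be used from the front-end nodes (generic LSF usage, not Vital-IT-specific output):

bqueues                 # list all queues, their priority and current job counts
bqueues -l normal       # detailed view of the normal queue, including its run-time limit
bqueues -u $USER        # only the queues you are allowed to submit to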

You can also change the queue of an already-submitted job:

bswitch long 666            # move job 666 to the long queue
bswitch -q normal long 0    # move all my jobs from the normal queue to the long queue

Monitoring jobs

You can check a job's status with:

bjobs 
JOBID   USER    STAT  QUEUE      FROM_HOST   EXEC_HOST   JOB_NAME   SUBMIT_TIME
903583  avalses RUN   normal     devfrt01    cpt176      698099     Feb 26 14:22
903581  avalses RUN   normal     devfrt01    cpt167      695923     Feb 26 14:22
903580  avalses RUN   normal     devfrt01    cpt166      695889     Feb 26 14:22

bjobs lists the job information (id, name, when it was submitted, on which host it is running, in which queue) and, more importantly, its current status. The statuses you will see most of the time are:

  • RUN : The job is currently running.
  • PEND : The job is pending, that is, it has not yet been started.
  • DONE : The job has terminated with status of 0.
  • EXIT : The job has terminated with a non-zero status - it may have been aborted due to an error in its execution, or killed by its owner or the LSF administrator.

To check your jobs, you can also use:

bjobs -a             # list all running and finished jobs (at least the recently finished)
bjobs -r             # list all running jobs
bjobs -d             # list all finished jobs (either successfully completed or failed ones)
bjobs -u marcel      # list all jobs for this user
bjobs -q normal      # list my jobs on this queue
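
Two further standard LSF options are handy when something looks stuck (generic bjobs flags, assumed available on Vital-IT):

bjobs -l 903583      # full details (submission command, resources requested, ...) for one job
bjobs -p             # list pending jobs together with the reason they have not been dispatched yet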


Special statuses:

These are some special job statuses you should probably worry about a bit:

  • PSUSP : The job has been suspended, either by its owner or the LSF administrator, while pending.
  • USUSP : The job has been suspended, either by its owner or the LSF administrator, while running.
  • SSUSP : The job has been suspended by LSF, either because:
    • The load conditions on the execution host or hosts have exceeded a threshold according to the loadStop vector defined for the host or queue.
    • The run window of the job’s queue is closed.
  • UNKWN : mbatchd has lost contact with the sbatchd on the host on which the job runs.
  • WAIT : For jobs submitted to a chunk job queue, members of a chunk job that are waiting to run.
  • ZOMBI : A job becomes ZOMBI if:
    • A non-rerunnable job is killed by bkill while the sbatchd on the execution host is unreachable and the job is shown as UNKWN.
    • The host on which a rerunnable job is running is unavailable and the job has been requeued by LSF with a new job ID, as if the job were submitted as a new job.
    • After the execution host becomes available, LSF tries to kill the ZOMBI job. Upon successful termination of the ZOMBI job, the job’s status is changed to EXIT.
    • With MultiCluster, when a job running on a remote execution cluster becomes a ZOMBI job, the execution cluster treats the job the same way as local ZOMBI jobs. In addition, it notifies the submission cluster that the job is in ZOMBI state and the submission cluster requeues the job.

What to do when a job goes nuts

bkill is your best friend: when something goes wrong, you can kill your job(s) with:

bkill 007             # kill job 007
bkill 0               # kill all my jobs
bkill -q normal 0     # kill all my jobs in the normal queue
bkill -J "toto"       # kill the job called toto
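
If you only want to pause a job rather than kill it, the standard LSF bstop and bresume commands should also work here:

bstop 007             # suspend job 007 (a running job will then show as USUSP)
bresume 007           # resume it
bstop 0               # suspend all my jobs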

Monitoring Vital-IT

To check the health of Vital-IT before or during job submission, you can use the online tools [6]:

  • either Qstat [7], which does nothing more than a bjobs
  • or Ganglia [8], which tells you how busy the nodes are (in terms of load, memory usage, SFS load, etc.)

Building nicer bsub commands

Linking jobs

You can submit many jobs and ensure that some start only after others have completed. For example, if you want to run a, b and c, where b needs the output from a, and c should run only if b failed, you can use the -w bsub option:

bsub -J a "sh a.sh"
bsub -J b -w '(done "a")' "sh b.sh"      # start b when a is successfully done
bsub -J c -w '(exit "b")' "sh c.sh"      # start c if b has failed

And here we go, we have a mini-pipeline :-).
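
If you also want a final step that runs once b has finished, whatever its outcome, you can use the ended condition (standard LSF dependency syntax; cleanup.sh is a hypothetical script):

bsub -J cleanup -w '(ended "b")' "sh cleanup.sh"     # start cleanup when b has finished, successfully or not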

Jobs with special requirements

When a job has special needs, you can ask LSF to start running it only if certain conditions are satisfied: a minimum amount of free memory, a particular host architecture (e.g. x86_64 and not ia64), etc.

You can do this with the bsub -R option.

bsub -R "select[mem>3500] rusage[mem=3500]" ....     # will start the job on a machine having at least 3.5Gb of RAM and reserve 3.5Gb for your job. 
bsub -R "select[model==Xeon5160]" ....               # the job will start on Xeon machine, which is a x86_64 architecture

Job arrays

If you want to submit about a thousand jobs that differ only in one parameter or one input file, you could do a thousand bsub calls, but using a job array is better, as it is much faster.

  bsub -J myjobname"[1-1000]"%50 -e log/%I.err -o log/%I.out "sh myscript.sh inputFile${LSB_JOBINDEX}.txt"    

This submits a job array of 1000 jobs (myscript.sh), each run on an input file named "inputFileNN.txt", where NN is a number from 1 to 1000.

bsub -J myjobname"[1-1000]"%50 tells LSF that this is a job array starting at 1 and finishing at 1000. %50 specifies how many jobs are allowed to run at any one time (here only 50).
The variables %I and %J are used as substitution strings to support file redirection for jobs submitted from a job array.
At execution time, %I is expanded to the job array index value of the current job, and %J (not used in the above example) is expanded to the job ID of the job array.
The ${LSB_JOBINDEX} environment variable is set automatically by LSF to the index of the current array element.

By default the maximum number of jobs per array is 1000, but the sysadmin can increase it up to ~64k.
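
To follow an array without listing every element, bjobs understands the array syntax and has a summary option (standard LSF; the job id below is hypothetical):

bjobs -A 904000              # one summary line for the whole array (pending/running/done/exited counts)
bjobs "904000[77]"           # status of a single element of that array
bjobs -J "myjobname"         # or list all elements, selected by the array name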


Killing a job array can be done with:

bkill "myjobname"                # kill the complete array called myjobname
bkill "myjobname[10]"            # only kill the 10th job of the array
bkill "myjobname[1-10,77]"       # kill the 10 first jobs and the 77th

FAQ

Can I submit LSF jobs from rserv or noko01?

No, use dev or prd instead.

Can I run jobs directly on dev or prd?

Never! Use rserv or noko01!

My ls is painfully slow. Why?

That is inherent to SFS and the fact that files are striped across many different discs.

Apart from avoiding putting thousands of files in a single directory, you can use /bin/ls or ls --color=none, which is much faster than the default ls.
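
If you rely on this a lot, you can make the fast variant the default for your interactive shells (a trivial sketch for ~/.bashrc):

alias ls='/bin/ls'           # plain ls: no colours, far fewer stat calls on SFS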

How do I check the space left?

Please note that Vital-IT will crash if less than 1 Tb of space is left, because some web services rely on this minimal amount of free space!

df -h .
Filesystem            Size  Used  Avail Use% Mounted on
client_o2ib           16T   13T   2.7T   83%  /sfs1

How can I make sure I am using my bash config on the node running my job?

bsub -L /bin/bash ....

Can I run an interactive job on a Vital-it node?

Yes, with bsub -I

bsub -I echo "hello"
Job <904773> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on cpt023>>
hello
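
For a full interactive shell on a compute node (rather than a single command), bsub also has pseudo-terminal modes; this is standard LSF syntax, although whether the queues allow it here is an assumption:

bsub -Is /bin/bash           # interactive job with a pseudo-terminal, i.e. a shell on a compute node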

Known limitations

Vital-IT uses the SFS file system [9]; files are striped across many discs for backup reasons.
But this means that any file stat operation (i.e. a simple ls) needs to query the various discs where the data stripes are. This can be painfully slow...
It also means that a job doing lots of I/O operations will be slower than on an NFS file system. Still, running 200+ jobs in parallel will be much faster than running them one by one or in small batches on maya/shoshana.
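
One common workaround for I/O-heavy jobs is to copy the input onto the compute node's local disc at the start of the job and copy the results back at the end. A minimal sketch, assuming the nodes have a writable /tmp; bigInput.dat, results.* and the wrapping of myscript.sh are hypothetical (you would submit this wrapper with bsub as usual):

#!/bin/bash
# stage-in / stage-out wrapper -- the /tmp scratch path and file names are assumptions, adapt to the local setup
WORK=/tmp/$USER.$LSB_JOBID           # private scratch directory on the compute node ($LSB_JOBID is set by LSF)
mkdir -p "$WORK"
cp bigInput.dat "$WORK/"             # one large read from SFS instead of many small ones
cd "$WORK"
sh ~/myscript.sh bigInput.dat        # do all the heavy I/O on local disc
cp results.* "$LS_SUBCWD/"           # copy results back to the submission directory on SFS ($LS_SUBCWD is set by LSF)
cd / && rm -rf "$WORK"               # clean up the scratch space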

See also

  • Complete LSF documentation [10]
  • Lustre [11] [12]
  • HP StorageWorks SFS [13]
  • SFS/Lustre experience from Roland Laifer [14]