PRACE User Support

 
 
== User Guide to obtain a digital certificate ==
Once a valid digital certificate is installed, login to the PRACE login node is done via GSI-SSH:
<code>
gsissh -p 2222 prace-login.budapest.hpc.niif.hu
</code>
Files can be transferred with GridFTP, for example:
<code>
globus-url-copy file://task/myfile.c gsiftp://prace-login.budapest.hpc.niif.hu/home/prace/pr1hrocz/myfile.c
</code>
* -stripe Use this parameter to initiate a “striped” GridFTP transfer that uses more than one node at the source and destination. As multiple nodes contribute to the transfer, each using its own network interface, a larger amount of the network bandwidth can be consumed than with a single system. Thus, at least for “big” (> 100 MB) files, striping can considerably improve performance.
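As a sketch, a striped transfer of a large file might look like the following. The host name and paths are illustrative placeholders, and the command is only assembled (not executed) here, since Grid tooling is needed to run it:

```shell
# Sketch: assemble a striped GridFTP transfer command (dry run).
# Host name and paths below are illustrative placeholders.
SRC="file:///home/prace/pr1hrocz/large.dat"
DST="gsiftp://prace-login.budapest.hpc.niif.hu/home/prace/pr1hrocz/large.dat"
CMD="globus-url-copy -stripe ${SRC} ${DST}"
echo "${CMD}"   # run manually on the login node for files over ~100 MB
```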
== Usage of the SLURM scheduler ==
The scheduler divides the resources (computers) into resource partitions, called queues. A queue can't be larger than a physical resource; it can't expand its borders. The scheduler maintains a waiting list for the resources it manages, to which the submitted computing jobs are directed. The scheduler searches for the resource matching the job description and starts the job there. The job-resource coupling depends on the capabilities of the resources and the parameters of the jobs. If the resources are overloaded, jobs have to wait until the requested processors and memory become available.

The detailed documentation of SLURM can be found [http://slurm.schedmd.com/ here].
Scheduling on the HPCs is CPU hour based: the available core hours are divided between users on a monthly basis. Every UNIX user is connected to one or more accounts. A scheduler account is connected to an HPC project and a UNIX group. HPC jobs can only be submitted using one of these accounts. The core hours are calculated as the wall time (time spent running the job) multiplied by the number of CPU cores requested. For example, reserving 2 nodes (48 CPU cores) at the NIIFI SC for 30 minutes gives 48 * 30 = 1440 core minutes = 24 core hours. Core hours are measured between the start and the end of the jobs.
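The core hour arithmetic above can be checked with a few lines of shell:

```shell
# Core hours = CPU cores reserved x wall time; here 2 nodes x 24 cores, 30 minutes.
NODES=2
CORES_PER_NODE=24
WALLTIME_MINUTES=30
CORE_MINUTES=$(( NODES * CORES_PER_NODE * WALLTIME_MINUTES ))
CORE_HOURS=$(( CORE_MINUTES / 60 ))
echo "${CORE_MINUTES} core minutes = ${CORE_HOURS} core hours"   # 1440 core minutes = 24 core hours
```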
'''It is very important to be sure the application maximally uses the allocated resources. An empty or non-optimal job will consume the allocated core time very fast. If the account runs out of allocated time, no new jobs can be submitted until the beginning of the next accounting period. Account limits are regenerated at the beginning of each month.'''
Information about an account can be listed with the following command:
<code>
sbalance
</code>
==== Example ====
After executing the command, the following table shows up for Bob. The user can access and run jobs with two different accounts (foobar, barfoo). His name is marked with * in the table. He shares both accounts with alice (Account column). The core hours consumed by the user are displayed in the second column (Usage), and the consumption of all jobs run under the account in the fourth column (Account Usage). The last two columns show the allocated maximum time (Account Limit) and the time still available (Available).
<pre>
Scheduler Account Balance
---------- ----------- + ---------------- ----------- + ------------- -----------
User             Usage |          Account       Usage | Account Limit   Available (CPU hrs)
---------- ----------- + ---------------- ----------- + ------------- -----------
alice                0 |           foobar           0 |             0           0
bob *                0 |           foobar           0 |             0           0
bob *                7 |           barfoo           7 |         1,000         993
alice                0 |           barfoo           7 |         1,000         993
</pre>
=== Estimating core time ===
Before production runs, it is advised to have a core time estimate. The following command can be used to get an estimate:
<code>
sestimate -N NODES -t WALLTIME
</code>
where <code>NODES</code> are the number of nodes to be reserved, <code>WALLTIME</code> is the maximal time spent running the job.
'''It is important to provide the core time to be reserved as precisely as possible, because the scheduler queues the jobs based on this value. Generally, a job with a shorter core time will run sooner. It is advised to check the time actually used after completion with the <code>sacct</code> command.'''
==== Example ====
Alice wants to reserve 2 nodes for 2 days and 10 hours, so she checks whether she has enough time on her account.
<pre>
sestimate -N 2 -t 2-10:00:00

Estimated CPU hours: 2784
</pre>
Unfortunately, she couldn't afford to run this job.
=== Status information ===
Jobs in the queue can be listed with the <code>squeue</code> command; the status of the cluster can be retrieved with the <code>sinfo</code> command. Every submitted job gets a JOBID, and the properties of a job can be retrieved using this id. Status of a running or waiting job:
<code>
scontrol show job JOBID
</code>
 
All jobs are inserted into an accounting database. The properties of completed jobs can be retrieved from this database. Detailed statistics can be viewed with the following command:
<code>
sacct -l -j JOBID
</code>
Memory used can be retrieved by using:
<code>
smemory JOBID
</code>
Disk usage can be retrieved by this command:
<code>
sdisk JOBID
</code>
==== Example ====
There are 3 jobs in the queue. The first is an array job which is waiting for resources (PENDING). The second is an MPI job running on 4 nodes for 25 minutes now. The third is an OMP run on one node, just started. The NAME of the jobs can be freely given; it is advised to use short, informative names.
<pre>
squeue -l

Wed Oct 16 08:30:07 2013
     JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)
591_[1-96]    normal    array    alice  PENDING       0:00     30:00      1 (None)
       589    normal      mpi      bob  RUNNING      25:55   2:00:00      4 cn[05-08]
       590    normal      omp    alice  RUNNING       0:25   1:00:00      1 cn09
</pre>
This two-node batch job had a typical load of 10GB virtual and 6.5GB RSS memory per node.
<pre>
smemory 430

 MaxVMSize  MaxVMSizeNode  AveVMSize     MaxRSS MaxRSSNode     AveRSS
---------- -------------- ---------- ---------- ---------- ----------
10271792K           cn06  10271792K   6544524K       cn06   6544524K
10085152K           cn07  10085152K   6538492K       cn07   6534876K
</pre>
==== Checking jobs ====
It is important to make sure the application fully uses the reserved core time. A running application can be monitored with the following command:
<code>
sjobcheck JOBID
</code>
===== Example =====
This job runs on 4 nodes. The LOAD group provides information about the general load of the machine; this is more or less equal to the number of cores. The CPU group gives information about the exact usage. Ideally, values of the <code>User</code> column are over 90. If the value is below that, there is a problem with the application, or it is not optimal, and the run should be ended. This example job fully uses ("maxes out") the available resources.
<pre>
Hostname  CPUs (Procs/Total) [     1,     5, 15min] [  User,  Nice, System,  Idle,   Wio]  Gexec
cn08  24 (   25/  529) [ 24.83, 24.84, 20.98] [  99.8,   0.0,    0.2,   0.0,   0.0]  OFF
cn07  24 (   25/  529) [ 24.93, 24.88, 20.98] [  99.8,   0.0,    0.2,   0.0,   0.0]  OFF
cn06  24 (   25/  529) [ 25.00, 24.90, 20.97] [  99.9,   0.0,    0.1,   0.0,   0.0]  OFF
cn05  24 (   25/  544) [ 25.11, 24.96, 20.97] [  99.8,   0.0,    0.2,   0.0,   0.0]  OFF
</pre>
==== Checking licenses ====
The used and available licenses can be retrieved with this command:
<code>
slicenses
</code>
==== Checking downtime ====
In downtime periods the scheduler doesn't start new jobs, but jobs can still be submitted. The downtime periods can be retrieved using the following command:
<code>
sreservations
</code>
=== Running jobs ===
Applications on the HPCs are run in batch mode. This means every run must have a job script containing the resources and commands needed. The parameters of the scheduler (resource definitions) can be given with the <code>#SBATCH</code> directive. A comparison of schedulers, and the directives available in SLURM, can be found in this [http://slurm.schedmd.com/rosetta.pdf table].
==== Obligatory parameters ====
The following parameters are obligatory to provide:
<pre>
#!/bin/bash
#SBATCH -A ACCOUNT
#SBATCH --job-name=NAME
#SBATCH --time=TIME
</pre>
where <code>ACCOUNT</code> is the name of the account to use (available accounts can be retrieved with the <code>sbalance</code> command), <code>NAME</code> is the short name of the job, and <code>TIME</code> is the maximum wall time using the <code>DD-HH:MM:SS</code> syntax. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
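Putting the obligatory parameters together, a minimal complete job script might look like this sketch (the account and job names are placeholders, and <code>srun hostname</code> stands in for a real application):

```shell
#!/bin/bash
#SBATCH -A foobar              # account, as listed by the sbalance command
#SBATCH --job-name=minimal     # short, informative job name
#SBATCH --time=00:30:00        # 30 minutes maximum wall time
srun hostname                  # replace with the real application
```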
The following command submits jobs:
<code>
sbatch jobscript.sh
</code>
If the submission was successful, the following is outputted:
<pre>Submitted batch job JOBID</pre>
where <code>JOBID</code> is the unique id of the job.
The following command cancels the job:
<code>
scancel JOBID
</code>
==== Job queues ====
There are two separate queues (partitions) available in the HPC, the <code>test</code> queue and the <code>prod</code> queue. The latter is for production runs, the former for testing purposes. In the test queue, 1 node can be allocated for a maximum of half an hour. The default queue is <code>prod</code>. The test partition can be chosen with the following directive:
<pre>#SBATCH --partition=test</pre>
==== Quality of Service (QoS) ====
There is an option for submitting low priority jobs. These jobs can be interrupted by any normal priority job at any time, but only half of the time used is billed to the account. Interrupted jobs are automatically queued again. Therefore it is important to run only jobs that can be interrupted at any time, periodically save their state (checkpoint) and can restart quickly. The default QoS is <code>normal</code>, which is non-interruptable.
The following directive chooses low priority:
<pre>#SBATCH --qos=lowpri</pre>
==== Memory settings ====
1000 MB memory is allocated for 1 CPU core by default; more can be allocated with the following directive:
<pre>#SBATCH --mem-per-cpu=MEMORY</pre>
where <code>MEMORY</code> is given in MB. The maximum memory/core at the NIIFI SC is 2600 MB.
==== Email notification ====
Sending mail when the status of the job changes (start, stop, error):
<pre>
#SBATCH --mail-type=ALL
#SBATCH --mail-user=EMAIL
</pre>
where <code>EMAIL</code> is the e-mail address to notify.
==== Array jobs ====
Array jobs are needed when multiple single-threaded (serial) jobs are to be submitted (with different data). Slurm stores the unique id of each instance in the <code>SLURM_ARRAY_TASK_ID</code> environment variable, so the threads of the array job can be separated by retrieving this id. The output of each thread is written into a <code>slurm-SLURM_ARRAY_JOB_ID-SLURM_ARRAY_TASK_ID.out</code> file. The scheduler packs the instances onto the allocated nodes tightly, so it is useful to submit a multiple of the per-node CPU core count of threads. [http://slurm.schedmd.com/job_array.html More on this topic]
===== Example =====
Alice submits 96 serial jobs for a maximum 24 hour run on the expenses of the 'foobar' account. The <code>#SBATCH --array=1-96</code> directive indicates that it is an array job. The application can be run with the <code>srun</code> command; it is a shell script in this example.
<pre>
#!/bin/bash
#SBATCH -A foobar
#SBATCH --time=24:00:00
#SBATCH --job-name=array
#SBATCH --array=1-96
srun envtest.sh
</pre>
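Each instance of an array job can use <code>SLURM_ARRAY_TASK_ID</code> to select its own input. A minimal sketch (the input file naming is hypothetical):

```shell
#!/bin/bash
# Sketch: pick a per-instance input file from the array index.
# Outside the scheduler the variable is unset, so default it for testing.
: "${SLURM_ARRAY_TASK_ID:=1}"
INPUT="input_${SLURM_ARRAY_TASK_ID}.dat"
echo "instance ${SLURM_ARRAY_TASK_ID} processes ${INPUT}"
```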
==== MPI jobs ====
For MPI jobs, the number of MPI processes running on a node has to be given (<code>#SBATCH --ntasks-per-node=</code>). The most frequent case is to set it to the number of CPU cores. Parallel programs should be started with the <code>mpirun</code> command.
===== Example =====
Bob allocates 2 nodes for 12 hours for an MPI job, billing the 'barfoo' account. 24 MPI processes are started on each node. The stdout output is piped to the <code>slurm.out</code> file (<code>#SBATCH -o</code>).
<codepre> qalter #!/bin/bash#SBATCH -A barfoo#SBATCH -w v job_id-job-name=mpi#SBATCH -N 2#SBATCH --ntasks-per-node=24#SBATCH --time=12:00:00#SBATCH -o slurm.outmpirun ./a.out</codepre>
==== CPU binding ====
Generally, the performance of MPI applications can be improved with CPU core binding. In this case the threads of the parallel program are not scheduled between the CPU cores by the OS, and memory locality can be improved (fewer cache misses). It is also advised to use memory binding. Tests can be run to decide which binding strategy gives the best performance for a given application. The following settings are valid for the OpenMPI environment. Detailed binding information can be retrieved with the <code>--report-bindings</code> MPI option. Along with the run commands below, a few lines of this detailed binding information are shown. It is important not to use the task binding of the scheduler!
===== Binding per CPU core =====
In this case, MPI fills the CPU cores in the order of the thread ranks.
<pre>
Command to run: mpirun --bind-to-core --bycore

[cn05:05493] MCW rank 0 bound to socket 0[core 0]: [B . . . . . . . . . . .][. . . . . . . . . . . .]
[cn05:05493] MCW rank 1 bound to socket 0[core 1]: [. B . . . . . . . . . .][. . . . . . . . . . . .]
[cn05:05493] MCW rank 2 bound to socket 0[core 2]: [. . B . . . . . . . . .][. . . . . . . . . . . .]
[cn05:05493] MCW rank 3 bound to socket 0[core 3]: [. . . B . . . . . . . .][. . . . . . . . . . . .]
</pre>
===== Binding based on CPU socket =====
In this case, MPI threads fill the CPU sockets alternately.
<pre>
Command to run: mpirun --bind-to-core --bysocket

[cn05:05659] MCW rank 0 bound to socket 0[core 0]: [B . . . . . . . . . . .][. . . . . . . . . . . .]
[cn05:05659] MCW rank 1 bound to socket 1[core 0]: [. . . . . . . . . . . .][B . . . . . . . . . . .]
[cn05:05659] MCW rank 2 bound to socket 0[core 1]: [. B . . . . . . . . . .][. . . . . . . . . . . .]
[cn05:05659] MCW rank 3 bound to socket 1[core 1]: [. . . . . . . . . . . .][. B . . . . . . . . . .]
</pre>
===== Binding by nodes =====
In this case, MPI threads fill the nodes alternately. At least 2 nodes need to be allocated.
<pre>
Command to run: mpirun --bind-to-core --bynode
[cn05:05904] MCW rank 0 bound to socket 0[core 0]: [B . . . . . . . . . . .][. . . . . . . . . . . .]
[cn05:05904] MCW rank 2 bound to socket 0[core 1]: [. B . . . . . . . . . .][. . . . . . . . . . . .]
[cn06:05969] MCW rank 1 bound to socket 0[core 0]: [B . . . . . . . . . . .][. . . . . . . . . . . .]
[cn06:05969] MCW rank 3 bound to socket 0[core 1]: [. B . . . . . . . . . .][. . . . . . . . . . . .]
</pre>
==== OpenMP (OMP) jobs ====
For OpenMP parallel applications, 1 node needs to be allocated, and the number of OMP threads needs to be provided in the <code>OMP_NUM_THREADS</code> environment variable. The variable can either be set on the same line as the application (see example), or exported before executing the command:
<code>
export OMP_NUM_THREADS=24
</code>
===== Example =====
Alice starts a 24-thread OMP application for a maximum of 6 hours on the expenses of the foobar account.
<pre>
#!/bin/bash
#SBATCH -A foobar
#SBATCH --job-name=omp
#SBATCH --time=06:00:00
#SBATCH -N 1
OMP_NUM_THREADS=24 ./a.out
</pre>
==== Hybrid MPI-OMP jobs ====
When an application uses MPI and OMP together, it is running in hybrid MPI-OMP mode. Note that the MKL calls of Intel MKL-linked applications are OpenMP capable. Generally, the following distribution is suggested: the number of MPI processes is from 1 to the number of CPU sockets, and the number of OMP threads is the number of CPU cores in a node, or half or a quarter of that (it depends on the code). For the job script, the parameters of both need to be combined.
===== Example =====
Alice submitted a hybrid job on the expenses of the 'foobar' account for 8 hours and 2 nodes. 1 MPI process runs on each node, using 24 OMP threads per node. For the 2 nodes, 2 MPI processes run with 2x24 OMP threads.
<pre>
#!/bin/bash
#SBATCH -A foobar
#SBATCH --job-name=mpiomp
#SBATCH -N 2
#SBATCH --time=08:00:00
#SBATCH --ntasks-per-node=1
#SBATCH -o slurm.out
export OMP_NUM_THREADS=24
mpirun ./a.out
</pre>
==== Maple Grid jobs ====
Maple can be run - similarly to OMP jobs - on one node. The Maple module needs to be loaded. A grid server needs to be started, because Maple works in client-server mode (<code>${MAPLE}/toolbox/Grid/bin/startserver</code>). The application uses a license, which has to be requested in the job script (<code>#SBATCH --licenses=maplegrid:1</code>). A Maple job is started with the <code>${MAPLE}/toolbox/Grid/bin/joblauncher</code> program.
===== Example =====
Alice runs a Maple Grid application for 6 hours on the expenses of the 'foobar' account:
<pre>
#!/bin/bash
#SBATCH -A foobar
#SBATCH --job-name=maple
#SBATCH -N 1
#SBATCH --ntasks-per-node=24
#SBATCH --time=06:00:00
#SBATCH -o slurm.out
#SBATCH --licenses=maplegrid:1

module load maple

${MAPLE}/toolbox/Grid/bin/startserver
${MAPLE}/toolbox/Grid/bin/joblauncher ${MAPLE}/toolbox/Grid/samples/Simple.mpl
</pre>
==== GPU compute nodes ====
The Szeged site accommodates 2 GPU enabled compute nodes. Each GPU node has 6 Nvidia Tesla M2070 cards. The GPU nodes reside in a separate job queue (<code>--partition gpu</code>). To specify the number of GPUs, set the <code>--gres gpu:#</code> directive.
===== Example =====
Alice submits a 4 GPU, 6 hour job on the expenses of the foobar account.
<pre>
#!/bin/bash
#SBATCH -A foobar
#SBATCH --job-name=GPU
#SBATCH --partition gpu
#SBATCH --gres gpu:4
#SBATCH --time=06:00:00

$PWD/gpu_burnout 3600
</pre>
== Extensions ==
Extensions should be asked for at the execution site (NIIF) at prace-support@niif.hu. All requests will be carefully reviewed and decided if eligible.
== Reporting after finishing project ==
A report must be created after using PRACE resources. Please contact prace-support@niif.hu for further details.
== Acknowledgement in publications ==
PRACE
'''We acknowledge [PRACE/KIFÜ] for awarding us access to resource based in Hungary at [Budapest/Debrecen/Pécs/Szeged].'''
KIFÜ
'''We acknowledge KIFÜ for awarding us access to resource based in Hungary at [Budapest/Debrecen/Pécs/Szeged].'''
Where technical support has been received the following additional text should also be used:
'''The support of [name of person/people] from KIFÜ, Hungary to the technical work is gratefully acknowledged.'''
[[Category: HPC]]
