Using the T30 Haswell Queue
T30 Class Compute Server - Intel® Haswell
• Intel Xeon® CPU E5-2660 v3 @ 2.60GHz
• 20 Bare-metal Compute Cores Per Node
• 128 GB DDR4 RAM Per Node
• QDR IB non-blocking fabric
• 10GbE Storage Network
This section assumes that you have worked with POD's job queues already. If you need a brief introduction to building job scripts and submitting them to POD's queues, please see: POD 101: Quick Start for POD.
POD's T30 Haswell queue allows single-node jobs to use 1 through 20 cores. Each core is allotted 6.4 GB of RAM. For example, a single-node, 10-core, 64 GB RAM job with a maximum walltime of 1 hour is submitted as follows:
qsub -q T30 -l nodes=1:ppn=10,walltime=01:00:00 jobscript.sub
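For reference, a minimal jobscript.sub might look like the sketch below. The #PBS directives simply mirror the qsub options shown above (options given on the qsub command line take precedence); the job name and application binary are placeholders, not part of POD's environment.

```shell
#!/bin/bash
#PBS -q T30
#PBS -l nodes=1:ppn=10,walltime=01:00:00
#PBS -N example-job          # hypothetical job name

# Torque/PBS starts jobs in $HOME; change to the submission directory
cd "$PBS_O_WORKDIR"

# Placeholder application; replace with your own executable
./my_application
```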
Multi-node jobs must request all 20 cores for each server.
qsub -q T30 -l nodes=5:ppn=20,walltime=01:00:00 mpi-jobscript.sub
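An mpi-jobscript.sub for the five-node job above might look like the following sketch. The MPI module name and launcher invocation are assumptions and will vary with the MPI stack you use; check `module avail` for what is installed.

```shell
#!/bin/bash
#PBS -q T30
#PBS -l nodes=5:ppn=20,walltime=01:00:00

cd "$PBS_O_WORKDIR"

# Hypothetical MPI module name; substitute your site's MPI stack
module load openmpi

# 5 nodes x 20 cores = 100 ranks; Torque supplies the host list in $PBS_NODEFILE
mpirun -np 100 -machinefile "$PBS_NODEFILE" ./my_mpi_application
```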
Interactive jobs, useful for compiling applications, can be requested with the -I flag and must not include a walltime request. Please note that interactive sessions are billed until you exit the interactive shell, which terminates the job.
[user@loginnode ~]$ qsub -q T30 -I -l nodes=1:ppn=20
qsub: waiting for job 14701947.pod to start
qsub: job 14701947.pod ready

[user@n492 ~]$
[user@n492 ~]$ logout

qsub: job 14701947.pod completed
[user@loginnode ~]$
Compiling for Haswell
The T30 queue's processors support both the AVX and AVX2 extension sets. Using the AVX2 extension set requires at least GCC 4.9.0. When you load GCC 4.9.0 into your environment, compatible binutils and zlib packages are loaded as well.
[user@loginnode ~]$ qsub -q T30 -I -l nodes=1:ppn=20
[user@n492 ~]$ module load gcc/4.9.0
[user@n492 ~]$ module list
Currently Loaded Modulefiles:
  1) zlib/1.2.8/gcc.4.9.0   2) binutils/2.25.1/gcc.4.9.0   3) gcc/4.9.0

# AVX can be leveraged with -mavx
[user@n492 ~]$ gcc -mavx mycode.c

# AVX2 can be leveraged with -mavx2
[user@n492 ~]$ gcc -mavx2 mycode.c

# Alternatively, use the arch and tune native options
[user@n492 ~]$ gcc -march=native -mtune=native mycode.c
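Before picking flags, you can confirm that the node you landed on actually advertises AVX2. This is a generic Linux check, not POD-specific: the kernel lists supported instruction-set extensions in the flags lines of /proc/cpuinfo.

```shell
# Check the CPU flags reported by the kernel for AVX2 support
if grep -q -m1 avx2 /proc/cpuinfo; then
    echo "AVX2 available: safe to compile with -mavx2"
else
    echo "AVX2 not found: fall back to -mavx"
fi
```

The same grep works inside a job script if you want to select flags automatically at build time.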
Tundra Open Compute Platform
POD's new Intel® Haswell cores are delivered through Tundra, Penguin Computing's Open Compute HPC platform. Penguin Computing's Tundra Extreme Scale Series provides density, serviceability, reliability, and optimized total cost of ownership for highly demanding computing requirements, and Penguin Computing passes these savings on to POD customers.
Read more about Penguin Computing's Tundra solution, which was awarded the Department of Energy's National Nuclear Security Administration (NNSA) Advanced Simulation and Computing (ASC) CTS-1 contract:
• NNSA CTS-1 Tundra Award Press Release
• Tundra Extreme Scale (ES) Open Compute Solutions Guide
• Penguin Computing Showcases OCP Platforms for HPC at SC15
• Tundra Extreme Scale (ES) Series Specifications