Introduction

About Our Locations

Penguin Computing On Demand (POD) provides two distinct cluster locations (MT1 & MT2) for cloud-enabled HPC, both of which can be accessed from the POD Portal. Each location has localized storage and high-speed networking to facilitate easy data migration. While login nodes and user home directories are local to each location, a single global POD account username provides usage reporting for HPC cluster resources at both locations.

MT1

  Available Technology:
    • Intel® X5600 Series (Westmere)
    • Intel® E5-2600 Series (Sandy Bridge)
    • Intel® E5-2600v3 Series (Haswell)
    • Intel® QDR 40 Gb/s InfiniBand interconnect
    • High Speed NAS storage volumes

  Data Center Details:
    • US Hosted
    • Tier-III, N+1 Redundancy
    • Blended Tier-I network providers
    • AICPA’s TSP section 100 Compliant
    • SSAE SOC 1 and SOC 2 Compliant

MT2

  Available Technology:
    • Intel® E5-2600v4 Series (Broadwell)
    • Intel® Xeon® Gold 6148 (Skylake)
    • Intel® Omni-Path 100 Gb/s interconnect
    • High Speed distributed Lustre file system

POD Queues

The job queues on POD are organized by CPU architecture and cluster location. The MT1 cluster offers the T30, M40, H30, and H30G queues, while MT2 offers the S30 and B30 queues as described below:

MT1 queues:

  • T30:  Dual Intel® E5-2600v3 Series (Haswell); 20 cores; 128 GB memory; QDR InfiniBand; Private NAS Volume

  • M40:  Dual Intel® X5600 Series (Westmere); 12 cores; 48 GB memory; QDR InfiniBand; Private NAS Volume

  • H30:  Dual Intel® E5-2600 Series (Sandy Bridge); 16 cores; 64 GB memory; QDR InfiniBand; Private NAS Volume

  • H30G: Dual Intel® E5-2600 Series (Sandy Bridge) with dual NVIDIA® Tesla® K40 GPUs; 16 cores plus 2 GPUs; 64 GB memory; QDR InfiniBand; Private NAS Volume

MT2 queues:

  • S30:  Dual Intel® Xeon® Gold 6148 (Skylake); 40 cores; 384 GB memory; Intel Omni-Path; Lustre file system

  • B30:  Dual Intel® E5-2600v4 Series (Broadwell); 28 cores; 256 GB memory; Intel Omni-Path; Lustre file system
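
Jobs are directed to one of these queues at submission time. As a sketch only, assuming a PBS-style scheduler with a qsub command (the scheduler itself is not detailed here) and a hypothetical submission script job.sh, selecting a queue looks like this:

$ qsub -q S30 job.sh    # submit to the MT2 Skylake queue
$ qsub -q T30 job.sh    # submit to the MT1 Haswell queue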

POD MT2 Cluster

In addition to newer CPU architectures with higher core counts and more available memory, the POD MT2 cluster also provides a high-performance Lustre parallel file system, the Intel Omni-Path interconnect, and Scyld Cloud Workstations. New and existing POD users are encouraged to take advantage of these MT2 capabilities.

Scyld Cloud Workstations (SCW)

Our Scyld Cloud Workstations are very popular, and we have made them available on every MT2 login node of Penguin Computing On Demand. You can now view your results in either 2D or 3D on all POD MT2 login nodes. Our 2D and 3D remote workstations are Linux machines connected to the cluster fabric, so you no longer have to wait for large data files to download to an on-premises workstation before post-processing your simulations. No plugins or application clients are necessary: access is provided through any HTML5-capable web browser over HTTPS, requiring no additional ports through the firewall. This unique architecture saves bandwidth, improves image quality, and ensures near-universal accessibility for users.

If you require 3D-accelerated graphics for data visualization, you can easily create a GPU-backed 3D remote workstation that can be powered on and off on demand. To set up your 3D workstation on MT2:

  1. Log in to the POD portal

  2. Click “POD MT2” from the left-side menu

  3. Scroll down and click the blue +Create a Login Node button

  4. Enter a name for your new SCW

  5. Select the CentOS 7 image radio button

  6. Choose the pod.scw instance type for a 3D-accelerated SCW

  7. Click the blue Continue button to start the provisioning process

Once your SCW is ready to use, you will see a clickable link in the Hostname / IP column of your POD MT2 resources page. You can SSH directly to that hostname for CLI access, or click the link to open a Scyld Cloud Workstation session in your browser. Use your POD portal e-mail address and password to log in to the SCW instance. Please Note: Remember to “Power Off” your pod.scw login node when NOT in use. This ensures that you are not charged while the instance is idle.
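
For example, once the hostname appears you can connect for CLI access from your own machine (both values below are placeholders; use your own POD username and the hostname shown in the Hostname / IP column):

$ ssh pod_username@scw_hostname    # placeholder username and hostname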

Sharing a Directory with Managed Users

In some circumstances, a POD account owner with managed users will wish to share files with some or all of their managed users. The best way to accomplish this is through UNIX user groups and updated group permissions. The following steps are performed by the POD account owner on MT2:

  1. In the POD Portal, create a new Account Group from the Users & Groups page.

  2. Add yourself and all other managed account users needing access to the new shared directory.

  3. SSH into your login node and create a new directory in $HOME, group_dir, to be used for shared access.

  4. Grant group access to the new directory by changing the group ownership and permissions. Substitute newgroup in the example below with the name of the group you created in Step 1.

  5. Allow group members to reach the shared directory by also changing the group ownership of $HOME; group members need execute permission on $HOME to traverse into $HOME/group_dir.

$ mkdir ~/group_dir            # Step 3: create the shared directory
$ chgrp newgroup ~/group_dir   # Step 4: give the group ownership of the shared directory
$ chmod 2750 ~/group_dir       # Step 4: setgid bit plus group read/execute access
$ chgrp newgroup ~             # Step 5: let group members traverse $HOME to reach group_dir

Your default $HOME directory permissions are 0750. This allows members of newgroup to read any files in your $HOME and to read and write files in $HOME/group_dir. Alternatively, you can set your $HOME permissions to 0710 to prevent members of newgroup from reading files in your $HOME; they can still change into $HOME/group_dir for reading and writing.
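
For example, a minimal sketch of applying the more restrictive setting and verifying the result:

$ chmod 0710 ~            # group members can traverse $HOME but cannot list or read its contents
$ ls -ld ~ ~/group_dir    # confirm the resulting ownership and permission bits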

Migrating to MT2 from MT1

MT1 and MT2 storage are in different data centers, cross-connected with fiber for easy data migration. To use MT2 you will have to create new login nodes and submit your jobs to the B30 or S30 queues. You will also need to update your submission scripts to take advantage of the additional CPU cores available on MT2 compute nodes.
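
The exact syntax depends on the scheduler your existing jobs already use. As a sketch only, assuming a PBS-style scheduler and a hypothetical application my_app, a script updated for a 28-core B30 node might look like this (an S30 job would request 40 cores instead):

$ cat submit_b30.sh
#!/bin/bash
#PBS -q B30                # MT2 Broadwell queue (28 cores per node)
#PBS -l nodes=1:ppn=28     # request every core on a single B30 node
cd $PBS_O_WORKDIR          # run from the directory the job was submitted from
mpirun -np 28 ./my_app     # my_app is a placeholder for your application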

Provision MT2 Login Node

If you don’t already have a free login node on MT2, click the POD MT2 location link in the left-hand menu of the portal. You will then see a blue button to create a new login node. Most users will need a pod.free login node, but you can also create more powerful login nodes if necessary. Here you can also set a storage quota for the Lustre file system on MT2 if you would like.

Install your SSH Keys on MT2

You should also visit the SSH Keys page in the portal to make sure your SSH keys are installed on the MT2 cluster. Just select the checkbox under the Installed @ MT2? column for your keys to make them available for authentication on MT2.

Move your Data to MT2

Your MT1 home directory is available on your MT2 login nodes at /mt1home/$USER. If you are transferring files to MT2, you can use Lustre-optimized versions of the cp and rsync commands by loading the appropriate lustreoptimized/cp or lustreoptimized/rsync environment module. These versions move data using multi-threaded copies and intelligent striping on the destination side. Sometimes you may need to trigger the automounter to make your MT1 home directory available on your MT2 login node; to do this, run a command like ls -d /mt1home/$USER.
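
For example, a minimal sketch of copying a project directory from MT1 (my_project is a placeholder, the standard module command is assumed, and the Lustre-optimized rsync is assumed to accept the usual rsync flags):

$ ls -d /mt1home/$USER                        # trigger the automounter if the MT1 path is not yet mounted
$ module load lustreoptimized/rsync           # load the Lustre-optimized rsync module
$ rsync -a /mt1home/$USER/my_project $HOME/   # copy the directory into your MT2 home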

Contact Support for Assistance

If you would like all or most of your data moved from MT1 to MT2, the POD support team can move it for you. If your MT1 login node has been customized and you would like those customizations replicated on MT2, please contact POD support. You can open a new ticket with the POD support team by sending e-mail to pod@penguincomputing.com.

POD Usage Charges and Billing

The POD billing cycle begins at midnight on the 26th of each month and ends at 11:59 PM on the 25th of the following month, with all times expressed in Greenwich Mean Time (GMT). Invoicing takes place within 5 business days of the end of the billing cycle. Your account is charged for the utilization of the following resources on POD; more details on how these charges are calculated follow.

Compute Node Core Hours

Compute node core hours are computed by adding the core hour usage of each successfully run job that completes within a billing period, regardless of when it started. The core hour usage for a particular job is computed by multiplying the job runtime by the number of scheduled cores. Since job runtime is expressed in seconds, dividing this product by the number of seconds in one hour gives the single job compute node core hour total:

  • Single job core hour total: (runtime in seconds x total number of cores) / 3600.0
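
For example, a hypothetical job that runs for 2 hours (7,200 seconds) on 40 scheduled cores accrues (7200 x 40) / 3600 = 80 core hours; the same arithmetic from the shell:

$ echo "7200 40" | awk '{ printf "%.2f core hours\n", ($1 * $2) / 3600.0 }'
80.00 core hours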

Login Node Instance Server Hours

For non-free login node instances, usage charges accrue for the periods when the instance is powered on and available. The server instance uptime, expressed in seconds, is used in the calculation. To minimize your cost, power off non-free login node instances when not in use.
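
For example, assuming server hours are simply the instance uptime converted from seconds to hours, an instance powered on for 90,000 seconds during the billing period would accrue 90000 / 3600 = 25 server hours; as a quick check:

$ echo 90000 | awk '{ printf "%.2f server hours\n", $1 / 3600.0 }'
25.00 server hours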

MT1 Storage Volume Capacity

On POD MT1, storage charges are based on the size of your allocated storage volume. Your monthly bill reflects the volume capacity, not the amount of storage you are actually using inside the volume. You can request a resize of your MT1 storage volume at any time. To calculate your monthly bill, we compute the sum of the daily charges for each day in the billing cycle. To compute the charge for a given day, we take the maximum allocated storage volume capacity on that calendar day, divide it by the average number of days per billing cycle (365 days / 12 billing cycles), and multiply by the per-GB monthly rate.

For example, the monthly bill for an MT1 storage volume of 10 GB where the billing cycle consisted of 30 days would be computed as follows:

  • Average number of days per billing cycle: 365 days / 12.0 billing cycles ≈ 30.42 days

  • Daily storage amount: 10 GB / AVG_DAYS_PER_CYCLE = 0.328767123

  • Multiply by the number of days in the billing cycle: 0.328767123 x 30 days = 9.86301369

  • Multiply by the $0.10 GB/month rate: 9.86301369 x $0.10 = $0.986301369

  • For a rounded total monthly bill of: $0.99
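
The same example, using the numbers above, as a quick check from the shell:

$ awk 'BEGIN { avg_days = 365 / 12.0; printf "$%.2f\n", (10 / avg_days) * 30 * 0.10 }'
$0.99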

MT2 Home Directory Utilization

On POD MT2, storage charges are based on the size of your home directory. There are no storage volumes to manage on the MT2 cluster, so you are charged for the storage you are actually using in your home directory. The calculation for your monthly bill is the same as above: we compute the sum of the daily charges for each day in the billing cycle based on the maximum home directory utilization on that calendar day. To check your current home directory utilization, run the following command on your MT2 login node:

$ lfs quota -uh $USER $HOME
Disk quotas for user penguin (uid 9999999):
     Filesystem       used   quota   limit   grace   files   quota   limit   grace
    /home/penguin    11.2G      0k      0k       -   40391       0       0       -

Usage and Reports

Usage amounts and charges are available in the POD Portal on the My Account Usage page. These numbers are generally subject to a delay (particularly for core hours). From this page you can interact with a graph of the different charges for the current billing period. Scrolling down shows Compute Node Core Hour Usage and Storage charges listed by individual user account, Compute Node Core Hour charges listed by project, and Login Node Server Hours listed by instance.

You can also look at previous billing periods by using the Billing Period drop-down menu at the top of the page to select a billing period by Month and Year. For the most complete view into your billing, use the blue Download XLS and Custom XLS buttons to download detailed spreadsheets, including Core Hour Usage billed for individual jobs.

Please Note: Billing estimates displayed in the portal do not reflect any potential account credits. This could include rollover credits or credits applied due to failed jobs. Credits will be reflected in your monthly invoices.