MATLAB


MATLAB MDCS Remote Submission

Users who have written parallel code using the Parallel Computing Toolbox (PCT) can take advantage of MDCS on POD. They need a locally installed MATLAB client running on a Linux machine, and must be licensed locally for MATLAB, PCT, and any additional toolboxes they want to use. With that local setup, the MATLAB GUI can be configured to submit PCT jobs to POD. Only batch jobs are supported: interactive jobs are disabled because we do not allow TCP connections between compute nodes and external IP addresses.

This documentation describes how to set up a MATLAB cluster configuration on your local client in order to run parallel jobs on POD using MDCS.
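
For context, once the cluster profile described below is in place, a remote batch submission from the local client looks roughly like the following sketch (the function analyze_data and its input are hypothetical placeholders):

>> c = parcluster('POD_remote_r2013b');                       % load the POD cluster profile
>> j = batch(c, @analyze_data, 1, {rand(1000)}, 'Pool', 15);  % 15 workers + 1 for the batch session
>> wait(j);                                                   % block until the job completes
>> results = fetchOutputs(j);                                 % retrieve the function's output
>> delete(j);                                                 % clean up the job's data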


Licensing

In order to use MDCS on POD, the user must be licensed for MDCS as well. There are several options for this. We can host a local license (one that you provide to us and we host with Flex), or reach a remote license server (either publicly exposed, or via VPN). Email POD Support for assistance in configuring a VPN tunnel to a remote license server.

Alternatively, the user may have access to MHLM (MathWorks Hosted License Manager). In this scenario, the user pays for MDCS licenses either in a fixed amount (say, 32 workers for a year) or on demand (however many are checked out, up to some limit; for instance, 64 workers at a time). In the on-demand scenario, usage is tracked to 1/10 of a minute (6 seconds), and all time is summed and billed at the end of the month.

When using MHLM, the user will be prompted for their MathWorks credentials when submitting a job. MHLM is configured entirely client-side; no work has to be done on POD to support it.


Initial Setup

First, download MATLAB_POD.tar.gz

Then extract its contents and enter the resultant directory:

$ tar -xzvf MATLAB_POD.tar.gz
$ cd MATLAB_POD

Before proceeding, have your MDCS license number handy if you will be using MathWorks Hosted License Manager. You will also need the IP address of your POD login node and the username you use to log in. Your IP can be found on the POD Portal under Manage My Login Nodes; your username is displayed under My POD System Accounts and Groups.

Close any MATLAB instances you have open, as the program only loads the Cluster Profile list at startup. Then run the setup script and follow its instructions.

$ ./setup_remote_cluster.sh

                            < M A T L A B (R) >
                  Copyright 1984-2013 The MathWorks, Inc.
                    R2013b (8.2.0.701) 64-bit (glnxa64)
                              August 13, 2013


To get started, type one of these: helpwin, helpdesk, or demo.
For product information, visit www.mathworks.com.

Depending on your MATLAB setup, either a dialog box will appear or you will be prompted with the following questions:

Enter the address of your POD virtual machine: <Your POD Login Node IP>
Enter your POD user name: <Your POD Unix Username>
Are you using Mathworks Hosted License Management? [y/N] y
Enter your MHLM license number: <Your MHLM License Number>

The setup script will create a cluster profile named POD_remote_r2013b and will set it as your default cluster profile. In addition, the setup script will add the MATLAB_POD folder to your MATLAB search path.
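
You can confirm the result from the MATLAB command window using standard PCT commands:

>> parallel.defaultClusterProfile    % should return 'POD_remote_r2013b'
>> c = parcluster                    % builds a cluster object from the default profile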

Validation

As a quick check that the setup succeeded, validate the profile. Open MATLAB, navigate to the Cluster Profile Manager (Parallel->Manage Cluster Profiles), select the POD_remote_r2013b profile, and click the "Validate" button. During this process, you will be asked to provide an identity file (the private SSH key you use to log into POD) and whether the identity file is password protected. The answers will be saved, so you will not need to enter them again.

The validation runs 5 tests. Because POD does not support interactive parallel pools (our firewall blocks the connections they require), the final test (parpool) is expected to fail.
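
Because interactive pools are blocked, a small batch job makes a better end-to-end functional check. A minimal sketch:

>> c = parcluster('POD_remote_r2013b');
>> j = batch(c, @() max(svd(rand(500))), 1, {});   % trivial remote computation
>> wait(j); fetchOutputs(j)                        % should return a scalar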


Customization

The setup script sets several default values that you may want to change. You can modify the configuration from the Cluster Profile Manager: select the POD profile, click the Edit button, and change any of the following parameters (they can also be set programmatically; see the sketch after this list):

  • POD IP - If your POD IP changes, you may correct your configuration by modifying the second field in both the independentSubmitFcn and communicatingSubmitFcn tuples. You can always check your POD IP on the POD Portal under Manage My Login Nodes.
  • Worker Count - If you need to change the number of workers you use, you can do so by modifying the NumWorkers entry. This value is set to 48 by the setup script.
  • Remote Directory - If you need to use a different remote directory for job data storage, you may change it by editing the third field of both the independentSubmitFcn and communicatingSubmitFcn tuples. The default is $HOME/MdcsDataLocation/POD/R2013b/remote on POD.
  • Local Directory - If you need to change the local job data storage directory, you can do so by modifying the JobStorageLocation entry.
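
If you prefer, the same settings can be changed programmatically and saved back to the profile. A minimal sketch (the values shown are examples, not recommendations):

>> c = parcluster('POD_remote_r2013b');
>> c.NumWorkers = 32;                                % example: lower the worker count
>> c.JobStorageLocation = '/home/user/matlab_jobs';  % example local job data directory
>> saveProfile(c)                                    % write the changes back to the profile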

Queue and Other Submission Options

The PBS job submission options can be controlled using the ClusterInfo class of functions provided by the MATLAB_POD environment.

  • Queue - The queue is initially set to H30. To submit to a different queue, use ClusterInfo.setQueueName(<queue_name>). For instance:
ClusterInfo.setQueueName('M40')
  • ppn - The number of processors per node (ppn) is initially set to 16, the appropriate value for the H30 queue. If you change the submission queue, remember to adjust the ppn value with ClusterInfo.setProcsPerNode(ppn). For instance, the M40 and FREE queues both use 12-core nodes:
ClusterInfo.setQueueName('M40')
ClusterInfo.setProcsPerNode(12)

ClusterInfo.setQueueName('FREE')
ClusterInfo.setProcsPerNode(12)
  • walltime - The wall clock time for the job can be set with ClusterInfo.setWallTime('HH:MM:SS'), for instance:
ClusterInfo.setWallTime('01:20:00')

Enter the MATLAB command ClusterInfo.state() to display the current values:

>> ClusterInfo.state()

                          Arch : 
                   ClusterHost : xxx.xxx.xxx.xxx
                  EmailAddress : 
                   GpusPerNode : 
                      MemUsage : 
                PrivateKeyFile : /home/user/.ssh/id_rsa
   PrivateKeyFileHasPassPhrase : 1
                  ProcsPerNode : 16
                   ProjectName : 
                     QueueName : H30
                   Reservation : 
                        UseGpu : 0
            UserDefinedOptions : 
             UserNameOnCluster : mypoduser
                      WallTime :

Please note that some of the configuration options provided by ClusterInfo, for instance the GPU selection facilities, are not yet available on the MT1 cluster; activating these fields might result in jobs being rejected by the MT1 scheduler or waiting forever in the queue. At this stage, we recommend using only the QueueName, ProcsPerNode, and WallTime properties.
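
Putting the pieces together, a session that retargets a job at the M40 queue might look like the following sketch (my_parallel_script and the results variable are hypothetical placeholders):

>> ClusterInfo.setQueueName('M40')
>> ClusterInfo.setProcsPerNode(12)
>> ClusterInfo.setWallTime('02:00:00')
>> c = parcluster('POD_remote_r2013b');
>> j = batch(c, 'my_parallel_script', 'Pool', 11);  % 11 workers + 1 for the script = 12 processes
>> wait(j); load(j, 'results')                      % load a variable from the job's workspace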

MATLAB in Batch Mode

Users can also submit plain MATLAB batch jobs on POD. Such a job is non-interactive and runs entirely at the command line; there is no access to the GUI. For instance, if a user had a MATLAB file named analyze.m on POD, they could run matlab -r analyze in their PBS script.
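
A minimal analyze.m suitable for this mode might look like the following sketch (the computation is a hypothetical placeholder); note the explicit exit, without which MATLAB would keep running after the script finishes:

% analyze.m -- minimal non-interactive example
lambda = eig(rand(2000));        % some computation
save('results.mat', 'lambda');   % write results to disk; there is no GUI to inspect them
exit                             % required so that 'matlab -r analyze' terminates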


Licensing

As of right now, in order to run MATLAB on POD, you need to either bring your own license to POD so that we can host it, or allow us to tunnel back to a remote license server (either publicly exposed, or via VPN). Email POD Support for assistance in configuring a VPN tunnel to a remote license server.


Example

Assuming the user has their MATLAB license information in $HOME/.matlab.lic and a MATLAB file named analyze.m, the following is an example job that can be submitted with qsub:

#PBS -S /bin/bash
#PBS -N MATLAB
#PBS -q H30
#PBS -l nodes=1:ppn=16
#PBS -j oe

# Load the MATLAB environment on the compute node
module load matlab/R2013b

# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# Run analyze.m non-interactively; pass MATLAB's exit status back to PBS
matlab -r analyze
exit $?
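
Save the script (for example as matlab_batch.pbs; the filename is your choice) and submit it from your login node:

$ qsub matlab_batch.pbs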