STAR-CCM+ Visualization

In the previous section we used the headnode to connect to a running job on the compute nodes. We did this because the headnode of the cluster is too small to run the solver itself. This mirrors common practice on many on-premises clusters, where the headnode is reserved for logging in and submitting jobs.

Connecting to a job in this fashion works, but the lack of a GPU on the headnode means images take longer to render. For pre- and post-processing you may also want an interactive session where rendering the mesh and flow field is the main activity.

For this situation it is best to use the GPU visualization node we created in Section 4. There are two ways we can use this node.

Open up a simulation locally

The first option is simply to open a simulation locally, using as many cores as are available on the visualization node. In this example we picked the g4dn.4xlarge, which has 16 vCPUs and 64 GB of RAM, but we could have picked the g4dn.16xlarge with 64 vCPUs and 256 GB of RAM (and there are instance types with even more cores and RAM).

To illustrate this, connect to the GPU node using the NICE DCV client application as described in Section 4: note the public IP address in the EC2 console, enter it as the address to connect to, then enter the username (ec2-user) and password.


You can then open the terminal application.

You can then launch STAR-CCM+ in parallel on 8 cores. You can either type the full installation path of the executable or create an alias as we did in the previous section.

To do this, place the following line in your ‘~/.bashrc’ file:

 alias starccm='/fsx/STAR-CCM+/15.06.007/STAR-CCM+15.06.007/star/bin/starccm+'

Source the file (‘source ~/.bashrc’) or open a new terminal tab and the ‘starccm’ alias will be available.
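As a minimal sketch of launching it on this node, assuming your simulation file lives on the shared /fsx file system (the path and file name below are only examples, substitute your own .sim file):

 # Launch STAR-CCM+ with 8 parallel processes on the visualization node
 starccm -np 8 /fsx/cases/my_case.sim &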


This is now just like running STAR-CCM+ on any other machine: there is no Slurm scheduler, and you can use this node for pre- and post-processing as needed. Note that because we attached the same file system, all the files on the cluster are also visible from this GPU node under /fsx.

Connect to a running job

The other way is to connect to a job that was submitted to the HPC cluster in the section ‘Connecting to the job (server mode)’. Follow those instructions up to the point where you note the hostname, which should look like the following:

compute-dy-c5n18xlarge-1.cfd.pcluster:47827

where the portion after the colon (47827 in this example) is the port number on which the STAR-CCM+ server is listening.

You can then start STAR-CCM+ by typing the following (assuming you created the alias for the STAR-CCM+ installation path as described earlier):

 starccm

Then head to ‘File, Connect to Server’ and enter the hostname and port noted above.

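If you prefer the command line over the dialog, the client can also be pointed at the running server with the ‘-host’ option; a minimal sketch, assuming the hostname and port noted above:

 # Attach the local client to the running server (host:port taken from the job output)
 starccm -host compute-dy-c5n18xlarge-1.cfd.pcluster:47827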

You can now do anything you want, e.g. open some of the post-processing scenes or continue running the simulation. When you are finished you can simply exit and accept the prompt for it to kill the job. If you wish to let the job carry on, click ‘File, Disconnect’ instead, but remember to kill the job via ‘scancel’ when you are finished, as described next.

To do this, log back in to the cluster using the instructions in the previous section. When you are finished, run the following, where the JOBID can be found by first typing ‘squeue’ (if its output is empty, nothing is running):

 scancel JOBID
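For example, a minimal sketch of this clean-up step (the JOBID 1234 is only a placeholder, use the value reported by ‘squeue’):

 squeue        # list running jobs and note the JOBID of the STAR-CCM+ server job
 scancel 1234  # replace 1234 with that JOBID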

This was just a basic example, but hopefully it illustrates how to use the different STAR-CCM+ modes on AWS.