Configuration: FSx for Lustre

So let's now connect to this instance via SSH using AWS CloudShell. First, note the public IPv4 address of your instance, as shown in the image below.

[Image: public IPv4 address shown in the EC2 console]
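If you prefer the command line, you can also look the address up from CloudShell with the AWS CLI (a sketch; replace i-xxxxxxxxxxxxxxxxx with your own instance ID):

aws ec2 describe-instances --instance-ids i-xxxxxxxxxxxxxxxxx \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text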

In CloudShell, type the following command, replacing cfd_ireland with your SSH key name if it is different, and IPV4ADDRESS with the IP address shown in your console.

ssh -i cfd_ireland.pem ec2-user@IPV4ADDRESS
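If SSH refuses to connect with an 'unprotected private key file' warning, tighten the key's permissions first (assuming the same key file name as above):

chmod 400 cfd_ireland.pem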

We need to do two tasks: first, make sure you have access to the same file system that your HPC cluster sees (FSx for Lustre); second, set up NICE DCV so you can connect to the instance remotely. We also want these settings to persist if you stop and start your instance (to save costs).

FSx for Lustre Configuration

First, type df to see which drives are mounted. Notice that the /fsx mount we had on the cluster isn't there. You should see something like the following:

[ec2-user@ip-10-0-0-118 ~]$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
devtmpfs        32563796        0  32563796   0% /dev
tmpfs           32574564        0  32574564   0% /dev/shm
tmpfs           32574564      584  32573980   1% /run
tmpfs           32574564        0  32574564   0% /sys/fs/cgroup
/dev/nvme0n1p1  26202092 13680496  12521596  53% /
tmpfs            6514916        0   6514916   0% /run/user/0
tmpfs            6514916        0   6514916   0% /run/user/1000

So first, let's install the FSx for Lustre client packages by typing the following:

$ sudo amazon-linux-extras install -y lustre2.10
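To confirm the client packages landed, you can list the installed Lustre RPMs (a quick sanity check; exact package names may vary slightly by release):

rpm -qa | grep -i lustre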

Then create a new directory to act as the mount point:

$ sudo mkdir /fsx

Next you can mount your FSx file system. The command for this can be found by heading to the FSx console (search for FSx in the search bar on a new page).

[Image: the FSx console showing your file system]

Click 'Attach' to see the mount instructions for your particular FSx file system.

[Image: the Attach dialog with the mount instructions]
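Alternatively, the same details (the file system's DNS name and mount name) can be retrieved with the AWS CLI, a sketch assuming your credentials are allowed to call the FSx API:

aws fsx describe-file-systems \
  --query 'FileSystems[].{Id:FileSystemId,DNS:DNSName,Mount:LustreConfiguration.MountName}' \
  --output table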

Now type the command that is listed, which should look something like the following (note that your fs-xxxxx and mjxxxxx values will differ):

sudo mount -t lustre -o noatime,flock fs-xxxxx.fsx.eu-west-1.amazonaws.com@tcp:/mjxxxxx /fsx

Type df again and you should now see the /fsx mount.

[ec2-user@ip-10-0-0-228 ~]$ df
Filesystem               1K-blocks     Used  Available Use% Mounted on
devtmpfs                  32563796        0   32563796   0% /dev
tmpfs                     32574564        0   32574564   0% /dev/shm
tmpfs                     32574564      528   32574036   1% /run
tmpfs                     32574564        0   32574564   0% /sys/fs/cgroup
/dev/nvme0n1p1            26202092 13399460   12802632  52% /
tmpfs                      6514916        0    6514916   0% /run/user/1000
10.0.0.70@tcp:/mjxxxxxxx 1168351232 23109888 1145239296   2% /fsx
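As a quick check that this really is the cluster's shared file system, list its contents; you should recognise files and directories from your cluster runs:

ls /fsx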

We also want to make sure you don't need to redo this every time you reboot the instance (you can read more here). To do this, we need to edit the following file:

sudo vi /etc/fstab

Add the following line at the end of this file. Make sure you replace the fs-xxxx part with what you used in your previous command (keeping the .fsx.eu-west-1.amazonaws.com suffix) and the mount name mjxxxx with the one you used.

fs-0c7d45b01a3cbaae4.fsx.eu-west-1.amazonaws.com@tcp:/mjxxxx /fsx lustre defaults,noatime,flock,_netdev 0 0

Save this file; now if you stop and start the instance, the FSx drive will be mounted automatically.
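You can also test the fstab entry without a full stop/start cycle: unmount the file system and ask mount to remount everything listed in /etc/fstab (assuming nothing is currently using /fsx):

sudo umount /fsx
sudo mount -a
df -h /fsx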

S3 check

You should also be able to read the S3 bucket, as you could on ParallelCluster, by typing:

aws s3 ls

This access is possible because an IAM role was attached to this instance.
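If you are curious which role the instance is using, you can query the instance metadata service (this sketch uses IMDSv2, which newer instances often require):

TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/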