Deep Learning on your VM
Advanced Data and Workflows Group
National Center for Computational Sciences
Oak Ridge National Laboratory
While it is usually straightforward to install and configure popular analytics and visualization packages in Python, it can be challenging to install and configure deep learning frameworks like TensorFlow, MXNet, etc. so that they make the best use of the underlying hardware. You can still install these frameworks manually via pip, but such an installation may not take full advantage of the hardware.
Important: Resize your instance to a smaller flavor and shut it down if and when you are not using it.
A virtual machine is like a shared desktop or laptop. It costs money to run VMs, and reserving resources for your VM prevents others from using them. Please be considerate.
The most popular way to use these frameworks efficiently is to run them inside a Docker container. Follow these steps to get started with deep learning easily and efficiently:

1. Follow the official instructions to create a virtual machine on the CADES cloud.
2. Install Docker.
3. Pull the Docker image of your choice. In this example, we will pull the official TensorFlow container for CPUs:
```bash
$ sudo docker pull tensorflow/tensorflow
```
You can find images for other deep learning frameworks on Docker Hub as well.
- Get your data from your local machine via SCP or SFTP; please refer to this guide for instructions. Let us assume that the data and code sit in a folder on your VM with the absolute path `/home/cades/deep_learning`.
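As a sketch of the transfer step, the SCP command below copies a project folder from your laptop to the VM. The folder name is a placeholder and `ip-address-of-your-vm` stands in for your VM's actual IP address; the script only assembles and prints the command so you can check the pieces before running it on your local machine:

```bash
# Sketch: assemble the scp command from named parts (values are examples)
SRC=./my_deep_learning_project        # hypothetical folder on your laptop
DEST=cades@ip-address-of-your-vm:/home/cades/deep_learning   # target on the VM
CMD="scp -r ${SRC} ${DEST}"
echo "$CMD"    # run the printed command from your LOCAL machine
```

The `-r` flag copies the folder recursively, so the code and data inside it arrive together.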
- Run the container using the command below.
```bash
$ sudo docker run -it -p 8888:8888 -v /home/cades/deep_learning:/notebooks/my_data tensorflow/tensorflow
```
Here is what these flags do:

- The `-i` flag asks the container to run in interactive mode, and `-t` attaches a pseudo-terminal to it.
- The `-p` flag forwards port `8888` on the container to port `8888` on your VM. You will need to connect the port on your VM to your local machine in a later step to access the Jupyter notebook.
- The `-v` flag tells your container to mount the folder at `/home/cades/deep_learning` on your VM, containing all your deep learning data and code, at `/notebooks/my_data` inside the container. Note that absolute paths must be provided. The folder `my_data` will be clearly visible and accessible when you are on the Jupyter server in your local machine's browser.
- For more information and other options on running the container, type:
```bash
$ sudo docker run --help
```
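A detail that is easy to get backwards: in both the `-p` and `-v` arguments, the value before the colon belongs to the host (your VM) and the value after it belongs to the container. The sketch below re-assembles the example command from named variables to make that mapping explicit; the paths and port are the ones from the example above:

```bash
# Sketch: the example docker run command built from named variables,
# making the host:container mapping in -p and -v explicit
HOST_PORT=8888                        # port on the VM
CONTAINER_PORT=8888                   # port inside the container (Jupyter's default)
HOST_DIR=/home/cades/deep_learning    # absolute path on the VM
CONTAINER_DIR=/notebooks/my_data      # mount point inside the container
CMD="sudo docker run -it -p ${HOST_PORT}:${CONTAINER_PORT} -v ${HOST_DIR}:${CONTAINER_DIR} tensorflow/tensorflow"
echo "$CMD"    # prints the command; run it on the VM
```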
- In this specific example, the container is set up to start a Jupyter server on port `8888`. Do not close this terminal window. Either note down the token or copy-paste the complete web address; you will need it when you access the Jupyter server in your browser in a later step.
```bash
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
    http://localhost:8888/?token=a-giant-alpha-numeric-string
```
- On your local machine (laptop / desktop), start an SSH tunnel via the following command in the Terminal. The instructions shown here are for Linux / Mac. For more details on this step and ways to simplify it, please refer to my instructions on setting up a Python analytics VM.
```bash
$ ssh -N -L localhost:8888:localhost:8888 cades@ip-address-of-your-vm
```
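To unpack the tunnel command: `-N` opens the connection without running a remote command, and `-L` maps a port on your local machine to a port on the VM (local side before the second colon, remote side after it). The sketch below builds the same command from named variables; `ip-address-of-your-vm` is the placeholder from the example above:

```bash
# Sketch: the ssh tunnel command from the example, built from named variables.
# -N: no remote command, just the tunnel; -L: local_port -> VM port mapping
LOCAL_PORT=8888                       # port on your laptop
REMOTE_PORT=8888                      # port on the VM where Jupyter listens
VM=cades@ip-address-of-your-vm        # replace with your VM's IP address
CMD="ssh -N -L localhost:${LOCAL_PORT}:localhost:${REMOTE_PORT} ${VM}"
echo "$CMD"    # run the printed command on your LOCAL machine and leave it running
```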
- Access the Jupyter server running on your VM using your favorite internet browser (Internet Explorer and Safari not recommended). You can do this in one of two ways:
  - Copy-paste the entire web address (`http://localhost:8888/?token=a-giant-alpha-numeric-string`) from the terminal output, or
  - Go to `http://localhost:8888` and paste the token (with the option of setting your own password).
- Once you are done working on the VM and want to shut down the container:

  a. Press `Ctrl + C` twice to kill the Jupyter server on your VM.
b. Check to make sure that your container has been killed via:
```bash
$ sudo docker container ls
```
If you see something like:
```bash
CONTAINER ID   IMAGE                   COMMAND                  CREATED         STATUS         PORTS                              NAMES
77e04f817e88   tensorflow/tensorflow   "/run_jupyter.sh -..."   2 minutes ago   Up 2 minutes   6006/tcp, 0.0.0.0:8888->8888/tcp   jovial_albattani
```
Kill the container via:
```bash
$ sudo docker container kill 77e04f817e88
```
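If you prefer to script this cleanup, the container ID is simply the first column of the `docker container ls` output. The sketch below extracts it from a sample line mirroring the example above (real IDs will differ on your VM):

```bash
# Sketch: pull the container ID (first whitespace-separated column) out of a
# line of `docker container ls` output; the sample mirrors the example above
LINE='77e04f817e88   tensorflow/tensorflow   "/run_jupyter.sh -..."   2 minutes ago   Up 2 minutes   jovial_albattani'
ID=$(echo "$LINE" | awk '{print $1}')
echo "sudo docker container kill $ID"   # the command to run on the VM
```

In practice, `sudo docker container ls -q` prints just the IDs, which is handy for the same purpose.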
  c. You are now free to type `exit` to close your SSH connection to your VM.