Running an AI pipeline is a computationally demanding process that heavily relies on the power of GPUs (Graphics Processing Units). These specialized processors are designed to handle the complex mathematical calculations and parallel processing required for training and running AI models efficiently.
The intensive nature of AI workloads, especially in tasks such as deep learning, computer vision, and natural language processing, necessitates the use of high-performance GPUs and CPUs to ensure timely and accurate results.
E2E Networks provides advanced cloud GPU servers for building and using foundational AI models, with highly optimized GPU containers and pre-configured environments for popular AI frameworks like PyTorch and TensorFlow. This way, you can run your AI pipelines efficiently without investing in expensive hardware.
This article will guide you through the process of using the services provided by E2E Networks to build your AI pipelines.
Option 1: Launching a GPU Node
You will first need to generate an SSH key pair. Open your bash terminal and run the following command:
ssh-keygen
In your home folder (~/) you will see a folder named .ssh, which contains your public key (id_rsa.pub) and private key (id_rsa). Remember to never share your private key with anyone.
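If you want explicit control over the key's type, size, and location, you can pass flags to ssh-keygen directly. The output path and comment label below are illustrative assumptions, not values the platform requires; plain `ssh-keygen` writes to ~/.ssh/id_rsa by default and prompts for a passphrase:

```shell
# Generate a 4096-bit RSA key pair with no passphrase.
# -C adds a label so you can recognize the key later;
# -f sets the output path (here, the current directory for illustration).
ssh-keygen -t rsa -b 4096 -C "e2e-gpu-node" -f ./id_rsa -N ""

# The public half (.pub) is what you will upload to E2E Networks.
cat ./id_rsa.pub
```

The private key (id_rsa) stays on your machine; only the .pub file is ever shared.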
Then head over to myaccount.e2enetworks.com. On the left side you’ll see a Settings tab. Inside that tab click on the “SSH Keys” option.
Click on Add New Key at the top right.
Select the “Load from file” option and add the id_rsa.pub file. The key name and its contents will be filled in automatically. Then click on “Add Key”.
Then go to the Compute tab on the left, and click on GPU.
Select your GPU Type, the OS version for your machine and the GPU Card (Memory). In the image below we selected the Tesla T4 GPU, with Debian 11 as the OS, and the 16GB card. Then click on “Create”.
Select your billing plan and click on “Create” again.
You can give your instance a name. Then, under the Node Security option, go to SSH Keys and check the box next to your SSH key. This will upload your public key to the instance.
Then scroll to the bottom and click on “Create My Node”. Your node will be launched on a public IP address.
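Before wiring the node into VSCode, you can sanity-check the connection from your terminal. The IP address below is a placeholder for your node's public IP:

```
ssh root@<your-node-public-ip>
```

If the login succeeds without asking for a password, your public key was uploaded to the instance correctly.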
Now open your VSCode IDE and install these two extensions: Remote - SSH and Remote Explorer. They let you connect to your GPU server from within your IDE environment.
Once downloaded, click on the Remote Explorer icon on the left side in your VSCode:
Click on the + symbol under Remotes. Then, in the SSH Connection Command, enter the following command:
ssh root@<server-ip>
Add the destination to your config file by selecting the first option; the entry will be generated automatically.
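The generated entry in ~/.ssh/config looks roughly like the sketch below; the host alias and IP are placeholders, and the IdentityFile line only appears if you point VSCode at a specific key:

```
Host e2e-gpu-node
    HostName <your-node-public-ip>
    User root
    IdentityFile ~/.ssh/id_rsa
```

You can also edit this file by hand later, for example to rename the host alias to something memorable.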
When you click on the refresh symbol, you’ll see that a new remote has been added. To enter the remote server, click on the arrow (→) next to it.
You are now ready to build your AI pipeline on E2E’s GPU server.
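Once connected, a quick way to confirm the GPU is visible is to run the following on the node. This assumes the NVIDIA drivers are present on the image and that PyTorch is installed in your Python environment; skip the second command if you haven’t set up PyTorch yet:

```
# Show the driver version, GPU model, and current memory usage
nvidia-smi

# Confirm PyTorch can see the CUDA device
python3 -c "import torch; print(torch.cuda.is_available())"
```

If torch.cuda.is_available() prints True, your training code can move tensors and models onto the GPU.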
Option 2: Using the TIR AI platform
If you want to avoid going through the multiple steps of connecting to a GPU node and then integrating it with your VS Code, you can use the TIR AI platform. It’s a ready-made platform with a built-in Jupyter Notebook environment, which makes building AI applications a breeze.
The platform has many other features, such as Datasets, Pipelines, and Foundational Studio, which make it easier to design complicated AI workflows (e.g., fine-tuning a model).
Launch the platform by clicking on the TIR - AI Platform button.
Click on Nodes and then Create Node.
You can select a pre-loaded image (e.g., PyTorch), which automatically installs all the PyTorch modules on the node for you.
Then select your GPU type.
You can disable SSH access since you’ll be launching a notebook within the platform itself.
Once your node is ready, you can go to Actions and Launch Notebook to begin building your AI application.
The TIR - AI platform includes many other features as well. For a high-level explanation of each of them, you can read our article here.