Retrieval-Augmented Generation (RAG) systems are becoming increasingly popular for building AI applications that leverage company knowledge bases. RAG operates in two steps: first, the retrieval step, where relevant data is extracted from a store; and second, the generation step, where the retrieved context is incorporated into the prompt for the LLM, enabling it to generate a response that is accurate and grounded in knowledge beyond its pre-training data.
When building data-sovereign AI applications, you’ll want the RAG system to use a large language model (LLM) deployed on your own cloud infrastructure. You should also use a vector store that can be easily deployed, rather than relying solely on SaaS-based vector stores. By keeping the entire stack within your cloud infrastructure in India, you gain the benefits of building data-sovereign AI that adheres to compliance regulations, without the risk of leaking sensitive data to external platform companies.
In this article, we will guide you through the process of creating RAG applications using TIR AI Studio. TIR is a no-code AI development platform that allows you to deploy and perform inference using advanced LLMs without the hassle of managing infrastructure. By the end of this article, you will have all the tools needed to build RAG applications on your company’s data.
Why TIR AI Studio?
When serving LLMs, you need advanced cloud GPUs, such as the A100, H100, or L4, to keep your application’s latency low. As a data scientist or AI developer, you also need a workflow that simplifies LLM deployment and inference without requiring any programming effort.
This is where TIR AI Studio excels. It provides an intuitive, no-code interface for deploying any model from Hugging Face, automating training pipelines, building AI workflows, and integrating them seamlessly with vector databases like Qdrant or PGVector. With TIR, you can focus entirely on your AI models and workflows, while the platform manages the complexities of scaling, deployment, and optimization.
Best of all, you can leverage advanced cluster cloud GPUs, like InfiniBand-powered 8xH100. This is especially important when launching applications in production, where high performance is critical. In terms of cost, TIR is far more cost-effective than other AI studios, making it an ideal choice—so give it a spin!
About Llama 3.1-8B
Llama 3.1-8B is part of an advanced family of multilingual large language models (LLMs) that includes models with 8 billion, 70 billion, and 405 billion parameters. The Llama 3.1-8B model, in particular, is instruction-tuned, optimized for generating high-quality text, and suited for multilingual dialogue, making it ideal for use cases such as virtual assistants, chatbots, and more.
Key Features of Llama 3.1-8B
- Multilingual Capabilities: The model supports multiple languages, including English, Spanish, French, Hindi, German, Portuguese, Italian, and Thai, with further fine-tuning options to extend its language range.
- Context Length: It features a significantly extended context length of up to 128K tokens, making it highly efficient for handling large inputs or long dialogues.
- Training Data: Llama 3.1-8B was pretrained on over 15 trillion tokens from publicly available sources. The instruction-tuned variant was further fine-tuned with over 25 million synthetically generated examples, alongside human feedback, to improve alignment.
- Efficiency and Performance: Llama 3.1-8B outperforms many open-source and closed-source models in benchmarks for tasks like commonsense reasoning (e.g., CommonSenseQA), reading comprehension (e.g., SQuAD), and code generation (e.g., HumanEval).
We will use Llama 3.1-8B for this tutorial.
Guide to Building RAG on TIR
In this project, we will use the Llama 3.1-8B Instruct model and the Qdrant vector store to build the RAG application.
Let’s get started.
Step 1: Launch Llama 3.1-8B Endpoint
Head to TIR AI Studio, click on Model Endpoints on the left sidebar, and then click on Create Endpoint.
You’ll need to add your hf_token from Hugging Face in the next step; we will assume that you have already been granted access to the Llama 3.1 model on Hugging Face. Provide the token in the endpoint creation form.
You will also need to choose the GPU node, based on the number of parameters in your model. Since we are deploying Llama 3.1-8B, we will go with the L4 series of cloud GPUs.
Once that’s done, select the Plan Details. Choose a disk replica size of at least 30GB (we recommend at least 100GB if you are going to train the model).
Finally, you can set the environment variables if required.
You can now launch the endpoint.
Step 2: Launch Jupyter Notebook
We will launch a Jupyter Notebook to build our RAG application. For that, select ‘Nodes’ from the sidebar.
Now select a CPU or GPU node according to your preference. Since the model endpoint is already running, a CPU node is sufficient here.
You will see the Jupyter Notebook launched in the list of Nodes.
Select the Python3 (IPY Python3 Kernel).
You’re all set with your Jupyter notebook on your node.
Step 3: Building RAG
First, install the required libraries. We will also install PyPDF2 so that we can parse PDF documents.
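The exact package versions will depend on your environment; a minimal install cell for this walkthrough might look like the following (sentence-transformers for the embedding model, qdrant-client for the vector store, PyPDF2 for PDF parsing, and requests for calling the model endpoint):

```python
# Run this inside the notebook; pin versions as needed for your environment.
!pip install sentence-transformers qdrant-client PyPDF2 requests
```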
Import the libraries.
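Assuming the packages installed above, a typical import cell might look like this:

```python
import requests
from PyPDF2 import PdfReader
from sentence_transformers import SentenceTransformer
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
```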
Let’s also initialize Qdrant.
We will use the all-mpnet-base-v2 embedding model, and run Qdrant in :memory: mode for testing. You can also launch a Qdrant node on TIR and provide its URL as the endpoint.
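A minimal sketch of the setup follows; the collection name company_docs is illustrative, and since all-mpnet-base-v2 produces 768-dimensional vectors, the collection is configured accordingly:

```python
# Load the embedding model.
embedding_model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# In-memory Qdrant for testing; swap in your Qdrant node's URL on TIR for production.
qdrant_client = QdrantClient(":memory:")

COLLECTION_NAME = "company_docs"
qdrant_client.create_collection(
    collection_name=COLLECTION_NAME,
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),  # all-mpnet-base-v2 -> 768 dims
)
```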
Now, we will provide a PDF to the PDF reader and add it to the corpus. You can do the same for multiple PDFs.
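A sketch of the page-wise ingestion; the file name company_handbook.pdf is a placeholder for your own document:

```python
# Read the PDF and collect its text page by page.
reader = PdfReader("company_handbook.pdf")

corpus = []
for page_no, page in enumerate(reader.pages, start=1):
    corpus.append({
        "page_no": page_no,
        "content": page.extract_text() or "",
    })
```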
This will read the PDF and store it pagewise in a list. The structure will be the following:
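Roughly, the corpus becomes a list of page-level records (the text shown is illustrative):

```python
[
    {"page_no": 1, "content": "Introduction to the company..."},
    {"page_no": 2, "content": "Our leave policy states that..."},
]
```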
Next, let’s tokenize the paragraphs in the corpus.
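One simple approach is to split each page on blank lines; you can swap in a more sophisticated chunker if your documents need it:

```python
# Split each page into paragraph-level sections and attach citation metadata.
documents = []
for page in corpus:
    sections = [s.strip() for s in page["content"].split("\n\n") if s.strip()]
    for section_no, section in enumerate(sections, start=1):
        documents.append({
            "text": section,
            "page_no": page["page_no"],
            "section_no": section_no,
        })
```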
Above, we break each page’s content into sections and attach page_no and section_no metadata to every paragraph. This will help us with the retrieval process later, and allows us to create citations.
Let’s now generate the embeddings.
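A sketch of the embedding step, using the model loaded earlier and attaching the vector to each record:

```python
# Embed every section and store the vector alongside its metadata.
texts = [doc["text"] for doc in documents]
embeddings = embedding_model.encode(texts, show_progress_bar=True)
for doc, vector in zip(documents, embeddings):
    doc["embedding"] = vector.tolist()
```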
We can also create a convenience function.
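For example, a small helper that embeds a single string (the name embed is illustrative); we will reuse it when querying:

```python
def embed(text: str) -> list[float]:
    """Return the embedding vector for a single piece of text."""
    return embedding_model.encode(text).tolist()
```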
This is the structure of our final data, which will be inserted into Qdrant.
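Each record now looks roughly like this (values are illustrative and the vector is truncated):

```python
{
    "text": "Our leave policy states that...",
    "page_no": 2,
    "section_no": 1,
    "embedding": [0.0132, -0.0451, ...],  # 768-dimensional vector, truncated
}
```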
Now, we can insert the data into Qdrant.
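A sketch of the insertion step, using the Qdrant client and collection created earlier:

```python
# Wrap each record in a PointStruct and upsert the batch into Qdrant.
points = [
    PointStruct(
        id=idx,
        vector=doc["embedding"],
        payload={
            "text": doc["text"],
            "page_no": doc["page_no"],
            "section_no": doc["section_no"],
        },
    )
    for idx, doc in enumerate(documents)
]
qdrant_client.upsert(collection_name=COLLECTION_NAME, points=points)
```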
Let’s also build a function to query Qdrant. This function performs a similarity search: it embeds the query and asks Qdrant for the stored sections whose vectors are most similar to the query vector.
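A sketch of such a function (the name query_qdrant is illustrative):

```python
def query_qdrant(query: str, top_k: int = 5):
    """Embed the query and return the most similar sections from Qdrant."""
    return qdrant_client.search(
        collection_name=COLLECTION_NAME,
        query_vector=embed(query),
        limit=top_k,
    )
```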
We will now prepare the LLM context from the data returned. The result from the vector store query will be in the following format.
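Each hit is a Qdrant ScoredPoint carrying the similarity score and the payload we stored, roughly like this (values are illustrative):

```python
# [
#     ScoredPoint(id=12, score=0.83,
#                 payload={"text": "Our leave policy states that...", "page_no": 2, "section_no": 1}),
#     ScoredPoint(id=7, score=0.79,
#                 payload={"text": "Employees may carry over...", "page_no": 3, "section_no": 2}),
# ]
```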
We have to transform this into a format that the LLM can use.
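A sketch of a helper that flattens the hits into a citation-friendly context string (build_context is an illustrative name):

```python
def build_context(results) -> str:
    """Format retrieved sections into a context block the LLM can cite."""
    parts = []
    for hit in results:
        payload = hit.payload
        parts.append(
            f"[page {payload['page_no']}, section {payload['section_no']}] {payload['text']}"
        )
    return "\n\n".join(parts)
```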
Finally, let’s query the LLM with the context and the user query as part of the prompt. I have used Llama 3.1-8B here; you can choose any other model endpoint you create on TIR.
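The exact request format depends on how your TIR endpoint serves the model; the sketch below assumes an OpenAI-compatible chat completions API (common for vLLM-backed deployments) and uses placeholder values for the endpoint URL, API token, and model name, all of which should come from your endpoint’s details page:

```python
TIR_ENDPOINT_URL = "https://<your-tir-endpoint>/v1/chat/completions"  # placeholder
TIR_API_TOKEN = "<your-tir-api-token>"  # placeholder

def query_llm(user_query: str, context: str) -> str:
    """Send the retrieved context and the user query to the Llama 3.1-8B endpoint."""
    prompt = (
        "Answer the question using only the context below, and cite the page "
        "and section you used.\n\n"
        f"Context:\n{context}\n\nQuestion: {user_query}"
    )
    response = requests.post(
        TIR_ENDPOINT_URL,
        headers={"Authorization": f"Bearer {TIR_API_TOKEN}"},
        json={
            "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",  # assumption: model id exposed by the endpoint
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 512,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```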
Let’s bring it all together. This is our ingestion pipeline.
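A consolidated sketch of the ingestion steps above, wrapped in one function that reuses the embed helper and Qdrant client defined earlier:

```python
def ingest_pdf(pdf_path: str) -> None:
    """Read a PDF, split it into sections, embed each section, and store it in Qdrant."""
    reader = PdfReader(pdf_path)
    points, idx = [], 0
    for page_no, page in enumerate(reader.pages, start=1):
        text = page.extract_text() or ""
        for section_no, section in enumerate(
            (s.strip() for s in text.split("\n\n") if s.strip()), start=1
        ):
            points.append(PointStruct(
                id=idx,
                vector=embed(section),
                payload={"text": section, "page_no": page_no, "section_no": section_no},
            ))
            idx += 1
    qdrant_client.upsert(collection_name=COLLECTION_NAME, points=points)
```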
Step 4: Querying the RAG
This is how you query your RAG system.
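Using the helpers sketched above, a full query might look like this (the question is only an example):

```python
question = "What does the leave policy say about carrying over unused leave?"

hits = query_qdrant(question, top_k=5)
context = build_context(hits)
answer = query_llm(question, context)
print(answer)
```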
That’s all. You can pass in any RAG query, and the system will respond with context drawn from the PDF. The whole exercise will take you less than 20 minutes on TIR! TIR’s no-code model endpoint launch and integrated notebook environment dramatically simplify the development process.
Conclusion
Building a Retrieval-Augmented Generation (RAG) system can often seem like a complex task, especially when handling large language models (LLMs) and vector stores. However, with TIR AI Studio, the process becomes streamlined and efficient. TIR’s no-code interface allows you to focus on what truly matters—your AI models and workflows—while it manages the intricacies of infrastructure, deployment, and scaling.
By following this guide, you’ve learnt how to launch the Llama 3.1-8B model on TIR, initialize a Qdrant vector store, and create a complete RAG pipeline that draws upon your company’s data to generate accurate, context-rich responses. Whether you're developing AI chatbots, virtual assistants, or knowledge management systems, this approach ensures your applications are not only powerful but also data sovereign—an essential factor in today’s compliance-driven world.
With TIR's advanced cloud GPUs, such as the A100, H100, and L4 series, along with cost-effective pricing, deploying large-scale AI applications has never been easier. Start exploring RAG development with TIR AI Studio and transform your company’s knowledge assets into a strategic advantage.