Introduction to Mixtral 8x7B
Mixtral 8x7B, developed by Mistral AI, represents a significant advancement in the field of AI language models. Its architecture and performance have placed it in the spotlight, especially in comparison with established models like GPT-3.5 and Llama 2. Mixtral is a decoder-only model that employs a Sparse Mixture-of-Experts (SMoE) architecture, selecting from a set of 8 distinct groups of parameters, termed ‘experts’. This innovative approach allows the model to process inputs efficiently, using only a fraction of its total parameters per token: out of 46.7 billion total parameters, only about 12.9 billion are active per token, so the model runs with roughly the speed and computational cost of a 13-billion-parameter model.
In terms of performance, Mixtral has shown impressive results on various benchmarks, surpassing Llama 2 70B and matching or outperforming GPT-3.5 on most standard tests. The model handles a context of up to 32k tokens and demonstrates proficiency in multiple languages, including English, French, Italian, German, and Spanish. It is also particularly strong at code generation and can be fine-tuned for instruction-following applications.

One of the most notable features of Mixtral 8x7B is its efficiency and its ability to run on less capable hardware, including machines without dedicated GPUs such as the latest Apple Mac computers, making it accessible to a wider range of users and applications. This accessibility is a step towards democratizing advanced AI technology, expanding its potential uses beyond high-end servers to more modest computing environments.
Mixtral 8x7B's open-source nature, being released under an Apache 2.0 license, stands in contrast to other major AI models that are often closed-source. This approach aligns with Mistral’s commitment to an open, responsible, and decentralized approach to technology, offering more transparency and flexibility for developers and researchers.
However, the model's openness and advanced capabilities come with their own set of concerns, particularly regarding ethical considerations. The absence of built-in safety guardrails in Mixtral 8x7B raises concerns about the potential generation of unsafe content, a challenge that needs careful attention, especially in applications where content moderation is crucial.
In summary, Mixtral 8x7B is a powerful and innovative AI language model that combines technical sophistication with practical design. Its performance, efficiency, and open-source availability make it a notable addition to the AI landscape. However, the lack of safety measures necessitates a cautious approach in its application, especially in scenarios requiring stringent content moderation.
A Note on Mixtral’s SMoE Architecture
The Sparse Mixture-of-Experts (SMoE) architecture used by Mixtral is an advanced AI model design that represents a significant shift from traditional neural network structures. To understand this, let's break down the key components and principles behind this architecture:
1. Mixture-of-Experts (MoE) Concept: The MoE approach involves a collection of 'expert' networks, each specialized in different tasks or types of data. In a traditional neural network, all inputs pass through the same layers and neurons, regardless of their nature. In an MoE system, by contrast, each input is routed to the experts most relevant to it.
2. Sparsity in SMoE: The term 'sparse' in this context refers to the fact that not all experts are engaged for every input. At any given time, only a subset of experts is activated to process a specific input. This sparsity is crucial for efficiency, as it reduces the computational load compared to a situation where all experts are active for every input.
3. Decoder-Only Model: Mixtral, being a decoder-only model, means that it focuses on generating outputs based on the input it receives, unlike encoder-decoder models which first encode an input into a representation and then decode it to an output. This structure is particularly suited for tasks like language generation, where the model produces text based on the input context.
4. Efficient Parameter Usage: Mixtral has a total of 46.7 billion parameters, but due to its sparse nature, it only uses around 12.9 billion parameters per token. This means that for any given input token, the model dynamically selects which subset of parameters (or experts) to use. This selective engagement of parameters allows Mixtral to operate with the efficiency of a much smaller model, while still retaining the capability of a large model.
5. Balancing Speed and Computational Cost: By employing a sparse architecture, Mixtral is able to balance speed and computational cost effectively. The model can process inputs quickly because it doesn't need to engage its entire parameter set for every token, thereby reducing the computational load and improving efficiency.
In summary, the Sparse Mixture-of-Experts architecture in Mixtral represents a sophisticated approach to AI model design, enabling high efficiency and effectiveness by selectively using parts of its vast parameter set as needed. This architecture is particularly beneficial for large-scale models, allowing them to maintain high performance without incurring the full computational cost of their size.
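To make the routing idea concrete, below is a minimal, illustrative sketch of a sparse top-2 mixture-of-experts layer in PyTorch. The layer sizes, class names, and the explicit routing loop are simplified assumptions made for readability; they are not Mixtral's actual implementation, which applies this idea inside every transformer block and fuses the computation for speed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy sparse MoE feed-forward layer with top-2 routing (illustrative sizes only)."""
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)   # gating network scores each expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                                # x: (num_tokens, d_model)
        scores = self.router(x)                          # (num_tokens, num_experts)
        weights, chosen = torch.topk(scores, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)             # normalize over the chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e              # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out                                       # only 2 of the 8 experts ran per token

layer = SparseMoELayer()
tokens = torch.randn(4, 64)                              # 4 token embeddings
print(layer(tokens).shape)                               # torch.Size([4, 64])
Each token activates only its top-2 experts, which is why a model with a large total parameter count can run with the cost of a much smaller dense model.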
In this article, we’ll delve into Mixtral’s capabilities by building a simple RAG pipeline to query the latest cricket news articles. If you want to understand what RAG pipelines are, you can read this article.
Launching a GPU Node
Head over to https://myaccount.e2enetworks.com/ to check out the cloud GPUs you might need for implementing the code in this article.

We will select NVIDIA’s V100 GPU for our node.
Setting Up the Environment
!pip install -q torch datasets
!pip install -q accelerate==0.21.0 \
peft==0.4.0 \
bitsandbytes==0.40.2 \
trl==0.4.7
!pip install -U scipy
!pip install -U transformers
!pip install -U langchain
!pip install html2text
!pip install -U sentence-transformers
!pip install faiss-gpu
Loading a Quantized Mixtral 8x7B Model
We’ll use Hugging Face to load our model.
import os
import torch
import transformers
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    pipeline
)
model_name = 'mistralai/Mixtral-8x7B-Instruct-v0.1'
model_config = transformers.AutoConfig.from_pretrained(
    model_name,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
Adjusting the Model Quantization Parameters to Improve Speed
#################################################################
# bitsandbytes parameters
#################################################################
# Activate 4-bit precision base model loading
use_4bit = True
# Compute dtype for 4-bit base models
bnb_4bit_compute_dtype = "float16"
# Quantization type (fp4 or nf4)
bnb_4bit_quant_type = "nf4"
# Activate nested quantization for 4-bit base models (double quantization)
use_nested_quant = False
#################################################################
# Set up quantization config
#################################################################
compute_dtype = getattr(torch, bnb_4bit_compute_dtype)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=use_4bit,
    bnb_4bit_quant_type=bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=use_nested_quant,
)
# Check GPU compatibility with bfloat16
if compute_dtype == torch.float16 and use_4bit:
    major, _ = torch.cuda.get_device_capability()
    if major >= 8:
        print("=" * 80)
        print("Your GPU supports bfloat16: accelerate training with bf16=True")
        print("=" * 80)
Let’s load the model onto our GPU.
#################################################################
# Load pre-trained config
#################################################################
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
)
Let’s check how much our Mixtral model knows about cricket before moving forward to give it context about the latest news.
inputs_not_chat = tokenizer.encode_plus("[INST] Tell me about Cricket? [/INST]", return_tensors="pt")['input_ids'].to('cuda')
generated_ids = model.generate(inputs_not_chat,
                               max_new_tokens=1000,
                               do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
decoded
[' [INST] Tell me about Cricket? [/INST] Cricket is a bat-and-ball game played between two teams, each consisting of eleven players, in a field with a rectangular 22-yard-long pitch at the center. The game is played in innings, with each inning consisting of one or two phases in which one team bats and the other team fields.\n\nIn the batting phase, two batsmen are present on the field at a time and aim to score runs by hitting the ball, which is bowled toward them by a bowler from the opposing team, with a cricket bat made of willow. The batsmen may run between the wickets to score runs, and they may also hit the ball far enough to score boundary runs.\n\nIn the fielding phase, the fielding team aims to prevent the batsmen from scoring runs, while also trying to get the batsmen out (a situation known as a wicket). A batsman is out if they are bowled (the ball hits the wicket directly), caught (the ball is hit in the air and caught by a member of the fielding team without it bouncing), run out (the ball is thrown to the wicket before the batsman completes their run), or dismissed through one of several other methods.\n\nCricket is a popular sport in many Commonwealth countries, particularly in the United Kingdom, India, Australia, South Africa, and the West Indies. Test cricket, the longest and most traditional form of the game, is played over the course of five days, while one-day and Twenty20 cricket are shorter formats that can be completed in a single day.']
Creating a RAG Using LangChain and FAISS
LangChain is a comprehensive framework for designing RAG applications. You can read more about the LangChain API here.
FAISS, or Facebook AI Similarity Search, is a library developed by Meta AI (Facebook) for efficient similarity search and clustering of dense vectors in high-dimensional spaces. It's particularly useful for a variety of tasks, including image retrieval, recommendation systems, and natural language processing, and here it will serve as our vector store.
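To illustrate what FAISS does at its core, here is a tiny standalone sketch using random vectors as stand-ins for document embeddings; in the rest of this article we use FAISS through LangChain's wrapper instead.
import numpy as np
import faiss

d = 768                                                  # embedding dimension
doc_vectors = np.random.rand(1000, d).astype('float32')  # stand-ins for document embeddings
index = faiss.IndexFlatL2(d)                             # exact L2 (Euclidean) search index
index.add(doc_vectors)                                   # add the vectors to the index
query = np.random.rand(1, d).astype('float32')           # stand-in for a query embedding
distances, ids = index.search(query, 4)                  # retrieve the 4 nearest neighbours
print(ids, distances)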
In this article, we’ll implement a RAG pipeline using LangChain and FAISS. First let's chunk our documents and convert them into vector embeddings.
from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import AsyncChromiumLoader
from langchain.document_transformers import Html2TextTransformer
from langchain.vectorstores import FAISS
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
import nest_asyncio
nest_asyncio.apply()
articles = ["https://indianexpress.com/article/sports/cricket/india-vs-south-africa-playing-xi-tip-off-rajat-patidar-shreyas-iyer-sai-sudharsan-9073113/",
"https://indianexpress.com/article/sports/cricket/ipl-auction-2024-live-streaming-when-and-where-to-watch-in-india-9073091/",
"https://indianexpress.com/article/sports/cricket/india-vs-south-africa-odi-live-streaming-2nd-odi-9072839/",
"https://indianexpress.com/article/sports/cricket/why-australian-media-outraged-ricky-ponting-trevor-baylisss-visit-ipl-auction-9072573/",
"https://indianexpress.com/article/sports/cricket/arshdeep-singh-on-his-player-of-the-match-vs-south-africa-9072530/",
"https://indianexpress.com/article/sports/cricket/australia-vs-pakistan-pat-cummins-wants-nathan-lyon-to-beat-shane-warnes-record-9072426/",
"https://indianexpress.com/article/sports/cricket/australia-vs-pakistan-ian-healy-backs-david-warner-to-play-one-more-year-of-test-cricket-9072374/",
"https://indianexpress.com/article/sports/cricket/ind-vs-sa-first-odi-shaun-pollock-unhappy-south-africa-bowlers-vs-india-9071857/",
"https://indianexpress.com/article/sports/cricket/nathan-lyon-500-test-wickets-journey-self-doubts-emerging-from-shane-warnes-shadow-9072047/",
"https://indianexpress.com/article/sports/cricket/india-versus-south-africa-1st-odi-arshdeep-singh-shows-signs-of-cracking-code-9072008/",
]
# Scrapes the blogs above
loader = AsyncChromiumLoader(articles)
docs = loader.load()
# Converts HTML to plain text
html2text = Html2TextTransformer()
docs_transformed = html2text.transform_documents(docs)
# Chunk text
text_splitter = CharacterTextSplitter(chunk_size=100,
                                      chunk_overlap=0)
chunked_documents = text_splitter.split_documents(docs_transformed)
# Load chunked documents into the FAISS index
db = FAISS.from_documents(chunked_documents,
                          HuggingFaceEmbeddings(model_name='sentence-transformers/all-mpnet-base-v2'))
# Connect query to FAISS index using a retriever
retriever = db.as_retriever(
    search_type="similarity",
    search_kwargs={'k': 4}
)
In Python, the asyncio module provides support for writing asynchronous code using the async/await syntax. However, asyncio.run() raises an error if it is called while an event loop is already running, which is exactly the situation inside a Jupyter notebook.
nest_asyncio is a workaround for this issue. Calling nest_asyncio.apply() patches asyncio so that nested event loops, and therefore nested calls to asyncio.run(), work without raising an error.
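Here is a small, self-contained illustration of the issue; the fetch_page coroutine below is just a stand-in for the asynchronous page scraping that AsyncChromiumLoader performs.
import asyncio
import nest_asyncio

nest_asyncio.apply()                     # patch asyncio so event loops can be nested

async def fetch_page(url):
    # stand-in coroutine for the asynchronous scraping done by AsyncChromiumLoader
    await asyncio.sleep(0.1)
    return f"<html>{url}</html>"

# Inside a notebook an event loop is already running, so without nest_asyncio.apply()
# this call would raise "RuntimeError: asyncio.run() cannot be called from a running event loop".
html = asyncio.run(fetch_page("https://example.com"))
print(html)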
Now let’s check our vector database and see if it can retrieve similar chunks of content.
query = "Talk about IPL?"
docs = db.similarity_search(query)
print(docs[0].page_content)

As you can see, similarity search gave us a relevant chunk of data.
Building an LLM Chain for Question-Answering
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.chains import LLMChain
text_generation_pipeline = transformers.pipeline(
    model=model,
    tokenizer=tokenizer,
    task="text-generation",
    temperature=0.2,
    repetition_penalty=1.1,
    return_full_text=True,
    max_new_tokens=300,
)
prompt_template = """
### [INST]
Instruction: Answer the question based on your
cricket knowledge. Here is context to help:
{context}
### QUESTION:
{question}
[/INST]
"""
mixtral_llm = HuggingFacePipeline(pipeline=text_generation_pipeline)
# Create prompt from prompt template
prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=prompt_template,
)
# Create llm chain
llm_chain = LLMChain(llm=mixtral_llm, prompt=prompt)
Let’s first try to work with our LLM Chain without giving it any context.
llm_chain.invoke({"context": "",
                  "question": "Who is Shreyas Iyer and what is his role in the IPL?"})
{'context': '',
'question': 'Who is Shreyas Iyer and what is his role in the IPL?',
'text': 'Shreyas Iyer is an Indian cricketer who plays as a right-handed batsman. He is the captain of the Delhi Capitals team in the Indian Premier League (IPL). Iyer has been playing for Delhi Capitals since the 2015 season, and he was appointed as the captain of the team in 2018. Under his leadership, the team reached the final of the IPL in 2020. Iyer is known for his aggressive batting style and has been a consistent performer for Delhi Capitals over the years.'}
We get a generic answer about Shreyas Iyer without any information about the recent news articles, because there was no context provided.
Creating a RAG Chain
Now let’s create a RAG chain with our LLM so it has context available for the questions we ask.
from langchain_core.runnables import RunnablePassthrough
query = "Who is Shreyas Iyer and what is his role in the IPL"
retriever = db.as_retriever(
    search_type="similarity",
    search_kwargs={'k': 20}
)
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | llm_chain
)
rag_chain.invoke(query)
'question': 'Who is Shreyas Iyer and what is his role in the IPL',
'text': 'Based on my cricket knowledge and the provided documents, Shreyas Iyer is an uncapped player who is a big-hitting batter from Kanpur. He is one of the five uncapped players that Sports5 has suggested tracking during the IPL auction. However, there is no specific information about his role in the IPL mentioned in the documents.'
Now we get information about Shreyas Iyer grounded in the recent news articles we retrieved.
Let’s take a look at some examples:
query = "What’s happening in the IPL Auction?"
rag_chain.invoke(query)
'question': 'What’s happening in the IPL Auction?',
'text': "The IPL 2024 auction is scheduled to take place on December 19, 2023, in Coca Cola Arena in Dubai. A total of 333 players, including 214 Indian players and 119 foreign players, will be part of the final auction pool, with a maximum of 77 slots available to all the 10 franchises, out of which 30 slots are reserved for overseas players. Among the players, there are 116 capped players and 215 uncapped players. The live coverage of the auction will begin at 1 PM (IST) and can be watched on Star Sports Network, Viacom 18's Jio Cinema, or followed through indianexpress.com for live coverage and updates.
query = "What are the latest updates about the Australian Cricket team?"
rag_chain.invoke(query)
'question': 'What are the latest updates about the Australian Cricket team?',
'text': "Based on the provided documents, here are the latest updates about the Australian Cricket team:\n\n1. Nathan Lyon, the off-spinner for the Australian cricket team, has recently taken his 500th Test wicket during the first cricket test between Australia and Pakistan in Perth, Australia. He became the eighth cricketer overall and the third Aussie to reach this spin-tastic milestone.\n2. Pat Cummins, the captain of the Australian cricket team, has expressed his desire for Nathan Lyon to surpass Shane Warne's record of 708 Test wickets.\n3. The Australian cricket team won the 2023 World Cup by beating India in the final match, becoming ODI champions for the sixth time.\n4. In the recent pink-ball Test against South Africa, Nathan Lyon's spell in the second innings helped turn the game in Australia's favor.\n5. Arshdeep Singh, an Indian cricketer, showed promising signs of cracking the ODI code while playing against South Africa.\n6. According to a report, clearly, Shane Warne's shadow over Nathan Lyon has receded, and it's the batsmen who are now grappling with self-doubts."
Our project is now complete. We hope you enjoyed this tutorial.