Introduction
Optical Character Recognition (OCR) is a pivotal technology for digitizing content, enabling faster document processing, real-time data extraction, and automation across sectors such as healthcare, logistics, and finance.
OCR systems are especially useful when handling unstructured text in images and PDFs. By 2025, an estimated 80-90% of the world's data will be unstructured, spanning text, images, audio, and other formats that don't fit traditional database structures. Yet only about 0.5% of this unstructured data is currently analyzed and utilized, which leaves a massive untapped potential for businesses.
Traditional OCR vs Multimodal AI
Unfortunately, traditional OCR models fall short when the task involves understanding context, handling mixed content, or processing complex visuals. They struggle with reasoning tasks that require connecting information across the text, visuals, charts, and diagrams in unstructured documents.
That’s why multimodal vision language models (VLMs) hold so much potential. Architecturally, they use neural networks that integrate visual features with text features, allowing them to understand and generate responses that are contextually relevant across both domains. The future of AI is multimodal, and in 2024 we have already seen the emergence of a number of such models.
The latest such model is Pixtral-12B. The model is open-weight, Apache 2.0 licensed, and can be used to build advanced AI workflows that incorporate both text and images. In this guide, we will test the Pixtral-12B model and show you how to build OCR systems with it. Its multimodal capability, permissive license for commercial use, and small size make it ideal for real-world applications where text doesn’t exist in isolation, such as scanned contracts, technical diagrams, or multi-language documents.
Let’s get started!
What is Pixtral-12B?
Pixtral-12B is Mistral AI’s latest multimodal vision-language model designed for tasks that require both image and text processing. You will find that Pixtral-12B is particularly powerful in OCR tasks, outperforming closed models across multiple benchmarks such as DocVQA and ChartQA. Pixtral-12B leverages a long context window (128k tokens) to process large documents with interleaved text and images, making it highly efficient for large-scale OCR.
Key Features of Pixtral-12B:
- Multimodal Processing: Understands images and text simultaneously, allowing you to extract meaning from complex document layouts.
- Variable Image Resolution: You can feed Pixtral-12B images of any size, whether they’re low-resolution scans or high-resolution documents.
- Long Context Window: With a 128k token context window, Pixtral-12B excels at handling long documents, maintaining context throughout the text extraction process.
- Open-Weight Architecture: The model is released under an Apache 2.0 license, giving you flexibility to integrate and customize the model for your specific OCR use cases.
Guide to Deploying Pixtral-12B
In this guide, you’ll learn how to deploy Pixtral-12B on a real-world OCR dataset. You will set up an environment on E2E Cloud’s high-performance GPU nodes, optimize it for OCR workloads, and run the Pixtral-12B model for advanced text extraction. By the end of this tutorial, you’ll have a functional OCR system capable of handling both text and image content in a unified manner.
Prerequisites - Setting Up a GPU Node on E2E Cloud
Launching a GPU Node
To run Pixtral-12B efficiently, you need a powerful cloud GPU. E2E Cloud offers HGX H100 and A100 GPUs, which are ideal for running large models like Pixtral-12B. Follow these steps to launch a GPU instance:
- Sign up for E2E Cloud and access your dashboard.
- Launch a cloud GPU instance with at least 40 GB of GPU memory. You can choose between an HGX H100 or an A100 GPU for optimal performance.
- Once the instance is running, access it via SSH and install the necessary software for OCR and Pixtral-12B deployments.
Installation Requirements
Before deploying Pixtral-12B, ensure that your environment is properly configured:
- Python 3.10+
- A Python virtual environment (set up as shown below)
- Dataset: Download a dataset containing images or documents that require text extraction. We will use a research report as well as an image of handwritten text.
Now, install Jupyter Lab and launch it.
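On the GPU node, the setup might look roughly like this (the environment name and port are placeholders you can change):

```bash
# Create and activate a virtual environment (Python 3.10+)
python3 -m venv pixtral-env
source pixtral-env/bin/activate

# Install Jupyter Lab and launch it on the node
pip install jupyterlab
jupyter lab --ip=0.0.0.0 --port=8888 --no-browser
```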
You can now create an SSH tunnel from your local system to develop in the Jupyter environment.
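For example, from your local machine (replace the username, node IP, and port with your own):

```bash
# Forward local port 8888 to Jupyter Lab running on the GPU node
ssh -N -L 8888:localhost:8888 username@<node-ip>
```

You can then open http://localhost:8888 in your local browser and work in Jupyter Lab as if it were running locally.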
Using Pixtral-12B for OCR and Image Understanding
Now that we have our environment set up, let’s get started.
Step 1: Install vLLM and mistral_common
We will use vLLM to serve the Pixtral-12B model, so let’s install that first. We will also install the mistral_common library, which provides the recommended tokenization and request format for this model.
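At the time of writing, Pixtral support requires a recent vLLM release, so upgrading both packages is the safest route:

```bash
pip install --upgrade vllm
pip install --upgrade mistral_common
```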
Step 2: Import Libraries
Now let’s import the libraries.
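The imports used in the rest of this guide look like this:

```python
import os                              # to read the Hugging Face token from the environment

from huggingface_hub import login      # to authenticate with Hugging Face
from vllm import LLM                   # the vLLM engine that will serve Pixtral-12B
from vllm.sampling_params import SamplingParams  # controls generation length and sampling
```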
Step 3: Login to Hugging Face and Download Model
You will need an access token from Hugging Face (huggingface.co). Get that first, and then download the model in the following way:
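A minimal sketch, assuming your token is stored in the HF_TOKEN environment variable (any other secure way of supplying it works too). Loading the model with vLLM downloads the weights on first run:

```python
import os
from huggingface_hub import login
from vllm import LLM
from vllm.sampling_params import SamplingParams

# Authenticate with Hugging Face so the Pixtral-12B weights can be downloaded
login(token=os.environ["HF_TOKEN"])

model_name = "mistralai/Pixtral-12B-2409"

# tokenizer_mode="mistral" tells vLLM to use the mistral_common tokenizer
llm = LLM(model=model_name, tokenizer_mode="mistral")

# Allow long outputs so dense documents can be transcribed in full
sampling_params = SamplingParams(max_tokens=8192)
```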
Step 4: Query the Model with Image
You can pass an image URL directly to Pixtral, or you can provide the image in base64-encoded format. We will do the former for simplicity.
The image looks like this:
As you can see, it has a considerable amount of handwritten text. Along with that, it is also rotated. We will perform OCR on this.
You can do that by querying the model in the following way:
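A sketch of such a query, reusing the llm and sampling_params objects from Step 3 (the image URL and the prompt wording are placeholders you should adapt to your own document):

```python
# URL of the scanned image to transcribe (placeholder)
image_url = "https://example.com/handwritten-note.jpg"

# A chat-style request that mixes a text instruction with an image URL
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Transcribe all the text in this image."},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]

# vLLM applies Pixtral's chat template, fetches the image, and runs generation
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```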
Now let’s see what the multimodal model outputs.
This is what we get:
Now, we will try with a more complex image - a handwritten invoice with multiple images.
The output we get is the following:
The result goes a little off here, especially on the numbers. However, the model has done spectacularly well on the text.
Final Notes
As you can see, Pixtral-12B is an outstanding vision language model, performing OCR on handwritten text fairly well. We used an A100 cloud GPU to run and test this model.
If you want to get started with Pixtral-12B today, sign up for E2E Cloud and follow the instructions above.