In this tutorial, we will set up monitoring and metrics collection for an AI model endpoint using the open-source technologies Prometheus and Grafana. The endpoint will be built with Flask and Ollama, providing a simple API for prompting an AI assistant.
The Need to Monitor a Model Endpoint
Monitoring a model endpoint is crucial in MLOps for several reasons:
1. Performance Tracking: This is done to ensure that the model is performing as expected post-deployment. Performance can degrade over time due to concept drift, data drift, or changes in the environment where the model is deployed.
2. Data Quality: We need to monitor the model's input data for quality issues and anomalies. Anomalies might indicate problems with data pipelines or unexpected changes in data patterns that the model wasn't trained to handle.
3. Resource Utilization: Monitoring the compute resources used by the model endpoint, such as CPU, memory, and disk I/O, helps in optimizing resource allocation and can reduce costs.
4. Latency: Monitoring helps in measuring the response time of the model endpoint. High latency could affect user experience and the overall performance of the application using the model.
5. Throughput: Model monitoring helps in tracking the number of requests that the model endpoint can handle in a given time frame. This helps in understanding the scalability needs and whether the current infrastructure is sufficient.
6. Error Rates: It is also used to identify the rate of failed requests or prediction errors, which could indicate problems with the model or with the infrastructure.
7. Versioning: Monitoring helps keep track of the different model versions that are deployed, making it easier to roll back to a previous version if needed.
8. Security: It ensures that the model endpoint is secure from external threats and that data privacy is maintained, especially if sensitive data is being used.
9. Compliance and Auditing: It maintains logs and records of the model's predictions and performance, which may be necessary for regulatory compliance in certain industries.
10. Feedback Loop: It collects information on the model's performance that can be used to further refine and improve the model through retraining.
In essence, monitoring a model endpoint is about maintaining the quality and reliability of machine learning services in production. It enables the team to respond quickly to issues, update models as needed, and ensure that the model meets its objectives.
In this article, we will use the open-source technologies Prometheus and Grafana to monitor a Llama2 endpoint deployed using Ollama and served via the Flask framework.
A Note on Prometheus and Grafana
Prometheus is an open-source monitoring and alerting toolkit widely used in dynamic service-oriented environments. It was originally built by SoundCloud and has a large community of developers and users. Prometheus is designed to collect and process metrics in real time. Prometheus stores data as time series, and each time series is identified by a metric name and a set of key-value pairs, known as labels.
Grafana is an open-source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources, one of the most popular being Prometheus. Grafana allows users to create insightful and beautiful dashboards that are customizable and shareable.
Prometheus and Grafana are often used together to deliver effective monitoring and visualization capabilities. Prometheus collects and stores metrics, while Grafana provides a powerful interface to visualize the data stored in Prometheus. The combination allows developers and system administrators to detect and respond to issues in their environments, understand system behavior, and improve the performance of applications. Grafana's ability to pull data from multiple sources also allows for a combined view across different systems and components, providing a comprehensive overview of the infrastructure's health and performance.
Launching a GPU Node
Since we will be working with a local model pulled from Ollama, we will need a GPU with sufficiently large memory. To run the code in this blog, I used a V100 GPU node launched on E2E Networks.
Head over to https://myaccount.e2enetworks.com/ to sign up.
Getting into the Code
Start an Ollama server.
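Assuming Ollama is already installed on the node, the server can be started with:

```bash
ollama serve
```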
This will start an instance of Ollama on the default port 11434.
Then pull the llama2:7b model.
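This is done with the Ollama CLI:

```bash
ollama pull llama2:7b
```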
Ensure the following dependencies are installed.
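Assuming pip, something along these lines should cover them; the exact package name of the Flask Prometheus metrics utility (`flask-prometheus-metrics`, which provides the `register_metrics` helper used below) is an assumption:

```bash
pip install flask ollama prometheus-client flask-prometheus-metrics
```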
Now create a file called app.py and paste the following into it.
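Below is a minimal sketch of what `app.py` can look like. It assumes the `flask-prometheus-metrics` package for `register_metrics` and the official `ollama` Python client for talking to the local Ollama server; adjust the model name and host if yours differ.

```python
# app.py -- a minimal sketch; the flask-prometheus-metrics package and
# the llama2:7b model name are assumptions based on the walkthrough below.
from flask import Flask, Response, request, stream_with_context
from prometheus_client import make_wsgi_app
from werkzeug.middleware.dispatcher import DispatcherMiddleware
from werkzeug.serving import run_simple
from flask_prometheus_metrics import register_metrics
from ollama import Client

app = Flask(__name__)

# Initialize the Ollama client once, outside the request handler,
# so it is not recreated on every request.
client = Client(host="http://localhost:11434")


@app.route("/generate", methods=["POST"])
def generate():
    data = request.get_json(silent=True) or {}
    prompt = data.get("prompt")
    if not prompt:
        return Response("No prompt provided", status=400)

    def generate_stream():
        try:
            # Stream tokens from the model as they are produced.
            for chunk in client.generate(model="llama2:7b", prompt=prompt, stream=True):
                yield chunk["response"]
        except Exception as exc:
            # Surface any generation error in the stream itself.
            yield f"Error: {exc}"

    return Response(stream_with_context(generate_stream()), mimetype="text/plain")


# Register Prometheus request metrics, labelled with version and config.
register_metrics(app, app_version="v0.1.2", app_config="staging")

# Serve the Prometheus /metrics endpoint alongside the Flask app.
dispatcher = DispatcherMiddleware(app.wsgi_app, {"/metrics": make_wsgi_app()})

if __name__ == "__main__":
    run_simple("0.0.0.0", 5000, dispatcher)
```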
The given Python code is for a Flask web application that integrates with Prometheus for metrics monitoring while using Ollama to generate responses to prompts. The code is broken down into several parts:
1. Imports: The necessary modules and functions are imported from Flask, Prometheus, Werkzeug, a custom Flask Prometheus metrics utility, and `ollama`.
2. Flask App Initialization: A Flask app instance is created.
3. Ollama Client Initialization: A client for the Ollama service is initialized outside of the request handling function to prevent reinitialization for every request, which would be inefficient.
4. Route Definition (`/generate`):
- The `/generate` endpoint is defined to accept POST requests.
- It extracts a `prompt` from the JSON body of the request.
- If no prompt is provided, it returns a 400 Bad Request response.
- If a prompt is provided, it calls Ollama to generate content based on the prompt, streaming the response. This is particularly useful for long-running generation tasks, as it can provide output incrementally.
- An inner function `generate_stream` is defined to use the Ollama client to generate responses. It uses a generator and the `yield` keyword to stream the response back to the client.
- If an exception occurs during generation, it is caught, and the exception message is sent back in the stream.
- The Flask `stream_with_context` helper is used to ensure that the request context is not lost during streaming. This is necessary because the streaming body is produced by a generator, and without the wrapper, Flask might lose the context of the current application or request by the time the generator runs.
- The `/generate` endpoint returns a `Response` object with streaming content.
5. Prometheus Metrics Registration:
- The `register_metrics` function is called, adding Prometheus metrics tracking to the Flask app. The application version (`v0.1.2`) and configuration (`staging`) are passed as labels for these metrics.
6. DispatcherMiddleware:
- The `DispatcherMiddleware` is used to combine the Flask app with another WSGI application. Here, it is set up to route requests to `/metrics` to a Prometheus WSGI application created by `make_wsgi_app()`. This allows the Flask app to serve its endpoints while also serving the Prometheus metrics endpoint under the same WSGI server.
7. Running the Application:
- The `run_simple` function from Werkzeug is used to run a development WSGI server. The server listens on all available IP addresses (`0.0.0.0`) at port `5000`, serving the combined Flask and Prometheus app via the `dispatcher`.
This application setup allows for a simple AI content generation service with streaming responses and monitoring capabilities via Prometheus. The Flask app serves the application's main functionality and the metrics endpoint, making it a self-contained service suitable for development and possibly staging environments.
Now run the Flask application.
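With the `app.py` sketched above, which guards `run_simple` behind `if __name__ == "__main__"`, the application can be launched directly:

```bash
python app.py
```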
A Flask server will be up and running on http://localhost:5000. You can query your llama2:7b model by sending a cURL request as follows:
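For example (the prompt text here is just a placeholder):

```bash
curl -X POST http://localhost:5000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Why is the sky blue?"}'
```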
You will see the tokens streaming into your terminal window.
Now create a file called docker-compose.yaml and paste the following into it.
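A minimal configuration along these lines should work; the image tag and volume path are assumptions:

```yaml
version: "3"
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
```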
The `docker-compose.yaml` file defines how your Prometheus service should run within a Docker container, including which image to use, which ports to expose, and where to find its configuration file.
Make another file called prometheus.yml and paste the following into it.
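A minimal sketch, assuming a 15-second scrape interval; the job name is a placeholder you can change:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "flask-llama2-endpoint"
    static_configs:
      - targets: ["<server-ip>:5000"]
```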
The `prometheus.yml` file is the configuration for Prometheus itself, detailing how it should scrape metrics. In the targets, we need to plug in the <server-ip>:<port> of our Flask application, since Prometheus will try to scrape it externally.
Now run the following commands:
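From the directory containing `docker-compose.yaml` and `prometheus.yml`:

```bash
docker compose pull
docker compose up -d
```

On older Docker installations, the standalone `docker-compose` binary is used instead (`docker-compose up -d`).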
This will start a Prometheus server at http://<server-ip>:9090.
You can go to the Targets page of this server (Status → Targets) to confirm that it is scraping your Flask application.