Introduction
AI-powered solutions have become increasingly prominent in the finance sector. Transformer-based models have driven remarkable progress in chatbots, allowing them to better understand user behaviour and deliver more personalized service. This evolution is not just about language; it's about creating chatbots that can truly connect with users. Fintech startups adopting advanced AI-based chatbots report improved communication with clients and valuable insight into their needs. The integration of rules, natural language processing (NLP), and machine learning (ML) empowers these chatbots to evaluate data efficiently and address a wide range of customer requests.
- In the domain of conversational finance, a generative AI chatbot tailored for fintech startups can enhance user engagement by understanding and responding to complex financial queries, providing clients with a seamless, personalized conversational experience.
- When it comes to financial analysis, a generative AI chatbot proves invaluable for fintech startups by swiftly processing and interpreting large datasets, offering real-time results, and assisting in informed decision-making for investment strategies and risk management.
- For fintech startups that need synthetic data for testing and development, a generative AI chatbot can efficiently create diverse scenarios, simulating various financial situations and helping ensure the robustness and adaptability of their systems.
In this blog, we'll explore the journey of building an AI-powered financial chatbot, tailored specifically for fintech startups, and how it can revolutionize client interactions in the finance industry.
Architecture
Our approach involves implementing Retrieval-Augmented Generation (RAG), which combines a traditional language model with an information retrieval step. In simple terms, it enhances language generation by first fetching relevant information from a database and then using that data to create more contextually relevant responses through a large language model from Hugging Face. This two-step process ensures the generated responses are based not only on the model's general knowledge but also on specific information retrieved for a given query.
To execute this, we've chosen the Zephyr 7B Beta language model from the Hugging Face library. Zephyr 7B stands out as a cutting-edge language model with an impressive 7 billion parameters. This extensive capacity enables it to understand and produce text that closely resembles human language, showcasing exceptional accuracy and consistency.
In conjunction with Zephyr 7B Beta, we use pgvector, an open-source extension that adds vector support to the widely used relational database PostgreSQL. Designed for high-dimensional vector data, such as the embeddings produced by language models like Zephyr 7B Beta, pgvector excels at efficiently storing, indexing, and searching through this type of data. This makes it a crucial tool for projects dealing with large datasets and complex queries.
Lastly, for an enhanced user interface in our AI-powered chatbot, we employ Streamlit. Streamlit simplifies the development process by enabling rapid construction of interactive dashboards and data visualizations, ensuring a more intuitive and engaging experience for users interacting with the chatbot.
Step-by-Step Guide
The partnership between NVIDIA GPU Cloud and E2E Cloud brings a full-fledged solution for those looking for top-notch GPU performance in their cloud computing projects. This collaboration combines advanced GPU technology with a dependable cloud infrastructure, guaranteeing a smooth and effective computing experience across various applications.
You can visit https://myaccount.e2enetworks.com/ and register to get your NVIDIA GPU suite.
Requirements
Let’s get into the code
This code snippet installs and imports the necessary Python packages. It includes packages for interacting with the operating system, a PDF document loader for text extraction, PostgreSQL database connectivity, PyTorch functionalities, numerical operations, vector indexing, pretrained language models, and Streamlit for web application development.
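Since the snippet itself isn't shown here, the setup might look like the following sketch (package names are assumed from the description; versions are not pinned):

```python
# Install once in your shell (package names assumed from the description):
#   pip install psycopg2-binary pypdf langchain torch numpy transformers streamlit

import os                    # operating-system and filesystem access
import psycopg2              # PostgreSQL connectivity (pgvector lives server-side)
import torch                 # PyTorch tensor operations
import numpy as np           # numerical operations
import streamlit as st       # web application framework
from transformers import AutoTokenizer, AutoModel, pipeline  # pretrained models
from langchain.document_loaders import PyPDFLoader           # PDF text extraction
# Note: in newer LangChain releases the loader moved to
# langchain_community.document_loaders.
```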
The first part of this code establishes the connection to the pgvector-enabled database. The function ‘psycopg2.connect’ opens the connection; the parameter ‘dbname’ names the database to connect to, while ‘user’ and ‘password’ supply the credentials used for authentication.
The second part sets up a table for storing embeddings. It issues a SQL query creating a table named ‘embeddings’ with two columns: ‘id’ and ‘vector’. The ‘id’ column is of type serial, an auto-incrementing integer, and serves as the primary key. The ‘vector’ column is of type vector(512), i.e. a 512-element vector; you can adjust the size to match your model’s output dimension.
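A minimal sketch of this step (database name, credentials, and host are placeholders you must replace with your own):

```python
# DDL for the table described above; vector(512) should match your
# embedding model's output dimension.
CREATE_EMBEDDINGS_TABLE = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE IF NOT EXISTS embeddings (
    id serial PRIMARY KEY,   -- auto-incrementing integer key
    vector vector(512)       -- adjust 512 to your model's embedding size
);
"""

def get_connection(dbname="vectordb", user="postgres",
                   password="your_password", host="localhost", port=5432):
    """Connect to the pgvector-enabled database (credentials are placeholders)."""
    import psycopg2  # imported lazily so the sketch can be read without the driver
    return psycopg2.connect(dbname=dbname, user=user, password=password,
                            host=host, port=port)

def create_embeddings_table(conn):
    """Enable pgvector and create the embeddings table if it doesn't exist."""
    with conn.cursor() as cur:
        cur.execute(CREATE_EMBEDDINGS_TABLE)
    conn.commit()
```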
This code snippet defines a function, extract_text_from_pdf, that uses the PyPDFLoader class to extract text from PDF files. It iterates through a specified directory, reads PDF files named ‘Insurance Doc.pdf’, and appends the extracted text to a list named data.
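A sketch of what this extraction step might look like (the directory layout and the helper that walks it are assumptions):

```python
import os

def extract_text_from_pdf(pdf_path):
    """Extract the text of one PDF with LangChain's PyPDFLoader."""
    from langchain.document_loaders import PyPDFLoader  # lazy import
    loader = PyPDFLoader(pdf_path)
    # loader.load() returns one Document per page; join their text.
    return "\n".join(page.page_content for page in loader.load())

def collect_pdf_texts(directory, filename="Insurance Doc.pdf"):
    """Walk `directory` and gather text from every PDF matching `filename`."""
    data = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            if name == filename:
                data.append(extract_text_from_pdf(os.path.join(root, name)))
    return data
```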
This code initializes a text generation model and tokenizer from Hugging Face's model hub using the identifier ‘HuggingFaceH4/zephyr-7b-beta’. The ‘AutoTokenizer’ and ‘AutoModel’ classes are used for this purpose. Additionally, a text generation pipeline is set up using the ‘pipeline’ function from the ‘transformers’ library. The pipeline uses the same model identifier and is configured to use Torch's bfloat16 data type and automatic device mapping.
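A sketch of this initialization, wrapped in a function because downloading a 7-billion-parameter model requires substantial disk space and GPU memory:

```python
MODEL_ID = "HuggingFaceH4/zephyr-7b-beta"

def load_models():
    """Load the tokenizer, embedding model, and text-generation pipeline."""
    import torch
    from transformers import AutoTokenizer, AutoModel, pipeline
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModel.from_pretrained(MODEL_ID)
    generator = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype=torch.bfloat16,  # roughly halves memory vs. float32
        device_map="auto",           # spread layers across available devices
    )
    return tokenizer, model, generator
```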
This Python code defines a function, ‘generate_embeddings’, which uses a pretrained model and tokenizer to convert text into embeddings. The embeddings are then stored in a PostgreSQL database table named ‘embeddings’. The code creates an index for these embeddings and inserts each embedding into the database.
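One way this might be written; mean-pooling the last hidden state into a single vector is an assumption here (the original does not specify the pooling strategy), and the IVFFlat index is one common pgvector indexing choice:

```python
def to_pgvector_literal(vec):
    """Format a list of floats as a pgvector text literal, e.g. '[0.1,0.2]'."""
    return "[" + ",".join(str(float(x)) for x in vec) + "]"

def generate_embeddings(texts, tokenizer, model, conn):
    """Mean-pool each text's last hidden state into one vector and store it."""
    import torch
    with conn.cursor() as cur:
        # An IVFFlat index speeds up approximate cosine-similarity search.
        cur.execute("CREATE INDEX IF NOT EXISTS embeddings_vector_idx "
                    "ON embeddings USING ivfflat (vector vector_cosine_ops);")
        for text in texts:
            inputs = tokenizer(text, return_tensors="pt",
                               truncation=True, max_length=512)
            with torch.no_grad():
                outputs = model(**inputs)
            embedding = outputs.last_hidden_state.mean(dim=1).squeeze().tolist()
            cur.execute("INSERT INTO embeddings (vector) VALUES (%s::vector)",
                        (to_pgvector_literal(embedding),))
    conn.commit()
```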
This code defines a function, ‘your_retrieval_condition’, which takes a query embedding and an optional threshold as parameters. It converts the query embedding to a string format suitable for an SQL query and generates a retrieval condition based on cosine similarity. The condition is formulated as a comparison between the query embedding and the vectors stored in the ‘vector’ column of a PostgreSQL database table. The generated SQL condition checks if the cosine similarity between the query embedding and each stored vector is greater than the specified threshold (default is 0.7).
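A minimal sketch of that condition builder. Note that pgvector's `<=>` operator returns cosine *distance*, so cosine similarity is `1 - distance`:

```python
def your_retrieval_condition(query_embedding, threshold=0.7):
    """Return a SQL condition keeping rows whose cosine similarity to the
    query embedding exceeds `threshold` (default 0.7)."""
    # pgvector expects the query vector as a text literal like '[0.1,0.2]'.
    literal = "[" + ",".join(str(float(x)) for x in query_embedding) + "]"
    return f"1 - (vector <=> '{literal}') > {threshold}"
```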
This code defines a function, ‘rag_query_with_generation’, which performs a Retrieval-Augmented Generation (RAG)-style query with text generation. It tokenizes and encodes the input query, generates a query embedding, and retrieves relevant embeddings from a PostgreSQL database using a specified retrieval condition. The retrieved embeddings are converted into a tensor, combined with the input_ids, and used as input for a text generation pipeline. The function then generates a response by decoding the model's output and returning the generated text. Parameters such as temperature, top-k, and top-p are set for controlling the text generation process.
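A simplified sketch of this flow. It departs from the description in one place: rather than splicing raw embedding tensors into the generator's input, it only retrieves the ids of matching rows; a production system would store the source text alongside each vector and build it into the prompt. The sampling parameters (temperature, top-k, top-p) are shown with typical values, not values from the original:

```python
def retrieval_sql(query_embedding, threshold=0.7):
    """SQL selecting rows whose cosine similarity to the query exceeds threshold."""
    literal = "[" + ",".join(str(float(x)) for x in query_embedding) + "]"
    return ("SELECT id FROM embeddings "
            f"WHERE 1 - (vector <=> '{literal}') > {threshold}")

def rag_query_with_generation(query, tokenizer, model, generator, conn,
                              threshold=0.7):
    """Embed the query, retrieve similar rows, then generate a response."""
    import torch
    inputs = tokenizer(query, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        embedding = model(**inputs).last_hidden_state.mean(dim=1).squeeze().tolist()
    with conn.cursor() as cur:
        cur.execute(retrieval_sql(embedding, threshold))
        matches = cur.fetchall()
    # In a full system, `matches` would be joined back to the source text
    # and spliced into the prompt before generation.
    # temperature / top_k / top_p control randomness and vocabulary pruning.
    result = generator(query, max_new_tokens=256, do_sample=True,
                       temperature=0.7, top_k=50, top_p=0.95)
    return result[0]["generated_text"]
```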
This Streamlit app code creates a fintech chatbot interface. It configures the app with a title, icon, wide layout, and initial sidebar state. The background is set using a background image. The main content includes a title, an image, and a text input for user queries. Upon clicking the ‘Send’ button, the user's input is processed by the ‘rag_query_with_generation’ function, and the generated response is displayed in a text area labeled ‘Bot’. The app is initiated when the script is run, presenting an AI-powered financial chatbot interface to users.
This is how an AI-powered financial chatbot application is built using Streamlit. The design includes a prominent title introducing the AI chatbot and a user-friendly input field for interacting with the bot. Users can type queries and receive responses by clicking the ‘Send’ button.
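A minimal sketch of such an app (the page title, icon, and labels are assumptions, and the RAG call is stubbed here since it is defined earlier in the guide; the background-image styling is omitted). Save it as, say, `app.py` and launch it with `streamlit run app.py`:

```python
PAGE_CONFIG = {
    "page_title": "AI-Powered Financial Chatbot",  # assumed title text
    "page_icon": "💬",
    "layout": "wide",
    "initial_sidebar_state": "expanded",
}

def rag_query_with_generation(user_query):
    """Stand-in for the RAG function built earlier in this guide."""
    return f"(generated response for: {user_query})"

def main():
    import streamlit as st
    st.set_page_config(**PAGE_CONFIG)
    st.title("AI-Powered Financial Chatbot")
    user_query = st.text_input("Type your query:")
    # On 'Send', run the query through the RAG pipeline and show the answer.
    if st.button("Send") and user_query:
        st.text_area("Bot", value=rag_query_with_generation(user_query))

# Launch with: streamlit run app.py  (Streamlit calls main's module itself)
```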
Conclusion
In conclusion, combining technologies like the RAG model, Zephyr 7B Beta, PgVector, and Streamlit creates a strong foundation for building a personalized AI-powered financial chatbot for fintech startups. This collaboration streamlines data retrieval, language generation, and user interfaces, resulting in a sophisticated chatbot adept at understanding and responding to user queries. The integrated approach also optimizes the handling of high-dimensional vector data, ensuring the chatbot delivers personalized and relevant information in the dynamic fintech domain.