How to create your own RAG AI with LangFlow

Tuesday, June 11, 2024
1297 words
7 minutes

Today I’d like to start my series of articles on using the LangChain open-source toolset.

LangChain is one of the most widely used frameworks for creating AI agents, if not THE most used. With nearly 90,000 stars and almost 14,000 forks on GitHub at the time of writing, we can safely say it is popular.

Reason enough for me to start covering it. To kick things off, let’s answer a few essential questions.

TLDR: How to get started with LangFlow, here.

Introduction

What is a RAG AI Agent?

Retrieval Augmented Generation (RAG) is a technique to enhance the accuracy and reliability of AI with facts retrieved from other sources.

The RAG name was originally coined in this paper, where the authors stated:

RAG could be employed in a wide variety of scenarios with direct benefit to society, for example by endowing it with a medical index and asking it open-domain questions on that topic, or by helping people be more effective at their jobs

Essentially, you can curate your own set of sources that you have vetted for accuracy and fitness for purpose, creating applications with your specific needs in mind and without the potential bias or opinion of the original AI model.

RAGs are a great way to build business solutions that can run locally with open-source AI models such as Llama, or with large cloud-scale AI services such as OpenAI.

What are LangChain and LangFlow?

LangChain is an open-source Python library that integrates with AI models through APIs and functions. With LangChain you can create natural language interactions with an AI, using concepts such as Chains and Prompts to weave together a custom RAG AI.
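To give a flavour of what that looks like in code, here is a minimal sketch of a Prompt chained to a chat model. It assumes the langchain-core and langchain-openai packages are installed and an OPENAI_API_KEY environment variable is set.

```python
# Minimal "prompt | model" chain sketch (assumes langchain-core and
# langchain-openai are installed and OPENAI_API_KEY is set).
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")

# Chains compose with the | operator: prompt -> model -> plain string output.
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"topic": "Retrieval Augmented Generation"}))
```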

LangFlow is a GUI for diagramming LangChain flows, letting you experiment with and prototype an AI solution quickly. It offers drag-and-drop components and a chat box so you can start interacting immediately, which simplifies the initial cycle of iteration and experimentation.

How do I get started?

To get started creating your AI agent with LangFlow, there are three options to choose from:

  1. SaaS installation using Huggingface Spaces:
    • Private: create a Huggingface account, then duplicate the LangFlow Space; leave everything on default and click Duplicate Space.
      • This will take a bit of time (30+ minutes, not included in the 5 minutes 😜)
        • In the meantime, head over to the Space settings > Variables and secrets > New secret
        • Name: OPENAI_API_KEY, Value: your OpenAI API key.
      • Go drink a coffee, walk outside, do other things while the Huggingface space is created.
      • Then, you can skip to this chapter.
    • Public: access the public LangFlow service; it’s ready to go, but everyone can read it!
      • Do not use your OpenAI API keys here!
      • Then, you can skip to this chapter.
  2. A Google Cloud installation. If preferred, go here.
    • Then you can skip to this chapter.
  3. Or, run it on your own machine using VirtualBox.
    • This is my preferred method: it’s low cost and you have full control. Follow along in the next chapter to find out how to quickly set it up.

How to install LangFlow on your own machine

  1. Install Vagrant and VirtualBox.
  2. Install via my LangFlow repo:
    • git clone https://github.com/rpstreef/langflow-setup
    • Within the cloned repo, execute vagrant up to install the LangFlow VM with Docker and docker-compose.
      • Type make to see all the available commands.
    • Optionally:
      • Run: make vm to get access to the Virtual Machine.
        • Note: if you do not have VBoxManage available on your command line, this will not work.
        • Sometimes you may get a message like this: ssh: connect to host 192.168.1.39 port 22: No route to host.
          • Retry the connection in that case.
      • Run: dry to see all the containers running
    • Open your browser and navigate to http://localhost:7860/flows
LangFlow running on VirtualBox and Docker

Now we can start creating a chat bot.

To use the LangFlow store, you need to register and get an API key here.

How to create our first RAG AI with LangFlow

This is what we’re going to build today.

LangFlow Example RAG AI

To start with, what is the aim of this particular RAG? What do we want it to do?

A chat bot that can answer detailed questions about any web-page we choose.

A perfect use case for this is article analysis via questions and answers. You can formulate the questions such that the bot can provide additional explanation on harder to understand concepts.

I’m sure there are other use cases like this; let me know on Twitter what you would use it for.

Basically there are two parts to building this RAG chat-bot:

  1. We need to retrieve the information we want to ask questions about,
  2. then make that information available to the OpenAI chat bot.

Part 1: Processing the web-page data

To start off, we need to store the website text as vectors in a database so that we can ask questions about it later.

I’ve added links to the official LangChain documentation for each of the concepts discussed; a code sketch of these steps follows the list below.

  1. WebBaseLoader: Enter the web page we want to analyze.
  2. OpenAIEmbeddings: This will “embed” the information read from the website. We use OpenAI’s text-embedding-3-small model, which should be ample for most use cases; it creates a vector representation of the documents we’re about to create. The TikToken tokenizer is used so the text is split accurately in terms of OpenAI tokens.
  3. CharacterTextSplitter: This part cuts the text into chunks with a bit of overlap, and outputs these parts as documents to FAISS, the vector database we’re using in this scenario.
  4. FAISS: Facebook AI Similarity Search (FAISS) is a vector store that can store and search vectorized documents in memory.
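For reference, here is a minimal sketch of Part 1 written directly in LangChain code. It assumes the langchain, langchain-community, langchain-openai, and faiss-cpu packages plus an OPENAI_API_KEY are available; the URL is just a placeholder, and LangFlow builds the equivalent flow for you visually.

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter

# 1. Load the web page we want to ask questions about (placeholder URL).
docs = WebBaseLoader("https://example.com/some-article").load()

# 2. Split the text into overlapping chunks, measured in OpenAI (TikToken) tokens.
splitter = CharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base", chunk_size=1000, chunk_overlap=200
)
chunks = splitter.split_documents(docs)

# 3. Embed the chunks and store them in an in-memory FAISS index.
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = FAISS.from_documents(chunks, embeddings)
```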

Part 2: Let’s chat with your RAG

Now that we have the basic information retrieval and vector storage in place, we can start adding a Chat AI and chaining the documents we’ve created with a Retriever (search with a question and retrieve documents via a prompt in the chat box). A code sketch of this part follows the list below.

  1. ChatOpenAI: We use the chat feature offered by OpenAI and set the model to gpt-3.5-turbo-0125 to save on cost. Additionally, the temperature is set to 0.2, telling the model we want high-probability answers rather than overly creative ones (which higher values of roughly 0.5 to 1.0 would encourage).
  2. CombineDocsChain: We use chain type stuff, which basically means the retrieved documents are inserted directly into the prompt as context. For efficiency you could change this to map_reduce, but that would mean complicating the Prompt parts that combine the documents.
  3. RetrievalQAWithSourcesChain: Chains everything together and builds the context from the user question and the documents retrieved from the vector database.
  4. Let’s chat: Now we can ask what our webpage is about:
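And a minimal sketch of Part 2 in LangChain code, reusing the vectorstore built in the Part 1 sketch above (the exact parameters are assumptions, not necessarily what LangFlow exports):

```python
from langchain.chains import RetrievalQAWithSourcesChain
from langchain_openai import ChatOpenAI

# Low temperature keeps answers close to the retrieved text.
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0.2)

# chain_type="stuff" puts the retrieved chunks straight into the prompt as context.
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)

result = qa_chain.invoke({"question": "What is this web page about?"})
print(result["answer"])
print(result["sources"])
```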

Experiment and Test

This is just a starting point and a working example of how quickly a RAG can be created for a specific purpose. A few things to consider that can optimize the solution:

  • Experiment with the token sizes of the OpenAIEmbeddings and ChatOpenAI components.
  • Change the chain type of the CombineDocsChain to map_reduce (see the sketch below).
  • Improve the website text data quality by reducing the token count of the source text (for example with lemmatization and stemming).

Data input quality has a big impact on RAG output quality.
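As an example of the second bullet, switching the Part 2 sketch to map_reduce is a one-argument change (at the cost of more LLM calls per question):

```python
# map_reduce answers each retrieved chunk separately, then combines the partial answers.
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,
    chain_type="map_reduce",
    retriever=vectorstore.as_retriever(),
)
```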

Conclusion

In conclusion, let’s summarize what’s been covered and learned:

  • Use LangFlow to quickly create a chat-bot that can answer questions on the content of your choice.
  • LangFlow consists of many components of the LangChain framework. Understanding these components can open up new possibilities to automate with LangFlow.
  • LangFlow is a playground to experiment with AI: prototype an idea quickly, then build it with LangChain.
  • (bonus) Huggingface Spaces hosts a variety of generative AI applications, such as image generation with Stable Diffusion or Midjourney, and GPT-4o chat. Check it out!

For the next article I’d like to dive deeper into creating a RAG agent using LangGraph and deploying it to a Docker container. Stay tuned if you’re interested.

For now, let’s discuss on Twitter:

  • What have you built?
  • What can be improved in this example and why?

Appreciate your time and till the next one!

© 2024 Rolf Streefkerk. All rights reserved.
