How to Create a Local RAG Agent with Ollama and LangChain

What is RAG?

RAG stands for Retrieval-Augmented Generation, a powerful technique designed to enhance the performance of large language models (LLMs) by providing them with specific, relevant context in the form of documents. Unlike traditional LLMs that generate responses purely based on their pre-trained knowledge, RAG allows you to align the model’s output more closely with your desired outcomes by retrieving and utilizing real-time data or domain-specific information.

RAG vs Fine-Tuning

While both RAG and fine-tuning aim to improve the performance of LLMs, RAG is often a more efficient and resource-friendly method. Fine-tuning involves retraining a model on a specialized dataset, which requires significant computational resources, time, and expertise. RAG, on the other hand, dynamically retrieves relevant information and incorporates it into the generation process, allowing for more flexible and cost-effective adaptation to new tasks without extensive retraining.

Building a RAG Agent

Installing the Requirements

Install Ollama

Ollama provides the backend infrastructure needed to run Llama models locally. To get started, head to Ollama's website, download the application, and follow the instructions to set it up on your local machine.
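
If you plan to follow along with the code below, pull the llama3.1 model once Ollama is installed (any model you have pulled will work, as long as you adjust the model name in the later examples):

ollama pull llama3.1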

Install LangChain Requirements

LangChain is a Python framework designed to work with various LLMs and vector databases, making it ideal for building RAG agents. The code below also relies on the FAISS vector store, HuggingFace embeddings, and the requests library, so install everything in one go:

pip install langchain langchain-core langchain-community langchain-huggingface faiss-cpu sentence-transformers requests

Coding the RAG Agent

Create an API Function

First, you’ll need a function to interact with your local Ollama instance. Here’s how you can set it up:

from requests import post as rpost

def call_llama(prompt):
    """Send a prompt to the local Ollama server and return the generated text."""
    headers = {"Content-Type": "application/json"}
    payload = {
        "model": "llama3.1",  # any model you have pulled with `ollama pull`
        "prompt": prompt,
        "stream": False,      # return the full response as a single JSON object
    }

    # Ollama's generate endpoint listens on port 11434 by default
    response = rpost(
        "http://localhost:11434/api/generate",
        headers=headers,
        json=payload
    )
    return response.json()["response"]
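
As a quick sanity check, assuming the Ollama server is running and llama3.1 has been pulled, you can call the function directly:

print(call_llama("In one sentence, what is retrieval-augmented generation?"))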

Create a LangChain LLM

Next, integrate this function into a custom LLM class within LangChain:

from langchain_core.language_models.llms import LLM

class LLaMa(LLM):
    """Minimal LangChain LLM wrapper around the local Ollama endpoint."""

    def _call(self, prompt, stop=None, **kwargs):
        # Delegate generation to the helper defined above
        return call_llama(prompt)

    @property
    def _llm_type(self):
        # Identifier LangChain uses for logging and serialization
        return "llama-3.1-8b"

Integrating the RAG Agent

Setting Up the Retriever

The retriever is responsible for fetching relevant documents based on the user’s query. Here’s how to set it up using FAISS for vector storage and HuggingFace’s pre-trained embeddings:

from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings

documents = [
    {"content": "What is your return policy? ..."},
    {"content": "How long does shipping take? ..."},
    # Add more documents as needed
]

texts = [doc["content"] for doc in documents]

# Embed the texts, index them in FAISS, and expose the index
# as a retriever that returns the top 5 matches per query
retriever = FAISS.from_texts(
    texts,
    HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
).as_retriever(search_kwargs={"k": 5})
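
It’s worth testing the retriever in isolation before wiring it into a chain. In recent LangChain versions retrievers are Runnables, so invoke returns the top-matching documents for a query (older versions expose get_relevant_documents instead):

docs = retriever.invoke("What is your return policy?")
print([doc.page_content for doc in docs])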

Create the Prompt Template

Define the prompt template that the RAG agent will use to generate responses based on the retrieved documents:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

faq_template = """
You are a chat agent for my E-Commerce Company. As a chat agent, it is your duty to help the human with their inquiry and make them a happy customer.

Help them, using the following context:
<context>
{context}
</context>
"""

faq_prompt = ChatPromptTemplate.from_messages([
    ("system", faq_template),
    MessagesPlaceholder("messages")
])
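
To see exactly what the model will receive, you can render the template with sample values; format_messages fills in both the {context} variable and the messages placeholder (the sample strings here are just illustrative):

from langchain_core.messages import HumanMessage

rendered = faq_prompt.format_messages(
    context="Returns are accepted within 30 days of delivery.",
    messages=[HumanMessage("Can I return my order?")]
)
print(rendered)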

Create Document and Retriever Chains

Combine the document retrieval and LLaMA generation into a cohesive chain. RunnablePassthrough.assign keeps the original input (the message list) and adds new keys to it: first the retrieved context, then the generated answer:

from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.runnables import RunnablePassthrough

# "Stuffs" the retrieved documents into the {context} slot of the prompt
document_chain = create_stuff_documents_chain(LLaMa(), faq_prompt)

def parse_retriever_input(params):
    # Use the latest human message as the retrieval query
    return params["messages"][-1].content

retrieval_chain = RunnablePassthrough.assign(
    context=parse_retriever_input | retriever
).assign(answer=document_chain)

Start Your Ollama Server

Before running your RAG agent, make sure the Ollama server is up and running. If you installed the desktop application it may already be running in the background; otherwise, start it with the following command:

ollama serve
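
To confirm the server is reachable and the model is available, you can list the models Ollama has pulled locally:

ollama list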

Prompt Your RAG Agent

Now, you can test your RAG agent by sending a query:

from langchain_core.messages import HumanMessage

response = retrieval_chain.invoke({
    "messages": [
        HumanMessage("I received a damaged item. I want my money back.")
    ]
})

# The chain returns a dict with "messages", "context", and "answer" keys
print(response["answer"])

Response:
"I'm so sorry to hear that you received a damaged item. According to our policy, if you receive a damaged item, please contact our customer service team immediately with photos of the damage. We will arrange a replacement or refund for you. Would you like me to assist you in getting a refund? I'll need some information from you, such as your order number and details about the damaged item. Can you please provide that so I can help process your request?"

By following these steps, you can create a fully functional local RAG agent capable of enhancing your LLM's performance with real-time context. This setup can be adapted to various domains and tasks, making it a versatile solution for any application where context-aware generation is crucial.


This content originally appeared on DEV Community and was authored by Dylan Muraco

