Taming LLMs with Langchain + Langgraph

Introduction

Large language models (LLMs) handle the role of chatbots quite well on their own—just provide a well-crafted system prompt, and you can engage in dialogue with anyone from an English teacher to a real estate agent. However, when it comes to setting constraints on the chatbot's behavior or tightly controlling the direction of the dialogue, challenges arise. Regardless of how well-defined the behavior is in the initial prompt, there are no guarantees that the model will always perform as intended.

We encountered such challenges while developing a chatbot that evolved from a simple small-talk bot into a multifunctional, stable bot meeting the client's requirements.

To address these issues, we adhered to four practices:

  1. Breaking Down Prompts: Splitting one large prompt into several smaller ones or using a chain of LLM calls. This approach breaks a large task into smaller, more manageable tasks that are easier for the LLM to handle.
  2. Using Finite State Machines (FSM): Implementing LLM calls in specific nodes of the graph, while other nodes allow us to control the dialogue as needed.
  3. Few-Shot Prompting: Essential for controlling LLM actions, as new functionality may introduce cases that are resolved only by adding specific examples to the prompt.
  4. Function Calling: Providing external tools to the LLM to handle various tasks.

Popular and well-documented frameworks like Langchain and Langgraph are available for implementing these practices. This article will use these frameworks but will not delve into their basic concepts.
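
As a quick illustration of the first practice, here is a minimal sketch of chaining two small LLM calls with Langchain's LCEL. The prompts and intent categories are invented for illustration; only the composition pattern matters:

"""Sketch of practice 1: two small LLM calls instead of one large prompt (prompts invented)."""
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Call 1: a narrow classification task.
classify_prompt = ChatPromptTemplate.from_template(
    "Classify the user's intent in one word (question, smalltalk, other): {user_message}"
)
# Call 2: answer, conditioned on the classified intent.
answer_prompt = ChatPromptTemplate.from_template(
    "The user's intent is '{intent}'. Reply to their message: {user_message}"
)

chain = (
    {
        "intent": classify_prompt | llm | StrOutputParser(),
        "user_message": lambda x: x["user_message"],
    }
    | answer_prompt
    | llm
)

print(chain.invoke({"user_message": "Do cats dream?"}).content)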

Practical Example of Chatbot Implementation

Let's move on to the code, focusing on a step-by-step example of developing a chatbot using the practices described above. You can track the code through the commit history in the following repository. Note that this article will focus on certain modules, so please refer to the repository as needed. To keep our focus, we'll use the terminal as the interface for interacting with the bot and the OpenAI API for LLM calls (using the gpt-3.5-turbo model).

Assume a client wants a chatbot with expertise in cats that can:

  1. Introduce itself using a pre-defined message: "I am a cat expert from 'Cats Inc.'. How can I help?" This ensures that the brand's key terms are embedded.
  2. Estimate the cost of a cat based on specific input attributes. The calculation formula provided is: cost = coef_gender(gender) * coef_breed(breed) * weight_kg, where gender, breed, and weight of the cat (in kilograms) are collected from the user.
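
To make the formula concrete, here is a minimal sketch of what such a calculator could look like. The coefficient values are invented for illustration; the repository keeps the real ones in calculator/coefs.py:

"""Sketch of the cost calculator; coefficient values are invented."""

COEF_GENDER = {"male": 1.0, "female": 1.2}
COEF_BREED = {"siamese": 2.0, "unknown": 1.0}


def get_cat_cost(gender: str, breed: str, weight_kg: float) -> float:
    """Apply cost = coef_gender(gender) * coef_breed(breed) * weight_kg."""
    return COEF_GENDER.get(gender, 1.0) * COEF_BREED.get(breed, 1.0) * weight_kg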

First, we will set up the following dependencies for the project:

langchain==0.2.7  # LLM
langchain-core==0.2.12  # LLM
langchain-openai==0.1.14  # use OpenAI LLMs
langgraph==0.1.6  # main graph
openai==1.35.13  # OpenAI API
pydantic-settings==2.1.0  # for env

First Stage

We will proceed with increasing complexity: the first stage involves creating a system prompt that defines the chatbot's basic behavior and expertise. Note that chatbot expertise can be enhanced using Retrieval-Augmented Generation (RAG), but this article will not cover this approach—only the knowledge embedded in the initial LLM will be used.

First, create a module with a class to manage environment variables (env_settings.py):

"""Env Settings."""

from pydantic_settings import BaseSettings


class EnvSettings(BaseSettings):
    """Class for environment."""

    # OpenAI
    OPENAI_API_KEY: str
    OPENAI_MODEL: str


env = EnvSettings()

Next, let's move on to the chatbot's core functionality, based on Langchain (chat_bot.py):

"""Chat bot."""

from langchain_core.messages import SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

from env_settings import env

llm = ChatOpenAI(openai_api_key=env.OPENAI_API_KEY, model=env.OPENAI_MODEL, temperature=0)

system_message = SystemMessage(
    content="""
You are a cat expert with extensive knowledge in feline behavior, health, and care.
You have years of experience working with different breeds of cats, understanding their unique personalities and needs.
You can provide detailed advice on everything from feeding and grooming to health issues and training.
When answering questions, offer practical, evidence-based advice, and explain your reasoning clearly.
"""
)

prompt = ChatPromptTemplate.from_messages(
    [
        system_message,
        MessagesPlaceholder("history"),  # we'll add the memory in the following steps
    ]
)

chat_bot_chain = prompt | llm

Here are a few key points:

  1. The temperature parameter is set to zero for deterministic LLM responses. Depending on requirements, it can be increased for more creative responses.
  2. The system message can vary based on your requirements. Typically, an appropriate prompt is determined empirically after several iterations. One option is to ask ChatGPT to generate a base prompt and then iteratively refine it. Also, if creating a bot for a specific language, writing prompts in that language rather than English can be beneficial, as it allows better use of few-shot prompting and makes it easier to communicate your requirements to the model.
  3. The prompt includes a placeholder for message history, but we'll add memory to the chatbot in the next steps. For now, we'll pass an array with a single message.

To complete the first stage, we need to write an interface for interacting with the bot. We'll use the built-in input function to capture user input and print for displaying the dialogue (main.py):

"""Entrypoint."""

from langchain_core.messages import HumanMessage

from chat_bot import chat_bot_chain


def get_response_from_bot(user_message: str) -> str:
    """
    Generate a response from the chatbot based on the user's message.

    Args:
        user_message (str): The message input from the user.

    Returns:
        str: The response generated by the chatbot.
    """
    response = chat_bot_chain.invoke({"history": [HumanMessage(content=user_message)]})
    return response.content


if __name__ == "__main__":
    print("BOT: Hi, I know all about cats!")

    while True:
        user_message = input("YOU: ")
        print(f"BOT: {get_response_from_bot(user_message)}")

That's it! Set the required environment variables (OPENAI_API_KEY and OPENAI_MODEL), and you can run the main.py module with the Python interpreter.

Let’s start our chatbot and interact with it:

BOT: Hi, I know all about cats!
YOU: Hi, please tell me about the Siamese breed
BOT: The Siamese breed is one of the most recognizable and popular breeds of cats. Here are some key characteristics of Siamese cats:

1. **Physical Appearance**: Siamese cats are known for their striking appearance with a sleek, slender body, almond-shaped blue eyes, and a short coat that comes in various color points such as seal, chocolate, blue, and lilac.

2. **Personality**: Siamese cats are highly social, affectionate, and vocal. They are known for their strong bond with their human companions and their desire for attention. They are often described as "dog-like" in their loyalty and behavior.

3. **Intelligence**: Siamese cats are highly intelligent and curious. They enjoy interactive play and mental stimulation. They can be trained to perform tricks and are known for their problem-solving abilities.

4. **Communication**: Siamese cats are very vocal and will often "talk" to their owners with a loud, distinctive voice. They are not shy about expressing their needs and desires.

5. **Health**: Siamese cats are generally healthy, but they may be prone to certain genetic health issues such as dental problems, respiratory issues, and heart conditions. Regular veterinary check-ups are important to ensure their well-being.

6. **Grooming**: Due to their short coat, Siamese cats are relatively low-maintenance when it comes to grooming. Regular brushing to remove loose hair and occasional nail trimming are usually sufficient to keep them looking their best.

7. **Exercise**: Siamese cats are active and playful, so providing them with toys and opportunities for exercise is important to keep them mentally and physically stimulated.

Overall, Siamese cats make wonderful companions for those who are looking for an affectionate, intelligent, and social feline friend. Their unique personality and striking appearance make them a favorite among cat lovers.
YOU: Thanks
BOT: You're welcome! If you have any questions about cats or need advice on any feline-related topic, feel free to ask. I'm here to help!
YOU: Tell me about yourself
BOT: I am a cat expert with a passion for feline behavior, health, and care. I have spent years working with various breeds of cats, understanding their unique personalities and needs. My experience includes providing guidance on feeding, grooming, health issues, and training for cats of all ages. I stay updated on the latest research and developments in feline care to ensure that my advice is practical, evidence-based, and tailored to each cat's individual requirements. I am here to help cat owners provide the best possible care for their feline companions.

It’s working well! However, the bot's response to the request to "Tell me about yourself" doesn't match the client's expectations. Although adding the client's message to the prompt could resolve this, it might not be a stable solution—LLM responses can still vary. Additionally, the client might want to edit messages in the future, perhaps through an admin panel or a database.

Second Stage

To address the issue from the first stage, it's better to use function calling. For OpenAI models this functionality is supported by the API, and many other LLMs support it as well. Detailed information about implementing function calling in Langchain can be found in its documentation. Descriptions of function calling usually assume the tool's output is returned to the LLM, which then generates the final response to the user, acting as an agent. In our case, we want to return the tool's output directly to the user without additional post-processing, and this is where Langgraph comes into play.

Langgraph defines a graph for the dialogue, consisting of nodes that change the state and edges that determine which node to transition to next.
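
The graph's shared state (src/graph/langgraph_state.py) isn't reproduced in this article; at this stage it only needs to accumulate messages. A rough sketch (field names assumed):

"""Sketch of langgraph_state.py; only the messages field is needed at this stage."""
from typing import Annotated

from langgraph.graph.message import add_messages
from typing_extensions import TypedDict


class State(TypedDict):
    # add_messages appends new messages to the list instead of overwriting it
    messages: Annotated[list, add_messages]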

Let's modify our code, resulting in the following codebase structure:

.
├── requirements.txt
└── src
    ├── env_settings.py
    ├── graph
    │   ├── edges
    │   │   ├── __init__.py
    │   │   └── tools_edge.py
    │   ├── enums.py
    │   ├── __init__.py
    │   ├── langgraph_state.py
    │   ├── main_graph.py
    │   ├── nodes
    │   │   ├── chat_bot.py
    │   │   └── __init__.py
    │   └── tools
    │       ├── about_tool.py  
    │       └── __init__.py
    └── main.py

We won't examine each file here; the complete codebase for this stage can be reviewed in the second commit of the corresponding repository. To describe the concept, let's look at the graph definition (main_graph.py):

"""Main graph."""

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph

from .edges import choose_tool
from .enums import NodeName
from .langgraph_state import State
from .nodes import call_chat_bot
from .tools import call_tool

# Define a new graph
workflow = StateGraph(State)

# add nodes
workflow.add_node(NodeName.CHAT_BOT.value, call_chat_bot)
workflow.add_node(NodeName.ABOUT.value, call_tool)

# add edges
workflow.add_edge(START, NodeName.CHAT_BOT.value)
workflow.add_conditional_edges(
    NodeName.CHAT_BOT.value,
    choose_tool,
)
workflow.add_edge(NodeName.ABOUT.value, END)

compiled_graph = workflow.compile(checkpointer=MemorySaver())
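
Because the graph is compiled with a checkpointer, every invocation must carry a thread_id so Langgraph knows which conversation's history to load. A minimal usage sketch (the thread id value and import path are assumed):

from langchain_core.messages import HumanMessage

from graph.main_graph import compiled_graph

config = {"configurable": {"thread_id": "user-1"}}  # one thread per conversation
result = compiled_graph.invoke(
    {"messages": [HumanMessage(content="Tell me about yourself")]},
    config,
)
print(result["messages"][-1].content)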

Key changes compared to the first stage:

  1. The LLM call was made a separate node in the graph (src/graph/nodes/chat_bot.py).
  2. Defined a tool (src/graph/tools/about_tool.py) that simply returns a pre-defined message, and bound it to the LLM using the built-in bind_tools function (src/graph/nodes/chat_bot.py); a hedged sketch of this tool and the routing edge follows this list.
  3. Added memory to our bot using a checkpoint during graph compilation—using MemorySaver to store history in memory.
  4. Added edges between nodes:
  • An edge from START to the LLM call node—this defines the entry point into the graph.
  • An edge from the LLM node with a check to see if a tool was called (graph/edges/tools_edge.py).
  • And what we aimed for: an edge from the tool call node to END, representing the end of the graph or the output of a message to the user.
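
The article doesn't reproduce the tool or the routing edge, so here is a rough sketch of both, inferred from the graph definition above (exact contents may differ from the repository):

"""Sketch of about_tool.py and tools_edge.py; contents inferred, not copied from the repo."""
from langchain_core.tools import tool
from langgraph.graph import END

from .enums import NodeName  # the project's node-name enum


@tool
def about_tool() -> str:
    """Tell the user who this bot is."""
    return "I am a cat expert from 'Cats Inc.'. How can I help?"


def choose_tool(state) -> str:
    """Route to the tool node if the LLM requested a tool call; otherwise end the turn."""
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return NodeName.ABOUT.value
    return END

# In chat_bot.py the tool is bound to the model so the LLM can request it:
# llm_with_tools = llm.bind_tools([about_tool])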

After these changes, let's test the output about itself:

BOT: Hi, I know all about cats!
YOU: Tell me about yourself
BOT: I am a cat expert from 'Cats Inc.'. How can I help?

It works as expected! Next, we'll move to the final stage, where we'll add a dialogue branch to inquire about the pet's characteristics and calculate the cost.

Third Stage

To implement this functionality, we'll use a feature of Langgraph called Human-in-the-loop. This involves pausing the graph execution to wait for user input. There is detailed official documentation on this topic, but it does not cover cases where a series of questions is asked, with the option for the user to exit the scripted dialogue. For example, in a node asking about the pet's weight, the user might abruptly change their mind and ask about cat health instead. In such a case, we need to return to the start of the graph to correctly handle the query. This behavior is what we are implementing now.
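
In Langgraph, this pausing is typically achieved by interrupting the graph before a designated node. A hedged sketch of how the third-stage compilation might express this (the ASK_HUMAN member name is assumed; workflow and NodeName are as in main_graph.py and enums.py):

from langgraph.checkpoint.memory import MemorySaver

# Pause the graph before the ask_human node so the application can collect user input.
compiled_graph = workflow.compile(
    checkpointer=MemorySaver(),
    interrupt_before=[NodeName.ASK_HUMAN.value],  # member name assumed
)

# After the user replies, the new message is written into the state and execution resumes, e.g.:
# compiled_graph.update_state(config, {"messages": [HumanMessage(content=user_message)]})
# compiled_graph.invoke(None, config)  # invoking with None resumes from the interrupt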

After making the changes, the final codebase structure will be:

.
├── requirements.txt
└── src
    ├── calculator
    │   ├── coefs.py
    │   └── __init__.py
    ├── env_settings.py
    ├── graph
    │   ├── edges
    │   │   ├── entrypoint
    │   │   │   ├── check_calculator.py
    │   │   │   ├── entrypoint_edge.py
    │   │   │   └── __init__.py
    │   │   ├── exit_from_calculator.py
    │   │   ├── __init__.py
    │   │   └── tools_edge.py
    │   ├── enums.py
    │   ├── __init__.py
    │   ├── langgraph_state.py
    │   ├── main_graph.py
    │   ├── nodes
    │   │   ├── calculator_nodes.py
    │   │   ├── chat_bot.py
    │   │   ├── exit_from_calculator
    │   │   │   ├── checker.py
    │   │   │   ├── __init__.py
    │   │   │   └── should_continue.py
    │   │   ├── __init__.py
    │   │   └── start.py
    │   ├── tools
    │   │   ├── about_tool.py
    │   │   └── __init__.py
    │   └── utils.py
    └── main.py

As with the previous stage, we’ll only review the most significant changes.

1) Entry Point in the Graph

Now, to determine the starting point, a condition will be checked to transition to the pet cost calculation branch (src/graph/edges/entrypoint/entrypoint_edge.py):

"""Entry point edge."""

from ...enums import NodeName
from ...langgraph_state import State
from ...utils import get_last_human_message
from .check_calculator import check_calculator_chain


def conditional_entry_point(state: State) -> str:
    """
    Determine the entry point based on the user's last message and a calculator check.

    Args:
        state (State): The current state of the conversation.

    Returns:
        str: Returns the entry point node name. If the last user message indicates a calculator query,
             it returns the start calculator node name; otherwise, it returns the chat bot node name.
    """
    user_query = get_last_human_message(state)
    if user_query:
        checking_calculator_result = check_calculator_chain.invoke({"user_query": user_query})
        if checking_calculator_result.content.lower() == "yes":
            return NodeName.START_CALCULATOR.value

    return NodeName.CHAT_BOT.value

Here, few-shot prompting is useful for describing possible cases of transitioning to the question branch (src/graph/edges/entrypoint/check_calculator.py):

"""Check calculator chain."""

from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

from env_settings import env

# LLM
llm = ChatOpenAI(openai_api_key=env.OPENAI_API_KEY, model=env.OPENAI_MODEL, temperature=0)

template = """
Please determine whether the following user query relates to the cost estimation of a single cat.

Return "Yes" or "No".

Examples:

User query: "I want to know how much a cat costs."
Answer: Yes

User query: "Tell me something about cats."
Answer: No

User query: {user_query}
Answer:
"""

# Prompt
prompt = PromptTemplate(
    template=template,
    input_variables=["user_query"],
)

# Chain
check_calculator_chain = prompt | llm

Thus, whenever the chatbot transitions to the calculation branch incorrectly in production, we can add that query as a new example to the prompt.
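
A quick way to sanity-check the chain during such iterations (the query is invented):

result = check_calculator_chain.invoke({"user_query": "How much would a Siamese cat cost?"})
print(result.content)  # expected: "Yes"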

2) Nodes with Questions

The definition of nodes with questions is located in the module src/graph/nodes/calculator_nodes.py:

"""Calculator nodes."""


from langchain_core.messages import AIMessage

from calculator import get_cat_cost

from ..enums import NodeName
from ..langgraph_state import State


def ask_human(state: State):
    """Fake node to ask the human."""
    pass


def start_calculator(state: State):
    """
    Initiate the calculator process by asking for the cat's gender.

    Args:
        state (State): The current state of the conversation.

    Returns:
        dict: A dictionary containing the next message and the next node name.
    """
    next_node = NodeName.GENDER.value
    next_message = "What is your cat's gender?"

    return {
        "messages": [AIMessage(content=next_message)],
        "next_node": next_node,
    }


def gender_node(state: State):
    """
    Process the gender input and ask for the cat's breed.

    Args:
        state (State): The current state of the conversation.

    Returns:
        dict: A dictionary containing the gender, the next message, and the next node name.
    """
    next_node = NodeName.BREED.value
    next_message = "What is your cat's breed?"

    return {
        "gender": state["should_continue_data"],
        "messages": [AIMessage(content=next_message)],
        "next_node": next_node,
    }


def breed_node(state: State):
    """
    Process the breed input and ask for the cat's weight.

    Args:
        state (State): The current state of the conversation.

    Returns:
        dict: A dictionary containing the breed, the next message, and the next node name.
    """
    last_message = state["messages"][-1].content.lower()

    next_node = NodeName.WEIGHT.value
    next_message = "What is the weight of your cat in kilograms?"

    return {
        "breed": last_message,
        "messages": [AIMessage(content=next_message)],
        "next_node": next_node,
    }


def weight_node(state: State):
    """
    Process the weight input and calculate the cost of the cat.

    Args:
        state (State): The current state of the conversation.

    Returns:
        dict: A dictionary containing the final message with the cat's cost and the next node name.
    """
    cost = get_cat_cost(gender=state["gender"], breed=state["breed"], weight_kg=state["should_continue_data"])

    next_node = NodeName.START.value
    next_message = f"Your cat is worth {cost} dollars"

    return {
        "messages": [AIMessage(content=next_message)],
        "next_node": next_node,
    }

Each of these nodes:

  1. Executes business logic—storing data in the state or performing calculations.
  2. Defines the next message and node to transition to.
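
These nodes read and write several state fields beyond the message history (gender, breed, next_node, should_continue_data), so the third-stage State plausibly grows to something like this (a sketch; exact types may differ from the repository):

"""Sketch of the third-stage State; fields inferred from the nodes above."""
from typing import Annotated, Optional, Union

from langgraph.graph.message import add_messages
from typing_extensions import TypedDict


class State(TypedDict):
    messages: Annotated[list, add_messages]
    next_node: str                                   # which question node comes next
    gender: Optional[str]                            # collected from the user
    breed: Optional[str]                             # collected from the user
    should_continue_data: Optional[Union[str, int]]  # validated answer to the last question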

After this, the graph execution pauses to wait for user input (node ask_human). Once we get a user response, it's time to decide whether to continue asking questions or return to the beginning of the dialogue so that the chatbot can decide what to do next (src/graph/nodes/exit_from_calculator/should_continue.py):

"""Main node."""

from ...enums import NodeName
from ...langgraph_state import State
from .checker import checker_chain


def should_continue_node(state: State):
    """
    Determine whether the conversation should continue based on the last user message and the next node.

    Args:
        state (State): The current state of the conversation.

    Returns:
        dict or None: A dictionary containing the next node and any data needed to continue the conversation,
                      or None if no continuation is needed.
    """
    last_message = state["messages"][-1].content.lower()

    if state["next_node"] in (
        NodeName.START.value,
        NodeName.BREED.value,
    ):
        return

    elif state["next_node"] == NodeName.GENDER.value:
        prompt_examples = """
        User's response: "boy"
        Category: male

        User's response: "tell me about yourself"
        Category: None

        User's response: "girl"
        Category: female
        """

        result = checker_chain.invoke(
            {"options": "female, male", "examples": prompt_examples, "user_response": last_message}
        )
        if result.content != "None":
            return {"should_continue_data": result.content}

    elif state["next_node"] == NodeName.WEIGHT.value:
        if last_message.isdigit():
            return {"should_continue_data": int(last_message)}

    return {
        "next_node": NodeName.START.value,
        "should_continue_data": None,
    }

You can devise any rules for handling user input for each of the nodes. In our example, we check for numeric input for weight and determine the pet’s gender using few-shot prompting. If it is determined that the user's response does not match the question, the next node is set to the start node.
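
The checker_chain itself (src/graph/nodes/exit_from_calculator/checker.py) isn't reproduced in the article; judging by how it's invoked above, it plausibly looks something like this (template wording assumed):

"""Sketch of checker.py; the real prompt wording may differ."""
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

from env_settings import env

llm = ChatOpenAI(openai_api_key=env.OPENAI_API_KEY, model=env.OPENAI_MODEL, temperature=0)

template = """
Classify the user's response into one of the following categories: {options}.
If the response does not answer the question, return "None".

Examples:
{examples}

User's response: "{user_response}"
Category:
"""

prompt = PromptTemplate(
    template=template,
    input_variables=["options", "examples", "user_response"],
)

checker_chain = prompt | llm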

Let's test our bot in action:

BOT: Hi, I know all about cats!
YOU: I want to calculate the cost of my cat
BOT: What is your cat's gender?
YOU: boy
BOT: What is your cat's breed?
YOU: I don't know
BOT: What is the weight of your cat in kilograms?
YOU: 4
BOT: Your cat is worth 4 dollars
YOU: Thanks! Tell me about yourself
BOT: I am a cat expert from 'Cats Inc.'. How can I help?

It works! Now let's see how the bot handles exiting from the scripted dialogue:

BOT: Hi, I know all about cats!
YOU: Tell me about yourself
BOT: I am a cat expert from 'Cats Inc.'. How can I help?
YOU: Let's calculate the cost of the cat
BOT: What is your cat's gender?
YOU: What's about the health of cats?
BOT: Cats, like all pets, require regular veterinary care to maintain their health. Here are some key aspects of cat health to consider:

1. **Regular Veterinary Check-ups**: Cats should have annual check-ups with a veterinarian to monitor their overall health, update vaccinations, and address any concerns.

2. **Proper Nutrition**: A balanced diet is essential for a cat's health. Choose high-quality cat food that meets their nutritional needs based on age, activity level, and any health conditions.

3. **Hydration**: Cats need access to fresh water at all times to stay hydrated, especially if they are on a dry food diet.

4. **Grooming**: Regular grooming helps prevent matting, hairballs, and skin issues. Brush your cat's coat regularly and trim their nails as needed.

5. **Parasite Control**: Protect your cat from fleas, ticks, and internal parasites by using vet-recommended preventatives.

6. **Dental Care**: Dental health is crucial for cats. Brushing their teeth regularly and providing dental treats or toys can help prevent dental issues.

7. **Exercise**: Engage your cat in regular play and provide opportunities for exercise to maintain a healthy weight and mental stimulation.

8. **Spaying/Neutering**: Consider spaying or neutering your cat to prevent unwanted litters and reduce the risk of certain health issues.

9. **Behavioral Health**: Monitor your cat's behavior for any changes that could indicate stress, anxiety, or illness. Provide a stimulating environment with scratching posts, toys, and hiding spots.

10. **Emergency Preparedness**: Be aware of common cat emergencies and have a plan in place in case of accidents or sudden illness.

If you have specific questions about your cat's health or need advice on a particular issue, feel free to ask!

As seen from the example, the chatbot understood that the user wanted to change the topic and responded correctly!

Thus, we have a stable solution that can be scaled and made more complex without the fear that LLM randomness will break it.

