{"id":104071,"date":"2026-03-17T15:54:06","date_gmt":"2026-03-17T10:24:06","guid":{"rendered":"https:\/\/www.guvi.in\/blog\/?p=104071"},"modified":"2026-04-06T10:50:23","modified_gmt":"2026-04-06T05:20:23","slug":"build-agentic-ai-with-langchain-and-langgraph","status":"publish","type":"post","link":"https:\/\/www.guvi.in\/blog\/build-agentic-ai-with-langchain-and-langgraph\/","title":{"rendered":"How to Build Agentic AI with LangChain and LangGraph in 2026"},"content":{"rendered":"\n<p>You have probably used ChatGPT or any AI chatbot and noticed that it answers questions but cannot actually go out and do things on its own. Ask it to research a topic, write a report, check your database, and send a summary email, and it will tell you how to do it but will not do it for you. Agentic AI changes that completely. This tutorial on how to build agentic AI with LangChain and LangGraph will show you how to create AI that thinks, decides, and acts across multiple steps all by itself.<\/p>\n\n\n\n<p>In this guide, you will learn what agentic AI is, how LangChain and LangGraph work together, and how to build a real working agent from scratch with code examples written so clearly that even a beginner can follow every step.<\/p>\n\n\n\n<p><strong>Quick Answer<\/strong><\/p>\n\n\n\n<p>Agentic AI is AI that can reason, plan, use tools, and complete multi-step tasks autonomously. You build it using LangChain, which provides the building blocks like tools and memory, and LangGraph, which connects those blocks into a stateful workflow using nodes and edges. 
Together they let you create AI agents that loop, branch, remember, and act on their own.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What Is Agentic AI<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-Agentic-AI_-1200x630.png\" alt=\"What is Agentic AI\" class=\"wp-image-105861\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-Agentic-AI_-1200x630.png 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-Agentic-AI_-300x158.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-Agentic-AI_-768x403.png 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-Agentic-AI_-1536x806.png 1536w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-Agentic-AI_-2048x1075.png 2048w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-Agentic-AI_-150x79.png 150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p>Regular AI gives you an answer. Agentic AI takes action. When you ask a regular LLM a question, it reads your prompt and generates a response. That is the end of the interaction. Agentic AI is different because it can break a goal into smaller steps, decide which tool to use at each step, execute those tools, look at the results, and then decide what to do next. It keeps going until the goal is complete.<\/p>\n\n\n\n<p>Think of it like the difference between asking someone for directions and hiring a travel agent. One gives you information. The other plans the trip, books the tickets, arranges the hotel, and handles problems along the way. 
Agentic AI is the travel agent.<\/p>\n\n\n\n<p><strong><em>Here is something to think about:<\/em><\/strong><em> what tasks in your daily work require you to take multiple steps, check something, then decide what to do next based on the result? Those are exactly the tasks agentic AI is built for.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What Is LangChain and What Is LangGraph<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-LangChain-and-What-is-LangGraph-1200x630.png\" alt=\"What is LangChain and what is LangGraph\" class=\"wp-image-105863\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-LangChain-and-What-is-LangGraph-1200x630.png 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-LangChain-and-What-is-LangGraph-300x158.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-LangChain-and-What-is-LangGraph-768x403.png 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-LangChain-and-What-is-LangGraph-1536x806.png 1536w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-LangChain-and-What-is-LangGraph-2048x1075.png 2048w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/What-is-LangChain-and-What-is-LangGraph-150x79.png 150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p>Before writing any code, you need to clearly understand what each tool does and why you need both. Many beginners try to use one without the other and run into walls quickly.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>1. What LangChain Does<\/strong><\/h3>\n\n\n\n<p>LangChain is an open-source Python framework that gives you the building blocks to create AI-powered applications. It connects your LLM to tools like web search, databases, APIs, and file systems. 
It also handles prompts, memory, and the logic of how your agent should behave step by step.<\/p>\n\n\n\n<p>Think of LangChain as the toolbox. It has everything your agent needs to function, the LLM brain, the tools it can use, and the memory it can store things in. But a toolbox alone does not build a house. You need a blueprint for how everything connects.<\/p>\n\n\n\n<ul>\n<li><strong>LLM integration<\/strong> means LangChain works with OpenAI, Anthropic, Google Gemini, and more out of the box<\/li>\n\n\n\n<li><strong>Tool support<\/strong> means your agent can call web search, run Python code, query databases, and more<\/li>\n\n\n\n<li><strong>Memory management<\/strong> means your agent can remember what happened earlier in a conversation<\/li>\n\n\n\n<li><strong>Chain logic<\/strong> means you can define sequences of steps your agent should follow<\/li>\n<\/ul>\n\n\n\n<p>Also read &#8211; <a href=\"https:\/\/www.guvi.in\/blog\/build-a-language-model-application-with-langchain\/\" target=\"_blank\" rel=\"noreferrer noopener\">Building a Language Model Application with LangChain&nbsp;<\/a><\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>2. What LangGraph Does<\/strong><\/h3>\n\n\n\n<p>LangGraph is a library built on top of LangChain that lets you define your agent&#8217;s workflow as a graph. A graph is just a way of saying: here are the steps (nodes) and here are the paths between them (edges). LangGraph lets your agent loop back, branch based on decisions, and handle complex multi-step workflows that a simple chain cannot manage.<\/p>\n\n\n\n<p>If LangChain is the toolbox, LangGraph is the blueprint. 
It decides the order of operations, what happens when a tool returns an unexpected result, and when the agent should stop versus keep going.<\/p>\n\n\n\n<ul>\n<li><strong>Nodes<\/strong> are individual steps in your workflow, like calling the LLM or running a tool<\/li>\n\n\n\n<li><strong>Edges<\/strong> are the paths connecting nodes, deciding what runs next<\/li>\n\n\n\n<li><strong>State<\/strong> is the shared memory that all nodes can read from and write to<\/li>\n\n\n\n<li><strong>Conditional edges<\/strong> let your agent branch based on what the previous step returned<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>3. Why You Need Both Together<\/strong><\/h3>\n\n\n\n<p>LangChain gives your agent capabilities. LangGraph gives your agent structure. Without LangChain, you have no tools or <a href=\"https:\/\/www.guvi.in\/blog\/guide-to-large-language-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">LLM<\/a> integration. Without LangGraph, you have no way to manage complex workflows that loop, branch, or persist state across steps. Used together, they let you build production-ready agentic AI systems that can handle real-world tasks.<\/p>\n\n\n\n<ul>\n<li><strong>LangChain alone<\/strong> works for simple linear workflows but breaks down when logic gets complex<\/li>\n\n\n\n<li><strong>LangGraph alone<\/strong> provides the workflow engine but needs LangChain for LLM and tool access<\/li>\n\n\n\n<li><strong>Together<\/strong> they cover everything from a simple chatbot to a multi-agent research system<\/li>\n\n\n\n<li><strong>Production readiness<\/strong> means the combination handles failures, memory, and human-in-the-loop checks<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>4. The ReAct Pattern That Powers Agentic AI<\/strong><\/h3>\n\n\n\n<p>The most important concept in agentic AI is the ReAct pattern. ReAct stands for Reason and Act. 
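<\/p>\n\n\n\n<p>Stripped of everything framework-specific, that cycle can be sketched in plain Python. This is only an illustrative sketch of the loop&#8217;s shape; LangGraph runs this loop for you with a real LLM and real tools:<\/p>\n\n\n\n
```python
# A minimal, framework-free sketch of the ReAct cycle. Illustrative only:
# LangGraph runs this loop for you with a real LLM and real tools.
def react_loop(reason, act, max_steps=5):
    observation = None
    for _ in range(max_steps):
        thought = reason(observation)   # Reason: decide the next action
        if thought == "FINISH":         # Nothing left to do
            return observation
        observation = act(thought)      # Act, then Observe the result
    return observation

# Toy stand-ins so the loop can run without an API key
def toy_reason(observation):
    return "FINISH" if observation is not None else "search"

def toy_act(action):
    return f"result of {action}"

answer = react_loop(toy_reason, toy_act)  # -> "result of search"
```
\n\n\n\n<p>The toy reason and act functions here are stand-ins so the loop can run without an API key. The real pattern works the same way, except the thinking is done by the LLM and the acting is done by your tools.<\/p>\n\n\n\n<p>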
It describes how an agent thinks about what to do, takes an action, observes the result, and then reasons again about what to do next. This loop of reasoning and acting is what makes an agent autonomous.<\/p>\n\n\n\n<p>LangChain and LangGraph implement the ReAct pattern natively. When your agent receives a task, it reasons about which tool to use, calls that tool, reads the result, reasons again based on what it found, and either acts again or returns a final answer. This cycle repeats until the task is done.<\/p>\n\n\n\n<ul>\n<li><strong>Reason<\/strong> means the agent thinks about the current state and decides what action to take<\/li>\n\n\n\n<li><strong>Act<\/strong> means the agent calls a tool or takes a step based on its reasoning<\/li>\n\n\n\n<li><strong>Observe<\/strong> means the agent reads the output of the action it just took<\/li>\n\n\n\n<li><strong>Loop<\/strong> means the agent repeats the cycle until it reaches a satisfactory final answer<\/li>\n<\/ul>\n\n\n\n<p><strong><em>Ask yourself this: <\/em><\/strong><em>when you solve a complex problem, do you figure out everything at once or do you take a step, see what happens, and adjust? Agentic AI uses the same approach.<\/em><\/p>\n\n\n\n<p>Do check out HCL-GUVI\u2019s <a href=\"https:\/\/www.guvi.in\/zen-class\/artificial-intelligence-and-machine-learning-course\/?utm_source=blog&amp;utm_medium=hyperlink&amp;utm_campaign=how-to-build-agentic-ai-with-langchain-and-langgraph-in-2026\" target=\"_blank\" rel=\"noreferrer noopener\">AI\/ML Course<\/a> if you want to learn how to build real AI systems like agentic workflows using LangChain and LangGraph. 
The course covers Python, machine learning, deep learning, and hands-on AI projects with mentor support, helping you gain practical experience in building production-ready AI applications.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>How to Set Up Your Environment<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Set-Up-Your-Environment-1200x630.png\" alt=\"How to set up your environment\" class=\"wp-image-105864\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Set-Up-Your-Environment-1200x630.png 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Set-Up-Your-Environment-300x158.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Set-Up-Your-Environment-768x403.png 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Set-Up-Your-Environment-1536x806.png 1536w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Set-Up-Your-Environment-2048x1075.png 2048w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Set-Up-Your-Environment-150x79.png 150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p>Getting your environment ready is the first real step. You do not need a powerful machine for this. A standard laptop with Python installed is enough to follow along with every example in this tutorial.<\/p>\n\n\n\n<p>You will need Python 3.10 or above, an OpenAI API key or any other supported LLM API key, and a code editor. VS Code is a good choice if you do not have a preference.<\/p>\n\n\n\n<p><strong>1. Installing the Required Libraries<\/strong><\/p>\n\n\n\n<p>Open your terminal and run <strong>pip install langchain langgraph langchain-openai langchain-community<\/strong> to install everything you need in one command. 
This installs LangChain, LangGraph, the OpenAI integration, and the community tools package.<\/p>\n\n\n\n<p>Once installed, set your OpenAI API key as an environment variable. In your terminal run <strong>export OPENAI_API_KEY=your_api_key_here<\/strong> on Mac or Linux. On Windows run <strong>set OPENAI_API_KEY=your_api_key_here<\/strong> in your command prompt.<\/p>\n\n\n\n<ul>\n<li><strong>langchain<\/strong> is the core framework with chains, tools, and memory<\/li>\n\n\n\n<li><strong>langgraph<\/strong> is the graph-based workflow engine for complex agents<\/li>\n\n\n\n<li><strong>langchain-openai<\/strong> gives you the OpenAI LLM and embedding integrations<\/li>\n\n\n\n<li><strong>langchain-community<\/strong> gives you access to hundreds of community-built tools and integrations<\/li>\n<\/ul>\n\n\n\n<p><strong>2. Understanding the Core Building Blocks<\/strong><\/p>\n\n\n\n<p>Before writing your first agent, you need to know the four building blocks you will use in every LangGraph workflow. These are the same blocks that power everything from a simple agent to a multi-agent production system.<\/p>\n\n\n\n<p>The first block is the State. This is a Python dictionary that stores all the information your agent needs to carry across steps, like the user&#8217;s question, previous tool results, and the final answer.&nbsp;<\/p>\n\n\n\n<p>The second block is Nodes. Each node is a Python function that takes the current state, does something like calling the LLM or running a tool, and returns an updated state.<\/p>\n\n\n\n<p>The third block is Edges. Edges connect your nodes and tell LangGraph what to run next. 
The fourth block is the Graph itself, which is where you register all your nodes and edges and compile everything into a runnable workflow.<\/p>\n\n\n\n<ul>\n<li><strong>State<\/strong> holds all the data that flows through your workflow from start to finish<\/li>\n\n\n\n<li><strong>Nodes<\/strong> are the workers that read the state, do a job, and update the state<\/li>\n\n\n\n<li><strong>Edges<\/strong> are the rules that decide which node runs after the current one finishes<\/li>\n\n\n\n<li><strong>Graph<\/strong> is the container that wires everything together and makes it runnable<\/li>\n<\/ul>\n\n\n\n<p><strong>3. Setting Up Your First LangGraph File<\/strong><\/p>\n\n\n\n<p>Create a new file called <strong>agent.py<\/strong> in your project folder. At the top of the file, add your imports by writing <strong>from langgraph.graph import StateGraph, END<\/strong>, <strong>from langchain_openai import ChatOpenAI<\/strong>, and <strong>from typing import TypedDict, Annotated, List<\/strong>. These three imports give you everything you need to define a basic agent workflow.<\/p>\n\n\n\n<p>Next define your state class. Write a class called <strong>AgentState<\/strong> that inherits from <strong>TypedDict<\/strong> and has a field called <strong>messages<\/strong> of type <strong>List<\/strong>. 
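<\/p>\n\n\n\n<p>Written out, that state definition is just this:<\/p>\n\n\n\n
```python
from typing import List, TypedDict

class AgentState(TypedDict):
    # Everything the workflow carries between nodes lives in this one field
    messages: List
```
\n\n\n\n<p>In the build section below, this same state gains an extra annotation so that nodes append to <strong>messages<\/strong> instead of replacing it.<\/p>\n\n\n\n<p>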
This state will hold the conversation history that flows between your nodes throughout the workflow.<\/p>\n\n\n\n<ul>\n<li><strong>StateGraph<\/strong> is the LangGraph class you use to define and compile your graph<\/li>\n\n\n\n<li><strong>END<\/strong> is a special LangGraph constant that tells the graph when to stop running<\/li>\n\n\n\n<li><strong>ChatOpenAI<\/strong> is the LangChain class that gives you access to GPT-4 and other OpenAI models<\/li>\n\n\n\n<li><strong>TypedDict<\/strong> lets you define the structure of your state so Python knows what fields to expect<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>How to Build Your First Agentic AI Workflow Step by Step<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Build-Your-First-Agentic-AI-Workflow-Step-by-Step-1200x630.png\" alt=\"How to build Agentic AI workflow - step by step\" class=\"wp-image-105865\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Build-Your-First-Agentic-AI-Workflow-Step-by-Step-1200x630.png 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Build-Your-First-Agentic-AI-Workflow-Step-by-Step-300x158.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Build-Your-First-Agentic-AI-Workflow-Step-by-Step-768x403.png 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Build-Your-First-Agentic-AI-Workflow-Step-by-Step-1536x806.png 1536w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Build-Your-First-Agentic-AI-Workflow-Step-by-Step-2048x1075.png 2048w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/How-to-Build-Your-First-Agentic-AI-Workflow-Step-by-Step-150x79.png 150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p>You are going to build a real working <a 
href=\"https:\/\/www.guvi.in\/blog\/guide-on-ai-agents-mcps-and-github-copilot\/\" target=\"_blank\" rel=\"noreferrer noopener\">AI agent<\/a> from scratch. Everything goes into one file called <strong>agent.py<\/strong>. By the end of Step 5 you will have a complete agent you can run on your own machine.<\/p>\n\n\n\n<p>The agent works like this: you ask it a question, it decides if it needs to search for information, calls a search tool if it does, reads the result, and gives you a final answer. Simple, real, and fully agentic.<\/p>\n\n\n\n<p>Do not skip steps. Each one adds a piece that the next step depends on.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Step 1: Create the File and Add Imports<\/strong><\/h3>\n\n\n\n<p>Create a new file called <strong>agent.py<\/strong> in any folder on your computer. Open it in your code editor.<\/p>\n\n\n\n<p>Now add these imports at the very top of the file. Each line brings in something your agent needs:<\/p>\n\n\n\n<p><strong>from langgraph.graph import StateGraph, END<\/strong>. This gives you the graph builder and the END signal that tells your agent to stop.<\/p>\n\n\n\n<p><strong>from langchain_openai import ChatOpenAI<\/strong>. This connects you to the OpenAI GPT model.<\/p>\n\n\n\n<p><strong>from langchain_core.tools import tool<\/strong>. This lets you turn any Python function into a tool your agent can call.<\/p>\n\n\n\n<p><strong>from langchain_core.messages import HumanMessage, ToolMessage<\/strong>. These are the message types your agent uses to talk and receive tool results.<\/p>\n\n\n\n<p><strong>from typing import TypedDict, List, Annotated<\/strong> <strong>import operator<\/strong>. These handle type definitions and list operations for your state.<\/p>\n\n\n\n<p>That is your imports done. 
Your file has 6 lines so far and is ready for the next piece.<\/p>\n\n\n\n<ul>\n<li><strong>StateGraph<\/strong> is the main LangGraph class that holds all your nodes and edges together<\/li>\n\n\n\n<li><strong>END<\/strong> is a special signal that tells the graph that the workflow is finished<\/li>\n\n\n\n<li><strong>ChatOpenAI<\/strong> is your agent&#8217;s brain, the LLM that reasons and makes decisions<\/li>\n\n\n\n<li><strong>HumanMessage<\/strong> wraps the user&#8217;s question so the LLM understands who sent it<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Step 2: Define the State<\/strong><\/h3>\n\n\n\n<p>The state is a shared dictionary that every node in your graph can read from and write to. Think of it as the agent&#8217;s memory as it works through the task.<\/p>\n\n\n\n<p>Add this directly below your imports:<\/p>\n\n\n\n<p><strong>class AgentState(TypedDict):<\/strong> <strong>messages: Annotated[List, operator.add]<\/strong><\/p>\n\n\n\n<p>That is it. Just two lines. The <strong>messages<\/strong> field holds the full conversation history including your question, the LLM&#8217;s thoughts, tool calls, and tool results. 
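<\/p>\n\n\n\n<p>You can see what <strong>operator.add<\/strong> does with nothing but the standard library. This snippet is illustrative and runs entirely outside LangGraph:<\/p>\n\n\n\n
```python
import operator

# The Annotated[List, operator.add] annotation tells LangGraph how to merge
# a node's returned messages into the existing state. For lists, operator.add
# is concatenation, so new messages are appended, never overwritten.
history = ["user question", "llm tool call"]
update = ["tool result"]
merged = operator.add(history, update)  # same as history + update
```
\n\n\n\n<p>That concatenation is what keeps the full history intact as the agent works through a task.<\/p>\n\n\n\n<p>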
Every node reads from this and adds to it.<\/p>\n\n\n\n<p>The <strong>Annotated[List, operator.add]<\/strong> part simply means new messages get added to the list rather than replacing what was already there.<\/p>\n\n\n\n<ul>\n<li><strong>TypedDict<\/strong> gives your state a clear structure Python can understand and validate<\/li>\n\n\n\n<li><strong>messages list<\/strong> is the single place where the entire agent history lives during the run<\/li>\n\n\n\n<li><strong>operator.add<\/strong> means every node appends its output to the list, nothing gets overwritten<\/li>\n\n\n\n<li><strong>One field is enough<\/strong> for this agent because everything flows through messages<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Step 3: Create a Tool<\/strong><\/h3>\n\n\n\n<p>A tool is any <a href=\"https:\/\/www.guvi.in\/blog\/what-is-function-in-python\/\" target=\"_blank\" rel=\"noreferrer noopener\">Python function<\/a> your agent can call when it needs to take action. You mark it with <strong>@tool<\/strong> above the function and write a clear docstring so the LLM knows when to use it.<\/p>\n\n\n\n<p>Add this below your state definition:<\/p>\n\n\n\n<p><strong>@tool<\/strong> <strong>def search_web(query: str) -&gt; str:<\/strong> <strong>&#8220;Search the web for current information on any topic. Use this when you need up-to-date facts.&#8221;<\/strong> <strong>return f&#8221;Search results for: {query} &#8211; LangGraph 0.2 released in 2025 with full multi-agent support.&#8221;<\/strong><\/p>\n\n\n\n<p>For now, this tool simulates a search result. In a real project, you would replace the return line with an actual search API call. The LLM does not care how the function works internally. 
It only reads the docstring to decide when to call it.<\/p>\n\n\n\n<p>Now add these three lines below your tool:<\/p>\n\n\n\n<p><strong>tools = [search_web]<\/strong> <strong>llm = ChatOpenAI(model=&#8220;gpt-4o&#8221;, temperature=0)<\/strong> <strong>llm_with_tools = llm.bind_tools(tools)<\/strong><\/p>\n\n\n\n<p><strong>bind_tools<\/strong> connects your tool to the LLM. Now, when the LLM reasons that it needs to search for something, it will call your function automatically.<\/p>\n\n\n\n<ul>\n<li><strong>@tool<\/strong> registers the function so LangChain knows it is an agent tool<\/li>\n\n\n\n<li><strong>Docstring<\/strong> is what the LLM reads to understand what the tool does and when to use it<\/li>\n\n\n\n<li><strong>bind_tools<\/strong> links your tools to the LLM so it can choose to call them during reasoning<\/li>\n\n\n\n<li><strong>temperature=0<\/strong> means the agent makes consistent logical decisions every time it runs<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Step 4: Write the Three Core Functions<\/strong><\/h3>\n\n\n\n<p>You need three functions: the agent function that calls the LLM, the tool function that runs your search tool, and a condition function that decides whether to keep going or stop.<\/p>\n\n\n\n<p>Add them one by one below your LLM setup.<\/p>\n\n\n\n<p>First, the agent function:<\/p>\n\n\n\n<p><strong>def agent_node(state: AgentState):<\/strong> <strong>response = llm_with_tools.invoke(state[&#8220;messages&#8221;])<\/strong> <strong>return {&#8220;messages&#8221;: [response]}<\/strong><\/p>\n\n\n\n<p>This reads all the messages in the state, sends them to GPT-4o, and adds the reply back to the state. 
Simple.<\/p>\n\n\n\n<p>Second, the tool function:<\/p>\n\n\n\n<p><strong>def tool_node(state: AgentState):<\/strong> <strong>last_message = state[&#8220;messages&#8221;][-1]<\/strong> <strong>results = []<\/strong> <strong>for tool_call in last_message.tool_calls:<\/strong> <strong>result = search_web.invoke(tool_call[&#8220;args&#8221;])<\/strong> <strong>results.append(ToolMessage(content=result, tool_call_id=tool_call[&#8220;id&#8221;]))<\/strong> <strong>return {&#8220;messages&#8221;: results}<\/strong><\/p>\n\n\n\n<p>This reads the last message from the LLM, sees which tool it requested, runs that tool, and puts the result back into the state so the LLM can read it next.<\/p>\n\n\n\n<p>Third, the condition function:<\/p>\n\n\n\n<p><strong>def should_continue(state: AgentState):<\/strong> <strong>last_message = state[&#8220;messages&#8221;][-1]<\/strong> <strong>if hasattr(last_message, &#8220;tool_calls&#8221;) and last_message.tool_calls:<\/strong> <strong>return &#8220;continue&#8221;<\/strong> <strong>return &#8220;end&#8221;<\/strong><\/p>\n\n\n\n<p>This checks the last LLM message. If the LLM wants to call a tool it returns &#8220;continue&#8221; and the graph goes to the tool node. If the LLM has a final answer it returns &#8220;end&#8221; and the graph stops.<\/p>\n\n\n\n<ul>\n<li><strong>agent_node<\/strong> is the reasoning step, the LLM reads the history and decides what to do next<\/li>\n\n\n\n<li><strong>tool_node<\/strong> is the action step, it runs the tool and puts the result back for the LLM to read<\/li>\n\n\n\n<li><strong>should_continue<\/strong> is the traffic light between the two nodes, it controls the loop<\/li>\n\n\n\n<li><strong>ToolMessage<\/strong> wraps the tool result so the LLM knows the result came from a tool call<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Step 5: Build the Graph and Run Your Agent<\/strong><\/h3>\n\n\n\n<p>This is the last step. 
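<\/p>\n\n\n\n<p>Before wiring anything, you can sanity-check the routing logic from Step 4 on its own. The <strong>FakeMessage<\/strong> class below is a made-up stand-in for the message objects LangChain returns, modeling only the one attribute the function inspects:<\/p>\n\n\n\n
```python
# FakeMessage is a made-up stand-in for LangChain's AI message objects,
# modeling only the attribute should_continue inspects: tool_calls.
class FakeMessage:
    def __init__(self, tool_calls=None):
        self.tool_calls = tool_calls or []

def should_continue(state):
    # Same routing logic as Step 4: go to tools if the LLM requested one
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "continue"
    return "end"

wants_tool = {"messages": [FakeMessage(tool_calls=[{"name": "search_web"}])]}
has_answer = {"messages": [FakeMessage()]}
```
\n\n\n\n<p>Testing the condition function in isolation like this makes it much easier to trust the loop before you run the full graph.<\/p>\n\n\n\n<p>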
You will wire all the pieces together, compile the graph, and run it.<\/p>\n\n\n\n<p>Add this below your three functions:<\/p>\n\n\n\n<p><strong>graph = StateGraph(AgentState)<\/strong> <strong>graph.add_node(&#8220;agent&#8221;, agent_node)<\/strong> <strong>graph.add_node(&#8220;tools&#8221;, tool_node)<\/strong> <strong>graph.set_entry_point(&#8220;agent&#8221;)<\/strong> <strong>graph.add_conditional_edges(&#8220;agent&#8221;, should_continue, {&#8220;continue&#8221;: &#8220;tools&#8221;, &#8220;end&#8221;: END})<\/strong> <strong>graph.add_edge(&#8220;tools&#8221;, &#8220;agent&#8221;)<\/strong> <strong>app = graph.compile()<\/strong><\/p>\n\n\n\n<p>Here is what each line does. <strong>add_node<\/strong> registers each function as a named step. <strong>set_entry_point<\/strong> tells the graph to start at the agent node. <strong>add_conditional_edges<\/strong> routes from the agent to either the tools node or END based on what <strong>should_continue<\/strong> returns. <strong>add_edge<\/strong> from tools back to agent creates the loop so the agent can reason again after every tool call. <strong>compile<\/strong> locks everything and makes it runnable.<\/p>\n\n\n\n<p>Now add this at the very bottom of your file to run it:<\/p>\n\n\n\n<p><strong>result = app.invoke({&#8220;messages&#8221;: [HumanMessage(content=&#8221;What is LangGraph and when was it updated?&#8221;)]})<\/strong> <strong>print(result[&#8220;messages&#8221;][-1].content)<\/strong><\/p>\n\n\n\n<p>Save the file and run <strong>python agent.py<\/strong> in your terminal.<\/p>\n\n\n\n<p>Your agent will receive the question, reason that it needs current information, call <strong>search_web<\/strong>, read the result, think again, and print a final answer. That is the full ReAct loop running live on your machine. 
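<\/p>\n\n\n\n<p>If you want to see the control flow without spending any API calls, here is the same agent-to-tools loop reduced to plain Python. Every name in this sketch is a toy stand-in, not LangGraph itself, and the real engine does far more, but the loop shape is the point:<\/p>\n\n\n\n
```python
# A toy re-creation of the compiled graph's control flow (illustrative only;
# LangGraph's real engine also handles reducers, checkpoints, and streaming).
def run_graph(state, nodes, entry, router):
    current = entry
    while current != "END":
        update = nodes[current](state)
        state["messages"] = state["messages"] + update["messages"]
        current = router(current, state)
    return state

def agent_node(state):
    # Reason: request the tool once, then answer from its result
    if not any(m.startswith("TOOL:") for m in state["messages"]):
        return {"messages": ["CALL:search_web"]}
    return {"messages": ["FINAL: answered from the search result"]}

def tool_node(state):
    # Act: pretend to run the requested tool
    return {"messages": ["TOOL: simulated search result"]}

def router(current, state):
    # Mirrors should_continue plus the tools-to-agent edge
    if current == "agent":
        return "tools" if state["messages"][-1].startswith("CALL:") else "END"
    return "agent"

final = run_graph({"messages": ["What is LangGraph?"]},
                  {"agent": agent_node, "tools": tool_node}, "agent", router)
```
\n\n\n\n<p>Trace it by hand: agent requests the tool, the tool result is appended, the agent reasons again, and the router sends the graph to END once a final answer appears. That is exactly the path your real agent takes.<\/p>\n\n\n\n<p>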
You just built a working agentic AI system.<\/p>\n\n\n\n<ul>\n<li><strong>add_conditional_edges<\/strong> is what creates the reasoning loop between the agent and tools<\/li>\n\n\n\n<li><strong>add_edge from tools to agent<\/strong> lets the LLM keep reasoning after every tool result<\/li>\n\n\n\n<li><strong>compile<\/strong> turns your graph definition into something you can actually invoke and run<\/li>\n\n\n\n<li><strong>app.invoke<\/strong> starts the whole workflow and returns the final state when the agent finishes<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What Can You Build With LangChain and LangGraph<\/strong><\/h2>\n\n\n\n<p>Once you understand the basics, the kinds of agents you can build grow quickly. These are the most practical and commonly built agentic AI systems in 2026.<\/p>\n\n\n\n<p><strong>1. Research and Summarization Agents<\/strong><\/p>\n\n\n\n<p>A research agent takes a question, searches multiple sources, reads and compares the results, and produces a clean structured summary. This is one of the most popular use cases because it saves hours of manual research work.<\/p>\n\n\n\n<p>Your graph for this would have a planner node that breaks the question into sub-queries, a search node that queries each sub-topic, a grader node that checks if the results are relevant, and a summarizer node that writes the final answer. LangGraph manages the flow between all of these automatically.<\/p>\n\n\n\n<ul>\n<li><strong>Planner node<\/strong> breaks the big question into smaller searchable queries<\/li>\n\n\n\n<li><strong>Search node<\/strong> runs each query against a web search or a database tool<\/li>\n\n\n\n<li><strong>Grader node<\/strong> checks if results are relevant and loops back to search if they are not<\/li>\n\n\n\n<li><strong>The summarizer node<\/strong> takes all the relevant results and writes a clean final answer<\/li>\n<\/ul>\n\n\n\n<p><strong>2. 
Multi-Agent Systems<\/strong><\/p>\n\n\n\n<p>A multi-agent system is when you have several specialized agents that each handle one part of a task and pass results to each other. For example, a researcher agent gathers information, a writer agent drafts a report, and an editor agent reviews and refines it. LangGraph&#8217;s supervisor pattern makes this straightforward to implement.<\/p>\n\n\n\n<p>You define a supervisor node that reads the task and decides which agent to call next. Each specialist agent is its own node with its own tools and instructions. The supervisor routes between them until the full task is complete.<\/p>\n\n\n\n<ul>\n<li><strong>Supervisor node<\/strong> acts as the coordinator, deciding which specialist to call next<\/li>\n\n\n\n<li><strong>Specialist agents<\/strong> each focus on one thing like research, writing, or data analysis<\/li>\n\n\n\n<li><strong>Message passing<\/strong> is how agents share results with each other through the shared state<\/li>\n\n\n\n<li><strong>Parallel execution<\/strong> is possible in LangGraph when two agents do not depend on each other<\/li>\n<\/ul>\n\n\n\n<p><strong>3. Retrieval Augmented Generation Agents<\/strong><\/p>\n\n\n\n<p>A RAG agent goes beyond basic <a href=\"https:\/\/www.guvi.in\/blog\/guide-for-retrieval-augmented-generation\/\" target=\"_blank\" rel=\"noreferrer noopener\">RAG<\/a> by adding decision-making to the retrieval process. 
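<\/p>\n\n\n\n<p>To make that concrete, here is what a document-grading step might look like. The keyword check below is a hypothetical stand-in for the LLM call a real agentic RAG system would use to judge relevance:<\/p>\n\n\n\n
```python
def grade_documents(question, docs):
    # Hypothetical stand-in for an LLM relevance grader: keep a document
    # only when it shares a meaningful keyword with the question.
    keywords = {w.lower().strip("?.,") for w in question.split() if len(w) > 3}
    return [d for d in docs
            if keywords & {w.lower().strip("?.,") for w in d.split()}]

kept = grade_documents(
    "What is LangGraph?",
    ["LangGraph supports loops and state", "A recipe for fresh pasta"],
)
```
\n\n\n\n<p>In production, you would replace the keyword check with an LLM prompt that scores each document, but the decision point in the graph stays the same.<\/p>\n\n\n\n<p>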
Instead of always retrieving from the same source, the agent decides which source to query based on the question, grades the retrieved documents for relevance, rewrites the query if the results are not good enough, and only generates an answer when it has solid evidence.<\/p>\n\n\n\n<p>This is called agentic RAG and it produces significantly better results than static RAG pipelines because the agent can course correct when retrieval fails.<\/p>\n\n\n\n<ul>\n<li><strong>Query routing<\/strong> means the agent picks the right data source for each question<\/li>\n\n\n\n<li><strong>Document grading<\/strong> means the agent checks if retrieved content actually answers the question<\/li>\n\n\n\n<li><strong>Query rewriting<\/strong> means the agent tries a better search if the first attempt was not useful<\/li>\n\n\n\n<li><strong>Answer generation<\/strong> only happens when the agent is satisfied with what it retrieved<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Tips for Building Better Agentic AI Systems<\/strong><\/h2>\n\n\n\n<ul>\n<li><strong>Keep your nodes small and focused:<\/strong> Each node should do one thing well. A node that calls the LLM, runs a tool, and updates three parts of the state is too complex and hard to debug.<\/li>\n\n\n\n<li><strong>Write clear tool docstrings:<\/strong> The LLM reads your tool&#8217;s docstring to decide when to use it. A vague docstring means the agent will misuse the tool or ignore it entirely.<\/li>\n\n\n\n<li><strong>Always define a stopping condition:<\/strong> Without a clear condition that moves the graph to END, your agent can loop forever. Always test your conditional edges carefully.<\/li>\n\n\n\n<li><strong>Use temperature=0 for agents:<\/strong> Creative responses are great for writing tools but bad for agents that need to make consistent logical decisions. 
Keep temperature at zero for reliability.<\/li>\n\n\n\n<li><strong>Start with one tool:<\/strong> Build your first agent with a single tool and get it working end to end before adding more. Each new tool adds complexity to the agent&#8217;s decision-making.<\/li>\n\n\n\n<li><strong>Log your state at every step:<\/strong> During development, print the state after each node runs. Seeing the full state at each step makes debugging dramatically easier.<\/li>\n<\/ul>\n\n\n\n<div style=\"background-color: #099f4e; border: 3px solid #110053; border-radius: 12px; padding: 18px 22px; color: #FFFFFF; font-size: 18px; font-family: Montserrat, Helvetica, sans-serif; line-height: 1.6; box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15); max-width: 750px; margin: 22px auto;\">\n  <h3 style=\"margin-top: 0; font-size: 22px; font-weight: 700; color: #ffffff;\">\ud83d\udca1 Did You Know?<\/h3>\n  <ul style=\"padding-left: 20px; margin: 10px 0;\">\n    <li>LangGraph is an MIT-licensed open-source library, meaning it is completely free to use in personal and commercial projects with no restrictions.<\/li>\n    <li>The ReAct pattern used in agentic AI was introduced in a 2022 research paper by Google and has since become the standard architecture for production AI agents.<\/li>\n    <li>LangGraph supports human-in-the-loop checkpoints, meaning you can pause an agent mid-workflow, let a human review the state, and then resume execution from exactly where it stopped.<\/li>\n  <\/ul>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n\n\n\n<p>Learning how to build agentic AI with LangChain and LangGraph is one of the most valuable skills a developer can have in 2026. You have gone from understanding what agentic AI is, to knowing the difference between LangChain and LangGraph, to building a real working agent with nodes, edges, tools, and state. 
The concepts you learned here (ReAct, state management, conditional edges, and tool binding) are the same foundations used in production systems at companies building the most advanced AI applications today.<\/p>\n\n\n\n<p>The best next step is to take the agent you built in this tutorial and extend it with one more tool. Add a calculator, a database query, or a file reader. Run it, trace the state at each step, and watch how the graph handles a more complex workflow. That hands-on loop of building and observing is what turns theory into real skill.<\/p>\n\n\n\n<p>If you want to go deeper, also explore our <a href=\"https:\/\/www.guvi.in\/blog\/autogen-tutorial\/\" target=\"_blank\" rel=\"noreferrer noopener\">AutoGen Tutorial<\/a> to see how different agentic frameworks compare.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>FAQs<\/strong><\/h2>\n\n\n<div id=\"rank-math-faq\" class=\"rank-math-block\">\n<div class=\"rank-math-list \">\n<div id=\"faq-question-1773727817992\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>1. Do I need to know machine learning to build agentic AI with LangChain and LangGraph?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>No. You do not need any machine learning background. You need basic Python skills, an understanding of functions and dictionaries, and the concepts covered in this tutorial. LangChain and LangGraph handle all the model complexity for you.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1773727838195\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>2. What is the difference between LangChain and LangGraph?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>LangChain gives you the building blocks like LLM integration, tools, and memory. 
LangGraph gives you the workflow engine that connects those blocks into stateful graphs with loops, branches, and conditional logic. Most serious agentic AI projects use both together.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1773727857980\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>3. Can I use LangGraph with models other than OpenAI?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Yes. LangGraph works with any LLM that LangChain supports, which includes Anthropic Claude, Google Gemini, Mistral, LLaMA, and many more. You just swap out the LLM class in your code and everything else stays the same.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1773727877464\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>4. How is agentic AI different from a regular chatbot?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>A regular chatbot takes a message and returns a reply. An agentic AI takes a goal, breaks it into steps, uses tools to gather information or take actions, reasons about the results, and keeps going until the goal is fully completed. It acts instead of just responding.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1773727896973\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>5. Are LangChain and LangGraph free to use?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Yes. Both LangChain and LangGraph are open-source and MIT-licensed, meaning they are completely free. You will pay for the LLM API you use, such as OpenAI or Anthropic, but the frameworks themselves have no cost.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>You have probably used ChatGPT or any AI chatbot and noticed that it answers questions but cannot actually go out and do things on its own. 
Ask it to research a topic, write a report, check your database, and send a summary email, and it will tell you how to do it but will not [&hellip;]<\/p>\n","protected":false},"author":65,"featured_media":105860,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[933],"tags":[],"views":"1216","authorinfo":{"name":"Jebasta","url":"https:\/\/www.guvi.in\/blog\/author\/jebasta\/"},"thumbnailURL":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/03\/How-to-Build-Agentic-AI-with-LangChain-and-LangGraph-300x116.png","jetpack_featured_media_url":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/03\/How-to-Build-Agentic-AI-with-LangChain-and-LangGraph.png","_links":{"self":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/104071"}],"collection":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/users\/65"}],"replies":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/comments?post=104071"}],"version-history":[{"count":5,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/104071\/revisions"}],"predecessor-version":[{"id":105866,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/104071\/revisions\/105866"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media\/105860"}],"wp:attachment":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media?parent=104071"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/categories?post=104071"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/tags?post=104071"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}