Unleash the Power of AI: Build Your Own Coding Assistant

Ever dreamed of having a tireless coding partner that understands plain English and writes code for you? That’s exactly what a Coder Agent is - your personal AI-powered coding assistant that turns your ideas into working code.

In this guide, I’ll show you how to create your very own Coder Agent using Python. Whether you’re a seasoned developer or just starting out, you’ll learn how to harness the power of Large Language Models (LLMs) to automate your coding workflow.

What You’ll Build

Your Coder Agent will be able to:

  • Understand natural language instructions
  • Write and edit code based on your requirements
  • Manage project files and structure
  • Search for relevant information to improve its output

Building Blocks: The Foundation

This tutorial builds on some excellent resources from the LangChain team, in particular the LangGraph tutorials and the prebuilt ReAct agent how-to guides.

Feel free to explore these for a deeper understanding of the concepts we’ll use.

Setting Up Your Environment

I come from a JavaScript/Node.js background, so I’ll be using Poetry for managing Python dependencies - it reminds me of the simplicity of npm. Don’t worry if you prefer a different setup; the concepts will work with any Python environment.

This guide is written from a Mac perspective, but the principles apply to any operating system.

Let’s Get Building!

Step 1: Environment Setup

  1. Install Python 3 (on a Mac you’re most likely already covered)
  2. Install Poetry: brew install poetry
  3. Create the project folder: poetry new coding-assistant
  4. cd coding-assistant
  5. poetry add langgraph langsmith langchain_anthropic

Step 2: Create a Chatbot

Let’s start with something simple - a basic chatbot that can understand and respond to messages. This will form the foundation of our Coder Agent.

The whole chatbot in this example lives in a single file, coding_assistant/assistant.py. You can simply copy and paste the examples into that file.

Setup LLM

In theory you can use any supported LLM for this example. We will be using OpenAI’s GPT-4o, the model behind ChatGPT. You can read below about why we cannot use Claude for now.

To use GPT-4o for your LLM needs, all you have to do is get your OpenAI API key from the OpenAI website and export it like this:

export OPENAI_API_KEY="sk-abcd123..."
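If the key is missing, the OpenAI client fails with a fairly cryptic authentication error. A small, optional sketch of a fail-fast startup check (the helper name require_api_key is my own invention, not part of any library):

```python
import os


def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the named API key from the environment, or fail fast with a clear hint."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; export it before running the assistant")
    return key
```

You could call require_api_key() once at the top of assistant.py, before constructing the LLM.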

First try

from typing_extensions import TypedDict, Annotated
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

llm = ChatOpenAI(model="gpt-4o", temperature=0)


class State(TypedDict):
    # add_messages appends new messages to the list instead of overwriting it
    messages: Annotated[list, add_messages]


def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)

graph = graph_builder.compile()


def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break

        stream_graph_updates(user_input)
    except (EOFError, KeyboardInterrupt):
        # fallback if input() is not available
        print("An interactive console session is required")
        break

This code creates a simple chatbot that we can interact with. Run it by executing this command:

> poetry run python coding_assistant/assistant.py
User: hello world
Assistant: Hello! How can I assist you today?
User: quit
Goodbye!

Step 3: Add Tools and Capabilities

Now comes the exciting part - we’ll give our agent the ability to actually work with code! By adding tools for file management and search capabilities, we’ll transform our simple chatbot into a capable coding assistant.

Following the How to use the prebuilt ReAct agent document, we’ll give the LLM tools that it can use to edit our project files. We also hook in a custom prompt to make the assistant understand what’s expected of it, based on How to add a custom system prompt to the prebuilt ReAct agent.

We will make a few changes to the previous code:

  1. We will extend the LangGraph graph_builder with a call to create_react_agent with the list of tools.
  2. We will add a search tool to make our resulting code better. This allows the agent to look up things it doesn’t know yet. (Note that the Tavily search tool requires a TAVILY_API_KEY environment variable, exported just like the OpenAI key.)
  3. We will also add memory to persist the chat history, so the assistant remembers the previous discussion.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0)


import sys

if len(sys.argv) <= 1:
    print("Usage: python assistant.py <project_directory>")
    sys.exit(1)

working_directory = sys.argv[1]
print("Project directory:", working_directory)


from langchain_community.tools.tavily_search import TavilySearchResults

search_toolset = [
    TavilySearchResults(max_results=5),
]


from langchain_community.agent_toolkits import FileManagementToolkit

file_management_toolset = FileManagementToolkit(root_dir=working_directory).get_tools()

tools = search_toolset + file_management_toolset


from langgraph.prebuilt import create_react_agent

prompt = """
You are a helpful coding assistant that will create and edit the whole project based on
the user's instructions. Generate concise, efficient, and modern code to address the
described task or problem. Prioritize readability, leverage advanced features, and adhere
to best practices.

Always start by listing the files in the project directory.
"""
react_agent = create_react_agent(llm, tools=tools, state_modifier=prompt)


from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()

config = {"configurable": {"thread_id": "1"}}


from typing_extensions import TypedDict, Annotated
from langgraph.graph.message import add_messages
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    messages: Annotated[list, add_messages]

def chatbot(state: State):
    messages = state["messages"]
    response = react_agent.invoke({"messages": messages})
    print(response)  # debug output; remove for cleaner logs
    return response

graph_builder = StateGraph(State)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge("chatbot", END)

graph = graph_builder.compile(checkpointer=memory)


def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [("user", user_input)]}, config):
        for value in event.values():
            message = value["messages"][-1]
            if isinstance(message, tuple):
                print(message)
            else:
                message.pretty_print()

import traceback

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break

        stream_graph_updates(user_input)
    except Exception:
        print(traceback.format_exc())
        break

Let’s try it out! Run the assistant and you should see something like this:

> poetry run python coding_assistant/assistant.py project
Project directory: project
User: I need a TODO app

Potential issues

There is one challenge I already fixed for you that I want to share, so you don’t have to waste time figuring it out.

We could otherwise use Claude in place of GPT-4o, but Claude fails to provide multiple arguments to tools correctly. What this means is that when the write_file tool requires both the file_path and the content of the file, roughly half of the time Claude only provides the file_path. That causes the following error:

Error: TypeError("CreateFileTool._run() missing 1 required positional argument: 'content'")
Please fix your mistakes.

Next, open the directory you provided on the command line and open the index.html file in there with your browser.

It might look something like this: [TODO app screenshot]

Now you can continue the conversation and make changes to the application and see the changes update on screen.

Taking Your Agent to the Next Level

Congratulations! You’ve built a basic but functional Coder Agent. While it’s already useful, there’s so much more potential to explore. Here are some exciting ways to enhance your agent:

  • Smart File Filtering: Respect .gitignore rules using the pathspec module
  • Custom Behavior Rules: Add project-specific instructions through .assistantrules
  • Intelligent Project Analysis: Make the assistant always start by fetching the project file list
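For the first idea, the pathspec module implements the full gitignore syntax; as a rough illustration of the concept, here is a stdlib-only sketch using fnmatch (a simplification that ignores negations and directory-specific rules — use pathspec for real projects):

```python
import fnmatch


def load_ignore_patterns(gitignore_text: str) -> list[str]:
    """Parse the non-comment, non-empty lines of a .gitignore file."""
    patterns = []
    for line in gitignore_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            patterns.append(line.rstrip("/"))
    return patterns


def is_ignored(path: str, patterns: list[str]) -> bool:
    """Return True if any path component matches an ignore pattern."""
    parts = path.split("/")
    return any(
        fnmatch.fnmatch(part, pattern)
        for part in parts
        for pattern in patterns
    )
```

You could run the file list produced by the FileManagementToolkit through a filter like is_ignored before handing it to the model, so the agent never wastes tokens on node_modules or build artifacts.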

Pro Tips for Better Results

When working with your new Coder Agent, keep these points in mind:

  • Be specific in your requirements
  • Start with smaller tasks and gradually increase complexity
  • Review the changes before committing them
  • Keep the conversation context focused on one feature or change at a time
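For the last tip, the MemorySaver checkpointer from Step 3 keys conversations by thread_id, so one lightweight way to keep contexts separate is to use a distinct thread id per feature. A sketch based on the config dict from the code above (the helper name thread_config is my own):

```python
def thread_config(feature: str) -> dict:
    """Build a per-feature LangGraph config so each feature gets its own chat history."""
    return {"configurable": {"thread_id": f"feature-{feature}"}}
```

Passing a fresh config to graph.stream when you switch features starts a clean conversation while keeping the old ones resumable.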

The world of AI-powered development is evolving rapidly, and your Coder Agent is just the beginning. What will you build with it?