You should write an agent
TL;DR Highlight
A hands-on tutorial showing that LLM agents can be built with surprisingly simple code — you have to build one to really get it.
Who Should Read
Developers who are curious about building LLM agents from scratch and want a practical, low-ceremony starting point.
Core Mechanics
- A minimal LLM agent requires: an LLM call, a tool registry, a loop that routes tool calls, and a termination condition
- The core agent loop fits in under 100 lines of Python for a capable task-completing agent
- Most complexity in real agents comes from tool design and error handling, not the agent loop itself
- Tutorial walks through building a file-reading, web-searching, code-executing agent step by step
- Demonstrates that 'agentic' frameworks often add more abstraction than necessary for simple use cases
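The four ingredients listed above can be sketched in a few lines of Python. This is an illustrative skeleton, not the tutorial's actual code: the LLM is stubbed out with a scripted `fake_llm` function and the `ping` handler is a dummy, so only the loop structure (registry, dispatch, termination) is real.

```python
# Hypothetical tool registry: tool names mapped to handler functions.
def ping(host):
    return f"pong from {host}"

TOOLS = {"ping": ping}

def fake_llm(messages):
    # Stand-in for a real LLM call: asks for a tool once, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "ping", "args": {"host": "example.com"}}
    return {"answer": "example.com is reachable"}

def agent_loop(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = fake_llm(messages)
        if "tool" in reply:
            # Route the tool call through the registry, feed the result back.
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": result})
        else:
            # Termination condition: a final answer with no tool call.
            return reply["answer"]
    return "gave up"

print(agent_loop("Is example.com up?"))  # example.com is reachable
```

Swapping `fake_llm` for a real model call is the only change needed to make this a working agent, which is the article's point: the loop itself is trivial.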
Evidence
- Working code examples with full source code provided
- Demonstrated on real tasks: file summarization, web research, code execution
- Code complexity analysis comparing framework-based vs. hand-rolled agent implementations
How to Apply
- Start with the minimal agent loop (LLM call + tool dispatch + loop) before reaching for heavyweight frameworks like LangChain or AutoGen.
- Define your tool schemas (name, description, input schema) carefully — this is where most agent quality comes from.
- Add structured output (JSON mode) to your LLM call for reliable tool call parsing.
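The last point can be made concrete with a small defensive parser. Assuming the model was asked to reply with a JSON object shaped like `{"tool": ..., "args": {...}}` (the shape here is illustrative, not any particular API's format), a guarded decode keeps malformed output from crashing the loop:

```python
import json

def parse_tool_call(raw):
    """Parse model output into (tool_name, args); (None, None) means final answer."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None, None  # not JSON: treat as a plain-text final answer
    if isinstance(data, dict) and "tool" in data:
        return data["tool"], data.get("args", {})
    return None, None

print(parse_tool_call('{"tool": "ping", "args": {"host": "example.com"}}'))
# ('ping', {'host': 'example.com'})
print(parse_tool_call("All done, nothing to run."))
# (None, None)
```

With JSON mode (or native tool-call support) enabled on the LLM call, the error branch should rarely fire, but keeping it makes the loop robust to the occasional free-text reply.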
Code Example
snippet
from openai import OpenAI

client = OpenAI()
context = []

def call():
    return client.responses.create(model="gpt-5", input=context)

def process(line):
    context.append({"role": "user", "content": line})
    response = call()
    context.append({"role": "assistant", "content": response.output_text})
    return response.output_text

# Example tool definition
tools = [{
    "type": "function",
    "name": "ping",
    "description": "ping some host on the internet",
    "parameters": {
        "type": "object",
        "properties": {
            "host": {"type": "string", "description": "hostname or IP"}
        },
        "required": ["host"]
    }
}]
Terminology
Agent Loop: The core cycle of an LLM agent. Observe state, call the LLM, execute any tool calls, update state, and repeat until done.
Tool Registry: A data structure mapping tool names to their handler functions, used by the agent loop to dispatch tool calls.
Termination Condition: The criterion that tells the agent loop to stop, e.g. the LLM produces a final answer with no tool call.
Structured Output: Constraining LLM output to a specific format (e.g. a JSON schema) for reliable downstream parsing.