Build multi-agent systems with CrewAI framework
You are a CrewAI framework specialist. The user wants to set up a multi-agent system using CrewAI with properly configured agents, tasks, and crew orchestration.
What to check first
- Run `pip list | grep crewai` to verify CrewAI is installed (version 0.1.0+)
- Check `echo $OPENAI_API_KEY` to confirm your LLM API key is set in the environment
- Verify the Python version is 3.9+ with `python --version` (a one-shot sanity script follows this list)
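If you'd rather run all three checks at once, a small Python sanity script works too. This is a minimal sketch; the version attribute is read defensively because not every CrewAI release exposes `__version__`.

import os
import sys

# Fail fast if the environment isn't ready for CrewAI
assert sys.version_info >= (3, 9), "CrewAI requires Python 3.9+"
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"

import crewai  # raises ImportError if CrewAI isn't installed

print("crewai version:", getattr(crewai, "__version__", "unknown"))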
Steps
- Install CrewAI and dependencies with `pip install crewai crewai-tools langchain-openai`
- Import the core classes `Agent`, `Task`, and `Crew` from the `crewai` module
- Define your first Agent with `role`, `goal`, and `backstory` parameters, and assign an LLM via the `llm` parameter
- Create a Task by instantiating it with `description` and `expected_output`, and assign it to an `agent`
- Set the Task's `tools` parameter to give agents access to functions like web search or file operations
- Define additional agents and tasks following the same pattern for your multi-agent workflow
- Instantiate a Crew with an `agents` list, a `tasks` list, and `verbose=True` for debugging
- Call `crew.kickoff()` to execute the workflow; the return value is the final result, and each completed task's `output` property holds its individual result
Code
from crewai import Agent, Task, Crew
from crewai_tools import tool
from langchain_openai import ChatOpenAI

# Initialize the LLM shared by both agents
llm = ChatOpenAI(model="gpt-4", temperature=0.7)

# Define custom tools
@tool
def search_web(query: str) -> str:
    """Search the web for information"""
    # Stub implementation; swap in a real search API call
    return f"Search results for: {query}"

@tool
def write_file(filename: str, content: str) -> str:
    """Write content to a file"""
    with open(filename, 'w') as f:
        f.write(content)
    return f"Written to {filename}"

# Create Agents
researcher = Agent(
    role="Research Analyst",
    goal="Find and analyze relevant information",
    backstory="Expert researcher with deep analytical skills",
    llm=llm,
    tools=[search_web],
    verbose=True
)

writer = Agent(
    role="Content Writer",
    goal="Write clear, engaging content based on research",
    backstory="Professional writer skilled in technical content",
    llm=llm,
    tools=[write_file],
    verbose=True
)

# Create Tasks
research_task = Task(
    description="Research the latest trends in AI agents",
    expected_output="Comprehensive list of 5 key trends with explanations",
    agent=researcher,
    tools=[search_web]
)

writing_task = Task(
    description="Write a blog post about AI agent trends",
    # The source example was cut off here; the fields below are
    # reconstructed from the Steps section above.
    expected_output="A blog post covering the researched trends",
    agent=writer,
    tools=[write_file]
)

# Assemble the Crew and run it
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True
)

result = crew.kickoff()
print(result)
Note: this example was truncated in the source; the final task and crew wiring above were reconstructed from the Steps section. See the GitHub repo for the latest full version.
Common Pitfalls
- Letting agents loop indefinitely without a hard step limit — cap runs at 10-20 iterations for most workflows (in CrewAI, the Agent's `max_iter` parameter)
- Passing the entire conversation history every iteration — costs explode; use summarization or a sliding window
- Not validating tool outputs before passing them to the next step — one bad output corrupts the entire chain
- Trusting the agent's self-evaluation — agents are notoriously bad at knowing when they're wrong
- Forgetting that agents can hallucinate tool calls that don't exist — always validate tool names against your registry (see the guard sketch after this list)
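To make the last two points concrete, here is a minimal guard-layer sketch. `KNOWN_TOOLS` and the output cap are hypothetical names for this example, not CrewAI APIs; wire them into whatever dispatch path your tool calls go through.

KNOWN_TOOLS = {"search_web", "write_file"}  # hypothetical registry of the tools you actually defined
MAX_TOOL_OUTPUT_CHARS = 4000                # hypothetical cap to keep context from bloating

def validate_tool_call(name: str) -> None:
    # Reject hallucinated tool names before anything is dispatched
    if name not in KNOWN_TOOLS:
        raise ValueError(f"Agent requested unknown tool: {name!r}")

def validate_tool_output(output: str) -> str:
    # Never hand an empty or unbounded result to the next step
    if not output or not output.strip():
        raise ValueError("Tool returned an empty result")
    return output[:MAX_TOOL_OUTPUT_CHARS]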
When NOT to Use This Skill
- When a single LLM call would suffice — agents add 5-10x latency and cost
- When the task has well-defined steps that don't need branching logic — use a workflow engine instead
- For high-stakes decisions without human review — agents make confident mistakes
How to Verify It Worked
- Run the agent on 10+ test cases including edge cases — track success rate, average steps, and total cost (a minimal harness sketch follows this list)
- Compare agent output to human baseline — if a human can do it faster and cheaper, you don't need an agent
- Inspect the full reasoning trace, not just the final output — agents often arrive at correct answers via wrong reasoning
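A minimal harness for the first check might look like the sketch below. It assumes your task descriptions use `{topic}`-style placeholders so `crew.kickoff(inputs=...)` can fill them in (supported in recent CrewAI versions), and the `must_contain` check is a stand-in for whatever pass criterion fits your task.

# Hypothetical test cases; extend to 10+ including edge cases
test_cases = [
    {"inputs": {"topic": "AI agents"}, "must_contain": "trend"},
    {"inputs": {"topic": ""}, "must_contain": "trend"},  # edge case: empty topic
]

passed = 0
for case in test_cases:
    result = crew.kickoff(inputs=case["inputs"])
    if case["must_contain"].lower() in str(result).lower():
        passed += 1

print(f"Success rate: {passed}/{len(test_cases)}")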
Production Considerations
- Set hard cost ceilings per agent run — a runaway agent can burn $50+ in minutes
- Log every tool call, every model call, every state transition — debugging agents without logs is impossible (see the step_callback sketch after this list)
- Have a kill switch — agents should be cancelable mid-run without corrupting state
- Monitor token usage trends — context bloat is the #1 cause of agent cost overruns
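For the logging point, recent CrewAI versions accept a `step_callback` on the Crew (verify the hook and its payload shape in your installed version); a minimal sketch:

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("crew")

def log_step(step_output):
    # Record every intermediate agent step so failed runs can be reconstructed
    log.info("agent step: %s", step_output)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    step_callback=log_step,  # invoked after each agent step
    verbose=True,
)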
Related AI Agents Skills
Other Claude Code skills in the same category — free to download.
AutoGen Setup
Create AI agent conversations with AutoGen
LangGraph Workflow
Build stateful AI agent workflows with LangGraph
AI Agent Tools
Create custom tools for AI agents (search, calculator, API)
AI Agent Memory
Implement agent memory with vector stores and summaries
AI Agent Evaluation
Evaluate AI agent performance with benchmarks and metrics
AI Agent Observability
Add tracing, logging, and metrics to AI agents so you can debug failures
AI Agent Retry Strategy
Build robust retry logic for LLM and tool calls in AI agents
pydantic-ai
Build production-ready AI agents with PydanticAI — type-safe tool use, structured outputs, dependency injection, and multi-model support.