In CrewAI, an agent is a persona with a job to do. Each agent is configured with a role, a goal, and a backstory that shape how the underlying LLM behaves. These three fields are injected into the system prompt, giving the model a consistent identity and purpose throughout a multi-step task. Designing effective agents means crafting these personas carefully, selecting the right LLM backend, and tuning parameters such as memory, delegation, and iteration limits. This section covers everything you need to build well-defined CrewAI agents.
1. The Agent Abstraction
CrewAI models multi-agent collaboration as a crew of specialized agents, each
responsible for a distinct piece of work. The Agent class is the fundamental
building block. Unlike generic LLM wrappers, a CrewAI agent carries a persistent identity
defined by three core fields: role, goal, and
backstory. These fields are not merely labels; they are woven directly into
the system prompt that the LLM receives on every call, ensuring consistent behavior across
multiple reasoning steps.
The following example creates a minimal agent with just the three required persona fields. This is the simplest possible agent configuration.
```python
from crewai import Agent

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find and synthesize the latest information on a given topic",
    backstory=(
        "You are a veteran research analyst with 15 years of experience "
        "at a top consulting firm. You excel at finding primary sources, "
        "cross-referencing claims, and producing concise summaries that "
        "executives can act on immediately."
    ),
)

print(researcher.role)  # Senior Research Analyst
```
Write backstories in the second person ("You are...") rather than the third person. CrewAI injects the backstory directly into the system prompt, so second person reads naturally as an instruction to the LLM.
2. Role, Goal, and Backstory in Depth
Each of the three persona fields serves a distinct purpose in shaping agent behavior:
- role: A short title that identifies the agent's function within the crew. CrewAI uses this as the agent's display name in logs and when other agents reference it during delegation. Keep it concise (3 to 6 words).
- goal: A one-sentence description of what the agent is trying to accomplish. The goal is appended to every prompt, giving the LLM a consistent objective. Specific, measurable goals produce better results than vague ones.
- backstory: A paragraph of context that establishes the agent's expertise, personality, and working style. This is the most influential field for controlling output quality and tone. Think of it as a detailed system prompt fragment.
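The difference between a vague and a specific goal is easiest to see side by side. The goal strings below are illustrative examples of the principle, not taken from the CrewAI docs:

```python
# Vague: gives the LLM no scope, output format, or success criteria
vague_goal = "Do research on the market"

# Specific and measurable: states scope, deliverable, and quality bar
specific_goal = (
    "Identify the three fastest-growing segments in the US EV market "
    "and summarize each in under 100 words with a cited source"
)

print(vague_goal)
print(specific_goal)
```

An agent given the second goal knows when it is done; an agent given the first has to guess.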
The table below summarizes how each field influences the LLM's behavior.
| Field | Purpose | Prompt Placement | Best Practice |
|---|---|---|---|
| role | Identity label | System prompt header | 3 to 6 words, noun phrase |
| goal | Objective anchor | System prompt body | Specific, measurable |
| backstory | Expertise and style | System prompt body | 2 to 4 sentences, second person |
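To make the table concrete, here is a rough sketch of how the three fields might be assembled into a system prompt. This is an illustrative approximation only; CrewAI's actual template wording differs:

```python
def build_system_prompt(role: str, goal: str, backstory: str) -> str:
    # Role heads the prompt as the identity label; backstory and goal
    # follow in the body, mirroring the placement shown in the table.
    return (
        f"You are {role}.\n"
        f"{backstory}\n"
        f"Your personal goal is: {goal}"
    )

prompt = build_system_prompt(
    role="Senior Research Analyst",
    goal="Find and synthesize the latest information on a given topic",
    backstory="You are a veteran research analyst with 15 years of experience.",
)
print(prompt)
```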
3. LLM Selection and Configuration
By default, CrewAI agents use the model specified in the OPENAI_MODEL_NAME
environment variable (falling back to gpt-4o). You can override this on a
per-agent basis with the llm parameter, which accepts a model name string
or an LLM configuration object. This lets you mix models within a single crew, using
a powerful model for complex reasoning and a cheaper model for straightforward tasks.
```python
from crewai import Agent

# Use a specific model by name (requires the appropriate API key in env)
analyst = Agent(
    role="Financial Analyst",
    goal="Produce accurate quarterly earnings summaries",
    backstory="You are a CFA charterholder with deep expertise in equity research.",
    llm="gpt-4o",  # OpenAI model
)

# Use a different provider for a simpler task
formatter = Agent(
    role="Report Formatter",
    goal="Convert raw analysis into clean markdown reports",
    backstory="You are a technical writer who specializes in financial documents.",
    llm="anthropic/claude-3-5-sonnet-20241022",  # Anthropic via LiteLLM syntax
)

# Use a local model via Ollama
local_agent = Agent(
    role="Data Classifier",
    goal="Classify support tickets by category",
    backstory="You are a support triage specialist.",
    llm="ollama/llama3.1",
)
```
CrewAI uses LiteLLM under the hood, so any model string that LiteLLM supports will work. Prefix the model name with the provider (e.g., anthropic/, ollama/, groq/) for non-OpenAI models.
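The provider-prefix convention can be illustrated with a small helper. This is a sketch of the naming scheme only; split_model_string is a hypothetical function, not part of CrewAI or LiteLLM:

```python
def split_model_string(model: str) -> tuple[str, str]:
    """Split a LiteLLM-style model string into (provider, model name).

    Strings with no prefix are treated as OpenAI models, matching the
    default behavior described above.
    """
    if "/" in model:
        provider, name = model.split("/", 1)
        return provider, name
    return "openai", model

print(split_model_string("anthropic/claude-3-5-sonnet-20241022"))
print(split_model_string("gpt-4o"))
```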
4. Agent Memory
CrewAI agents can optionally maintain memory across interactions. When memory
is enabled at the agent level, the agent retains context from earlier steps in the current
execution. This is distinct from crew-level memory (covered in
Section N.5), which persists across multiple crew runs.
Agent-level memory helps when an agent is called multiple times within a single crew
execution and needs to recall what it did previously.
```python
from crewai import Agent

researcher = Agent(
    role="Iterative Researcher",
    goal="Build a comprehensive knowledge base through multiple search rounds",
    backstory=(
        "You are a methodical researcher who builds understanding incrementally. "
        "You remember what you have already found and avoid redundant searches."
    ),
    memory=True,  # Enable agent-level memory
)
```
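The practical effect of agent memory, avoiding redundant work within a single run, can be sketched in plain Python. This is an illustrative model of the idea, not CrewAI's actual memory implementation:

```python
class SketchMemory:
    """Toy short-term memory: remembers which queries were already searched."""

    def __init__(self):
        self.seen_queries = set()

    def should_search(self, query: str) -> bool:
        # Skip queries already run during this execution
        if query in self.seen_queries:
            return False
        self.seen_queries.add(query)
        return True

memory = SketchMemory()
print(memory.should_search("crewai agents"))  # first time: True
print(memory.should_search("crewai agents"))  # repeat: False
```

Without memory, each call starts from a blank slate and the second search would run again.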
5. Verbose Mode and Debugging
During development, you will frequently need to inspect what an agent is thinking and
doing at each step. The verbose parameter controls whether the agent prints
its reasoning, tool calls, and intermediate outputs to the console. Set it to
True during development and False in production.
```python
from crewai import Agent

writer = Agent(
    role="Technical Writer",
    goal="Produce clear, accurate documentation for APIs",
    backstory="You are a senior technical writer at a developer tools company.",
    verbose=True,  # Print reasoning steps to console
)

# When this agent runs, you will see output like:
# [Technical Writer] Thinking: I need to understand the API endpoints first...
# [Technical Writer] Using tool: FileReadTool
# [Technical Writer] Tool result: { ... }
```
6. Delegation Control
CrewAI agents can delegate subtasks to other agents in the crew. This is
a powerful feature for complex workflows, but it can also lead to unexpected behavior
if agents delegate when they should not. The allow_delegation parameter
gives you explicit control. When set to False, the agent must complete its
task on its own without asking other agents for help.
```python
from crewai import Agent

# This agent works independently; no delegation
editor = Agent(
    role="Copy Editor",
    goal="Fix grammar, style, and formatting issues in draft text",
    backstory="You are a meticulous copy editor with 20 years of experience.",
    allow_delegation=False,  # Must do its own work
)

# This agent can ask others for help
lead_researcher = Agent(
    role="Lead Researcher",
    goal="Coordinate research efforts and synthesize findings",
    backstory="You lead a team of analysts and know when to delegate deep dives.",
    allow_delegation=True,  # Can delegate to other crew members
)
```
Delegation can consume additional LLM calls and increase both latency and cost. Disable it for agents performing self-contained, well-defined tasks. Enable it only for coordinator or manager agents that genuinely benefit from farming out subtasks.
7. Iteration Limits with max_iter
Agents operate in a reasoning loop, and without a bound, a confused agent could loop
indefinitely. The max_iter parameter sets the maximum number of reasoning
iterations an agent can perform for a single task. If the agent has not produced a final
answer after max_iter steps, CrewAI forces it to return whatever it has
so far. The default is 20, which is sufficient for most tasks.
```python
from crewai import Agent

# Allow more iterations for complex research tasks
deep_researcher = Agent(
    role="Deep Researcher",
    goal="Thoroughly investigate complex technical questions",
    backstory="You are a PhD-level researcher who leaves no stone unturned.",
    max_iter=30,  # Allow up to 30 reasoning steps
)

# Restrict iterations for simple tasks to save cost
classifier = Agent(
    role="Ticket Classifier",
    goal="Classify each support ticket into exactly one category",
    backstory="You are a support triage bot.",
    max_iter=5,  # Simple task, should finish quickly
)
```
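The bounding behavior can be sketched as a plain loop. This is an illustrative model of the mechanism, not CrewAI's actual executor:

```python
def run_with_iteration_cap(step, max_iter: int = 20):
    """Run a reasoning-step function until it reports it is done,
    forcing a best-effort result once the iteration cap is reached."""
    partial = None
    for i in range(max_iter):
        partial, done = step(i, partial)
        if done:
            return partial  # agent finished on its own
    return partial  # cap reached: return whatever we have so far

# A toy step that never declares itself done
never_done = lambda i, prev: (f"draft after step {i + 1}", False)
print(run_with_iteration_cap(never_done, max_iter=5))  # draft after step 5
```

The cap trades completeness for a guaranteed upper bound on LLM calls, which is why a low max_iter suits cheap, simple tasks.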
8. System Prompt Customization
While role, goal, and backstory provide a structured way to shape the system prompt,
sometimes you need finer control. CrewAI allows you to inject additional system-level
instructions using the system_template parameter. This is useful for
enforcing output formats, safety constraints, or domain-specific rules that apply
regardless of the task.
```python
from crewai import Agent

compliance_agent = Agent(
    role="Compliance Reviewer",
    goal="Review content for regulatory compliance issues",
    backstory=(
        "You are a compliance specialist with expertise in GDPR, CCPA, "
        "and SOC 2 requirements."
    ),
    system_template=(
        "You MUST flag any personally identifiable information (PII) found in "
        "the content. Always cite the specific regulation that applies. "
        "Never approve content that contains unredacted PII.\n\n"
        "{system_prompt}"
    ),
)
```
The {system_prompt} placeholder in system_template is where CrewAI inserts the auto-generated system prompt (built from role, goal, and backstory). Place your custom instructions before or after this placeholder to control their priority.
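The substitution itself is ordinary string templating. The sketch below shows how custom instructions end up ahead of the generated persona prompt; it uses str.format as an illustrative stand-in, not CrewAI's internal template engine:

```python
system_template = (
    "You MUST flag any personally identifiable information (PII).\n\n"
    "{system_prompt}"
)

# Stand-in for the prompt CrewAI builds from role, goal, and backstory
generated = "You are Compliance Reviewer. Your goal is to review content..."

final_prompt = system_template.format(system_prompt=generated)
print(final_prompt)
```

Because the custom rule precedes the placeholder, it appears first in the final prompt, which is the ordering most models weight more heavily.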
9. Putting It All Together
The following example combines all the configuration options discussed in this section into a single, fully configured agent. This pattern is representative of production-grade agent definitions.
```python
from crewai import Agent

senior_analyst = Agent(
    role="Senior Market Analyst",
    goal=(
        "Identify emerging market trends and produce actionable investment "
        "recommendations backed by quantitative data"
    ),
    backstory=(
        "You are a senior analyst at a hedge fund with 12 years of experience "
        "in technology sector equities. You combine fundamental analysis with "
        "alternative data sources. Your reports are known for their precision "
        "and the clarity of their recommendations."
    ),
    llm="gpt-4o",
    memory=True,
    verbose=True,
    allow_delegation=False,
    max_iter=25,
)

print(f"Agent: {senior_analyst.role}")
print(f"Goal: {senior_analyst.goal}")
```
A well-designed CrewAI agent starts with three carefully crafted strings: role, goal, and backstory. These fields are not boilerplate; they are the primary mechanism for controlling agent behavior. Layer on LLM selection, memory, delegation, and iteration limits to fine-tune performance, cost, and reliability. In the next section, we will define tasks that give these agents something to do.