Building Conversational AI with LLMs and Agents
Appendix N: CrewAI: Multi-Agent Orchestration

Crews: Sequential and Hierarchical Processes

Big Picture

A crew is the orchestration layer that brings agents, tasks, and tools together. The Crew class defines how tasks are executed: sequentially (one after another) or hierarchically (managed by a coordinator agent). It also controls global settings like verbose logging, memory, rate limiting, and input variables. This section covers crew configuration, process types, kickoff mechanics, and output handling.

1. The Crew Class

The Crew class is the entry point for running a multi-agent workflow. At minimum, you provide a list of agents and a list of tasks. CrewAI then orchestrates execution, passing outputs between tasks, managing tool calls, and collecting final results. Think of the crew as the "main function" of your multi-agent application.

The following example creates and runs a minimal crew with two agents and two tasks.

from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Gather information on a given topic",
    backstory="You are a thorough research specialist.",
)
writer = Agent(
    role="Writer",
    goal="Write clear, engaging content",
    backstory="You are a skilled technical writer.",
)

research_task = Task(
    description="Research the key features of CrewAI version 0.80.",
    expected_output="A bullet-point list of 5 to 8 key features with brief descriptions.",
    agent=researcher,
)
writing_task = Task(
    description="Write a blog post introducing CrewAI's key features to developers.",
    expected_output="An 800-word blog post in markdown format.",
    agent=writer,
    context=[research_task],
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
)

result = crew.kickoff()
print(result.raw[:200])
# CrewAI 0.80: Key Features for Developers CrewAI 0.80 introduces several significant improvements for building multi-agent AI applications. Here are the highlights: ## 1. Enhanced Task De...

2. Sequential Process

The sequential process is the default execution mode. Tasks run one at a time in the order they appear in the tasks list. Each task can access the outputs of all previous tasks (explicitly via context or implicitly through the sequential flow). This is the simplest and most predictable execution mode.

from crewai import Agent, Task, Crew, Process

planner = Agent(role="Planner", goal="Create project plans", backstory="Expert PM.")
developer = Agent(role="Developer", goal="Implement features", backstory="Senior engineer.")
tester = Agent(role="Tester", goal="Validate implementations", backstory="QA specialist.")

plan_task = Task(
    description="Create a development plan for a REST API with user authentication.",
    expected_output="A numbered list of implementation steps with time estimates.",
    agent=planner,
)
dev_task = Task(
    description="Based on the plan, write the core API endpoint code in Python/FastAPI.",
    expected_output="Python code for the main API endpoints with inline comments.",
    agent=developer,
    context=[plan_task],
)
test_task = Task(
    description="Review the code and create a test plan with specific test cases.",
    expected_output="A test plan with at least 5 test cases covering happy path and edge cases.",
    agent=tester,
    context=[dev_task],
)

crew = Crew(
    agents=[planner, developer, tester],
    tasks=[plan_task, dev_task, test_task],
    process=Process.sequential,  # This is the default
)
# result = crew.kickoff()
Tip

Sequential execution is predictable and easy to debug. Start with sequential mode during development, and only switch to hierarchical when you need a manager agent to coordinate complex, interdependent tasks dynamically.
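To build intuition for how sequential context chaining works, here is a toy sketch in plain Python (a stand-in illustration, not CrewAI internals): tasks run in list order, and a task's declared context determines which earlier outputs it receives.

```python
# Toy sketch of sequential chaining (not CrewAI internals):
# each task runs in order, and tasks named in its context list
# contribute their outputs as input to this task.

def run_sequential(tasks):
    """Run tasks in insertion order; `tasks` maps name -> (context_names, fn)."""
    outputs = {}
    for name, (context_names, fn) in tasks.items():
        # Gather outputs of the tasks this one depends on
        context = [outputs[c] for c in context_names]
        outputs[name] = fn(context)
    return outputs

tasks = {
    "plan": ([], lambda ctx: "1. Design schema 2. Build endpoints"),
    "dev": (["plan"], lambda ctx: f"Code implementing: {ctx[0]}"),
    "test": (["dev"], lambda ctx: f"Test plan for: {ctx[0]}"),
}
outputs = run_sequential(tasks)
print(outputs["test"])
# Test plan for: Code implementing: 1. Design schema 2. Build endpoints
```

The key property this illustrates: a later task never starts before its dependencies finish, which is what makes sequential mode easy to reason about.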

3. Hierarchical Process

The hierarchical process introduces a manager agent that coordinates the crew. Instead of tasks running in a fixed order, the manager decides which agent should handle each task, reviews outputs, and can reassign work if the result is unsatisfactory. This mimics a real team with a project lead who delegates and reviews. You can provide your own manager agent or let CrewAI create one automatically using a specified manager_llm.

from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="Researcher",
    goal="Find accurate data on market trends",
    backstory="You are a data-driven market researcher.",
)
analyst = Agent(
    role="Analyst",
    goal="Identify patterns and insights from data",
    backstory="You are a quantitative analyst with statistics expertise.",
)
writer = Agent(
    role="Report Writer",
    goal="Produce polished executive reports",
    backstory="You write reports for C-suite audiences.",
)

# In hierarchical mode, you do not need to assign agents to tasks;
# the manager will decide who handles what.
tasks = [
    Task(
        description="Gather data on the global EV market: sales, growth, top manufacturers.",
        expected_output="Raw data with sources.",
    ),
    Task(
        description="Analyze the EV market data and identify the top 3 trends.",
        expected_output="A list of 3 trends with supporting evidence.",
    ),
    Task(
        description="Write a 1-page executive summary of the EV market analysis.",
        expected_output="A polished executive summary, under 500 words.",
    ),
]

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=tasks,
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # LLM for the auto-created manager agent
)
# result = crew.kickoff()
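The manager's role can be pictured as a control loop: assign each task to an agent, review the result, and reassign if it falls short. The following is a conceptual toy only; CrewAI's real manager is an LLM agent that reasons about these decisions rather than following hand-written rules, and all names here are illustrative.

```python
# Conceptual toy of the hierarchical control loop (NOT CrewAI's
# implementation): a manager picks an agent per task, reviews the
# output, and retries once if the review fails.

def run_hierarchical(tasks, agents, pick_agent, approve):
    results = []
    for task in tasks:
        agent_name = pick_agent(task)          # manager decides who works
        output = agents[agent_name](task)      # chosen agent executes
        if not approve(output):                # manager reviews the result
            output = agents[agent_name](task + " (revised)")
        results.append((agent_name, output))
    return results

agents = {
    "researcher": lambda t: f"[data] {t}",
    "writer": lambda t: f"[prose] {t}",
}
# Route data-gathering tasks to the researcher, everything else to the writer
pick_agent = lambda task: "researcher" if "data" in task else "writer"
approve = lambda output: len(output) > 10

results = run_hierarchical(
    ["Gather data on EV sales", "Write the executive summary"],
    agents, pick_agent, approve,
)
```

In the real framework, pick_agent and approve are LLM reasoning steps, which is exactly why hierarchical mode costs more tokens than sequential mode.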

Custom Manager Agent

For finer control over the manager's behavior, provide your own manager agent instead of relying on the auto-generated one.

from crewai import Agent, Crew, Process

manager = Agent(
    role="Project Manager",
    goal="Coordinate the team to deliver high-quality results on time",
    backstory=(
        "You are an experienced project manager who excels at breaking down "
        "complex projects, assigning work to the right people, and ensuring "
        "quality standards are met. You review all deliverables before approval."
    ),
    allow_delegation=True,
)

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=tasks,
    process=Process.hierarchical,
    manager_agent=manager,  # Use custom manager instead of manager_llm
)
Warning

Hierarchical mode uses more LLM calls than sequential mode because the manager agent reasons about task assignment and reviews outputs. Expect 2x to 3x the token usage compared to sequential execution for the same set of tasks. Monitor costs carefully during development.

4. Crew Kickoff and Input Variables

The kickoff() method starts crew execution. You can pass dynamic input variables using the inputs parameter, which is a dictionary. These variables are interpolated into task descriptions and expected outputs using curly-brace syntax. This makes your crews reusable across different topics, datasets, or configurations without modifying the task definitions.

from crewai import Agent, Task, Crew

researcher = Agent(
    role="Topic Researcher",
    goal="Research any given topic comprehensively",
    backstory="You are a versatile researcher.",
)
writer = Agent(
    role="Content Writer",
    goal="Write engaging articles on any subject",
    backstory="You are a versatile content creator.",
)

research_task = Task(
    description="Research the topic: {topic}. Focus on recent developments from {year}.",
    expected_output="A summary of key findings about {topic}.",
    agent=researcher,
)
write_task = Task(
    description="Write a {word_count}-word article about {topic} for a {audience} audience.",
    expected_output="A polished article of approximately {word_count} words.",
    agent=writer,
    context=[research_task],
)

crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])

# Run the same crew with different inputs
result1 = crew.kickoff(inputs={
    "topic": "quantum computing",
    "year": "2025",
    "word_count": "1000",
    "audience": "technical",
})

result2 = crew.kickoff(inputs={
    "topic": "sustainable agriculture",
    "year": "2025",
    "word_count": "800",
    "audience": "general",
})
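The interpolation itself behaves like Python's curly-brace string formatting: each {name} placeholder in a description or expected output is replaced by the matching key from inputs. A minimal sketch, assuming simple str.format-style substitution:

```python
# Minimal sketch of curly-brace input interpolation, assuming it behaves
# like Python's str.format over the task description templates.
description = "Write a {word_count}-word article about {topic} for a {audience} audience."

inputs = {"topic": "quantum computing", "word_count": "1000", "audience": "technical"}
rendered = description.format(**inputs)
print(rendered)
# Write a 1000-word article about quantum computing for a technical audience.
```

If a placeholder has no matching key in inputs, the template cannot be rendered, so make sure every {name} used in your tasks appears in the inputs dictionary.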

5. Verbose Logging

Crew-level verbose logging controls the overall output detail during execution. When enabled, you see each agent's reasoning steps, tool calls, and task transitions. This is invaluable during development and debugging.

from crewai import Crew

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    verbose=True,  # Show detailed execution logs
)
# Output during kickoff():
# [Researcher] Starting task: Research the topic...
# [Researcher] Thinking: I should search for recent developments...
# [Researcher] Using tool: SerperDevTool
# [Researcher] Task complete.
# [Writer] Starting task: Write an article...
# ...

6. Memory Settings

Crew-level memory enables persistent storage that survives across multiple kickoff() calls. This is distinct from agent-level memory (which lasts only within a single execution). CrewAI supports three types of crew memory: short-term, long-term, and entity memory. These are covered in depth in Section N.5. Here we show the basic configuration.

from crewai import Crew

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    memory=True,  # Enable all memory types
    verbose=True,
)

# First run: agents build up memory
result1 = crew.kickoff(inputs={"topic": "RAG systems"})

# Second run: agents can recall insights from the first run
result2 = crew.kickoff(inputs={"topic": "advanced RAG patterns"})

7. Rate Limiting with max_rpm

When using rate-limited APIs (both LLM providers and tool APIs), you can set max_rpm (maximum requests per minute) at the crew level. CrewAI will automatically throttle requests to stay within the limit, adding small delays between calls as needed. This prevents 429 (rate limit) errors that can derail long-running crew executions.

from crewai import Crew

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, report_task],
    max_rpm=30,  # Limit to 30 API requests per minute
    verbose=True,
)
Tip

Set max_rpm to about 80% of your actual API rate limit. This leaves headroom for occasional bursts and prevents the crew from hitting the hard limit during intensive tool-use phases.
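The throttling idea is simple: an RPM limit of N means at most one request every 60/N seconds. The following sketch shows one way to enforce that spacing; it is a conceptual illustration, not CrewAI's implementation (the injectable clock and sleep parameters are there only to make the sketch easy to test).

```python
import time

# Conceptual sketch of RPM throttling (not CrewAI's implementation):
# enforce a minimum interval of 60 / max_rpm seconds between requests.

class RpmThrottle:
    def __init__(self, max_rpm, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = 60.0 / max_rpm  # seconds between requests
        self.clock = clock
        self.sleep = sleep
        self.last_request = None

    def wait(self):
        """Block until the next request is allowed, then record it."""
        now = self.clock()
        if self.last_request is not None:
            elapsed = now - self.last_request
            if elapsed < self.min_interval:
                self.sleep(self.min_interval - elapsed)
        self.last_request = self.clock()

throttle = RpmThrottle(max_rpm=30)  # at most one request every 2 seconds
# Call throttle.wait() before each API request to stay under the limit.
```

With max_rpm=30, back-to-back calls are spaced 2 seconds apart, which is exactly the delay-insertion behavior described above.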

8. Working with Crew Output

The kickoff() method returns a CrewOutput object that provides access to the final result in multiple formats. You can get the raw text, structured JSON, Pydantic objects (if the final task uses structured output), and metadata about token usage.

from crewai import Crew

crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
result = crew.kickoff(inputs={"topic": "LLM agents"})

# Access the final output in different formats
print(result.raw) # Raw text output from the last task
print(result.json_dict) # Dict if the last task used output_json
print(result.pydantic) # Pydantic model if last task used output_pydantic
print(result.token_usage) # Token usage statistics

# Access individual task outputs
for task_output in result.tasks_output:
    print(f"Task: {task_output.description[:50]}")
    print(f"Output: {task_output.raw[:100]}")
    print("---")
# Task: Research the key features of CrewAI for LLM ag
# Output: CrewAI provides a framework for orchestrating multi-agent workflows. Key features include: role-ba
# ---
# Task: Write a developer-focused blog post about Crew
# Output: # Getting Started with CrewAI: Build Multi-Agent AI Applications CrewAI is an open-source framewor
# ---
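This traversal pattern is handy for assembling per-task outputs into a single deliverable. The sketch below uses stand-in dataclasses (hypothetical, not CrewAI classes) so the pattern can be shown without running a crew:

```python
from dataclasses import dataclass

# Stand-in object (not a CrewAI class) mimicking the fields of a task
# output, used to illustrate collecting results into one markdown report.

@dataclass
class FakeTaskOutput:
    description: str
    raw: str

def tasks_to_markdown(tasks_output):
    """Render each task's output as a markdown section."""
    sections = []
    for out in tasks_output:
        sections.append(f"## {out.description}\n\n{out.raw}")
    return "\n\n".join(sections)

report = tasks_to_markdown([
    FakeTaskOutput("Research CrewAI features", "- role-based agents\n- task context"),
    FakeTaskOutput("Write the blog post", "Draft of the introductory post."),
])
print(report)
```

With a real CrewOutput, you would pass result.tasks_output in place of the stand-in list.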
Key Insight

The Crew class is where everything comes together. Sequential mode provides predictable, debuggable execution; hierarchical mode adds dynamic task assignment through a manager agent. Use input variables to make crews reusable, verbose logging for debugging, max_rpm for rate limiting, and memory for cross-run persistence. In the next section, we explore advanced patterns including delegation mechanics, memory types, and callbacks.