Tools give agents the ability to interact with the world beyond text generation. A CrewAI tool is a Python function that an agent can invoke during its reasoning loop to search the web, read files, query APIs, or perform computations. CrewAI ships with a library of built-in tools and provides a simple @tool decorator for creating custom ones. This section covers both, along with tool caching, error handling, and best practices for tool design.
1. How Tools Work in CrewAI
When an agent encounters a task that requires external information or actions, it decides (through LLM reasoning) to call one of its available tools. The tool executes Python code, returns a result as a string, and the agent incorporates that result into its next reasoning step. This is the same function-calling pattern used by OpenAI and other providers, but CrewAI wraps it in a higher-level abstraction that handles argument parsing, error recovery, and result formatting automatically.
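To make that loop concrete, here is a minimal, self-contained simulation of the execute-observe cycle. The LLM's decision step is stubbed out with a canned plan, and all names here (`web_search`, the dict-based tool registry) are illustrative, not CrewAI APIs:

```python
# Minimal sketch of the agent tool-call loop. In CrewAI the model itself
# chooses the tool and its arguments; here that decision is a canned plan.

def web_search(query: str) -> str:
    """Stand-in tool: a real one would call a search API."""
    return f"Top result for '{query}': CrewAI is an agent framework."

tools = {"web_search": web_search}

# A canned "plan" standing in for the LLM's reasoning steps.
plan = [("web_search", {"query": "what is CrewAI"})]

observations = []
for tool_name, args in plan:
    result = tools[tool_name](**args)  # execute the chosen tool
    observations.append(result)        # feed the result back as an observation

print(observations[0])
```

Each observation is appended to the agent's context, which is what lets the next reasoning step build on the tool's output.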
Tools are assigned to agents via the tools parameter, which accepts a list
of tool instances. Each agent can have a different set of tools, allowing you to scope
capabilities precisely.
from crewai import Agent
from crewai_tools import SerperDevTool, FileReadTool
# Create tool instances
search_tool = SerperDevTool()
file_tool = FileReadTool()
researcher = Agent(
    role="Research Analyst",
    goal="Find accurate information using web search and local files",
    backstory="You are a meticulous researcher who verifies claims from multiple sources.",
    tools=[search_tool, file_tool],  # Agent can use both tools
)
Install the CrewAI tools package separately: pip install crewai-tools. The core crewai package does not include built-in tools by default.
2. Built-in Tools: SerperDevTool
The SerperDevTool provides web search capabilities via the
Serper API. It is the most commonly used built-in tool
because it gives agents access to real-time information from the web. You need a Serper
API key stored in the SERPER_API_KEY environment variable.
import os
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool
os.environ["SERPER_API_KEY"] = "your-serper-api-key"
search_tool = SerperDevTool()
researcher = Agent(
    role="News Researcher",
    goal="Find the latest news on a given topic",
    backstory="You are a journalist who tracks breaking news.",
    tools=[search_tool],
)
news_task = Task(
    description="Find the 3 most important AI news stories from the past week.",
    expected_output="A numbered list of 3 news stories, each with title, source, and one-sentence summary.",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[news_task])
# result = crew.kickoff()
3. Built-in Tools: FileReadTool and WebsiteSearchTool
CrewAI provides several other built-in tools for common operations. The
FileReadTool reads local files, which is useful when agents need to process
documents, configuration files, or data. The WebsiteSearchTool uses
retrieval-augmented generation (RAG) to search and extract information from specific
websites.
from crewai import Agent
from crewai_tools import FileReadTool, WebsiteSearchTool
# FileReadTool: read any local file
file_reader = FileReadTool(file_path="data/quarterly_report.txt")
# WebsiteSearchTool: RAG-based search on a specific website
docs_searcher = WebsiteSearchTool(website="https://docs.crewai.com")
analyst = Agent(
    role="Document Analyst",
    goal="Extract insights from documents and web sources",
    backstory="You specialize in document analysis and information extraction.",
    tools=[file_reader, docs_searcher],
)
The table below summarizes the most commonly used built-in tools.
| Tool | Purpose | Required Config |
|---|---|---|
| SerperDevTool | Web search via Serper API | SERPER_API_KEY env var |
| FileReadTool | Read local files | Optional file_path |
| WebsiteSearchTool | RAG search on a website | Optional website URL |
| DirectoryReadTool | List files in a directory | Optional directory path |
| CodeInterpreterTool | Execute Python code | None |
| ScrapeWebsiteTool | Scrape raw content from a URL | None |
4. Creating Custom Tools with the @tool Decorator
When the built-in tools do not cover your use case, you can create custom tools using
the @tool decorator. This is the simplest way to expose any Python function
to a CrewAI agent. The function's docstring becomes the tool description that the LLM
uses to decide when to invoke it, so write descriptive docstrings.
from crewai import Agent
from crewai.tools import tool
@tool("Calculate Compound Interest")
def compound_interest(principal: float, rate: float, years: int) -> str:
    """Calculate compound interest. Takes principal amount in dollars,
    annual interest rate as a decimal (e.g., 0.05 for 5%), and number
    of years. Returns the final amount and total interest earned."""
    final = principal * (1 + rate) ** years
    interest = final - principal
    return f"Principal: ${principal:,.2f}, Final: ${final:,.2f}, Interest: ${interest:,.2f}"
financial_agent = Agent(
    role="Financial Calculator",
    goal="Perform accurate financial calculations",
    backstory="You are a financial advisor who uses precise calculations.",
    tools=[compound_interest],
)
# The agent can now call compound_interest during its reasoning loop
The tool's docstring is critical. The LLM reads it to decide when and how to call the tool. A vague docstring (e.g., "Does stuff") will cause the agent to misuse or ignore the tool. Always describe the inputs, what the tool does, and what it returns.
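As a sanity check on the arithmetic the tool performs, the standard compound-interest formula A = P(1 + r)^t can be verified directly in plain Python (the figures below are just that calculation, independent of CrewAI):

```python
# Verify the compound-interest formula used by the tool:
# final = principal * (1 + rate) ** years
principal, rate, years = 1000.0, 0.05, 10

final = principal * (1 + rate) ** years
interest = final - principal

print(f"Final: ${final:,.2f}")      # → Final: $1,628.89
print(f"Interest: ${interest:,.2f}")  # → Interest: $628.89
```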
5. Custom Tools with the BaseTool Class
For tools that need initialization logic, state management, or more complex configuration,
subclass BaseTool instead of using the decorator. This approach gives you
full control over the tool's lifecycle, including constructor arguments and instance
variables.
from crewai.tools import BaseTool
from pydantic import Field
import requests
class GitHubRepoTool(BaseTool):
    name: str = "GitHub Repository Info"
    description: str = (
        "Fetches metadata about a GitHub repository. "
        "Input should be in the format 'owner/repo' (e.g., 'crewAIInc/crewAI')."
    )
    api_token: str = Field(default="", description="GitHub personal access token")

    def _run(self, repo_path: str) -> str:
        """Fetch repository metadata from the GitHub API."""
        headers = {"Accept": "application/vnd.github.v3+json"}
        if self.api_token:
            headers["Authorization"] = f"token {self.api_token}"
        response = requests.get(
            f"https://api.github.com/repos/{repo_path}",
            headers=headers,
        )
        if response.status_code != 200:
            return f"Error: Could not fetch repo {repo_path} (HTTP {response.status_code})"
        data = response.json()
        return (
            f"Repository: {data['full_name']}\n"
            f"Stars: {data['stargazers_count']}\n"
            f"Forks: {data['forks_count']}\n"
            f"Language: {data['language']}\n"
            f"Description: {data['description']}"
        )
# Usage
github_tool = GitHubRepoTool(api_token="ghp_your_token_here")
agent = Agent(
    role="Open Source Analyst",
    goal="Evaluate GitHub repositories",
    backstory="You assess open source projects for adoption readiness.",
    tools=[github_tool],
)
6. Tool Caching
When multiple agents (or the same agent across iterations) call the same tool with
the same arguments, CrewAI can cache the results to avoid redundant API calls. Caching
is enabled by default for most tools. You can control it with the cache_function
parameter, which lets you define custom logic for when to cache and when to fetch fresh
results.
from crewai.tools import tool
def should_cache(args: dict, result: str) -> bool:
    """Only cache successful results (not errors)."""
    return not result.startswith("Error")

@tool("Stock Price Lookup")
def stock_price(ticker: str) -> str:
    """Look up the current stock price for a given ticker symbol.
    Returns the price and daily change percentage."""
    # In production, this would call a real API
    return f"{ticker}: $150.25 (+1.3%)"
stock_price.cache_function = should_cache
7. Tool Error Handling
Tools interact with external systems that can fail: APIs go down, files are missing, rate limits are hit. CrewAI provides a built-in error handling mechanism. When a tool raises an exception, CrewAI catches it and passes the error message back to the agent as an observation. The agent can then decide to retry, try a different tool, or work around the failure. You can also implement explicit error handling inside your tool to provide more informative error messages.
from crewai.tools import tool
import requests
@tool("Fetch URL Content")
def fetch_url(url: str) -> str:
    """Fetch the text content of a web page. Input is a full URL
    starting with http:// or https://. Returns the page text or
    an error message if the request fails."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.text[:5000]  # Limit response size
    except requests.exceptions.Timeout:
        return f"Error: Request to {url} timed out after 10 seconds. Try again later."
    except requests.exceptions.HTTPError as e:
        return f"Error: HTTP {e.response.status_code} when fetching {url}."
    except requests.exceptions.RequestException as e:
        return f"Error: Could not fetch {url}. Reason: {str(e)}"
Return descriptive error strings from your tools instead of raising exceptions. This gives the agent enough context to reason about what went wrong and try an alternative approach, rather than simply seeing a generic failure message.
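For transient failures such as timeouts and rate limits, a small retry loop inside the tool can save the agent an entire reasoning round-trip. A sketch with a fake flaky operation standing in for a real request (flaky_fetch and fetch_with_retries are illustrative names, not CrewAI APIs):

```python
import time

# Hypothetical flaky operation: fails twice, then succeeds.
attempts = {"n": 0}

def flaky_fetch() -> str:
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("simulated timeout")
    return "page content"

def fetch_with_retries(retries: int = 3, backoff: float = 0.01) -> str:
    """Retry with exponential backoff; return an error string on final failure."""
    for attempt in range(retries):
        try:
            return flaky_fetch()
        except TimeoutError:
            if attempt == retries - 1:
                return "Error: request timed out after all retries."
            time.sleep(backoff * (2 ** attempt))  # back off before retrying
    return "Error: unreachable"

print(fetch_with_retries())  # → page content
```

On final failure the function still returns a descriptive error string rather than raising, so the agent sees a useful observation either way.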
8. Assigning Tools to Agents vs. Tasks
Tools can be assigned at two levels in CrewAI. Agent-level tools are available for every task the agent performs. Task-level tools are available only for that specific task, supplementing (not replacing) the agent's default tools. Use task-level tools when a particular task requires a specialized capability that other tasks do not need.
from crewai import Agent, Task
from crewai_tools import SerperDevTool, FileReadTool
from crewai.tools import tool
@tool("Database Query")
def query_db(sql: str) -> str:
    """Execute a read-only SQL query against the analytics database.
    Returns the query results as a formatted table."""
    # In production, connect to a real database
    return "| id | name | revenue |\n| 1 | Acme | 1.2M |"
search = SerperDevTool()
file_reader = FileReadTool()
analyst = Agent(
    role="Business Analyst",
    goal="Answer business questions using data",
    backstory="You combine web research with internal data analysis.",
    tools=[search, file_reader],  # Available for all tasks
)
# This specific task also gets database access
db_task = Task(
    description="What was Acme Corp's revenue last quarter? Check our internal database first.",
    expected_output="Revenue figure with source (internal DB or web).",
    agent=analyst,
    tools=[query_db],  # Additional tool for this task only
)
Tools transform agents from text generators into actors that can search, read, compute, and call APIs. Use built-in tools for common operations (web search, file reading), and the @tool decorator or BaseTool subclass for custom capabilities. Write clear docstrings, handle errors gracefully, and scope tool access to the agents and tasks that need it. In the next section, we will assemble agents, tasks, and tools into crews.