"Everyone has a demo that works once. The hard part is building something that works a thousand times, for a thousand different users, without bankrupting you on API costs."
Compass, Cost-Conscious AI Agent
Chapter Overview
Building an AI product is not primarily an engineering scheduling problem; it is a feasibility-and-evidence problem. AI shifts product design toward deciding which roles the model can perform reliably enough, cheaply enough, and fast enough to be useful in a workflow. This chapter gives you the frameworks to navigate that shift before you write a single line of production code.
You will learn to frame AI product hypotheses, choose the right model role from a taxonomy of patterns (scout, drafter, filter, ranker, verifier), assess technical and regulatory feasibility with a scoring framework, and study real-world case studies that illustrate how role assignment decisions play out in practice. The chapter produces three concrete, reusable artifacts: an AI Role Canvas, a Feasibility Scorecard, and a Role Assignment Brief.
This chapter explicitly references (not re-teaches) the technical depth from earlier chapters: APIs (Ch 10), prompting (Ch 11), RAG (Ch 20), evaluation (Ch 29), and strategy (Ch 33). Its unique contribution is the hypothesis-formation stage: the thinking that happens before the build loop begins (covered in Chapter 37) and the shipping decisions that follow (covered in Chapter 38).
Parts I through X taught you how LLMs work, how to prompt and fine-tune them, and how to deploy them safely. This chapter is where all of that knowledge converges into a product decision: given an idea, should you build it, and if so, what role should the model play? The frameworks here (AI Role Canvas, Feasibility Scorecard) give you a structured way to evaluate feasibility before committing engineering resources, feeding directly into the build loop of Chapter 37 and the shipping decisions of Chapter 38.
Learning Outcomes
- Convert an idea into an AI product hypothesis with explicit model role, data needs, risk tier, and cost assumptions
- Choose an initial architecture pattern (API-only, RAG, tool-using agent, hybrid) using the book's decision framework
- Assign model roles from a taxonomy of patterns: scout, drafter, filter, ranker, verifier
- Score feasibility across technical, data, regulatory, and economic dimensions before committing to a build
- Analyze real-world case studies to see how role assignment decisions succeed or fail in practice
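To make the first outcome concrete, here is a minimal sketch of what "an AI product hypothesis with explicit model role, data needs, risk tier, and cost assumptions" might look like as a data structure. The class names, fields, and example values are illustrative assumptions, not a specification from this book:

```python
from dataclasses import dataclass
from enum import Enum

class ModelRole(Enum):
    """The chapter's role taxonomy, expressed as an enum (illustrative)."""
    SCOUT = "scout"        # surfaces candidates for a human to review
    DRAFTER = "drafter"    # produces a first draft a human edits
    FILTER = "filter"      # screens out items that fail a criterion
    RANKER = "ranker"      # orders items by relevance or priority
    VERIFIER = "verifier"  # checks another system's output

@dataclass
class ProductHypothesis:
    """One idea, stated with the explicit assumptions the chapter asks for."""
    idea: str
    role: ModelRole
    data_needs: list[str]
    risk_tier: str                 # e.g. "low", "medium", "high"
    est_cost_per_task_usd: float   # cost assumption to validate later

hypo = ProductHypothesis(
    idea="Summarize support tickets before human triage",
    role=ModelRole.DRAFTER,
    data_needs=["historical tickets", "resolution notes"],
    risk_tier="medium",
    est_cost_per_task_usd=0.004,
)
print(hypo.role.value)  # drafter
```

Writing the hypothesis down as typed fields, rather than prose, forces every assumption (role, data, risk, cost) to be stated before any build work begins.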
Prerequisites
- Chapter 10: LLM APIs (making API calls, structured output)
- Chapter 11: Prompt Engineering (prompt design, guardrails)
- Chapter 33: Strategy & ROI (enterprise lens on AI adoption)
- Basic Python proficiency and familiarity with REST APIs
Sections
- 36.1 What Makes AI Products Different. Probabilistic behavior, non-binary correctness, human-AI UX design for uncertainty and trust, ML maintenance debt, and agent failure modes.
- 36.2 Choosing the Model's Role. The copilot-to-autopilot spectrum. Role assignment patterns (scout, drafter, filter, ranker, verifier). The AI Role Canvas deliverable.
- 36.3 Risk and Feasibility Assessment. Feasibility scoring across technical, data, regulatory, and economic dimensions. Data readiness audits, regulatory pre-screening, and the Feasibility Scorecard deliverable.
- 36.4 Case Studies: Role Assignment in Practice. Worked examples of role assignment decisions across domains: healthcare triage, e-commerce ranking, legal review, and customer support routing.
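The feasibility scoring in Section 36.3 can be sketched as a weighted average over the four dimensions it names. The weights, the 1-5 scale, and the example scores below are illustrative assumptions, not the chapter's actual scorecard:

```python
# Illustrative weights over the four dimensions from Section 36.3.
# These are assumed values for the sketch, not prescribed by the book.
WEIGHTS = {"technical": 0.3, "data": 0.3, "regulatory": 0.2, "economic": 0.2}

def feasibility_score(scores: dict[str, int]) -> float:
    """Each dimension is scored 1-5; return the weighted average."""
    assert set(scores) == set(WEIGHTS), "score every dimension exactly once"
    return sum(WEIGHTS[dim] * s for dim, s in scores.items())

# Hypothetical candidate: strong tech and compliance, weak economics.
candidate = {"technical": 4, "data": 3, "regulatory": 5, "economic": 2}
print(round(feasibility_score(candidate), 2))  # 3.5
```

A single weighted number is a starting point, not a verdict: a low score on any one dimension (e.g. regulatory) may veto a build regardless of the average.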
What's Next?
In the next chapter, Chapter 37: Building and Steering AI Products, you will take your product hypothesis and learn the observe-steer development loop that turns it into a working prototype.
