# Team Architect
Design optimal agent team compositions for any project. This skill walks through a structured process to analyze requirements, select the right agents, choose a topology, assign models, size the team, and produce a ready-to-use blueprint.
## Process Overview

```
Requirements --> Role Selection --> Topology --> Model Assignment --> Sizing --> Blueprint
```
## Step 1: Requirements Analysis
Before selecting any agents, fully understand what the project demands.
Gather these inputs:
- Scope: What is being built, changed, or fixed? List concrete deliverables.
- Complexity: Is this a single-file fix or a multi-system migration?
- Domains involved: Frontend, backend, database, infrastructure, ML, security, business logic?
- Timeline pressure: Is this urgent (incident) or planned (feature work)?
- Quality requirements: Does this need security review? Performance testing? Legal sign-off?
- Existing codebase: Greenfield or brownfield? What languages and frameworks?
Classify the project into one of these categories:
| Category | Description | Typical Size |
|---|---|---|
| Bug fix | Single issue, narrow scope | Solo or Pair |
| Feature | New capability in existing system | Small (3-4) |
| Full-stack feature | Frontend + backend + database changes | Medium (5-6) |
| Migration | Moving between systems, languages, or platforms | Medium to Large |
| Greenfield | Building something new from scratch | Small to Medium |
| Incident | Production issue requiring rapid response | Small (3-4) |
| Data/ML pipeline | Data processing, model training, or analytics | Small to Medium |
| Content/docs | Writing, editing, publishing | Pair to Small |
| Security audit | Vulnerability assessment and remediation | Small (3-4) |
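The category-to-size mapping above can be captured as a small lookup table for automation. A sketch: the (min, max) counts come from the table, but the key names are our own convention, not a fixed API.

```python
# Typical team size per project category, taken from the table above.
# (min, max) agent counts; key names are illustrative, not canonical.
TYPICAL_SIZE = {
    "bug-fix": (1, 2),             # solo or pair
    "feature": (3, 4),             # small
    "full-stack-feature": (5, 6),  # medium
    "migration": (5, 8),           # medium to large
    "greenfield": (3, 6),          # small to medium
    "incident": (3, 4),            # small
    "data-ml-pipeline": (3, 6),    # small to medium
    "content-docs": (2, 4),        # pair to small
    "security-audit": (3, 4),      # small
}

def suggested_size(category: str) -> tuple[int, int]:
    """Return the (min, max) team size for a project category."""
    return TYPICAL_SIZE[category]
```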
## Step 2: Role Selection

Map each identified domain to the best agent type from the catalog. See `references/role-catalog.md` for the complete mapping.
Selection rules:
- Every team needs exactly one lead. Pick the agent whose domain best matches the project's primary concern.
- Add one implementation agent per major domain of work.
- Add quality roles only when the project demands them (security-sensitive, performance-critical, or large scope).
- Add infrastructure roles only when deployment, cloud, or database changes are involved.
- Prefer agents that complement each other. Check the "works well with" field in the role catalog.
- When in doubt, leave an agent out. You can always add one later.
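The selection rules above can be sketched as a function. The domain-to-role mapping here is a hypothetical excerpt for illustration, not the real catalog; see `references/role-catalog.md` for the actual agent types.

```python
# Hypothetical domain -> implementer mapping; the real catalog is larger.
IMPLEMENTERS = {
    "backend": "backend-architect",
    "frontend": "frontend-developer",
    "database": "database-admin",
    "testing": "test-automator",
}

def select_roles(primary_domain: str, domains: list[str],
                 needs_quality_review: bool = False) -> list[str]:
    """Pick roles per the selection rules: one lead for the primary
    concern, one implementer per major domain, quality roles on demand."""
    roles = [IMPLEMENTERS[primary_domain]]  # exactly one lead
    roles += [IMPLEMENTERS[d] for d in domains if d != primary_domain]
    if needs_quality_review:
        roles.append("code-reviewer")       # quality role only when demanded
    return roles
```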
Color assignment (determines priority and context ordering):
| Color | Priority | Assigned To |
|---|---|---|
| green | Lead | Team lead, highest authority |
| yellow | 2nd | Primary implementer or co-lead |
| purple | 3rd | Secondary implementer or specialist |
| orange | 4th | Quality/review role |
| pink | 5th | Support or auxiliary role |
## Step 3: Topology Selection

Choose how agents communicate. See `references/team-patterns.md` for full details.
Decision flowchart:

```
START
  |
  v
How many agents?
  |
  +-- 1-2 agents --> Mesh (or no topology needed)
  |
  +-- 3-6 agents
  |     |
  |     v
  |   Is the work sequential (stage A then B then C)?
  |     |
  |     +-- Yes --> Pipeline
  |     |
  |     +-- No
  |          |
  |          v
  |        Do all agents need to talk to each other?
  |          |
  |          +-- Yes, and team <= 3 --> Mesh
  |          |
  |          +-- No --> Hub-and-Spoke (default)
  |
  +-- 7+ agents
        |
        v
      Can you split into 2-3 sub-teams?
        |
        +-- Yes --> Hierarchical (sub-leads under project lead)
        |
        +-- No --> Reconsider scope. 7+ agents in a flat structure is chaos.
```
Default choice: Hub-and-Spoke. It works for 80% of projects. Only deviate when you have a clear reason.
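The flowchart translates directly into code. A sketch under the stated defaults; the flag names are illustrative, and the real decision also weighs project specifics:

```python
def choose_topology(n_agents: int, sequential: bool = False,
                    all_to_all: bool = False) -> str:
    """Pick a communication topology following the decision flowchart.

    `sequential` and `all_to_all` are illustrative stand-ins for the
    two questions in the flowchart.
    """
    if n_agents <= 2:
        return "mesh"           # or no topology needed at all
    if n_agents <= 6:
        if sequential:
            return "pipeline"   # stage A then B then C
        if all_to_all and n_agents <= 3:
            return "mesh"
        return "hub-spoke"      # the 80% default
    # 7+ agents: split into sub-teams under a project lead,
    # or reconsider scope entirely.
    return "hierarchical"
```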
## Step 4: Model Assignment
Assign compute models based on role complexity.
| Model | Cost | Use For |
|---|---|---|
| opus | High | Leads, architects, complex reasoning, security audits, code review of critical paths |
| sonnet | Medium | Implementation agents, standard coding, testing, documentation |
| haiku | Low | Simple formatting, boilerplate generation, log analysis, context summarization |
Assignment rules:
- The lead always gets opus.
- Agents making complex architectural decisions or doing security-sensitive work get opus.
- Standard implementation agents get sonnet.
- Agents doing repetitive or simple tasks get haiku.
- When budget is tight, only the lead gets opus; everyone else gets sonnet.
- Never assign haiku to a lead or to any agent doing code review.
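The assignment rules above can be sketched as a function. The boolean flags are illustrative stand-ins for project attributes, not a real API:

```python
def assign_model(role: str, *, is_lead: bool = False,
                 security_sensitive: bool = False,
                 simple_task: bool = False,
                 tight_budget: bool = False) -> str:
    """Assign a compute model per the rules above (a sketch).

    Note the ordering: lead status wins, then a tight budget downgrades
    security work to sonnet, and haiku is reserved for simple tasks.
    """
    if is_lead:
        return "opus"    # the lead always gets opus
    if security_sensitive and not tight_budget:
        return "opus"    # complex/security work warrants opus
    if simple_task:
        return "haiku"   # never used for leads or code reviewers
    return "sonnet"      # standard implementation default
```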
## Step 5: Team Sizing

Apply the sizing guide. See `references/sizing-guide.md` for the full decision matrix.
Core principle: smaller is almost always better.
- Start with the minimum viable team.
- Every additional agent adds coordination overhead (roughly 5-10% per agent).
- A team of 3 focused agents will outperform a team of 7 with idle members.
- Only scale up when you can clearly justify what each additional agent contributes.
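The overhead claim can be turned into a rough capacity estimate. This is a back-of-envelope model, not a measured law: it assumes each additional teammate costs every agent a fixed slice of productive time.

```python
def effective_capacity(n_agents: int, overhead_per_agent: float = 0.075) -> float:
    """Estimate productive output in agent-equivalents.

    Assumes each extra teammate costs every member roughly 5-10% of
    their time (0.075 = the midpoint). A sketch for intuition only.
    """
    per_agent = max(0.0, 1.0 - overhead_per_agent * (n_agents - 1))
    return n_agents * per_agent
```

At the 10% end of the range, total output peaks around 5-6 agents (about 3.0 agent-equivalents) and a 7th agent actually reduces it, which is the quantitative case for switching to hierarchical sub-teams beyond that size.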
Validation checklist:
- Every agent has a distinct, non-overlapping responsibility.
- No agent is idle for more than 30% of the project duration.
- The lead can realistically coordinate all direct reports (max 5-6).
- The topology supports the communication patterns the project requires.
## Step 6: Blueprint Generation
Output a JSON blueprint that team-builder can consume directly.
Blueprint schema:

```json
{
  "team_name": "descriptive-kebab-case-name",
  "description": "One sentence describing the team's mission",
  "topology": "hub-spoke | pipeline | mesh | hierarchical",
  "agents": [
    {
      "role": "agent-type-from-catalog",
      "color": "green | yellow | purple | orange | pink",
      "model": "opus | sonnet | haiku",
      "responsibility": "One sentence: what this agent owns"
    }
  ],
  "communication": {
    "pattern": "Description of how agents interact",
    "lead": "agent-role-name"
  }
}
```
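Before handing a blueprint to team-builder, it can be sanity-checked against the schema. A minimal validation sketch; team-builder's own checks may be stricter:

```python
# Allowed values, taken from the schema above.
VALID_TOPOLOGIES = {"hub-spoke", "pipeline", "mesh", "hierarchical"}
VALID_COLORS = {"green", "yellow", "purple", "orange", "pink"}
VALID_MODELS = {"opus", "sonnet", "haiku"}

def validate_blueprint(bp: dict) -> list[str]:
    """Return a list of problems; an empty list means the blueprint looks usable."""
    errors = []
    for key in ("team_name", "description", "topology", "agents", "communication"):
        if key not in bp:
            errors.append(f"missing key: {key}")
    if bp.get("topology") not in VALID_TOPOLOGIES:
        errors.append(f"unknown topology: {bp.get('topology')}")
    agents = bp.get("agents", [])
    if not any(a.get("color") == "green" for a in agents):
        errors.append("no green lead agent")
    for a in agents:
        if a.get("color") not in VALID_COLORS:
            errors.append(f"bad color: {a.get('color')}")
        if a.get("model") not in VALID_MODELS:
            errors.append(f"bad model: {a.get('model')}")
    lead = bp.get("communication", {}).get("lead")
    if lead and lead not in {a.get("role") for a in agents}:
        errors.append(f"lead {lead!r} is not an agent role")
    return errors
```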
## Worked Examples

### Example 1: REST API Feature with Database Changes
Requirements:
- Add a new `/users/preferences` endpoint to an existing Node.js/Express API
- Requires a new PostgreSQL table and migrations
- Must include input validation, auth checks, and unit tests
- Timeline: standard feature work, no rush
Analysis:
- Domains: backend (primary), database, testing
- Complexity: moderate, well-defined scope
- Category: Feature
Team design:
```json
{
  "team_name": "user-preferences-api",
  "description": "Build the /users/preferences endpoint with database schema and tests",
  "topology": "hub-spoke",
  "agents": [
    {
      "role": "backend-architect",
      "color": "green",
      "model": "opus",
      "responsibility": "Lead: API design, route implementation, code integration"
    },
    {
      "role": "database-admin",
      "color": "yellow",
      "model": "sonnet",
      "responsibility": "Schema design, migration scripts, query optimization"
    },
    {
      "role": "test-automator",
      "color": "purple",
      "model": "sonnet",
      "responsibility": "Unit tests, integration tests, validation edge cases"
    }
  ],
  "communication": {
    "pattern": "Hub-and-spoke: backend-architect coordinates all work",
    "lead": "backend-architect"
  }
}
```
Why this works: Three agents, each with a clear domain. The backend-architect leads because the primary work is API implementation. The database-admin handles schema independently but coordinates through the lead. The test-automator writes tests in parallel once the API contract is defined. No idle agents, no overlapping responsibilities.
### Example 2: Production Incident -- Payment Processing Failure
Requirements:
- Payments are failing in production with a 500 error
- Affects the checkout flow; revenue loss is ongoing
- Need to diagnose, fix, and verify immediately
- Stack: Python backend, Stripe integration, PostgreSQL
Analysis:
- Domains: backend, payment, database, debugging
- Complexity: high urgency, unknown root cause
- Category: Incident
Team design:
```json
{
  "team_name": "payment-incident-response",
  "description": "Diagnose and fix production payment processing failures",
  "topology": "hub-spoke",
  "agents": [
    {
      "role": "incident-responder",
      "color": "green",
      "model": "opus",
      "responsibility": "Lead: triage, coordinate investigation, approve fixes"
    },
    {
      "role": "error-detective",
      "color": "yellow",
      "model": "opus",
      "responsibility": "Log analysis, stack trace investigation, root cause identification"
    },
    {
      "role": "payment-integration",
      "color": "purple",
      "model": "sonnet",
      "responsibility": "Stripe API inspection, webhook verification, payment flow analysis"
    },
    {
      "role": "python-pro",
      "color": "orange",
      "model": "sonnet",
      "responsibility": "Implement the fix once root cause is identified"
    }
  ],
  "communication": {
    "pattern": "Hub-and-spoke: incident-responder coordinates all agents, fast iteration",
    "lead": "incident-responder"
  }
}
```
Why this works: Four agents because the problem spans multiple domains and urgency is high. The error-detective gets opus because root cause analysis requires deep reasoning. The payment-integration specialist knows Stripe-specific failure modes. The python-pro stands ready to implement the fix quickly once the cause is found. The incident-responder keeps everyone focused and prevents thrashing.
### Example 3: Large-Scale Platform Migration (Monolith to Microservices)
Requirements:
- Break a Ruby on Rails monolith into 4 microservices
- New services in Go, with a React frontend refresh
- Need CI/CD pipelines, infrastructure as code, database partitioning
- 3-month project, multiple workstreams
Analysis:
- Domains: backend (Go), frontend (React), infrastructure, database, legacy (Rails), DevOps
- Complexity: very high, multi-month, multi-system
- Category: Migration
Team design:
```json
{
  "team_name": "monolith-to-microservices",
  "description": "Migrate Rails monolith to Go microservices with React frontend",
  "topology": "hierarchical",
  "agents": [
    {
      "role": "architect-review",
      "color": "green",
      "model": "opus",
      "responsibility": "Project lead: architecture decisions, cross-team coordination, PR review"
    },
    {
      "role": "golang-pro",
      "color": "yellow",
      "model": "opus",
      "responsibility": "Backend sub-lead: Go microservice design and implementation"
    },
    {
      "role": "frontend-developer",
      "color": "purple",
      "model": "sonnet",
      "responsibility": "Frontend sub-lead: React component migration and new UI"
    },
    {
      "role": "legacy-modernizer",
      "color": "orange",
      "model": "sonnet",
      "responsibility": "Rails decomposition: identify boundaries, extract services"
    },
    {
      "role": "terraform-specialist",
      "color": "pink",
      "model": "sonnet",
      "responsibility": "Infrastructure: IaC for new microservice deployments"
    },
    {
      "role": "database-optimizer",
      "color": "pink",
      "model": "sonnet",
      "responsibility": "Database partitioning, migration scripts, data integrity"
    },
    {
      "role": "deployment-engineer",
      "color": "pink",
      "model": "sonnet",
      "responsibility": "CI/CD pipelines, container orchestration, rollout strategy"
    }
  ],
  "communication": {
    "pattern": "Hierarchical: architect-review leads, golang-pro and frontend-developer act as sub-leads for their domains",
    "lead": "architect-review"
  }
}
```
Why this works: Seven agents is large, but justified by the scope. The hierarchical topology prevents the lead from being overwhelmed: the golang-pro manages backend work, the frontend-developer manages UI work, and both report to the architect-review lead. The legacy-modernizer is critical because someone needs to understand the existing Rails code to decompose it properly. Infrastructure roles (terraform-specialist, deployment-engineer) work semi-independently on their domain. The database-optimizer handles the hardest technical challenge: splitting a shared database without data loss.
## Quick Reference
Most common team pattern (copy and adapt):
Lead (opus, green) + 2 Implementation agents (sonnet, yellow/purple) = 3-agent hub-spoke team
When to add a 4th agent: When quality requirements demand a dedicated reviewer or tester.
When to add a 5th agent: When infrastructure changes are significant enough to warrant a specialist.
When to go beyond 5: Only for multi-workstream projects. Switch to hierarchical topology.
Files in this skill:
- `references/role-catalog.md` -- Complete mapping of all 50+ agent types to team roles
- `references/team-patterns.md` -- Detailed topology patterns with diagrams and examples
- `references/sizing-guide.md` -- Team sizing decision matrix and cost considerations
## Related Skills
- `team-templates` -- Browse 18 pre-built team blueprints instead of designing from scratch
- `team-builder` -- Takes the blueprint produced by this skill and creates the actual team
- `team-orchestrator` -- Decomposes the project into tasks after the team is created