Rich Seroter linked to a great article on 7 agentic AI design patterns and I had one of those “oh, that’s what we’ve been doing” moments.

I’ve been building BrandCast with three distinct AI agent teams:

  • brandcast-marketing (17 agents): SEO, content publishing, customer discovery, competitive analysis
  • brandcast (18 agents): Solution architecture, Prisma migrations, code quality, documentation
  • brandcast-biz (8 agents): Financial modeling, supply chain, fundraising, unit economics

That’s 43 specialized agents across three domains, all coordinating through CLAUDE.md instructions and hand-offs via GitHub Issues. And it turns out we’ve been implementing most of these formal agentic AI patterns without knowing their names.

Some we got right by accident. Some we should formalize. And one we’re explicitly avoiding for good reason.

The Patterns We’re Using

1. Multi-Agent Collaboration (Our Core Architecture)

This is our primary pattern across all three repos.

Marketing team (brandcast-marketing):

  • seo-specialist: Keyword research and content optimization
  • content-publishing-specialist: Validation and publishing workflow
  • weekly-planning-specialist: Strategic planning and prioritization
  • customer-discovery-specialist: Interview design and insight extraction
  • competitive-marketing-specialist: Market analysis and positioning

Engineering team (brandcast):

  • solution-architect: Plan solutions before coding, identify reuse opportunities
  • prisma-migration: Guide database schema changes safely
  • code-quality-checker: Enforce standards, detect anti-patterns
  • browser-safety-checker: Prevent memory leaks in 24/7 display environments
  • quick-lookup: Fast code navigation and pattern reference

Business team (brandcast-biz):

  • financial-planner: P&L projections, cash flow forecasting, break-even analysis
  • supply-chain-specialist: JIT inventory, drop-shipping, vendor relationships
  • unit-economics-analyst: CAC/LTV modeling, pricing optimization
  • fundraising-strategist: Investor outreach, pitch deck refinement

The coordination happens through Claude Code’s Task tool, which acts as our coordinator. When I need to change the database schema, it routes the work to prisma-migration. When I need to model cash flow, it routes to financial-planner.

What we got right: Each agent has a single, well-defined responsibility. The SEO specialist doesn’t try to publish content. The solution architect doesn’t try to write code. The financial planner doesn’t design supply chains.

What we should formalize: We’re missing explicit hand-off protocols across repos. Right now, coordination is implicit: each repo’s CLAUDE.md tells me, the human in the middle, to create a GitHub Issue in the other team’s repository. We should document when marketing insights should trigger business modeling, or when engineering constraints should inform financial projections.

2. Sequential Workflow (Content Pipeline and Prisma Migrations)

We use fixed pipelines in multiple domains.

Content publishing workflow:

Draft → Image Generation → SEO Review → Validation → Publish

This is documented in our slash commands:

  1. /draft:personal creates content in content/personal/drafts/ (including this post!)
  2. Agent generates hero image via Gemini
  3. SEO agent optimizes (optional)
  4. Publishing specialist validates frontmatter (see the sketch below)
  5. /publish:personal pushes to production
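
Step 4 is the piece most worth pinning down in actual code rather than prose. Here’s a minimal sketch of what the frontmatter check amounts to, assuming the gray-matter package and an illustrative set of required fields (the field names and the draft filename are placeholders, not our real schema):

```typescript
// validate-frontmatter.ts — sketch of the publishing specialist's pre-publish check
import { readFileSync } from "node:fs";
import matter from "gray-matter";

// Illustrative required fields; the real schema lives in the agent prompt.
const REQUIRED_FIELDS = ["title", "description", "pubDate", "heroImage"] as const;

export function validateFrontmatter(path: string): string[] {
  const { data } = matter(readFileSync(path, "utf8"));
  const errors: string[] = [];

  for (const field of REQUIRED_FIELDS) {
    if (data[field] === undefined || data[field] === "") {
      errors.push(`Missing frontmatter field: ${field}`);
    }
  }
  if (data.description && data.description.length > 160) {
    errors.push("description exceeds 160 characters (SEO meta limit)");
  }
  return errors;
}

// Usage: fail the publish step if anything is missing (hypothetical draft path).
const errors = validateFrontmatter("content/personal/drafts/example-post.md");
if (errors.length > 0) {
  console.error(errors.join("\n"));
  process.exit(1);
}
```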

Database migration workflow:

Schema Analysis → Migration Generation → Safety Review → Testing → Deployment

From the prisma-migration agent:

  1. Analyze proposed schema changes for breaking changes
  2. Suggest safe migration strategies
  3. Run npx prisma migrate dev --name descriptive_name
  4. Review generated SQL before accepting
  5. Test in local/staging before production
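
Step 2 usually means an expand-and-backfill strategy: add the new column as optional, backfill existing rows, then tighten the constraint in a follow-up migration. A minimal sketch of the backfill half, assuming Prisma Client and a hypothetical Display model with a new timezone field (both names are illustrative):

```typescript
// backfill-display-timezone.ts — hypothetical data migration step between two schema migrations
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main() {
  // Migration 1 added `timezone String?` as optional; backfill existing rows
  // before migration 2 makes the column required.
  const updated = await prisma.display.updateMany({
    where: { timezone: null },
    data: { timezone: "America/New_York" },
  });
  console.log(`Backfilled ${updated.count} displays`);
}

main()
  .catch((err) => {
    console.error(err);
    process.exit(1);
  })
  .finally(() => prisma.$disconnect());
```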

What we got right: The pipelines are predictable and debuggable. When something fails, we know exactly which stage broke.

What we’re learning: Fixed pipelines work great for repeatable operations (publishing, migrations), but they’re too rigid for discovery workflows where the next step depends on what you learn.

3. Tool Use Pattern (Everything We Do)

Nearly every agent integrates external tools:

Marketing agents:

  • SEO specialist: WebSearch for keyword research
  • Analytics agents: GA4 MCP server for traffic data
  • Publishing agents: Git, file system, Gemini image generation
  • Scheduler agents: Cron-like task execution via Scheduler MCP

Engineering agents:

  • Prisma migration: Prisma CLI, database introspection
  • Code quality: ESLint, TypeScript compiler, Grep/Glob for pattern detection
  • Solution architect: Codebase search, documentation reading
  • Browser safety: Memory profiling, leak detection tools

Business agents:

  • Financial planner: Spreadsheet generation, scenario modeling
  • Supply chain: Vendor API integration, inventory systems
  • Unit economics: Customer data analysis, cohort tracking

The article points out that “system reliability inherits tool reliability limitations.” We learned this the hard way:

  • Gemini image generation occasionally fails → Publishing specialist has fallback to default images
  • GA4 API rate limits hit → Analytics agents retry with exponential backoff
  • Prisma migration fails on production → Rollback strategy documented

What we should formalize: Explicit error handling and fallback strategies for each tool integration. Right now it’s scattered across agent prompts.
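
Here’s a rough sketch of the kind of shared helper I want that formalization to converge on for the GA4 case: a generic retry wrapper with exponential backoff and jitter. The fetchGA4Report function is a hypothetical stand-in for the actual MCP call:

```typescript
// retry.ts — sketch of a shared retry-with-backoff helper for flaky tool calls
async function withBackoff<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 1000 } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // 1s, 2s, 4s, ... plus jitter so parallel agents don't retry in lockstep
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Hypothetical usage: wrap the rate-limited GA4 call the analytics agent makes.
declare function fetchGA4Report(week: string): Promise<unknown>;
const report = await withBackoff(() => fetchGA4Report("2025-W01"));
```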

4. Planning Pattern (Weekly Planning and Solution Architecture)

We use planning agents in both marketing and engineering.

Marketing: weekly-planning-specialist

  1. Analyzes current context (health checks, GA4 reports, git history)
  2. Identifies dependencies (content queue, alpha tester pipeline)
  3. Sequences operations (customer discovery → content → outreach)
  4. Surfaces hidden complexity (automation issues blocking progress)

Engineering: solution-architect

From the agent prompt:

Your Mission: Prevent rework by planning thoroughly upfront. For a single-person startup, solving the same problem twice is the biggest productivity killer.

The solution architect:

  1. Searches codebase for similar implementations
  2. Checks shared packages for reusable components
  3. Designs solutions that default to shared abstractions
  4. Creates detailed implementation plans with task breakdowns

What we got right: Planning happens before execution, not during. This prevents mid-implementation pivots when we realize we’re missing critical dependencies.

What we’re avoiding: Over-planning for simple tasks. The article warns that planning “adds overhead for simple tasks.” We only invoke the weekly planner on Mondays, not for every blog post. We only invoke solution-architect for “substantial feature work,” not bug fixes.

The Patterns We’re Implicitly Using

5. ReAct Pattern (Hidden in Our Workflows)

The ReAct pattern alternates between reasoning, acting, and observing results. We do this, but it’s not formalized.

Example from content publishing:

  1. Reason: Check if hero image exists
  2. Act: Read file system
  3. Observe: Image missing
  4. Reason: Determine appropriate image type (beaver mascot vs photo-realistic)
  5. Act: Generate with Gemini
  6. Observe: Verify generation succeeded

Example from database migrations:

  1. Reason: Analyze proposed schema change for breaking changes
  2. Act: Run prisma migrate dev to generate migration
  3. Observe: Review generated SQL
  4. Reason: Identify data loss scenarios, suggest backfill strategies
  5. Act: Add data migration steps if needed
  6. Observe: Test migration in local environment

Example from financial planning:

  1. Reason: Model cash flow with different hiring scenarios
  2. Act: Generate spreadsheet with projections
  3. Observe: Runway drops below 6 months in worst case
  4. Reason: Identify cost reduction opportunities
  5. Act: Remodel with adjusted burn rate
  6. Observe: Runway extends to 12 months

This happens naturally through Claude Code’s tool-use workflow, but we don’t explicitly label these loops.

What we should formalize: Make the reasoning steps explicit in agent prompts. Instead of “generate hero image if missing,” write:

1. Check if hero image exists (observe)
2. If missing, determine image type needed (reason)
3. Generate image using appropriate tool (act)
4. Verify image quality (observe)
5. If failed, use fallback (reason/act)

6. Reflection Pattern (Auditors and Code Review)

We use the Reflection pattern in multiple domains:

Marketing: seo-specialist has two modes

  1. Create mode: Write SEO-optimized content
  2. Audit mode: Critique existing content for improvements

Engineering: code-quality-checker

From the agent prompt:

Your mission is to enforce quality standards through constructive review, not to write code yourself.

The code-quality-checker:

  1. Reviews code after implementation
  2. Identifies anti-patterns and violations
  3. Suggests specific improvements
  4. Validates fixes before merge

Business: financial-planner scenario analysis

The financial planner creates projections, then critiques them:

  1. Generate base case financial model
  2. Stress test with worst case assumptions
  3. Identify unrealistic assumptions
  4. Revise model with conservative estimates

What we should formalize: Make the role separation more explicit. Right now, the SEO agent has both modes in one prompt. We should split into seo-content-writer and seo-content-auditor to reduce confirmation bias.

Similarly, we could split financial-planner into financial-modeler (creates projections) and financial-auditor (stress tests assumptions).

The Pattern We’re Explicitly Avoiding

7. Human-in-the-Loop (Intentionally Minimized)

The article describes Human-in-the-Loop as integrating human oversight “at critical decision points.”

We do this for high-stakes decisions, but we’re actively trying to minimize it everywhere else. Why? Because we’re a one-person operation. Minimizing human involvement is hard while you’re still building out your initial go-to-market strategy and feature set, but it’s the end goal as this project begins to spread its wings.

Our automation philosophy:

Manual Workflows (require human judgment):

  • Blog publishing approval and editing
  • Customer discovery calls (TBD)
  • Pricing decisions
  • Database schema changes to production
  • Code merges to main
  • Financial projections review

Automated Workflows (no human gate):

  • Daily health checks → reports/health-check-{date}.md
  • Weekly analytics → reports/weekly-{date}.md
  • GA4 reports → reports/ga4-weekly-{date}.md
  • Code quality checks (runs automatically)
  • Build/test pipelines
  • Development environment migrations

The article warns: “Scales poorly as human review becomes bottleneck.” Exactly. At one person, everything is a bottleneck. We only gate decisions that truly need judgment and human intuitive leaps.

What we got right: We distinguish between reversible decisions (automate) and irreversible decisions (human gate). Publishing a blog post is reversible (you can delete or edit it). Migrating a production database is not (it requires backups and rollback planning).

Cross-Domain Pattern Examples

The most interesting patterns emerge when agents collaborate across domains.

Example: Hardware bundling decision

This requires coordination between all three teams:

  1. Marketing (customer-discovery-specialist): “30% of prospects ask about hardware bundles”
  2. Business (supply-chain-specialist): “JIT fulfillment possible at 10-50 customers, inventory required beyond that”
  3. Business (financial-planner): “Hardware bundles add $169 margin per unit, but require $10K working capital”
  4. Engineering (solution-architect): “Need order management system to track hardware fulfillment”
  5. Conclusion: Defer hardware until 50+ customers, focus on software validation

Example: Pricing model validation

  1. Marketing (pricing-strategist): “Research suggests $40-60 ARPU for B2B digital signage”
  2. Business (unit-economics-analyst): “CAC target $200, need 4 month payback at $50/month”
  3. Business (financial-planner): “At 100 customers × $50/mo = $5K MRR, break-even in 18 months”
  4. Marketing (customer-discovery-specialist): “Test $25/display pricing in alpha, measure willingness-to-pay”
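
The arithmetic behind steps 2 and 3 is simple enough to keep as an executable sanity check instead of re-deriving it every planning session. A rough sketch using the working assumptions quoted above (these are planning numbers, not validated data):

```typescript
// unit-economics-check.ts — sketch of the payback and MRR arithmetic above
const cac = 200;        // target customer acquisition cost ($)
const arpu = 50;        // monthly revenue per customer ($)
const customers = 100;  // early customer target

const paybackMonths = cac / arpu;  // 200 / 50 = 4 months
const mrr = customers * arpu;      // 100 * 50 = $5,000 MRR

console.log(`Payback: ${paybackMonths} months`);
console.log(`MRR at ${customers} customers: $${mrr}`);
```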

What we should formalize: Document these cross-domain workflows explicitly. Right now they’re implicit in how I use the agents. We should create routing rules like:

**When customer discovery reveals feature requests:**
1. Marketing validates demand (customer-discovery-specialist)
2. Business models unit economics (unit-economics-analyst)
3. Engineering estimates complexity (solution-architect)
4. Business decides build/defer (strategic-planner)

What We Should Change

After reading this article, here are the improvements I’m making:

1. Formalize Cross-Domain Hand-Offs

Create explicit routing logic across all three repos:

## Cross-Domain Agent Routing

**Customer insight → Business decision:**
- customer-discovery-specialist → unit-economics-analyst → strategic-planner

**Technical constraint → Financial impact:**
- solution-architect → financial-planner → strategic-planner

**Market opportunity → Build decision:**
- competitive-marketing-specialist → unit-economics-analyst → solution-architect

2. Split Agents Using Reflection Pattern

Create explicit create/critique pairs:

  • seo-content-writer + seo-content-auditor
  • financial-modeler + financial-auditor
  • code-writer + code-reviewer (already have code-quality-checker)

This reduces confirmation bias and makes each agent’s job clearer.

3. Add Explicit ReAct Loops

Update agent prompts to label reasoning steps:

## Database Migration Workflow

### Step 1: Analyze Schema Changes

**Observe:**
- Read proposed schema.prisma changes
- Compare to current production schema

**Reason:**
- Identify breaking changes (renamed fields, deleted tables)
- Determine data migration needs
- Assess rollback complexity

**Act:**
- Generate migration with descriptive name
- Add data migration steps if needed

**Observe:**
- Review generated SQL
- Check for unsafe operations (DROP, data loss)

4. Document Tool Fallbacks for Each Domain

Marketing:

**If Gemini image generation fails:**
1. Retry once with simplified prompt
2. If still failing, use brand default image
3. Log warning to health check report
4. Continue publishing (don't block on images)

Engineering:

**If Prisma migration fails:**
1. Check for conflicting migrations
2. Review SQL for syntax errors
3. Reset local database and retry
4. If production, execute rollback plan

Business:

**If financial model calculation fails:**
1. Verify input data integrity
2. Simplify assumptions and recalculate
3. Use conservative manual estimates
4. Flag uncertainty in output

Patterns Are Discovered, Not Invented

The best patterns emerge from solving real problems, not from reading articles about patterns. I built what I needed across three domains, and it turned out I’d independently discovered several established patterns.

But formalizing these patterns has value. It gives us:

  1. Shared vocabulary for discussing architecture across domains
  2. Design principles for building new agents
  3. Common mistakes to avoid (from others’ experience)
  4. Optimization opportunities we might have missed

The patterns aren’t prescriptive. We’re not using Human-in-the-Loop everywhere because it doesn’t fit our constraints. We’re not using the Reflection pattern for simple tasks because the overhead isn’t worth it.

But knowing the patterns helps us make those trade-offs consciously instead of accidentally.

What’s Next

I’m going to spend the next week formalizing these patterns across all three repos:

  1. Update .claude/agents/ in all repos to use explicit Reflection pattern
  2. Add cross-domain routing decision tree to each CLAUDE.md
  3. Document tool fallbacks for each agent
  4. Create ReAct templates for complex workflows

Then I’ll write a follow-up post about what actually changed in practice. Did formalizing help? Did it add overhead? Did we discover new patterns we hadn’t noticed?

The meta-lesson here is that good engineering isn’t about following patterns religiously. It’s about understanding the problems patterns solve, recognizing when you’re already solving those problems, and formalizing just enough to make your system easier to reason about.

We’ve been doing agentic AI design without knowing the formal names. Now we know the names. Let’s see if that knowledge improves our system or just gives us fancier ways to describe what already works.