Learn the full flywheel workflow through a real project: 693 beads, 282 commits on day one, 85% complete in hours.
The Challenge: Building a Memory System
On December 7, 2025, a new project was conceived: cass-memory - a procedural memory system for coding agents. The goal? Go from zero to a fully functional CLI tool in a single day using the flywheel workflow.
Day 1 results: 693 beads, 282 commits, roughly 85% complete.
This lesson walks you through exactly how it was done, so you can replicate this workflow on your own projects.
Phase 1: Multi-Model Planning
The first step isn't to start coding. It's to gather diverse perspectives on the problem.
Collect Competing Proposals
Ask multiple frontier models to propose implementation plans. Each proposal emphasized different ideas:
- Scientific validation approach
- Search pointers & tombstones
- Cross-agent enrichment
- ACE pipeline design
Each model received the same prompt with minimal guidance - just 2-3 messages to clarify the goal. The key instruction: "Design a memory system that works for all coding agents, not just Claude."
The chat_shared_conversation_to_file tool makes it easy to capture each model's proposal as a file.
Phase 2: Plan Synthesis
Now comes the crucial step: have one model synthesize the best ideas from all proposals into a single master plan.
```bash
# Put all proposal files in the project folder
competing_proposal_plans/
  2025-12-07-gemini-*.md
  2025-12-07-grok-*.md
  gpt_pro_version.md
  claude_version/

# Ask Opus 4.5 to create the hybrid plan
cc "Read all the files in competing_proposal_plans/.
Create a hybrid plan that takes the best parts of each.
Write it to PLAN_FOR_CASS_MEMORY_SYSTEM.md"
```
The resulting plan, PLAN_FOR_CASS_MEMORY_SYSTEM.md, ran to 5,600+ lines - a comprehensive blueprint covering architecture, data models, CLI commands, the reflection pipeline, storage, and the implementation roadmap.
Anatomy of a Great Plan
The plan is the bedrock of a successful agentic project. Let's dissect what makes the actual 5,600+ line plan so effective.
Document Structure: 11 Major Sections
1. Executive Summary - Problem statement, three-layer solution, key innovations table
2. Core Architecture - Cognitive model, ACE pipeline, 7 design principles
3. Data Models - TypeScript schemas, confidence decay algorithm, validation rules
4. CLI Commands - 15+ commands with usage examples and JSON outputs
5. Reflection Pipeline - Generator, Reflector, Validator, and Curator phases
6. Integration - Search wrapper, error handling, secret sanitization
7. LLM Integration - Provider abstraction, Zod schemas, prompt templates
8. Storage & Persistence - Directory structure, cascading config, embeddings
9. Agent Integration - AGENTS.md template, MCP server design
10. Implementation Roadmap - Phased delivery with ROI priorities
11. Comparison Matrix - Feature checklist against competing proposals
Patterns That Make Plans Effective
- Theory-First Approach: Each major feature includes schema definition → algorithm → usage examples → implementation notes. The plan never jumps to code before explaining the why.
- Progressive Elaboration: Simple concepts expand into nested detail. "Bullet maturity" starts as a concept, becomes a state machine, then gains transition rules and decay calculations (see the sketch after this list).
- Concrete Examples Throughout: Not just "validate inputs" but actual TypeScript interfaces, JSON outputs, bash command examples, and ASCII diagrams showing data flow.
- Edge Cases Anticipated: The plan addresses error handling for cass timeouts, toxic bullet blocking, stale rule detection, and secret sanitization before implementation begins.
- Comparison Tables: Key decisions are contextualized against alternatives, showing the trade-offs between approaches from the different model proposals.
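To make the "bullet maturity" idea concrete, here is a minimal TypeScript sketch of what such a state machine could look like. The states, thresholds, and field names are illustrative assumptions, not the plan's actual definitions.

```typescript
// Hypothetical sketch of a bullet-maturity state machine.
// States and transition rules are illustrative, not the plan's actual definitions.
type BulletState = "candidate" | "active" | "deprecated";

interface BulletStats {
  evidenceCount: number;
  confidence: number;
}

interface MaturityTransition {
  from: BulletState;
  to: BulletState;
  condition: (b: BulletStats) => boolean;
}

const transitions: MaturityTransition[] = [
  // Promote once a candidate has accumulated enough supporting evidence.
  { from: "candidate", to: "active", condition: (b) => b.evidenceCount >= 3 && b.confidence > 0.6 },
  // Demote when confidence has decayed below a floor.
  { from: "active", to: "deprecated", condition: (b) => b.confidence < 0.2 },
];

function step(state: BulletState, stats: BulletStats): BulletState {
  const match = transitions.find((t) => t.from === state && t.condition(stats));
  return match ? match.to : state; // No matching rule: stay in the current state.
}
```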
Distinctive Innovations in This Plan
- Confidence Decay Half-Life: Rules lose credibility over time, and harmful events are weighted 4× as heavily as helpful ones. The plan specifies the full algorithm with decay factors (a sketch follows this list).
- Anti-Pattern Inversion: Harmful rules are converted to "DON'T do X" rather than deleted, preserving the learning while inverting the advice.
- Evidence-Count Gate: A pre-LLM heuristic filter that saves API calls; rules need a minimum amount of evidence before promotion.
- Cascading Config: Global user playbooks and repo-level playbooks are merged intelligently, with conflict resolution.
At minimum, a strong plan includes:
- Executive summary - Problem + solution in 1 page
- Data models - TypeScript/Zod schemas for all entities (see the sketch after this list)
- CLI/API surface - Every command with examples
- Architecture diagrams - ASCII boxes showing data flow
- Error handling - What can go wrong, how to recover
- Implementation roadmap - Prioritized phases with dependencies
- Comparison tables - Why this approach over alternatives
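As an example of the "data models" item, a plan entry typically pairs a TypeScript type with a Zod schema. The entity and fields below are hypothetical, shown only to illustrate the pattern.

```typescript
import { z } from "zod";

// Hypothetical entity; the real plan defines its own entities and fields.
export const PlaybookBulletSchema = z.object({
  id: z.string(),
  text: z.string().min(1),
  kind: z.enum(["do", "dont"]),           // anti-patterns stored as inverted "don't" rules
  confidence: z.number().min(0).max(1),
  evidenceCount: z.number().int().nonnegative(),
  updatedAt: z.string().datetime(),
});

export type PlaybookBullet = z.infer<typeof PlaybookBulletSchema>;

// Validation at the boundary: parse() throws on malformed input.
export function parseBullet(raw: unknown): PlaybookBullet {
  return PlaybookBulletSchema.parse(raw);
}
```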
Phase 3: From Plan to Beads
A 5,600-line markdown file is great for humans, but agents need structured, trackable tasks. This is where beads comes in.
```bash
# Initialize beads in the project
bd init

# Have an agent transform the plan into beads
cc "Read PLAN_FOR_CASS_MEMORY_SYSTEM.md carefully.

Transform each section, feature, and implementation detail
into individual beads using the bd CLI.

Create epics for major phases, then break them into tasks.
Set up dependencies so blockers are clear.
Use priorities: P0 for foundation, P1-P2 for core features,
P3-P4 for polish and future work.

Create at least 300 beads covering the full implementation."
```
Beads Structure
Tasks are linked with dependencies so blockers are visible and agents know what to work on next (a simplified model is sketched below).
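The following TypeScript sketch is not the actual beads schema; it is a simplified, hypothetical model showing how dependency links make "ready" work easy to compute.

```typescript
// Simplified, hypothetical model of dependency-linked tasks (not the real beads schema).
interface Bead {
  id: string;
  title: string;
  priority: 0 | 1 | 2 | 3 | 4;           // P0 = foundation, P4 = future work
  status: "open" | "in_progress" | "closed";
  dependsOn: string[];                    // ids of blocking beads
}

// A bead is ready when it is open and every blocker is closed.
function readyBeads(beads: Bead[]): Bead[] {
  const byId = new Map(beads.map((b) => [b.id, b] as const));
  return beads
    .filter((b) => b.status === "open")
    .filter((b) => b.dependsOn.every((id) => byId.get(id)?.status === "closed"))
    .sort((a, b) => a.priority - b.priority); // highest-priority (P0) first
}
```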
Phase 4: Swarm Execution
With 350+ beads ready, it's time to unleash the swarm. Multiple agents work in parallel, each picking up tasks based on what's ready.
The Agent Swarm
```bash
# Launch the swarm with NTM
ntm spawn cass-memory --cc=6 --cod=3 --gmi=2

# Each agent runs this workflow:
# 1. Check what's ready
bv --robot-triage

# 2. Claim a task
bd update <id> --status in_progress

# 3. Implement
# (agent does the work)

# 4. Close when done
bd close <id>

# 5. Repeat
```
The agents coordinate using bv (beads viewer) to see what's ready, avoiding conflicts and ensuring the most important blockers get cleared first.
Agent Coordination with Agent Mail
When agents need to share context or coordinate on overlapping work, Agent Mail provides the communication layer.
- File reservations: Agents claim files before editing to avoid conflicts
- Status updates: Agents report progress so others know what's happening
- Handoffs: When one agent finishes a blocker, dependent agents get notified
- Review requests: Agents can ask each other to review their work
```
# Example Agent Mail coordination

# Agent "BlueLake" reserves files before editing
mcp.file_reservation_paths(
    project_key="/data/projects/cass-memory",
    agent_name="BlueLake",
    paths=["src/playbook/*.ts"],
    ttl_seconds=3600,
    exclusive=true
)

# Agent "GreenCastle" messages about a blocker being cleared
mcp.send_message(
    project_key="/data/projects/cass-memory",
    sender_name="GreenCastle",
    to=["BlueLake", "RedFox"],
    subject="Types foundation complete",
    body_md="Zod schemas are done. You can now work on playbook and CLI."
)
```
The Commit Cadence
With many agents working simultaneously, commits need careful orchestration. A dedicated commit agent runs continuously.
```bash
# The commit agent pattern (runs every 15-20 minutes)

# Step 1: Understand the project
cc "First read AGENTS.md, read the README, and explore
the project to understand what we're doing. Use ultrathink."

# Step 2: Commit in logical groupings
cc "Based on your knowledge of the project, commit all
changed files now in a series of logically connected
groupings with super detailed commit messages for each
and then push.

Take your time to do it right. Don't edit the code at all.
Don't commit ephemeral files. Use ultrathink."
```
Commit statistics: 282 commits landed on day one. The commit agent ran every 15-20 minutes, grouping changes logically and writing detailed commit messages.
This pattern ensures atomic, well-documented commits even when 10+ agents are making changes simultaneously.
Results & Key Lessons
After one day of flywheel-powered development, the cass-memory project had 693 beads, 282 commits, and was roughly 85% complete.
Key Lessons
- Planning is 80% of the work: A detailed plan makes agent execution predictable and fast.
- Multi-model synthesis beats single-model planning: Each model brought unique insights that improved the final design.
- Beads enable parallelism: Structured tasks with dependencies let many agents work without conflicts.
- Coordination tools are essential: Agent Mail and file reservations keep agents from stepping on each other.
- A dedicated commit agent keeps history clean: Separating commit responsibility from coding ensures atomic, well-documented commits.
Try It Yourself
Ready to try this workflow on your own project? Here's the quickstart:
```bash
# 1. Gather proposals from multiple models
# (Use GPT Pro, Gemini, Claude, Grok - whichever you have access to)
# Save each as markdown in competing_proposal_plans/

# 2. Synthesize into a master plan
cc "Read all files in competing_proposal_plans/.
Create a hybrid plan taking the best of each.
Write to PLAN.md"

# 3. Transform plan into beads
bd init
cc "Read PLAN.md. Transform into 100+ beads with
dependencies and priorities. Use bd CLI."

# 4. Launch the swarm
ntm spawn myproject --cc=3 --cod=2 --gmi=1

# 5. Monitor with bv
bv --robot-triage   # See what's ready

# 6. Watch the magic happen
ntm attach myproject

# 7. (Every 15-20 min) Run the commit agent
cc "Commit all changes in logical groupings with
detailed messages. Don't edit code. Push when done."
```