From the Maker's Schedule to the Hive: Engineering at the Speed of AI

Coding agents keep getting stronger. First you run one on your branch. Then you want two in parallel, so you add a second checkout. Then four. Suddenly you're managing four static environments by hand. Syncing branches. Restarting services. Juggling ports. The agents are fast. You're the bottleneck.
This is where most teams get stuck. Four environments, manually maintained, feels like the ceiling.
It's not. It's just where the tooling runs out.
Teams that break through this ceiling will ship faster than teams that don't. The constraint isn't agent capability anymore. It's infrastructure.
So we built dust-hive.
For the past couple months, we've been building the Dust platform differently. Multiple AI coding agents (Claude Code, Open Code, Codex) work simultaneously on different features and branches. The agents write most of the code. We direct the work.
In 2025, we were operating on a fixed grid. In 2026, we're beekeeping. Maintaining a hive.
The Schedule Has Changed
Paul Graham wrote about maker's schedules versus manager's schedules in 2009. Makers need time in units of half a day minimum. A single meeting can destroy an afternoon by fragmenting it into pieces too small for meaningful work.
Working with AI coding agents inverts this. We've moved from the maker's schedule to the manager's schedule. Multiple agents run simultaneously, each on different features. We review output, provide feedback, steer work mid-flight. The cognitive work is triaging, prioritizing, unblocking. Agent A just finished refactoring authentication. Agent B hit a type error in payments. Agent C needs a decision on index strategy for its migration. Context-switching every few minutes across parallel workstreams.
If you've written async code with Tokio or JavaScript's event loop, you know the rule: don't block between awaits. Long synchronous chunks starve other tasks. The same principle applies here. Spending four hours heads-down writing code yourself blocks the feedback loop. Your agents sit idle, waiting for direction that doesn't come. The leverage disappears.
Good async code yields often. Good agent management does the same: short bursts of focused review, then yield. Check the next environment. Unblock the next agent. Keep the whole system making progress.
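The analogy translates almost literally. A toy sketch, not dust-hive code:

```typescript
// Toy illustration of the scheduling analogy; nothing here is dust-hive code.
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Short bursts with awaits in between: both "environments" make interleaved progress.
async function review(env: string) {
  for (let step = 1; step <= 3; step++) {
    console.log(`${env}: reviewed step ${step}`);
    await sleep(100); // yield so the other environment gets attention
  }
}

await Promise.all([review("env-a"), review("env-b")]);

// The failure mode: a long synchronous chunk with no awaits. While this
// runs, nothing else on the event loop makes progress.
function headsDownCoding(ms: number) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // busy-wait: every other task starves
}
```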
Conducting an orchestra rather than playing an instrument.
Isolated Environments, Made Easy
The technical requirement: multiple isolated development environments running concurrently. Different git branches, different databases, different port ranges, zero conflicts.
Dust-hive handles this through a few architectural choices:
- Git worktrees give each environment its own directory at ~/dust-hive/{env-name}/ with a dedicated branch. Worktrees share the main repository's git database, so they're fast to create and storage-efficient. No manual branch switching. No stale checkouts.
- Automatic port allocation assigns each environment a 1000-port range starting at base 10000. First environment on 10000-10999, second on 11000-11999. Want to compare three frontend implementations? Open localhost:10000, localhost:11000, and localhost:12000 in different browser tabs. (A sketch of spawn-time allocation follows this list.)
- Full infrastructure isolation per environment: dedicated Docker containers, Postgres database, Qdrant vector store, Elasticsearch cluster, Temporal namespace. Test a risky migration in one environment while developing features in another. If the migration breaks something, the blast radius is contained.
- State management that matches the workflow: environments can be cold (minimal services, enough to run tests, linters, formatters, and typechecking), warm (full stack running, ready for end-to-end testing), or stopped. Spawning a new environment is near-instant. Warming takes under a minute.
- The stack: Bun for the runtime (TypeScript directly, no build step), Zellij for the terminal UI, and all services as background daemons with PID files. Close your terminal, services keep running. Come back later, reconnect to live logs via tail -F.
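What spawning looks like under the first two choices, as a minimal sketch with hypothetical names (dust-hive's actual CLI isn't shown here), using Bun's shell API:

```typescript
// Hypothetical spawn sketch: worktree creation plus port-range assignment.
// Names and layout are assumptions based on the description above.
import { $ } from "bun";

const BASE_PORT = 10000;
const RANGE = 1000;

async function spawnEnv(name: string, index: number) {
  const dir = `${process.env.HOME}/dust-hive/${name}`;
  // Worktrees share the main repo's git database: cheap to create.
  await $`git worktree add ${dir} -b ${name}`;
  const base = BASE_PORT + index * RANGE;
  console.log(`${name}: ports ${base}-${base + RANGE - 1}`);
}

await spawnEnv("feature-auth", 2); // third environment -> 12000-12999
```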
The Terminal as Control Center
As coding agents get stronger, you stop needing an editor. You need a control center. Spatial organization. Persistent state. Instant context switching. Multiple views into parallel workstreams.
Dust-hive provides this through Zellij sessions. Each environment gets its own session with tabs for every service in the stack: shell, backend APIs, frontend, background workers. Switch between environments with alt+w. Monitor all your environments at once. Your terminal becomes a control center for the hive.
Services run as daemons in the background. The UI layer is pure monitoring and interaction. Close your laptop for the night and open it the next morning. Everything is where you left it.
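The daemon side is ordinary process management. A rough sketch with assumed paths, not dust-hive's actual code:

```typescript
// Rough sketch of "service as background daemon with a PID file".
// Directory layout and the service command are assumptions.
import { spawn } from "node:child_process";
import { mkdirSync, openSync, writeFileSync } from "node:fs";

function daemonize(name: string, cmd: string, args: string[], envDir: string) {
  mkdirSync(`${envDir}/logs`, { recursive: true });
  mkdirSync(`${envDir}/pids`, { recursive: true });
  const log = openSync(`${envDir}/logs/${name}.log`, "a");
  const child = spawn(cmd, args, {
    cwd: envDir,
    detached: true,              // keep running after the terminal closes
    stdio: ["ignore", log, log], // logs land on disk, tail -F-able later
  });
  writeFileSync(`${envDir}/pids/${name}.pid`, String(child.pid));
  child.unref();                 // let the spawning process exit cleanly
}
```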
Agent Skills
The agents need context too. When a coding agent lands in a dust-hive environment, it needs to know: what services are running? What ports? How do I run tests? Where are the logs?
dust-hive includes an agent skill that gets injected into the agent's context. It teaches the agent:
- Environment detection. How to recognize it's inside a dust-hive worktree and which environment it's in.
- Available commands. The dust-hive CLI for managing services, viewing logs, checking status.
- Port allocation. Which ports map to which services in this specific environment.
- How to run checks. Linting, type checking, building, testing for each part of the stack.
- Troubleshooting patterns. Common issues like symlinked dependencies, watchers not detecting changes after rebase, test database setup.
The skill is a markdown file that lives alongside the codebase. When an agent opens a shell in ~/dust-hive/feature-auth/, it automatically gains awareness of the environment: this is environment "feature-auth", front is on port 12000, here's how to check if services are healthy, here's how to run the test suite.
Without this, agents fumble. They try to run services on wrong ports. They don't know the test database is shared. They miss that installing dependencies in a worktree modifies the main repo. The skill encodes operational knowledge that would otherwise require constant human correction.
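Environment detection, for instance, can be as simple as parsing the working directory. A hypothetical sketch of that one piece:

```typescript
// Hypothetical environment detection: infer the env name from the
// worktree path. Assumes the ~/dust-hive/{env-name}/ layout above.
import { resolve, sep } from "node:path";

function detectEnv(cwd: string): string | null {
  const parts = resolve(cwd).split(sep);
  const i = parts.indexOf("dust-hive");
  // Not inside ~/dust-hive/{env-name}/...: not a hive worktree.
  return i >= 0 && parts[i + 1] ? parts[i + 1] : null;
}

console.log(detectEnv(`${process.env.HOME}/dust-hive/feature-auth/front`));
// -> "feature-auth"
```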
The pattern generalizes. Any infrastructure you build for parallel agent workflows should include documentation of that infrastructure. Agents work better when they understand the environment they're operating in.
Technical Depth and Engineering Taste
Here's the thing that becomes obvious once you're doing this daily: managing multiple agents building software requires exceptional technical skills and strong engineering taste, deployed together.
You need the technical depth to catch subtle architectural mistakes in generated code. To read database migrations and spot the missing index that will cause production problems in six months. To understand why that innocent-looking ORM query the agent just added will kill performance at scale. To recognize when the agent chose the wrong abstraction three layers deep.
The agents are fast. An agent generates a 300-line module in two minutes. You review it in a similar timeframe to maintain momentum. You develop heuristics. Does this follow our patterns? Are error cases handled? Does the abstraction level match surrounding code? Compressed code review, relying on deep familiarity with your codebase to identify what needs closer scrutiny.
And you need taste. Product sense. The judgment to know when a technically correct solution misses the actual user need. When to accept good-enough versus push for better. When a refactor will pay off versus when it's premature. The type system won't save you when the agent writes valid TypeScript that does the wrong thing for your use case.
Understanding everything the agents produce is the job. You're accountable for quality, performance, maintainability. You need to explain why every significant decision was made. You need confidence the code handles edge cases you haven't tested.
This raises the technical bar, not lowers it. You need expert-level knowledge across more areas because you're overseeing more parallel work. You maintain architectural coherence across agents working on different parts of the system. You catch integration issues before they happen.
The Rhythm of Beekeeping
Multiple agents across multiple environments. You check environment A, see it's progressing. Switch to B where something looks off, provide feedback. Notice C just finished and needs review. D is still running tests. Back to A to test the completed auth refactor.
Each environment is semi-autonomous. The agents work on their own. You monitor, intervene when needed, provide resources, remove blockers. You're maintaining conditions for good work to happen.
Agents communicate through git commits, test results, log output, type errors. A pattern of test failures in one environment means the agent might be stuck or might have misunderstood a requirement. Unusual quiet means it's waiting for something or has hit an unrecoverable error.
You develop a rhythm. Spawn environments for new work. Warm them when you need full services to test. Cool them to free resources without losing state. Destroy them when work is merged.
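The lifecycle is small enough to write down. An illustrative model; the states come from this workflow, but the exact transition table is an assumption:

```typescript
// Illustrative environment lifecycle; transitions are an assumption.
type EnvState = "none" | "cold" | "warm" | "stopped" | "destroyed";

const lifecycle: Record<string, { from: EnvState; to: EnvState }> = {
  spawn:   { from: "none", to: "cold" },         // new worktree, minimal services
  warm:    { from: "cold", to: "warm" },         // full stack for end-to-end testing
  cool:    { from: "warm", to: "cold" },         // free resources without losing state
  stop:    { from: "cold", to: "stopped" },
  destroy: { from: "stopped", to: "destroyed" }, // work merged, clean up
};
```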
The timescales differ. Agents execute rapid actions: hundreds of lines per hour. You operate strategically: which features to prioritize, how the architecture should evolve, which technical debt to address, where the system needs to go long-term. Agents have tactics. You have strategy.
What It Takes
Does getting to this speed require building custom internal infrastructure? Yes. Here's why.
The constraint isn't compute. It's friction. Every manual step in environment setup is a blocking operation in your workflow.
npm install in each environment: 2-5 minutes. cargo build in each environment: 5-10 minutes. Manual branch sync: 1-2 minutes per environment, multiple times per day. Multiply by ten environments. The overhead compounds until agents sit idle waiting for you to catch up.
dust-hive eliminates blocking through two approaches.
Aggressive caching. Symlinked node_modules across worktrees. Shared Rust compilation cache. Pre-built binaries from main. When most of the work is already done, spawning becomes near-instant.
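The node_modules piece, for example, amounts to one symlink per worktree. A sketch with assumed paths:

```typescript
// Sketch of cross-worktree dependency sharing; paths are assumptions.
// One real install lives in the main checkout; each worktree reuses it.
import { symlink } from "node:fs/promises";

async function shareNodeModules(mainRepo: string, worktree: string) {
  await symlink(`${mainRepo}/node_modules`, `${worktree}/node_modules`, "dir");
}

await shareNodeModules("/code/dust", `${process.env.HOME}/dust-hive/feature-auth`);
```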
Dependency-aware orchestration. Your project has a build graph. Some things can run in parallel. Some things depend on others completing first. Some things take ten seconds, others take two minutes. dust-hive encodes this knowledge: start the database containers immediately (slow to become healthy). While they're warming up, build the Rust binaries (CPU-bound, no database dependency). Once Postgres is healthy, run migrations. Once migrations complete and binaries are built, start the API servers. The longest-running tasks start first and run in the background while shorter tasks execute in parallel.
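In code, this is a promise graph rather than a sequential script. An illustrative sketch; the task names, durations, and graph shape are stand-ins for whatever your stack needs:

```typescript
// Illustrative dependency-aware warm-up. Tasks and timings are stand-ins.
type Task = { name: string; deps: string[]; run: () => Promise<void> };

const fake = (name: string, ms: number) => async () => {
  console.log(`start ${name}`);
  await new Promise((r) => setTimeout(r, ms));
  console.log(`done  ${name}`);
};

const tasks: Task[] = [
  { name: "postgres", deps: [], run: fake("postgres", 3000) },     // slow to get healthy: start first
  { name: "rust-build", deps: [], run: fake("rust-build", 2000) }, // CPU-bound, no DB dependency
  { name: "migrations", deps: ["postgres"], run: fake("migrations", 500) },
  { name: "api", deps: ["migrations", "rust-build"], run: fake("api", 300) },
];

async function warm(tasks: Task[]): Promise<void> {
  const byName = new Map(tasks.map((t) => [t.name, t]));
  const started = new Map<string, Promise<void>>();
  const ensure = (name: string): Promise<void> => {
    let p = started.get(name);
    if (!p) {
      const t = byName.get(name)!;
      // Each task starts the moment its dependencies settle, not on a
      // fixed global schedule.
      p = Promise.all(t.deps.map(ensure)).then(() => t.run());
      started.set(name, p);
    }
    return p;
  };
  await Promise.all(tasks.map((t) => ensure(t.name)));
}

await warm(tasks);
```

Running this prints postgres and rust-build starting together; api starts only once both of its dependencies have finished.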
This is project-specific. Generic tools can't know that your Elasticsearch index creation can run while your frontend is compiling, or that your worker process needs the API server healthy before it can start. dust-hive encodes our dependency graph explicitly. The result: less than 5 seconds to spawn a new environment. Not because the work disappeared, but because it's sequenced optimally.
Can this be productized? Not easily. The caching transfers. The dependency orchestration doesn't. Every codebase has its own build graph, its own service dependencies, its own initialization sequence. A generic tool would need a configuration layer expressive enough to capture arbitrary dependency graphs, or it would need to be rebuilt for each project. dust-hive is fast because it encodes deep knowledge about our specific stack. That knowledge doesn't generalize.
What should you do to get here?
If you're beginning to explore parallel agents, start simple. A few checkouts, manual port offsets, separate Docker Compose projects. Validate the workflow before investing in tooling.
If you're running 5+ agents and hitting friction, build something. The core concepts transfer: git worktrees for branch isolation, port ranges for service isolation, a terminal multiplexer for persistent sessions, aggressive caching for speed. The dependency orchestration requires understanding your own build graph: what can parallelize, what blocks on what, what takes longest and should start first.
The infrastructure investment pays off when environment switching becomes invisible. When spawning a new workstream takes less time than formulating the task. When the bottleneck shifts from "waiting for setup" to "how fast can I review what the agents produced."
That's when you stop operating on a fixed grid and start maintaining a hive.
What Comes Next
Two extensions on the roadmap:
Remote environments on devboxes. Running environments on remote machines saves local resources and lets agents continue working when your laptop is closed. The control interface stays local. Your terminal connects to remote daemons. Same workflow, but your machine stays cool and quiet while the hive keeps humming.
Sharing warm environments with teammates. Exposing a warm environment over Tailscale turns it into a preview environment. "Check out http://henry-env-a.tailscale:10000" becomes a real workflow. No deploy. No staging. Just share your local hive.
The Shift
The transition from maker to manager schedule is happening now. Engineers who spent six hours in deep flow writing code now spend that time reviewing, directing, steering. The cognitive work has changed shape.
You need tools that match this shape. A real control center in your terminal, handling multiple concurrent contexts. Infrastructure that makes isolated environments easy to spin up. A UI that gives visibility across everything happening at once.
Building with AI agents requires the same technical depth as building without them. More breadth, certainly. You're responsible for more code written faster across more contexts. The leverage is real. So is the cognitive load.
You're writing less code, but you're more technical: technical enough to direct multiple agents, catch their mistakes, maintain architectural coherence, and ship quality software faster than you could alone.
Dust-hive exists because working with AI agents at speed demands better infrastructure. When you're maintaining a hive, you need tools built for beekeeping.