Build a Flock, Not a Crowd

The average person at Dust uses over 15 different agents in a week. Each one built by someone who understood a problem: the sales team's way of qualifying deals, the support team's approach to customer lookup, the specific criteria and edge cases that make up how things actually get done at their company.
Each agent is good at what it does. Someone reaches for it, gets a result, moves on. Fifteen agents, fifteen pockets of knowledge, each one self-contained. But what one builder figures out, every agent should benefit from. That's why we built skills.
What a Skill Is
After a quarter review, someone on the sales team refines the deal qualification criteria and updates their agent. It works well. The sales team qualifies deals faster, with fewer misroutes.
A few weeks later, someone from onboarding pings them: "your agent and ours are qualifying deals differently." The builder checks. The same criteria are embedded in the onboarding agent, the forecasting agent, the weekly report agent, each built by a different person. Four versions of the same knowledge, subtly diverging.
The builder realizes the knowledge shouldn't live inside any single agent. It should live somewhere that any agent can draw on.
That's what a skill is. It packages a capability into three layers: instructions that define how it works, knowledge that informs it, and tools that make it executable. Any agent can use a skill. When the situation calls for it, the skill loads with everything it needs.
Instructions define how the capability works. For deal qualification, this means the specific criteria, the exceptions, the escalation path when a deal hits an unusual edge. These aren't generic guidelines. They encode how your company does this particular thing, with all the nuance that implies.
Knowledge connects the skill to the data sources it needs. The CRM field definitions in Notion, the qualification methodology docs in Google Drive, the examples that distinguish a good deal from a marginal one. When the skill loads, this knowledge loads with it.
Tools are the integrations the skill can act through. Salesforce, Gmail, GitHub, Zendesk, your calendar, or any MCP server. Skills can bundle multiple tools with instructions on how to use them jointly, something that's hard to express cleanly in a single agent's prompt.
Here's what a deal qualification skill looks like in practice:
Skill: Qualify Enterprise Deal
Instructions
Evaluate the deal against our enterprise qualification criteria.
A deal qualifies as enterprise if:
- Annual contract value exceeds $100k
- The buyer has a dedicated procurement process
- Technical evaluation involves more than one team
If two of three criteria are met but ACV is below threshold,
flag as "enterprise-adjacent" and route to the mid-market team.
Never auto-qualify a deal from a partner channel without
checking the partner tier in Salesforce.
Knowledge
- Enterprise qualification rubric (last updated: Jan 2026)
- Partner tier definitions
- Examples: Q3 deals that were misqualified and why
Tools
- Salesforce: pull deal details, update qualification status
- Slack: notify mid-market team on enterprise-adjacent flags
The instructions carry the nuance: the partner channel exception, the "enterprise-adjacent" category, the specific threshold. This is the kind of knowledge that used to live in one person's agent prompt.
The builder extracts the qualification logic from their agent into a skill and attaches it to the other three. Next quarter, when the criteria change again, one update. Fifteen agents sharing one deal qualification skill.
O(n) becomes O(1).
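That sharing model can be sketched as plain data: agents hold a reference to one skill object rather than carrying their own copy of its text, so a single edit is visible to every agent at once. The names here (`Skill`, `Agent`, and the qualification wording) are illustrative assumptions, not Dust's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a skill is shared by reference, not copied per agent.
@dataclass
class Skill:
    name: str
    instructions: str
    knowledge: list[str] = field(default_factory=list)
    tools: list[str] = field(default_factory=list)

@dataclass
class Agent:
    name: str
    skills: list[Skill] = field(default_factory=list)

# One skill, attached to several agents.
qualify = Skill(
    name="Qualify Enterprise Deal",
    instructions="ACV > $100k, dedicated procurement, multi-team eval",
    knowledge=["Enterprise qualification rubric"],
    tools=["Salesforce", "Slack"],
)
fleet = [Agent(n, skills=[qualify])
         for n in ("sales", "onboarding", "forecasting", "weekly-report")]

# Next quarter the criteria change: one edit, and every agent sees it.
qualify.instructions = "ACV > $150k, dedicated procurement, multi-team eval"
assert all(a.skills[0].instructions.startswith("ACV > $150k") for a in fleet)
```

The cost of a change becomes one write instead of n separate prompt edits, which is the O(n) to O(1) shift in miniature.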
But the reason we built skills isn't efficiency. It came from watching how agents work alongside people, and noticing what was missing.
From Tools to Fabric
Skills are a mechanism. But the thinking behind them starts with a broader observation about agents and people.
People share context. They build on each other's decisions. When someone on the sales team refines how they qualify deals, the rest of the org absorbs it through conversation, through documents, through osmosis. Knowledge flows between people. Among agents, it doesn't.
At some point we started asking: what would it look like if it did? Not agents as individual tools orbiting individual users, but agents as participants in the same collaborative fabric, interleaved with the humans around them, growing alongside them.
One of our operating principles at Dust is something we call flocking. It came from watching how the best teams coordinate. Not through top-down plans but through shared awareness, each person responding to the same signals, adjusting in real time. A flock moves together. A crowd is just a lot of individuals in the same space.
Fifteen agents a week, hundreds of interactions across a company. Are these agents a flock, or a crowd?
What Holds a Flock Together
A flock holds together through three forces.
Alignment: shared direction. Every agent on Dust is defined by a purpose and a goal. That structure is alignment. It's baked into what an agent is.
Separation: each element maintains its own distinct space, avoids colliding with others. For agents, this means not stepping on each other's territory. Two agents that handle similar tasks shouldn't carry overlapping, divergent versions of the same knowledge.
Cohesion: the group stays connected. When one element shifts, the rest respond. When someone discovers a better way to do something, the improvement reaches every agent that needs it.
Alignment emerges naturally from how agents are built on Dust. Separation and cohesion don't. Without a mechanism for them, agents drift apart with every edit, stepping on each other's territory, encoding the same tribal knowledge independently, each version subtly diverging.
Skills are that mechanism. They address both forces at once: keeping agents in their lane while keeping them connected.
Double the Agents, Double the Entropy
Here's the framing that clarified the problem for us.
When someone builds an agent, they're doing more than creating a tool. They're documenting how a certain process works at their company. How we qualify deals. How we look up customers. How we handle escalations. The agent becomes a living record of organizational knowledge. Not written in a wiki that goes stale, but embedded in something people use every day.
That's powerful. It's also where the problem starts.
Every agent extends the organizational surface area, encoding context that the rest of the system now needs to account for. And the entropy each agent creates is roughly fixed. Double the number of agents, double the entropy. Tribal knowledge that could be shared foundation is instead scattered across individual agents, each one a small silo.
O(n) complexity on organizational knowledge.
We saw this firsthand with our own agents. Two teams had each built agents that handled customer lookup. Same CRM, same fields, slightly different logic for handling merged accounts. When we changed how merged accounts were stored, one agent got updated. The other didn't. For weeks, the support team's agent was returning stale data on any customer who'd been through a merge. Nobody noticed because the agent still looked like it was working. Same knowledge, two copies, one drifting silently out of date.
That's the cost of a crowd.
Agents Are Append-Only
Skills solve the duplication problem. But they also solve a subtler one that we only recognized by watching agents evolve over time.
In practice, agents are mostly append-only. As needs expand, instructions grow. A new use case surfaces, you add handling for it. A new edge case appears, you document it. A new tool gets connected, you add instructions for how to use it alongside the existing ones. The prompt balloons.
Here's the cost. The space something occupies in an agent's instructions doesn't scale with how often it's used. A complex set of guidelines for how to use the Salesforce integration might account for 30% of the prompt but get invoked in 1% of conversations. Those tokens sit in context for the other 99%, taking up space, adding noise, pushing out information that actually matters right now.
Expanding an agent's capabilities can quietly degrade how well it does everything else. More instructions means more noise in every conversation, including the ones where most of those instructions are irrelevant. The model has more to sort through, more chances to lose track of what matters. You give the agent a new capability and find that an older one, the one people actually rely on, has gotten slightly worse.
Skills break this cycle because they load dynamically. An agent doesn't carry every skill at all times. When it encounters a task that matches a skill's domain, the skill loads: instructions, knowledge, tool configurations, all of it. The rest of the time, the agent stays lean, its context reserved for what's relevant to the current conversation.
The agent draws on shared capabilities without the weight of carrying them everywhere. The marginal complexity of a new agent is just what's genuinely new: its specific purpose, its particular audience, its edge cases. The shared foundation is already there.
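Dynamic loading can be sketched as per-conversation context assembly: the agent's base prompt stays small, and a skill's instructions enter the context only when the task matches its domain. The keyword matcher and the prompt strings below are assumptions for illustration; a real system would route tasks to skills semantically.

```python
# Hypothetical sketch of per-conversation context assembly:
# a skill's instructions load only when the task matches its domain.
BASE_PROMPT = "You are the sales assistant."

SKILLS = {
    "deal qualification": "Evaluate against enterprise criteria: ACV, procurement, multi-team eval.",
    "customer lookup": "Query the CRM; prefer the merged account if one exists.",
}

def build_context(task: str) -> str:
    # Naive keyword match stands in for real skill-to-task routing.
    loaded = [text for domain, text in SKILLS.items() if domain in task.lower()]
    return "\n\n".join([BASE_PROMPT, *loaded])

lean = build_context("Draft a follow-up email")          # no skill loads
heavy = build_context("Run deal qualification on Acme")  # one skill loads

assert "enterprise criteria" in heavy
assert "enterprise criteria" not in lean
```

In the conversations where a skill is irrelevant, its tokens simply aren't there: the 30%-of-prompt guidelines invoked in 1% of conversations stop taxing the other 99%.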
Two Motions
So far we've looked at what skills do for individual agents. Something more interesting emerges at the level of the fleet.
Skills create two motions, and together they're what turns a crowd into a flock.
The first is diffusion. A skill propagates outward. Someone builds a skill that captures how the company qualifies enterprise deals. It starts in one agent, then another team attaches the same skill instead of rebuilding the capability, then a third. The knowledge spreads across the fleet, carrying a consistent understanding of how this thing works here. Update the skill, and the update spreads too. One edit rippling outward to every agent that uses it.
This is the motion of infrastructure. Build once, benefit everywhere. Raise the floor so builders can focus on the ceiling.
The second is coalescence. The reverse motion. Builders scattered across the company have each figured out something good: the sales team's CRM methodology, the support team's customer lookup logic, the marketing team's tone guidelines. These bits of excellence live in individual agents and in the minds of the builders who created them, isolated from each other.
Skills give this knowledge somewhere to consolidate. The best approach to CRM entry gets extracted from the sales agent and turned into a shared skill. The support team's customer lookup gets formalized the same way. What was scattered coalesces into shared building blocks. Greatest hits, not lowest common denominator.
This is the motion of collective intelligence.
Diffusion without coalescence means you're propagating whatever someone happened to build first. Coalescence without diffusion means you're capturing excellence but not spreading it. The flock needs both: knowledge flowing inward as the best practices consolidate, knowledge flowing outward as those practices reach every agent that needs them. One pulling in, one pushing out. Not a one-time migration but a continuous process.
The Other Half
Right now, most teams building with agents are focused on making each one better. Better models, improved prompts, more capabilities. That work matters.
But it's only half the picture.
The other half is what happens between agents. How knowledge moves through a fleet. How an improvement in one place reaches every place that needs it. How the fifteenth agent someone builds is easier than the first, because the shared foundation is already there.
A smarter agent in isolation is a better tool. A fleet of agents that share knowledge, stay lean, and improve as one is something else. Connective tissue that gets stronger the more people build on it.
We think the companies that get this right will look different from the ones that don't. Not because their individual agents are smarter, but because the whole system learns. A flock, not a crowd. That distinction turns out to matter more than we expected.


