An Engineer's Day at Dust: What It Actually Looks Like

At Dust, engineers don't wait for product requirements. There are no product managers handing down specs. Engineers are accountable for a project end to end -- from the first idea to production monitoring after launch. Writing code is one part of the job. Understanding the user problem, shaping the solution, making the architectural tradeoffs, coordinating the rollout: all of that is also the job.
The technical scope is broad. The stack runs from a React frontend to a Rust core handling retrieval and vector search. Engineers work across all of it, in a fully open-source codebase. One day we're fixing a subtle edge case in the vector indexing pipeline. The next we're in a user discovery session understanding why a data sync isn't landing the way we expect.
There's one more thing that makes Dust engineers different from most: the product we ship is the tool we use to ship it.
@eng, @q, @incident -- the Dust agents that run inside our engineering workflow are used daily by the team. When something feels off, we're not inferring from user feedback. We notice it ourselves, file the issue, and fix it. The loop from "I use this" to "I built this" is measured in days.

Here is what a typical day looks like.
9:00 -- Starting the day
Alex opens Slack before touching any code.
A few pull request comments came in. One needs a response. One is ready to merge. There's also a thread about a user issue that came in through support. Alex is the lead on the relevant feature. They decide it'll be their first task of the morning.
No standup. No ticket grooming. No sprint planning. Just the work, and the context we need to do it well.
10:00 -- Into the first task
Before writing anything, Alex opens Dust and calls @eng -- a custom agent with access to the team's runbooks, architecture documents, and incident history:

"@eng What do we know about the timeout behavior in the data sync layer under heavy load? Any runbooks or past incidents?"
@eng returns three relevant runbooks, a Slack thread from six weeks ago where the team debugged a similar issue, and the relevant section of the architecture document. In one query, Alex has the full context that would have taken 20 minutes to piece together manually.

Generic AI tools know the internet. @eng knows our systems.

11:00 -- Parallel work with AI agents
dust-hive -- an internal tool the team built -- gives each development environment its own isolated copy of the full stack: its own database, its own instance of Qdrant (the vector database Dust uses for search), its own background workers, etc. Multiple AI coding agents can work on different features simultaneously, each in their own environment. Alex is running two of them this morning, switching between them.
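dust-hive itself is internal, but the isolation it provides can be sketched. The snippet below is a hypothetical illustration (none of these names, ports, or defaults come from the actual tool): each environment slot gets its own database name, Qdrant collection prefix, and port range, so two agents' stacks never collide.

```typescript
// Hypothetical sketch of per-environment isolation, in the spirit of
// dust-hive. All names and port conventions here are invented for
// illustration; the real internal tool is not shown.
interface HiveEnv {
  dbName: string;
  qdrantCollectionPrefix: string;
  frontendPort: number;
  qdrantPort: number;
}

function makeHiveEnv(slot: number): HiveEnv {
  const offset = slot * 100; // keep each environment's ports in a disjoint range
  return {
    dbName: `dust_hive_${slot}`,
    qdrantCollectionPrefix: `hive${slot}_`,
    frontendPort: 3000 + offset,
    qdrantPort: 6333 + offset, // 6333 is Qdrant's default HTTP port
  };
}

// Two agents, two fully separate stacks: slot 1 and slot 2 share nothing.
const envA = makeHiveEnv(1);
const envB = makeHiveEnv(2);
console.log(envA.dbName, envA.frontendPort); // dust_hive_1 3100
console.log(envB.dbName, envB.frontendPort); // dust_hive_2 3200
```

The design point is that isolation is cheap once it is mechanical: spinning up a second environment is a function call, not a provisioning project.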
Alex reviews what one agent built. It's a 280-line module. Heuristics kick in immediately: does this follow the team's patterns? Are error cases handled? Does the abstraction level match surrounding code?
The answer is mostly yes. One thing is off: the agent chose a library for retry logic that the codebase already implements in a more opinionated way. Technically correct. Not idiomatic Dust. Alex leaves a comment, asks it to refactor, and moves on while the agent handles it.
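The kind of in-house pattern Alex is protecting can be sketched. This is a hypothetical `withRetry` helper, not Dust's actual implementation: the point is that a codebase like this already encodes its own retry policy -- capped attempts, exponential backoff -- so an agent importing a generic retry library duplicates a decision the team already made.

```typescript
// Hypothetical in-house retry helper (illustrative only; Dust's real
// implementation is not public). Encodes one opinionated policy:
// a fixed attempt cap and exponential backoff between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, baseDelayMs = 100 }: { attempts?: number; baseDelayMs?: number } = {}
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Exponential backoff: baseDelayMs, 2x, 4x, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Reviewing for "technically correct but not idiomatic" mostly means checking that new code reaches for a helper like this instead of bringing its own.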
Knowing the codebase well enough to catch that kind of subtle drift -- that's the job.
12:30 -- @q for a hard question
Alex reaches a design decision worth thinking through carefully. Time to call @q in Dust -- a code assistant agent with access to the full codebase, all pull request history, and every design document:

"@q What's the current pattern for handling multi-region sync in the data layer? I'm weighing two approaches and want to understand what existing code implies about our direction."
@q comes back with relevant code references, the design decision made nine months ago and why, and a clear analysis of what each approach would mean given the current architecture. The answer is grounded in Dust's actual codebase and pull request history.

The call gets made in ten minutes. The agent implements it.
13:00 -- Lunch
Lunch with the whole team -- a real lunch, not a working one.
15:00 -- An alert fires
A monitoring alert lights up. Elevated latency in a Rust service on a specific search path.
First message goes to @eng in Slack: "Latency spike on the search path, started about 15 minutes ago. What do the runbooks say?"

@eng pulls the relevant runbook, surfaces two previous incidents with similar signatures, and identifies the most likely cause based on the pattern. Alex starts from context, not from scratch.

The diagnosis takes under 20 minutes: an index configuration change in a recent deploy interacting unexpectedly with a specific query pattern. A fix gets prepared.
While that's happening, @incident -- a Dust agent set up to handle incident communications -- drafts the user-facing status update, calibrated to the actual impact and consistent with past communications. Alex reviews it, adjusts one sentence, and it goes out.

The fix is deployed. The alert resolves. The postmortem draft is already taking shape.
17:00 -- Merging and shipping
The feature is ready. Alex runs the full check sequence in the hive environment -- linting, type checking, the test suite -- then moves the pull request to review.
A colleague does the code review. Merged.
Deployment is autonomous: the team trusts Alex to deploy when needed. Two clicks, and 15 minutes later it's live. Alex watches the dashboards for a while. Everything is clean.
From conception to production: Alex wrote maybe 20% of the code. Made 100% of the decisions.
18:00 -- Closing the loop
Before closing the laptop, two things.
First, @issue -- a Dust agent that creates well-structured GitHub issues from Slack conversations -- files the follow-up tasks. Alex pastes the thread where the team discussed next steps after the incident. Two issues filed, nothing lost.

Second, a quick scan of the open hive environments. A few comments left for agents to pick up in the morning. Laptop closed.
What this day actually means
Working with AI agents doesn't replace deep technical knowledge. It changes where that knowledge matters. You're responsible for code you didn't write, architecture you didn't draft, migrations you didn't author. You catch the mistakes. You make the calls. You ship the software. The faster the agents move, the more precise your judgment needs to be.
And because the product you're shipping is the same tool you use every day, there's no distance between "what users need" and "what you experience." When something is wrong, you know it. When something is right, you know that too.
If this is the kind of work that interests you, we're hiring!