Anthropic Claude SDK vs Dust: Build or use a platform?

Davis Christenhuis
February 27, 2026
The Anthropic Claude SDK gives developers direct programmatic access to Claude models through client libraries in Python, TypeScript, and other languages. Once you understand what the SDK can do, the next question becomes whether to build your AI implementation from scratch or use a managed platform that handles the infrastructure for you.
This guide covers the SDK's capabilities, compares building with the raw SDK against using a platform like Dust, and helps you decide which approach fits your use case.

📌 TL;DR

Don't have a few minutes to explore the Anthropic Claude SDK? Here are the article's key takeaways:
  • The SDK gives you direct control: Client libraries in Python, TypeScript, and other languages let you build custom AI applications with complete control over Claude's model parameters, caching, and tool use.
  • Agent SDK extends the base capabilities: Beyond basic API calls, the Agent SDK adds built-in tools for file operations and command execution.
  • Dust eliminates infrastructure work: The platform works by maintaining persistent connections to your business tools (Slack, Notion, GitHub), with automatic synchronization that keeps data current as documents change.
  • Platform features reduce engineering overhead: Managed connectors, built-in RAG, collaborative agent building, and enterprise security come ready to use without custom development.
  • Your decision depends on what you're building: Use the SDK for consumer products where AI is your core offering; use Dust when connecting AI to internal company data across teams.

What is Anthropic Claude SDK?

The Anthropic Claude SDK is a set of client libraries that let developers communicate directly with Claude models through code. Available in Python, TypeScript, Java, Go, Ruby, C#, and PHP, the SDK provides programmatic access to Claude's capabilities without needing to build HTTP requests manually.
The SDK serves as the foundation for custom AI applications. You write code that sends prompts to Claude, receives responses, and handles everything from authentication to error management. This low-level access means you control how the model integrates into your application architecture, how you manage conversation state, and which features you enable.
The SDK ecosystem includes both the base client SDKs for API calls and the Claude Agent SDK, which adds built-in tools for file operations, command execution, and code editing. The Agent SDK gives you the same capabilities that power Claude Code, packaged as a library you can embed in your own applications.
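As a minimal sketch of what this low-level access looks like (assuming the anthropic Python package and an ANTHROPIC_API_KEY environment variable; the model name is an assumption), a basic request is just a few lines:

```python
import os

# Request parameters for the Messages API; the model name is an assumption.
params = dict(
    model="claude-sonnet-4-5",
    max_tokens=512,
    messages=[{"role": "user", "content": "Explain prompt caching in one sentence."}],
)

# Only call the API when credentials are configured.
if os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(**params)
    print(response.content[0].text)
```

Everything around this call — authentication, retries, conversation state — is yours to design, which is exactly the control (and the work) the SDK approach implies.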

Key features of the Claude SDK

The Anthropic Claude SDK provides direct access to model capabilities with full control over implementation:
  • Direct model parameter control: Set temperature, top_p, max tokens, and other inference parameters per request to fine-tune behavior for your specific use case.
  • Prompt caching with manual control: Use cache_control parameters to cache system prompts and conversation history. Cache reads cost just 10% of base input token prices, significantly reducing API costs for repeated context.
  • Tool use and function calling: Define custom tools that Claude can invoke, with support for parallel tool execution and programmatic tool calling from within code execution environments.
  • Streaming responses: Stream tokens as they generate to build responsive interfaces, with support for fine-grained tool streaming to reduce latency on large parameters.
  • Multi-session conversation management: Resume previous sessions using session IDs to maintain context across interactions, with support for forking sessions to explore different conversation branches.
  • Extended thinking and adaptive reasoning: Enable Claude's step-by-step reasoning for complex tasks using extended thinking mode, or let the model dynamically control thinking depth with adaptive thinking.
  • Platform flexibility: Deploy on Claude API, Amazon Bedrock, or Google Vertex AI by setting a provider environment variable and configuring the corresponding authentication credentials.
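To make the parameter control and caching features above concrete, here is a hedged sketch combining the two (assuming the anthropic Python package; the model name and system prompt are placeholders). The system prompt is marked with cache_control so repeated calls reusing the same prefix pay the discounted cache-read rate:

```python
import os

# System prompt marked for caching; later calls that reuse this exact
# prefix are billed at the cache-read rate rather than full input price.
system_blocks = [
    {
        "type": "text",
        "text": "You are a support assistant for Acme Corp.",  # stands in for long, stable context
        "cache_control": {"type": "ephemeral"},
    }
]

params = dict(
    model="claude-sonnet-4-5",  # model name is an assumption
    max_tokens=1024,
    temperature=0.2,            # direct inference-parameter control per request
    system=system_blocks,
    messages=[{"role": "user", "content": "How do I reset my password?"}],
)

if os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()
    response = client.messages.create(**params)
    print(response.usage)  # usage includes cache creation/read token counts
```

Caching pays off when the cached prefix is long and stable, such as a lengthy system prompt or shared document context reused across many requests.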

What is Dust?

Dust is an AI platform designed for enterprises to build and deploy specialized AI agents across their organization. Rather than writing code to connect AI models to your data sources, Dust provides managed infrastructure that handles data synchronization, agent orchestration, and deployment.
The platform works by maintaining live connections to your business tools and using those connections to give AI agents access to real company context. When someone asks an agent a question, Dust searches across connected sources in real time, retrieves relevant information, and feeds it to the model alongside the query.
This means agents can answer questions about last week's customer calls in Slack, reference documentation from Notion, and pull data from GitHub issues all in a single response, without anyone manually copying information between systems.

Key features of Dust

Dust provides enterprise AI infrastructure as a managed service:
  • Managed data connectors: Easy-to-set-up sync with Slack, Notion, GitHub, Google Drive, and other business tools, with automatic updates as documents change.
  • Advanced search across company data: Search through connected data sources using natural language queries, with results ranked by relevance and filtered by permissions automatically.
  • Built-in retrieval-augmented generation (RAG): Query company knowledge with vector embeddings and retrieval pipelines managed automatically, eliminating the need to build and maintain vector databases.
  • Collaborative agent builder: Define agent instructions, select data sources, and configure behavior through a visual interface accessible to non-technical team members.
  • Multi-agent orchestration: Build specialized agents for different use cases with dedicated context and instructions, then coordinate them for complex workflows.
  • Enterprise-grade security and compliance: GDPR compliant and SOC 2 Type II certified, with support for HIPAA compliance, SSO integration, and granular permissions that map to your organizational structure.
  • Model flexibility: Switch between Claude, GPT-5, Gemini, Mistral, and other models per agent without changing implementation, protecting against vendor lock-in.
💡 Building AI agents for your team? See how Dust handles data connections and deployment infrastructure. Try Dust free for 14 days →

When to use Dust over the raw SDK

The choice between building with the SDK and using Dust comes down to what problem you're solving. If your first engineering sprint involves connecting AI to Slack, Notion, and Google Drive, and then building authentication flows, sync mechanisms, and search infrastructure on top, you're already in platform territory.
Clay's operations team made this calculation when they needed agents that could answer questions using their sales enablement documentation, internal playbooks, and Slack conversations — to support a 4x growth of their GTM engineer team.
The platform approach works when you need agents across multiple teams without duplicating infrastructure. Marketing builds research agents, sales deploys prospecting assistants, engineering creates documentation bots—all using shared connectors and security policies but configured for each workflow. Non-technical users can modify agent instructions and add data sources without waiting for developer capacity.
The SDK makes more sense when you're building a consumer product where the AI is part of your core offering. If Claude powers your main SaaS workflow, or you're embedding AI into a developer tool or mobile app, you need the control to optimize every detail of the implementation. Applications with existing authentication and data infrastructure often find it simpler to add SDK calls to existing code rather than route through an external platform.
💡 Want to see how other teams use Dust to deploy agents fast? Read more customer case studies →

Frequently asked questions (FAQs)

Can I use both the Claude SDK and Dust together?

Yes, you can use both in the same organization for different use cases. Teams commonly prototype with the SDK to validate technical feasibility, then deploy production agents through Dust for company-wide access. You can also build custom integrations using the SDK that feed data into Dust or trigger Dust agents programmatically. The approaches are complementary rather than exclusive.

How does Dust handle Claude SDK updates and new features?

New Claude models become available in Dust shortly after Anthropic releases them, accessible directly from the agent builder. When Anthropic ships features like prompt caching, extended thinking, or new tool use capabilities, Dust exposes them through platform settings without requiring code changes or redeployment. Teams using Dust benefit from SDK improvements automatically while keeping their existing agent configurations.

What's the best way to get started with the Claude SDK Python implementation?

For basic API access, install the client SDK with pip install anthropic and use the Anthropic() client class. For building autonomous agents with built-in tools, install the Agent SDK with pip install claude-agent-sdk and use the query() function for one-off requests or ClaudeSDKClient for multi-turn conversations.
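A minimal Agent SDK sketch, assuming the claude-agent-sdk package and an ANTHROPIC_API_KEY environment variable (the prompt is a placeholder). Note that query() is asynchronous and streams messages as the agent works:

```python
import asyncio
import os

PROMPT = "List the Python files in the current directory."

async def main() -> None:
    # pip install claude-agent-sdk
    from claude_agent_sdk import query

    # query() yields a stream of messages (assistant turns, tool results,
    # and a final result message) as the agent executes the task.
    async for message in query(prompt=PROMPT):
        print(message)

# Only run the agent when credentials are configured.
if os.environ.get("ANTHROPIC_API_KEY"):
    asyncio.run(main())
```

For multi-turn conversations where you send follow-up prompts within the same session, the ClaudeSDKClient class is the better fit; query() suits one-off tasks.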

Does the Anthropic Claude Agent SDK support multi-session conversations?

Yes, the Agent SDK provides session management for maintaining conversation context across multiple interactions. The SDK automatically creates a session ID when you start a query, which you can capture and use later to resume the conversation with full history. You can also fork sessions to create branches that explore different approaches from the same starting point, useful for testing variations without modifying the original conversation.
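As a hedged sketch of resuming and forking (assuming the claude-agent-sdk package exposes resume and fork_session options as described in its documentation; the session ID below is a hypothetical placeholder, not a real value):

```python
import asyncio
import os

SESSION_ID = "example-session-id"  # hypothetical ID captured from an earlier run

async def resume_in_branch() -> None:
    # pip install claude-agent-sdk
    from claude_agent_sdk import ClaudeAgentOptions, query

    # resume picks up the earlier session's history; fork_session branches
    # it so the original conversation is left untouched.
    options = ClaudeAgentOptions(resume=SESSION_ID, fork_session=True)
    async for message in query(
        prompt="Try the alternative approach instead.",
        options=options,
    ):
        print(message)

# Only run the agent when credentials are configured.
if os.environ.get("ANTHROPIC_API_KEY"):
    asyncio.run(resume_in_branch())
```

Without fork_session, resuming continues the original conversation in place; forking is the safer choice when experimenting with variations.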

Where can I find the Anthropic Claude Agent SDK documentation?

The official documentation is available on the Anthropic website for both Python and TypeScript implementations. The docs cover installation, core concepts like tools and hooks, session management, and migration from earlier versions. GitHub repositories for both language implementations contain example code, detailed changelogs, and community issue tracking where you can see common implementation patterns and solutions to technical challenges.