Changelog

Recent changes and improvements to Dash0

Join us on a journey through Observability and OpenTelemetry as we improve our product. Stay connected with our Newsletter or RSS for fresh updates.

Mar 23, 2026

Audit Log Export

Dash0 now supports exporting audit logs to external systems using the OpenTelemetry Protocol (OTLP). Stream organization-level audit events, including user actions, configuration changes, and access activity, to any OTLP-compatible backend, SIEM, or long-term storage system. Built on OpenTelemetry standards, this capability integrates natively with your existing observability and compliance tooling, giving security and platform teams full control over where audit data lives and how long it is retained.

How it Works

Audit log export is configured in Organization Settings > Integrations. Once enabled, Dash0 continuously streams audit events as structured OpenTelemetry logs to any OTLP-compatible endpoint you specify. Each audit log entry includes attributes such as the acting user, the action performed, the affected resource, and a timestamp.

Audit log export configuration dialog, allowing configuration of an OTLP endpoint, authorization mechanism, and custom headers.

This means you can route audit logs into any system that speaks OTLP: a self-hosted OpenTelemetry Collector for further processing, a compliance-focused long-term storage backend, or simply back into Dash0 itself for correlation with
your operational telemetry.

Why OTLP?

Rather than building a proprietary webhook or CSV export, we chose OTLP as the export format. This keeps audit data compatible with the same pipelines and tooling you already use for traces, metrics, and logs. No custom parsers, no format translations, no additional agents required. If your infrastructure already ingests OTLP, audit logs slot right in.
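
For illustration, if you route exported audit logs through your own OpenTelemetry Collector for further processing, a minimal logs pipeline might look like the following sketch. The endpoints are placeholders; the actual Dash0 export destination is configured in the UI as described above.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlphttp:
    # Placeholder: your SIEM, archive, or observability backend
    endpoint: https://otlp.backend.example.com

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp]
```

Because audit events arrive as ordinary OTLP logs, the same Collector can also apply processors (filtering, attribute redaction, batching) before forwarding them.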

The screenshot below shows how the exported audit logs appear as structured log entries inside Dash0.

The logging area in Dash0 showing audit logs

Mar 22, 2026

Agent mode for the Dash0 CLI

AI coding agents are rewriting how teams build and operate software. But most CLIs were designed for humans — with colored tables, interactive prompts, and prose help text that agents struggle to parse reliably. Agent mode makes the Dash0 CLI a first-class tool for AI coding agents, with zero configuration required.

What changed

Agent mode transforms five aspects of the CLI for machine consumption:

  • JSON output by default: All data retrieval commands (list, get, query, etc.) return JSON instead of tables, without needing the -o json flag.
  • Structured help: The --help flag returns a JSON object with command metadata, flags, subcommands, and usage patterns, so agents can discover capabilities programmatically.
  • Structured errors: Errors are emitted as JSON objects on stderr, with separate error and hint fields that agents can parse and act on.
  • No confirmation prompts: Destructive operations like delete and remove skip interactive prompts automatically, just as if the --force flag were passed.
  • No ANSI colors: All escape codes are suppressed, as with the --color none flag, so output is clean for downstream parsing.
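
The structured-error format lends itself to simple scripting. A minimal sketch follows; the JSON payload here is invented for illustration, and only the separate error and hint fields are part of the format described above.

```shell
# Agent mode emits errors as JSON on stderr with separate "error" and
# "hint" fields. This payload is illustrative, not actual CLI output.
err='{"error":"dashboard not found","hint":"list dashboards to see valid IDs"}'

# An agent (or script) can parse and act on each field directly:
echo "$err" | jq -r '.error'
echo "$err" | jq -r '.hint'
```

The same pattern works for any JSON-aware consumer; jq is used here only as a stand-in for whatever parser the agent brings along.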

Zero-configuration activation

Agent mode auto-activates when it detects a known AI agent environment variable. It recognizes Claude Code, Cursor, Windsurf, Cline, Aider, GitHub Copilot, OpenAI Codex, and any MCP server session.

For explicit control, enable it with the --agent-mode flag or the DASH0_AGENT_MODE=true environment variable; set DASH0_AGENT_MODE=false to override auto-detection and disable it.

Why this matters

When an AI coding agent queries your dashboards, investigates error logs, or applies asset definitions, it needs structured, predictable output — not tables padded for terminal width. Agent mode removes that friction. The CLI becomes a tool that agents can drive as naturally as a human types commands, with every response machine-readable by default.

Combined with the comprehensive command reference, consistent naming conventions, and profile-based authentication, the Dash0 CLI is designed to be the interface between your AI coding agent and your observability platform.


Mar 21, 2026

The Dash0 Terraform Provider is now on the OpenTofu Registry

The Dash0 Terraform Provider is now also available on the OpenTofu registry.

That's it. That's the post.

The Terraform Provider for Dash0 in all its glory in the OpenTofu registry

Mar 18, 2026

New Trace Visualization: Trace Graph

Visualize the full topology of your distributed traces, including dependencies, cross-boundary interactions, and service relationships, for faster root cause analysis.

What changed

The Trace Graph is a new view in Dash0 that transforms a trace into a functional architecture diagram. Each node represents a service or resource involved in the trace, connected by edges that show how they invoke one another, including asynchronous relationships across span links.

Why this matters

The Waterfall and Flame Graph are great for drilling into latency and execution details, but they make it hard to see the big picture. The Trace Graph shows you how your system actually behaved during a request: which services called which, where errors propagated, and how independent flows connected. It's the fastest way to understand the shape of a trace before diving into the details.


Mar 18, 2026

Changing filter operators now preserves values

Change the operator on an existing filter and keep the values you already picked. No more re-selecting the same values.

Editing a filter operator used to be a small annoyance: pick a new operator, watch your values disappear, and re-select them all over again.

Now that friction is gone.

When you change an operator in an existing filter, Dash0 keeps your selected values intact whenever they're compatible with the new operator. Switch from "is" to "is not"? Your values carry over. Change from "is one of" to "is not one of"? Same thing. Going from a multi-select operator to a single-value operator like "contains" works too, as long as you only had one value selected.

A small change, but one less thing interrupting your flow when building filters.


Mar 13, 2026

Teach your AI coding agent OpenTelemetry best practices

AI coding agents are transforming how teams write code, but they have a blind spot: observability. When an agent scaffolds a new service or adds a feature, it rarely sets up instrumentation — and when it does, the result is often noisy, non-standard telemetry that obscures more than it reveals. Dash0 Agent Skills close that gap.

Dash0 Agent Skills are packaged instructions that plug into any agent that supports the Agent Skills format — including Claude Code, Cursor, and Windsurf — and give it the knowledge to emit high-quality, cost-efficient OpenTelemetry telemetry from the start, or to fix existing setups. All of this guidance applies to any OpenTelemetry-based observability setup: it is grounded in the semantic conventions and in current and emerging OpenTelemetry best practices.

One command to install

```sh
npx skills add dash0hq/agent-skills
```

Once installed, the skills activate automatically whenever the agent works on a relevant task: instrumenting application code, configuring a Collector pipeline, choosing span attributes, or writing OTTL expressions.

Four skills, one coherent observability strategy

otel-instrumentation: Expert guidance for emitting traces, metrics, and logs across 10 languages and frameworks: Node.js, Go, Python, Java, Scala, .NET, Ruby, PHP, Browser, and Next.js. The agent learns when to use which signal, how to set resource attributes, how to handle errors and span status, and how to keep cardinality under control. Kubernetes-specific guidance covers the Downward API, environment variables, and pod-spec configuration.


otel-collector: Everything the agent needs to configure and deploy the OpenTelemetry Collector: receivers, processors, exporters, and pipelines. Covers agent-vs-gateway deployment patterns, memory limiting, batching, tail sampling, RED metrics, and four deployment methods — raw manifests, the Collector Helm chart, the OpenTelemetry Operator, and the Dash0 Operator.


otel-semantic-conventions: A decision framework for selecting, placing, and reviewing OpenTelemetry semantic convention attributes. The agent searches the Attribute Registry before inventing custom attributes, places them at the correct telemetry level, and flags common mistakes like high-cardinality metric dimensions.


otel-ottl: Guidance for writing OpenTelemetry Transformation Language expressions for the Collector's transform, filter, and routing processors. Covers syntax, contexts, converters, path expressions, and common patterns like sensitive-data redaction and attribute enrichment.
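
To give a flavor of what the otel-ottl skill targets, here is a small transform-processor sketch that uses OTTL to redact credential-like values from log bodies. The regex and attribute choices are illustrative, not taken from the skill itself.

```yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          # Redact anything that looks like a password in the log body
          - replace_pattern(body, "password=[^\\s]+", "password=***")
```

Placed in a Collector logs pipeline, this statement rewrites matching substrings before the data leaves your infrastructure.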

Prescriptive, not descriptive

Every rule is written so that an agent can act on it without human interpretation. Decisions are enumerable — lookup tables and explicit criteria replace open-ended advice. Code examples accompany every actionable rule, showing both the correct pattern and the anti-pattern.

Get started

Install the skills, open your agent, and ask it to add OpenTelemetry instrumentation to your app. The agent handles the rest: correct resource attributes, properly set span status codes and messages, the right metric instrument type, a Collector pipeline that actually works, and much more.

Read our guide at https://www.dash0.com/guides/teach-your-ai-coding-agent-opentelemetry


Mar 12, 2026

Automatically monitor all namespaces with the Dash0 operator

Monitor the complete Kubernetes cluster with one command

The Dash0 Kubernetes operator just became much more powerful. Previously, you had to enable monitoring separately for each namespace. Now you can let the Dash0 operator automate this! This is very useful if you want to monitor all namespaces in your cluster. It is also useful if you create new namespaces frequently and want to have them monitored right away, without additional setup.

Refer to the operator's documentation for more information.


Mar 10, 2026

Signal Type Restrictions for Auth Tokens

Auth tokens already support restricting ingestion to specific datasets. Now, tokens can also be restricted to individual signal types — logs, spans, metrics, profiles, or web events — giving you fine-grained control over what data each token is allowed to send.

When creating or editing an auth token in Settings > Auth Tokens, a new Signal type dropdown appears between the dataset restriction and permissions fields. Select a signal type to lock the token to that single type of telemetry, or leave it set to "Unrestricted" to accept all signals as before. The restriction is enforced server-side in the collector pipeline: any data that does not match the permitted signal type is silently dropped.

Why this matters for Website monitoring

Website monitoring in Dash0 is powered by the Dash0 Web SDK, which sends both spans and logs over the same OTLP connection. In backend observability, each signal type typically arrives through a separate pipeline with its own token. But the browser is different — a single Web SDK instance emits page loads, web vitals, user interactions (as spans), and console errors or custom events (as logs) through one shared auth token.

With signal type restrictions, you can now issue a dedicated token that only accepts web events. This is valuable for two reasons:

1. Security boundary. The auth token embedded in your website's JavaScript is publicly visible to anyone who opens the browser's developer
tools. A token restricted to web events cannot be abused to ingest backend traces, metrics, or profiles into your organization — even if it is
extracted from the page source.
2. Blast-radius reduction. If a web-events-only token is compromised or accidentally shared, the damage is contained to a single signal type.
Backend observability data, alerting metrics, and profiling data remain unaffected.

Interplay with other features

Signal type restrictions combine with the existing dataset restriction — a token can be scoped to both a specific dataset and a specific signal type for maximum isolation.

And, by the way, you can retrofit your existing auth tokens with the new restriction, too!


Mar 10, 2026

Smarter Triage Defaults: Error Comparison Mode

When investigating issues in production, the first question is almost always "what's different about the errors?" Until now, the Triage panel in the Trace Explorer, Logs Explorer, and Web Events Explorer defaulted to comparing against the current timeframe — a useful but generic baseline that required an extra click to switch to error comparison. Starting today, all three explorers default to error comparison mode. The triage panel immediately highlights which attributes correlate with errors, so you can jump straight into root-cause analysis without changing any settings.

What changed

Error comparison as the default baseline. Opening the Triage tab in the Trace Explorer, Logs Explorer, or Web Events Explorer now automatically
compares error spans, error logs, or failed web events against their non-error counterparts. There is no longer a need to manually switch the
analysis method.

Conflicting filter detection. If you already have a general filter restricting data to errors (e.g., otel.span.status.code = ERROR), the error
comparison becomes meaningless — every record matches both the selection and the baseline. When this happens, the triage panel now detects the
conflict and shows a "Remove filter" button that strips the redundant filter in one click, restoring a meaningful comparison.

Why this matters

Error triage is the most common entry point when investigating incidents. Defaulting to error comparison mode removes a step from the most
frequent workflow, while the new conflicting-filter detection ensures users are never stuck on a "No major differences" screen without guidance
on how to fix it.


Mar 9, 2026

Stack Trace Translation with Source Maps

Stack traces from minified JavaScript are now automatically translated into their original source locations.

The translation works automatically as long as your JavaScript files and source maps are reachable via HTTP(S).

If your build tool generates source maps and deploys them alongside your JavaScript bundles, no additional configuration is required.

If your source maps are hosted behind authentication, you can configure a Source Map Integration to provide credentials (Basic Auth, Bearer token, or custom headers) for specific URL prefixes.

What’s included:

  • Automatic translation of minified JavaScript stack traces
  • Support for public and authenticated source maps
  • URL prefix–based integration configuration
  • Per-frame translation with detailed error feedback
  • Original stack trace always preserved

Each frame in a stack trace is translated independently. If a frame cannot be resolved, it remains in its original form and shows a warning icon. Hovering over the icon reveals the reason (for example, missing source map, authentication error, or file not found).

You can switch between the original and translated stack trace at any time using the toggle in the UI.
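
A quick way to sanity-check the reachability requirement above is to confirm that the deployed bundle ends with a sourceMappingURL comment pointing at its map. A sketch with placeholder file names:

```shell
# Simulate a deployed, minified bundle plus the sourceMappingURL comment
# that HTTP(S)-based source map resolution relies on (names are placeholders):
mkdir -p dist
printf 'console.log("hi");\n//# sourceMappingURL=app.min.js.map\n' > dist/app.min.js

# The bundle's last line should point at its map:
tail -n 1 dist/app.min.js
```

If that comment is missing, check your bundler's source map settings; the map must be deployed next to the bundle (or served from the URL the comment references) for translation to succeed.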
