
Changelog

Recent changes and improvements to Dash0

Join us on a journey through Observability and OpenTelemetry as we improve our product. Stay connected with our Newsletter or RSS for fresh updates.

Mar 30, 2026

Fill operator support in PromQL

Binary operations in PromQL silently drop results when one side of the expression has no matching time series. If you divide requests by capacity, and a new instance hasn't reported capacity yet, that instance vanishes from your chart — no error, no gap, just missing data. This makes dashboards unreliable and alerts blind to exactly the moments that matter most. Dash0 now supports the fill modifier for binary operations, giving you explicit control over what happens when one side of an expression has no match.

It is hard to troubleshoot why a particular PromQL expression does not return time series, and the workarounds using the or operator have historically been difficult to learn.

Dash0 worked with the Prometheus community to close these gaps, and we have just shipped the result.

How it works

The fill modifier lets you substitute a default value for missing series in a binary operation. Three variants cover every scenario:

  • fill(<value>) applies the default to whichever side is missing, e.g.
    http_requests_total / fill(1) capacity_total
  • fill_left(<value>) substitutes only when the left-hand side has no match:
    (vector(0) > 1) / fill_left(0) requests_total (the left side is always empty, and the fill operator handles it)
  • fill_right(<value>) substitutes only when the right-hand side has no match: requests_total / fill_right(1) (vector(0) > 1) (the right side is always empty, and the fill operator handles it)

Note, however, that when both sides of an operator have no match, no time series is generated.
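
A minimal worked sketch, reusing the metric names from the example above: a newly started instance that exposes http_requests_total but has not yet reported capacity_total is silently dropped by a plain division, while the filled variant keeps it visible.

# Instances without a capacity_total sample vanish from the result.
http_requests_total / capacity_total

# With fill(1), the missing capacity_total is substituted with 1, so the
# instance stays on the chart instead of silently disappearing.
http_requests_total / fill(1) capacity_total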

(We did wonder if the fill behaviour should have been the default, using fill(0) for addition and subtraction, fill(1) for multiplication, and fill_right(1) for division, but that would have been a breaking change for the Prometheus community.)

What changed

The fill operator is now available via the UI and the API in Dash0. Enjoy :-)

Want to know more?

For a detailed walkthrough of the problem and the fill solution, see the excellent Filling in Missing Series in Binary Operator Matching blog post from PromLabs.

Want to learn more about PromQL in general?

We cannot recommend enough the Understanding PromQL course by PromLabs. Taking that course is effectively a rite of passage for Dash0 engineers and product people :-)


Mar 30, 2026

Convert Span Events to Log Records

Span events are being deprecated in OpenTelemetry. With one click, you can transform them into logs in Dash0.

A deprecation long overdue

OpenTelemetry is deprecating span events in favour of log records linked with trace context. Given just how many times we are asked "Should it be a span event or a log record?" by our customers, we are overjoyed at this streamlining in OpenTelemetry.

Span events exist because, early in the OpenTelemetry project, tackling logs as a separate signal felt like biting off too much. But events like uncaught exceptions still needed a place in the model, and so span events were introduced.

Now that the OpenTelemetry logs specification is stable, and implementations are complete or well underway in virtually all SDKs, span events are no longer necessary. Realistically, though, instrumentations that generate span events will remain in use for a considerable time.

The new world, today

To help you with the transition, we have introduced an opt-in, per-dataset option to automatically convert span events into correlated log records during ingestion.

The setting in Datasets to turn on the conversion

The resulting log records have the correct trace context on them, and "just work" the way you would expect. To let you see how many span events have been converted to logs, we add the dash0.span_event.converted = true attribute to each converted log record.
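
For illustration, a converted record could look roughly like this in OTLP/JSON terms. This is a hypothetical sketch: the values are invented and the exact attribute encoding is our assumption, but the shape follows the OTLP log data model, with the trace context and the conversion marker attached.

{
  "timeUnixNano": "1743321600000000000",
  "severityText": "ERROR",
  "body": { "stringValue": "exception" },
  "traceId": "4bf92f3577b34da6a3ce929d0e0e4736",
  "spanId": "00f067aa0ba902b7",
  "attributes": [
    { "key": "exception.type", "value": { "stringValue": "ValueError" } },
    { "key": "dash0.span_event.converted", "value": { "boolValue": true } }
  ]
}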

That is, you do not need to modify your instrumentation; we just "fix your telemetry" for you. (But if you do want to fix your instrumentation, the agent skills we published a few weeks ago can help with the migration!)

Opting in to converting span events to logs will also hide the "Span events" column in your "All spans" Tracing view, but if you want, you can bring it back by adding the Span event column to a custom view.

No impact on pricing

Span events and log records have always been priced equivalently, as we saw this transition coming from a long way off.

The only effect on your bill is that the "Spans and span events" line will decrease by exactly the amount the "Logs" line increases, based on how many span events you send.


Related Logs

Investigating a single log record rarely tells the full story. The new Related Logs tab surfaces the surrounding context — every log from the same resource and trace, organized chronologically around the record you're inspecting — so you can reconstruct what happened without leaving the sidebar.

How it works

Open any log record in the sidebar and switch to the Related Logs tab. Dash0 automatically queries a ±30-minute window centered on the active log and displays the results in a scrollable timeline:

The Related Logs feature in action

The active log is pinned in the center, clearly marked with a THIS LOG badge and a highlighted background. Older logs appear above, newer logs below, each annotated with a relative time offset (e.g. -2m 15s, +50s) so you can instantly gauge how far apart events are. Infinite scroll loads additional records in both directions as you explore, without ever leaving the panel.

Two ways to correlate

Related Logs supports two independent correlation modes that can be combined:

  • Resource correlation finds all logs emitted by the same service or infrastructure component, giving you a local timeline of everything that resource was doing.
  • Trace correlation finds all logs sharing the same trace ID, letting you follow a single request across service boundaries.

When both are available, toggle each mode on or off with a checkbox to narrow or widen the view. Correlation badges on each log entry make it clear why a record was included.


Mar 23, 2026

Audit Log Export

Dash0 now supports exporting audit logs to external systems using the OpenTelemetry Protocol (OTLP). Stream organization-level audit events, including user actions, configuration changes, and access activity, to any OTLP-compatible backend, SIEM, or long-term storage system. Built on OpenTelemetry standards, this capability integrates natively with your existing observability and compliance tooling, giving security and platform teams full control over where audit data lives and how long it is retained.

How it works

Audit log export is configured in Organization Settings > Integrations. Once enabled, Dash0 continuously streams audit events as structured OpenTelemetry logs to any OTLP-compatible endpoint you specify. Each audit log entry includes attributes such as the acting user, the action performed, the affected resource, and a timestamp.

Audit log export configuration dialog, allowing configuration of an OTLP endpoint, authorization mechanisms, and custom headers.

This means you can route audit logs into any system that speaks OTLP: a self-hosted OpenTelemetry Collector for further processing, a compliance-focused long-term storage backend, or simply back into Dash0 itself for correlation with your operational telemetry.
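
As a sketch of the self-hosted Collector case: a minimal configuration that accepts the exported audit logs over OTLP and hands them to a logs pipeline could look like this (the debug exporter is a stand-in; swap in the exporter for your SIEM or storage backend).

# Minimal OpenTelemetry Collector configuration for receiving audit logs.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  # Stand-in exporter that prints received records; replace with your backend.
  debug: {}

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [debug]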

Why OTLP?

Rather than building a proprietary webhook or CSV export, we chose OTLP as the export format. This keeps audit data compatible with the same pipelines and tooling you already use for traces, metrics, and logs. No custom parsers, no format translations, no additional agents required. If your infrastructure already ingests OTLP, audit logs slot right in.

The screenshot below shows how the exported audit logs appear as structured log entries inside Dash0.

The logging area in Dash0 showing audit logs

Mar 22, 2026

Agent mode for the Dash0 CLI

AI coding agents are rewriting how teams build and operate software. But most CLIs were designed for humans — with colored tables, interactive prompts, and prose help text that agents struggle to parse reliably. Agent mode makes the Dash0 CLI a first-class tool for AI coding agents, with zero configuration required.

What changed

Agent mode transforms five aspects of the CLI for machine consumption:

  • JSON output by default: All data retrieval commands (list, get, query, etc.) return JSON instead of tables, without needing the -o json flag.
  • Structured help: The --help flag returns a JSON object with command metadata, flags, subcommands, and usage patterns, so agents can discover capabilities programmatically.
  • Structured errors: Errors are emitted as JSON objects on stderr, with separate error and hint fields that agents can parse and act on.
  • No confirmation prompts: Destructive operations like delete and remove skip interactive prompts automatically, just as if the --force flag were passed.
  • No ANSI colors: All escape codes are suppressed, just as with the --color none flag, so output is clean for downstream parsing.

Zero-configuration activation

Agent mode auto-activates when it detects a known AI agent environment variable. It recognizes Claude Code, Cursor, Windsurf, Cline, Aider, GitHub Copilot, OpenAI Codex, and any MCP server session.

For explicit control, enable it with the --agent-mode flag or the DASH0_AGENT_MODE=true environment variable; set DASH0_AGENT_MODE=false to override auto-detection and disable it.
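
For example, forcing agent mode on for a single invocation yields machine-readable output (assuming the CLI is invoked as dash0; the binary name is the only assumption here).

# Force agent mode via the environment variable; --help now returns a
# JSON object describing flags, subcommands, and usage patterns.
DASH0_AGENT_MODE=true dash0 --help

# Equivalent, using the explicit flag.
dash0 --agent-mode --help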

Why this matters

When an AI coding agent queries your dashboards, investigates error logs, or applies asset definitions, it needs structured, predictable output — not tables padded for terminal width. Agent mode removes that friction. The CLI becomes a tool that agents can drive as naturally as a human types commands, with every response machine-readable by default.

Combined with the comprehensive command reference, consistent naming conventions, and profile-based authentication, the Dash0 CLI is designed to be the interface between your AI coding agent and your observability platform.


Mar 21, 2026

The Dash0 Terraform Provider is now on the OpenTofu Registry

The Dash0 Terraform Provider is now also available on the OpenTofu registry.

That's it. That's the post.

The Terraform Provider for Dash0 in all its glory in the OpenTofu registry

Mar 18, 2026

New Trace Visualization: Trace Graph

Visualize the full topology of your distributed traces, including dependencies, cross-boundary interactions, and service relationships, for faster root cause analysis.

What changed

The Trace Graph is a new view in Dash0 that transforms a trace into a functional architecture diagram. Each node represents a service or resource involved in the trace, connected by edges that show how they invoke one another, including asynchronous relationships across span links.

Why this matters

The Waterfall and Flame Graph are great for drilling into latency and execution details, but they make it hard to see the big picture. The Trace Graph shows you how your system actually behaved during a request: which services called which, where errors propagated, and how independent flows connected. It's the fastest way to understand the shape of a trace before diving into the details.


Mar 18, 2026

Changing filter operators now preserves values

Change the operator on an existing filter and keep the values you already picked. No more re-selecting the same things twice.

Editing a filter operator used to be a small annoyance: pick a new operator, watch your values disappear, and have to re-select them all over again.

Now that friction is gone.

When you change an operator in an existing filter, Dash0 keeps your selected values intact whenever they're compatible with the new operator. Switch from "is" to "is not"? Your values carry over. Change from "is one of" to "is not one of"? Same thing. Going from a multi-select operator to a single-value operator like "contains" works too, as long as you only had one value selected.

A small change, but one less thing interrupting your flow when building filters.


Mar 13, 2026

Teach your AI coding agent OpenTelemetry best practices

AI coding agents are transforming how teams write code, but they have a blind spot: observability. When an agent scaffolds a new service or adds a feature, it rarely sets up instrumentation — and when it does, the result is often noisy, non-standard telemetry that obscures more than it reveals. Dash0 Agent Skills close that gap.

Dash0 Agent Skills are packaged instructions that plug into any agent that supports the Agent Skills format — including Claude Code, Cursor, and Windsurf — and give it the knowledge to emit high-quality, cost-efficient OpenTelemetry telemetry from the start, or to fix existing setups. All of it is guidance that will serve you well with any OpenTelemetry-based observability setup, grounded in the semantic conventions and in current and upcoming OpenTelemetry best practices.

One command to install

npx skills add dash0hq/agent-skills

Once installed, the skills activate automatically whenever the agent works on a relevant task: instrumenting application code, configuring a Collector pipeline, choosing span attributes, or writing OTTL expressions.

Four skills, one coherent observability strategy

otel-instrumentation: Expert guidance for emitting traces, metrics, and logs across 10 languages and frameworks: Node.js, Go, Python, Java, Scala, .NET, Ruby, PHP, Browser, and Next.js. The agent learns when to use which signal, how to set resource attributes, how to handle errors and span status, and how to keep cardinality under control. Kubernetes-specific guidance covers the Downward API, environment variables, and pod-spec configuration.


otel-collector: Everything the agent needs to configure and deploy the OpenTelemetry Collector: receivers, processors, exporters, and pipelines. Covers agent-vs-gateway deployment patterns, memory limiting, batching, tail sampling, RED metrics, and four deployment methods — raw manifests, the Collector Helm chart, the OpenTelemetry Operator, and the Dash0 Operator.


otel-semantic-conventions: A decision framework for selecting, placing, and reviewing OpenTelemetry semantic convention attributes. The agent searches the Attribute Registry before inventing custom attributes, places them at the correct telemetry level, and flags common mistakes like high-cardinality metric dimensions.


otel-ottl: Guidance for writing OpenTelemetry Transformation Language expressions for the Collector's transform, filter, and routing processors. Covers syntax, contexts, converters, path expressions, and common patterns like sensitive-data redaction and attribute enrichment.

Prescriptive, not descriptive

Every rule is written so that an agent can act on it without human interpretation. Decisions are enumerable — lookup tables and explicit criteria replace open-ended advice. Code examples accompany every actionable rule, showing both the correct pattern and the anti-pattern.
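
To give a flavour of that format (an illustrative sketch in Python, not an excerpt from the skills themselves; process_payment is a hypothetical business function), a rule about span status might pair the anti-pattern and the correct pattern like this.

from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer("checkout")

def charge_card(order):
    with tracer.start_as_current_span("charge-card") as span:
        try:
            process_payment(order)  # hypothetical business logic
        except Exception as exc:
            # Anti-pattern: swallowing the exception leaves the span status
            # unset, so backends cannot tell that the operation failed.
            # Correct pattern: record the exception, set an explicit error
            # status with a message, and re-raise.
            span.record_exception(exc)
            span.set_status(Status(StatusCode.ERROR, str(exc)))
            raise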

Get started

Install the skills, open your agent, and ask it to add OpenTelemetry instrumentation to your app. The agent handles the rest: correct resource attributes, properly set span status codes and messages, the right metric instrument types, a Collector pipeline that actually works, and so much more.

Read our guide at https://www.dash0.com/guides/teach-your-ai-coding-agent-opentelemetry


Mar 12, 2026

Automatically monitor all namespaces with the Dash0 operator

Monitor your complete Kubernetes cluster with one command

The Dash0 Kubernetes operator just became much more powerful. Previously, you had to enable monitoring separately for each namespace. Now you can let the Dash0 operator automate this! This is useful if you want to monitor every namespace in your cluster, and also if you create namespaces frequently and want them monitored right away, without additional setup.

Refer to the operator's documentation for more information.
