Changelog

Recent changes and improvements to Dash0

Join us on a journey through Observability and OpenTelemetry as we improve our product. Stay connected with our Newsletter or RSS for fresh updates.

May 4, 2026

Pre-built check rules from the Integrations Hub

The Integrations Hub now ships 101 curated check rules for technologies like Kubernetes, Vercel, AWS RDS, Istio, Argo CD, Cilium, the OpenTelemetry Collector, and many more.

Setting up alerting for a new technology has always been the slow part of getting full coverage. You know which metrics matter, but you still have to write the PromQL, decide on thresholds, and pick severities. Now we do that for you.


Every integration in the Hub has a Check Rules section listing the alerts most worth running for that technology. Browse them on the public Integrations Hub at dash0.com/hub before you sign up, or open them from the in-app integration page and install them with one click, already configured with sensible thresholds and severities.

Each rule card is transparent about what it does and what it needs. You can see which of its metrics are already flowing into your organization, and you get a clear pointer if a dependency integration isn't set up yet. Added rules are also disabled by default, so you don't need to worry about them spamming you before you have made your modifications.

Once added, these rules behave like any check rule you authored yourself: you can edit them, route them to notification channels, and adjust thresholds. The Hub is a curated starting point, not a black box.
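Conceptually, each Hub rule pairs a PromQL expression with a threshold and a severity, which is exactly what you would otherwise write by hand. A hypothetical rule in Prometheus alerting-rule style, purely for illustration (the rule name, expression, threshold, and severity here are invented, not taken from the Hub):

```yaml
groups:
  - name: kubernetes
    rules:
      # Hypothetical example: alert when a container restarts repeatedly.
      - alert: KubePodCrashLooping
        expr: increase(kube_pod_container_status_restarts_total[10m]) > 3
        for: 5m
        labels:
          severity: critical   # severity chosen for illustration only
```

Installing a rule from the Hub saves you from authoring exactly this kind of definition yourself.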


May 2, 2026

Spam filter support in Infrastructure as Code

Spam filters in Dash0 let you drop unwanted logs, spans, and metrics before they reach storage by matching on structured attribute conditions. You can now manage spam filters as code using the Dash0 CLI, Terraform provider, and Kubernetes operator, so your data-hygiene rules live right alongside the rest of your infrastructure.

How it works

Every spam filter is expressed as a YAML document with kind: Dash0SpamFilter. The Dash0 CLI, Terraform provider, and Kubernetes operator all accept the same YAML document, which you can download from the Dataset configurations:
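As an illustration, a standalone spam filter document along the lines of the drop-health-checks.yaml referenced below might look like this. The structure is inferred from the Kubernetes operator example in this entry; treat the field layout as a sketch, not a schema reference:

```yaml
kind: Dash0SpamFilter
metadata:
  name: drop-health-checks
spec:
  contexts:
    - log
  filter:
    - key: "k8s.namespace.name"
      value:
        stringValue:
          operator: "equals"
          comparisonValue: "kube-system"
```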

Dash0 CLI

The spam-filters command is currently experimental and requires the --experimental (or -X) flag:

sh
spam-filters.sh
# Create a spam filter from a YAML file
dash0 spam-filters create -X --dataset default -f drop-health-checks.yaml
# List all spam filters in a dataset
dash0 spam-filters list -X --dataset default
# Get a single spam filter as YAML
dash0 spam-filters get -X --dataset default <id>
# Update an existing spam filter
dash0 spam-filters update -X --dataset default <id> -f drop-health-checks.yaml
# Delete a spam filter
dash0 spam-filters delete -X --dataset default <id>

Terraform provider

spam-filters.tf
resource "dash0_spam_filter" "drop_health_checks" {
  dataset          = "default"
  spam_filter_yaml = file("${path.module}/filters/drop-health-checks.yaml")
}

Kubernetes operator

yaml
spam-filter.yaml
apiVersion: operator.dash0.com/v1alpha1
kind: Dash0SpamFilter
metadata:
  name: drop-health-checks
  namespace: monitoring
spec:
  contexts:
    - log
  filter:
    - key: "k8s.namespace.name"
      value:
        stringValue:
          operator: "equals"
          comparisonValue: "kube-system"

One more thing…

We have reworked the Spam Filters UI under Datasets so that you can name your spam filters (which, of course, also works via Infrastructure as Code) and download each spam filter as a YAML document to use with the Dash0 CLI, the Terraform provider, and the Kubernetes operator.

A screenshot of the new Spam Filters UI

OK, I meant two more things

With any luck, another much-requested spam filter capability will go live in the next few weeks. Keep your eyes peeled ;-)


Apr 30, 2026

Recording rule support in Infrastructure as Code

Dashboard queries that aggregate large volumes of raw metrics can be slow and expensive. You can now define recording rules as code using the Dash0 CLI, Terraform provider, and Kubernetes operator, so pre-computed time series are version-controlled and deployed alongside the rest of your infrastructure.

Recording rules let you evaluate PromQL expressions on a schedule and store the results as new time series. This means dashboards and alerting rules can query the pre-computed series instead of recalculating them on every load. You author rules in the standard `PrometheusRule` CRD format (`monitoring.coreos.com/v1`), so the syntax is already familiar if you have worked with the Prometheus Operator.

How it works

Each recording rule is scoped to a dataset and defined as a YAML document that follows the PrometheusRule specification. Inside the YAML you declare one or more rule groups, each with an evaluation interval and a list of rules entries that pair a record name with a PromQL expr. Dash0 evaluates these expressions on the configured cadence and writes the resulting series back into the dataset, ready for dashboards and alert conditions.

Dash0 CLI

sh
cli-recording-rules.sh
# Create recording rules from a YAML file
dash0 recording-rules create --dataset default -f recording-rules.yaml
# List all recording rules in a dataset
dash0 recording-rules list --dataset default
# Get a single recording rule by ID
dash0 recording-rules get --dataset default <id>
# Update an existing recording rule
dash0 recording-rules update --dataset default <id> -f recording-rules.yaml
# Delete a recording rule
dash0 recording-rules delete --dataset default <id>

Dash0 Terraform Provider

recording-rule.tf
resource "dash0_recording_rule" "http_metrics" {
  dataset             = "default"
  recording_rule_yaml = file("${path.module}/rules/http-recording-rules.yaml")
}

Dash0 Operator for Kubernetes

yaml
recording-rule.yaml
apiVersion: operator.dash0.com/v1alpha1
kind: Dash0RecordingRule
metadata:
  name: http-recording-rules
  namespace: monitoring
spec:
  dataset: default
  content: |
    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: my-recording-rules
    spec:
      groups:
        - name: http_metrics
          interval: 1m
          rules:
            - record: http_requests:rate5m
              expr: rate(http_requests_total[5m])

The operator syncs the recording rule to Dash0 and reports status back on the custom resource.
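Once a rule like the sample above is active, dashboards and check rules can query the precomputed series instead of recomputing the rate on every load. A quick sketch of the difference (the `service` label is illustrative, not part of the sample rule):

```promql
# Before: recalculated from raw counters on every dashboard load
sum by (service) (rate(http_requests_total[5m]))

# After: query the series the recording rule already wrote
sum by (service) (http_requests:rate5m)
```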

Jose Pereira

Michele Mancioppi

Apr 22, 2026

Google Cloud infrastructure monitoring — Now in Early Access

Google Cloud infrastructure monitoring is now in Dash0. Cloud Run, Pub/Sub, and Cloud Storage metrics flow the moment you connect a project — no setup required.

Your Google Cloud infrastructure is now fully visible in Dash0 the moment you connect a project. No agents. No YAML. No per-resource configuration. This release brings Cloud Run, Pub/Sub, and Cloud Storage into Dash0 with comprehensive metrics out of the box — and broader GCP coverage is already on its way.

Cloud Run Jobs

What's New

Zero-configuration metric collection: Connect a GCP project through a guided OAuth flow and Dash0 immediately starts collecting a default metric set across every discovered resource. Metrics flow the instant your project connects, with nothing to enable or configure.

Cloud Run monitoring: Instant visibility into Cloud Run Services and Cloud Run Jobs. Track request counts, latency, error rates, CPU, memory, and instance counts the moment you're connected.

Pub/Sub monitoring: Topics and subscriptions in a clean nested view. Catch message throughput issues, delivery latency spikes, undelivered message counts, and oldest unacked message age — without building a single dashboard first.


Cloud Storage monitoring: Request counts, data transfer volumes, and error rates at the bucket level, live from the moment your project connects.

Metric coverage: Each service type ships with Core metrics (required for the Dash0 experience), Default metrics (on automatically, covering what you would check first during an incident), and Extended metrics (available on demand via integration settings). For finer control, spam filtering applies on top of any of these.

Available in early access: Google Cloud Run Services, Google Cloud Run Jobs, Pub/Sub, Cloud Storage.

Evgeni Wachnowezki

Andrea Chomiak

Fredrik August Madsen-Malmo

Apr 21, 2026

Notification channel support in Infrastructure as Code

Notification channels and their associated routing rules join the settings you can manage as Infrastructure as Code in Dash0.

Keeping alerting configuration in sync across environments has always been tedious. You can now manage notification channels — Slack, PagerDuty, email, webhooks, and more — as code using the Dash0 CLI, Terraform provider, and Kubernetes operator.

Notification channels control where Dash0 sends alerts when check rules fire. Until now they could only be configured through the UI, which made it difficult to version-control, review, and replicate them across organizations. With IaC support, you define a channel once in a YAML file and apply it from your terminal, CI/CD pipeline, or GitOps workflow.

How it works

Each notification channel is defined as a CRD-enveloped YAML document with kind: Dash0NotificationChannel. The spec.type field selects the channel type (one of 17 supported integrations), and spec.config holds the type-specific settings such as webhook URLs or Slack channel names. Notification channels are organization-level resources, so no --dataset flag is required.
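As a concrete sketch, a standalone channel document along the lines of the slack-alerts.yaml referenced in the CLI commands below might look like this (the structure is inferred from the Kubernetes operator example in this entry; treat the field layout as illustrative):

```yaml
kind: Dash0NotificationChannel
metadata:
  name: Slack Alerts
spec:
  type: slack
  config:
    url: "https://hooks.slack.com/services/T00/B00/xxxx"
    frequency: 10m
```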

Dash0 CLI

Since v1.9.0:

sh
notification-channel-cli.sh
# Create a notification channel from a YAML file
dash0 -X notification-channels create -f slack-alerts.yaml
# List all notification channels
dash0 -X notification-channels list
# Get a single notification channel as YAML
dash0 -X notification-channels get <id> -o yaml
# Update an existing notification channel
dash0 -X notification-channels update <id> -f slack-alerts-updated.yaml
# Delete a notification channel
dash0 -X notification-channels delete <id>

Terraform provider

Since v1.8.0:

notification-channel.tf
resource "dash0_notification_channel" "slack_alerts" {
  notification_channel_yaml = file("${path.module}/channels/slack-alerts.yaml")
}

Import existing channels with terraform import dash0_notification_channel.slack_alerts <origin>.

Kubernetes operator

Since v0.136.0:

yaml
notification-channel.yaml
apiVersion: operator.dash0.com/v1alpha1
kind: Dash0NotificationChannel
metadata:
  name: slack-alerts
  namespace: monitoring
spec:
  content: |
    kind: Dash0NotificationChannel
    metadata:
      name: Slack Alerts
    spec:
      type: slack
      config:
        url: "https://hooks.slack.com/services/T00/B00/xxxx"
        frequency: 10m

The operator syncs the channel to Dash0 and reports status back on the custom resource.


Apr 20, 2026

New SQL Query Language

Dash0 SQL is now generally available. Write queries directly against your logs, spans, and web events — with templates to start fast, query history to revisit past work, and saved views to persist queries and share with your team.

What's New

Full SQL Support: Write SQL against logs, spans, and web events. Joins, subqueries, aggregations. Functions for aggregation, string manipulation, date, array, math, logic, JSON extraction, and type conversion are supported. Because it's standard SQL, any AI tool can write queries for you — just describe what you want and paste the result in. See the full function reference in the docs.

Cross-signal joins and aggregation on arbitrary attributes unlock questions no single explorer can answer. Join spans to themselves to compute connection KPIs across service boundaries. Pull custom span attributes — query IDs, URL parameters, business-specific flags — alongside performance data to understand not just that something is slow, but which request types are responsible.
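To make that concrete, here is a sketch of the kind of query this enables. The table name, attribute access syntax, column names, and the `approx_quantile` function are illustrative assumptions, not Dash0's actual schema or function set; consult the function reference in the docs for the real names:

```sql
-- Hypothetical: p95 span duration per custom query ID, pulling a
-- business-specific span attribute alongside performance data.
SELECT
  attributes['app.query_id']         AS query_id,
  count(*)                           AS requests,
  approx_quantile(duration_ms, 0.95) AS p95_ms
FROM spans
WHERE service_name = 'checkout'
GROUP BY query_id
ORDER BY p95_ms DESC
LIMIT 10;
```

A query like this answers not just "checkout is slow" but "these specific request types are responsible."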

Built-in Query Templates: A library of ready-to-run query examples ships with Dash0 SQL. Pick a template, run it, and modify it from there — no blank-page problem.

Recent Queries: Dash0 SQL keeps a local history of your last 20 queries. Get back to anything you ran earlier without rewriting it from scratch.

Saved Views with Sharing: Save any query as a named view that lives in Dash0 and persists across sessions and devices. Share saved views with teammates so the whole team can build on the same go-to queries.


Apr 17, 2026

Resizable table columns

Need more room for that service name? Now you can drag column edges to resize tables in Trace Explorer, Log Explorer, Web Events Explorer, and Resources.

We've added drag-to-resize handles to table columns across Dash0's explorers. Hover over the table header row to reveal grip handles at column edges, then drag to adjust the width. Save your view to keep your preferred column sizes across sessions.

To reset column sizes, use the view reset button. If you've saved a view with custom column sizes, a "Reset column sizes" button will appear in the table Settings menu.

Available in: Trace Explorer, Log Explorer, Web Events Explorer, and Resources.


Apr 17, 2026

Domain visibility for multi-site setups

Running multiple sites on a single web integration? Now you can finally see which domain each session came from.

If you're using a shared web integration across multiple domains — think franchise networks, regional storefronts, or white-label deployments — you've probably wondered which domain a particular session was visiting. Until now, there was no way to tell from the Dash0 UI.

This update surfaces the page.url.domain attribute in three places:

Session list sidebar

The session summary card now shows the domain alongside location and browser info, so you can identify which site a user was on at a glance.

Session detail page

When viewing a full session, the domain appears in the header for quick context as you step through page views and events.

Dashboard filters

A new Domain filter variable is available in both the Overview and Web Vitals dashboards, letting you slice metrics by domain across your entire network.

Also in this update:

  • On smaller screens, the session detail header items no longer get squeezed — they now wrap gracefully instead.


Apr 16, 2026

OAuth Authentication for MCP

Connecting your AI coding tools to Dash0 just got a lot simpler. Dash0's MCP server now supports OAuth 2.0 authentication, so tools like Claude Code, Cursor, Windsurf, and other MCP-compatible clients can authenticate without requiring you to create and manage static API tokens.

How It Works

When you add Dash0 as an MCP server in your AI tool of choice, the tool will automatically open a browser window where you log in and grant consent — just like connecting any other app. Behind the scenes, Dash0 handles dynamic client registration, token exchange, and automatic refresh. You never touch a token.

For example, in Claude Code

sh
claude mcp add --transport http dash0 {{endpoint_mcp}}

That's it. Claude Code will walk you through the OAuth flow on first use.

Why This Matters

  • No more token management. You don't need to visit the Dash0 UI to create an auth token, copy it into your config, and remember to rotate it.
  • Short-lived, auto-refreshing tokens. Access tokens expire in 15 minutes and refresh automatically - reducing the blast radius if a token is ever compromised.
  • Full audit trail. Every OAuth authorization, token exchange, and revocation is tracked in your organization's audit log.
  • Revocable at any time. You can review and revoke connected applications from your Dash0 settings (User Settings -> Applications).

What You Get

Once connected, your AI tool has access to 20+ MCP tools for querying your observability data - service catalogs, PromQL metrics, logs, traces, synthetic checks, dashboards, and more. All scoped to your user permissions and dataset access.

Traditional Tokens Still Work

If you prefer managing tokens yourself - for CI pipelines, scripts, or automated workflows - bearer token authentication remains fully supported. OAuth is an additional option, not a replacement.


Apr 9, 2026

Consistent Chart Legend Behavior

Chart legends now behave the same way everywhere in Dash0.

Click a legend entry to isolate that series — all other series are hidden so you can focus on the one that matters. Shift+click to select multiple series and build the exact comparison you need. Click the isolated entry again to restore all series.

A screenshot of Dash0's Log Explorer with a hovered legend item, showcasing a tooltip hint: Hold Shift + click to toggle

This interaction model is now applied consistently across all charts in Dash0: dashboards, explorers, and detail views. No more guessing which legend behavior you will get.
