Changelog

Recent changes and improvements to Dash0

Join us on a journey through Observability and OpenTelemetry as we improve our product. Stay connected with our Newsletter or RSS for fresh updates.

Mar 10, 2026

Signal Type Restrictions for Auth Tokens

Auth tokens already support restricting ingestion to specific datasets. Now, tokens can also be restricted to individual signal types — logs, spans, metrics, profiles, or web events — giving you fine-grained control over what data each token is allowed to send.

When creating or editing an auth token in Settings > Auth Tokens, a new Signal type dropdown appears between the dataset restriction and permissions fields. Select a signal type to lock the token to that single type of telemetry, or leave it set to "Unrestricted" to accept all signals as before. The restriction is enforced server-side in the collector pipeline: any data that does not match the permitted signal type is silently dropped.

Why this matters for Website monitoring

Website monitoring in Dash0 is powered by the Dash0 Web SDK, which sends both spans and logs over the same OTLP connection. In backend observability, each signal type typically arrives through a separate pipeline with its own token. But the browser is different — a single Web SDK instance emits page loads, web vitals, user interactions (as spans), and console errors or custom events (as logs) through one shared auth token.

With signal type restrictions, you can now issue a dedicated token that only accepts web events. This is valuable for two reasons:

1. Security boundary. The auth token embedded in your website's JavaScript is publicly visible to anyone who opens the browser's developer
tools. A token restricted to web events cannot be abused to ingest backend traces, metrics, or profiles into your organization — even if it is
extracted from the page source.
2. Blast-radius reduction. If a web-events-only token is compromised or accidentally shared, the damage is contained to a single signal type.
Backend observability data, alerting metrics, and profiling data remain unaffected.

Interplay with other features

Signal type restrictions combine with the existing dataset restriction — a token can be scoped to both a specific dataset and a specific signal type for maximum isolation.

By the way, you can retrofit your existing auth tokens with the new restrictions!


Mar 10, 2026

Smarter Triage Defaults: Error Comparison Mode

When investigating issues in production, the first question is almost always "what's different about the errors?" Until now, the Triage panel in the Trace Explorer, Logs Explorer, and Web Events Explorer defaulted to comparing against the current timeframe — a useful but generic baseline that required an extra click to switch to error comparison. Starting today, all three explorers default to error comparison mode. The triage panel immediately highlights which attributes correlate with errors, so you can jump straight into root-cause analysis without changing any settings.

What changed

Error comparison as the default baseline. Opening the Triage tab in the Trace Explorer, Logs Explorer, or Web Events Explorer now automatically
compares error spans, error logs, or failed web events against their non-error counterparts. There is no longer a need to manually switch the
analysis method.

Conflicting filter detection. If you already have a general filter restricting data to errors (e.g., otel.span.status.code = ERROR), the error
comparison becomes meaningless — every record matches both the selection and the baseline. When this happens, the triage panel now detects the
conflict and shows a "Remove filter" button that strips the redundant filter in one click, restoring a meaningful comparison.

Why this matters

Error triage is the most common entry point when investigating incidents. Defaulting to error comparison mode removes a step from the most
frequent workflow, while the new conflicting-filter detection ensures users are never stuck on a "No major differences" screen without guidance
on how to fix it.


Mar 9, 2026

Stack Trace Translation with Source Maps

Stack traces from minified JavaScript are now automatically translated into their original source locations.

The translation works automatically as long as your JavaScript files and source maps are reachable via HTTP(S).

If your build tool generates source maps and deploys them alongside your JavaScript bundles, no additional configuration is required.

If your source maps are hosted behind authentication, you can configure a Source Map Integration to provide credentials (Basic Auth, Bearer token, or custom headers) for specific URL prefixes.

What’s included:

  • Automatic translation of minified JavaScript stack traces
  • Support for public and authenticated source maps
  • URL prefix–based integration configuration
  • Per-frame translation with detailed error feedback
  • Original stack trace always preserved

Each frame in a stack trace is translated independently. If a frame cannot be resolved, it remains in its original form and shows a warning icon. Hovering over the icon reveals the reason (for example, missing source map, authentication error, or file not found).

You can switch between the original and translated stack trace at any time using the toggle in the UI.
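If translation does not seem to kick in, a quick local check can tell you whether a bundle declares a source map at all, and whether that map is reachable. The snippet below is a minimal sketch: the file name and CDN URL are placeholders, not Dash0 specifics.

```shell
# Create a stand-in minified bundle; in practice this is your deployed file.
printf '%s\n' 'console.log("hi");//# sourceMappingURL=app.js.map' > app.js

# 1. Does the bundle declare a source map?
grep -o 'sourceMappingURL=[^ ]*' app.js   # prints: sourceMappingURL=app.js.map

# 2. Is the declared map reachable via HTTP(S)? (run against your real URL)
# curl -sIf https://cdn.example.com/assets/app.js.map >/dev/null && echo reachable
```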


Mar 6, 2026

Dash0 Semantic Conventions Are Now Public

Every metric, attribute, and event that Dash0 adds to your telemetry is now defined in a public, machine-readable registry. The Dash0 Semantic Conventions give you a single place to look up what each field means, where it comes from, and how to query it.

At Dash0, we live and breathe OpenTelemetry Semantic Conventions. In building Dash0, we have defined attributes that we add to the telemetry you send us, as well as metrics we make available out of the box, such as the synthetic metrics that many of you rely on for alerting.

We like a lot of what we see in the OpenTelemetry Weaver project, which provides the tooling that the OpenTelemetry project uses to publish the OpenTelemetry Semantic Conventions.

And so we did the same ourselves.

Why a public registry?

Dash0 enriches incoming telemetry with attributes such as dash0.operation.name, dash0.resource.type, or dash0.log.pattern.
Until now, the meaning of these fields lived in internal documentation.
Making them public means you can discover every Dash0-specific attribute, metric, and event in one place. And chances are, you may discover the next Dash0 feature you love!

What is in the Dash0 Semantic Conventions registry

The registry covers four areas:

Attributes

Every attribute Dash0 materialises on spans, logs, metrics, and web events is documented — from resource identity (dash0.resource.id, dash0.resource.name and dash0.resource.type) to AI-inferred log fields (dash0.log.ai.message_inferred).

OTLP protocol fields that Dash0 maps to queryable attributes (otel.trace.id, otel.span.kind, otel.log.severity.range, and many others) are also included.

Metrics

All dash0.* metrics are listed with their instrument, unit, and stability level.
Synthetic metrics like dash0.spans and dash0.spans.duration are clearly marked, along with their deprecated Prometheus aliases.


The registry also covers website monitoring (dash0.web.*), synthetic checks (dash0.synthetic_check.*), alerting (dash0.check.*), and so on.

Events

The dash0.deployment event, which you can emit from CI/CD pipelines to mark a service deployment (for example, using the brand-new Dash0 CLI), is fully specified, including its required resource attributes and optional VCS metadata.

Open in Dash0

Every metric and attribute page includes an Open in Dash0 link that takes you straight to the relevant explorer with the right filters pre-applied.
Click a metric name to open it in the Metrics Explorer.
Click an attribute to jump to the Traces, Logs, Metrics, or Web Events Explorer with the right filters pre-set.

Browse the full registry at dash0hq.github.io/dash0-semantic-conventions. We will also integrate it into the Dash0 Documentation soon.


Mar 6, 2026

Linear integration

Agent0 now integrates with Linear, bridging project management and observability to bring issue context directly into debugging and investigation workflows in Dash0.

What's New

  • Linear integration for Agent0: Agent0 can now reference Linear issues, projects, and teams while investigating production behavior, so you can tie runtime signals to the work your team already tracks.
  • Invoke Agent0 from Linear: Mention @Dash0 in any Linear issue or comment to start an investigation without leaving Linear. Agent0 reads the issue context, runs analysis against your telemetry, and posts results back into the same thread, including tool execution steps and a deep link to the full session in Dash0.
  • Context preserved across tools: Agent0 responses posted into Linear include a deep link that reopens the investigation in Dash0 with organization, dataset, and time range already set.
  • Read-only access: Agent0 reads your Linear workspace but cannot create, modify, or close issues.

Availability

Available now for all Dash0 organizations. Set up in Organization Settings → Integrations → Add → Linear.


Mar 6, 2026

Preferred dataset

Admins can now mark any dataset as preferred for the organization. Dash0 opens the preferred dataset by default for all members of the organization, unless a different dataset is specified in the URL or they have previously visited another dataset.

What's New

Preferred dataset: Enable the Preferred toggle on any dataset's Overview page. Dash0 opens that dataset for all organization members when no dataset is encoded in the URL.

Note that if a user has already opened a dataset before, that previously stored dataset takes precedence over the organization's preferred dataset.

Available to all Dash0 users. Changing the preferred dataset requires admin permissions.



Mar 4, 2026

Manage Dashboards, Views, Check Rules, and Synthetic Checks from the Terminal

Infrastructure as code changed how teams manage servers. The same principle should apply to observability: your dashboards, alerting rules, views, and synthetic checks deserve version control, code review, and automated deployment. The Dash0 CLI gives you full CRUD control over all four asset types, plus the apply command that brings GitOps workflows to your observability configuration.

One consistent interface for every asset type

Dashboards, check rules, views, and synthetic checks all share the same set of subcommands:

sh
dash0 dashboards list
dash0 check-rules get <id>
dash0 views create -f view.yaml
dash0 synthetic-checks update <id> -f check.yaml
dash0 dashboards delete <id>

list, get, create, update, delete: the same verbs, the same flags, the same output formats across every asset type. No need to learn a different interface for each one.

Export, edit, re-apply

The get command with -o yaml or -o json gives you the full asset definition, ready for editing:

sh
dash0 dashboards get <id> -o yaml > dashboard.yaml
# edit dashboard.yaml
dash0 dashboards update <id> -f dashboard.yaml

The apply command: GitOps for observability

apply is the single command that ties it all together. Point it at a file or a directory, and it figures out the rest:

sh
# Apply a single file
dash0 apply -f dashboard.yaml
# Apply an entire directory recursively
dash0 apply -f assets/

The command auto-detects whether each asset needs to be created or updated. Multi-document YAML files (separated by ---) let you bundle related assets into a single file. Hidden files and directories are skipped, so your .git folder stays out of the way.
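As a sketch of the multi-document workflow, the snippet below bundles two assets into one file and verifies the separator. The kind and metadata fields are illustrative placeholders, not the exact Dash0 asset schema; export a real asset with -o yaml to see the actual structure.

```shell
# Bundle two related assets into a single multi-document YAML file.
# NOTE: the fields below are illustrative, not the real Dash0 schema.
cat > assets.yaml <<'EOF'
kind: Dashboard
metadata:
  name: checkout-overview
---
kind: View
metadata:
  name: checkout-errors
EOF

grep -c '^---$' assets.yaml   # one separator, i.e. two documents: prints 1

# Apply the whole bundle; create vs. update is auto-detected per asset:
# dash0 apply -f assets.yaml
```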

Prometheus alerting rules, native in Dash0

Already have Prometheus alerting rules? The CLI accepts PrometheusRule CRD files directly:

sh
dash0 check-rules create -f prometheus-rules.yaml
dash0 apply -f prometheus-rules.yaml

Each alerting rule in the CRD becomes a Dash0 check rule. Recording rules are skipped automatically. No manual conversion required.
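For illustration, here is a minimal PrometheusRule CRD in the standard prometheus-operator format; the group, rule names, and expressions are made up. The alerting rule would become a Dash0 check rule, while the recording rule is skipped.

```shell
cat > prometheus-rules.yaml <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: checkout-alerts
spec:
  groups:
    - name: checkout
      rules:
        - alert: HighErrorRate               # imported as a Dash0 check rule
          expr: rate(http_requests_total{code="500"}[5m]) > 0.05
          for: 10m
        - record: job:http_requests:rate5m   # recording rule: skipped
          expr: rate(http_requests_total[5m])
EOF

# dash0 check-rules create -f prometheus-rules.yaml
```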

Multiple output formats

Every list and get command supports table, wide, json, yaml, and csv output. Use wide for a quick overview that includes dataset, origin, and URL. Use csv with --skip-header for machine-readable automation.

Get started

sh
export DASH0_API_URL=... # Get the value at https://app.dash0.com/goto/settings/endpoints?endpoint_type=api_http
export DASH0_AUTH_TOKEN=... # Get the value at https://app.dash0.com/goto/settings/auth-tokens
# See what you have
dash0 dashboards list
# Export, tweak, and re-apply
dash0 dashboards get <id> -o yaml > my-dashboard.yaml
dash0 apply -f my-dashboard.yaml

Asset management is available in all stable releases of the Dash0 CLI: no experimental flag needed.


Mar 3, 2026

Query Spans and Traces from the Terminal

A slow API call rarely tells the whole story. To understand why a request took 3 seconds, you need to see every hop it made: the database query, the downstream RPC, the cache miss that should not have happened. Now you can explore all of that without leaving the terminal. The Dash0 CLI introduces two commands for distributed tracing: spans query to search across spans, and traces get to reconstruct a full trace end-to-end.

Search spans like you search logs

The spans query command brings to spans the same filtering, output formats, and custom columns you already know from logs query:

sh
dash0 -X spans query \
--from now-1h \
--filter "service.name is checkout-service" \
--filter "otel.span.status.code is ERROR"

The default table shows timestamp, duration, span name, status, service name, parent ID, trace ID, and span links. Swap in any OTLP attribute as a column to surface the dimensions that matter to your investigation:

sh
dash0 -X spans query \
--column timestamp \
--column duration \
--column "span name" \
--column http.request.method

JSON and CSV outputs are available for scripting and downstream processing.

Reconstruct full traces

Once you have a trace ID — from a span query, a log record, or an alerting rule — traces get fetches every span in the trace and displays them as an indented tree:

sh
dash0 -X traces get 0af7651916cd43dd8448eb211c80319c

Modern architectures do not always fit into a single trace. A message queue consumer might link back to the producer's trace; a batch job might reference the request that triggered it.

The --follow-span-links flag tells the CLI to chase those connections automatically:

sh
dash0 -X traces get 0af7651916cd43dd8448eb211c80319c --follow-span-links

The CLI walks span links recursively (up to 20 traces), displaying each linked trace under a clear header. You can set a custom lookback period for the linked traces, like --follow-span-links 2h, to control how far back the search reaches.

Get started

sh
export DASH0_API_URL=... # Get the value at https://app.dash0.com/goto/settings/endpoints?endpoint_type=api_http
export DASH0_AUTH_TOKEN=... # Get the value at https://app.dash0.com/goto/settings/auth-tokens
dash0 -X spans query --from now-15m

Span and trace commands are experimental: enable them with -X and tell us how they fit into your workflow.


Mar 3, 2026

Query Logs from the Terminal

Every investigation involves logs at one point or another. With the dash0 CLI, you can search, filter, and inspect log records stored in Dash0 without leaving your shell.

Powerful filtering at your fingertips

The --filter flag accepts the same expression language you use in the Dash0 UI so you can zero in on exactly the records that matter. Combine multiple filters (AND logic) to slice through millions of log lines in seconds:

sh
dash0 -X logs query \
--from now-1h \
--filter "service.name is api-gateway" \
--filter "otel.log.severity.range is_one_of ERROR FATAL" \
--limit 200

Supported operators range from exact matches (is, is_not) and substring checks (contains, starts_with) to regex (matches), numeric comparisons (gt, gte, lt, lte), and presence tests (is_set, is_not_set). If you can describe the condition in the Dash0 Logging Explorer, the CLI can express it.

Flexible output for humans and machines

By default, logs query prints a clean table with timestamp, severity, and body. Need the full OTLP payload for a script? Switch to --output json. Feeding results into awk or a spreadsheet? Use --output csv.

Custom columns let you pull any attribute into view:

sh
dash0 -X logs query \
--column time \
--column service.name \
--column body

Any OTLP attribute key, at resource, scope, or log record level, works as a column, so you see exactly the dimensions you care about.

Built for automation

Relative timestamps (now-15m, now-1h), machine-readable CSV output, and --skip-header make logs query a natural fit for shell scripts, CI checks, and AI-agent workflows. Pipe it into jq, grep, or your favorite data tool.

sh
# Count errors in the last 30 minutes
dash0 -X logs query \
--from now-30m \
--filter "otel.log.severity.range is ERROR" \
-o csv --skip-header | wc -l
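The same pattern extends to quick aggregations. As a sketch, the printf block below stands in for real CLI output (assuming you requested the columns time, severity, and service.name with -o csv --skip-header), and awk groups the error counts per service:

```shell
# Simulated output of:
#   dash0 -X logs query --from now-30m \
#     --filter "otel.log.severity.range is ERROR" \
#     --column time --column severity --column service.name \
#     -o csv --skip-header
printf '%s\n' \
  '2026-03-03T10:00:01Z,ERROR,api-gateway' \
  '2026-03-03T10:00:02Z,ERROR,checkout' \
  '2026-03-03T10:00:03Z,ERROR,api-gateway' |
awk -F, '{count[$3]++} END {for (s in count) print s, count[s]}' | sort
# prints:
#   api-gateway 2
#   checkout 1
```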

Get started

Install or update the Dash0 CLI, set your credentials, and start querying:

sh
export DASH0_API_URL=... # Get the value at https://app.dash0.com/goto/settings/endpoints?endpoint_type=api_http
export DASH0_AUTH_TOKEN=... # Get the value at https://app.dash0.com/goto/settings/auth-tokens
dash0 -X logs query --from now-1h

Logs query is an experimental command: enable it with the -X flag and let us know what you think.


Feb 27, 2026

Support for Managing Members and Teams via API, Go SDK, and CLI

You can now programmatically automate and manage your organization's members and teams using our full suite of interfaces.

We have extended the API, Go SDK, and CLI with comprehensive management capabilities for members and teams:

Teams Management

The core team API endpoints, starting with /teams, cover operations like:

  • Retrieving a list of all teams in your organization.
  • Creating new teams with specific access configurations.
  • Adding and removing members from teams.
  • Fetching details for a single team.
  • Updating existing team properties, such as the name.
  • Deleting teams.

CLI Snippets

sh
# List all teams
dash0 teams -X list
# Create a new team
dash0 teams -X create --name "Frontend Developers"
# Add members to a team
dash0 teams -X add-members <teamId> <memberIdOrEmail> <memberIdOrEmail> <memberIdOrEmail>
# Delete a team
dash0 teams -X delete <teamId>

The teams command is currently experimental (like all recently introduced commands), and therefore requires the -X/--experimental flag.

Members Management

The API endpoints under /members let you manage individuals in your organization:

  • Listing all members in the organization.
  • Inviting new users to the organization.
  • Managing member roles and permissions.

CLI Snippets

sh
# List all members
dash0 members -X list
# Invite a new user
dash0 members -X invite "user@example.com"
# Remove a user
dash0 members -X remove "user@example.com"

The members command is currently experimental (like all recently introduced commands), and therefore requires the -X/--experimental flag.
