Dash0's new Slack Bot integration delivers real-time notifications, threaded status updates, and quick-access links directly to your Slack channels, reducing alert fatigue.
Dash0 has introduced a new Slack Bot integration to enhance real-time alerting within your team's Slack workspace. This feature allows you to receive immediate notifications directly in your chosen Slack channels, ensuring that failed checks are promptly addressed.
Key Features:
Instant Notifications: Receive notifications for failed checks directly in your specified Slack channels as soon as they fire, facilitating swift responses to potential problems.
Threaded Discussions: Each alert initiates a Slack thread, maintaining organized discussions and tracking the history and evolution of incidents in a centralized location.
Quick Access Links: Hyperlinks embedded within check rule annotations are parsed and prominently displayed at the top of each alert, providing immediate access to relevant resources and dashboards for efficient troubleshooting.
Dash0 can now automatically update attributes (for Resources, as well as Spans, Logs and Metrics) to their latest names as defined by the OpenTelemetry semantic conventions.
The OpenTelemetry semantic conventions are updated regularly, and attributes or even metric names get renamed along the way. For example, http.status_code was renamed to http.response.status_code for better consistency with other attributes.
Keeping all names up to date is cumbersome work for engineers, and it is easy to miss some instrumentation sources, resulting in inconsistent attributes. That makes querying difficult, as not all signals are included or excluded as expected, and views or dashboards might stop working as expected.
Dash0 now executes these migrations, as defined by the OpenTelemetry schemas, for you. In the settings, you can pick whether to always update to the latest version, or to pin one specific version from the list:
Pinning a specific version prevents, for example, dashboards from breaking in the future when new migrations become available. Signals are never downgraded to the pinned version.
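As a rough illustration of what such a migration amounts to, here is a minimal Python sketch applying a hypothetical two-entry rename mapping; the real mappings come from the published OpenTelemetry schema files, not from a hand-written table like this.

```python
# Minimal sketch of an attribute-rename migration as described by OpenTelemetry
# schema files. The mapping below is a hypothetical excerpt for illustration.
RENAMES = {
    "http.status_code": "http.response.status_code",
    "http.method": "http.request.method",
}

def migrate_attributes(attributes: dict) -> dict:
    """Return a copy of the attributes with outdated keys renamed."""
    return {RENAMES.get(key, key): value for key, value in attributes.items()}

span_attributes = {"http.status_code": 500, "http.route": "/checkout"}
print(migrate_attributes(span_attributes))
# {'http.response.status_code': 500, 'http.route': '/checkout'}
```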
We removed Dash0's base subscription fee, moving to a purely consumption-based model with the same rates we have had from the beginning.
Keeping things simple for our end users is one of the core values of Dash0. After reviewing our pricing, we realized we could make it even simpler.
From the start, our pricing has been based on counting metric data points, spans and span events, and log records. This allows you to send us your telemetry irrespective of how much metadata you have on it, and we actively encourage you to send all the metadata you think necessary or useful, irrespective of how many GB it is. We also do not have per-seat pricing: we want everybody in your team to be able to enjoy and benefit from Dash0.
Up until today, we had a $50 subscription fee, which included 100 million metric data points, 25 million spans and span events, and 25 million log records. The $50 wasn’t chosen at random: it is the amount you’d pay for sending that much data in a month.
Today, we are removing our base subscription fee. Effective midnight of Friday, February 7th 2025, Dash0’s pricing is purely consumption-based, using the same rates as before:
$0.20 for 1 million metric data points
$0.60 for 1 million spans and span events
$0.60 for 1 million log records
For most Dash0 users, the amount due in the bill will be the same. But the bill itself will be easier to read, and the pricing simpler to understand.
And for users who have been sending less telemetry than what was included in the base subscription, Dash0 just got even cheaper!
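To make the math concrete, here is a small Python sketch of how a bill adds up under these rates; the usage numbers are made up and match the worked example in the FAQ below.

```python
# Rates per million, as listed above.
RATES_PER_MILLION = {
    "metric_data_points": 0.20,
    "spans_and_span_events": 0.60,
    "log_records": 0.60,
}

# Hypothetical usage for one billing cycle, in millions (made up for illustration).
usage_in_millions = {
    "metric_data_points": 4,
    "spans_and_span_events": 2,
    "log_records": 1,
}

bill = sum(usage_in_millions[k] * RATES_PER_MILLION[k] for k in RATES_PER_MILLION)
print(f"Total: ${bill:.2f}")  # Total: $2.60
```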
I am sending less telemetry than what is included in the base subscription fee and my billing cycle ends after February 7th. Will I pay one more month of the base subscription fee?
No, we are waiving the base subscription fee for all current billing cycles. That is, every billing cycle beginning on or after January 7th will be charged with the simpler consumption-based model for the entirety of the data you sent us in that billing cycle. For example, if you sent us 1 million logs, 2 million spans and 4 million metric data points during the current billing cycle that started after January 7th, you will pay 1x $0.60 + 2x $0.60 + 4x $0.20 = $2.60.
I send a lot of spans and logs, but only a few tens of millions of metric data points. Will my bill be cheaper with the new model?
Yes, you will only pay for the metric data points you actually send, not for at least 100 million as before. You can save up to $20 per month (100 million data points, charged at $0.20 per million). The same math applies to spans and log records.
So I could have Dash0 bills that go down to zero?
If you do not send us any telemetry in a billing cycle, we charge you nothing. You’ll still receive a $0 invoice for bookkeeping purposes.
Will you delete my organization and my telemetry and configurations in it if I don’t send telemetry for an entire month?
No. You paid for that telemetry, and you get the full retention period on it. For metrics, that’s 13 months, so you keep access to your organization for at least that long after you stop sending telemetry.
If I delete my Dash0 organization in the middle of a billing cycle, do I still get an invoice at the end of it?
Yes, you get one last invoice for the amount of telemetry sent since the beginning of the last billing cycle.
A lot of the logs that are sent to Dash0 are unstructured: they are strings of text, with important information like severity “hidden” inside the message. Until recently, such logs were marked in Dash0 as having severity UNKNOWN. But this is a thing of the past.
Our log ingestion pipeline uses AI to understand the log structure of applications and automatically extract the severity. This capability, previously in beta, is now available for everyone, out of the box. And at no extra charge!
Logs with known severity like INFO, WARN and ERROR are not only easier to filter in the UI, but can also be used as system health indicators when setting up dashboards and check rules.
Log AI in action: Shortly after the capability was enabled, logs with unknown severity (in grey) were replaced with color-coded logs, highlighting a couple of errors and warnings that would have been indistinguishable otherwise.
Meet dash0.span.events — our newest synthetic metric designed to provide deeper visibility into the events associated with your trace data.
Built on the foundation of the dash0.spans metric, dash0.span.events empowers you to query the number of events associated with your spans. Its advanced filtering and grouping capabilities give you full access to span and event attributes, such as event names (otel.span.event.name), offering a new level of insight into your trace data via PromQL.
Analyze the occurrences of specific span events over time using the otel.span.event.name attribute
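For instance, a query along these lines counts span events per event name. The sketch below uses Python's requests library against a Prometheus-compatible query endpoint; the endpoint URL, the token, the choice of rate(), and the exact quoting of dotted metric and label names are assumptions you may need to adapt to your setup.

```python
import requests

# Hypothetical Prometheus-compatible query endpoint and token; substitute your own.
QUERY_URL = "https://<your-prometheus-compatible-endpoint>/api/v1/query"
HEADERS = {"Authorization": "Bearer <your-token>"}

# Span events per event name over the last 5 minutes. Dotted metric and label
# names are written with PromQL's UTF-8 quoting syntax here; in Dash0's query
# builder you can reference the dotted names directly.
promql = 'sum by ("otel.span.event.name") (rate({"dash0.span.events"}[5m]))'

response = requests.get(QUERY_URL, headers=HEADERS, params={"query": promql}, timeout=30)
response.raise_for_status()
for series in response.json()["data"]["result"]:
    print(series["metric"].get("otel.span.event.name"), series["value"][1])
```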
This metric is also at the heart of the all-new Dash0 Cost Estimate dashboard, now available in the Integration Hub. By breaking down event volumes across services, the dashboard provides a clear view of how your event data contributes to overall observability costs. With this knowledge, you can make smarter, data-driven decisions to manage and optimize expenses while maintaining visibility.
Dash0's Total Span Events by Service Panel from the Cost Estimate Dashboard
Start exploring dash0.span.events today and experience a more detailed, actionable view of your trace data that also helps you control costs.
Dash0-specific Prometheus alert rule extensions now carry the dash0- prefix as of the Dash0 operator's 0.37.1 release, ensuring a clear distinction from all other labels and annotations.
As part of this update, our two threshold annotations have been renamed to better align with other Dash0-specific extensions:
threshold-degraded -> dash0-threshold-degraded
threshold-critical -> dash0-threshold-critical
Additionally, we've reimagined our handling of the severity label, offering you the freedom to assign any custom value that fits your needs.
These updates reflect our commitment to providing a seamless and intuitive experience for users adopting the Dash0 operator. For new users, we recommend adopting these updated naming conventions as detailed in our documentation. Existing users managing Dash0 Check Rules with the Prometheus alert rule format can rest easy—if a migration becomes necessary, we'll reach out to provide step-by-step guidance, ensuring a seamless transition.
With the Dash0 operator, managing check rules is simpler, more powerful, and tailored to your environment.
To help everybody get started with their observability journey, the Dash0 Integrations Hub is now available.
The Integrations Hub provides setup instructions that help you get started quickly with AWS, Vercel, Node.js, and much more. It even comes with ready-made dashboards that model everyday observability needs, e.g., for the OpenTelemetry collector, Vercel, GitHub Actions, and more. We plan to extend it in the future with content about alerting and more. Check it out!
We’re introducing a new notification channel in Dash0 that allows you to send alerting notifications to an external Prometheus Alertmanager. This integration enables Dash0 to work alongside tools in the Prometheus ecosystem, offering flexibility for teams already using Alertmanager as part of their alerting infrastructure.
With this feature, Dash0 alerting notifications can be configured to route into Alertmanager for handling alerts, leveraging its existing mechanisms for routing, deduplication, and grouping. This provides an opportunity to integrate Dash0’s alerting with workflows already built around Prometheus and its related tools, without disrupting your current setup.
This notification channel is available in Beta and joins our recently added incident.io and BetterStack integrations. Together, these features reflect our ongoing commitment to giving you more options to align Dash0 with your existing observability workflows, whether they’re centralized within Dash0, or involve other tools in your ecosystem.
We’re excited for you to try this feature and explore how Dash0 can fit seamlessly into your alerting strategy.
Semantics are essential to turning data into information. In dashboarding, the person crafting the dashboard is responsible for making this a reality. One thing that has always been annoying is the coloring of time series.
When looking at a chart, you really don't want to present information about errors in a neutral (or even positive) color tone. Instead, you will want the color to signal that something is off by making it red or yellow. This now works by default within Dash0!
Dash0's semantic coloring system (see our article "Why is this red?") now works for custom charts, too!
Dash0's charting system automatically colorizing HTTP status codes and logs within a dashboard visualizing Vercel log drain information.
How does it work? When a time series is labeled with an attribute containing status-like information, such as log severities, span status codes, HTTP & gRPC status codes, etc., the charts automatically pick the right colors!
The Dash0 operator version 0.37.0 introduces out-of-the-box support for collecting traces, metrics and logs from Java applications.
The Dash0 operator is an open-source Kubernetes operator built on OpenTelemetry, Prometheus, Perses and other open-source projects. It provides you with an appliance-like way (one command, and everything works!) of monitoring your Kubernetes clusters and the applications running on top of them.
Today we release the 0.37.0 version of the Dash0 operator, which automatically installs and configures the OpenTelemetry Java agent in your Kubernetes pods to monitor your Java applications out of the box.
We just released our brand new Vercel integration. It makes sending your Vercel logs to Dash0 a breeze.
Dash0’s Vercel integration will automatically send all logs from your Vercel workloads to Dash0 for analysis and visualization. Instead of configuring a log drain in Vercel manually, you can just install the integration at https://vercel.com/integrations/dash0 and we do it all for you.
Once the integration is installed, logs from your Vercel projects will be sent automatically to Dash0. This is great news, as log retention within Vercel is only 3 days, while Dash0 offers 30 days of retention for logs and traces. You will also be able to analyze your Vercel logs in context with telemetry from your non-Vercel infrastructure, which makes troubleshooting so much easier.
Dash0 now supports extracting context from journald logs. Severity levels from journald logs are mapped to the OpenTelemetry semantic conventions, allowing for more consistent log analysis.
Additionally, we have integrated the extraction of host and process data from these logs, also aligned with the OpenTelemetry standards. This improvement significantly enhances your ability to analyze and interpret journald log data effectively.
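For context, journald records carry a syslog-style PRIORITY field (0-7). Conceptually, the mapping to OpenTelemetry severities looks like the sketch below; the exact mapping Dash0 applies follows the OpenTelemetry log conventions and may differ in detail.

```python
# Illustrative mapping from journald's syslog-style PRIORITY field (0-7) to
# OpenTelemetry severity text and number ranges. Sketch only; the mapping Dash0
# applies may differ in detail.
JOURNALD_PRIORITY_TO_OTEL = {
    0: ("FATAL", 21),  # emerg
    1: ("FATAL", 21),  # alert
    2: ("FATAL", 21),  # crit
    3: ("ERROR", 17),  # err
    4: ("WARN", 13),   # warning
    5: ("INFO", 10),   # notice
    6: ("INFO", 9),    # info
    7: ("DEBUG", 5),   # debug
}

def otel_severity(journald_record: dict) -> tuple[str, int]:
    priority = int(journald_record.get("PRIORITY", 6))
    return JOURNALD_PRIORITY_TO_OTEL.get(priority, ("INFO", 9))

print(otel_severity({"PRIORITY": "3", "MESSAGE": "disk failure"}))  # ('ERROR', 17)
```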
The Dash0 Kubernetes operator now automatically adds Kubernetes resource attributes to all workloads that are deployed with an OTel SDK.
Having good resource attributes on your telemetry is paramount to understanding what is going on in your workloads. Logs, metrics and traces without resource information are basically just data without context. And yet, making sure there are correct Kubernetes resource attributes on all your telemetry is not always easy.
Version 0.36 of the Dash0 operator now automatically adds k8s.namespace.name, k8s.pod.name, k8s.pod.uid, and k8s.container.name to all workloads via the Dash0 injector. Under the hood, these attributes are added to the resource attributes sent by your applications via the OTEL_RESOURCE_ATTRIBUTES environment variable. If the workload uses an OpenTelemetry SDK, this environment variable is automatically picked up, and the attributes defined in it are sent along with all the spans, metrics and log records that the application emits.
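To make the mechanism concrete, here is a small sketch of how an OpenTelemetry SDK picks up that environment variable, shown with the Python SDK; the attribute values stand in for what the injector would set on a real pod.

```python
import os
from opentelemetry.sdk.resources import Resource

# Simulate what the Dash0 injector sets on the container; in a real pod this
# environment variable is already present and you do not set it yourself.
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = (
    "k8s.namespace.name=shop,k8s.pod.name=checkout-6d5f9,k8s.container.name=checkout"
)

# Resource.create() merges the attributes from OTEL_RESOURCE_ATTRIBUTES, so every
# span, metric, and log record emitted by this SDK carries them automatically.
resource = Resource.create()
print(resource.attributes["k8s.pod.name"])  # checkout-6d5f9
```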
What's more, the Kubernetes Attributes Processor (which is deployed automatically by the Dash0 operator) will enrich telemetry with a host of other Kubernetes-related resource attributes, like the Kubernetes namespace identifier, the deployment name and unique identifier, the replica set name and unique identifier, and so on.
One less thing to worry about, one more thing that just works with Dash0.
Dash0 will now remember your last changes and indicate that you have a pending dashboard modification. Whether you accidentally refresh, close the tab, your browser crashes, or continue editing in a different tab, Dash0 will keep your changes and allow you to continue where you left off.
A common problem when creating a dashboard is forgetting to save. Painfully crafted queries and panels are lost. It's so annoying when that happens! No more with Dash0, though!
We even synchronize state across browser tabs for all those tab hoarders! 👀
The Dash0 dashboarding area indicates through a tag that there are unsaved modifications.
Sidebars in Dash0 carry essential information about telemetry, context, and configuration. With the latest update they are now resizable too.
Some of our users wished they could resize the sidebar to give charts more space. You can now do exactly that: just drag the edge of the sidebar to where you want it to be!
Our log ingestion pipeline now uses AI to understand the log structure of your applications and automatically extract the severity information.
Logs without a severity aren't nearly as actionable as those stating ERROR or WARN. Explicitly mapping these log levels can be challenging and commonly requires the painful definition of regular expressions.
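For comparison, the manual approach typically looks like the sketch below: one hand-written regular expression per log format. The pattern and log lines are made up for illustration.

```python
import re

# The manual approach this feature replaces: a hand-maintained regular expression
# per log format. Pattern and log lines below are made up for illustration.
SEVERITY_PATTERN = re.compile(r"\b(TRACE|DEBUG|INFO|WARN(?:ING)?|ERROR|FATAL)\b")

def extract_severity(log_line: str) -> str:
    match = SEVERITY_PATTERN.search(log_line)
    return match.group(1) if match else "UNKNOWN"

print(extract_severity("2025-01-31 12:00:01 ERROR payment failed: timeout"))  # ERROR
print(extract_severity("connection pool exhausted"))                          # UNKNOWN
```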
This capability is currently in closed beta while we validate its effectiveness. Reach out to us to join this closed beta!
Dash0 automatically enriched most logs with unknown severity (in grey) with severity information (colored). Going from noise to signal in seconds!
Our predefined filters and column configurations for Logs and Traces make it easier than ever to dive into the topics that matter. Quickly apply these views to focus on the data relevant to your use case, helping you get to the insights you need faster.
In a nutshell, views help you to:
quickly find a starting point for your search
learn more about Dash0’s powerful filtering capabilities
save time by focusing instantly on relevant data
explore and understand complex datasets
Spoiler alert: We’re just getting started! Coming soon: The ability to create and save your own custom views, tailoring Dash0 perfectly to your unique datasets and workflows. Stay tuned for even more ways to make your data work for you!
You can now easily send logs to Dash0 using the most commonly used observability pipelines and log shippers! The onboarding dialog now shows you how to configure new options – including a particular version for Amazon EKS on Fargate (which is especially fun)!
The Dash0 onboarding screen showing how to configure Vector's HTTP sink with Dash0.
PromQL can be difficult to understand. Dash0 now helps you understand queries using generative AI (genAI). With our quick reference, you can also see whether the referenced metrics exist and click them to jump to the metric explorer.
Wherever you see a PromQL query, you can now click the small 🎓-button to learn more about the query. Dash0 will then…
format the PromQL query to visualize its hierarchy,
list the referenced metrics, including a quick reference of their type and availability, and
provide a textual description of what the query is doing.
Dash0 synthetic metrics are now available under the dash0.* namespace, and we have expanded the grouping capabilities!
We have improved the consistency of synthetic metrics. The metrics are now all grouped under the dash0 namespace. You can use dash0.spans in the query builder to get the span count, dash0.spans.duration for the span duration and dash0.logs for the log count. For greater compatibility, we have also made it so that dash0.spans.duration is interpreted as a native Prometheus histogram!
We have extended the grouping capabilities for these metrics. You can now group them by the dash0.*, otel.span.* and otel.log.* attributes, e.g., dash0.operation.name and otel.log.severity_range. This gives you more control than ever before!
Query builder showing a 90th percentile span duration, grouped by service name and operation name.
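In PromQL terms, such a panel boils down to a query along these lines. This is a sketch: the grouping attribute keys and the quoting of dotted names are assumptions that depend on your setup.

```python
# Sketch of a 90th-percentile span duration query over the native histogram
# dash0.spans.duration, grouped by two attributes. Dotted names use PromQL's
# UTF-8 quoting syntax; the grouping attribute keys are assumptions.
p90_span_duration = (
    'histogram_quantile(0.9, sum by ("service.name", "dash0.operation.name") '
    '(rate({"dash0.spans.duration"}[5m])))'
)
print(p90_span_duration)
```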
Searching and filtering for the data you care about is central to your observability experience. Now, we have made the experience even better.
The filter control is now a permanently visible input field. This notably improves the experience, aids discoverability, allows you to see already applied filters, and makes for a much more natural keyboard interaction.
We have also made the content of the filter popup much more intelligent. You can now see values immediately on the first page, guiding you and saving you from having to know all attribute keys by heart. And you go straight into search mode when your input doesn't fully match an existing value!
Easy search and filter – with suggestions for keys and values and direct access to the commonly used filter operations.
Dashboards are flexible and allow easy tailoring to your needs. They are now improved with tree maps, pie charts, and more!
We have been improving the dashboarding experience. Now, you can experience faster and more flexible dashboarding than before!
Starting with two new widget types, you can now visualize distributions using tree maps and pie charts. Both are handy for determining which pod is using the most memory or which service is generating the most errors.
Next, we have revised how we show unformatted series names. You know, those my_metric{service_name="shop", k8s_pod_name="shop-abcd"} selectors that are used as fallbacks when no better name is available. These are now color-coded for easier reading, and the attributes are sorted in the same order as within the filter dialog.
Last, we highlight the chart cursor position across charts for much easier visual correlation!
Dashboard showing a tree map, pie chart and a time series chart.Read more
Managing dashboard configurations and check rules manually can become a huge burden. Now you can take back control over both of these with the Dash0 API or with the Dash0 Kubernetes operator.
Dashboards are a very powerful tool to see all the telemetry for your specific use cases on one screen. Check rules are indispensable to get alerted when conditions in your system require human intervention.
Editing a dashboard or a check rule via Dash0's UI is probably the best way to get fast feedback, and iterate quickly. However, if you have a lot of dashboards or checks, at some point you might want to go with a more systematic approach. Configuration as code is the way to go.
The Dash0 Kubernetes operator now supports managing dashboards as well as check rules. Simply deploy them as Kubernetes resources, and the Dash0 operator will automatically pick them up and synchronize them with your dashboards and check rules in Dash0. You can also manage them directly via Dash0's API if you want to do so from your CI system or via scripting.
Last but not least, Dash0 is committed to open standards. For dashboards, we use the emerging CNCF standard Perses, and for check rules we use the Prometheus rules custom resource definition. This enables you to build your dashboards and check rules with zero vendor lock-in.
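As a sketch of that configuration-as-code path, the following uses the Kubernetes Python client to create a check rule as a PrometheusRule resource. The rule name, expression, and threshold values are made up; the dash0-threshold-* annotations are the prefixed extensions mentioned above, and how they interact with the expression is described in our documentation.

```python
from kubernetes import client, config

# Load kubeconfig (use config.load_incluster_config() when running inside a cluster).
config.load_kube_config()

# A minimal, made-up check rule expressed as a PrometheusRule resource. Once
# deployed, the Dash0 operator picks it up and synchronizes it with Dash0
# (see the operator documentation for the exact prerequisites).
prometheus_rule = {
    "apiVersion": "monitoring.coreos.com/v1",
    "kind": "PrometheusRule",
    "metadata": {"name": "checkout-error-rate"},
    "spec": {
        "groups": [
            {
                "name": "checkout",
                "rules": [
                    {
                        "alert": "CheckoutErrorRateHigh",
                        "expr": "checkout_error_rate",  # placeholder expression
                        "for": "5m",
                        "annotations": {
                            "dash0-threshold-degraded": "0.01",
                            "dash0-threshold-critical": "0.05",
                        },
                    }
                ],
            }
        ]
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="monitoring.coreos.com",
    version="v1",
    namespace="default",
    plural="prometheusrules",
    body=prometheus_rule,
)
```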
Resources are a cornerstone of OpenTelemetry, and we have revised how we show information about them.
Resources are everywhere within Dash0: prominently within the resource map and table and explicitly called out within tracing, logging, metrics, and check rules. Starting today, resources are even more helpful.
We have thoroughly revised the sidebar and removed our modes concept from the resource map and table. The latter added unnecessary complexity and confusion. We also moved the table and map switcher to the main navigation, resulting in an overall much cleaner look without any information loss!
The hovercard has gained quick references to the reported signals in its footer, and we have reorganized the sidebar for clarity and visual hierarchy.
Within the sidebar, you can now always find the resource's health, with a dedicated tab for more details. Similarly, we have merged the requests, errors, and duration tabs into one, allowing you to switch between aggregations for these RED metrics!
The OpenTelemetry demo adservice within the resource table. Showing the sidebar RED-metric tab at the side.
Resources are also present whenever you look at spans, logs, and metrics. We also provide the same powerful hovercard-context experience within these locations. See a slow gRPC span? Quickly look up which service or Kubernetes pod is involved in the call!
The powerful resource hovercard is also available within all other areas where we reference resources.
The resource telemetry tab is now complete: it shows reported metrics and allows navigating to them.
The resource sidebar's telemetry tab has been showing information about tracing and logging data for some time. Now, we are extending it with details about metrics reported by a resource!
Resource sidebar's telemetry tab showing a metrics card.
You can quickly navigate to metrics available for a service or pod through the new metrics card. This card makes identifying whether a resource emits metrics easier and facilitates jumping into the metric explorer.