Resources

In Dash0, everything revolves around resources because they are the core of what users care about.

Instead of focusing narrowly on individual signals such as metrics, traces, or logs, Dash0 takes a resource-centric approach.

As an OpenTelemetry-native platform, Dash0 adheres to semantic conventions for defining and modeling resources, ensuring a complete picture even when data sources are fragmented.
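
For example, a service running on Kubernetes might describe itself with resource attributes like the ones below (a minimal sketch with made-up values; the attribute keys follow the OpenTelemetry semantic conventions):

```yaml
# Illustrative resource attributes; the values are placeholders,
# the keys follow OpenTelemetry semantic conventions.
service.name: checkout
service.version: 1.4.2
service.namespace: shop
k8s.namespace.name: shop
k8s.deployment.name: checkout
k8s.pod.name: checkout-7d9f6c5b4-xk2lp
```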

Views

For convenient and quick access to relevant information, Dash0 provides two resource views:

  1. Table view
  2. Map view

Resources Table View

The resources table supports different views, grouped into service monitoring and infrastructure monitoring.

Under service monitoring, you find views of all services and operations with detailed metrics on requests, errors, and durations.

Under infrastructure monitoring, you find views for Kubernetes, AWS, and other infrastructure-related resources. The Kubernetes views are structured similarly to k9s, a popular open-source CLI tool for managing Kubernetes clusters.

See how to navigate the Resources table below.

Resources Map View

The resource map shows all services. Connections between them are automatically extracted from the OpenTelemetry signals (logs, metrics, traces). When you hover over a resource, an overview shows high-level information about the service, such as its version, runtime, and health status. When you select a resource, the sidebar shows more detailed information, including request, error, and duration metrics.

Metrics

Below is an overview of the Kubernetes metrics used in Dash0 views. These metrics are collected automatically by the dash0-operator; installation instructions are available at https://www.dash0.com/hub/integrations/int_dash0_operator/overview.

If you do not use the dash0-operator, you can collect most of these metrics with the OpenTelemetry Collector's k8sclusterreceiver (https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/k8sclusterreceiver/documentation.md); the node- and pod-level usage and utilization metrics are provided by the kubeletstats receiver.
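
If you go that route, a minimal OpenTelemetry Collector configuration could look like the sketch below: the k8s_cluster receiver collects cluster-level state and an OTLP exporter forwards the metrics. The endpoint and token are placeholders (take the real values from your Dash0 organization settings), and the receiver additionally needs RBAC permissions to read cluster state.

```yaml
receivers:
  k8s_cluster:
    # How often to collect cluster-level state from the Kubernetes API.
    collection_interval: 30s

processors:
  batch: {}

exporters:
  otlphttp:
    # Placeholders: use the ingress endpoint and auth token of your Dash0 organization.
    endpoint: https://<your-dash0-ingress-endpoint>
    headers:
      Authorization: Bearer <your-dash0-auth-token>

service:
  pipelines:
    metrics:
      receivers: [k8s_cluster]
      processors: [batch]
      exporters: [otlphttp]
```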

| Metric Name | Description |
| --- | --- |
| k8s.container.ready | Whether a container has passed its readiness probe (0 for no, 1 for yes). |
| k8s.cronjob.active_jobs | The number of actively running jobs for a cronjob. |
| k8s.daemonset.current_scheduled_nodes | Number of nodes that are running at least one daemon pod and are supposed to run the daemon pod. |
| k8s.daemonset.desired_scheduled_nodes | Number of nodes that should be running the daemon pod (including nodes currently running the daemon pod). |
| k8s.daemonset.misscheduled_nodes | Number of nodes that are running the daemon pod, but are not supposed to run the daemon pod. |
| k8s.daemonset.ready_nodes | Number of nodes that should be running the daemon pod and have one or more of the daemon pod running and ready. |
| k8s.deployment.available | Total number of available pods (ready for at least minReadySeconds) targeted by this deployment. |
| k8s.deployment.available_pods | Total number of available pods targeted by this deployment (same as above, sometimes used interchangeably). |
| k8s.deployment.desired | Number of desired pods in this deployment. |
| k8s.deployment.desired_pods | Number of desired pods in this deployment (same as above, sometimes used interchangeably). |
| k8s.job.active_pods | The number of actively running pods for a job. |
| k8s.job.desired_successful_pods | The desired number of successfully finished pods the job should be run with. |
| k8s.job.failed_pods | The number of pods which reached phase Failed for a job. |
| k8s.job.max_parallel_pods | The max desired number of pods the job should run at any given time. |
| k8s.job.successful_pods | The number of pods which reached phase Succeeded for a job. |
| k8s.node.condition_ready | Whether the node is in a ready condition (typically 1 for ready, 0 for not ready). |
| k8s.node.cpu.usage | CPU usage on the node (usually measured in cores or millicores). |
| k8s.node.filesystem.available | Amount of available filesystem space on the node. |
| k8s.node.filesystem.capacity | Total filesystem capacity on the node. |
| k8s.node.filesystem.usage | Filesystem space used on the node. |
| k8s.node.memory.available | Amount of available memory on the node. |
| k8s.node.memory.working_set | Amount of working set memory on the node (memory actively used). |
| k8s.pod.cpu.usage | CPU usage by the pod. |
| k8s.pod.cpu_limit_utilization | Pod CPU utilization as a fraction of the CPU limit. |
| k8s.pod.cpu_request_utilization | Pod CPU utilization as a fraction of the CPU request. |
| k8s.pod.memory.usage | Memory usage by the pod. |
| k8s.pod.memory_limit_utilization | Pod memory usage as a fraction of the memory limit. |
| k8s.pod.memory_request_utilization | Pod memory usage as a fraction of the memory request. |
| k8s.pod.phase | Current phase of the pod (e.g., Pending, Running, Succeeded, Failed, Unknown). |
| k8s.replicaset.available | Total number of available pods (ready for at least minReadySeconds) targeted by this replicaset. |
| k8s.replicaset.available_pods | Total number of available pods targeted by this replicaset (same as above, sometimes used interchangeably). |
| k8s.replicaset.desired | Number of desired pods in this replicaset. |
| k8s.replicaset.desired_pods | Number of desired pods in this replicaset (same as above, sometimes used interchangeably). |
| k8s.statefulset.available_pods | Number of available pods in the StatefulSet. |
| k8s.statefulset.desired_pods | Number of desired pods in the StatefulSet. |
| k8s.statefulset.ready_pods | Number of ready pods in the StatefulSet. |
| k8s.statefulset.updated_pods | Number of updated pods in the StatefulSet. |

Last updated: May 7, 2025