
Chasing Away Kubernetes DaemonSet Issues with Prometheus & AlertManager

Mohammed Naser

Are your pods up, running, and functional, but reporting Ready conditions that aren't True? Learn how you can figure out which pods are affected with Prometheus and AlertManager, so that you can alert on specific pods instead of the whole DaemonSet.

As we slowly continue our migration to Prometheus alerting with AlertManager, our strategy has been to build out a broad set of alerts first and then build more accurate, narrowed-down ones once we see them firing. In this case, I want to share an example of how we noticed an issue, built more precise alerts to pinpoint its source and reduce resolution time, and then fixed the issue.

We currently use all of the rules provided by the kubernetes-monitoring/kubernetes-mixin repository. In our case, we started seeing the KubeDaemonSetRolloutStuck alert firing, which meant that certain pods were reporting that they were not ready. After some basic PromQL queries, however, we noticed that all of these pods were up, running, and functional; their Ready condition status was simply not True.
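For context, that upstream alert essentially boils down to checking whether fewer of a DaemonSet's pods are Ready than it wants scheduled. A simplified sketch of the rule (the actual mixin version is more elaborate and has changed over time) looks roughly like this:

- alert: KubeDaemonSetRolloutStuck
  # Simplified sketch: fire when not all of the DaemonSet's desired pods are Ready.
  expr: |
    kube_daemonset_status_number_ready{job="kube-state-metrics"}
      /
    kube_daemonset_status_desired_number_scheduled{job="kube-state-metrics"} < 1.00
  for: 15m

So any pod stuck with a not-True Ready condition drags the ratio below 1 and keeps the DaemonSet-level alert firing, without telling you which pod is responsible.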

At this stage, we figured the next step would be finding a way to reliably identify exactly which pods were affected. That way, we could alert on those specific pods instead of on the DaemonSet in general, and avoid all the research of figuring out which pods aren't working, for a much faster resolution time. We started with something as simple as this:

kube_pod_status_ready{condition="false"} == 1

We then added a few more bits necessary for monitoring our infrastructure and joined the result to both the pod info metric (to get the node) and the pod owner metric. Afterward, we reshuffled some labels so that we could properly do inhibitions inside our monitoring infrastructure.
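Our exact rule is tied to our own labels, but a sketch of the general shape, joining the not-Ready condition to kube_pod_info for the node and to kube_pod_owner for the owning DaemonSet, with label_replace standing in for the label reshuffling, would be something along these lines (the alert name and label choices here are illustrative, not our production rule):

- alert: KubeDaemonSetPodNotReady
  # Sketch only: one alert per not-Ready pod, enriched with the node it runs on
  # and relabelled so it carries the same "daemonset" label as the upstream alert.
  expr: |
    label_replace(
      (kube_pod_status_ready{condition="false"} == 1)
        + on(namespace, pod) group_left(node) (0 * kube_pod_info)
        + on(namespace, pod) group_left(owner_name) (0 * kube_pod_owner{owner_kind="DaemonSet"}),
      "daemonset", "$1", "owner_name", "(.*)"
    )
  for: 15m

The group_left joins multiply the info metrics by zero so the value is unchanged while the node and owner_name labels get pulled onto each not-Ready pod.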


Once we implemented that, we went from a few firing alerts for DaemonSets to roughly 50 firing (with the initial alert inhibited). While there were more alerts overall, they were all grouped by DaemonSet *and* included pod information, so our monitoring system still only fired 6 alerts. But this time, we had complete information about the nodes and pods affected, so we could start gathering everything needed to resolve things.
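On the AlertManager side, the inhibition itself is just an inhibit_rules entry that matches the broad rollout alert against the new per-pod one on shared labels. A sketch, assuming both alerts carry namespace and daemonset labels (the alert names and matchers are illustrative, not our exact configuration):

inhibit_rules:
  # Sketch: while a per-pod alert is firing for a DaemonSet, silence the broader
  # rollout-stuck alert for that same namespace/DaemonSet.
  - source_matchers:
      - alertname="KubeDaemonSetPodNotReady"
    target_matchers:
      - alertname="KubeDaemonSetRolloutStuck"
    equal: ["namespace", "daemonset"]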

At this point, using the PromQL query below, we noticed that these issues were all affecting specific nodes within the infrastructure for those DaemonSets, which helped us start narrowing down the most common issue across those nodes:

sum((kube_pod_status_ready{condition="false"} == 1) + on(pod) group_left(node) (0 * kube_pod_info)) by (node)

We started by deleting all of the affected pods to see if they would come back cleanly; in our architecture, killing and restarting these pods has no overall effect on the health of the system. Once we deleted all of the pods that were in an unready state, they came back up with their Ready condition set to True, which meant everything was okay, and all the alarms cleared out.

Now we've hit a point where we know that everything is back up and running, but we still don't know exactly what happened. It's time to start digging into the precise root cause of this issue, but that's for another small post that goes through the step-by-step process.

Solving mysteries by intense testing and deduction brings me joy. So, if you are scratching your head over a Kubernetes/OpenStack puzzle and you think my team and I could be helpful, we'd be glad to troubleshoot private cloud components. Contact us by visiting our Consulting Solutions page.

This article was originally posted as a guest blog post on the CNCF Blog titled Chasing Away Kubernetes DaemonSet Issues with Prometheus & AlertManager. The article was written by VEXXHOST CEO, Mohammed Naser.

