Following on from 1.0.0, Prometheus 1.1.0 has been released. Let's have a look at the main improvements!
It can seem like a good idea to use recording rules to make the content of a time series more explicit in its name, particularly for those not used to labels. However, this usually leads to confusing names and the loss of the benefits that labels provide.
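As a rough illustration of the difference (the metric names, datacenter label and rule names here are hypothetical, and the current rule-file format is used), compare a rule that bakes a label value into the metric name with one that simply keeps the label:

    groups:
      - name: example
        rules:
          # Anti-pattern: the datacenter ends up in the metric name, so each
          # datacenter needs its own rule and its own set of queries.
          - record: dublin_node_cpu_seconds:rate5m
            expr: rate(node_cpu_seconds_total{datacenter="dublin"}[5m])
          # Keeping the label instead: one rule covers every datacenter, and
          # queries can still filter or aggregate by the datacenter label.
          - record: node_cpu_seconds:rate5m
            expr: rate(node_cpu_seconds_total[5m])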
I've previously mentioned that you shouldn't have the version of your software as a target label, nor exposed as a label on every metric your server produces, as it makes using those metrics more challenging. What should you do instead?
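One common pattern (a sketch only; the metric and label names here are illustrative, not necessarily what the full post uses) is to expose the version once, as a constant "info"-style metric whose value is always 1, and join it onto other series at query time:

    # Exposed by the application: the version lives in a label, the value is 1.
    my_app_build_info{version="1.1.0"} 1

    # PromQL: attach the version label to another metric when you need it.
    rate(http_requests_total[5m])
      * on (instance) group_left(version)
      my_app_build_info

This keeps the version out of every other time series, so a deploy doesn't churn all of your metrics, while still letting you break out queries by version when it matters.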
In a previous post I said that rather than adding another label such as host or alias to a target to give it a usable name, you should instead change the instance label. Let's see how you do that.
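A minimal sketch of one way to do it (the job name, target address and the name label are hypothetical; any label attached to the target, including one from service discovery, could be used as the source): use relabel_configs to overwrite instance before the scrape.

    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ['10.0.0.1:9100']
            labels:
              name: webserver-1        # human-readable name for this target
        relabel_configs:
          # Copy the human-readable name into the standard instance label
          # instead of carrying a separate host/alias label around.
          - source_labels: [name]
            target_label: instance

Because instance is set during relabelling, Prometheus will not overwrite it with the target's address, so dashboards and alerts see webserver-1 rather than an IP and port.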
How you choose to name metrics is important. If everyone chose a different scheme it would lead to confusion and irritation, and prevent us from sharing and reusing each other's work. I'd like to share some guidelines to help keep things sane for everyone.
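To give a flavour of the conventions involved (these particular sample values are illustrative, not prescriptive): snake_case names, base units spelled out in the suffix, and _total reserved for counters.

    # Counters end in _total and only ever go up.
    http_requests_total{method="GET"} 1027
    # Base units in the name: seconds rather than milliseconds...
    process_cpu_seconds_total 12.4
    # ...and bytes rather than kilobytes or megabytes.
    process_resident_memory_bytes 4.2e+07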
Another question we commonly get about Prometheus is why we don't have a single per-machine agent that handles all the collection, and instead have one exporter per application. Doesn't that make it harder to manage?
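In practice this just means each exporter is its own scrape target. A hedged sketch of what that looks like in a scrape configuration, using the conventional default ports of the node and MySQL exporters and a hypothetical host name:

    scrape_configs:
      # Machine-level metrics from the node exporter.
      - job_name: node
        static_configs:
          - targets: ['db1:9100']
      # Application-level metrics from a separate exporter on the same host.
      - job_name: mysqld
        static_configs:
          - targets: ['db1:9104']

Each exporter can then be deployed, upgraded and scraped independently, and a failure in one doesn't take out monitoring for everything else on the machine.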
How should you choose the labels to put on your Prometheus monitoring targets? Let's take a look.
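As a small, hypothetical example of what target labels look like in practice (the label names and values are illustrative): they describe where a target sits, not what it exposes, and they end up on every time series scraped from it.

    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ['web1:9100', 'web2:9100']
            labels:
              env: production
              team: frontend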
After almost four years of development, Prometheus has reached 1.0. Hot on the heels of 0.20.0, this release brings new features and, more importantly, guarantees.
When designing a monitoring system and the datastore that goes with it, it can be tempting to go straight for a clustered, highly consistent approach. But is that the best choice?