Prometheus: Configuration, Querying and PromQL

Some core terms

An instance is an endpoint you can scrape – usually corresponding to a single process.

A collection of instances with the same purpose (e.g. a replicated process such as an API server) is called a job.

A target is an object Prometheus scrapes – e.g. localhost on port 9090.


Prometheus is configured via /etc/prometheus/prometheus.yml

and typically starts with:
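A minimal configuration might look like the following sketch (job names, file names and targets are illustrative, not the original config):

```yaml
global:
  scrape_interval: 15s      # how often to scrape targets
  evaluation_interval: 15s  # how often to evaluate rules

rule_files:
  - "alert.rules"           # alerting/recording rules (assumed filename)

scrape_configs:
  - job_name: "prometheus"  # Prometheus scraping itself
    static_configs:
      - targets: ["localhost:9090"]
```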



Let’s dissect this configuration:

See the Prometheus docs on alerting rules, recording rules, notifications, and the expr field used in rules.

Basics of querying:

1. Go to the Prometheus web UI – e.g. https://prom-server/graph

2. Enter time series selectors
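For example, a bare metric name selects all time series with that name – e.g. a metric Prometheus exposes about itself:

prometheus_http_requests_total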



or with a label
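For example (the code label shown is one this metric carries; values are illustrative):

prometheus_http_requests_total{code="200"}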




Label matching operators:

  • = Select labels that are exactly equal to the provided string
  • != Select labels that are not equal to the provided string
  • =~ Select labels that regex-match the provided string
  • !~ Select labels that do not regex-match the provided string

Note that regex matches are fully anchored.
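These can be combined in a single selector; for example (metric and label names are illustrative):

http_requests_total{environment=~"staging|dev",method!="GET"}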


Get the list of metrics the Prometheus server itself exposes using:

curl http://localhost:9090/metrics
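The response is in the Prometheus text exposition format, roughly like this (metric names and values are illustrative):

```
# HELP prometheus_http_requests_total Counter of HTTP requests.
# TYPE prometheus_http_requests_total counter
prometheus_http_requests_total{code="200",handler="/metrics"} 1423
```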


And the list of scrape targets:

curl http://localhost:9090/api/v1/targets

/api/v1 is the root of the HTTP API; more on it later.
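The targets endpoint returns JSON. A rough sketch of its shape, and how you might pull out each target's job and health (the sample payload is illustrative, trimmed from what a real server returns):

```python
import json

# Trimmed, illustrative sample of a /api/v1/targets response.
sample = """
{
  "status": "success",
  "data": {
    "activeTargets": [
      {
        "labels": {"instance": "localhost:9090", "job": "prometheus"},
        "scrapeUrl": "http://localhost:9090/metrics",
        "health": "up"
      }
    ],
    "droppedTargets": []
  }
}
"""

payload = json.loads(sample)

# Print each active target's job and health.
for target in payload["data"]["activeTargets"]:
    print(target["labels"]["job"], target["health"])  # prints: prometheus up
```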


More useful docs: the official Prometheus documentation covers configuration, querying and the HTTP API in detail.
Note: Prometheus was developed to monitor web services. To monitor a host’s system-level metrics, you’ll need the Node Exporter:
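Once node_exporter is running (it listens on port 9100 by default), a scrape job for it might look like this sketch:

```yaml
scrape_configs:
  - job_name: "node"            # job name is illustrative
    static_configs:
      - targets: ["localhost:9100"]
```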



You can also query label values:

E.g. curl http://localhost:9090/api/v1/label/job/values

gets all the values of the job label.
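The response is a small JSON document; a sketch of parsing it (the sample values are illustrative):

```python
import json

# Illustrative response from /api/v1/label/job/values.
sample = '{"status": "success", "data": ["prometheus", "node"]}'

jobs = json.loads(sample)["data"]
print(jobs)  # prints: ['prometheus', 'node']
```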



It’s the job of an exporter to expose metrics from a node or service in a format Prometheus can scrape. E.g. on an Elasticsearch node:

we might see both an Elasticsearch exporter and a node exporter (for CPU etc. metrics) running.

Prometheus is configured to scrape the Elasticsearch exporter as follows:
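A sketch of such a scrape config, assuming the community elasticsearch_exporter, which listens on port 9114 by default (the hostname is a placeholder):

```yaml
scrape_configs:
  - job_name: "elasticsearch"
    static_configs:
      - targets: ["es-node:9114"]   # es-node is a placeholder hostname
```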


and we can check the data in Prometheus via:
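For example, by querying a metric the exporter exposes (the metric name below is from the community Elasticsearch exporter; an assumption here):

elasticsearch_cluster_health_status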




Marvel allows you to monitor Elasticsearch via Kibana. As of 5.0, Marvel is part of X-Pack.

