Prometheus: Configuration, Querying and PromQL

Some core terms

An endpoint you can scrape is called an instance – e.g. a single process.

A collection of instances with the same purpose (e.g. a replicated process such as an API server) is called a job.

A target is an endpoint Prometheus scrapes – e.g. localhost on port 9090.


Prometheus is configured via /etc/prometheus/prometheus.yml

and typically starts with:
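The gap above presumably held a config example; a minimal sketch of a typical prometheus.yml (the job name and rule filename are illustrative, not from the original notes):

```yaml
global:
  scrape_interval: 15s      # how often to scrape targets
  evaluation_interval: 15s  # how often to evaluate rules

rule_files:
  - "alert.rules.yml"       # illustrative filename

scrape_configs:
  - job_name: prometheus    # Prometheus scrapes itself
    static_configs:
      - targets: ['localhost:9090']
```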



e.g. let’s dissect this alerting rule (rule files group rules under groups, and each rule needs labels/annotations keys, which the original snippet was missing):

groups:
- name: example
  rules:
  - alert: HighErrorRate
    expr: job:request_latency_seconds:mean5m{job="myjob"} > 0.5
    for: 10m
    labels:
      severity: page
    annotations:
      summary: High request latency

See the Prometheus docs on alerting rules, recording rules, notifications, and the expr field.

Basics of querying:

1. Go to Prometheus – https://prom-server/graph

2. Enter time series selectors – e.g. a bare metric name:

   http_requests_total

   or with a label:

   http_requests_total{job="prometheus"}

Label matching operators:

  • = Select labels that are exactly equal to the provided string
  • != Select labels that are not equal to the provided string
  • =~ Select labels that regex-match the provided string
  • !~ Select labels that do not regex-match the provided string

Note: regex matchers are fully anchored – the regex must match the whole label value, not a substring.
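Matchers can be combined in one selector; a hedged example (metric and label names are illustrative):

```
http_requests_total{environment=~"staging|testing", method!="GET"}
```

This selects series whose environment label matches staging or testing and whose method label is anything other than GET.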


Get the Prometheus server’s own metrics (it monitors itself) using:

curl http://localhost:9090/metrics

To list the names of all metrics the server has scraped, use the label values endpoint instead:

curl http://localhost:9090/api/v1/label/__name__/values


And targets:

curl http://localhost:9090/api/v1/targets

/api/v1 is the HTTP API – see the HTTP API docs for the full list of endpoints.

More later.
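The API returns JSON. A sketch of inspecting a response offline – the sample payload below is illustrative, not captured from a live server:

```shell
# A sample (abridged, made-up) response from /api/v1/targets:
cat <<'EOF' > targets.json
{"status":"success","data":{"activeTargets":[{"labels":{"job":"prometheus","instance":"localhost:9090"},"health":"up"}]}}
EOF

# Pull out each target's health field with grep (crude but dependency-free):
grep -o '"health":"[a-z]*"' targets.json
```

In practice you would pipe the live curl output through a JSON tool like jq instead.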



Note: Prometheus was developed to monitor web services. To monitor a node (host machine), you’ll need Node Exporter.
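A hedged sketch of a scrape config for Node Exporter – 9100 is its default port; the job name is arbitrary:

```yaml
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']  # Node Exporter's default listen address
```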



The HTTP API also exposes label values. E.g.

`curl http://localhost:9090/api/v1/label/job/values`

gets all the values of the job label.



It’s the job of an exporter to expose metrics from a node for Prometheus to scrape. E.g. on an Elasticsearch node:

ps -ef | grep export
root 11637 1 0 Mar21 ? 00:44:18 /usr/local/bin/elasticsearch_exporter -web.listen-address=:9000
root 15603 1 0 2017 ? 03:10:45 /usr/local/bin/node_exporter -web.listen-address=:10000

we can see here an Elasticsearch exporter and a node exporter (for CPU, memory, etc. metrics).

Prometheus is configured to scrape the Elasticsearch exporter as follows:
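The gap above presumably held the scrape config; a hedged sketch – the port matches the -web.listen-address=:9000 flag in the ps output, the hostname is hypothetical:

```yaml
scrape_configs:
  - job_name: elasticsearch
    static_configs:
      - targets: ['es-node-1:9000']  # hypothetical host; port from -web.listen-address
```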


and we can check the data in Prometheus via the expression browser – e.g. query a metric the exporter exposes, such as elasticsearch_cluster_health_status.


Marvel allows you to monitor Elasticsearch via Kibana. As of 5.0, Marvel is part of X-Pack.


Monitoring containers with Prometheus and Grafana

Architecting Monitoring for Containerized Applications

Why not use Nagios?

You can’t monitor containers the same way as traditional servers – e.g. installing a monitoring agent inside each container doesn’t really work.

/metrics is exposed by the container runtime. Docker uses the Prometheus exposition format (i.e. simple text, one key/value pair per line).
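The format is plain text; a sketch of what it looks like and how to pull a value out (the metric values below are made up):

```shell
# A made-up sample in the Prometheus text exposition format:
cat <<'EOF' > sample_metrics.txt
# HELP http_requests_total Total HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="200"} 3
EOF

# Each non-comment line is "metric{labels} value"; extract the GET count:
grep '^http_requests_total{method="get"' sample_metrics.txt | awk '{print $2}'
```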

Prometheus stores the scraped data in a time series database.

Prometheus configuration

Is in YAML. E.g. a scrape job:

scrape_configs:
  - job_name: <name here>
    scrape_interval: 60s

Prometheus Dashboard

Status > Targets: lists all monitored targets

Graph > Graph: select a metric from the “insert metric at cursor” dropdown, then Execute


Collecting Metrics with Prometheus

Exposing Runtime Metrics with Prometheus

Exposing Application Metrics to Prometheus

Exposing Docker Metrics to Prometheus

Building Dashboards with Grafana




Prometheus: storage

Prometheus has its own local storage using a local on-disk time series database. However, this is not clustered or replicated. i.e. it’s not scalable or durable.

It does, however, provide interfaces (remote_write and remote_read) to integrate with remote storage systems.



One of these options is PostgreSQL with TimescaleDB. Note: TimescaleDB is built on PostgreSQL but scales it for better performance using automatic partitioning across time and space.



Prometheus remote storage adapter for PostgreSQL

1. Install packages (both provided by Timescale):

  • remote storage adapter

    The adapter is a translation proxy used by Prometheus for reading/writing data to the PostgreSQL/TimescaleDB database. Data from Prometheus arrives as Protobuf; the adapter deserializes it and converts it into the Prometheus native format (see Prometheus’ Exposition Formats) before inserting it into the database. A Docker image is available for the adapter.

  • pg_prometheus

    pg_prometheus implements the Prometheus data model for PostgreSQL. A Docker image is available which provides PostgreSQL, TimescaleDB and pg_prometheus together.

2. Configure Prometheus to use this remote storage adapter

i.e. add this to prometheus.yml:

remote_write:
  - url: "http://<adapter-address>:9201/write"
remote_read:
  - url: "http://<adapter-address>:9201/read"


See also Timescale’s tutorial on this setup.