AWS Billing Alerts

Here’s how to get an alert if your AWS bill exceeds a certain amount.

After you’ve enabled Billing Alerts:

1. Change the Region to us-east-1 (N. Virginia); billing metrics are only published in that Region

2. CloudWatch > Alarms > Create Alarm

3. Select Metric > Billing > Total Estimated Charge

4. Tick USD | EstimatedCharges

5. Click Select Metric

6. Enter the threshold value in the “exceed” field

7. Pick a notification target from “send a notification to:” (I use a List called NotifyMe with my email address). The drop-down can take a while to respond.

8. Click Show Advanced Options (at the bottom), then update the Name and Description

Note: you cannot change the Name after you’ve created an Alarm. The only way to do so is to delete your Alarm and start again.

9. Click Create Alarm


Top tip:

  • Create alarms in suitable increments up to 100x your current bill – one day you’ll hit them and you’ll want to be warned!
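The console steps above can also be scripted. A minimal sketch with the AWS CLI, assuming an existing SNS topic called NotifyMe (the account ID, alarm name, and threshold are placeholders):

```shell
# Billing metrics only exist in us-east-1, so the alarm must be created there.
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name billing-100usd \
  --alarm-description "Estimated charges exceed 100 USD" \
  --namespace "AWS/Billing" \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:NotifyMe
```

Repeating this with different names and thresholds is an easy way to lay down the 2x/10x/100x ladder of alarms from the tip above.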


Kubernetes: Service Accounts

A service account provides an identity for processes that run in a Pod.


e.g. if you access the cluster using kubectl, you’re authenticated by the apiserver as a user account (e.g. admin).

Processes in containers also contact apiserver and are authenticated (e.g. if you don’t specify an account then it’s assigned default).


Check a pod’s service account name via:

kubectl get pods/podname -o yaml

and see spec.serviceAccountName
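Alternatively, pull just that field out with a JSONPath query (podname is a placeholder):

```shell
kubectl get pod podname -o jsonpath='{.spec.serviceAccountName}'
```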


List service accounts:

kubectl get serviceaccounts (or the shorthand: kubectl get sa)

There doesn’t seem to be a way to view them via the Kubernetes Dashboard.
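To give a pod a non-default identity, create a service account and reference it from the pod spec. A minimal sketch (the names build-robot and demo are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  serviceAccountName: build-robot   # must exist when the pod is created
  containers:
  - name: main
    image: nginx
```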



Service Meshes on Kubernetes: Istio, Linkerd, SuperGloo

Quick note: there’s a lot going on in the Service Mesh space for Kubernetes.

Istio (based on Envoy) is the dominant player, with a ton of funding.

But there’s also Linkerd and SuperGloo.

And a recent announcement from AWS: AWS App Mesh.


Great summary of Istio:

Generally traffic is defined as north/south (into and out of the datacenter) or east/west (between servers in the datacenter).

Istio is for east/west traffic within your K8S cluster, designed to connect your services together by moving all the network traffic through the Envoy proxy. It is usually done by wrapping your deployments with an extra sidecar pod (automatically using K8S APIs) that intercepts all the networking to other services and pods. You would still use a load balancer or ingress to route external traffic into the cluster, although there are options like Heptio Contour that also use Envoy for this.

This provides a single data and control plane to centralize all network reliability, security, service discovery, and monitoring.

Note: Istio uses an extended version of the Envoy proxy. Istio provides:
  • Dynamic service discovery
  • Load balancing
  • TLS termination
  • HTTP/2 and gRPC proxies
  • Circuit breakers
  • Health checks
  • Staged rollouts with %-based traffic split
  • Fault injection
  • Rich metrics
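The automatic sidecar wrapping described above is typically switched on per namespace. A sketch, assuming Istio is already installed in the cluster:

```shell
# Label the namespace so Istio's admission webhook injects the Envoy
# sidecar into every pod created there from now on.
kubectl label namespace default istio-injection=enabled

# Pods created after this point should show an extra container (e.g. 2/2 READY)
kubectl get pods -n default
```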
And an interesting post about Service Meshes:

Prometheus: Configuration, Querying and PromQL

Some core terms

An endpoint you can scrape is called an instance – e.g. a single process.

A collection of instances with the same purpose (e.g. a replicated process such as an API server) is called a job.

A target is an endpoint that Prometheus scrapes – e.g. localhost on port 9090.


Prometheus is configured via /etc/prometheus/prometheus.yml

and typically starts with:
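The file itself isn’t shown here; as a sketch, a minimal prometheus.yml typically starts with a global section, rule files, and a scrape config for Prometheus itself (the rules path is illustrative):

```yaml
global:
  scrape_interval: 15s      # how often to scrape targets
  evaluation_interval: 15s  # how often to evaluate rules

rule_files:
  - "rules/*.yml"           # alerting/recording rule files

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
```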



e.g. let’s dissect this alerting rule (from a rules file):

groups:
- name: example
  rules:
  - alert: HighErrorRate
    expr: job:request_latency_seconds:mean5m{job="myjob"} > 0.5
    for: 10m
    labels:
      severity: page
    annotations:
      summary: High request latency

See alerting rules:

and recording rules:

and this on notifications

and this on expr:


Basics of querying:

1. Go to Prometheus – https://prom-server/graph

2. Enter time series selectors



or with a label
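The selectors referred to above might look like this (metric and label names are illustrative):

```promql
http_requests_total                    # all series for this metric
http_requests_total{job="apiserver"}   # restricted by a label
```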




Label matching operators:

  • = Select labels that are exactly equal to the provided string
  • != Select labels that are not equal to the provided string
  • =~ Select labels that regex-match the provided string (the match is fully anchored)
  • !~ Select labels that do not regex-match the provided string
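For example, the regex matchers can be combined with alternation (metric and label names are illustrative):

```promql
http_requests_total{environment=~"staging|testing"}   # either value
http_requests_total{environment!~"prod.*"}            # exclude anything starting with "prod"
```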


Get the metrics that the Prom server itself exposes using:

curl http://localhost:9090/metrics
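To list all metric names the server knows about (as opposed to Prometheus’s own /metrics output above), query the values of the special `__name__` label:

```shell
curl http://localhost:9090/api/v1/label/__name__/values
```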


And targets:

curl http://localhost:9090/api/v1/targets

/api/v1 is the HTTP API.


More later.


More useful docs:


Note: Prometheus was developed to monitor web services. To monitor a node, you’ll need Node Exporter:



The HTTP API (at /api/v1) can also return label values:

E.g. `curl http://localhost:9090/api/v1/label/job/values`

gets all the label values for the job label.



It’s the job of an exporter to expose a node’s metrics so Prometheus can scrape them. E.g. on an Elasticsearch node:

ps -ef | grep export
root 11637 1 0 Mar21 ? 00:44:18 /usr/local/bin/elasticsearch_exporter -web.listen-address=:9000
root 15603 1 0 2017 ? 03:10:45 /usr/local/bin/node_exporter -web.listen-address=:10000

we can see here an Elasticsearch exporter and a node exporter (for CPU, memory, etc. metrics).

Prometheus is configured to scrape the Elasticsearch exporter as follows:
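The config isn’t shown; a sketch of what the scrape config for the two exporters in the ps output above might look like (job names and the es-node host are assumptions):

```yaml
scrape_configs:
  - job_name: elasticsearch
    static_configs:
      - targets: ['es-node:9000']    # elasticsearch_exporter
  - job_name: node
    static_configs:
      - targets: ['es-node:10000']   # node_exporter
```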


and we can check the data in Prometheus via:
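For example, queries like these at /graph confirm data is arriving (the job label and exporter metric names are assumptions based on the standard exporters):

```promql
up{job="elasticsearch"}      # 1 while the exporter is being scraped successfully
node_cpu_seconds_total       # a typical node_exporter metric
```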




Marvel allows you to monitor Elasticsearch via Kibana. As of 5.0, Marvel is part of X-Pack.