Grafana Monitoring on a Raspberry Pi
As you might have seen from my last few posts, I have quite a lot running on my Raspberry Pi.
I am currently using a Raspberry Pi 2 B, which is a great device but only has 1GB of RAM and a 900 MHz CPU. So I sometimes worry that I am going to overload it with all the Docker services I am running on it.
I use Grafana a lot at work and love it, so I thought it would be good to use it to monitor my Raspberry Pi.
With any monitoring, it is important to know what you want to keep an eye on.
In my case I am interested in the following:
- CPU — If the CPU ends up running at 100% a lot of the time, I might need to scale down the services running on it.
- Memory — With only 1GB of memory, I need to keep an eye on how much I run on it.
- Hard Disk Space — I have a 32GB SD card in my Pi, but I have had one fill up before, which makes the whole thing unresponsive.
- Container data — I want to know which containers are causing high CPU and memory usage.
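Before any dashboards exist, the same numbers can be sanity-checked by hand on the Pi with standard Linux tools. A quick sketch (nothing Pi-specific here):

```shell
# One-off checks for the things the dashboards will track:
free -h    # total vs used memory (the 1GB ceiling)
df -h /    # free space on the SD card's root partition
uptime     # load averages as a rough CPU indicator
# (once the stack is up, `docker stats --no-stream` shows per-container CPU/memory)
```

These are the raw sources that Node Exporter and cAdvisor expose to Prometheus in a structured form.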
What we need
There are probably quite a few services that work with Grafana for monitoring. However, I am using the following:
- Grafana — obviously!
- Prometheus — for gathering the data in a time series.
- cAdvisor — A container monitor from Google to monitor the resources used by containers.
- Node Exporter — Prometheus exporter for hardware and OS metrics.
- Docker — obviously.
- Traefik — I use this as my reverse proxy. If you don’t have a reverse proxy set up, you can follow my previous post, Traefik vs Nginx for Reverse Proxy with Docker on a Raspberry Pi.
Docker Compose Set up
I will go straight to the Docker Compose file you need and explain what you need to change for your setup.
You will notice I am using braingamer/cadvisor-arm:latest for cAdvisor. This is because the official Google image doesn't support ARM and is marked as deprecated. Of course, you could build your own Docker image if you wanted to.
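The original Compose file is embedded elsewhere, so here is a minimal sketch of the stack for reference. The service names, volume names, and ports are my assumptions; in particular I have left out the Traefik labels, which depend entirely on your own proxy setup:

```yaml
# Sketch only — service/volume names and ports are assumptions, not the
# author's exact file. Add your own Traefik labels/networks as needed.
version: "3"

services:
  grafana:
    image: grafana/grafana:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana

  prometheus:
    image: prom/prometheus:latest
    restart: unless-stopped
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus

  cadvisor:
    image: braingamer/cadvisor-arm:latest   # ARM build, as noted above
    restart: unless-stopped
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro

  node-exporter:
    image: prom/node-exporter:latest
    restart: unless-stopped

volumes:
  grafana-data:
  prometheus-data:
```

The cAdvisor volume mounts give it read-only access to the host and Docker's internal state, which is how it sees per-container resource usage.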
This is the Prometheus config file I am using:
# my global config
global:
  scrape_interval: 120s # By default, Prometheus scrapes targets every 15 seconds.
  evaluation_interval: 120s # Evaluate rules every 120 seconds.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
  # - "alert.rules"
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090', 'cadvisor:8080', 'node-exporter:9100']
Dashboard Set Up
This is what my Grafana dashboard looks like. If you want something similar, you can copy my dashboard JSON.
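If you would rather build the panels yourself, queries along these lines cover the four things listed at the start. The panel groupings are mine, but the metric names are the standard ones exposed by Node Exporter and cAdvisor:

```
# CPU usage % (Node Exporter)
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Memory available vs total (Node Exporter)
node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes

# Free disk space on the SD card (Node Exporter)
node_filesystem_avail_bytes{mountpoint="/"}

# Per-container CPU and memory (cAdvisor)
rate(container_cpu_usage_seconds_total[5m])
container_memory_usage_bytes
```

With a 120s scrape interval, rates over short windows will be coarse; `[5m]` gives each rate at least two samples to work with.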
Originally published at https://www.alexhyett.com on January 28, 2021.