05-26-2023, 08:34 PM
The story so far...
I have reef-pi running, set up on an RPi 3B. The software auto-starts AFTER the network comes up and is now binding to HTTPS automagically. I've got the sensors and outputs all working and doing reef-pi things as they should.
Okay, the next chapter in my adventure with reef-pi... Telemetry.
On this same network, I have two more servers (both VMs). One runs Prometheus (in a Docker container managed via Portainer), and the other runs Grafana. Not liking the graphs in reef-pi, I wanted to stitch the parts mentioned together so as to leverage Grafana for graphing. The post on R2R assumes reef-pi running in an x86 environment as a vertical stack on a single host, so I've had to deviate from the published directions. And it's bitten me in the @#$%. :P
(IPs in this post changed for security)
So, I have set up reef-pi and node_exporter on the RPi (192.168.1.2). The post doesn't provide installation guidance for node_exporter, so I had to fart around a little, but I now have it installed and running:
Code:
# node_exporter --version
node_exporter, version 1.5.0 (branch: HEAD, revision: 1b48970ffcf5630534fb00bb0687d73c66d1c959)
build user: root@6e7732a7b81b
build date: 20221129-18:59:41
go version: go1.19.3
platform: linux/arm64
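For anyone else in the same spot, a minimal systemd unit along these lines keeps node_exporter running across reboots (a sketch, not from the R2R post — it assumes the binary was copied to /usr/local/bin/node_exporter and that a dedicated node_exporter user has been created):

```ini
# /etc/systemd/system/node_exporter.service
[Unit]
Description=Prometheus Node Exporter
After=network-online.target
Wants=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload` and `systemctl enable --now node_exporter` to start it and enable it at boot.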
If I browse to http://192.168.1.2:9100/metrics, I can see data (excerpt):
Code:
# HELP node_scrape_collector_duration_seconds node_exporter: Duration of a collector scrape.
# TYPE node_scrape_collector_duration_seconds gauge
node_scrape_collector_duration_seconds{collector="arp"} 0.000299374
node_scrape_collector_duration_seconds{collector="bcache"} 5.5989e-05
node_scrape_collector_duration_seconds{collector="bonding"} 7.9896e-05
node_scrape_collector_duration_seconds{collector="btrfs"} 0.000721872
node_scrape_collector_duration_seconds{collector="conntrack"} 8.8645e-05
node_scrape_collector_duration_seconds{collector="cpu"} 0.089248409
{there's more but this post was gonna be long enough}
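As an aside for anyone decoding that output: each sample line is `metric_name{labels} value`, and the `# HELP` / `# TYPE` comments are metadata, not data. A quick illustrative parser (not part of my setup, just to show the shape of what Prometheus scrapes):

```python
import re

# A few lines in the Prometheus exposition format, as served at /metrics.
SAMPLE = """\
# HELP node_scrape_collector_duration_seconds node_exporter: Duration of a collector scrape.
# TYPE node_scrape_collector_duration_seconds gauge
node_scrape_collector_duration_seconds{collector="arp"} 0.000299374
node_scrape_collector_duration_seconds{collector="cpu"} 0.089248409
"""

# metric_name{label="value",...} sample_value
LINE = re.compile(r'^(?P<name>\w+)\{(?P<labels>[^}]*)\}\s+(?P<value>\S+)$')

def parse_metrics(text):
    """Return {(metric_name, labels_string): float_value} for each sample line."""
    samples = {}
    for line in text.splitlines():
        if line.startswith('#') or not line.strip():
            continue  # HELP/TYPE comments carry metadata, not samples
        m = LINE.match(line)
        if m:
            samples[(m.group('name'), m.group('labels'))] = float(m.group('value'))
    return samples

print(parse_metrics(SAMPLE)[('node_scrape_collector_duration_seconds', 'collector="cpu"')])
# prints 0.089248409
```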
On a second server (192.168.1.3), I have Prometheus up and running in a container. I've modified /etc/prometheus/prometheus.yml to add the reef-pi job. I had to modify the metrics_path, as nothing was published at /x/metrics:
Code:
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'pihole-1'
    static_configs:
      - targets: ['x.x.x.x:99999']

  - job_name: 'pihole-2'
    static_configs:
      - targets: ['x.x.x.x:99999']

  - job_name: 'reef-pi'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['192.168.1.2:9100']
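To confirm the scrape side is healthy, Prometheus's targets endpoint (http://192.168.1.3:9090/api/v1/targets) reports the state of every job. A sketch of checking such a response in Python (the JSON below is a made-up sample shaped like the API's output, not captured from my server):

```python
import json

# Illustrative /api/v1/targets response; field names follow the Prometheus
# HTTP API, but the values here are invented to mirror my setup.
RESPONSE = json.loads("""{
  "status": "success",
  "data": {
    "activeTargets": [
      {"labels": {"job": "reef-pi", "instance": "192.168.1.2:9100"},
       "health": "up", "lastError": ""},
      {"labels": {"job": "prometheus", "instance": "localhost:9090"},
       "health": "up", "lastError": ""}
    ]
  }
}""")

def unhealthy(resp):
    """Return (job, instance, lastError) for any target not reporting 'up'."""
    return [(t["labels"]["job"], t["labels"]["instance"], t["lastError"])
            for t in resp["data"]["activeTargets"]
            if t["health"] != "up"]

print(unhealthy(RESPONSE))  # an empty list means every scrape target is healthy
```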
Finally, I have a third server (192.168.1.4) with Grafana installed. I've created a Prometheus data source, pointed it at http://192.168.1.3:9090, and set the access mode to Server. Hitting "Save & Test" tells me "Data source is working". Lastly, I have the Node_Exporter dashboard installed (ID: 1860).
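For what it's worth, the same data source can also be defined declaratively instead of through the UI (a sketch in Grafana's provisioning format, dropped under /etc/grafana/provisioning/datasources/ — I set mine up in the UI, so this is just the equivalent):

```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy          # "proxy" is what the UI calls Server mode
    url: http://192.168.1.3:9090
    isDefault: true
```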
I SHOULD, I would think, be able to open the dashboard, select my data source, job, and host, and see data. I can't. When I open the dashboard, I get three errors:
"{Templating} Failed to upgrade legacy queries e.replace is not a function", "{Templating [job]} Error updating options: e.replace is not a function", and "{Templating [node]} Error updating options: e.replace is not a function". The Datasource drop-down populates, but the Job and Host drop-downs do not.
Any help is appreciated.