Blog

Downsampling smart meter data with InfluxDB

This article is based on the official InfluxDB documentation on Downsampling and data retention.

I’m using a P1 (smart energy meter) database for this example.

First, change your default retention policy and create at least one additional retention policy:

CREATE RETENTION POLICY "168_hours" ON "P1_External" DURATION 168h REPLICATION 1 DEFAULT
CREATE RETENTION POLICY "2yr" ON "P1_External" DURATION 104w REPLICATION 1 
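
To verify both retention policies (and which one is the default), list them:

SHOW RETENTION POLICIES ON "P1_External"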

Create a test query that summarizes the data the way it should be stored in the downsampled series:

SELECT mean("current_delivery") as "current_delivery", mean("current_usage") as "current_usage", last("total_usage_gas") as "total_usage_gas", last("total_usage_t1") as "total_usage_t1", last("total_usage_t2") as "total_usage_t2", last("total_delivery_t1") as "total_delivery_t1", last("total_delivery_t2") as "total_delivery_t2" FROM energy_p1_actual GROUP BY "name", time(1h) ORDER BY time DESC LIMIT 10

Then, define a continuous query from this:

CREATE CONTINUOUS QUERY "cq_60m" on "P1_External" BEGIN SELECT mean("current_delivery") as "current_delivery", mean("current_usage") as "current_usage", last("total_usage_gas") as "total_usage_gas", last("total_usage_t1") as "total_usage_t1", last("total_usage_t2") as "total_usage_t2", last("total_delivery_t1") as "total_delivery_t1", last("total_delivery_t2") as "total_delivery_t2" INTO "2yr"."energy_p1_history" FROM energy_p1_actual GROUP BY "name", time(1h) END
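
To confirm the continuous query was registered:

SHOW CONTINUOUS QUERIES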

Because the continuous query groups the data into one-hour buckets (GROUP BY time(1h)), it will run every hour and write the summarized data into the "2yr" retention policy.
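
Once the continuous query has run at least once, a quick sanity check on the downsampled series could look like this (using the retention policy and measurement names created above):

SELECT * FROM "2yr"."energy_p1_history" ORDER BY time DESC LIMIT 5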

Configure TLS for Mosquitto using a self-signed certificate

This article describes how to configure TLS for Mosquitto using a self-signed certificate. I assume that Mosquitto is installed and running.

Browse to the right directory:

cd /etc/mosquitto/certs 

Generate a 3DES-encrypted RSA private key for the CA using OpenSSL and store it in the mosquitto certificates directory:

openssl genrsa -des3 -out ca.key 2048

Generate the self-signed CA certificate using this private key:

openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

Copy the CA certificate to the right directory:

sudo cp ca.crt /etc/mosquitto/ca_certificates/

Generate an RSA private key for the server:

openssl genrsa -out server.key 2048

Generate a certificate signing request (CSR) for the server:

openssl req -new -out server.csr -key server.key

Sign the server certificate with the CA key and certificate:

openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 3650
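
Optionally, verify that the server certificate chains correctly to the CA certificate:

openssl verify -CAfile ca.crt server.crt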

Configure Mosquitto to listen for TLS connections:

cd /etc/mosquitto/conf.d
nano listener.conf

listener xxxx 192.168.x.x
cafile /etc/mosquitto/ca_certificates/ca.crt
certfile /etc/mosquitto/certs/server.crt
keyfile /etc/mosquitto/certs/server.key
require_certificate false

I don’t require clients to present a certificate (require_certificate false).

Go to the certificates folder and set the right permissions and ownership on the generated files:

cd /etc/mosquitto/certs
chmod 400 server.key
chmod 444 server.crt
chown mosquitto server*

Restart the Mosquitto service:

systemctl restart mosquitto.service
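
To test the TLS listener from a client, mosquitto_sub can subscribe over TLS using a copy of the CA certificate. The port (8883 here) and topic are placeholders for whatever you configured in listener.conf:

mosquitto_sub -h 192.168.x.x -p 8883 --cafile ca.crt -t 'test/#' -v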

This is working for me now. However, while documenting this process I realized I might have mixed up the 3DES (CA) and RSA (server) certificates in the Mosquitto configuration. Something to look into later.

Calculating energy (in)dependence

One of the goals of building a digital twin of our house is to reduce our dependence on external energy sources. To achieve this goal, and to validate if my measures are having a positive effect, I need to compute our dependence on external energy sources. Our house will be fully electric, so that makes things a bit easier as I don’t have to take gas into account.

Basically, we will have two sources of energy, the grid and the PV (solar) panels, and one consumer: the house itself, including all the appliances consuming energy.

Energy situation of our future house.

External energy, in this context, is energy consumed from the grid. I calculate the total amount of energy we consume using:

total_consumption = pv_production + grid_consumption - pv_overproduction

When the PV panels are not producing energy, there will not be any overproduction, and the total_consumption will be equal to the grid_consumption. When there is no grid_consumption and the PV panels are producing sufficient energy to meet the demand, the total_consumption is equal to the pv_production minus the pv_overproduction.

I’m interested in our dependence on the grid. This is then an easy next step:

dependence = grid_consumption / total_consumption

This gives me the share of our total consumption that is drawn from the grid. Initially, I will calculate our dependence based on 30-minute intervals.
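
As a hypothetical worked example for a single 30-minute interval, assume pv_production = 1.2 kWh, grid_consumption = 0.3 kWh and pv_overproduction = 0.4 kWh:

total_consumption = 1.2 + 0.3 - 0.4 = 1.1 kWh
dependence = 0.3 / 1.1 ≈ 0.27

So roughly 27% of the energy consumed in that interval came from the grid.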

The good news is that I already have the data, but it’s spread over multiple time-series. I’m building a small service that consumes the required data from the time-series, computes the dependence, and writes it back into a new time-series for historic analysis.

Grafana showing pv_overproduction (green) and grid_consumption(yellow).

Make Proxmox VLAN aware

I’m using Proxmox as a hypervisor to run my virtual machines and use two VLANs in my home network: one for normal traffic and a separate VLAN for IoT traffic. Virtual machines should be connected to one of those networks. The normal network uses VLAN ID 20 (untagged on most ports), while the IoT traffic is tagged with VLAN 21.

Configuration file: /etc/network/interfaces

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr0.20
iface vmbr0.20 inet static
        address 192.168.20.x/24
        gateway 192.168.20.1
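
To apply the new network configuration without a reboot (assuming ifupdown2 is installed, which recent Proxmox releases use by default):

ifreload -a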

This should result in the following Proxmox network configuration:

Proxmox host system network configuration

Now you can easily add a network adapter to a virtual machine and tag it with the correct VLAN.

Virtual machine network adapter configuration, including tagged VLAN.
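
The same configuration can also be applied from the Proxmox CLI. As a sketch, assuming a hypothetical VM with ID 100 that should get a virtio NIC on vmbr0 tagged with the IoT VLAN:

qm set 100 --net0 virtio,bridge=vmbr0,tag=21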

Install Telegraf for monitoring purposes

curl -fsSL https://repos.influxdata.com/influxdata-archive_compat.key -o /etc/apt/keyrings/influxdata-archive_compat.key
echo "deb [signed-by=/etc/apt/keyrings/influxdata-archive_compat.key] https://repos.influxdata.com/debian stable main" | tee /etc/apt/sources.list.d/influxdata.list
apt update
apt -y install telegraf 

The Telegraf configuration is located in /etc/telegraf/.

Example configuration:

[global_tags]
[agent]
  interval = "60s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  hostname = "DB152"
  omit_hostname = false
[[outputs.influxdb]]
  urls = ["https://192.168.21.152:8086"]
  database = "Verhaeg_Monitoring"
  username = "xxx"
  password = "xxx"
  insecure_skip_verify = true
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.mem]]
[[inputs.swap]]
[[inputs.net]]
  interfaces = ["ens18"]
[[inputs.netstat]]
[[inputs.kernel]]
[[inputs.system]]
[[inputs.processes]]
[[inputs.diskio]]
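
Before relying on the service, you can run the configuration once in the foreground as a test and then make sure Telegraf starts at boot:

telegraf --config /etc/telegraf/telegraf.conf --test
systemctl enable --now telegraf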

Installing Eclipse Ditto

Download Ditto from GitHub and unzip it in your favorite directory:

cd /data/install
wget https://github.com/eclipse/ditto/archive/master.zip
unzip master.zip

Create a password hash for the nginx reverse proxy:

openssl passwd -quiet
 Password: <enter password>
 Verifying - Password: <enter password>

Append the printed hash to the nginx.httpasswd file (in the same folder as docker-compose.yml), placing the username that should receive this password in front, like this:

ditto:A6BgmB8IEtPTs

Configure the Docker data directory in /etc/docker/daemon.json:

{
   "data-root": "/data/docker"
}
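
Restart the Docker daemon so it picks up the new data directory:

systemctl restart docker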

And finally, start Ditto using the Docker Compose setup:

cd ditto-master/deployment/docker/
docker-compose up -d
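
Give the containers a minute or two to start, then check that they are all up:

docker-compose ps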

To automatically start Ditto at system startup and clean up the related log files, add the following two lines to crontab:

crontab -e

@reboot sleep 30 && cd /data/docker && find . -name "*json.log" -type f -delete
@reboot sleep 60 && cd /data/install/ditto-master/deployment/docker && sudo docker-compose up -d

Done!

Updating Eclipse Ditto

First, kill all docker containers. Then remove them, and remove their related images:

docker kill $(docker ps -q) && docker rm $(docker ps -a -q) && docker rmi $(docker images -q)

Then we download the latest version of Ditto from GitHub:

wget https://github.com/eclipse/ditto/archive/refs/heads/master.zip
unzip master.zip

Browse to the right directory and start it:

cd ditto-master/deployment/docker
docker-compose up -d

Don’t forget to adjust the password:

openssl passwd -quiet
 Password: <enter password>
 Verifying - Password: <enter password>

Append the printed hash to the nginx.httpasswd file (in the same folder as docker-compose.yml), placing the username that should receive this password in front, like this:

ditto:A6BgmB8IEtPTs