I was going to hold back my updated notes on how my Home-Assistant-on-Kubernetes setup works until I’d perfected some of the details, but as it happens, questions were asked and answers given on the Home Assistant Discord server, and thus now became the time.
So, yes, things have changed just a tad since I posted this:
We’ll get to the two minor things later, because the big one is in the revised configuration for Home Assistant itself, whose manifest now looks like this:
```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-recorder-pass
  namespace: homeassistant
type: Opaque
data:
  password: <REDACTED>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: homeassistant
  name: homeassistant
  namespace: homeassistant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: homeassistant
  template:
    metadata:
      labels:
        app: homeassistant
    spec:
      nodeName: princess-celestia
      volumes:
        - name: ha-storage
          nfs:
            server: mnemosyne.arkane-systems.lan
            path: "/swarm/harmony/homeassistant/ha"
        - name: ha-mysql-storage
          hostPath:
            path: /opt/ha-mysql
            type: DirectoryOrCreate
      containers:
        - image: ghcr.io/home-assistant/home-assistant:stable
          name: home-assistant
          volumeMounts:
            - mountPath: "/config"
              name: ha-storage
        - image: mysql:latest
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-recorder-pass
                  key: password
          volumeMounts:
            - name: ha-mysql-storage
              mountPath: /var/lib/mysql
---
apiVersion: v1
kind: Service
metadata:
  name: homeassistant
  namespace: homeassistant
spec:
  selector:
    app: homeassistant
  ports:
    - protocol: TCP
      port: 8123
      name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: homeassistant-ingress
  namespace: homeassistant
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: 'websecure'
    traefik.ingress.kubernetes.io/router.tls: 'true'
spec:
  rules:
    - host: jeeves.harmony.arkane-systems.lan
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: homeassistant
                port:
                  number: 8123
    - host: automation.arkane-systems.net
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: homeassistant
                port:
                  number: 8123
```
Two things to note here:
First, I’ve added a second container to the Home Assistant pod, containing the recorder database. One of the things I noticed with the previous setup was that various things hitting the recorder database (history graphs appearing, for example) were slow and generally non-performant, partly because of having to go out over the network (when using a remote SQL server), and partly because SQLite on NFS is no one’s friend (when not).
But, fortunately, in Home Assistant none of the configuration information (i.e., the things it would really hurt to lose) is stored in the recorder database; that all lives under .storage. The recorder database contains only dynamic information (states and events) which, sure, would be annoying to lose, but is certainly not critical.
So we can gain a considerable performance improvement by putting a MySQL instance into the Home Assistant pod and having that instance use node-local (i.e., hostPath) storage, such that talking to the DB is a purely local operation.
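For reference, the Home Assistant side of this is just the recorder’s db_url in configuration.yaml, something like:

```yaml
# configuration.yaml (sketch): point the recorder at the MySQL sidecar
# over the pod-local TCP interface, using the connection string format
# discussed later in this post.
recorder:
  db_url: mysql://ha:YOUR-PASSWORD-HERE@127.0.0.1/ha_recorder?charset=utf8mb4
```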
A caveat, however! While this WORKS, this is NOT FINISHED. You’ll also observe this line in the configuration, tying the pod to one node of the cluster:
```yaml
nodeName: princess-celestia
```
This is for two reasons. Mainly, it’s because just creating the MySQL container isn’t sufficient: per here, it needs a user set up for Home Assistant to access it (“ha@127.0.0.1”), and the actual recorder database (“ha_recorder”) to be created. At the moment I’ve done this manually, using the mysql tool within the container; in the longer run, I intend to customize the MySQL container to do this automatically if needed, but I haven’t done that yet, hence the pin to prevent the pod from moving to another node and requiring that the setup be redone. More details on that when I’ve done it.
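For the record, that manual setup amounts to something like the following, run via the mysql tool inside the container (this is my reconstruction of the steps, not an exact transcript):

```sql
-- Run inside the mysql container, e.g.:
--   kubectl -n homeassistant exec -it <pod> -c mysql -- mysql -u root -p
-- Create the recorder database and a user Home Assistant can use.
CREATE DATABASE ha_recorder CHARACTER SET utf8mb4;
CREATE USER 'ha'@'127.0.0.1' IDENTIFIED BY 'YOUR-PASSWORD-HERE';
GRANT ALL PRIVILEGES ON ha_recorder.* TO 'ha'@'127.0.0.1';
FLUSH PRIVILEGES;
```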
[Note: the user needs to be @127.0.0.1 because, by default, connecting to a MySQL server on localhost (with a user @localhost) will attempt to connect using a Unix socket, which obviously won’t work because Home Assistant and MySQL are running in separate containers, even though they’re in the same pod. You need to connect using a TCP socket via the (pod) localhost interface instead, hence that user, and the following connection string:

```
mysql://ha:YOUR-PASSWORD-HERE@127.0.0.1/ha_recorder?charset=utf8mb4
```
]
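As an aside, one route that might make customizing the image unnecessary (I haven’t tested it in this setup, but it is documented behaviour of the official mysql image) is to let the image’s entrypoint create the database and user itself on first run, via extra env entries on the mysql container:

```yaml
# Hypothetical additions to the mysql container's env block. The official
# mysql image's entrypoint creates this database and user (as 'ha'@'%',
# which also matches connections from 127.0.0.1) when it initializes an
# empty data directory; it does nothing on an already-populated one.
- name: MYSQL_DATABASE
  value: ha_recorder
- name: MYSQL_USER
  value: ha
- name: MYSQL_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-recorder-pass
      key: password
```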
To a lesser extent, it’s also because, with the recorder database being node-specific in this configuration, if the Home Assistant pod fails and restarts on another node, it will do so with a different recorder database, without the latest data in it. (This issue is somewhat limited in scope and should only arise on failover, but it’s still an issue.)
The other thing to note here is that I’ve put both the internal and external hostnames into the Ingress configuration rather than having the external reverse proxy rewrite the Host header. That’s just a minor change that lets me clean up the proxy configuration somewhat, so no biggie here.
Moving on! The first of the minor things changed since my last configuration is that I got around to moving the MQTT server into the cluster, too. Its manifest looks like this:
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mosquitto-configmap
  annotations:
    reloader.stakater.com/auto: "true"
  labels:
    app.kubernetes.io/name: mosquitto
data:
  mosquitto.conf: |
    listener 1883
    allow_anonymous true
    persistence true
    persistence_location /mosquitto/data
    autosave_interval 1800
---
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
  labels:
    app.kubernetes.io/name: mosquitto
spec:
  type: ClusterIP
  ports:
    - port: 1883
      targetPort: mqtt
      protocol: TCP
      name: mqtt
  selector:
    app.kubernetes.io/name: mosquitto
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
  labels:
    app.kubernetes.io/name: mosquitto
spec:
  revisionHistoryLimit: 3
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: mosquitto
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mosquitto
    spec:
      containers:
        - name: mosquitto
          image: "eclipse-mosquitto:2.0.12"
          imagePullPolicy: IfNotPresent
          ports:
            - name: mqtt
              containerPort: 1883
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /mosquitto/data
            - name: mosquitto-config
              mountPath: /mosquitto/config/mosquitto.conf
              subPath: mosquitto.conf
          livenessProbe:
            tcpSocket:
              port: 1883
            initialDelaySeconds: 0
            failureThreshold: 3
            timeoutSeconds: 1
            periodSeconds: 60
          readinessProbe:
            tcpSocket:
              port: 1883
            initialDelaySeconds: 0
            failureThreshold: 3
            timeoutSeconds: 1
            periodSeconds: 60
          startupProbe:
            tcpSocket:
              port: 1883
            initialDelaySeconds: 0
            failureThreshold: 30
            timeoutSeconds: 1
            periodSeconds: 5
      volumes:
        - name: data
          nfs:
            server: mnemosyne.arkane-systems.lan
            path: "/swarm/harmony/mosquitto"
        - name: mosquitto-config
          configMap:
            name: mosquitto-configmap
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mosquitto-ingress-tcp
  labels:
    app.kubernetes.io/name: mosquitto
spec:
  entryPoints:
    - mqtt
  routes:
    - match: HostSNI(`*`)
      services:
        - name: mosquitto
          kind: Service
          port: 1883
```
Not many surprises there, I feel. Quick notes:
If you noticed that this isn’t in the “homeassistant” namespace, you’re right. That’s just because my MQTT server is older than my Home Assistant install and is also used for other, unrelated things. If it were dedicated to Home Assistant, it would live in that namespace. So, no special technical reason, to be clear.
I use Stakater Reloader to automatically restart the pod if the config file changes.
And as you can see, access to Mosquitto goes through my Traefik ingress controller. Not included in this gist: the addition to the Traefik configuration that creates an “MQTT” entry point on host port 1883, but obviously you need that.
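For completeness, the shape of that addition, in Traefik static-configuration terms, is roughly the following (how you feed it to Traefik, whether Helm values, CLI flags, or a traefik.yml, depends on your deployment; this sketch is not my actual config):

```yaml
# Traefik static configuration fragment: define the "mqtt" entry point
# that the IngressRouteTCP above references, listening on port 1883.
entryPoints:
  mqtt:
    address: ":1883"
```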
And finally, one tweak to my Ring-MQTT config:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ring-mqtt
  name: ring-mqtt
  namespace: homeassistant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ring-mqtt
  template:
    metadata:
      labels:
        app: ring-mqtt
    spec:
      volumes:
        - name: ring-config
          nfs:
            server: mnemosyne.arkane-systems.lan
            path: "/swarm/harmony/homeassistant/ring-mqtt"
      containers:
        - image: tsightler/ring-mqtt:4.8.3
          name: ring-mqtt
          env:
            - name: "MQTTHOST"
              value: "mosquitto.default.svc.cluster.local"
            - name: "ENABLECAMERAS"
              value: "true"
            - name: "SNAPSHOTMODE"
              value: "all"
            - name: "ENABLEPANIC"
              value: "true"
            - name: "DISARMCODE"
              value: "<REDACTED>"
            - name: "DEBUG"
              value: "ring-mqtt"
          volumeMounts:
            - mountPath: "/data"
              name: ring-config
          ports:
            - containerPort: 8554
              name: rtsp
---
apiVersion: v1
kind: Service
metadata:
  name: ring-mqtt
  namespace: homeassistant
spec:
  selector:
    app: ring-mqtt
  ports:
    - protocol: TCP
      port: 8554
      name: rtsp
```
The only change here is that I’m now using the camera functionality built into recent versions of Ring-MQTT, rather than Home Assistant’s built-in Ring integration, to manage my cameras; hence the addition of the RTSP port and service, which allow convenient access to the camera streams.
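If you wanted to consume one of those streams by hand rather than via MQTT discovery, it would look something like this (a hypothetical sketch: the service DNS name follows from the Service above, while the path format is ring-mqtt’s per-device convention, so check the stream source your discovered entities actually report):

```yaml
# Hypothetical configuration.yaml fragment: a generic camera pointed at
# a ring-mqtt RTSP stream via the in-cluster service. <device-id> is a
# placeholder for the actual Ring device identifier.
camera:
  - platform: generic
    stream_source: rtsp://ring-mqtt.homeassistant.svc.cluster.local:8554/<device-id>_live
```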
And that should be it for this update. Happy Homekubing!