So, since last year, I’ve had to rebuild most of my network, thanks to the FBI and their unparalleled talents for breaking things. (Details at the link, and we got the stuff back last month, mostly in some sort of working order; that’s not really what this post is about.)
Prior to those events, I had shared access to our Home Assistant (primarily for its API, to enable our Alexa skill to work with it, as described here and here) using a reverse-proxy server (with a free certificate provided by Let’s Encrypt) and access to that via our border router. Which worked fine; it’s the typical way to set this up.
But it was late, I’d been frantically bashing things back into shape using newly acquired hardware and restored backups - and restoring all our domain infrastructure from scratch - and getting all those pieces back into place seemed like a lot of work. So how could I, well, not?
Enter Tailscale, which we’d been using for a while anyway as a replacement for our older point-to-point WireGuard VPN tunnels. Since our VPN graph was starting to look a little mesh-y, it only made sense to replace it with a software-defined network that would offer us that same meshness, only without the extremely tedious manual configuration. (If you aren’t familiar with it, you can read more about how Tailscale works here.)
So, let me give you the brief introduction to our tailnet. The “main” node of it, if such is really meaningful in a tailnet, is our border router, which is based on Raspberry Pi hardware and runs OpenWRT. While there isn’t an OpenWRT-specific Tailscale package (hint hint), downloading the static Linux binary for arm64 and using it with a procd script works just fine. That one serves as a subnet router for all the NASes, servers, and miscellaneous devices on our network that can’t run Tailscale themselves, and as an optional exit node.
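For the curious, getting the static tailscaled binary running under procd looks roughly like this - a minimal sketch, assuming the binary was dropped into /usr/sbin and state lives under /etc/tailscale (your paths may differ); you’d still run `tailscale up` once by hand to authenticate:

```shell
#!/bin/sh /etc/rc.common
# /etc/init.d/tailscale - minimal procd wrapper for a static tailscaled.
# Paths here are illustrative assumptions, not necessarily what we used.

USE_PROCD=1
START=80

start_service() {
    procd_open_instance
    # Keep node state somewhere that survives firmware sysupgrades.
    procd_set_param command /usr/sbin/tailscaled \
        --state=/etc/tailscale/tailscaled.state \
        --port=41641
    # Restart tailscaled if it dies.
    procd_set_param respawn
    procd_close_instance
}
```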
We started out with just having our mobile devices (tablets, phones, etc.) and cloud servers on the tailnet, but had started adding Tailscale to our desktops for convenience, and in the process of network rebuilding decided to make that policy, so all our desktops are now directly on the tailnet too. There are also a couple of containers that make use of Tailscale’s authentication services, and another subnet router which connects to the network at my mother-in-law’s house so she can access some of our services (like Plex). We use the built-in Tailscale ACLs to limit what our cloud servers and my MIL’s network can access to only necessary ports.
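To give a flavor of what that port-limiting looks like: a tailnet policy file is HuJSON (JSON with comments and trailing commas), and restricting a tagged set of nodes to specific ports is a few lines. The tag name, subnet, and ports below are made up for illustration, not our actual policy:

```jsonc
{
  "tagOwners": {
    "tag:cloud": ["autogroup:admin"],
  },
  "acls": [
    // Our own devices can reach everything.
    {"action": "accept", "src": ["autogroup:member"], "dst": ["*:*"]},
    // Cloud servers only get the ports they actually need on the home subnet.
    {"action": "accept", "src": ["tag:cloud"], "dst": ["192.168.1.0/24:443,8123"]},
  ],
}
```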
So how did we use all this to make Home Assistant available to Alexa?
The first part of this was how to get the Home Assistant API on the tailnet. Your first reaction might be “what about the Home Assistant Tailscale add-on?” And, y’know, that’s fine for normies who dedicate a whole machine to Hass.io (and for us, a Pi won’t do; we have, I believe, one of the more intense Home Assistant installations out there) or some other approved method of running HA, but since what we have for running our hosted applications is a Kubernetes cluster, by Jove, we’re going to run Home Assistant on that, too. Lack of official support notwithstanding.
Your second reaction might be “well, given the tailnet layout you explained above, can’t you just use its internal-network IP?”, and we probably could have, and it would have worked fine.
But on the other hand, that would have been a little inelegant, since it would have meant involving the Traefik ingress controller we use on the k8s cluster, and while that wouldn’t have caused any problems, it wouldn’t have added any value, either. And besides, the latest Tailscale news had announced this:
The Tailscale Kubernetes operator!
Which lets you, handily enough, expose services from your k8s cluster directly to your tailnet. So, having followed the steps at that link for a Helm deployment - creating the ACL tags and OAuth client, and then rolling out the k8s operator, we had the option to do that.
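The Helm rollout itself is short - something like the following, per the operator docs, once you’ve created the OAuth client (check the chart’s values for the exact key names; the namespace and release name here are just the documented defaults):

```shell
helm repo add tailscale https://pkgs.tailscale.com/helmcharts
helm repo update
helm upgrade --install tailscale-operator tailscale/tailscale-operator \
  --namespace=tailscale \
  --create-namespace \
  --set-string oauth.clientId="<your-oauth-client-id>" \
  --set-string oauth.clientSecret="<your-oauth-client-secret>" \
  --wait
```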
Meanwhile, the second part of this was how to expose the Home Assistant API from the tailnet to Alexa.
My first thought for this was to build a container for the AWS Lambda functions that the Alexa skill uses to talk to the outside world, and make that container part of our tailnet, such that there would be no need for Alexa traffic to travel over the public Internet at any point.
Unfortunately - well, it should be possible, but I was in a hurry, and while it is not hard to get an (ephemeral) container to connect to one’s tailnet, I ran into a problem with machine entries being left behind by a series of “dead” containers and cluttering up my admin console.
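(For reference, joining a container to a tailnet is about this involved, using the official image - the key being an ephemeral auth key, which is supposed to let the node clean itself up after it goes offline; the container name is obviously a placeholder:

```shell
docker run -d --name=lambda-proxy \
  --env TS_AUTHKEY="tskey-auth-...(an ephemeral key)" \
  --env TS_STATE_DIR=/var/lib/tailscale \
  --volume /dev/net/tun:/dev/net/tun \
  --cap-add=NET_ADMIN --cap-add=NET_RAW \
  tailscale/tailscale
```

As noted below, the self-cleanup part didn’t behave for me in practice.)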
I also use tailnet lock - because I’m a mite twitchy about my security, and, security procedures aside, having a bunch of my hardware in the hands of the Feds for a few months did not make me any less twitchy about it - and I’m not prepared to turn it off. Tailnet lock is great, but it did add an extra frisson of complexity to the whole thing.
Fortunately, while it does still involve the public Internet, another feature, Tailscale Funnel, came right to the rescue. Funnel allows you to publish a service from your tailnet to the Internet using a TCP proxy - running on a Tailscale relay server - to get the traffic to your specified node. This, too, requires a little setup, described at that link, to enable HTTPS certificates for your tailnet and to add an attribute to your tailnet policy file, all done through the admin console GUI.
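The policy-file part is a single nodeAttrs stanza, straight from the Funnel docs - note that this target grants the attribute to every member node, which you may want to narrow:

```jsonc
{
  "nodeAttrs": [
    {
      "target": ["autogroup:member"],
      "attr": ["funnel"],
    },
  ],
}
```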
And with both Funnel and the k8s operator set up, you can do the rest in one simple Ingress on your k8s cluster:
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: homeassistant-alexa-ingress
  namespace: homeassistant
  annotations:
    tailscale.com/funnel: "true"
spec:
  ingressClassName: tailscale
  rules:
    - http:
        paths:
          - path: /api/alexa
            pathType: Prefix
            backend:
              service:
                name: homeassistant
                port:
                  number: 8123
          - path: /auth/authorize
            pathType: Prefix
            backend:
              service:
                name: homeassistant
                port:
                  number: 8123
          - path: /auth/token
            pathType: Prefix
            backend:
              service:
                name: homeassistant
                port:
                  number: 8123
  tls:
    - hosts:
        - ha-alexa
```
Shortly after kubectl apply-ing this, you will see a new node (“ha-alexa”, which you specify in the file under spec.tls.hosts) appear on your tailnet, and if you access it at its full tailnet domain name (say, ha-alexa.foo-bar.ts.net), that traffic will be passed on to your Home Assistant pod in k8s.
(Or, at least, some of it will. I deliberately limited the accessible paths to /api/alexa and the /auth paths needed for account linking in the file above, because why present more surface area than you have to?)
So now you can set up your Lambda functions just as per these articles (Smart Home / Custom), point the BASE_URL to https://ha-alexa.foo-bar.ts.net or whatever you called it, and Bob’s your uncle!