After receiving some data points indicating that other people on those versions have not been experiencing this, and doing some digging with Wireshark, I can report that it was not, strictly speaking, the iOS updates that were responsible for this. It was a little weirder than that.
Looking at packets did reveal the iPhones making DNS-over-HTTPS queries to places they weren't supposed to, and digging further revealed that somehow - and figuring out how is going to occupy me for some considerable time, I suspect - someone slipped them a mickey.
Somehow they both ended up with a device configuration profile installed that was not supposed to be there. I have no idea how that managed to end up installed and activated without the usual manual permission prompts drawing the attempt to do so to our attention, and yet there it was, in our base pwning our DNS.
A couple of erase-and-resets later, and normal operation has been restored. To the iPhones, anyway. (The less said about my paranoia the better.)
Not, you understand, that I would complain if someone has all the details of how to make iOS play nice again to pull out of their pocket before I find time to figure it out myself.
So, having recently updated to iOS 18.4.1 and/or 18.5 (yes, the beta), I note that ads and other such things are no longer blocked on iPhones hereabouts, which ordinarily Pi-hole takes care of nicely.
This is definitely not a Pi-hole issue, insofar as it continues to work just fine for every other device on the network: evidently Apple have figured out some ingenious way to bypass local DNS servers. (Even though it claims to be using said DNS servers - which I suppose is true for local addresses, while lying through its damn teeth otherwise.) Until a means is found to block this new perversion, y’all may want to upgrade cautiously to 18.4.1 and beyond.
Damn their sleek and shiny hides.
(I also note that the iPhones not playing ball has changed my average percentage queries blocked from in the 13-14% range down to 0.7% blocked. Don't you just love 'em?)
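For anyone who wants to check whether a given device is doing an end-run around their resolver, something along these lines will show DNS-over-TLS and traffic to the usual public resolvers. The interface, the phone's address, and the resolver IPs are all illustrative, and DoH to less obvious endpoints will still take Wireshark and patience to spot:

# Watch one suspect device for DNS that isn't going to the Pi-hole (sketch; adjust addresses to your network).
sudo tcpdump -ni eth0 'src host 192.168.0.42 and (port 853 or (port 443 and (host 8.8.8.8 or host 1.1.1.1 or host 9.9.9.9)))'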
My Setup, For Those Bothered To Repro
Very standard IPv4/IPv6 setup: the Pi-hole forwards upstream to OpenDNS, and the local DNS servers forward all non-local queries to the Pi-hole. Everything else on the network goes via those local servers; only those servers query the Pi-hole directly, and they never make recursive queries of their own.
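For concreteness, on a dnsmasq-flavoured local server the forwarding half of that looks roughly like the below; the Pi-hole address and local domain are made up, so adjust to taste:

# Illustrative dnsmasq forwarder config - addresses and domain are placeholders.
no-resolv               # ignore any upstreams in /etc/resolv.conf
local=/lan/             # answer the local zone ourselves, never forward it
server=192.168.0.53     # hand everything else to the Pi-hole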
Versions:
Core Version is v6.0.6-66-g3cbaee7 (Latest: null) Branch is development Hash is 3cbaee7b (Latest: 3cbaee7b)
Web Version is v6.1-60-ge851470e (Latest: null) Branch is development Hash is e851470e (Latest: e851470e)
FTL Version is vDev-f0966a9 (Latest: null) Branch is development
What I want is a mail server that is, by design, a moron. Specifically, one for small self-hosted home or office networks which contain services that:
want to use e-mail
will never communicate with anyone on the internet
should not ever communicate with anyone on the internet
i.e., a very common requirement for people who want to run services for family and friends but have absolutely no use whatsoever for making those services available for anyone else.
This should be very simple. Ideally, it should be like this:
Understands one (non-public) mail domain. Rejects all e-mail going anywhere else.
Talks to a configurable number of local networks (for me, the local LAN, the tailnet, and a couple of minor local subnets). Rejects all packets coming from or going to anywhere else.
Has configurable mailboxes (not a catchall). Bounces all mail that does not go to a configured mailbox on its one mail domain.
Makes said mailboxes available through POP3. IMAP if you want to get fancy. No web interface needed or wanted.
(I once thought, in a peak of bizarre enthusiasm, that instead of/alongside mailboxes a feature that would rewrite both envelope and header destination addresses from foo@internal.lan to foo@external.net and then forward the mail to a configurable external e-mail server would be desirable, but that’s probably way too fancy to be in scope for this project.)
And that is it!
It is, like I say, a moron. That’s because mail servers are complex beasties which require great sophistication to function out in the wilds of the Internet, and are consequently a giant pain in the ass to administer securely, to the point that the advice given these days to people who want to run their own mail servers can be summed up as “Don’t”.
(I used to. I probably wouldn’t any more.)
Which is part of the problem when what one wants is a simple idiot mail handler to deal with mail from cron jobs and self-hosted servers that just want you to confirm your password or to notify you that jobs have completed. Because while virtually any mail server can be set up to meet the above description, doing so is neither simple nor quick nor, given that complexity, necessarily all that robust.
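For the record, a rough Postfix approximation of the spec above looks something like the below. The addresses, networks, and mailbox map are placeholders, POP3/IMAP would still need Dovecot or similar on top, and even this abbreviated sketch hints at how much machinery is involved:

# Sketch only: an internal-only Postfix, with made-up addresses and networks.
sudo postconf -e 'inet_interfaces = 192.168.0.25'                                   # listen on the LAN interface only
sudo postconf -e 'mydestination = internal.lan'                                     # the one mail domain
sudo postconf -e 'mynetworks = 192.168.0.0/24 100.64.0.0/10'                        # LAN plus tailnet
sudo postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, reject'         # nobody else gets to talk to it
sudo postconf -e 'local_recipient_maps = $alias_maps hash:/etc/postfix/mailboxes'   # configured mailboxes, no catchall
sudo postconf -e 'default_transport = error:external mail not permitted'            # never send anything outward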
So how about it, folks? Anyone seen a good candidate for MoronMail out there? Or inspired to write one? Don’t make me beg.
Specifically, there’s been this intentional breaking change here to disable snapd-apparmor outright on WSL regardless of version, on the grounds that the Microsoft default kernel does not include LSM support and the Microsoft-supplied init doesn’t enable AppArmor stacking.
Rather unfortunate, in my opinion, after all the work various of us, myself included, put in to make both AppArmor and snapd work under WSL2 (with, of course, an AppArmor-enabled custom kernel and a cautionary note regarding cross-distro leaking). It’s not the decision I’d have made, inasmuch as snapd detected and responded to the presence of AppArmor correctly on WSL as it would on any other platform, and anyone compiling such a custom kernel and using it under WSL assuredly knows what they’re doing and that certain customizations and configurations will leak from distro to distro.
But whatever: I’m not on the snapd team and those who are may configure it as they see fit. I’m just a kibitzer, albeit a kibitzer who’s just a little salty that a breaking change like this didn’t make it into the release notes, when it would have been real nice if apt upgrade had warned me ‘bout this aforetime.
That being said, when things break, I fix. No elegant fix this time: I just cloned a copy of the snapd repo:
git clone https://github.com/snapcore/snapd.git
cd snapd
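(The manual bit itself is roughly the following - a sketch, not gospel: the check being reverted is the one from the commit linked above, which at the time of writing lives under cmd/snapd-apparmor, and the install path assumes the stock Ubuntu packaging layout.)

git revert <offending-commit>                      # or hand-edit the WSL check back out
go build -o snapd-apparmor ./cmd/snapd-apparmor
sudo install -m 0755 snapd-apparmor /usr/lib/snapd/snapd-apparmor
sudo systemctl restart snapd.apparmor.service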
(And, of course, held future updates of snapd so I can be sure to do this manual bit at the same time.)
sudo apt-mark hold snapd
And that’s how you fix your machine that just broke.
(I could, I suppose, include a copy of my patched file here, but I’m not going to encourage you to download executable files promising fixes from random people on the internet, especially as I just had someone try to phish me that way on GitHub, of all places. Y’all know how to herd a compiler. Go to.)
Those of you who have been using the patch I gave you some time ago to let you build apt packages for WSL Linux kernels (bypassing the problem with the version string - for details, see here) will have noticed that, due to changes in the upstream kernel, it doesn’t work any more - as I found out myself today, when building myself a 6.10 WSL kernel.
Anyway, the patch has actually got much simpler in recent versions, requiring just a one-line change in scripts/package/mkdebian:
commit eaaf50fb25f93d438d4a60b1bb581e889ff3287c
Author: Alistair Young <avatar@arkane-systems.net>
Date: Thu Jul 18 17:31:46 2024 -0500
Adjusted mkdebian for WSL packages.
diff --git a/scripts/package/mkdebian b/scripts/package/mkdebian
index 070149c985fe..2b3c7055e4a4 100755
--- a/scripts/package/mkdebian
+++ b/scripts/package/mkdebian
@@ -146,7 +146,8 @@ if [ "$1" = --need-source ]; then
fi
# Some variables and settings used throughout the script
-version=$KERNELRELEASE
+version=$(echo $KERNELRELEASE | tr '[:upper:]' '[:lower:]')
+
if [ -n "$KDEB_PKGVERSION" ]; then
packageversion=$KDEB_PKGVERSION
else
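Usage is the usual kernel-packaging dance. The patch file name below is illustrative (it’s just the commit above exported with git format-patch), and the lowercasing matters because Debian package names can’t contain uppercase characters, which the stock -microsoft-standard-WSL2 release suffix otherwise drags in:

# Apply the change and build the .debs as usual (sketch; patch file name illustrative).
git am 0001-Adjusted-mkdebian-for-WSL-packages.patch
make -j"$(nproc)" bindeb-pkg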
Sure would be nice if we could get this into upstream.
So, since last year, I’ve had to rebuild most of my network, thanks to the FBI and their unparalleled talents for breaking things. (Details at the link, and we got the stuff back last month, mostly in some sort of working order; that’s not really what this post is about.)
Previous to those events, I had shared access to our Home Assistant (primarily for its API, to enable our Alexa skill to work with it, as described here and here) using a reverse-proxy server with a free certificate provided by Let’s Encrypt, and access to that via our border router. Which worked fine; it’s the typical way to set this up.
But it was late, I’d been frantically bashing things back into shape using newly acquired hardware and restored backups - and restoring all our domain infrastructure from scratch - and getting all those pieces back into place seemed like a lot of work. So how could I, well, not?
Enter Tailscale, which we’d been using for a while anyway as a replacement for our older point-to-point WireGuard VPN tunnels. Since our VPN graph was starting to look a little mesh-y, it only made sense to replace it with a software-defined network that would offer us that same meshness, only without the extremely tedious manual configuration. (If you aren’t familiar with it, you can read more about how Tailscale works here.)
So, let me give you the brief introduction to our tailnet. The “main” node of it, if such is really meaningful in a tailnet, is our border router, which is based on Raspberry Pi hardware and runs OpenWRT. While there isn’t an OpenWRT-specific Tailscale package (hint hint), downloading the static Linux binary for arm64 and using it with a procd script works just fine. That one serves as a subnet router for all the NASes, servers, and miscellaneous devices on our network that can’t run Tailscale themselves, and as an optional exit node.
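For the curious, the procd wrapper is nothing special; something along these lines does the job (paths are illustrative), followed by a one-time tailscale up with --advertise-routes and --advertise-exit-node to handle the subnet-router and exit-node parts:

#!/bin/sh /etc/rc.common
# /etc/init.d/tailscale - minimal procd wrapper for the static tailscaled binary (sketch)
USE_PROCD=1
START=80

start_service() {
    procd_open_instance
    procd_set_param command /usr/sbin/tailscaled --state=/etc/tailscale/tailscaled.state
    procd_set_param respawn
    procd_close_instance
}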
We started out with just having our mobile devices (tablets, phones, etc.) and cloud servers on the tailnet, but had started adding Tailscale to our desktops for convenience, and in the process of network rebuilding decided to make that policy, so all our desktops are now directly on the tailnet too. There are also a couple of containers that make use of Tailscale’s authentication services, and another subnet router which connects to the network at my mother-in-law’s house so she can access some of our services (like Plex). We use the built-in Tailscale ACLs to limit what our cloud servers and my MIL’s network can access to only necessary ports.
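The ACL side of that is only a few lines in the tailnet policy file; schematically, and with the tags, addresses, and ports made up, it’s entries of this shape:

// Sketch of the relevant ACL entries - tags, hosts, and ports here are illustrative.
"acls": [
  { "action": "accept", "src": ["tag:cloud"],  "dst": ["tag:infra:443"] },
  { "action": "accept", "src": ["tag:inlaws"], "dst": ["192.168.0.40:32400"] }
]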
So how did we use all this to make Home Assistant available to Alexa?
The first part of this was how to get the Home Assistant API on the tailnet. Your first reaction might be “the Home Assistant Tailscale addon”? And, y’know, that’s fine for normies who dedicate a whole machine to Hass.io (and for us, a Pi won’t do; we have, I believe, one of the more intense Home Assistant installations out there) or some other approved method of running HA, but since what we have for running our hosted applications is a Kubernetes cluster, by Jove, we’re going to run Home Assistant on that, too. Lack of official support notwithstanding.
Your second reaction might be “well, given the tailnet layout you explained above, can’t you just use its internal-network IP?”, and we probably could have, and it would have worked fine.
But on the other hand, that would have been a little inelegant, since it would have meant involving the Traefik ingress controller we use on the k8s cluster, and while that wouldn’t have caused any problems, it wouldn’t have added any value, either. And besides, the latest Tailscale news had announced the Tailscale Kubernetes operator, which lets you, handily enough, expose services from your k8s cluster directly to your tailnet. So, having followed the steps in its documentation for a Helm deployment - creating the ACL tags and OAuth client, and then rolling out the operator - we had the option to do that.
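(The operator rollout itself, for reference, is more or less Tailscale’s documented Helm incantation, with the OAuth values being the ones from the client you just created:)

# Sketch of the operator install per Tailscale's Helm instructions; substitute your own OAuth client values.
helm repo add tailscale https://pkgs.tailscale.com/helmcharts
helm repo update
helm upgrade --install tailscale-operator tailscale/tailscale-operator \
  --namespace=tailscale --create-namespace \
  --set-string oauth.clientId="<id>" \
  --set-string oauth.clientSecret="<secret>" \
  --wait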
Meanwhile, the second part of this was how to expose the Home Assistant API from the tailnet to Alexa.
My first thought for this was to build a container for the AWS Lambda functions that the Alexa skill uses to talk to the outside world, and make that container part of our tailnet, such that there would be no need for Alexa traffic to travel over the public Internet at any point.
Unfortunately - well, it should be possible, but I was in a hurry, and while it is not hard to get an (ephemeral) container to connect to one’s tailnet, I had a problem with machine entries being left behind by a series of “dead” containers and cluttering up my admin console.
I also use tailnet lock, because I’m a mite twitchy about my security - and, security procedures aside, having a bunch of my hardware in the hands of the Feds for a few months did not make me any less twitchy about security - which I’m not prepared to turn off. While it’s great, it did add an extra frisson of complexity to the whole thing.
Fortunately, while it does still involve the public Internet, another feature, Tailscale Funnel, came right to the rescue. Funnel allows you to publish a service from your tailnet to the Internet using a TCP proxy - running on a Tailscale relay server - to get the traffic to your specified node. This, too, requires a little setup, described at that link, to enable HTTPS certificates for your tailnet and to add an attribute to your tailnet policy file, all done through one GUI.
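The policy-file half of that is just a node attribute granting permission to use Funnel; something of this shape (the target here is illustrative - scope it as narrowly as you like):

// Allow nodes to use Funnel (sketch; narrow the target to specific tags if preferred).
"nodeAttrs": [
  { "target": ["autogroup:member"], "attr": ["funnel"] }
]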
And with both Funnel and the k8s operator set up, you can do the rest in one simple Ingress on your k8s cluster:
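Something along these lines; the backend service name and port are assumptions, so check them against your own deployment:

# Sketch of an Ingress along the lines described below; backend details are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ha-alexa
  annotations:
    tailscale.com/funnel: "true"        # publish via Funnel as well as to the tailnet
spec:
  ingressClassName: tailscale
  tls:
    - hosts:
        - ha-alexa                      # becomes ha-alexa.<your-tailnet>.ts.net
  rules:
    - http:
        paths:
          - path: /api/alexa
            pathType: Prefix
            backend:
              service:
                name: home-assistant    # placeholder service name
                port:
                  number: 8123
          - path: /auth
            pathType: Prefix
            backend:
              service:
                name: home-assistant
                port:
                  number: 8123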
Shortly after kubectl apply-ing this, you will see a new node (“ha-alexa”, which you specify in the file under spec.tls.hosts) appear on your tailnet, and if you access it at your tailnet’s full domain (say: ha-alexa.foo-bar.ts.net) that traffic will be passed on to your Home Assistant pod in k8s.
(Or, at least, some of it will. I deliberately limited the accessible paths to /api/alexa and the /auth paths needed for account linking in the file above, because why present more surface area than you have to?)
So now you can set up your lambda functions just as per these articles (Smart Home / Custom), point the BASE_URL to https://ha-alexa.foo-bar.ts.net or whatever you called it, and Bob’s your uncle!