I don't understand why people are so negative about IPv6. I have done essentially zero home networking work and I just ran this successfully. It just works!
```
> ping6 google.com
PING6(56=40+8+8 bytes) 2605:59c0:236f:3a08:7883:9d04:c26d:5fa1 --> 2607:f8b0:4005:806::200e
16 bytes from 2607:f8b0:4005:806::200e, icmp_seq=0 hlim=117 time=22.262 ms
16 bytes from 2607:f8b0:4005:806::200e, icmp_seq=1 hlim=117 time=26.124 ms
16 bytes from 2607:f8b0:4005:806::200e, icmp_seq=2 hlim=117 time=26.807 ms
^C
--- google.com ping6 statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 22.262/25.064/26.807/2.001 ms
```
> I don't understand why people are so negative about IPv6. [...] It just works!
Networking is a lot more than being able to ping a single host.
As a concrete counter-example, IPv6 routinely broke for me when I was using pfSense as a router. Why? Because pfSense, with no way of disabling this behavior, published its public IP as the DNS server for internal clients.
So each time I got a new prefix from my ISP, which happens about once a week or more often, machines stopped being able to perform DNS lookups for hours or until I rebooted them.
And, if I had bothered configuring IPv6 firewall rules, those would have had to be reconfigured manually with the new prefix. I understand this is mostly fixed in pfSense recently, but this was the case for many, many years.
Another counter-example is that Android only supports SLAAC, and SLAAC only provides a few key infrastructure details like the router and DNS servers. If you want to tell an Android client something else, like an NTP server, you're outta luck. Also, if Android successfully gets an IPv6 address via SLAAC, it requires the DNS server IP to also be an IPv6 address, so your internal DNS server must then also serve on IPv6. If it doesn't, Android just silently uses Google's own DNS servers, breaking any local configuration you had.
Another point is that a lot of us tried using IPv6 decades ago, and so we still have scars from that time. IPv6 today is a lot better, but I still have a lot of IPv6 frustration associated with it from 15-20 years ago.
> And, if I had bothered configuring IPv6 firewall rules, those would have had to be reconfigured manually with the new prefix. I understand this is mostly fixed in pfSense recently, but this was the case for many, many years.
Why would you have to reconfigure your firewall rules when you're getting a new IPv6 prefix?
My consumer router uses iptables under the hood, so it accepts a mask in the firewall rule (so e.g. I can do ::0123:4567:89ab:cdef/::ff:ffff:ffff:ffff:ffff as a target, and when my /56 changes, the rules Just Work™)
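The suffix-mask trick above can be sketched in a few lines. This is just an illustration of the bit-masking that ip6tables does with an address/mask pair; the suffix and test addresses below are made up:

```python
import ipaddress

# The comment's example: match on the interface-identifier suffix only,
# ignoring whatever routing prefix the ISP has delegated this week.
SUFFIX = int(ipaddress.IPv6Address("::0123:4567:89ab:cdef"))
MASK = int(ipaddress.IPv6Address("::ff:ffff:ffff:ffff:ffff"))

def matches(addr: str) -> bool:
    """True if addr ends in the fixed suffix, whatever the prefix is."""
    return int(ipaddress.IPv6Address(addr)) & MASK == SUFFIX & MASK
```

Because only the low bits are compared, the rule keeps matching the same host after the delegated /56 changes.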
But I think it further strengthens my case: software support for IPv6 has been quite spotty over the years, which, combined with the less-than-ideal deployments out there, has made things frustrating for many users over the past couple of decades.
Please don't put words in my mouth. I did not say "Because pfSense, does really bad things."
How pfSense works would be fairly reasonable if every IPv6 deployment were as the original designers intended, i.e. with a static prefix.
It's just that the way IPv6 ended up getting deployed in practice was often not aligned with that original vision. And that has been a large source of IPv6 frustration.
There's a few things here that are a bit iffy tbh!
I can't see why an ISP is dynamically changing the IPv6 addressing for a client, but if that's what is going on, then v6 NPT is your friend (RFC6296 - https://datatracker.ietf.org/doc/html/rfc6296).
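A rough sketch of what NPTv6 does, for anyone unfamiliar: rewrite the internal prefix to the external one on the way out and back again on the way in. The prefixes below are made up, and real RFC 6296 translation is also checksum-neutral, which this toy version skips:

```python
import ipaddress

# Hypothetical prefixes: a stable internal ULA and the ISP-delegated one.
INTERNAL = ipaddress.IPv6Network("fd00:1234:5678::/48")
EXTERNAL = ipaddress.IPv6Network("2001:db8:abcd::/48")

def translate(addr: str) -> str:
    """Swap the internal prefix for the external one (and vice versa)."""
    a = ipaddress.IPv6Address(addr)
    host = int(a) & ((1 << (128 - INTERNAL.prefixlen)) - 1)
    if a in INTERNAL:
        return str(ipaddress.IPv6Address(int(EXTERNAL.network_address) | host))
    if a in EXTERNAL:
        return str(ipaddress.IPv6Address(int(INTERNAL.network_address) | host))
    return addr  # not ours to translate
```

The point is that internal hosts and firewall rules only ever see the stable ULA prefix; the translator absorbs the ISP's churn.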
But pfsense's behaviour is a bit iffy too, unless when you say 'public IP', you mean the IPv6 address being used on the pfsense facing the clients? (I'm assuming it's using DHCPv6 prefix delegation, and the delegation is being changed? And potentially the uplink subnet as well).
opnsense can use the delegated prefix for DHCPv6, it then automatically becomes the “LAN net” firewall alias and you can refer to it in a firewall rule I believe. I assume it’s the same for pfsense and I suspect they are not the only ones.
> unless when you say 'public IP', you mean the IPv6 address being used on the pfsense facing the clients?
Well, that's kinda the thing, pfSense seems to assume global means it's also the IP facing the local clients. I couldn't get pfSense to advertise its ULA as the DNS server for example. But if you have a static prefix, that's not a bad assumption. And a static prefix is what the IPv6 designers envisioned.
> I'm assuming it's using DHCPv6 prefix delegation, and the delegation is being changed?
ISP indeed uses DHCPv6 prefix delegation. The prefix I get can change "randomly". It always changes when my router or modem reboots, but other times too (perhaps when their equipment reboots).
I should note that after getting very frustrated with pfSense, I threw it away a few years ago and switched to OpenWRT which has worked much, much better when it comes to IPv6.
That's literally impossible, to hear some people tell it. "And also, look how hard it'd be to memorize that address", say the people who remember like 2 IPv4 addresses, one of them being 127.0.0.1.
Tailscale, perhaps ironically in this context, has shown me the value of not caring about an IP address.
I used to. When I had a home network I'd carefully assign `10.52.1.x` where `x` was the periodic number corresponding to the machine name! (I write from `lutetium`.)
Now, with Tailscale's magic DNS – `lutetium` being all I need – why on Earth would I give a crap about an IP address? I've gone from being obsessed to truly not caring at all.
So, give me IPv6. Auto-assign everything! All I want is a name.
Hah, I kinda love your naming and numbering convention!
But yeah. On my own LAN, everything is DHCP for IPv4 and SLAAC for v6. Everything uses mDNS and I connect to everything by name, not address. I can only remember the static IP of one of the servers; the rest are purely names.
It's akin to remembering phone numbers. Even 20 years ago I had like 10-20 of the most important ones memorized, despite some of them being used rarely, i.e. once a year. Nowadays I have 'me myself' in my Contacts because I can't remember my own number despite using it for 5+ years, nor do I care.
I remember like 10 different IPv4 addresses, 6 of which are DNS servers where each octet is a single number, 1 is my router, 1 is my home network switch, 1 is my home server and the last one localhost.
The main thing all those have in common is that they are either something I frequently use (all the mentioned local IPs) or just stupid easy to remember (the DNS servers), neither of which is possible for IPv6.
From memory, isn't localhost for IPv6 actually shorter than for IPv4? The answer is yes, it's ::1. I was thinking of the multicast and link-local address prefixes, which are ff00:: and fe80:: respectively.
"ten oh oh 7" (how I'd say it or remember it) still seems simpler than "eff dee dee dee colon colon 7". While with ipv4 the dots can be assumed for pauses, v6 doesn't put colons as often, also I could easily see myself forgetting the amount of "d"s. I don't wanna seem too anti-v6, though, I am in favor of everyone adopting the more modern thing.
edit: Well, you said easier to type. I guess I probably agree with that.
There is also the fact that an IPv6 address has a maximum and a minimum number of characters and separators, but not a fixed one, so the length of any given address is variable.
Instead of being able to run a groove in my head mentally, and read with any sort of rhythm, I have to read them like binary bytes. Every address feels like a foreign phone number where your normal rhythm doesn't fit, but it never gets better.
Perhaps, IMO, the greatest and only sin of IPv6. That and using fucking colons.
When people are managing 20 devices on a network, they access everything by IP address directly and struggle with constant DNS issues.
Introducing a more complex system without easing any of the cognitive load and making fun of it is just cruel at this moment.
Users need a simpler way to connect to their devices, and what tailscale did with magic dns shows that users don’t even care about IPv4 they just want to connect to their devices with something simple they can remember.
I have 68 devices on the line at this moment. I just checked. I remember exactly one of their IPs and that’s just one that stuck in my head. I never connect to it by address.
I agree with the sibling comment: crummy CPE is crummy CPE. This is a solvable problem, but people end up with junky routers and it causes them anguish.
Weirdly this might be a CPE problem, e.g. crappy ISP routers.
Put in something more interesting, e.g. OpenWRT, or there are proprietary options too, that provides simple & reliable local LAN DNS, then the problem just goes away.
Why would you be running sudo in production? A production environment should usually be set up properly with explicit roles and normal access control.
Sudo is kind of a UX tool for user sessions where the user fundamentally can do things that require admin/root privileges but they don't trust themselves not to fat finger things so we add some friction. That friction is not really a security layer, it's a UX layer against fat fingering.
I know there is more to sudo if you really go deep on it, but the above is what 99+% of users are doing with it. If you're using sudo as a sort of framework for building setuid-like tooling, then this does not apply to you.
> A production environment should usually be setup up properly with explicit roles and normal access control.
… and sudo is a common tool for doing that so you can do things like say members of this group can restart a specific service or trigger a task as a service user without otherwise giving them root.
Yes, there are many other ways to accomplish that goal but it seems odd to criticize a tool being used for its original purpose.
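For example, the "members of this group can restart a specific service" case is a one-line sudoers entry. The group and unit names here are made up:

```
# /etc/sudoers.d/web-restart -- hypothetical group and unit names
%webops ALL = (root) NOPASSWD: /usr/bin/systemctl restart nginx.service
```

Note the full path and exact arguments: anything looser (like a bare `systemctl`) effectively hands out root.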
PSA for anyone reading this, you should probably use polkit instead of sudo if you just want to grant systemd-related permissions, like restarting a service, to an unprivileged user.
It's roughly the same complexity (one drop-in file) to implement.
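As a sketch, a polkit rule granting a hypothetical `deploy` user the right to restart a single unit is one file in /etc/polkit-1/rules.d/ (the user and unit names are assumptions):

```javascript
// /etc/polkit-1/rules.d/10-deploy-restart.rules -- names are hypothetical
polkit.addRule(function (action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        action.lookup("unit") == "myapp.service" &&
        action.lookup("verb") == "restart" &&
        subject.user == "deploy") {
        return polkit.Result.YES;
    }
});
```

With that in place, `systemctl restart myapp.service` works for that user directly, no sudo prefix needed.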
I’d broaden that slightly to say you should try to have as few mechanisms for elevating privileges as possible: if you had tooling around sudo, dzdo, etc. for PAM, auditing, etc. I wouldn’t lightly add a third tool until you were confident that you had parity on that side.
Privilege escalation (superuser capabilities) and RBAC ought to be viewed differently, IMO.
There's a place for true superusers, such as auditing, where no stone should be too heavy. But mostly for securing systems, we want RBAC, and sudo is abused as a pile-driver where only a mallet was needed. Polkit is more of a proper policy toolkit.
That’s a valid choice. I’m just saying that you should pick ideally one tool for that class of work. For example, if you support one tool for Mac and Linux users that’s probably worth more than supporting two similar tools even if one of them is better.
> Why would you be running sudo in production? A production environment should usually be setup up properly with explicit roles and normal access control.
And doing cross-role actions may be part of that production environment.
You could configure an ACME client to run as a service account to talk to an ACME server (like Let's Encrypt), write the nonce files in /var/www, and then the resulting new certificate in /etc/certs. But you still need to restart (or at least reload) the web/IMAP/SMTP server to pick up the updated certs.
But do you want the ACME client to run as the same service user as the web server? You can add sudo so that the ACME service account can tell the web service account/web server to do a reload.
In your example certbot is given permission to write to /var/www/.well-known/acme-challenge and to write certs somewhere. Your web server has permission to read those files too.
There is no need for the acme client and web server to run as the same user. For reloads the certbot user can be given permission to just invoke the reload command / signal directly. There does not need to be sudo in between them.
Can you share how the scripts work? That seems to be the most interesting part, but is omitted from the article. The only technical details are UART + an opto-coupler.
> Both devices run custom scripts designed to handle data transmission reliably rather than quickly. This approach limits throughput, but reliability is paramount for critical monitoring, where losing data is unacceptable. The scripts are finely tuned to ensure that every log entry is transmitted securely without risk of cross-contamination between networks.
Yep, they are pretty simple. On one end there's a Python script that listens for syslog messages; when it gets an interesting one it converts it into a binary string and sends it out over GPIO14.
This goes through an opto-coupler.
On the other end there's a Python script listening on GPIO16; it takes the string of binary data, decodes it, checks it's valid, then creates a tagged syslog message.
Syslog is configured to forward everything onto a central location for folks to monitor.
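The actual scripts aren't shown, but the encode/decode side might look roughly like this. The 16-bit length + CRC-32 framing is my guess, not the author's protocol, and the GPIO bit-banging itself is omitted:

```python
import binascii

def encode_frame(message: str) -> str:
    """Pack a syslog line into a bit string: 16-bit length, payload, CRC-32.
    (A sketch of the idea only -- the real framing isn't described.)"""
    payload = message.encode("utf-8")
    crc = binascii.crc32(payload)
    frame = len(payload).to_bytes(2, "big") + payload + crc.to_bytes(4, "big")
    return "".join(f"{byte:08b}" for byte in frame)

def decode_frame(bits: str) -> str:
    """Reverse of encode_frame; raises ValueError on a CRC mismatch."""
    raw = int(bits, 2).to_bytes(len(bits) // 8, "big")
    length = int.from_bytes(raw[:2], "big")
    payload = raw[2:2 + length]
    crc = int.from_bytes(raw[2 + length:], "big")
    if binascii.crc32(payload) != crc:
        raise ValueError("corrupt frame")
    return payload.decode("utf-8")
```

The receiver rejecting anything that fails the checksum is what buys the "reliable rather than quick" property over a one-way link with no acks.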
I'm not sure which dimensions you're talking about, but in terms of bed size the F-150 has been very consistent over the years (although I think Crew Cabs — although they always existed — have become more popular). The Ranger still cannot fit a full sized sheet of plywood flat in the bed.
Quick research: the new Ranger's bed has only grown 0.9" in width relative to the 1990 version. Bed length seems to be the same.
Everything except the bed size has grown enormously on modern consumer trucks. Nowadays truck beds look proportionally tiny compared to trucks from 20-30 years ago when the bed made up a much larger percent of the vehicle.
Ford knows their market. Most F-150 buyers aren't looking for a functional truck, they want a comfortable commuter car that looks like a cool truck.
I was considering getting a Rivian and decided that in fact I would probably not allow the 24 year old dude at my local construction supply co to use a skid steer to drop a load of gravel into the bed of my $75k+ electric vehicle.
So instead I got a used Ford F150 (gas) and when the skid steer guy drops gravel into the bed I feel fine.
The bed of more traditional pickups like the F-150 can be swapped out in a couple of hours by one or two dudes with a lift and an impact wrench. Heck, you can buy blank F-250s without a bed at all.
> The proposed Active-AWD Trade Platform utilizes a Through-the-Road (TTR) Hybrid architecture to decouple the mechanical drivetrain while maintaining synchronized propulsion via a Vehicle Control Unit (VCU). By integrating high-topology Axial Flux or Radial-Axial (RAX) in-wheel motors, the system achieves exceptional torque density within the limited packaging of a trailer wheel well. The control strategy relies on Zero-Force Emulation, utilizing a bi-directional load cell at the hitch to modulate torque output via a PID loop, ensuring the module remains neutrally buoyant to the tow vehicle during steady-state cruising. In low-traction environments, the system transitions to Virtual AWD, employing Torque Vectoring to mitigate sway and Regenerative Braking to prevent jackknifing, effectively acting as an intelligent e-Axle retrofit. This configuration leverages 400V/800V DC architecture for rapid energy discharge and V2L (Vehicle-to-Load) site power, solving the unsprung weight damping challenges through advanced suspension geometry while eliminating the parasitic drag of traditional passive towing.
A modular truck bed could have through-the-road (TTR) AWD (given a better VCU) and e.g. hub motors or an axle motor.
There's always a chance the new Scout will fit that model. I'm not getting my hopes up though. It seems every company that releases an EV truck says they'll sell it for $30-40k and then suddenly it's $80k+.
The only disappointing aspect of the Iocaine maze is that it is not a literal maze. There should be a narrow, treacherous path through the interconnected web of content that lets you finally escape after many false starts.
If this is a concern, pass your UUIDv7 ID through an ECB block cipher with a 0 IV. 128 bit UUID, 128 bit AES block. Easy, near zero overhead way to scramble and unscramble IDs as they go in/out of your application.
There is no need to put the privacy preserving ID in a database index when you can calculate the mapping on the fly
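The comment suggests AES in ECB mode; since that needs a crypto library, here's a dependency-free stand-in illustrating the same idea (a keyed 128-bit bijection) via a small Feistel network over HMAC-SHA256. A sketch only, not vetted crypto, and the key is a placeholder:

```python
import hashlib
import hmac
import uuid

KEY = b"example-secret-key"  # assumption: load from real secret storage

def _round(half: int, rnd: int) -> int:
    """Keyed round function: 64 bits in, 64 bits out."""
    digest = hmac.new(KEY, bytes([rnd]) + half.to_bytes(8, "big"),
                      hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big")

def scramble(u: uuid.UUID) -> uuid.UUID:
    """Map an internal (e.g. UUIDv7) ID to an opaque external one."""
    left, right = u.int >> 64, u.int & (2**64 - 1)
    for rnd in range(4):
        left, right = right, left ^ _round(right, rnd)
    return uuid.UUID(int=(left << 64) | right)

def unscramble(u: uuid.UUID) -> uuid.UUID:
    """Invert scramble() to recover the internal ID."""
    left, right = u.int >> 64, u.int & (2**64 - 1)
    for rnd in reversed(range(4)):
        left, right = right ^ _round(left, rnd), left
    return uuid.UUID(int=(left << 64) | right)
```

Because the mapping is a deterministic bijection, nothing extra needs to be stored or indexed: scramble on the way out, unscramble on the way in.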
This is, strictly speaking, an improvement, but not by much. You can't change the cipher key because your downstream users are already relying on the old-key-scrambled IDs, and you lose all the benefits of scrambling as soon as the key is leaked. You could tag your IDs with a "key version" to change the key for newly generated IDs, but then that "key version" itself constitutes an information leak of sorts.
I edited that out of my post, as I'm not sure it's the correct term to use, but the problem remains. If the key leaks, then all IDs scrambled with that key can be de-scrambled, and you're back to square one.
Then that's just worse and more complicated than storing a 64-bit bigint + 128-bit UUIDv4. Your salt (AES block) is larger than a bigint. Unless you're talking about a fixed value for the AES key (is that a thing?), but then that's peppering, which is security through obfuscation.
Uhh... What? You just use AES with a fixed key and IV in block mode.
You put in 128 bits, you get out 128 bits. The encryption is strong, so the clients won't be able to infer anything from it, and your backend can still get all the advantages of sequential IDs.
You also can future-proof yourself by reserving a few bits from the UUID for the version number (using cycle-walking).
Hmm yeah, I decided that one counts because the new packages have (slightly) different content, although it might be the case that the changes are junk/pointless anyway.