This is indeed a useful approach to limiting the scope of environment variables, and I try to use it rather than exporting when possible. Using files (especially "special" files like the file-descriptor references produced by process substitution) is still preferable by a wide margin, and I hope that applications trend toward files as the primary mechanism for passing in secrets.
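A minimal sketch of the file-first interface I mean, where `load_secret` and the demo values are illustrative: the tool takes a path (which can be a /dev/fd path created by shell process substitution), so the secret never appears in the process's environment or argv.

```python
# Sketch: read a secret from a file path instead of an environment variable,
# so it never shows up in /proc/<pid>/environ. `load_secret` is illustrative.
import tempfile

def load_secret(path: str) -> str:
    with open(path) as f:
        return f.read().strip()

# Stand-in for something like `mytool <(pass show db/password)` in bash,
# where the shell would hand the tool a /dev/fd/N path:
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("hunter2\n")        # pretend this came from a secrets manager
print(load_secret(f.name))      # hunter2
```

In bash, process substitution (`<(...)`) produces exactly such a path, so the secret flows from the secrets manager to the tool without touching the environment.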
Yes, it makes a difference: about 8 milliseconds.
Properly implemented IPv6 has a lower latency.
(and is more efficient, though I believe the energy savings are negligible)
See this map: https://stats.labs.apnic.net/v6perf
A string like "zqb" would certainly give me pause with this letterform, because it looks a lot like "ząb" (Polish for "tooth"). Maybe it would be clearer in surrounding text, though.
I doubt it. There’s a reason you can’t make calls from an iPad (despite there being a SIM-card variant). There’s a reason you can’t use the Pencil on a Mac’s trackpad. There’s a reason multitasking support is very limited on the iPad and absent on the iPhone.
Apple wants you to buy more devices to fill the gaps that each single device leaves.
Why go that way? I’m no digital-signal-processing expert, but images (and series thereof, i.e. videos) are 2D signals. What we see is the spatial domain, and analyzing pixel by pixel is naive and won’t get you very far.
What you need is to go to the frequency domain. From my own experiments back in university, the most significant image information lies in the lowest frequencies. Cutting off everything above roughly the lowest 10% of frequencies leaves a very comprehensible image, with only wavy artifacts around objects. That leaves you plenty of bandwidth to use, even if you want to embed information in existing media.
Now here you have the full bandwidth to use. Start in the frequency domain, decide on the lowest bandwidth you’ll allow, and set the coefficients of the harmonic components accordingly. Convert back to the spatial domain, upscale, and you’ve got your video to upload. This should leave the data encoded in a way that survives compression and resizing; you’ll just need to allow some headroom for that.
You could slap error correction codes on top.
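The error-correction layer can start as simply as a repetition code with majority-vote decoding; a minimal sketch (the function names are mine, and a real system would use something stronger like Reed–Solomon):

```python
# Repetition code: send each bit n times, decode by majority vote.
# Survives occasional flipped bits at the cost of n-fold overhead.
def rep_encode(bits, n=3):
    return [b for b in bits for _ in range(n)]

def rep_decode(coded, n=3):
    return [int(sum(coded[i:i + n]) > n // 2)
            for i in range(0, len(coded), n)]

msg = [1, 0, 1, 1]
coded = rep_encode(msg)
coded[2] ^= 1                  # flip one transmitted bit
print(rep_decode(coded))       # [1, 0, 1, 1]: the flip is voted away
```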
If you think about it, you should treat video as a channel, like copper wire or radio. We’ve come quite far transmitting over those media without ML.
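A sketch of the frequency-domain embedding described above, under some assumptions of mine: a square grayscale frame, an orthonormal 2D DCT built by hand with NumPy, and Gaussian noise standing in for compression damage. One bit is embedded by forcing a low-frequency coefficient to a large ± value and read back from its sign; `embed_bit`/`extract_bit` are illustrative names.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: rows are frequencies, columns are samples.
    j = np.arange(n)
    M = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    M[0] *= np.sqrt(1 / n)
    M[1:] *= np.sqrt(2 / n)
    return M

def embed_bit(img, bit, pos=(2, 3), strength=500.0):
    M = dct_matrix(img.shape[0])       # assumes a square frame
    coeffs = M @ img @ M.T             # forward 2D DCT
    coeffs[pos] = strength if bit else -strength
    return M.T @ coeffs @ M            # inverse 2D DCT

def extract_bit(img, pos=(2, 3)):
    M = dct_matrix(img.shape[0])
    return bool((M @ img @ M.T)[pos] > 0)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(float)
# Mild noise as a stand-in for recompression:
noisy = embed_bit(frame, True) + rng.normal(0, 2.0, frame.shape)
print(extract_bit(noisy))  # True: the large coefficient survives the noise
```

Because the transform is orthonormal, spatial noise of a given magnitude perturbs each coefficient by about the same magnitude, so a coefficient forced far from zero keeps its sign through mild degradation.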
We started with that approach, assuming the compression is wavelet-based and then purposefully generating wavelets that we know survive the compression process.
For the sake of this discussion, wavelets are pretty much exactly that: a set of frequency components where the "least important" ones (according to the algorithm) are cut out.
But that's pretty cool, seems like you've re-invented JPEG without knowing it, so your understanding is solid!
How about a Fourier transform (or cosine transform, whichever works best), keeping the data as frequency-component coefficients? That’s the rough idea behind digital watermarking, and it survives image transforms quite well.
I don’t see how it’s ground-breaking. Non-repudiation could be achieved long before blockchains, because we have cryptography and can sign things.
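The point can be shown with nothing but a plain digital signature. A toy sketch (textbook RSA with tiny primes, stdlib only, NOT secure; real systems use proper key sizes and padding):

```python
# Non-repudiation without a blockchain: only the private-key holder can
# produce a signature that the public key verifies.
from hashlib import sha256

p, q = 61, 53                        # toy primes; real keys use ~2048-bit moduli
n = p * q
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def sign(msg: bytes) -> int:
    h = int.from_bytes(sha256(msg).digest(), "big") % n
    return pow(h, d, n)              # requires the private exponent d

def verify(msg: bytes, sig: int) -> bool:
    h = int.from_bytes(sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h       # needs only the public key (n, e)

s = sign(b"I agree to these terms.")
print(verify(b"I agree to these terms.", s))  # True
```

Anyone holding the public key can check the signature, and the signer can't later deny producing it; no distributed ledger required.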
var=value some_command
This will still show up in /proc/<pid>/environ for the child process, but a lot of tools rely on environment variables, so it’s somewhat unavoidable.
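The scoping behavior of `var=value some_command` can be sketched from Python on a POSIX system via `subprocess` with an explicit `env`: the variable exists only in the child's environment, never the parent's.

```python
# `VAR=value cmd` equivalent: the variable lands only in the child's
# environment. The caveat above still applies while the child runs:
# it is visible in /proc/<child pid>/environ.
import os
import subprocess

out = subprocess.run(
    ["sh", "-c", 'printf %s "$SECRET"'],
    env={**os.environ, "SECRET": "hunter2"},
    capture_output=True,
    text=True,
).stdout
print(out)                       # hunter2: the child saw the variable
print("SECRET" in os.environ)    # the parent's environment is untouched
```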