This largely depends on the location you are referring to. In many German places and cities, OSM data is often more accurate than Google Maps, even for commercial info like opening hours. In the end it is up to people to ensure the data is up to date. Apps like StreetComplete can help with that. Organic Maps also allows for some editing.
As someone without a Twitter/X account, their links are bad. I can only see the first post of a chain, can't see replies, etc. Mastodon is better in that regard.
This has nothing to do with the content of the platform, Musk, etc, btw. It's just the fact that now it's a bit hostile for logged out users/people without accounts. It used to be fine, but now it hides content, which is bad for me.
It was always awful, in my opinion. Twitter does a good job at letting people publish "sound bites": little bits of what's on their mind. Past that (into longer form and discussion) it has never been good.
Right, it was never a good platform for longer posts, but before at least you could try to follow the different posts. Now, public links only show one post and that's it.
It shows you the linked post and its content just fine. If you want to engage in the conversation, you should probably just go through the account creation process which takes anywhere from 10 seconds to a minute.
Clicking on almost any of the UI elements leads to a log in prompt, without showing anything else. For people without accounts (or those who don't want to make one), that's probably not functionally different from dropping a link to some forum that requires registration, albeit in this case I guess at least the main post is visible.
The Mastodon link in comparison has the discussion visible up front, which is nice to see! Now whether the fediverse is popular enough to actually have good discussions, that's a bit harder to say, but at least it's something!
I think Facebook also has similar issues, gating a lot of things behind a login prompt. Quite user hostile, though it's also understandable why they'd do it that way.
The business scheme is perfectly valid. It’s the same on Instagram as well.
I don’t have an Instagram account and don’t see a reason to create one, but my friends can still link me pictures or videos that I can look at. If I want to engage in any way or read past the couple top comments, I need to create an account.
Showing the user what they came for is a good way to get free advertisement, but requiring them to create an account to actually use the platform is perfectly reasonable.
The UX to people without accounts is actively hostile. That's why.
If someone was telling me to look up information from a print edition of the Encyclopaedia Britannica from the 1970s, I'd have the same problem: it is an absurd thing to ask of me because there are better, more frictionless ways to obtain the information.
Why stop linking there? X hides threads. If you click a link while logged out, you can't see the thread; only logged-in users can. So you see one little post, and frequently you miss a lot of other information in the replies. See ANY post where the person is posting more than one single post. Basically, if you link to a post there, the person reading it usually won't get the full story.
Why does posting there make the user a problem? Because, if the user is trying to communicate something, they are choosing a platform that isn't interested in making that communication effective. A closed-off community isn't the town square it claims to be. If you are communicating on that site, you can be sure the people directed there aren't getting the full story.
Who's the judge? Me. I am the judge of what a problem is. So is the parent poster you were replying to. They are also a judge. It's odd that you hand off opinions to others and don't make your own.
Reading anything on Twitter is (subjectively) miserable. The platform is good only for single thoughts / sound bites; not long articles (spread across many posts) and discussions. It's _worse_ if you don't use an account, since you can't see anything but the first post... but it's awful even if you can see the whole thing.
I don't know who was the first moron who decided to post the first "long form writeup" on a platform that only supports blurbs... but I am absolutely amazed that people thought it was a good idea and followed suit.
1. You will notice Hacker News does not require a login to view content - this simple approach is a big reason why Twitter links are looked down upon. The platform used to foster simple sharing, and now does not. It is, in effect, telling you to stop sharing things publicly and share only with Twitter users.
2. Because you are basically linking to a deep link in the dark web.
3. We all get to make our own decisions, and the person calling out shitty websites that you should not bring to the group has my support.
Because you are limited to viewing only one particular tweet. And if it's a thread (which is just a dumb use of the platform itself), you are out of luck.
For better or for worse at least you can view this single tweet now, but right after Musk took over he blocked all access without account which was just annoying.
Imagine if HN required you to have an account to browse and read it. That's what has mostly happened to Twitter (and what happens with the rest of our benevolent overlords/social platforms like FB and Instagram, to which the regular web migrated along with the information :/)
Humans will always be humans, independently of the platform. :-P
In any case, at least we can see the replies and other posts from the account without having to create an account. Still better than Twitter/X in my view.
I did the same thing (and then downvoted the guy who suggested it was better!) It was much worse than the discourse on Twitter, with folks hurling insults at each other and saying things like "don't turn this into [X]itter".
I mean, I just clicked through and scrolled through a bunch of messages and all I'm seeing is people helping each other find the app via other means until the Play Store issue is resolved. In fact, the further I scroll, the more I see totally normal discourse about the issue of the app being delisted. Now that I look more closely, I see that the "last night" / "xitter" stuff is a single thread out of dozens of on-topic messages.
From what I can tell, there are some new changes that may break backwards compatibility. One group wants that resolved; the other does not think it is that big of a deal. For something like this, which goes back to the early 90s with an unknown number of installed clients and servers, backwards compatibility is probably something to look into and deal with properly.
Sure, but I'm not talking about that. I mean the one-sentence responses to carefully thought-out messages, the feet-dragging, etc. etc., and each response being Brazil[0]-like, having nothing to do with the previous ones it was responding to.
It’s not actually clear from reading through this document or others, or the linked email threads, etc. that there is a personal dispute at play, or what that might be. I’m also not sure who the target of this document is, or who wrote it. It’s also not clear what the forcing function behind the strong recommendations at the end is — will the author fork GnuPG in the event a resolution can’t be reached?
It doesn't sound like a personal dispute to me, it sounds technical. One camp (Open) wants to move faster and break backwards compatibility; the other (Libre) wants to move slower and maintain backwards compatibility.
Both groups would create a new format (Libre = v5; crypto-refresh = v6). v4-only wouldn't be able to handle either new format, and newer software could presumably be told to create files in the older format.
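If you're curious which format a given file or key actually uses, gpg's packet dump shows the version fields. A quick sketch in Python driving the gpg CLI; the file name is just a placeholder:

    import subprocess

    # Dump the OpenPGP packet structure; "version N" in the key and signature
    # packets tells you whether the material is v4 or one of the newer formats.
    dump = subprocess.run(
        ["gpg", "--list-packets", "somefile.gpg"],   # placeholder file name
        check=True, capture_output=True, text=True,
    ).stdout
    print(dump)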
The Proton folks are choosing to support both v5 and v6:
The correct way to maintain backwards compatibility in those contexts is to decrypt and re-encrypt, not support broken ciphers or weak modes of encryption indefinitely. The latter is security theater.
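For what it's worth, that upgrade path is mechanical enough to script. A minimal sketch in Python, assuming the stock gpg CLI is on PATH; the file names and recipient are placeholders:

    import subprocess

    OLD_FILE = "archive-old.gpg"          # hypothetical file using legacy defaults
    NEW_FILE = "archive-upgraded.gpg"
    RECIPIENT = "alice@example.org"       # hypothetical recipient key

    # Decrypt with whatever (possibly legacy) algorithms the old file used...
    plaintext = subprocess.run(
        ["gpg", "--batch", "--decrypt", OLD_FILE],
        check=True, capture_output=True,
    ).stdout

    # ...then re-encrypt with the tool's current defaults, into a new file so
    # the original is only removed once the new copy has been verified.
    subprocess.run(
        ["gpg", "--batch", "--yes", "--encrypt", "--recipient", RECIPIENT,
         "--output", NEW_FILE],
        input=plaintext, check=True,
    )

Writing to a separate output file and deleting the original only after verification also sidesteps most of the corruption worries raised further down the thread.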
A read-only operation should not cause an insane amount of writes. This is perilous for a great many reasons, one of which is the risk of data corruption should something go wrong.
You're thinking about this the wrong way: if your data needs to be secure, then it's already perilous to keep it around with weak or broken encryption. Security models where data is too important to risk encryption upgrades but not important enough to encrypt correctly are internally incoherent.
(This is sidestepping the other parts of the comment that don't make sense, like why a single read implies multiple writes or why performing cryptographic upgrades is somehow uniquely, unacceptably risky from a data corruption perspective.)
No, I think you're thinking about it the wrong way: write failures are common. The failure mode for a bad disk is often that reads will succeed and writes will lose data. Something that silently writes like this is increasing the risk of data loss.
It probably depends a lot on the application, but I think it's often much better to have something that will warn the user about security risks and let them decide what to do with that risk. If you do design something with these silent writes, you absolutely need to think hard about failure cases and test them, and not handwave them away. Having the most "secure" data be corrupted is ultimately an unacceptable outcome.
That's not even getting into the other problems, such as ... is it ok for the user to take a performance hit of writing X GB when all they want to do is read a file?
Your cryptosystem is not responsible for the stability of your storage medium, and your storage medium is not responsible for the security of your cryptosystem. They are black boxes to each other; to confound their responsibilities is to ensure doom in your designs.
Put another way: your cryptosystem isn't responsible for saving your ass from not making backups. If your data is valuable, treat it that way.
> Your cryptosystem is not responsible for the stability of your storage medium, and your storage medium is not responsible for the security of your cryptosystem
This is exactly why your crypto system should not rely on spontaneously writing many gigabytes on a read operation, without asking. I couldn't have said it better myself.
What you are advocating is crypto intruding on the storage mechanism inappropriately. It's a layer violation.
I think if it's important to the end user, you could write fairly decent code at the app layer that asynchronously re-encrypts old data in a way that doesn't harm the user. That code would need to have a strategy for write failures. A basic cryptography tool should probably not have this as a built-in feature however, for a few reasons including those I've stated.
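For what that strategy could look like, here is a rough Python sketch of a crash-safe replace, under the assumption that the old ciphertext must survive any failure; the function and file names are made up:

    import os

    def replace_atomically(path: str, new_ciphertext: bytes) -> None:
        """Write the re-encrypted data next to the original, then swap."""
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(new_ciphertext)
            f.flush()
            os.fsync(f.fileno())   # make sure the bytes actually reached disk
        os.replace(tmp, path)      # atomic rename; a crash leaves the old file intact

A failed write then leaves the original untouched instead of silently corrupting the most "secure" data.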
> This is exactly why your crypto system should not rely on spontaneously writing many gigabytes on a read operation, without asking.
Again: nobody has said this.
Whether or not the tool does this in bulk, or asynchronously, or whatever else is not particularly important to me. The only concern I have in this conversation is whether it's contradictory to simultaneously assert the value of some data and refuse to encrypt it correctly. Which it is.
This is silly. Nothing that happens with the standard or its implementation is going to prevent you from decrypting a 20 year old email. It shouldn't need saying, but one reason for that is that PGP's cryptography is schoolbook cryptography.
Upstream spoke of deprecating support for older emails. My response is aimed there.
And there is loads of software that will not compile on modern hardware. End users often don't have the ability to rewrite it, or even to easily validate a random bit of code on GitHub.
A project like this needs to maintain backwards compatibility. For decades.
Once again: this is silly, because whatever conversation we are having about the standard, your ability to decrypt old messages would not have been impacted. Standard revisions don't turn the previous standard into secret forbidden knowledge.
What's really being asked for here is the capability to seamlessly continue sending messages with the previous, weak constructions, into the indefinite future, and have the installed base of the system continue seamlessly reading them. I think that is in fact a goal of PGP, and one of its great weaknesses.
When standards remove the requirements for something after a period of obsolescence, that tends to send a message to the implementors to remove that from the software.
Users who still rely on that have to use the old software, against which there can be barriers:
- old executables don't run on newer OS (particularly in the Unix world).
- old source code won't build.
- old code won't retrieve the old data from the newer server it has been migrated to.
Things like that.
The barriers could be significant enough that even someone skilled and motivated, such as myself, would be discouraged.
> Users who still rely on that have to use the old software, against which there can be barriers
Not all reliance is reasonable though.
Some legacy software can only do SSLv3 or lower, does that mean the rest of the internet has to carry that support around? Abso-f-lutely not.
The same applies here. If you really need that ancient stuff that loses support, repackage them in newer encryption or remove the obsolete layer. It's highly probable that information no longer needs to stay encrypted at rest anyways.
In my opinion, the Internet should not be removing support for older SSL. The highest SSL version that is common to server and client should always be used.
> The highest SSL version that is common to server and client should always be used.
That is how it works. What you're missing is that everyone, both servers and clients, agrees that supporting old SSL versions is a bad idea. And they're right.
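Concretely, modern clients don't just prefer newer versions, they set a hard floor. A rough illustration using Python's standard ssl module; the host name is a placeholder:

    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSLv3 / TLS 1.0 / 1.1 outright

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            # Negotiation still picks the highest version both sides support,
            # e.g. "TLSv1.3"; anything below the floor fails the handshake.
            print(tls.version())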
If that were the actual principle being accurately followed, the first feature to have been removed from browsers would have been plain HTTP before any version of SSL.
Plain HTTP is what people resort to when their browser refuses to connect to an old device or server using HTTPS, which is worse than old SSL.
No, because clear lack of security is better than faux security. With older SSL versions, it's security that even creates extra risk for all clients (by leaking server secrets and allowing ciphersuites that don't have PFS).
The idea that a user will have a 20 year old encrypted mail only because software still supports it is absurd. What really happens is someone has a 20 year old mail no matter what; it will always exist, and the choice is: support it or not. Support it to be read, support it to be converted, warn the user, suggest fixes.
And your SSL example is senseless! In what world do you envision super secure stuff alongside weaker legacy, on the same damned server? You are literally not thinking sensibly about any of this; your examples are paper tigers.
How do you alert the users that are running the problematic software and haven't yet updated it? The very premise is ridiculous.
> What really happens is someone has a 20 year old mail no matter what; it will always exist, and the choice is: support it or not. Support it to be read, support it to be converted, warn the user, suggest fixes.
Well yeah, and the choice should be to not support it. If the user needs those letters they can either decrypt them or just re-encrypt them. It's silly to claim that a message can somehow be so vital that it must be protected by encryption, yet not worth upgrading to something more modern.
> And your SSL example is senseless! In what world do you envision super secure stuff alongside weaker legacy, on the same damned server?
I'm not envisioning it. Nobody should be running such old useless garbage. What was suggested earlier in this thread does not work and must not happen in practice.
> How do you alert the users that are running the problematic software and haven't yet updated it? The very premise is ridiculous.
Where did you get the weird idea the software isn't updated? This entire discussion is about deprecation of older encryption methods in new versions of software.
Yes, but that is only needed to connect with old software that has not updated. Two pieces of new software will not negotiate on using old crypto even if they both support it.
In the troublesome situations, the upgrade of one of the two pieces of software is thrust upon the user.
The article specifically refers to backward compatibility with RFC4880, RFC5581, and RFC6637. These specifications include encryption techniques. So no, I was not just "free associating". Please do not assume the worst, because it could be that you just haven't understood the implications of the article.
On the other hand, as long as earlier versions or their sources remain available, it doesn't sound like a major problem to me. The sources would effectively be documentation for the previous format and can be reimplemented if needed.
>The whole situation regarding key servers, key rotation, and the web of trust is a complete dumpster fire.
Can you explain why?
People elsewhere in this thread are saying that PGP sucks because it tries to do too many things at once, but it seems to me that the one big advantage of a tool which does everything at once is that you only need to solve authenticity one time for everything you do.
For example, if I'm communicating with an open source dev, having their known-authentic PGP key allows me to simultaneously verify the authenticity of their software updates, verify the authenticity of the email they send me, and encrypt my emails to them. Is there anything outside of PGP that accomplishes this?
Well, the key servers are useless because they are susceptible to that poisoning attack from a few years ago, and they happily send you fraudulent or revoked keys.
And the web of trust doesn't scale. The trust ratings mean different things to different people, the propagation of revocation certs and signatures is slow, and rotating keys is onerous.
>For example, if I'm communicating with an open source dev, having their known-authentic PGP key allows me to simultaneously verify the authenticity of their software updates, verify the authenticity of the email they send me, and encrypt my emails to them. Is there anything outside of PGP that accomplishes this?
How often do you check the fingerprints of that key? Do you verify out of band when the developer rotates their key? (Haha just kidding, PGP users essentially never rotate keys)
If you care enough to encrypt your emails, then what is the virtue of verifying less frequently that you're talking to the correct persons?
Why wouldn't you want separate keys for all those things?
Why would you want an adversary to be able to compromise a single key and have the ability to forge commits, emails, and whatever else?
>How often do you check the fingerprints of that key? Do you verify out of band when the developer rotates their key?
I'm almost certain PGP best practice is to have a single master key, kept on an airgapped device, that's used to sign subkeys for various purposes like email, commit signing, etc. So I only have to verify out of band once, unless the airgapped device gets compromised or the master key encryption is broken.
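For the curious, the subkey layout is quick to set up with modern GnuPG (2.1+). A hedged sketch, again just driving the gpg CLI from Python; the fingerprint and expiry are placeholders:

    import subprocess

    MASTER_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"   # hypothetical fingerprint

    # Run on the machine holding the master key: add purpose-specific subkeys
    # that expire and can be rotated without re-verifying the master key.
    for algo, usage in [("ed25519", "sign"), ("cv25519", "encr")]:
        subprocess.run(
            ["gpg", "--quick-add-key", MASTER_FPR, algo, usage, "1y"],
            check=True,
        )

Only the subkeys are then exported for day-to-day use (gpg --export-secret-subkeys), so the master key never has to leave the airgapped box.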
PGP users are a minority to begin with. I wouldn't be surprised if a lot of them do this. I think I got that rec from a PGP beginner guide I found the other month.
Don't forget about PGP smart cards either. You could keep the master key you use to sign subkeys on a smart card. A smart card should be harder to hack than your phone.
Qubes has built-in "split GPG" support that allows you to e.g. sign something using your private key while keeping it in a different VM. See https://www.qubes-os.org/doc/split-gpg/
I know PGP isn't for everyone, I just like the idea of keeping high-security options available for those who want them.
>More importantly, how do you know that your counterparty is one of that (extremely small) minority?
>I know PGP isn't for everyone, I just like the idea of keeping high-security options available for those who want them.
But PGP doesn't provide a high-security anything.
- In order to achieve some reasonable level of protection from MITM attacks you can't just get someone's key from a keyserver. You have to go hunting for it and you're never really sure if there's a revocation cert out there that you just missed.
- Some people publish PGP keys on their websites, and you could use that to contact them over encrypted email. You are still vulnerable to metadata analysis and unless you manually re-key on every message (which you don't), you don't enjoy forward secrecy. Additionally, all it takes is one oopsie moment for someone to Reply-All and forget to encrypt first and now the entire conversation went out unencrypted. This has happened to me.
- You claim there's some unspecified benefit to signing commits with the same key you encrypt your emails with, though I don't see why that's superior to signify/minisign
- Best practices demand that you keep an airgapped machine with a long lived master key on it. No mention is made of how to prevent BadUSB-type attacks from jumping the air gap. If you really want to be sure nobody mints their own key from your airgapped machine to impersonate you, you now need to monitor your machine. That raspi in a drawer is still vulnerable to Evil Maid attacks, and the worst part is you won't know someone's impersonating you until it's too late.
All this attack surface just for the purported convenience of having some kind of unified "crypto identity" wherein you only need to verify someone once?
These are not the characteristics of a high security system. This is why people think PGP is for security LARPers. It's objectively just not a very good tool.
>- In order to achieve some reasonable level of protection from MITM attacks you can't just get someone's key from a keyserver. You have to go hunting for it and you're never really sure if there's a revocation cert out there that you just missed.
This is a convenience consideration, not a security one.
>- Some people publish PGP keys on their websites, and you could use that to contact them over encrypted email. You are still vulnerable to metadata analysis and unless you manually re-key on every message (which you don't), you don't enjoy forward secrecy. Additionally, all it takes is one oopsie moment for someone to Reply-All and forget to encrypt first and now the entire conversation went out unencrypted. This has happened to me.
>No mention is made of how to prevent BadUSB-type attacks from jumping the air gap.
You could write newly minted keys to single-use CD-Rs.
>If you really want to be sure nobody mints their own key from your airgapped machine to impersonate you, you now need to monitor your machine. That raspi in a drawer is still vulnerable to Evil Maid attacks, and the worst part is you won't know someone's impersonating you until it's too late.
If physical access is part of your threat model, you'll want to monitor access to your stuff anyways.
Outstanding work. There is some fine-tuning needed and the ecosystem needs to catch up - looking at you, Firefox and KeePassDX, and probably more. But this really is a milestone. So glad KeePassXC exists and allowed me to transition over from 1Password.
1Password overdid it a while ago for me and I switched to KeePassXC. It is certainly not as polished as 1Password but the developer team is very responsive and since I started using it, especially on macOS, things have improved a lot already. It is a pleasure to watch things evolve and Passkey support is the next big thing that will be added.
Compared to Keychain Access, KeePassXC works cross platform and I need that, since my devices are cross platform. Also helps to avoid vendor lock-in.
Probably the most important part of the announcement is:
Four of the major functionalities missing in the first official release of macOS Sonoma are:
- Entire message data is not always passed to the extension, making processing of the encrypted message impossible
- Reliably encrypted drafts
- Support for setting the default state of the sign and encrypt button in compose windows, which can lead to dangerous side-effects
- Sign and encrypt button could go out of sync with the internal state if keyring changes are detected
I bought Affinity Photo and Affinity Designer in May of this year, less than 6 months ago. Some companies that offer perpetual licenses would simply give me the upgrade to the next version for free since it came out within a year, and basically all of them would offer an upgrade discount that isn't simply a launch sale, into perpetuity.
The price is good for decent software but it's kind of discouraging me from wanting to upgrade, when their goal should be the exact opposite of that.
This was one thing that annoyed me about Affinity as well, no AVIF or WEBP support. There are a number of threads on their forums asking for it, but Serif staff stopped responding to those a while ago.