Anyone here familiar with the details of GendBuntu[1], the Ubuntu distro used by the French Gendarmerie? I'd love to hear what is working and what isn't on the ground.
A year ago I used Azure Trusted Signing to codesign FOSS software that I distribute for Windows. It was the cheapest way to give away free software on that platform.
A couple of months ago I needed to renew the certificate because it had expired, and I ran into the same issue as the author here: verification failed, and they refused to accept any documentation I gave them. Very frustrating experience, especially since there's no human support available at all, for a product I was willing to pay for and use!
We ended up getting our certificate sourced from https://signpath.org and have been grateful to them ever since.
For what it’s worth, Trusted Signing verification has been a moving target over the last 12 months. It was open to individuals, then closed to anyone except (IIRC) US businesses with DUNS numbers, then it opened again to US-based individuals (and perhaps a few other countries).
My completely uninformed guess was that someone had done something naughty with Trusted Signing-issued code signing certificates.
Anyway, when I first saw the VeraCrypt thing this morning, my initial reaction was “I wonder if this is them pushing developers onto Trusted Signing the hard way?”
I don't know anything about Trusted Signing verification, but I do know from reports on 'mini umbrella company fraud' that if you're a fraudster, there are people in the Philippines who will happily sign their name to Western countries' official paperwork in exchange for $2,000 or so. Understandably, as that's more than the country's median annual income.
So I can see why offering trusted signing for individuals worldwide would come with certain challenges.
Most RATs are signed. That's a hurdle, but clearly not a big deal for criminals to bypass: many "SSL companies" will issue you a certificate if you just use fake documents, and plenty of shady services sell those signatures outright, apparently for no more than $15 per binary. So obviously, not so secure in practice.
I'm in Europe and ended up creating an organization since I have my own company, but they messed up the verification of one of the legitimate documents, and there was no way to reach them once they made that mistake. Frustrating, and definitely a lost customer for them.
I like the idea of a central signing authority for open source. While this might go against the spirit of open source, I think it eventually creates a critical mass and an outcry if Microsoft or Google were to play games with them. Foundations might also be a good way to protect against legal trouble when distributing OSS under different regulations. I am imagining, e.g., an F-Droid that plays Google's game. With reproducible, or at least audited, builds, some trusted authorities could actually produce more trusted builds, especially at a time of supply-chain attacks. However, I think such distribution authorities would need really good governance and a lot of funding.
There is no real advantage to a central signing authority. If you use Debian, the packages are signed by Debian; if you use Arch, they're signed by Arch; etc. And if one of them gets compromised, the scope of the compromise is correspondingly limited.
You also have the verification happening in the right place. The person who maintains the Arch curl package knows where they got it and what changes they made to it. What does some central signing authority know? That the Arch guy sent them some code they don't have the resources to audit? And then you have two different ways to get pwned: you get signed malicious code if a compromised maintainer sends it to the central authority to be signed, or if the central authority gets compromised and signs whatever they want.
All PKI topologies have tradeoffs. The main benefit to a centralized certification/signing authority is that you don't have to delegate the complexity of trust to peers in the system: a peer knows that a signature is valid because it can chain it back to a pre-established root of trust, rather than having to establish a new degree of trust in a previously unknown party.
The downside to a centralized authority is that it's a single point of failure. PKIs like the Web PKI mitigate this by having multiple central authorities (each an issuing CA) and forcing them to engage in cryptographically verifiable auditability schemes that keep them honest (Certificate Transparency).
It's worth noting that the kind of "small trusted keyring" topology used by Debian, Arch, etc. is a form of centralized signing. It's just an ad-hoc one.
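The chain-walking idea in the comment above can be shown with a toy model. This is not real X.509 validation (no actual signatures, expiry, or revocation); the certificate names and issuer table below are made up purely for illustration:

```python
# Toy model of chain-of-trust validation: each certificate records
# who signed it, and validation walks upward until it reaches a
# pre-established root of trust. Names are invented.

TRUSTED_ROOTS = {"RootCA"}

# cert -> the issuer that signed it
ISSUED_BY = {
    "example.org": "IntermediateCA",
    "IntermediateCA": "RootCA",
}

def chains_to_root(cert, max_depth=10):
    """Return True if `cert` chains back to a trusted root."""
    for _ in range(max_depth):  # bound the walk to avoid cycles
        if cert in TRUSTED_ROOTS:
            return True
        if cert not in ISSUED_BY:
            return False        # dangling chain: unknown issuer
        cert = ISSUED_BY[cert]
    return False

print(chains_to_root("example.org"))   # True: leaf -> intermediate -> root
print(chains_to_root("evil.example"))  # False: no path to a trusted root
```

The point of the topology is in the last call: a peer can reject "evil.example" without ever having negotiated trust with whoever produced it.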
> a peer knows that a signature is valid because it can chain it back to a pre-established root of trust, rather than having to establish a new degree of trust in a previously unknown party.
So the apt binary on your system comes with the public keys of the Debian packagers and then verifies that packages are signed by them, or by someone else whose keys you've chosen to add for a third-party repository. They are the pre-established root of trust. What is gained by further centralization? It's just useless indirection: all a central authority can do is certify the packages the Debian maintainers submit, which is the same thing that happens when the maintainers sign them directly and ship their own keys with the package management system instead of the central authority's, except that now there isn't a central authority to compromise everyone at once or otherwise introduce additional complexity and attack surface.
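The model described above reduces to per-repository key pinning. A minimal sketch, assuming invented repository and key names (real apt verifies OpenPGP signatures on repository metadata rather than comparing key IDs like this):

```python
# Sketch of apt-style trust: each configured repository pins its own
# signing keys, and a package is accepted only if it was signed by a
# key pinned for the repository it came from. No central authority
# is involved; all names here are made up.

PINNED_KEYS = {
    "debian-main":   {"DEB-KEY-1", "DEB-KEY-2"},
    "vendor-amdgpu": {"AMD-KEY-1"},  # third-party repo the user opted into
}

def accept_package(repo, signing_key):
    """Accept only packages signed by a key pinned for that repo."""
    return signing_key in PINNED_KEYS.get(repo, set())

print(accept_package("debian-main", "DEB-KEY-1"))  # True
print(accept_package("debian-main", "AMD-KEY-1"))  # False: the vendor's
                                                   # key can't sign Debian's repo
```

Note the blast-radius property: a compromised vendor key can only affect packages from that vendor's repository, not the distribution's.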
> PKIs like the Web PKI mitigate this by having multiple central authorities (each an issuing CA) and forcing them to engage in cryptographically verifiable auditability schemes that keep them honest (Certificate Transparency).
The Web PKI is the worst-of-both-worlds omnishambles. You have multiple independent single points of failure: compromising any one of them allows you to sign anything. Its only redeeming quality is that the CAs have to compete with each other, and CAA records nominally allow you to exclude CAs you don't use from issuing certificates for your own domain. But end users can't exclude CAs they don't trust themselves, most domain owners don't even use CAA records, and a compromised CA could ignore the CAA record and issue a certificate for any domain regardless.
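For readers unfamiliar with CAA, here is a deliberately simplified sketch of its semantics, covering only `issue` tags; real RFC 8659 processing also climbs the DNS tree, distinguishes `issuewild`, and handles critical flags, so treat this as illustrative only:

```python
def caa_allows(records, ca_domain):
    """records: list of (tag, value) CAA records for a domain.
    Simplified RFC 8659 reading: with no CAA records, any CA may
    issue; otherwise only CAs named in an `issue` record may, and
    `issue ";"` forbids everyone."""
    issue = [value for tag, value in records if tag == "issue"]
    if not issue:
        return True            # no CAA policy: any CA may issue
    return ca_domain in issue  # ";" matches no CA, so it blocks all

print(caa_allows([], "letsencrypt.org"))                           # True
print(caa_allows([("issue", "letsencrypt.org")], "digicert.com"))  # False
print(caa_allows([("issue", ";")], "letsencrypt.org"))             # False
```

Which also shows the weakness named above: `caa_allows` is policy the CA itself is expected to enforce at issuance time, so a compromised CA can simply not run the check.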
> It's worth noting that the kind of "small trusted keyring" topology used by Debian, Arch, etc. is a form of centralized signing. It's just an ad-hoc one.
Only it isn't really centralized at all. Each package manager uses its own independent root of trust. The user can not only choose a distribution (apt signed by Debian vs. apt signed by Ubuntu), they can use different package management systems on the same distribution (apt, flatpak, snap, etc.) and can add third party repositories with their own signing keys. One user can use the amdgpu driver which is signed by their distribution and not trust the ones distributed directly by AMD, another can add the vendor's third party repository to get the bleeding edge ones.
This works extremely well. There are plenty of large trustworthy repositories like the official ones of the major distributions for grandma to feel safe in using, but no one is required to trust any specific one nor are people who know what they're doing or have a higher risk tolerance inhibited from using alternate sources or experimental software.
Nothing, I can’t think of a reason why you would want to centralize further. But that doesn’t mean it isn’t already centralized; the fact that every Debian ISO comes with the keyring baked into it demonstrates the value of centralization.
> Each package manager uses its own independent root of trust.
Yes, each is an independent PKI, each of which is independently centralized. Centralization doesn’t mean one authority; it’s just the way you distribute trust, and it’s the natural (and arguably only meaningful) way to distribute trust in a single-source packaging ecosystem like most Linux distros have.
> cen·tral·i·zation: the concentration of control of an activity or organization under a single authority.
I mean, people try to motte-and-bailey this all the time. Someone proposes or defends a monopoly by putting it up against the false-dichotomy alternative where no party trusts any other party whatsoever, so everyone is required to do everything on their own because no delegation is possible.
There is an alternative which is neither of those things: a competitive market. You have neither a single authority nor the total absence of trust. Instead there are numerous alternatives that each try to maintain a good reputation, because people can choose freely among them without their choice being coerced by tying it to numerous otherwise-unrelated factors.
Notice how this is importantly different. If you have a PC, you can install Debian or Arch or Windows; if you install Debian, you can install software with apt or flatpak or snap; if you use apt, you can use the official repositories or numerous third party ones. If you have an iPhone, you get iOS and you get Apple's store and everything else is anti-competitively excluded.
My point was that Debian, etc. are conceptually distinct organizations, and so there’s no point in centralizing beyond their organizational boundaries. Each already performs centralized key management, but nobody would particularly benefit from a single global keyring for all Linux distributions, because nobody (?) is transferring packages across distribution families.
> I like the idea of a central signing authority for open source.
It would be the most corrupt(ible) org ever involved in open source and it would promote locked-down computing, as that would be their main reason to exist. Be careful what you wish for!
While I agree that this is a problem if it becomes an attack vector, F-Droid does already do central signing of their own builds. With reproducible builds the attack surface would actually be minimal, and maybe there could even be multiple such entities, which would make this even more robust. I just think the answer to power is not always decentralization. Alternatively, government actors could also build open source for their citizens; then we would at least have democratically mandated corruption. IMHO this is much better than the current quasi-government of the internet by a few powerful gatekeepers.
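The "multiple such entities" idea combined with reproducible builds can be sketched as a quorum check. This is a hypothetical toy, not how F-Droid works: builder names and digests are invented, and a real system would verify signed attestations rather than trust a plain dict:

```python
from collections import Counter

def trusted_digest(attestations, quorum):
    """attestations: {builder_name: hex_digest_of_their_build}.
    Return the digest that at least `quorum` independent builders
    agree on, else None. With reproducible builds, honest builders
    all produce the same digest, so one compromised builder cannot
    change the outcome on its own."""
    if not attestations:
        return None
    counts = Counter(attestations.values())
    digest, votes = counts.most_common(1)[0]
    return digest if votes >= quorum else None

attestations = {
    "builder-a":    "abc123",
    "builder-b":    "abc123",
    "evil-builder": "dead00",  # compromised builder disagrees
}
print(trusted_digest(attestations, quorum=2))            # abc123
print(trusted_digest({"only-one": "abc123"}, quorum=2))  # None
```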
Then it wouldn't be a central signing authority. Not that it matters: several signing authorities would not divide the clients equally. One would emerge as central, become corrupted by industry commercial interests, be promoted further by them, and end up as some sort of Google of signing, where you can't do anything on your computer without their knowledge and approval.
If someone is willing to put in the work in governance, FOSS projects would be willing to fund it - at least Mudlet would be. We get income from Patreon to cover the costs.
There is ossign.org, Certum offers a cheap certificate for FOSS [1], and Comodo offers relatively cheap (but still expensive) certs as well [2]. I'm not affiliated with any of these services; these are just the ones I remember from the last time I had to dig into this mess, so there might be more that I don't recall at the moment.
I've been doing something similar by letting Claude run in a VirtualBox VM. It's easy to use, there are no issues with observability, and the attack & damage surface is far less of an issue.
I'm not one to believe the Silicon Valley hype usually (GPT-2 being too dangerous to release, AI giving us UBI, and so on), but having run Claude Opus 4.6 against my codebase (a MUD client) over the weekend, I can believe this assessment.
Opus alone did a good job of identifying security issues in my software, as it did with Firefox [1] and Linux [2]. A next-generation frontier model being able to find even more issues sounds believable.
That said, this is script kiddies vs. SQL injection all over again. Everyone will need to bring their basic security up to the new level, and it will become the new normal. And given how intelligence agencies are already sitting on a ton of zero-days, this will actually help the general public by levelling the playing field once again.
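For comparison, the fix in the SQL-injection era was just as mechanical: parameterized queries instead of string concatenation. A minimal sqlite3 illustration (table and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "x' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(leaked)   # [('hunter2',)] -- the injected OR clause matched every row

# Fixed: a bound parameter is treated as data, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)     # [] -- no user is literally named "x' OR '1'='1"
```

The defense against injection became boilerplate once tooling caught up; the comment above is betting the same happens with AI-discovered bugs.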
Gemma 3 E4E runs very quickly on my Samsung S26, so I am looking forward to trying Gemma 4! It is fantastic to have local, offline alternatives to frontier models.
These recent security failures from Anthropic reveal the caveats of using only AI to write code: the safety an experienced engineer provides is not yet matched by an LLM, even if the LLM can seemingly write code that is just as good.
Or in short: if you give LLMs to the masses, they will produce code faster, but overall quality will degrade. Microsoft and Amazon found this out quickly. Anthropic's QA process is better equipped to handle this, but cracks are still showing.
To a certain extent, I do wonder if just letting Claude do everything and then using the bug reports and CVEs they find as training data for an RL environment might be part of the plan. “Here’s what you did, here’s what fixed it, don’t fuck up like that again.”
The Anthropic team does an excellent job of speeding up Claude Code when it slows down, but for the sake of RAM and system resources, it would be nice to see it rewritten in a more performant framework!
They posted previously on HN that they too were caught off guard. The 'tips' weren't specific to Raycast; they've been going on for a while, and Raycast was just one product it decided to feature now.
In principle, one could train the AI to insert ads in its answers. So no, if you only do inference locally with an open-weight model you are still not in control.
[1] - https://en.wikipedia.org/wiki/GendBuntu?useskin=vector