I don't discount this as a possibility but my impression is that the OpenAI brand isn't very sticky.
Internet Explorer being pre-installed on Windows devices didn't prevent it from being demolished by newcomer Chrome throughout the 2010s. Now we're looking at a product that's even less integrated, and whose value is exposed through universal interfaces (human language, images, etc.).
If OpenAI succeeds, I imagine that remarkably little of it will have come from the brand. But subtracting the first-mover brand advantage: they can either compete on the frontier, which seems difficult and bears potentially diminishing returns (particularly with respect to distillation); or compete as a commodity, which I imagine cannot justify their valuation/spend.
For people that use ChatGPT the same way you do, yeah it's not. For people in the throes of AI psychosis who've named their ChatGPT and have a deep relationship with it, switching to a newer model from OpenAI is an issue, nevermind switching to a different model from a different company.
I considered that but I don't see it being very impactful. It presumes a user who cares enough about "their" ChatGPT that they can't move from a particular model provider, but simultaneously does not care enough that model providers themselves have a financial motivation to shoo users onto their newer and more efficient models.
The transition from GPT-4 to GPT-5 was not well received among this crowd -- never mind that this crowd is comparatively small to begin with. I just don't imagine you can build a business on that sliver of a sliver, much less one that justifies OpenAI's spending.
The article you're responding to is making specific operational claims about Claude's (basically non-) relevance. I'd be interested to hear if you're directionally correct, but forgive me if I need more details than "but it integrates Claude".
If the issue is that large entities are using the software without contributing back in a way that goes against the "spirit of the law", the only solutions I can see license-wise are ones that restrict usage, which marks them squarely in the not-FOSS camp.
I do believe the line is inappropriately drawn, but I also have a lot of respect for what the open source/free software movements have achieved and won't spit in their face by trying to move the line or hijack the labels.
Yeah, and that they are always searching for ways to get out from under even their minimal obligations.
For example, the practice now of "AI reimplementation" (i.e. uncopyrighting) free software using AI, and training LLMs on free software without respecting the license, while simultaneously claiming that the same is absolutely not allowed on their software.
Oh, and not only can you not train AI on their software "without permission", you can't even use their software through AI (Microsoft).
I didn't consider that bundling Search & Assistant maybe puts them in a tricky spot among some users who revile LLM features, and others who will utilize them to the cap. To the degree that the former is subsidizing the latter, or costing them customers (probably not a ton): I can see why separating the two offerings makes sense.
Though I'm sympathetic to the users for whom this would basically be a strict downgrade in featureset.
Wow small world. Hello from a fellow L1 (2012-2016). I didn't realize Ronimo had gone bankrupt, so I suppose I should be glad I have a chance to boot it up again.
The title writer might be doing the project a disservice by using the term "formal" to describe it, given that the project talks a lot about "specs". I mistook it to imply something about formal specification.
My quick understanding is that it isn't really trying to utilize any formal specification, but is instead trying to more clearly map the relationship between, say, an individual human-language requirement you have of your application and the code which implements that requirement.
The Library of Babel comparison is too fatalistic imo, even granting that it's maybe just an extreme example. The real world doesn't quite resemble a closed system with no metadata. We can still establish chains of trust.
Whether or not people will build resilient chains is another story, contingent on whether the strength of that chain actually matters to people. It probably doesn't for a lot of people. Boo. But inasmuch as I care, I feel I ought to be free to try and derive a strong signal through the noise.
I was sympathetic to this line of reasoning but I feel it's repeatedly shown to be self-defeating.
What chance have the proverbial good guys got if, even after _proving_ some modicum of good will, people will nonetheless condescend to any attempt to influence bad/wildcard actors? It feels great to tell someone they 'should've known better', but I'm convinced that's basically void of cautionary utility.
This seems misleading inasmuch as your correspondents aren't all on the same mail servers.
Yes, correspondence between you and Build-A-Bear, and between you and your local terrorist cell, are unencrypted individually. But Build-A-Bear presumably doesn't know about your correspondence with the cell, and the latter presumably has some interest in not sharing organizational data access with the former.
I suppose you do have to trust that Proton isn't served a directive to snoop on your correspondence in transit with other providers. But that's still a much better position than leaving all of your historical data unencrypted at rest.
It seems like a very uphill battle.