If no one is posting CVEs that affect these old Ubuntu versions, then Canonical doesn't have to fix them. I realize that's not your point, but it almost certainly is a part of Canonical's business plan for setting the cost of this feature.
The Pro subscription isn't free, and clearly Canonical thinks it will see enough uptake on old versions to justify the engineering spend. The market will soon tell them whether they're right, and it will be interesting to watch. So far it seems clear they have enough Pro customers to believe expanding it is profitable.
PXE is awesome, especially if you combine it with systemd's UKI mechanism and its EFI stub. You can load a single file via TFTP or HTTP(S) and boot into a read-only (or ramdisk-only) full Linux system. Most off-the-shelf distributions can be made to work this way with a bit of effort. A very usable Debian system is a few hundred MB.
You can extend this with secure boot (using your own keys) to sign the entire UKI file, so your firmware will authenticate the full "disk" image that it boots into.
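For the curious, a minimal sketch of that build-and-sign step using systemd's `ukify` tool (available in systemd 253+). All the paths, the kernel command line, and the key/cert filenames below are placeholders, not details from any particular setup:

```python
import subprocess

KERNEL = "/boot/vmlinuz"      # placeholder kernel image
INITRD = "/boot/initrd.img"   # placeholder initrd carrying the full rootfs
CMDLINE = "console=ttyS0"     # placeholder kernel command line
KEY = "secureboot.key"        # your own secure boot signing key
CERT = "secureboot.crt"
OUTPUT = "netboot.efi"        # the single file you serve via TFTP/HTTP(S)

# ukify bundles kernel + initrd + cmdline into one UKI and signs the whole
# thing, so the firmware authenticates the entire image it boots into.
subprocess.run(
    [
        "ukify", "build",
        f"--linux={KERNEL}",
        f"--initrd={INITRD}",
        f"--cmdline={CMDLINE}",
        f"--secureboot-private-key={KEY}",
        f"--secureboot-certificate={CERT}",
        f"--output={OUTPUT}",
    ],
    check=True,
)
```

Point your DHCP/PXE server's boot-file option at the resulting netboot.efi, and the firmware will fetch it in one shot and (with your keys enrolled) verify the signature before booting.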
They have a capacity to "learn"; it's just WAY MORE INVOLVED than how humans learn.
With a human, you give them feedback or advice, and generally by the 2nd or 3rd time the same kind of thing happens they can figure it out and improve. With an LLM, you have to set up a convoluted (and potentially expensive, in both money and electrical power) system to provide MANY MORE examples of how to improve, via fine-tuning or other training actions.
> With an LLM, you have to set up a convoluted (and potentially expensive, in both money and electrical power) system to provide MANY MORE examples of how to improve, via fine-tuning or other training actions.
The only way an AI model can "learn" is during model creation, after which it is fixed. Any "instructions", data, or "corrections" you give the model are just part of the context window.
Fine-tuning is additional training, on specific things, of an existing model. It happens after a model already exists, in order to better suit the model to specific situations or types of interactions. It does not deal with context during inference; it actually modifies the weights within the model.
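To make the distinction concrete, here's a minimal, hypothetical sketch using PyTorch and Hugging Face transformers (gpt2 is just a convenient stand-in model, and the example strings are made up). A prompt only influences one forward pass; a fine-tuning step permanently changes the weights:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any causal LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# "Context window" path: the correction affects only this one forward pass.
prompt = tok("Correction: the endpoint returns JSON, not XML.", return_tensors="pt")
with torch.no_grad():
    model(**prompt)  # the weights are exactly the same afterwards

# Fine-tuning path: a gradient step modifies the weights themselves.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
batch = tok("The endpoint returns JSON.", return_tensors="pt")
out = model(**batch, labels=batch["input_ids"])  # causal-LM loss
out.loss.backward()
optimizer.step()  # the model is now permanently different
```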
Depending on your definition of "learn", you can also use something akin to ChatGPT's Memory feature. When you teach it something, just have it take notes on how to do that thing and include its notes in the system prompt for next time. Much cheaper than fine-tuning. But still obviously far less efficient and effective than human learning.
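A rough sketch of that pattern, with `call_model` as a hypothetical stand-in for whatever chat-completion API you use (the notes file layout is also made up):

```python
import json
import pathlib

NOTES = pathlib.Path("notes.json")  # persistent store for "lessons learned"

def load_notes() -> list[str]:
    return json.loads(NOTES.read_text()) if NOTES.exists() else []

def save_note(note: str) -> None:
    """Append a lesson, e.g. distilled from a correction the user just gave."""
    notes = load_notes()
    notes.append(note)
    NOTES.write_text(json.dumps(notes, indent=2))

def build_system_prompt() -> str:
    """Prepend accumulated notes so past lessons survive across sessions."""
    lines = "\n".join(f"- {n}" for n in load_notes())
    return f"You are a helpful assistant.\nNotes from earlier sessions:\n{lines}"

def call_model(system: str, user: str) -> str:
    raise NotImplementedError("hypothetical: wire up your LLM provider here")

# Usage: record a correction once, and every later call sees it.
save_note("User's build uses CMake, not Meson.")
# reply = call_model(system=build_system_prompt(), user="Fix my build script.")
```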
I think it's reasonable to say that different approaches to learning form some kind of spectrum, but that contemporary fine-tuning isn't on that spectrum at all.
In the US, lots of prescriptions work the same way. But some prescriptions and some over-the-counter (OTC) medicines require presenting a legal ID to purchase, because of a variety of laws.
Blood pressure prescriptions often need no ID; OTC meds that are ingredients for making meth do need an ID.
My HP 35s sits between the 2 halves of my split keyboard. I use it multiple times per day. It's just faster for quick calculations as the UI is very well done and it turns on instantly.
Seems to make sense: if Poolside is selling a product that requires (or strongly recommends) that Poolside customers have on-prem or private-cloud compute capability in order to fully integrate all the tools, then right now that compute need is likely going to be filled with NVIDIA hardware. So even if NVIDIA isn't selling a ton of hardware directly to Poolside, NVIDIA is likely going to sell a whole bunch of hardware to businesses using Poolside's solutions.
As much as the whole AI thing feels like a bubble, this feels less round-trip-like to me than some of the other recent deals which have been in the press.
It will be interesting to see if Qualcomm ends up deploying an open-source, standards-compliant UEFI implementation. That would be a big deal in my eyes: the Raspi firmware is a real pain to deal with, as it's closed source and the documentation is fragmented and hard to follow if you're doing anything beyond the recommended approaches (for example, implementing an A/B update scheme with proper failed-boot fallback on a Raspi CM5 is not straightforward without also resorting to u-boot).
That display orientation and resolution is likely a carry-over from a phone project. It was probably cheap and easy for them to integrate an existing IP block from an older phone SoC.
The catch is that Ollama's cloud is likely to raise prices and/or tighten usage limits soon. The free tier has more restrictions than the $20/mo tier. They claim not to store anything (https://ollama.com/cloud), but you'll have to clarify what you mean by "private": your model likely runs on hardware shared with other users.
I agree, "free" usage could come with tradeoffs. But for side projects and experiments, to access open-source models like gpt-oss that my machine can't run, I think I'll accept it.
My experience with the free tier and qwen3-coder cloud is that the hourly limit gets you about 250k input tokens, and then your usage is paused until the hour is up. Enough to try something very small.