Out of curiosity, why x86? Is it the preponderance of resources? The weird instruction format? The complexity of the boot sequence? Are you specifically trying to mimic DOS?
> A support for the ARM architecture (aarch) is coming soon too.
Wow! How do you support a DOS-like OS across multiple architectures when DOS itself is tightly tied to interactions among the program, the system code, and the architecture?
I have not looked at this project, but my guess would be: x86 is a widely available platform that, because of its history and relentless compatibility, contains a lot of legacy interfaces that make it possible to implement a very simple, thin-layered "DOS-like" OS without the need to parse device trees, set up MMUs, deal with complex buses like PCI(e), and so on.
It is much harder to bootstrap a simple OS on ARM, and it won't stay very simple unless you accept significantly more limitations than you would under x86. (For example, you can't do very much with the MMU off on ARM, and you also don't have convenient BIOS interfaces that let you, say, read a sector or wait for a keypress with just a few lines of assembly.)
The x86 arch is used because this system iteration derives from the first one, which relies on BIOS interrupts and inline assembly in Turbo C. I am not trying to mimic (MS-)DOS exclusively, but both systems are highly inspired by it.
IMO multiple archs could be supported, as the Rust compiler allows specifying the target architecture, so one would just select the specific target before building.
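As a sketch of what that workflow could look like (the bare-metal target triples here are illustrative; they are standard Rust targets, but not necessarily the ones this project would use):

```shell
# Install the bare-metal (no-OS) targets for each architecture.
rustup target add x86_64-unknown-none
rustup target add aarch64-unknown-none

# Build the same kernel crate for whichever target you need.
cargo build --target x86_64-unknown-none
cargo build --target aarch64-unknown-none
```

The arch-specific parts (boot code, interrupt handling) would still need `#[cfg(target_arch = "...")]` gating or separate modules; the target flag only selects what the compiler emits.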
I honestly think it's going to take a decade to define this domain, and it's going to come with significant productivity costs. We need something like git, but for preventing LLMs from stabbing themselves in the face. At that point you can establish an actual workflow for unfucking agents when they inevitably fuck themselves. After some time and some battery of testing you can also automate this process. This will take time, but eventually, one day, you can have a tedious process of describing an application you want to use over and over again until it actually works... on some level, not guaranteed to be anything close to the quality of hand-crafted apps (which is in line with the transition from assembly to high-level languages, and now to whatever the fuck you want to call the katamari-damacy zombie that is the browser).
> And of course, now, in retrospect, the whole thing looks silly.
Private enterprise should be the last people on earth to be allowed to label themselves. I have many marketer friends I love, but I truly think the practice of trying to pimp businesses to rich individuals has been probably the biggest waste of human effort in history (outside of maybe carbon-capture efforts). We're just stuck with shitty brands, broken products, and stupid consumers who think they're getting the best.
> I can’t believe riding a horse and carriage wouldn’t make you better at riding a horse.
Surely you mean "would"? Riding a horse and carriage doesn't imply any ability to ride a horse, but the reverse relation would actually make sense, as you'd already have historical, experiential, intimate knowledge of a horse despite no contemporaneous, immediate physical contact.
Similarly, already knowing what you want to write would make you more proficient at operating a chatbot to produce what you want to write faster—but telling a chatbot a vague sense of the meaning you want to communicate wouldn't make you better at communicating. How would you communicate with the chatbot what you want if you never developed the ability to articulate what you want by learning to write?
EDIT: I sort of understand what you might be getting at—you can learn to write by using a chatbot if you mimic the chatbot like the chatbot mimics humans—but I'd still prefer humans learn directly from humans rather than rephrased by some corporate middle-man with unknown quality and zero liability.
Yes, I'm acknowledging a lack of skill transfer, but also that there are new ways of working, so I sarcastically implied the article can't see the forest for the trees and is missing the big picture. A horse and carriage is very useful for lots of things; a horse is more specialised. I'm getting at an analogy of technological generalisation and expansion, while logistics is not part of my argument. If you want to write a very good essay, and you're good at that, then do it manually. If you want to create scalable workflows and have 5 layers of agents interacting with each other collaboratively and adversarially, scouring the internet, news sites, and forums, and then sending investment suggestions to your mail every lunch, then that's a scale that's not possible with pen and paper, and so prompting has an expanded cause-and-effect cone.
> A horse and carriage is very useful for lots of things. A horse is more specialised.
You have that backwards. A horse and carriage is good for traveling on a road. If you have just the horse, however, you can travel on a road, travel offroad, pull a plow, ride into battle and trample evildoers, etc.
No, it's only half backwards. Because of the infrastructure there is scalability in the amount of work, so you're right in the phrasing, but the intention/idea matters more. The horse and carriage is a generalization of the horse's core value, and it increases that core value: the generalization becomes the general case, while the horse becomes more specialised, or is at least reduced to niches today like competitions and hobbies.
No, because of Poe's law only the author of the comment can confirm. But the analogy makes sense then:
"[Of course] writing an essay with chatgpt wouldn’t make you better at writing essays unassisted. Sure, a student wouldn’t want to practice the wrong way, but anyone else just wants to produce a good essay."