Hacker News | noone_youknow's comments

It isn’t, actually.

Thank you! A project like this is all about compounding small wins for sure, just building one thing on top of the previous thing until (hopefully) one day it’s a complete thing :D

I’m glad you found some inspiration and really appreciate your kind comment :)


Haha that quote felt pretty much obligatory :D Non-POSIX is definitely keeping things fun, letting me explore different ideas without having to fit into an existing interface design :)

While not basing the OS on POSIX is good, if you ever want to use it for more than a hobby, i.e. as your main personal computer, you will also need a translation layer for the Linux syscalls, which would let you compile and run existing open-source programs, or even run existing binaries.

Porting the programs you are most interested in from Linux or *BSD to your own OS is the best approach, but it is necessarily slow, and the work of porting rarely used applications may not be worthwhile.

An alternative to a syscall translation layer is for your OS to act as a hypervisor for Linux or other OS VMs. However, this would have lower performance than using your IPC-based OS natively.


I agree that for it to be generally useful, it’ll need to be easy to port existing software to. But I don’t think I’ll need a translation layer at the syscall level (unless I want to expose a Linux-compatible ABI as you suggest, which is totally a non-goal).

You can get surprisingly far in this area with a decent libc implementation - once you have that porting a lot of things becomes possible. There will always be harder problems (e.g. anything that hard-depends on fork) but with enough work pretty much anything in user-space should be possible to port eventually.

I’m using newlib for libc, with a custom Anos-specific libgloss that mostly talks to SYSTEM via IPC to get stuff done, and uses syscalls where needed (and where a process has the appropriate capabilities). I’m filling out that low-level interface as I go, and will be using ports of existing BSD or Linux software to drive that implementation :)


I agree that, instead of implementing the Linux syscalls, implementing just those libc functions that are OS-dependent is sufficient for a large fraction of existing programs.

However, this means implementing a POSIX layer, much as Windows NT does. So it is a solution further from a "non-POSIX operating system".

While a large part of the Linux syscalls have been motivated by the necessities of implementing POSIX, there are also significant differences, and the Linux applications that require the best performance must deviate significantly from POSIX, so they are no longer based on libc for I/O.


There would likely be value in a POSIX layer as you describe, at least from the point of view of porting existing programs. And perhaps it will happen, if I get far enough along, but it will never be a kernel feature - it would be an entirely userspace concept intended only to speed up porting and building out said user space.

Perhaps in the fullness of time essential software can be rewritten to Anos’ “native API”, or perhaps that’s a pipe dream and not worth the work, but the option will be there.


Thanks! I haven't thought much about the long term, to be honest. Right now my immediate goal is to get enough USB HID support to make it interactive, with the medium-term goal of porting enough software for it to be self-hosting.

> We live in an age of AI overinvestment. I would reserve judgement until they prove they actually have something.

Haha yes, that's a very fair comment!


There are ideas in here from various other kernels - some of the capability stuff is inspired by (but simpler than) seL4, and the message-passing IPC and how I'll avoid priority inversion is based on ideas from QNX. Generally, as a learning process, I've tried to keep as open a mind as possible, without trying to reinvent all the wheels at once...

SYSTEM has a tiny arch-specific assembly trampoline which just sets up the initial stack pointer for new processes that it creates and makes the jump, but other than that it's source-compatible across architectures.

The platform detail extraction isn't yet complete, so device management on non-ACPI platforms isn't finished, but the idea is that the abstraction will be enough that drivers for (at least) MMIO devices will be trivially portable.


That's a good way to look at it, and on reflection I feel the same way.

It's certainly useful _to me_ and has helped me really nail down concepts I thought I already understood, but it turns out I didn't.

I just hope that, in an age where it feels like code, and maybe even deep technical knowledge, have diminishing value, projects like this don't become completely anachronistic.


Ahem, well, that's embarrassing! :D

This is really great work! Always impressed to see hobby OS projects that get this far, well done.

That said, I’m once again reminded that we sorely need some updated resources for aspiring OS developers in 2025. Targeting 32-bit x86 and legacy devices that haven’t been “the norm” for decades suggests to me a heavy influence from resources like the osdev wiki which, while occasionally useful, are increasingly outdated in the modern world and lead to many questionable choices early on.

I have come to believe (through multiple iterations of my own OS projects) that there’s more value in largely ignoring resources such as osdev and focusing instead on first-principles design, correct layering, and building based on modern platforms (be that x86_64 or something else) and ignoring legacy devices like the PIT and PS2 etc.

I just wish we had good introductory documentation resources to reflect that, and that outdated resources weren’t overwhelmingly surfaced by search engines and now AI “summaries”.

None of the above is intended to take away from OPs achievement, which is fantastic, or from the work done over the years by the osdev community, who I’m sure largely do the best they can with what they have.


Yes. This stuff needs to be retargeted at x86_64 (on EFI, too) and ARM.

Of course, also supporting i386 with legacy BIOS is OK, but it doesn't really get into the meat of what computers are doing now when you power them on.


Trying to write about x86_64 (UEFI, ASLR, message-signaled interrupts, LAPIC, ACPI, IOMMU, etc.) and how to do platform init to get ready for an OS to execute. It's a mess. It's also hard to find someone who can proofread it and give meaningful feedback. (Not many ppl enjoy this stuff, I suppose :()


It is hard, and things are messy in some areas. But in my experience it’s less hard than writing a lot of the existing documentation would’ve been (because good datasheets and documentation are more available now) and actually less messy (with some exceptions) than things were back in the days of the legacy hardware most existing wikis etc focus on.

Finding people with the knowledge, time and willingness to proof-read is also hard - but surely not insurmountable if we collectively decide it’s an endeavour we want to pursue.


While I agree this might be a fun resource and useful example code for various aspects of legacy x86 interfacing, I would urge anyone who hopes to actually get into OS development to ignore this (and in fact every other tutorial I’ve ever seen, including those hosted on the popular sites).

For all the reasons stated in the link from the README [1] and agreed by the author, this project should not be followed if one wants to gain an understanding of the design and implementation of operating systems for modern systems. Following it will likely lead only to another abandoned “hello world plus shell” that runs only in emulation of decades old hardware.

My advice is get the datasheets and programmers’ manuals (which are largely free) and use those to find ways to implement your own ideas.

[1] https://github.com/cfenollosa/os-tutorial/issues/269


People interested in a "read the manual and code it up on real hardware"-type guide should take a look at Stanford's CS140E[1] repo! Students write a bare metal OS for a Raspberry Pi Zero W (ARMv6) in a series of labs, and we open source each year's code.

Disclaimer: I'm on the teaching team

[1]https://github.com/dddrrreee/cs140e-25win


Do you have the course online? Looks like a bunch of files


There aren't any 1hr+ lectures, just some readings (selected from the manuals in `docs/`) and a bit of exposition from the professor before diving into the lab. Lots of "as needed" assistance


Interesting. Now I am wondering if I could build an OS for the Zero. I have five of them sitting in my drawer


You absolutely can, and should! :)


Not really my cup of tea, but some random feedback: take a look at nand2tetris, as it's really "user friendly" and, if I'm not mistaken, was even made into a game on Steam.


It is user friendly, and it's astounding how much they managed to pack into a single semester. Highly recommended!

However, it's arguably too idealized and predetermined. I think you could get all the way through building the computer in https://nandgame.com/ without really learning anything beyond basic logic design as puzzle solving, but computer organization, as they call it in EE classes, is more about designing the puzzles than solving them. Even there, most of what you learn is sort of wrong.

I haven't worked through the software part, but it looks like it suffers from the same kind of problems. IIRC there's nothing about virtual memory, filesystems, race conditions, deadly embraces, interprocess communication, scheduling, or security.

It's great but it's maybe kind of an appetizer.


IMO this is kind of the tradeoff. In 140E we do touch on virtual memory (w/ coherency handling on our specific ARM core), FAT32 "from scratch", etc. but it comes at the expense of prerequisites. There is a lot of effort to "minify" the labs to their core lesson, but there is an inevitable amount of complexity that can't (or shouldn't) be erased.


disclaimer: I really have no idea about OSes, but hey...

Maybe it's a matter of marketing the product and managing expectations, but many of these projects are perfectly OK being "legacy and obsolete" just for the sake of simplicity when introducing basic concepts.

Let's take two random examples.

(1) "Let's create 3D graphics from scratch." It's quite easy to grab "Graphics Gems" and create a comprehensive tutorial on a software renderer. Sure, it won't be practical, and sure, it will likely end at Phong shading, but for those wanting to understand how 3D models are translated into pixels on screen, it's way more approachable than studying papers on Nanite.

(2) "Let's create a browser from scratch." It's been widely discussed that creating a new browser today would be complete madness (yet there's Ladybird!), but shrinking the scope, even if the result couldn't run most modern websites, would be an interesting journey for someone who's interested in how things work.

PS. Ages ago I built a Flash webpage that was supposed to mimic a desktop computer, for an ad campaign for a TV show. The webpage acted as the main character's personal computer, and people could lurk in it between episodes to read his emails, check his browser history, etc. I took it as an opportunity to learn about OS architecture and spent an ungodly amount of unpaid overtime making it as close to Win 3.1 running on DOS as possible. Was it really an OS? Of course not, but it was a learning opportunity to get a grasp of certain things, and it was extremely rewarding to have an easter egg with a command.com you could launch to interact with the system.

Would I ever try to build a real OS? Hell no, I'm not as smart as lcamtuf, inviting a couple of friends for drinks and starting Argante. :)


To pick on graphics, since I'm more familiar with that domain, the problem isn't that this tutorial is about software rasterization, it's that the tutorial is a raytracer that doesn't do shading, textures, shadows, or any geometry but spheres, and spends most of its word count talking about implementing the trig functions on fixed-point numbers instead of just using math.h functions on IEEE floats.


Well put! This succinctly sums up the crux of my argument in my other comments.


great counterpoint :)


That simulated personal computer for a TV character actually sounds really cool. I love the idea that the environment would change from week to week with each new episode. What was the TV show?


Obviously you'll need to read the manuals to get much done, but these kinds of tutorials are complementary.

The issue with x86_64 is that you need to understand some of the legacy "warts" to actually use 64-bit mode. The CPU does not start in "long mode" - you have to gradually enable certain features to get yourself into it. Getting into 32-bit protected mode is prerequisite knowledge for getting into long mode. I recall there was some effort by Intel to resolve some of this friction by breaking backward compatibility, but I'm not sure where that's at.

The reason most hobby OS projects die has more to do with drivers. While it's trivial to support VGA and serial ports, a modern machine needs USB3+, SATA, PCIe, GPU drivers, Wi-Fi and so forth. The effort for all these drivers dwarfs getting a basic kernel up and running. The few novel operating systems that support more modern hardware tend to reuse Linux drivers via a compatible API over a hardware abstraction layer - which forces certain design constraints on the kernel, such as (at least partial) POSIX compatibility.


Even taking only x86_64 as an example, going from real to long modes is primarily of concern to those writing firmware these days - a modern operating system will take over from UEFI or a bootloader (itself usually a UEFI executable). The details of enabling A20, setting up segmentation and the GDT, loading sectors via BIOS etc are of course historically interesting (which is fine if that’s the goal!) but just aren’t that useful today.

The primary issue with most tutorials that I’ve seen is they don’t, when completed, leave one in a position of understanding “what’s next” in developing a usable system. Sticking with x86_64, those following will of course have set up a basic GDT, and even a bare-bones TSS, but won’t have much understanding of why they’ve done this, or what they’ll need to do next to support syscall, say, or to properly lay out interrupt stacks for long mode.

By focusing mainly on the minutiae of legacy initialisation (which nobody needs) and racing toward “bang for my buck” interactive features, the tutorials tend to leave those completing them with a patchy, outdated understanding of the basics and a simple baremetal program that is in no way architected as a good base upon which to continue toward building a usable OS kernel.


You can just copy and paste the initialization code. It only runs once and there's very little of value to learn from it (unless you're into retrocomputing).


I don't even know where to _begin_ writing an operating system.

If I wanted to learn just so I have a concept of what an OS does, what would you recommend?

I'm not trying to write operating systems per se. I'm trying to become a better developer by understanding operating systems.


Go do the xv6 labs from the MIT 6.828 course, like yesterday. Leave all textbooks aside, even though there are quite a few good ones; forget all the GitHub tutorials with patchy support and blogs that promise you pie in the sky.

The good folks at MIT were gracious enough to make it available for free, free as in free beer.

I did this course over ~3 months and learnt immeasurably more than reading any blog, tutorials or textbook. There’s broad coverage of topics like virtual memory, trap processing, how device drivers work (high-level) etc that are core to any modern OS.

Most of all, you get feedback about your implementations in the form of tests which can help guide you if you have a working or effective solution.

10/10 highly recommended.


Tanenbaum's textbook is highly readable, comprehensive (surveys every major known solution to each major problem), and mostly correct. xv6 may be a smaller, more old-fashioned, and more practical approach. RISC-V makes the usually hairy and convoluted issues of paging and virtual memory seem simple. QEMU's GDB server, OpenOCD, JTAG, and SWD can greatly reduce the amount of time you waste wondering why things won't boot. Sigrok/Pulseview may greatly speed up your device driver debugging. But I haven't written an operating system beyond some simple cooperative task-switching code, so take this with a grain of salt.


Funny question since you bring up JTAG and RISC-V -- do you have a cheapish RISC-V device you'd recommend that actually exposes its JTAG? The Milk-V Duo S, Milk-V Jupiter, and Pine64 Oz64 all seem not to expose one; IIRC, the Jupiter even wires TDO as an input (on the other side of a logic level shifter)...


That doesn't seem off-topic at all to me!

I don't know what to recommend there. I have no relevant experience, because all my RISC-V hardware leaves unimplemented the privileged ISA, which is the part that RISC-V makes so much simpler. The unprivileged ISA is okay, but it's nothing to write home about, unless you want to implement a CPU instead of an OS.


If you want to practice, try: https://littleosbook.github.io/

I am an occasional uni TA that teaches OS and I use littleosbook as the main reference for my own project guidebook.

It's a decent warm-up project for undergraduates, giving them a first-hand experience programming in a freestanding x86 32-bit environment.


Start with a simple program loader and file system, like DOS.


I suggest the opposite. Never DOS, particularly never MS-DOS, and never x86 or anything in that family. They are an absolute horror show of pointless legacy problems on a horrifyingly obsolete platform. Practically everything you'd learn is useless and ugly.

Start with an RTOS on a microcontroller. You'll see what the difference is between a program or a library and a system that does context switching and multitasking. That's the critical jump and a very short one. Easy diversion to timers and interrupts and communication, serial ports, buses and buffers and real-time constraints. Plus, the real-world applications (and even job opportunities) are endless.


If you have that sort of disrespect for computing history, you will only destroy and reinvent things badly.


RISC-V is a clean-sheet design, and that should make it a good starting point (imho).

fwiw, xv6, the pedagogical OS, has migrated to it.

