
The implementation proved to be unfortunate for compatibility reasons, and in hindsight that might be obvious, but the Mac was conceived and implemented when absolutely everything about every platform was proprietary/NIHed: keyboard layouts, connectors, expansion slots, mass storage connectors, operating systems, and file systems. Much of what became PC standards wasn't conceived as such, but rather as proprietary parts of a platform that just happened to be easily cloned. There was really no expectation of compatibility even between different models of computer from the same manufacturer.

I think it does a great disservice to the Macintosh Resource Manager to focus on its implementation atop a dual-fork file system. What it provided was a nice high-level interface for working with small pieces of data that hid a lot of memory and disk management from the programmer: you just ask for a resource by type and ID, and if it's in memory you get it right away; otherwise it gets fetched, including asking the user to swap in the right floppy if they'd swapped it out. AFAIR it even purged things from memory when they were no longer needed, with no action on the programmer's part - a big deal in 128K of RAM.

There were some bad things about the implementation: it's easy to corrupt the resources of a file, for instance, and there's a fairly small limit to the size and number of resources. But having a way to access structured data that's a core part of the platform can be a great thing. If anything, the Mac didn't take this far enough.



Sure, but you can do all of these things with the resources stored in a section of the binary, like Windows does. You don't need a special file system feature. I have never understood the rationale for this design.


But when you're (basically) one person designing both the filesystem and all the APIs together, and the idea of cross-platform compatibility hasn't really been invented yet, I can see how it looks like something that belongs in the filesystem layer rather than as a convention layered above it. I don't know how it works on Windows, but when you say "section" of the binary, that implies the files have some sort of structure? Is it part of the PE format, or how does it work? With the Resource Manager, any file can have resources, which means some file formats can be implemented using very simple OS APIs.


I'm not talking about cross-platform compatibility. It just seemed over-complicated.

Yes, on Windows it is part of the PE file, which is based on UNIX System V's COFF. These files have a list of sections used to distinguish between code, read-only static data, read/write static data, optional debugging information, etc. The resources are just another section with the data in a particular format (probably not dissimilar to the Mac resource fork) and Windows has APIs to conveniently access the data.

But one thing this can't do is attach resources to arbitrary file formats. Thank you, that's a use case I was unaware of.



