I wasn't speaking in terms of actual relations (since files are a subclass of objects in my mind), but in terms of what inspired me.
That said, I disagree that files are a more powerful refinement. So much of our code is spent converting to and from files that it's a curse more than a blessing. The only really powerful thing about files is that there are tools on our current systems to manipulate them, but I don't think that has to stay that way. Why can't we grep over a collection object from the command line the way we do with a file now? Why can't we have good network-transparent objects (a key feature in Renraku)?
The file paradigm in general is tired, in my opinion. Streams just aren't a good mapping for the way we handle data.
Strongly disagree re streams; streams / data-flow oriented programming is extremely easy to parallelize, and isn't going away in the future for at least that reason.
It's trivial today to write a bash script using fork/join parallelism (`&` and `wait`, in bash) with streams and fifos for data transfer, needing only a little care with ordering to guarantee no concurrency bugs. I do it all the time.
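The pattern described above can be sketched in a few lines (a minimal example; the fifo names and producer commands are arbitrary):

```shell
#!/bin/bash
# Fork/join with fifos: two producers run in parallel ("fork" via &),
# the consumer reads both streams, and `wait` performs the join.
tmp=$(mktemp -d)
mkfifo "$tmp/a" "$tmp/b"

seq 1 3 > "$tmp/a" &      # producer 1, forked into the background
seq 4 6 > "$tmp/b" &      # producer 2, runs concurrently

# Reading the fifos in a fixed order keeps the merged output deterministic,
# no matter which producer finishes first.
cat "$tmp/a" "$tmp/b"     # prints 1 through 6, one per line
wait                      # join: block until both producers have exited
rm -r "$tmp"
```

The ordering guarantee comes entirely from the consumer reading the fifos in a fixed sequence; the producers themselves run with no coordination at all.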
Objects overemphasize a method-full interface. Files force one to use a single pair of methods, read and write, which are the fundamental units of message sending. Objects as implemented in languages like C# are a bastardization of message sending, where the optimization of the message call reduces its composability in the large. When your messages are small packets of data, abstracting over communicating processes gets a whole lot easier.
Perhaps the design should be "everything is a Resource" (à la REST) rather than "everything is an Object".
The beneficial aspect of "everything is a File" is the uniform interface: you can read a file, write to a file, seek to a particular position in a file (sometimes), and that's about it. It seems limiting, but that's what allows the huge number of interoperable tools to be built.
By going with "everything is an Object", there are no constraints on the interface. Every class of objects has its own set of methods, and tools need to be designed for specific classes/interfaces rather than for "everything". Interoperability will be lost.
Resources are like objects, but constrained to a uniform interface: their methods are GET, PUT, POST, DELETE, OPTIONS, and HEAD. Those are all the methods you need to manipulate individual objects and collections of objects. Of course, you'll need uniform identifiers (URLs) for the objects, and a uniform representation (or a set of standard representations).
This will give you network-transparent resources, assuming you use globally unique URIs. It also turns the OS into a generic Web Service. I'm not sure what the implications are of that, but it seems like it might be interesting to explore.
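As a rough sketch of what that uniform interface might feel like at the shell level (the `res_*` functions and the file-per-resource store are invented for illustration, not any real API):

```shell
#!/bin/sh
# Toy "resource" store: every kind of object is manipulated through the
# same four verbs, so a generic tool only needs to know this one interface.
STORE=$(mktemp -d)

res_put()    { printf '%s' "$2" > "$STORE/$1"; }   # PUT: create or replace
res_get()    { cat "$STORE/$1"; }                  # GET: fetch representation
res_head()   { if [ -e "$STORE/$1" ]; then echo 200; else echo 404; fi; }  # HEAD: status only
res_delete() { rm -f "$STORE/$1"; }                # DELETE: remove

res_put alice '{"name":"Alice"}'
res_get alice        # prints {"name":"Alice"}
res_head alice       # prints 200
res_delete alice
res_head alice       # prints 404
rm -r "$STORE"
```

Note that nothing here depends on what kind of object `alice` is; the same four verbs would work for a process, a device, or a collection, which is exactly the interoperability argument above.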
I've always loved the Perl idea of "If you want to treat everything as objects in Perl 6, Perl will help you do that. If you don't want to treat everything as objects, Perl will help you with that viewpoint as well." (from http://svn.pugscode.org/pugs/docs/Perl6/Spec/S01-overview.po...).
From a traditional OS's point of view this isn't very useful, because it's your job to implement the lowest-level capability of storing data. But what if you're already running inside an OS that provides all of that for you? Then you're free to implement all sorts of crazy abstractions that apply directly to even greater problems, like network transparency and distributed objects.
Now approach this the way PG has with Arc (http://paulgraham.com/core.html), by creating basic axioms for an object interface on top of the pre-existing facilities of a traditional OS. Build something that could run anywhere from a desktop to the cloud, and you have some intriguing problems to work on that current OSes cannot address.
I think there's a lesson to be learned from current database trends. It seems that no one is expecting a database to automatically achieve scalability anymore, so now we have all of those projects that provide a distributed interface to a run-of-the-mill RDBMS (I'm thinking of Amazon's Dynamo work).
So, yeah, throw out the file concept, and everything else that has a chapter title in your OS text book. Then you'll be able to find ways of working in new OS/language concepts and quickly arrive at great problems like the current concurrency terror and cloud computing.
I think the thing people miss about objects in this scenario is that they don't have to be used directly by everything. That is, if I have an object that contains a list of names, `grep' doesn't have to support grepping over my object, it just has to support grepping over a list. You can just call `grep myObject.names Phil'.
I don't think making objects less generic is a good idea, but rather we need to interact with them differently.
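The closest analogue today already works, because grep never needed a file in the first place, only a stream of lines; any value that can be rendered line-by-line is greppable (the `names` variable here is just a stand-in for the hypothetical `myObject.names`):

```shell
#!/bin/sh
# The "object" is a plain value holding a list of names; grep consumes
# the rendered lines through a pipe and never touches a file.
names="Phil
Alice
Bob"
printf '%s\n' "$names" | grep Phil   # prints Phil
```

In other words, the tool's real contract is "a sequence of lines", and an object system only has to know how to render its collections into that contract.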
> By going with "everything is an Object", there are no constraints on the interface. Every class of objects has its own set of methods, and tools need to be designed for specific classes/interfaces rather than for "everything". Interoperability will be lost.
Not if every object inherits from a base interface that defines basic operations.
While I think REST is great (and the only hope for some sanity in the future of the web), it doesn't seem fundamentally different from the "everything is a file" model, except for some historical limitations (like making navigation of the resource/file hierarchy extremely painful due to no standardized way to list resources/files).
ioctl was a mistake (a huge one, I might add) by people who didn't understand the "everything is a file" principle.
The original Unix from Bell Labs had no ioctl, Plan 9 has no ioctl, and the Linux people have long claimed to want to eventually get rid of all ioctls because of the problems they cause. But the inertia, and all the people who seem incapable of writing interfaces without ioctl, mean it will be ages before they get there.