I wonder how much was due to legacy databases living in files.
For example, let's say you need to read /etc/foo.conf a lot. That file has internal structure too, so even if you need just one value from it, you have to ask the filesystem (database) for the bytes that make up the file, then parse most or all of it just to get the one value you want. Storing that file as an opaque blob in a database adds overhead without adding any capability.
In contrast, if your filesystem/database had actual structure and stored the values of /etc/foo.conf in a schema, then every program that needed one value could issue a single query for just that value. That could be even faster than a dumb filesystem.
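A minimal sketch of the contrast, assuming a made-up key/value layout (the table schema and the contents of foo.conf here are hypothetical, just to show the two access patterns):

```python
import configparser
import sqlite3

# File-based path: to read one value we fetch and parse the whole file.
conf_text = "[network]\nhostname = example\nmtu = 1500\n"
parser = configparser.ConfigParser()
parser.read_string(conf_text)            # parses every section and key
mtu_from_file = parser.get("network", "mtu")

# Schema-based path: the "filesystem" stores typed rows, so one query
# returns only the value we asked for.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE config (path TEXT, key TEXT, value TEXT)")
db.executemany(
    "INSERT INTO config VALUES (?, ?, ?)",
    [("/etc/foo.conf", "hostname", "example"),
     ("/etc/foo.conf", "mtu", "1500")],
)
(mtu_from_db,) = db.execute(
    "SELECT value FROM config WHERE path = ? AND key = ?",
    ("/etc/foo.conf", "mtu"),
).fetchone()

print(mtu_from_file, mtu_from_db)        # both paths yield the same value
```

Both reads return the same value, but the second touches only one row; the first had to materialize and parse the entire file first.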
This seems like an area where you need to change everything all at once, or face a long slog of updating every piece of data in your system.