Heck, I hope this doesn't result in a bug fork happening; it's already a PITA to deal with fixing bugs inherited from the OOo bug trackers or earlier.
You hit things like wanting to test against a file that used to be in a long-dead bug tracker.
I guess it might. I wouldn't plan on it without a very detailed survey though, to say the least. Whereas solar is definitely right there. (And you still have to worry about cooling either way.)
There are other substances that can be used for reactor coolant. Molten salt reactors are actually substantially more efficient than water-cooled reactors because they run at a higher operating temperature. You can also use liquid metal as coolant, such as lead or lead-bismuth eutectic.
From the perspective of PC building, I've always thought it would be neat if the CPU/storage/RAM could go on a card with a PCIe edge connector, and then that could be plugged into a "motherboard" that's basically just a PCIe switch fanning out to however many peripheral cards you have.
Maybe it's gimmicky, but I feel like you could get some interesting form factors with the CPU and GPU cards sitting back-to-back or side-by-side, and there would be more flexibility for how to make space for a large air cooler, or to reclaim that space if you've got an AIO.
I know some of this already happens with SFF builds that use a Mini-ITX motherboard + ribbon cable to the GPU, but it's always been a little awkward with Mini-ITX being a 170mm square, and high-end GPUs being only 137mm wide but up to 300mm in length.
Oh, going back to a backplane computer design? That could be cool, though I assumed we moved away from that model for electrical/signaling reasons. If we could make it work, it would be really cool to have a system that let you put in arbitrary processors, e.g. a box with 1 GPU and 2 CPU cards plugged in.
I believe PCIe is a leader/follower system (one root complex, the rest endpoints), so there'd probably be some issues with that unless the CPUs specifically knew they were sharing, or there was a way for the non-leader units to know they shouldn't try to control the bus.
If every one of n devices is directly connected to every other with Thunderbolt cables, each with its own dedicated set of PCIe lanes, you'd be limited to 1/(n-1) of the theoretical maximum bandwidth between any two devices, since each device's lanes get split across its n-1 links.
What you really want is for every device to be connected through a massive PCIe switch that allows PCIe lanes to be connected arbitrarily, so, e.g., a pair of EPYCs could communicate over 96 lanes with 32 lanes free to connect to peripheral devices.
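To put rough numbers on that, here's a sketch assuming 128 PCIe lanes per device (EPYC-like; the figures are purely illustrative):

    # Back-of-the-envelope: per-pair lane budget, full mesh vs. one big switch.
    # Assumes 128 PCIe lanes per device (EPYC-like); numbers are illustrative.
    lanes = 128

    for n in (2, 4, 8):
        # Full mesh: each device splits its lanes across its n-1 direct links.
        per_link = lanes // (n - 1)
        print(f"mesh, {n} devices: x{per_link} per pair ({per_link / lanes:.0%} of max)")

    # Switch: lanes go where they're needed, e.g. x96 CPU<->CPU, x32 to peripherals.
    print(f"switch: x96 CPU<->CPU, x{lanes - 96} lanes left for peripherals")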
There were also PC-compatible systems based around ISA backplanes. This was especially common for industrial computers, but Zenith/Heathkit made ISA-backplane-based systems for the business and consumer markets. I own a Zenith Z-160 luggable computer from 1984 which uses an 8-slot 8-bit ISA backplane. One slot is occupied by a CPU card which also has the keyboard connector. My system has 2 memory cards which each provide up to 320k, along with a serial and parallel port. Zenith sold a desktop version of this as the Z-150. They later released models based upon 16-bit ISA backplanes. I think, but am not sure off the top of my head, that the last CPU they produced a 16-bit card for was the 486.
This was (is?) done - certainly in some strange industrial computers, and I think others, where the "motherboard" was just the first board on the backplane.
The Transputer B008 series was also somewhat similar.
The RAM and CPU would still be on the same card together, and for the typical case of a single GPU it would just be an x16 link direct from one to the other.
For cases where there are other cards, yes, there would be more contention, but few expansion cards are able to saturate more than a lane or two. One lane of PCIe Gen5 is a whopping ~4 GB/s in each direction, so that theoretically handles a dual 10GbE NIC on its own.
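Quick sanity check on that claim (using 32 GT/s per Gen5 lane with 128b/130b encoding; this ignores packet/protocol overhead above the encoding layer):

    # Does a dual 10GbE NIC fit in a single PCIe Gen5 lane?
    gen5_gbit = 32 * 128 / 130   # ~31.5 Gbit/s usable per direction per lane
    gen5_gbyte = gen5_gbit / 8   # ~3.9 GB/s per direction
    nic_gbyte = 2 * 10 / 8       # dual 10GbE at line rate = 2.5 GB/s
    print(f"Gen5 x1: {gen5_gbyte:.2f} GB/s vs dual 10GbE: {nic_gbyte:.2f} GB/s")
    # 2.5 GB/s < 3.9 GB/s, so one lane covers it with headroom.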
That's what I was hoping Apple was going to do with a refreshed Mac Pro.
I had envisioned a smaller tower design with PCIe slots, and Apple developing and selling daughter cards that were basically just a redesigned MacBook Pro PCB but with a PCIe edge connector and power connector.
The way I see it, a user could start with a reasonably powerful base machine and then upgrade it over time, mixing and matching different daughter cards. A ten-year-old desktop is fine as a day-to-day driver; it just needs some fancy NPU to do fancy AI stuff.
This kind of architecture seems to make sense to me in an age where computers have a much longer usable lifespan and where so many features are integrated into the motherboard.
Now we have cables that include computers more powerful than an old mainframe. So if it pleases you, just think of all the tiny little daughter computers hooked up to your machine now.
Another possibility is that you tend to keep an eye on where your phone and laptop are; there have been some plane fires where someone dropped a phone into a seat and it ended up getting bent, but at least they noticed it fairly quickly.
(Will people know which direction the power flows - whether their USB-C power bank is charging from their phone, or their phone is charging from their power bank?)