Some corrections/nitpicks on your weather forecasting example:
* there are most definitely some overarching equations that explain it all. We just can't solve them analytically. But we use them to write the code.
* when you say "one processor per cube": if you mean cube = grid cell you're way off. You want to have at least ~ 10 000 grid cells per processor to have any level of performance.
* it is definitely not quantum mechanics that makes weather hard to predict. It's what's popularly called "chaos", or mathematically, "exponential sensitivity to variations in initial data".
* getting more performance for this type of application is not about getting more cores, it's about getting more memory bandwidth. That's why e.g. GPUs are useless for speeding up weather forecasting.
* funfact: weather simulations today are not running any faster than in 1994; it's at about 15 minutes per simulation. The increases in computing power are used to a) increase grid resolution and b) run more replicas.
* The last point is what makes modern weather simulations tick. You don't run one simulation, you run 1000, each with slightly different initial data, and you analyse the ensemble results.
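The chaos and ensemble points can be illustrated with a toy model. This is the Lorenz-63 system (a standard demonstration of sensitive dependence, not an actual weather code); the step size, step count, and perturbation size below are just illustrative choices:

```python
# Toy illustration (not a weather model): the Lorenz-63 system shows
# exponential sensitivity to initial data, which is why modern forecasts
# are run as ensembles of slightly perturbed initial conditions.
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 equations.
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def trajectory(x0, steps=50000):
    # Integrate from a given initial x; y and z fixed for simplicity.
    x, y, z = x0, 1.0, 1.05
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

a = trajectory(1.0)          # reference run
b = trajectory(1.0 + 1e-8)   # perturbed by one part in 10^8
# After enough model time the two runs decorrelate completely,
# even though they started essentially identical.
print(a)
print(b)
```

An ensemble forecast is, in essence, this experiment run a thousand times: many perturbed copies, with statistics read off the spread of outcomes.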
> * there are most definitely some overarching equations that explain it all.
Not so, not for Earth's atmosphere, the topic I was discussing. There are first principles, but no overarching equations, analytically expressible or not. Prove me wrong -- show me the equation that tracks arctic temperature, rainfall, and snow cover, including the feedback effects that join them all, as an equation.
> * when you say "one processor per cube": if you mean cube = grid cell you're way off. You want to have at least ~ 10 000 grid cells per processor to have any level of performance.
It was part of a simple explanation, but in the future, in the name of increased throughput and as processor costs continue to fall, it will be literally true. I say this on a day when the Raspberry Pi Zero is so much in demand that I can't find one for sale.
> * it is definitely not quantum mechanics that makes weather hard to predict. It's what's popularly called "chaos", or mathematically, "exponential sensitivity to variations in initial data".
Both play a part. Quantum mechanics is thought to be the final obstacle to long-term weather forecasting, and chaos theory (extreme sensitivity to initial conditions) is the mechanism by which small initial causes lead to large effects.
Your last three points weren't corrections, so no need for comment.
> show me the equation that tracks arctic temperature, rainfall, and snow cover, including the feedback effects that join them all, as an equation.
The reader is referred to "Chapter IV: The Governing Equations" of Richardson's classical book "Weather Prediction by Numerical Process" [1].
> It was part of a simple explanation, but in the future, in the name of increased throughput and as processor costs continue to fall, it will be literally true.
No. You didn't read what I wrote. Unless some radical paradigm shift occurs in both hardware and algorithm design, strong scaling is never going to pay off once you get less than about 10 000 grid cells per CPU. Core cost is not the issue; power usage over the cluster lifetime is already a huge cost. Memory bandwidth is the issue. Linpack (peak float) performance is so irrelevant even management has started to ignore it.
> Quantum mechanics is thought to be the final obstacle to long-term weather forecasting
[By whom?][Citation needed]. Of course QM is behind how atoms and molecules behave, but that doesn't mean any QM result will ever improve weather prediction. And, why stop at quantum mechanics? Why not invoke quarks and gluons and the Higgs field if you're going all the way down to fundamental particles?
>> show me the equation that tracks arctic temperature, rainfall, and snow cover, including the feedback effects that join them all, as an equation.
> The reader is referred to "Chapter IV: The Governing Equations" of Richardson's classical book "Weather Prediction by Numerical Process" [1].
I will say this one more time: "As an equation". There is no equation that describes the system I described, there are only algorithms that cannot be expressed in closed form. The fact that they cannot be expressed or solved in closed form is revealed by the language of your reference: "by Numerical Process".
Numerical algorithms are used -- must be used -- for systems that don't have a closed-form solution, i.e. an equation that describes the system. Weather and atmospheric physics are in that category. My specific example -- arctic temperature, rainfall and snow cover -- represents a system that we can't even model with any reliability, much less hope to express as an equation.
Another example, one that surprises many people, is the fact that there is no equation able to describe an orbital system with more than two masses. Those systems must also be solved numerically. (https://en.wikipedia.org/wiki/Three-body_problem)
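The three-body point can be made concrete with a short numerical integrator. This is a minimal sketch in toy units (G = 1, unit masses, planar motion), using a leapfrog step and the well-known figure-eight initial conditions purely for illustration:

```python
# Toy sketch: the general three-body problem has no closed-form orbit,
# so positions are advanced step by step. Planar case, G = 1, unit masses.
def accelerations(pos):
    # Pairwise inverse-square gravitational accelerations.
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += dx / r3
            acc[i][1] += dy / r3
    return acc

def step(pos, vel, dt=1e-3):
    # Leapfrog (kick-drift-kick): a symplectic integrator.
    acc = accelerations(pos)
    vel = [[v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1]] for v, a in zip(vel, acc)]
    pos = [[p[0] + dt * v[0], p[1] + dt * v[1]] for p, v in zip(pos, vel)]
    acc = accelerations(pos)
    vel = [[v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1]] for v, a in zip(vel, acc)]
    return pos, vel

# Figure-eight initial conditions (approximate published values).
pos = [[-0.97000436, 0.24308753], [0.97000436, -0.24308753], [0.0, 0.0]]
vel = [[0.46620369, 0.43236573], [0.46620369, 0.43236573], [-0.93240737, -0.86473146]]
for _ in range(1000):
    pos, vel = step(pos, vel)
```

Every position the loop produces comes from repeated local updates, not from evaluating any formula for the orbit.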
> ... but that doesn't mean any QM result will ever improve weather prediction
So? I made the exact opposite claim -- that the effect of QM on weather systems may prevent accurate forecasts past a certain time duration.
>> It was part of a simple explanation, but in the future, in the name of increased throughput and as processor costs continue to fall, it will be literally true.
> No. You didn't read what I wrote.
Who cares what you wrote? You tried to say that 10,000 cells per processor is required for reasonable throughput. This is already false, and it will become more false in the future.
>> Quantum mechanics is thought to be the final obstacle to long-term weather forecasting
> [By whom?][Citation needed].
I didn't make an evidentiary claim, I expressed a commonly heard opinion, so I don't have to provide a citation.
> Why not invoke quarks and gluons and the Higgs field if you're going all the way down to fundamental particles?
The evidentiary burden is yours to try to support the idea that weather isn't affected by fundamental particles.
So the terminology you are using is what confuses me. You first said "there is no equation describing the system", but now you're expanding this to mean "the equation describing the system has no closed form solution". These are two very different things. Take your three-body problem example. That wiki page lists a set of coupled second-order ODEs that completely describes the system. Yes, we must solve it numerically in the general case. The system is still described by an equation (or three), don't you agree?
> You tried to say that 10,000 cells per processor is required for reasonable throughput. This is already false, and it will become more false in the future.
Can you show me a strong scaling plot for any PDE solver that shows good scaling significantly beyond 10k DOFs per processor?
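A back-of-envelope sketch of why that threshold exists. This is a toy cost model with assumed constants (per-cell compute time, per-halo-cell transfer time, message latency are made-up but plausible orders of magnitude), not a measurement: local work shrinks linearly with rank count, but the halo-exchange surface only shrinks like n^(2/3), plus a fixed latency per step.

```python
# Toy strong-scaling model (illustrative constants, not measurements):
# local work scales like cells-per-rank n, halo exchange like n^(2/3)
# plus a fixed message latency, so efficiency collapses at small n.
def step_time(n, t_cell=1e-8, t_halo=1e-7, t_lat=1e-5):
    # n: grid cells per rank; all times in seconds (assumed constants).
    compute = n * t_cell                       # local stencil updates
    comm = t_lat + t_halo * n ** (2.0 / 3.0)   # halo exchange
    return compute + comm

def efficiency(n):
    # Parallel efficiency relative to pure compute.
    return n * 1e-8 / step_time(n)

for n in (10**6, 10**5, 10**4, 10**3, 10**2):
    print(f"{n:>8} cells/rank: efficiency {efficiency(n):.2f}")
```

With these (assumed) constants, efficiency is healthy at a million cells per rank, mediocre around ten thousand, and dismal below that, which is the qualitative shape of real strong-scaling curves for stencil-type PDE solvers.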
> the effect of QM on weather systems may prevent accurate forecasts past a certain time duration.
No, and there are two reasons why, one practical and one fundamental. Practical first: The key here is that "exponential sensitivity to initial conditions" means you have to have precise measurements of the weather at close to the scale where QM is significant before QM can have an effect. To even think about approaching a description of that precision would require measurements of wind, temperature etc. at a grid resolution of much less than 1 mm. Leaving the huge practical problems with a measurement like that aside, if you were to measure with an (insufficient) 1 mm grid just air velocity and temperature for the Continental US, the data storage required would be 10^15 Terabytes of data for a single point in time. This is 1 000 000 000 times the total storage capacity of the largest supercomputer in the world. For a single instant, and what you want is a time series spanning many days. And you're not even beginning to approach the regime where QM becomes important; that would require a storage capacity 10^18 times larger than this already absurd figure. We're talking about 10^33 Terabytes of data at each instant in time!
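The storage figure above is easy to check with back-of-envelope arithmetic. The area, atmosphere depth, and bytes-per-cell values below are assumptions for the estimate (roughly 8 million km² of land, a 10 km column, four 4-byte fields per cell):

```python
# Back-of-envelope check of the 1 mm grid storage estimate.
# Assumed inputs: Continental US area ~8e12 m^2, 10 km atmosphere depth,
# 1 mm cells, four float32 fields (3 velocity components + temperature).
area = 8e12          # m^2, Continental US (approximate)
height = 1e4         # m, depth of atmosphere considered
cell = 1e-3          # m, 1 mm grid spacing
cells = area * height / cell**3
bytes_per_cell = 4 * 4
total_tb = cells * bytes_per_cell / 1e12
print(f"{total_tb:.1e} TB per snapshot")   # on the order of 10^15 TB
```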
The other and more fundamental objection to your assertion is the vast discrepancy between the Kolmogorov length scale, i.e. the smallest scale of variations in the flow, and the scale of QM. The Kolmogorov length scale for atmospheric motion lies in the range 0.1 to 10 mm. At scales much smaller than this, such as in QM, the flow is locally uniform everywhere.
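The quoted range follows from the standard formula for the Kolmogorov length, eta = (nu^3 / epsilon)^(1/4). The dissipation-rate values swept below are assumed, representative atmospheric magnitudes, not measurements:

```python
# Kolmogorov length scale: eta = (nu^3 / epsilon)^(1/4).
# nu: kinematic viscosity of air; epsilon: turbulent dissipation rate.
nu = 1.5e-5  # m^2/s, air near surface conditions
for eps in (1e-6, 1e-4, 1e-2, 1.0):  # m^2/s^3, assumed representative span
    eta = (nu**3 / eps) ** 0.25
    print(f"eps={eps:g}: eta = {eta * 1000:.2f} mm")
```

Across that span of dissipation rates, eta lands in roughly the 0.1 to 10 mm range quoted above, many orders of magnitude above atomic scales.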
> So the terminology you are using is what confuses me. You first said "there is no equation describing the system", but now you're expanding this to mean "the equation describing the system has no closed form solution".
There is no equation that describes that system. Which part of this is confusing you? A numerical algorithm is not an equation. My other example, easier to grasp, was the three-body problem -- the existence of a numerical solution doesn't mean there's an equation that describes a three- (or more) body orbit, quite the contrary (it has been proven that no such equation can exist) -- such orbits must be solved numerically, and there is no overarching equation, only an algorithm.
The presence of an algorithm doesn't suggest that there's an equation behind it. Here's another example -- the integral of the error function used in statistics. It's central to statistical calculations; there is no closed form (i.e. no equation), consequently it must be, and is, solved numerically everywhere. This is just one of hundreds of practical problems in many disciplines for which there is no equation, only an algorithm.
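A minimal sketch of that point: erf(x) = (2/sqrt(pi)) * the integral of exp(-t^2) from 0 to x has no elementary antiderivative, so it is evaluated by a numerical procedure. Composite Simpson's rule is used here purely as one illustrative choice of quadrature:

```python
import math

# The error function has no closed-form antiderivative; it must be
# evaluated numerically. Here: composite Simpson's rule on [0, x].
def erf_numeric(x, n=1000):
    # n subintervals (must be even for Simpson's rule).
    h = x / n
    f = lambda t: math.exp(-t * t)
    s = f(0.0) + f(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return (2.0 / math.sqrt(math.pi)) * s * h / 3.0

print(erf_numeric(1.0))   # close to math.erf(1.0) ≈ 0.8427
```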
We can locate/identify prime numbers with reasonable efficiency. Does this mean there's an equation to locate prime numbers? Well, no, there isn't -- there's an algorithm (several, actually).
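One such algorithm, sketched minimally: the sieve of Eratosthenes, a procedure (not a formula) that finds every prime up to a bound.

```python
# Sieve of Eratosthenes: an algorithm, not a closed-form formula,
# for locating all primes up to n.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False         # cross out multiples of p
    return [p for p, is_p in enumerate(sieve) if is_p]

print(primes_up_to(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```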
We can compute square roots with reasonable efficiency. Does this mean there's an equation that produces a square root for a given argument? As Isaac Newton (and many others) discovered, no, there isn't -- there's an algorithm, a sequential process that ends when a suitable level of accuracy has been attained.
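That sequential process, sketched: Newton's iteration for square roots repeats an update rule until a chosen accuracy is reached, rather than evaluating a single expression once (the tolerance and starting guess below are arbitrary illustrative choices, for a > 0).

```python
# Newton's method for square roots: an iterative process that stops
# when a chosen accuracy is attained. Assumes a > 0.
def newton_sqrt(a, tol=1e-12):
    x = a if a >= 1 else 1.0     # any positive starting guess works
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)    # Newton update for f(x) = x^2 - a
    return x

print(newton_sqrt(2.0))   # ≈ 1.41421356...
```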
I could give hundreds of examples, but perhaps you will think a bit harder and arrive at this fact for yourself.
> To even think about approaching a description of that precision would require measurements of wind, temperature etc. at a grid resolution of much less than 1 mm.
Your argument is that, because we can't measure the atmosphere to the degree necessary to associate changes with the quantum realm, we therefore can rule it out as a cause. Science doesn't work that way. Remember that I didn't say it was so, I said it was a matter of active discussion among professionals, as it certainly is.
> At scales much smaller than this, such as in QM, the flow is locally uniform everywhere.
What an argument. It says that at larger length scales there is turbulence that prevents closed-form solutions (and in this connection everyone is waiting for a solution to the Navier-Stokes equations, which may ultimately be a pipe dream), but that as the length scale decreases, things smooth out and become uniform (I would have added "predictable" but you had the good sense not to make that claim). This contradicts everything we know about nature in modern times, and contradicts the single most important property of QM.
> Can you show me a strong scaling plot for any PDE solver that shows good scaling significantly beyond 10k DOFs per processor?
Would you like to make the argument that, as time passes and as processors become less expensive, faster and more numerous, any such argument isn't undermined by changing circumstances?
Makes the unsurprising argument that, as time passes and as processor costs fall, matrices are broken into more and more, smaller, parallel subsets in the name of rapid throughput (with appropriate graphics to demonstrate the point). The end result of that process should be obvious, and at the present time, 10,000 serial computations per processor is absurd -- this is not a realistic exploitation of a modern supercomputer. In reality, more processors would each be assigned fewer cells, because that produces a faster result. This is not a difficult concept to grasp.