> If you can not arbitrarily compute data on the target
> machine ASLR is still completely useful
Cache-timing and other side-channel attacks can be mounted remotely. RSA implementations have been broken via side channels across Ethernet, and even via thermal noise.
That said, that ASLR could theoretically be broken remotely hardly implies that ASLR isn't an effective mitigation.[1] If you can quantify the rate of leakage across the channel you could, for example, throttle requests to make an attack impractical. Just be careful not to underestimate the rate of bit leakage, or to forget that even a few leaked bits can make a brute-force trial-and-error attack practical.
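To make the throttling argument concrete, here's a back-of-envelope sketch. The entropy and leak-rate figures are assumptions chosen for illustration, not measurements of any real channel:

```python
# Back-of-envelope: all numbers here are hypothetical assumptions.
# Suppose ASLR provides 28 bits of address entropy and the side
# channel leaks ~0.01 bits of it per request.
entropy_bits = 28
leak_per_request = 0.01  # assumed channel rate

requests_needed = entropy_bits / leak_per_request  # 2800 requests

# Unthrottled at 1000 req/s the layout falls in seconds:
print(requests_needed / 1000, "seconds at 1000 req/s")   # 2.8 seconds

# Throttled to 1 req/s it takes ~47 minutes -- slower, but still
# recoverable, which is why periodic re-randomization matters too.
print(requests_needed / 1, "seconds at 1 req/s")         # 2800 seconds
```

Note also the brute-force point: leaking just 8 of the 28 bits shrinks the guess space from 2^28 to 2^20, well within reach of trial-and-error against a service that restarts crashed workers.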
This is why, arguably, the new best practice in writing service daemons (see recent OpenBSD work) is to fork+exec when restarting a service, even when restarting it statefully, as opposed to merely resetting internal process state. Daemons and their slave processes should also be restarted periodically and automatically.
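The reason fork+exec matters here: a plain fork inherits the parent's address-space layout byte for byte, so a leak against one worker is a leak against all of them, while exec builds a fresh image and re-rolls ASLR. A minimal Linux-oriented sketch (using a libc symbol's address as a stand-in for "the layout"):

```python
import ctypes
import os
import subprocess
import sys

def libc_symbol_addr() -> int:
    # Address of printf in this process's libc mapping.
    return ctypes.cast(ctypes.CDLL(None).printf, ctypes.c_void_p).value

parent = libc_symbol_addr()

# Plain fork: the child is a copy, so the address is identical.
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    os.write(w, str(libc_symbol_addr()).encode())
    os._exit(0)
os.waitpid(pid, 0)
forked = int(os.read(r, 64))

# fork+exec: exec constructs a fresh process image, so with ASLR
# enabled the libc mapping typically lands somewhere new.
code = ("import ctypes;"
        "print(ctypes.cast(ctypes.CDLL(None).printf, ctypes.c_void_p).value)")
execd = int(subprocess.check_output([sys.executable, "-c", code]))

print("same after fork:", forked == parent)      # True
print("same after fork+exec:", execd == parent)  # usually False with ASLR on
```

Resetting internal process state is the moral equivalent of the plain fork: whatever addresses the attacker has already learned remain valid across the "restart".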
This can be difficult to do, especially if you want to avoid dropping existing, long-lived connections. But when writing new software it's not too bad, especially if you farm out sensitive work (e.g. key signing, complex object parsing) to subprocesses that can be spawned and reaped independently and at a faster rate.
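One way to sketch the farming-out pattern: run each sensitive operation in a short-lived child, so a compromise only exposes that child's freshly randomized address space, and the layout changes on every request. The JSON-parsing stand-in and the helper name are illustrative assumptions, not a reference design:

```python
import json
import subprocess
import sys

def parse_untrusted(blob: bytes) -> dict:
    # Hypothetical sketch: parse complex untrusted input in a fresh
    # subprocess (fork+exec under the hood via subprocess), then reap it.
    proc = subprocess.run(
        [sys.executable, "-c",
         "import sys, json; print(json.dumps(json.loads(sys.stdin.buffer.read())))"],
        input=blob, capture_output=True, timeout=5)
    if proc.returncode != 0:
        raise ValueError("parse failed in sandboxed child")
    return json.loads(proc.stdout)

print(parse_untrusted(b'{"user": "alice"}'))  # {'user': 'alice'}
```

The per-request process cost is real, but for genuinely sensitive work (key signing, parsing hostile formats) it buys both privilege separation and continuous re-randomization.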
[1] We shouldn't discount the distinction between targeted and untargeted attacks. Currently untargeted attacks don't typically rely on breaking ASLR via timing attacks, and even when they do we can expect the _cost_ of mass-scale timing attacks to be significantly greater. Basically, there's no reason to believe ASLR is a completely useless mitigation against remote exploits; quite the contrary. Useless for local exploits? Maybe, especially considering that the Linux and Windows kernels are so riddled with exploits.