One of the most important benefits of functional programming is that the compiler can figure out which operations can run in parallel and which can't. Otherwise, the programmer has to decide these things explicitly, which invites extra bugs (race conditions, for example).
The language doesn't matter, as long as its compiler can expect purely functional code and is written to take advantage of it.
Rust kind of succeeds in having it both ways: the programmer decides what runs in parallel, but the compiler won't allow data races.
I can't imagine any better scenario than just writing code that can be automatically parallelized, though.
So far as I know, all attempts at automatic parallelization have been extremely disappointing. Even with Haskell or other similarly functional code, you don't get much out of it. See for instance: http://stackoverflow.com/questions/15005670/why-is-there-no-... I've also seen other research, which I can't seem to dig up right now, suggesting that analysis of programs "in the wild" tends to find optimal speedups of less than 2x across whole programs.
It's a pity, really. But I wouldn't promise anybody anything about automatic parallelization, because even in lab conditions nobody has gotten it to work very well, so far as I know. It's a weird case: my intuition, like most other people's, says there ought to be a lot of opportunity here, but the evidence contradicts it.
If anyone has a solid link to contradictory evidence, I'd love to see it, but my impression is that research on this topic has mostly stopped because of how poorly it has gone.
Arguably any functional language does this even more automatically than Go does.