I'd hope that with embedded DSLs we can get closer. People are already doing lots of it in Haskell, including domains you listed.
Of course there is always the risk that one invents an "inner language" with poorer semantics and tools.
Haskell is a pretty good host for DSLs. But if you want to go lower level than Haskell, you have to essentially write compilers for your embedded DSL rather than the usual interpreters.
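To make the interpreter-vs-compiler distinction concrete, here is a minimal sketch (toy names, not any particular library): a deep embedding represents DSL programs as a data type, so the same term can either be interpreted in Haskell or "compiled" to lower-level code.

```haskell
{-# LANGUAGE GADTs #-}

-- A tiny deeply embedded expression DSL.
data Expr where
  Lit :: Int  -> Expr
  Add :: Expr -> Expr -> Expr
  Mul :: Expr -> Expr -> Expr

-- The usual route: an interpreter running at Haskell's level.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- The lower-level route: a toy "compiler" that emits a C-like
-- expression string instead of executing the term.
compileC :: Expr -> String
compileC (Lit n)   = show n
compileC (Add a b) = "(" ++ compileC a ++ " + " ++ compileC b ++ ")"
compileC (Mul a b) = "(" ++ compileC a ++ " * " ++ compileC b ++ ")"
```

For example, with `e = Add (Lit 1) (Mul (Lit 2) (Lit 3))`, `eval e` gives `7`, while `compileC e` gives the string `"(1 + (2 * 3))"` — same embedded program, two back ends.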
And of course, Haskell's type system is not endlessly flexible (yet..). E.g. Haskell still struggles to express relational programming or linear types / uniqueness types.
Yes the low-level DSLs tend to become their own compilers. But the good thing is that they as a side-effect also have an API, so they can hopefully be reused for new DSLs.
Interoperability of different DSLs does not necessarily follow though, unfortunately...
Yes. And of course, you still need to write a decent compiler to produce decent code.
The situation is similar to Lisp macros: yes, you can implement Prolog in Common Lisp in a few lines, but no, it won't be a fully featured and fast production system, unless you actually put in the work. (Paul Graham's 'On Lisp' makes these excellent points in the chapter on the Prolog interpreter.)
Of course, you might want to go all the way to dependent typing. I think one of Isabelle or Idris actually compiles to 'low-level' languages like Haskell by default?
The main benefit I would like to see in Haskell is totality / termination of programs by default, hiding Turing completeness behind something like unsafePerformCompute. Similarly, we could split IO into IOReadWrite and IOReadOnly.
The former would be the same as the old IO; the latter's actions could depend on the environment but would not be allowed to influence it (or, weaker: would at least require idempotence?)---thus allowing more scope for optimization, and for human understanding when reading code.
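The IOReadOnly half of that split can already be approximated today with a newtype whose constructor a real module would keep private (a sketch under assumptions: `IOReadOnly` is the name from the proposal above, while `runReadOnly`, `readEnv`, and `readFileRO` are hypothetical helpers, not an existing API):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import System.Environment (lookupEnv)

-- Sketch: a wrapper over IO that, if its constructor were hidden
-- behind a module boundary, could only be built from the read-style
-- actions below. Writes would stay in full IO (IOReadWrite).
newtype IOReadOnly a = IOReadOnly (IO a)
  deriving (Functor, Applicative, Monad)

-- Actions that observe the environment without changing it.
readEnv :: String -> IOReadOnly (Maybe String)
readEnv = IOReadOnly . lookupEnv

readFileRO :: FilePath -> IOReadOnly String
readFileRO = IOReadOnly . readFile

-- Embedding back into full IO; the reverse direction (wrapping an
-- arbitrary IO action) would deliberately not be exported.
runReadOnly :: IOReadOnly a -> IO a
runReadOnly (IOReadOnly io) = io

main :: IO ()
main = do
  path <- runReadOnly (readEnv "PATH")
  print (maybe 0 length path)
```

This only enforces the discipline by module hygiene, not by the type system proper; the point of putting it in the language would be that the compiler could then exploit read-only-ness (reordering, caching) rather than merely trusting the programmer.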