I never found that to be true, and moreover, the scenarios involving nonstandard features tended to be the ones I was most worried about breaking.
There is also the problem of the in-memory database behaving differently when given the same SQL, so your test might make it look like everything works while a bug crops up in production.
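For example (my own sketch, not from the comment above): SQLite's LIKE is case-insensitive for ASCII by default, while Postgres's LIKE is case-sensitive, so the same query can pass a test against in-memory SQLite and miss rows in production.

    import sqlite3

    # Illustration of a well-known divergence: LIKE is ASCII-case-insensitive on SQLite.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('Alice')")

    row = conn.execute("SELECT name FROM users WHERE name LIKE 'alice'").fetchone()
    print(row)  # ('Alice',) on SQLite; the same LIKE matches nothing on Postgres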
> There is also the problem of the in-memory database behaving differently when given the same SQL, so your test might make it look like everything works while a bug crops up in production.
You've horribly messed up the architecture of your application if that is a problem. (Or you've misunderstood the previous comments.)
Postgres and sqlite/in-memory databases just behave differently from each other sometimes. Knowing this fact doesn't mean you've messed up your architecture; it means you have some understanding of how these databases work.
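Another illustration (my own, not from the thread): SQLite's type affinity will happily store a string in an INTEGER column, while Postgres rejects the same INSERT, so a constraint you rely on in production is silently absent in the test database.

    import sqlite3

    # SQLite's type affinity accepts this INSERT; Postgres raises an error for it.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (quantity INTEGER)")
    conn.execute("INSERT INTO orders VALUES ('not a number')")  # succeeds on SQLite

    print(conn.execute("SELECT quantity FROM orders").fetchone())  # ('not a number',)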
I'd say that in-memory/not-in-memory isn't the big difference - it's whether your database is in-process or not. Even with a database running on the same node but in a different process, connected to over a Unix socket, the context switches alone lead to very different performance characteristics. Actually going over the network obviously changes more. It's very easy to miss antipatterns like N+1 queries when you test on sqlite but run against a shared database in prod.
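A minimal sketch of the N+1 shape (hypothetical schema, not from the comment): against in-process SQLite the per-row queries cost almost nothing, so a test never flags them; against a networked Postgres each one is a full round trip.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    """)

    # N+1: one query for the list, then one query per row instead of a single JOIN.
    authors = conn.execute("SELECT id, name FROM authors").fetchall()
    for author_id, _name in authors:
        conn.execute("SELECT title FROM books WHERE author_id = ?", (author_id,)).fetchall()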
This is irrelevant for unit tests. Performance testing doesn't make sense in a build environment; you need to do it in an environment close to production, and that's a completely different test automation scope.
You can often see this stuff even in test workloads. But even if you disregard that kind of issue, you still have things like needing to integrate networked database connections into e.g. event loops, which you don't really need to do for something like sqlite.
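A rough sketch of what that integration looks like, assuming asyncpg as the driver and a hypothetical DSN; the stdlib sqlite3 equivalent is just a synchronous in-process call with none of this plumbing.

    import asyncio
    import asyncpg  # assumed async Postgres driver

    async def fetch_users(dsn: str):
        # Connection setup and every query are awaitable network round trips
        # that have to live inside the event loop.
        conn = await asyncpg.connect(dsn)
        try:
            return await conn.fetch("SELECT id, name FROM users")
        finally:
            await conn.close()

    # asyncio.run(fetch_users("postgresql://app@db.internal/app"))  # hypothetical DSN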
> There is also the problem of the in-memory database behaving differently when given the same SQL, so your test might make it look like everything works while a bug crops up in production.
Realism matters.