Oh, I was assuming that Eager was responding to klik99's question about how we could identify hallucinations in the output—round tripping doesn't help with that.
If what they're actually saying is that it's possible to train a model to low loss and then you just have to trust the results, yes, what you say makes sense.
I haven't found many places where I trust the results of an ML algorithm. I've found many places where they work astonishingly well 30-95% of the time, which is to say, they save me or others a bunch of time.
It's been years, but thinking back through things I've reverse-engineered before, having something that kinda works most of the time would still be super useful as a starting point.