Heirloom Computing, where I am CTO, does this with transpilers and 100% automated transpilation. Using LLMs for an entirely deterministic domain borders on the insane. It's just marketing BS, but we get asked about it, and about what our plan is to counter it, all the time. Explaining that GenAI and LLMs are being pitched at a well-understood, already-solved compiler/transpiler problem just seems to be too difficult for some people to grasp.
It's almost sad. Watson defeated Ken Jennings at Jeopardy! 12 years ago, and today IBM is nowhere in the AI race. They bet the farm on exactly the right domain, ahead of the competition, and still failed.
In fairness, I imagine an LLM could maybe transpile to more idiomatic code. For example, when you transpile FORTRAN to C you get a load of +1s and -1s everywhere to deal with FORTRAN's 1-based indexing. An LLM could avoid that.
But I agree, it doesn't make sense to risk bugs just for that.
I have the pleasure of supporting a codebase transpiled from RPG (System i) to Java. Shit that's come up: almost everything is a global. State machines are used for CRUD logic that is implicit in the RPG runtime but explicit in the Java codebase. Magic constants all over the place. Magic blobs of screen-configuration mapping. Transpiling and then directly supporting the transpiled codebase is basically masochism.
Judging by the branding, this is just an attempt to capitalize on the mindshare around LLMs and GPT. Recall that about 5-8 years ago they tried to sell the notion of huge cost savings from replacing humans with the Jeopardy! champion, and tech executives ate it up for a while.