Hacker News

I'm pretty sure there's already a system to transpile COBOL to Java without resorting to LLMs.


Heirloom Computing, where I am CTO, does this using transpilers, with 100% automated transpilation. Using LLMs for an entirely deterministic domain borders on the insane. This is just marketing BS, but we get asked about it, and what our plan is to counter it, all the time. Explaining that using Gen-AI and LLMs for what is a well-understood compiler/transpiler problem that is already solved just seems to be too difficult for some people to grasp.


It's almost sad. Watson defeated Ken Jennings at Jeopardy 12 years ago, and today IBM are nowhere in the AI race. They bet the farm on exactly the right domain, ahead of the competition, and still failed.


IBM are very good at doing that.


In fairness, I imagine an LLM could maybe transpile to more idiomatic code. For example, when you transpile FORTRAN to C, you get a load of +1s and -1s everywhere to deal with FORTRAN's 1-based indexing. An LLM could avoid that.

But I agree, it doesn't make sense to risk bugs just for that.


You could do it with a trivial C macro instead.


I have the pleasure of supporting a codebase transpiled from RPG (System i) to Java. Shit that's come up: almost everything is a global. State machines are used for CRUD logic that is implicit in the RPG runtime but explicit in the Java codebase. Magic constants all over the place. Magic blobs of screen-configuration mapping. Transpiling and then directly supporting the transpiled codebase is basically masochism.


The classic HN "trivial" :-D


Judging by the branding, this is just an attempt to capitalize on the mindshare around LLMs and GPT. Recall that about 5-8 years ago they tried to sell the notion of huge cost savings from replacing humans with the Jeopardy champion, and tech executives ate it up for a while.


Well, they essentially had the same idea as OpenAI with GPT; they just failed to really build it because transformers hadn't been invented yet.

But their business cases were similar to what we see now with LLMs.


I was thinking the same. It looks to me like a much safer process could be:

* use some kind of deterministic transpilation that produces ugly code but is guaranteed to reproduce the existing behavior

* add tests to cover all those behaviors

* refactor the ugly code

From my experience with Copilot, I'd guess that an LLM could help a lot with steps 2 and 3, but I wouldn't trust it at all with step 1.



