Not everyone is doing coding interviews. Some people might struggle with a particular language due to a lack of muscle memory, yet can still dictate the logic in pseudocode and avoid pitfalls drawn from past experience. This sort of workflow is compatible with LLMs, assuming a sufficient background; otherwise, one can't recognize when the output diverges from one's intent.
I personally treat the LLM as a rubber duck. Often I reject its output. In other cases, I can accept it and refactor it into something even better. The name of the game is augmentation.