My experience is that each question I ask or point I make produces an answer that validates my thinking. After two or three iterations like that in a row, I end up distrusting everything.
This is a good point. Lately I have been experimenting with phrasing the question so that it believes I prefer the option I'm suggesting, when in truth I don't.
For example:
- I implement something.
- Then I ask it to review it and suggest alternatives, at which point it will likely say my solution is the best.
- Then I say something like "Isn't the other approach better for __reason__?", where that approach might not even be something it suggested.
And sometimes it actually comes back with valid points.
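In script form the loop is roughly the following (a rough sketch, assuming the official OpenAI Python client; the model name, file name, prompts, and the "event-driven" objection are just illustrative placeholders, not what I actually use):

```python
# Rough sketch of the "fake preference" review loop described above.
# Assumes the openai>=1.0 Python package; model, file, and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages):
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

my_solution = open("my_solution.py").read()  # hypothetical file

history = [
    {"role": "user",
     "content": "Review this implementation and suggest alternatives:\n\n" + my_solution},
]
first_review = ask(history)  # usually: "your solution is the best"
history.append({"role": "assistant", "content": first_review})

# Push back while pretending to prefer a different approach,
# even one the model never suggested, to see if real objections surface.
history.append({"role": "user",
                "content": "Isn't an event-driven approach better here, "
                           "since it avoids polling? I'd rather go that way."})
print(ask(history))
```

The trick is entirely in that last message: claiming a preference you don't hold and seeing whether the model defends the original solution on its merits or just folds.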
This is very true. It's constant insecurity for me. One thing that helps a little is asking it to search for sources to back up what it's saying, but Claude has hallucinated those as well. Perplexity seems to be good at staying true to sources, but I don't know how good it is at coding itself.
Which is why I read books and articles instead: the information in them exists independently of me. The LLM experience is like your reflection in a distorted mirror talking back to you.
Yes, this. It's the biggest problem and danger in my daily work with LLMs, and my entire working method with them is shaped around it. Instead of asking it for answers or solutions, I give it a line of thought or a logical chain, ask it to continue down that path, and force it to keep explaining its reasoning while I interject and keep introducing uncertainty. Suspicion is one of the most valuable things I need to make any progress. In the end it's a lot of work, and a great deal of reading and reasoning.
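The scaffolding for that loop looks roughly like this (a minimal sketch, again assuming an OpenAI-style chat client; the system prompt, model name, and the example premise are only illustrative):

```python
# Rough sketch of the "continue my reasoning, don't hand me answers" loop.
# Assumes the openai>=1.0 Python package; prompts and model are illustrative.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "system",
     "content": ("Do not give final answers or solutions. Continue the user's "
                 "line of reasoning step by step, state the assumptions each "
                 "step depends on, and flag where it could be wrong.")},
    {"role": "user",
     "content": ("Premise: our cache invalidation bug only appears under "
                 "concurrent writes. Continue this reasoning.")},
]

while True:
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = resp.choices[0].message.content
    print(reply)
    history.append({"role": "assistant", "content": reply})
    # Interject: push back with doubt instead of accepting the chain as-is.
    objection = input("your objection (empty line to stop) > ")
    if not objection:
        break
    history.append({"role": "user", "content": objection})
```

The point isn't the code, it's the shape of the conversation: the model never gets to close the argument, and every step has to survive an objection before the next one is allowed.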