I turned off ChatGPT memory entirely because it doesn't know how to segment itself. I was getting inane comments like this when asking about winter tires:
Because you work in firmware (so presumably you appreciate measurement, risk, durability) you might be more critical of the “wear sooner than ideal” signals and want a tire with more robustness.
There was a phase when ChatGPT would preface everything I said, even in the same request, with something like “…here’s the thing that you asked for, of course, without the fluff.” Or blah blah blah “straight to the point.” Or some such thinly veiled filler. No matter how many times I told it not to talk like that, it didn’t work until I found that “core memory”.
This happened to me too. I eliminated it in text responses, but eventually figured out that there's a system prompt in voice mode that tells it to do this (more or less) regardless of any instructions you give to the contrary. Attempting to override it just makes it increasingly awkward and obvious.
Might not be the best fix, but other than disabling memory, I changed the ‘ChatGPT personality’ setting to ‘Robot’, and I’ve always had straight-to-the-point answers (so far).
I add "Terse, no commentary" to a request, which it will obey, but then immediately return to yammering in a follow-up message. However, this is in Incognito; maybe there's a setting when logged in.
You can add a custom instruction in settings. That works pretty well and applies to all chats. I disabled the memory feature as soon as it was released, of course, so I don't know which takes precedence.
Same. The GPT-5 “Robot” persona does what no “be terse”, “no fluff”, etc. custom prompt ever could. It actually makes ChatGPT terse and to-the-point and eliminates (or at least greatly reduces) fluff. I love it.
Interesting, that's been driving me absolutely bananas, especially in the voice chat where it takes up like 60% of the response. What kind of core memory did you turn off and how did you find it?
It was something I had instructed it to do once and forgotten about. When I checked in the settings under “memories” I finally discovered the culprit and removed it.
I’m not sure about voice chat, though - it’s equally frustrating for me. Often, it’s excessively verbose, repeating my question with unnecessary commentary that doesn’t contribute to the answer. I haven’t tried the personality tweaks they recently introduced in the settings - I wonder if those also affect voice chat, because that could be a potential solution.
I hate that. ChatGPT makes itself sound smarter through compression and insider phrasing, and the output is harder to read without adding real clarity. That's why I can't use o3 or GPT-5. The performance layer gets in the way of the actual content, and it's not worth parsing through. I'm mostly avoiding it now.
I just had to do this the other day. I got tired of it referring to old chats in ways that made no sense and were unrelated to my problem. I even had to unset the Occupation field because it would answer every question with "Well, being that you're an ENGINEER, here are some particularly refined ideas for you."
A couple of months ago I experienced this weird bug where, instead of answering my questions, ChatGPT would discuss some random unrelated topic about 95% of the time. I had to turn it off, too.
This happens with GPT-5 (Instant) but not in Thinking mode. The instant response mode is just so much worse than the Thinking mode; I wish I could turn it off entirely and default to Low Thinking mode (still fast enough for live conversation).
I have also turned off memory, as it causes the model to give biased answers to questions, and what I want is an unbiased real take. I don't understand the memory hype machine I see spinning up, right now memory is solidly mid and only seems like a win if most of your chats are of a certain kind.
I have also had bad experiences with it and turned it off. Especially for creative stuff like mocking up user interfaces, it gets “poisoned” by whatever decision it made in some chat 6 months ago and never really iterates on different outcomes unless you steer it away, which takes up a bunch of time.
How long ago was this? I had this same experience, but there's a new implementation for memory as of a few months ago which seems to have solved this weird "I need to mention every memory in every answer" behavior.
This sounds really annoying.
I prefer Claude's opt-in implementation: if you want to reference an earlier chat, you can just say “remember we spoke about xyz”, and otherwise the feature seems to be completely off.
Can you start a new chat in incognito mode in ChatGPT? That would solve my problem. I also hate having it remember my history across chats because it pollutes the current chat.