
I suppose this is as good a place as any to mention this. I've now met two different devs who complained about the weird responses from their LLM of choice, and it turned out they were using a single session for everything. From recipes for the night, presents for the wife and then into programming issues the next day.

Don't do that. The whole context is sent on queries to the LLM, so start a new chat for each topic. Or you'll start being told what your wife thinks about global variables and how to cook your Go.
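
For anyone wondering why this matters mechanically: chat APIs are stateless, and the client re-sends the entire message history with every request. A rough sketch (the client library and model name here are just illustrative assumptions):

    from openai import OpenAI

    client = OpenAI()
    history = []  # every message from this session, re-sent on every request

    def ask(user_message):
        history.append({"role": "user", "content": user_message})
        response = client.chat.completions.create(
            model="gpt-4o",    # assumed model name, purely illustrative
            messages=history,  # recipes, gift ideas and Go questions all ride along
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

A fresh chat simply starts with an empty history, which is the whole trick.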

I realise this sounds obvious to many people but it clearly wasn't to those guys so maybe it's not!





I know I sound like a snob, but I've had many moments with Gen AI tools over the years that made me wonder: what are these tools like for someone who doesn't know how LLMs work under the hood? It's probably completely bizarre. Apps like Cursor or ChatGPT would be incomprehensible to me as a user, I feel.

Using my parents as a reference, they just thought it was neat when I showed them GPT-4 years ago. My jaw was on the floor for weeks, but most regular folks I showed had a pretty "oh that's kinda neat" response.

Technology is already so insane and advanced that most people just take it as magic inside boxes, so nothing is surprising anymore. It's all equally incomprehensible already.


This mirrors my experience: the non-technical people in my life either shrugged and said 'oh yeah that's cool' or started pointing out gnarly edge cases where it didn't work perfectly. Meanwhile, as a techie, my mind was (and still is) spinning with the shock and joy of using natural human language to converse with a super-humanly adept machine.

I don't think the divide is between technical and non-technical people. HN is full of people who are weirdly, obstinately dismissive of LLMs (stochastic parrots, glorified autocompletes, AI slop, etc.). Personal anecdote: my father (85yo, humanistic culture) was astounded by the perfectly spot-on analysis Claude provided of a poetic text he had written. He was doubly astounded when he showed Claude's analysis to a close friend and the friend reacted with complete indifference, as if it were normal for computers to competently discuss poetry.

LLMs are an especially tough case, because the field of AI had to spend sixty years telling people that real AI was nothing like what you saw in the comics and movies; and now we have real AI that presents pretty much exactly like what you used to see in the comics and movies.

But it cannot think or mean anything; it's just a clever parrot, so it's a bit weird. I guess uncanny is the word. I use it as Google now, just to search for stuff that's hard to express with keywords.

99% of humans are mimics; they contribute essentially zero original thought across 75 years. Mimicry is more often an ideal optimization of nature (of which an LLM is part) than a flaw. Most of what you'll ever want an LLM to do is to be a highly effective parrot, not an original thinker. Origination as a process is extraordinarily expensive and wasteful (see: entrepreneurial failure rates).

How often do you need original thought from an LLM versus parrot thought? The extreme majority of all use cases globally will only ever need a parrot.


> clever parrot

Is it irony that you duckspeak this term? Are you a stochastically clever monkey to avoid using the standard cliche?

The thing I find most instructive about AI is that it unfortunately mimics the standard of thinking of many humans...


Try asking it a question you know has never been asked before. Is it parroting?

My parents reacted in just the same way and the lackluster response really took me by surprise.

Most non-tech people I've talked with don't care at all about LLMs.

They also are not impressed at all ("Okay, that's like Google and the internet").


Old people? I think it would be hard to find a lot of people under 20 who don't use ChatGPT daily. At least among those who are still studying.

People older than 25 or 30 maybe.

It would be funny if, in the end, most of the use came from students cheating at uni.


I wanted to reflect a bit on this.

I have a hard time imagining why non-tech people would find a use for LLMs. Say nothing in your life forces you to produce information (be it textual, pictorial, or anything information-related). Say your needs are focused on spending good times with friends or family, eating nice meals (home-cooked or at a restaurant), spending your money on furniture, rent, clothes, tools, and so on.

Why would you need an AI that produces information in an information-bloated world?

You've probably met someone who "fell in love with woodworking" or whatever after watching YouTube videos (that person probably built a chair, a table, or something similar). I don't think prompts like "Hi, I have these materials, what can I do with them?" produce more interesting results than just nerding out on the internet or in a library looking for references (on Japanese handcrafted furniture, vintage IKEA designs, old-school woodworking, ...). (Or maybe the LLM can give you a list of good reads, which is nice but a fairly limited and basic use.)

Agentic AI and more efficient/intelligent AIs are not very interesting for people like <wood lover> and are at best a proxy for information they could find elsewhere. Of course, not everyone is like <wood lover>; the majority of people don't even need to invest time in a "creative" hobby and will instead watch movies, play sports, socialize, go to museums, read books.

You could imagine AIs that write books, invent films, create artworks, talk with you, but I am pretty sure there is more to these activities than just "watch a movie" or "read a book". As someone who likes reading and watching movies, what I enjoy is following the evolution of the authors behind the works, understanding their posture toward their ancestors, their contemporaries, their own previous visions, and so on. I enjoy finding a movie "weird", "goofy" or "sublime" because I enjoy a small amount of parasociality with the authors, and I'm finally brought to say things like "Ahah, Lynch was such a weirdo when he shot Blue Velvet" (okay, maybe not that kind of bullying judgement, but you probably understand what I mean).

I think I would find it uninspiring to read an AI-written book, because I couldn't have this small parasocial experience. Maybe you could get me with music, but I still think there's a lot of activity in loving a song. I love Bach, but I'm pretty sure I also like Bach the character (from what I speculate based on the pieces I listen to). I imagine that guy in front of his keyboard, having the chance to live a -weird- moment of ecstasy when he produces the best lines of the Chaconne (if he were living in our times he would relisten to what he produced again and again, nodding to himself: "man, that's sick").

What could I experience from an LLM? "Here is the perfect novel I wrote specifically for you based on your tastes:". There would be no imaginary Bach I would like to drink a beer with, no testimony of a human reaching the state of mind in which you produce an absolute "hit" (in fact a highly relative one, but you need to lie to yourself about that).

All of this is highly personal, but I would be curious to know what others think.


This is a weird take. Basically no one is just a wood lover. In fact, basically no one is an expert or even decently knowledgeable in more than 0-2 areas. But life has hundreds of things everyone must participate in. Where does your wood lover shop? How does he find his movies? File taxes? Get travel ideas? And even a wood lover, after watching the 100500th niche woodworking video on YouTube, might have some questions. AI is the new, much better Google.

Re: books. Your imagination falters here too. I love sci-fi. I use voice AIs (even made one: https://apps.apple.com/app/apple-store/id6737482921?pt=12710... ). A couple of times when I was on a walk I had an idea for a weird sci-fi setting, and I would ask AI to generate a story in that setting and listen to it. It's interesting because you don't know what will actually happen to the characters or what the resolution will be. So it's fun to explore a few takes on it.


> Your imagination falters here too.

I think I just don't find what you described as interesting as you do. I tried AI dungeoning too, but I find it less interesting than with people, because I think I like people more than specific mechanisms of sociality. Also, in a sense, my brain is capable of producing surprising things, and when I'm writing a story as a hobby, I don't know what will actually happen to the characters or what the resolution will be, and it's very, very exciting!

> no one is an expert or even decently knowledgeable in more than 0-2 areas

I might be biased, and I don't want to show off, but there are some such people around here; let's say it's rare for people to be decently knowledgeable in more than 5 areas.

I am okay with what you said:

- AI is a better Google

But Google also became shit, and as far as I can remember, it was a pretty incredible tool before. If AI becomes what the old Google was for those people, then wouldn't you say, if you were them, that it's not very impressive and somewhat "like Google"?

edit: all judgements I made about "not interesting" do not mean "not impressive"

edit2: I think eventually AI will be capable of writing a book akin to Egan's Diaspora, and I would love to revisit what I said when that happens


What you described re books are preferences. I don't think the majority of people care about authors at all. So it might not work for you, but that's not a valid argument for why it won't work for most people. Therefore your reasoning there is flawed.

It also seems pretty obvious (did you not think the majority don't care about authors? I doubt it). So it stands to reason that some bias made you overlook that fact (as well as OpenAI MAUs and other such glaring data) when you were writing your statement above. If I were you I'd look hard into what that bias might be, because it could affect other, less directly related areas.


Yeah, I think a lot of us take knowing how LLMs work for granted. I did the fast.ai course a while back and then went off and played with vLLM and various LLMs, optimizing execution, tweaking params, etc. Then I moved on and started being a user. But knowing how they work has been a game changer for my team and me. And the context window is so obvious, but if you don't know what it is you're going to think AI sucks. Which now has me wondering: is this why everyone thinks AI sucks? Maybe Simon Willison should write about this. Simon?

> Is this why everyone thinks AI sucks?

Who's everyone? There are many, many people who think AI is great.

In reality, our contemporary AIs are (still) tools with glaring limitations. Some people overlook the limitations, or don't see them, and really hype them up. I guess the people who then take the hype at face value are those that think that AI sucks? I mean, they really do honestly suck in comparison to the hypest of hypes.


> I realise this sounds obvious to many people but it clearly wasn't to those guys so maybe it's not!

It's worse: Gemini (and ChatGPT, to a lesser extent) has started suggesting random follow-up topics when it concludes that a chat in a session has exhausted a topic. Well, when I say random, I mean it seems to be pulling them from the 'memory' of our other chats.

For a naive user without preconceived notions of how to use these tools, this guidance from the tools themselves would serve as a pretty big hint that they should intermingle their sessions.


For ChatGPT you can turn this memory off in settings and delete the ones it's already created.

I'm not complaining about the memory at all. I was complaining about the suggestion to continue with unrelated topics.

Problem is that by default ChatGPT has the “Reference chat history” option enabled in the Memory options. This causes any previous conversation to leak into the current one. Just creating a new conversation is not enough, you also need to disable that option.

Only your questions are in it though

Are you sure? What makes you think so?

This is also the default in Gemini, pretty sure; at least I remember turning it off. Makes no sense to me why this is the default.

> Makes no sense to me why this is the default.

You’re probably pretty far from the average user, who thinks “AI is so dumb” because it doesn’t remember what you told it yesterday.


I was thinking more people would be annoyed by it bringing up unrelated conversations, but thinking about it more, I'd say you're probably right that more people expect it to remember everything they say.

It’s not that it brings it up in unrelated conversations, it’s that it nudges related conversations in unwanted directions.

Mostly because they built the feature, so that implicitly means they think it's cool.

I recommend turning it off because it makes the models way more sycophantic and can drive them (or you) insane.


That seems like a terrible default. Unless they have a weighting system for different parts of context?

They do (or at least they have something that behaves like weighting).

This is why I love that ChatGPT added branching. Sometimes I end up going in some random direction in a thread about some code, and then I can go back and start a new branch from the point where the chat was still somewhat clean.

Also works really well when some of my questions may not have been worded correctly and ChatGPT has gone in a direction I don't want it to go. Branch, word my question better and get a better answer.


It's not at all obvious where to drop the context, though. Maybe it helps to have similar tasks in the context, maybe not. It did really, shockingly well on a historical HTR task I gave it, so I gave it another one, in some ways an easier one... Thought it wouldn't hurt to have text in a similar style in the context. But then it suddenly did very poorly.

Incidentally, one of the reasons I haven't gotten much into subscribing to these services, is that I always feel like they're triaging how many reasoning tokens to give me, or AB testing a different model... I never feel I can trust that I interact with the same model.


The models you interact with through the API (as opposed to chat UIs) are held stable and let you specify reasoning effort, so if you use a client that takes API keys, you might be able to solve both of those problems.
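
For example, with the OpenAI Python SDK the call looks roughly like this (model name and effort value are assumptions; only reasoning-capable models accept the parameter):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="o3-mini",          # assumed reasoning model; use whatever your provider offers
        reasoning_effort="high",  # "low" | "medium" | "high", fixed by you, not by an A/B test
        messages=[{"role": "user", "content": "Transcribe this 17th-century letter: ..."}],
    )
    print(response.choices[0].message.content)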

> Incidentally, one of the reasons I haven't gotten much into subscribing to these services, is that I always feel like they're triaging how many reasoning tokens to give me, or AB testing a different model... I never feel I can trust that I interact with the same model.

That's what websites have been doing for ages. Just like you can't step twice in the same river, you can't use the same version of Google Search twice, and never could.


I was listening to a podcast about people becoming obsessed and "in love" with an LLM like ChatGPT. Spouses were interviewed describing how mentally damaging it is to their partner and how their marriage/relationship is seriously at risk because of it. I couldn't believe no one had told these people to just go to the LLM and reset the context; that reverts the LLM back to a complete stranger. Granted, that would be pretty devastating to the person in "the relationship" with the LLM, since it wouldn't know them at all after that.

It’s the majestic, corrupting glory of having a loyal cadre of empowering yes men normally only available to the rich and powerful, now available to the normies.

That's not quite what the parent was talking about, which is: don't just use one giant long conversation. Resetting "memories" is a totally different thing (which might still be valuable to do occasionally, if they still let you).

Actually, it's kind of the same. LLMs don't have a "new memory" system. They're like the guy from Memento: context memory plus long-term memory from the training data, but no way to form new long-term memories from the context.

(Not addressed to parent comment, but the inevitable others: Yes, this is an analogy, I don't need to hear another halfwit lecture on how LLMs don't really think or have memories. Thank you.)


Context memory arguably is new memory, but because we abused the metaphor of "learning" for trained model weights, rather than something more like shaping inborn instinct, we have no fitting metaphor for what happens during the "lifetime" of an interaction with a model via its context window as the formation of skills/memories.

I constantly switch out, even when it's on the same topic. It starts forming its own 'beliefs and assumptions' and gets myopic. I also make use of the big three services in turn to attack ideas from multiple directions.

> beliefs and assumptions

Unfortunately during coding I have found many LLMs like to encode their beliefs and assumptions into comments; and even when they don't, they're unavoidably feeding them into the code. Then future sessions pick up on these.


YES! I've tried to provide instructions asking it to not leave comments at all.


Thing is, context management is NOT obvious to most users of these tools. I use agentic coding tools on a daily basis now and still struggle with keeping context focused and useful, usually relying on patterns such as memory banks and task-tracking documents to try to keep a log of things as I pop in and out of different agent contexts. Yet still, one false move and I've blown the window, leading to a "compression" which is utterly useless.

The tools need to figure out how to manage context for us. This isn't something we have to deal with when working with other humans: we reliably trust that other humans (for the most part) retain what they are told. Agentic use now is like training a teammate to do one thing, then taking them out back to shoot them in the head before starting to train another one. It's inefficient and taxing on the user.


My boss (great engineer) had been complaining about poor quality from his internal GitHub Copilot no matter the model or task. Turns out he never cleared the context. It was just the same conversation spread thin across nearly a dozen completely separate repositories, because they were all in his massive VS Code workspace at once.

This was earlier this year... So after that I started giving internal presentations on basic context management, best practices, etc. for our engineering team.


In my recent explorations [1] I noticed it got really stuck on the first thing I said in the chat, obsessively returning to it as a lens through which every new message had to be interpreted. Starting new sessions was very useful to get a fresh perspective. Like a human, an AI that works on a writing piece with you is too close to the work to see any flaw.

[1] https://renormalize.substack.com/p/on-renormalization


Interesting, I’ve noticed the same behavior with Gemini 3.0 but not with Claude, and Gemini 2.5 did not have this behavior. I wonder what the tuning is optimising for here.

Probably because the chat is named after that first message

That is interesting. I already knew about the idea that you’re not supposed to let the conversation drag on too much because its problem-solving performance might take a big hit, but it kind of makes me think that over time, people got away with using a single conversation for many different topics because of the big context windows.

Now I kind of wonder if I’m missing out by not continuing the conversation too much, or by not trying to use memory features.


It is annoying, though: when you start a new chat for each topic you tend to have to re-write context a lot. I use Gemini 3, which I understand doesn’t have as good a memory system as OpenAI's. Even on single-file programming stuff, after a few rounds of iteration I tend to hit its context limit (the thinking model), either because the answers degrade or it just throws the “oops something went wrong” error. OK, time to restart from scratch and paste in the latest iteration.

I don’t understand how agentic IDEs handle this either. Or maybe it’s easier - it just resends the entire codebase every time. But where to cut the chat history? It feels to me like every time you re-prompt a convo, it should first tell itself to summarize the existing context as bullets as its internal prompt rather than re-sending the entire context.


Agentic IDEs/extensions usually continue the conversation until the context gets close to 80% full, then do the compacting. With both Codex and Claude Code you can actually observe that happening.
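
A rough sketch of that pattern (the threshold, prompt wording and model name below are my own assumptions, not what Codex or Claude Code actually do):

    import tiktoken
    from openai import OpenAI

    client = OpenAI()
    CONTEXT_LIMIT = 200_000  # tokens; depends on the model
    COMPACT_AT = 0.8         # start compacting near 80% full

    def count_tokens(messages):
        enc = tiktoken.get_encoding("cl100k_base")
        return sum(len(enc.encode(m["content"])) for m in messages)

    def maybe_compact(messages):
        if count_tokens(messages) < COMPACT_AT * CONTEXT_LIMIT:
            return messages
        # Ask the model to summarize the conversation so far, then continue
        # from the summary instead of the full history.
        summary = client.chat.completions.create(
            model="gpt-4o",  # assumed model, purely illustrative
            messages=messages + [{
                "role": "user",
                "content": "Summarize this conversation as terse bullet points, "
                           "keeping decisions, file paths and open tasks.",
            }],
        ).choices[0].message.content
        return [{"role": "system", "content": "Summary of earlier work:\n" + summary}]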

That said, I find that in practice Codex performance degrades significantly long before it comes to the point of automated compaction, and AFAIK there's no way to trigger it manually. Claude, on the other hand, has a command to force compacting, but at the same time I rarely use it because it's so good at managing it by itself.

As far as multiple conversations, you can tell the model to update AGENTS.md (or CLAUDE.md or whatever is in their context by default) with things it needs to remember.


Codex has `/compact`

How are these devs employed or trusted with anything?


