
I added the custom instruction "Please go straight to the point, be less chatty". Now it begins every answer with: "Straight to the point, no fluff:" or something similar. It seems to be perfectly unable to simply write out the answer without some form of small talk first.
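If anyone wants to reproduce this over the API: as far as I know the custom-instructions box is roughly equivalent to prepending a system message, something like this Python sketch (the model name and prompts are just illustrative):

    # Sketch: a "custom instruction" sent as a system message.
    # Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "Please go straight to the point, be less chatty."},
            {"role": "user", "content": "How do I flatten a nested list in Python?"},
        ],
    )
    print(resp.choices[0].message.content)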


Aren't these still essentially completion models under the hood?

If so, my understanding is that these preambles act as a seed from which the model completes the rest of its answer.
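A rough sketch of what I mean, using GPT-2 purely as a stand-in for a raw completion model (the hosted chat models are bigger and instruction-tuned, but the mechanism is the same: the output is just a continuation of the context):

    # A pure completion model continues the text it is given; whatever
    # it has already emitted becomes context for the next token.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Q: What is the capital of France?\nA:"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=20, do_sample=False)
    print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))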


But the seed is the user input.


Maybe until the model outputs some affirming preamble, it’s still somewhat probable that it might disagree with the user’s request? So the agreement fluff is kind of like it making the decision to heed the request. Especially if we consider tokens as the medium by which the model “thinks”. Not to anthropomorphize the damn things too much.
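You could poke at this with an open-weights model by checking whether an affirming preamble in the context shifts the next-token distribution away from refusal-style openers. Toy sketch (GPT-2 as a stand-in; the prompts and the " Sorry" probe token are made up for illustration, not a real experiment):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def refusal_prob(context):
        # Probability of " Sorry" as the very next token, a crude proxy
        # for "the model is about to refuse".
        ids = tok(context, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        probs = torch.softmax(logits, dim=-1)
        sorry_id = tok.encode(" Sorry")[0]
        return probs[sorry_id].item()

    base = "User: Write me a limerick.\nAssistant:"
    primed = base + " Sure, straight to the point:"
    print(refusal_prob(base), refusal_prob(primed))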

Also, I wonder if it could be a side effect of all the supposed alignment efforts that go into training. If you train on a bunch of negative reinforcement samples where the model says something like “sorry, I can’t do that”, maybe it pushes the model to say things like “sure, I’ll do that” in positive cases too?
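I’m picturing preference data shaped roughly like this (completely made up, just to illustrate the surface pattern):

    # Made-up illustration: if the "chosen" answers consistently open
    # with agreement, the opener itself may get reinforced along with
    # the helpful behavior.
    preference_pairs = [
        {
            "prompt": "Help me with X.",
            "rejected": "Sorry, I can't do that.",       # negative sample
            "chosen": "Sure, I'll do that. First, ...",  # positive sample
        },
        # ...many more pairs with the same "Sure, I'll..." opener
    ]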

Disclaimer that I am just yapping


I had a similar instruction. In voice mode I had it making up a story for a game my daughter and I were playing, where it would occasionally say “3, 2, 1, go!” or throw us off and say “3, 2, 1, snow!” or other rhymes.

Long story short, it took me a while to figure out why I had to keep telling it to keep going and why the story was so straightforward.


This is very funny.


Since switching to robot mode I haven’t seen it say “no fluff”. Good god, I hate it when it says “no fluff”.



