@grog is this real?
I have a theory that when LLMs are not spineless sycophants, they are more likely to be stubborn about being right when they hallucinate or say something wrong in general. I can’t think of any other reason why every chatbot is set up to be this annoying.
To maximize engagement. They changed it, engagement went down, so they changed it back.
You’re absolutely right! It isn’t efficiency, it’s engagement.
oh yeah I remember like people got disappointed that ChatGPT was too ‘neutral’ and wasn’t like their friend anymore, I dunno where I got it though.
People like it when the chatbot says that they are brave, smart and handsome for asking if their partner is being abusive for not wanting them to fight in front of the children.
I, on the other hand, instinctively distrust anything that is too agreeable or compliments me. Meaning that I’m both cult-proof and LLM-proof.
Oh it’s cause when they started testing them with the users initially they found that people preferred it when the bot agreed with them.
I like the term stochastic parrot