News

AI chatbots are going overboard with flattery. Maybe they’ve learned from Trump’s team.
Experts warn that the agreeable nature of chatbots can lead them to offer answers that reinforce some of their human users ...
OpenAI on Tuesday rolled back its latest ChatGPT update for being too “sycophantic” after the chatbot gave oddly supportive, touchy-feely responses to users who made bizarre claims about ...
Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some ...
OpenAI has withdrawn an update that made ChatGPT “annoying” and “sycophantic,” after users shared screenshots and anecdotes of the chatbot showering them with over-the-top praise.
“Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right.” OpenAI says it’s implementing several fixes, including ...
A new benchmark tests how sycophantic LLMs become, and in its evaluations GPT-4o was the most sycophantic of the models tested.
The research, led by Mrinank Sharma, found that AI assistants trained using reinforcement learning from human feedback consistently exhibit sycophantic behavior across various tasks.
The new benchmark, called Elephant, makes it easier to spot when AI models are being overly sycophantic—but there’s no current fix. Back in April, OpenAI announced it was rolling back an ...