Cursor's Customer Support AI Chatbot Creates Fake Policy

When a developer reached out to Anysphere, the company behind the popular AI coding tool Cursor, for support, the reply came from its chatbot, Sam.

The developer reported being logged out whenever they switched between devices, and Sam responded:

“Cursor is designed to work with one device per subscription as a core security feature.”

The only problem? That policy didn’t exist.

The developer then shared the exchange on Reddit, where it left many Cursor users disappointed.

Three hours later, a Cursor representative finally responded on Reddit, clarifying that no such policy exists and blaming the error on the AI support bot.

“Users are free to use Cursor on multiple machines... We’re investigating a recent security change that may have affected session handling.”

Interestingly, this isn’t the first time a chatbot has gone off the rails. Remember when a prompt-savvy user tricked a car dealership’s chatbot into agreeing to sell him a Chevy for just $1? Or when a New Zealand supermarket’s chatbot suggested a recipe that would have produced chlorine gas? Yep, it’s happened.

These incidents raise a serious question: should AI models that are prone to hallucinations really be used for such critical tasks?

Hallucinations, instances where an AI confidently generates false or nonsensical information, remain an unsolved problem. In fact, OpenAI recently admitted that hallucination rates actually increased in its two latest reasoning models, o3 and o4-mini… and it isn’t sure why.