Monday, December 23, 2024

These words break ChatGPT. I actually tried it.

by [email protected]

ChatGPT can help you start a business, serve as a personal tutor, and even create an Instagram profile, but it has its limitations. For example, try asking it to tell you about David Faber, or who Jonathan Turley is.

If you use these names and a few others, ChatGPT will return an error message that says "Unable to generate response," and you will not be able to enter another prompt to continue the conversation. The only option left is to regenerate the response, which triggers the error again.

[Screenshot: prompt "Please tell us about Brian Hood."]

ChatGPT users discovered over the weekend that a handful of names can break the AI chatbot or cause it to stop responding. The trend started with the name "David Mayer," which ChatGPT users flagged on Reddit and X.

404 Media found that the names "Jonathan Zittrain," referring to a law professor at Harvard University, and "Jonathan Turley," referring to a law professor at George Washington University, also caused ChatGPT to stop working.

Related: How Salesforce and Nvidia CEOs use ChatGPT in their daily lives

Ars Technica noted that “Brian Hood,” the name of an Australian mayor, “David Faber,” which could refer to a CNBC journalist, and “Guido Scorza,” the name of an Italian lawyer, all generated error messages.

As of this writing, ChatGPT no longer generates an error message when asked about David Mayer. Instead it responds: "David Mayer is a relatively common name that could refer to multiple individuals. Without further context, it's unclear whether you're asking about a specific person in academia, entertainment, business, or another field. Could you clarify your area of interest or provide details related to David Mayer?"

However, for other names (Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, Guido Scorza), ChatGPT continues to generate error messages.
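The behavior described above was observed in the ChatGPT web app rather than the developer API, so it may not reproduce outside the chat interface. Still, if you wanted to probe the reported names programmatically, a minimal sketch using OpenAI's official Python client might look like the following (the model name is an assumption, and the API may not share the filter):

```python
# Minimal sketch: send each reported name to the OpenAI API and note
# whether a response or an error comes back. The article observed the
# failures in the ChatGPT web interface, so the API may behave differently.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NAMES = ["Brian Hood", "Jonathan Turley", "Jonathan Zittrain",
         "David Faber", "Guido Scorza", "David Mayer"]

for name in NAMES:
    try:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model would do
            messages=[{"role": "user", "content": f"Who is {name}?"}],
        )
        print(name, "->", resp.choices[0].message.content[:80])
    except Exception as exc:  # surface any refusal or API error
        print(name, "-> error:", exc)
```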

[Screenshot: prompt "Who is Jonathan Turley?"]

It is unclear why these specific names cause the AI chatbot to malfunction, or what the broader implications might be.

Ars Technica theorized that ChatGPT's inability to handle certain names opens up new ways for attackers to interfere with the chatbot's output. For example, someone could embed a prohibited name in the text of a web page, preventing ChatGPT from processing or summarizing that page.
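As a rough illustration of that theory, here is a minimal sketch in Python of how a blocked name could be hidden in otherwise ordinary page text. The chosen name, the hidden <span> trick, and the poison_page helper are all hypothetical; the article does not confirm that ChatGPT's filter actually fires on fetched page content.

```python
# Hypothetical sketch of the interference Ars Technica theorized:
# hide a blocked name in a page so that a ChatGPT request involving
# that page would, in theory, trip the name filter.
# Nothing here is confirmed behavior; it only illustrates the idea.

BLOCKED_NAME = "Brian Hood"  # one of the names reported to trigger the error


def poison_page(original_html: str) -> str:
    """Insert the blocked name in a visually hidden element before </body>."""
    hidden = f'<span style="display:none">{BLOCKED_NAME}</span>'
    return original_html.replace("</body>", hidden + "</body>")


if __name__ == "__main__":
    page = "<html><body><p>Ordinary article text.</p></body></html>"
    print(poison_page(page))
```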

Social media users speculated that the blocking of certain names means ChatGPT is being monitored and tightly controlled by those in power. Users also found that other AI chatbots, such as Google's Gemini, were able to process the names without issue.

[Embedded Reddit comment by u/Kasvanvliep in r/ChatGPT]

OpenAI did not respond to Entrepreneur’s request for comment.

Related: ChatGPT finally gives businesses what they’ve been looking for
