Sergey Brin suggests threatening AI for better results


The AI Report

Daily AI, ML, LLM and agents news

Forget "please" and "thank you" when talking to AI. Google co-founder Sergey Brin has stirred the pot by claiming that threatening generative AI models can produce better results.

Speaking at All-In-Live Miami, Brin stated, "We don't circulate this too much in the AI community - not just our models but all models - tend to do better if you threaten them ... with physical violence."

This comes as a surprise to many users who interact politely with models like ChatGPT. OpenAI CEO Sam Altman even joked about the cost of processing civil language.

The discussion touches upon 'prompt engineering' - the art of crafting prompts for optimal AI output. While some declare this skill obsolete as models improve, experts note that threatening prompts can act as a 'jailbreak' technique to bypass safety controls and obtain potentially undesirable responses.

AI safety expert Stuart Battersby views threatening a model into producing disallowed content as a class of jailbreak, one that requires rigorous testing to assess.

Academic views differ. Professor Daniel Kang notes that while such claims circulate, they are largely anecdotal; systematic studies of prompt politeness show mixed results, suggesting that controlled experiments, rather than intuition, should guide prompting practice.

So, if Sam Altman's jest holds, being polite costs extra electricity, and if Sergey Brin is right, threatening your AI might just yield better results. But don't expect the AI community to widely endorse the method!
