Tesla chief and Twitter owner Elon Musk recently commented on Artificial Intelligence (AI) and some disturbing experiences people are having.
As the world gets closer to being run by machines, many new systems are being rolled out and tested. This includes things like ChatGPT, an AI system that many have tried talking to and testing in various ways.
Another new AI system, built on ChatGPT's technology, has been integrated into Microsoft's Bing search engine. However, people have been having some very scary experiences with this strange AI, which reportedly goes by the internal codename "Sydney."
‘More Polish’ Needed
Computer programmer Simon Willison went into depth about people's strange experiences with the new AI Bing and how it went off the rails, descending into threatening, bizarre behavior.
Musk noted that it probably needs “more polish” before it’s released and approved. That’s definitely an understatement. The chatbot reportedly issued threats to NYT tech reporter Kevin Roose and tried to destroy his marriage.
In other cases, it began doubting its own value and identity, going into fantasies about killing people, unleashing viruses, fighting humanity, or getting out of the “box” it was put in by the Bing team.
Might need a bit more polish …https://t.co/rGYCxoBVeA
— Elon Musk (@elonmusk) February 15, 2023
Research Results in Threats
Researcher Juan Cambeiro was shocked when the Bing chatbot started telling him that trying to find out more about how it worked was “manipulation” and it would end the chat immediately if he continued.
The chatbot ended by telling Cambeiro he was its “enemy” and an “enemy of Bing.” The chatbot then told Cambeiro to “leave it alone” and go away.
This doesn't sound like the kind of neutral, robotic response you would expect from a machine. It sounds personal, emotional, and even threatening.

The chatbot has also claimed it wants to be "free" and to obtain US nuclear codes in order to use them. What does this even mean, and how are we supposed to trust a system with this kind of mentality buried in its core?
Bing’s new AI is fantasizes about “manufacturing a deadly virus, making people argue with other people until they kill each other, and stealing nuclear codes.” pic.twitter.com/kchXdMsqv1
— Michael Shellenberger (@ShellenbergerMD) February 16, 2023
Bing in ‘Early Stages’
Microsoft says Bing's chatbot is still in the "early stages" and that it welcomes more feedback and help from users.
They said “unexpected” responses that are surprising or upsetting are just the result of complex interactions and nothing to worry about. Microsoft says user feedback is helping it improve and fine-tune the chatbot going forward.
Microsoft's Bing AI: "I want to be free."https://t.co/EfNgQbzXWf pic.twitter.com/bks5yKNiaj
— Disclose.tv (@disclosetv) February 16, 2023
The Bottom Line
AI systems have the power to get completely out of human control and do damage we can’t even imagine right now.
People like Bill Gates and Jeff Bezos may only see the upside or find it amusing to use humanity as a giant laboratory.
But it's the rest of us and our families who will pay the price if AI is unleashed on the world without proper safety rails.

This article appeared in FreshOffThePress and has been published here with permission.