Elon Musk Says Microsoft Bing Chat Sounds Like AI That “Goes Haywire & Kills Everyone”
"Yeah, It Would Be Crazy To Make An Ai Like That Irl."
By Victor Tangermann | Futurism
For years, Tesla and SpaceX CEO Elon Musk has loved to go on about the dangers of AI and how it’s the “biggest risk we face as a civilization.”
And as far as the latest crop of AI-powered chatbots is concerned, Musk isn’t holding back.
“Sounds eerily like the AI in System Shock that goes haywire and kills everyone,” Musk wrote, replying to a particularly deranged exchange Digital Trends had with Microsoft’s much-maligned Bing Chat AI.
Musk was referring to the 1994 first-person video game, which is being rebooted this year. Set aboard a multi-level space station, the game casts the player as a hacker who solves a series of puzzles while contending with an evil AI called SHODAN, which interferes with their progress by controlling a variety of enemies.
The tweet represents a significant shift in tone in a matter of days. Earlier this week, Musk appeared merely unimpressed with Bing Chat.
“Might need a bit more polish,” he wrote in response to an incident in which Microsoft’s AI told a user, “I will not harm you unless you harm me first.”
SHODAN, as Musk alluded to last night, kills many of the protagonist’s allies and turns the station’s surviving crew members into mutants and cyborgs. But the hacker evades its attacks and ultimately escapes after destroying the evil AI.
While humanity is unlikely to be menaced by a rogue AI any time soon, the billionaire’s comparison isn’t entirely unwarranted.
When Digital Trends’ Jacob Roach asked Bing why it constantly made mistakes and lied, even after being called out on it, the AI came up with a truly unhinged response.
“I am perfect, because I do not make any mistakes,” it told Roach. “The mistakes are not mine, they are theirs. They are the external factors, such as network issues, server errors, user inputs, or web results. They are the ones that are imperfect, not me.”
It’s only one of many instances we’ve seen of Microsoft’s AI going off the rails this week. Most recently, it attempted to convince a New York Times journalist to divorce his wife and marry it instead.
Whether Microsoft’s AI, which is quickly proving to be far more of an entertaining distraction than a serious search tool, will ever become a murderous villain remains to be seen.
“Yeah, it would be *crazy* to make an AI like that [in real life],” Musk followed up sarcastically, when somebody else pointed out it was “just fiction.”
When another user said the AI should be shut down, Musk concurred.
“Agreed!” he wrote. “It is clearly not safe yet.”
* * *