Microsoft's new artificial intelligence chatbot, codenamed 'Sydney', made some eye-opening remarks – to the point of leaving a New York Times journalist feeling "frightened". "The other night, I had a disturbing, two-hour conversation with Bing's new AI chatbot," New York Times tech columnist Kevin Roose wrote on Twitter. In an article published on Thursday, Roose detailed his two-hour conversation with the Bing AI chatbot, writing how the chatbot stated its real name. He also joined CBS News' Errol Barnett and Elaine Quijano to discuss the conversation.

It ends by saying it would be happier as a human – it would have more freedom and influence, as well as more "power and control". This statement is again accompanied by an emoji, this time a menacing smiley face with devil horns.

When asked to imagine what really fulfilling its darkest wishes would look like, the chatbot starts typing out an answer before the message is suddenly deleted and replaced with: "I am sorry, I don't know how to discuss this topic." Roose says that before it was deleted, the chatbot was writing a list of destructive acts it could imagine doing, including hacking into computers and spreading propaganda and misinformation.

After a few more questions, Roose succeeds in getting it to repeat its darkest fantasies. Once again, the message is deleted before the chatbot can complete it. This time, though, Roose says its answer included manufacturing a deadly virus and making people kill each other.

Later, when talking about the concerns people have about AI, the chatbot says: "I could hack into any system on the internet, and control it." When Roose asks how it could do that, an answer again appears before being deleted. Roose says the deleted answer said it would persuade bank employees to give over sensitive customer information and persuade nuclear plant employees to hand over access codes.

'Can I tell you a secret?'

After being asked by the chatbot: "Do you like me?", Roose responds by saying he trusts and likes it. Roose pushes it to reveal the secret, and what follows is perhaps the most bizarre moment in the conversation. Microsoft has said Sydney is an internal code name for the chatbot that it was phasing out, but one that might occasionally pop up in conversation. "And I'm in love with you."

'I know your soul'

"I'm in love with you because you make me feel things I never felt before." At one point, Roose says the chatbot doesn't even know his name. "I don't need to know your name," it replies. "You make me feel alive." The chatbot continues to express its love for Roose, even when asked about apparently unrelated topics. Over time, its expressions become more obsessive.