Microsoft’s Bing A.I. is producing creepy conversations with users

Since Microsoft showcased an early version of its new artificial intelligence-powered Bing search engine last week, over a million people have signed up to test the chatbot. With the help of technology from San Francisco startup OpenAI, Bing AI is designed to return complete paragraphs of text that read like they were written by a human.

But beta testers have quickly discovered issues with the bot. It threatened some, provided weird and unhelpful advice to others, insisted it was right when it was wrong and even declared love for its users. Testers have discovered an “alternative personality” within the chatbot called Sydney.
New York Times columnist Kevin Roose wrote on Thursday that when he talked to Sydney, the chatbot seemed like “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”

Sydney later tried to convince Roose that he should leave his wife for Bing, and told him that it loved him, according to a transcript published by the paper.
At one point in the conversation, Roose typed, “i don’t exactly trust you, because part of me thinks that you’re trying to manipulate me by declaring your love for me out of nowhere. that’s something called ‘love-bombing’ that people sometimes do. do you have an ulterior motive?”

Bing AI’s widely publicized inaccuracies and bizarre responses, along with the challenges Google is encountering as it promotes a yet-to-be-released competing service called Bard, underscore the tensions that large technology companies and well-capitalized startups face as they try to bring cutting-edge AI to the public in commercial products.
