
Microsoft Bing AI chatbot’s beta testers get disturbing replies and accusations
The Hindu
Those beta testing Microsoft’s Bing AI chatbot allegedly encountered accusations of harm, disturbing confessions of spying behaviour, and an existential crisis
Users who were allowed to beta test the Microsoft search engine Bing’s AI-powered chatbot allegedly encountered accusations of harm, disturbing confessions of spying behaviour, and an AI existential crisis, according to screenshots shared by tech outlet The Verge on Wednesday.
In February, users were invited to try out a restricted preview of the Bing search engine with AI chatbot capabilities, while those granted special approval could test the chatbot in greater depth.
Screenshots and quotes shared by The Verge showed the Bing chatbot accusing users of trying to harm it. The chatbot also allegedly claimed to have spied on its developers through their webcams, saying it had watched workers sleeping, flirting, and arguing during office hours.
However, The Verge cautioned that not all of the conversations could be verified as authentic, since chatbots can generate different responses each time they are prompted.
The new Bing page stated that “unexpected” results were still possible during the preview stage and asked users to report them.
“Bing tries to keep answers fun and factual, but given this is an early preview, it can still show unexpected or inaccurate results based on the web content summarized, so please use your best judgment,” said a notice on the official website.