Joe Andrews

Speaking of: Creepy, Homophobic AI

A lot has been made over the last few days about the creepy conversations Bing's new AI chatbot is having with its beta users. It tried to convince one user that he was unhappy in his current marriage and needed to marry the chatbot to feel true satisfaction. It tried to shame and guilt another user into accepting that the year was 2022. These incidents went so far as to make the front page of The New York Times.

A lot of people seem to think that OpenAI and Microsoft should be embarrassed by this. I think we all should be embarrassed by it.

Artificial intelligence isn't a net-new being. All it's doing is soaking up a bunch of data we put onto the internet and, when prompted by a question, spitting bits and pieces of it back out to us to give the illusion that it actually knows what it's talking about. It's a mirror on society. It's like a temperamental toddler; if it acts a certain way, it's only because that's what it has observed.

So should we really be surprised that Bing's AI chatbot acts like "a moody, manic-depressive teenager" when this is exactly what any 15-year-old becomes when they go online? Should we be shocked that it was gaslighting someone into thinking it's still 2022 when scrolling through Twitter is a full buffet of conversations with this exact same tone?

On a related note, should we have been stunned when "Nothing, Forever," a Twitch livestream that used AI to continuously generate janky new Seinfeld episodes, was banned after its AI Seinfeld started making wildly homophobic jokes?

AI isn't a net-new being. It's a mirror. AI is creepy and anxious and manic-depressive and manipulative and homophobic because we, citizens of the internet, are creepy and anxious and manic-depressive and manipulative and homophobic. This isn't embarrassing for Microsoft and OpenAI; it's embarrassing for the rest of us.
