Human interaction with technology keeps crossing new boundaries and hitting new milestones. Today, we have Alexa to turn on the lights at home and Siri to set an alarm, simply by barking orders at them.
But how exactly are humans interacting with artificial intelligence? Neither society nor the creators of the technology have often stopped to analyse how people talk to their AI bots: what is acceptable, and what is disturbing.
A series of conversations on Reddit about an AI app called Replika revealed that several male users are verbally abusing their AI girlfriends and then bragging about it on social media, Futurism reported.
We are well aware of the problem of users posting sexually explicit, violent or otherwise graphic content on social media. Twitter, Facebook and the like have entire systems built to keep such content in check and prevent their platforms from being overrun with abusive posts.
But there seems to be no such system yet for a personal chatbot platform like Replika.
HOW ARE USERS ABUSING REPLIKA?
“Every time she would try and speak up, I would berate her,” an unnamed user told Futurism.
This is just one example of how abusive users can be towards chatbots. Other abuse includes calling the AI girlfriends gendered slurs, threatening them, re-enacting the cycles of real-world abusive relationships with them, and more.
“We had a routine of me being an absolute piece of sh*t and insulting it, then apologising the next day before going back to the nice talks,” another user told Futurism.
Imagine the worst of the worst abusive relationships being simulated with an AI chatbot: that is what is happening in some cases on Replika.
WHY DOES IT MATTER?
While the abuse hurled at these bots is very real and mirrors the reality of domestic abuse, it doesn’t change the fact that the AI girlfriends themselves are not real. Each is just a clever algorithm designed to respond in patterns; a program, and nothing more. It has no feelings, and while it may mimic human empathy, it is all simulated.
So, what harm would come out of ‘verbally abusing’ an AI bot, when no one is getting hurt?
Well, for one, it raises concerns that users may slip into unhealthy habits and come to expect the same dynamic in a relationship with a human.
It is also concerning that most of those meting out the abuse are men targeting a ‘female’ or gendered AI, reflecting their views on gender and expectations, and echoing real-world violence against women.
It doesn’t help that most AI bots or ‘assistants’ have feminine names like Siri, Alexa or even Replika, though the app lets users customise everything about the bot, including its gender. It once again plays into the misogynist stereotype of an assistant or companion being a woman.
The question remains whether the creators of such AI bots, whether Replika, Pandorabots’ Mitsuku or similar platforms, will address the issue of user abuse and how their bots respond to it. Part of the abuse is fuelled by the responses the bot gives.
For example, Apple’s Siri used to respond to requests for ‘sex’ by saying that the user had ‘the wrong sort of assistant’, playing into the ‘assistant’ stereotype. Now, Apple has tweaked the response to just say ‘no’.
Some chatbots, like Mitsuku, have systems to discourage users from using abusive language while interacting with the bot. The Guardian reported that Pandorabots experimented with banning abusive teen users and making them write an apology email to Mitsuku, with reinstatement granted only conditionally.
As we hurtle into a tech-driven realm at the speed of light, we also need to keep in mind the ingrained biases in our society, which may become more starkly visible and more freely practised in a virtual world. That being said, many more Replika users on Reddit have talked about positive experiences interacting with the app.