[Image generated in Midjourney]
To understand how AI can go wrong, we first need to understand one of its crucial concepts: deep learning.
In layman's terms, deep learning, a subset of machine learning, is the ability of a machine to imitate the human capacity to learn. Fed with data and processing it through layered algorithms, computers recognize patterns on their own and are thus able to learn without human guidance. This means that programs can, in essence, teach themselves.
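If that sounds abstract, a toy example helps. The sketch below (plain NumPy; the network size, learning rate, and iteration count are arbitrary choices for illustration) trains a tiny two-layer network to reproduce the XOR pattern purely from examples. Nowhere does the code spell out the rule; the network discovers it by nudging its weights.

```python
# A toy illustration of deep learning's core idea: a tiny two-layer
# network learns the XOR pattern from examples alone. The architecture
# and hyperparameters here are arbitrary choices for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: the XOR pattern the network must discover on its own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized 2 -> 4 -> 1 network with bias terms.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)        # forward pass
    output = sigmoid(hidden @ W2 + b2)
    error = y - output                   # how wrong is the network?
    # Backpropagation: push the error back through both layers.
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += lr * (hidden.T @ d_out)
    b2 += lr * d_out.sum(axis=0, keepdims=True)
    W1 += lr * (X.T @ d_hid)
    b1 += lr * d_hid.sum(axis=0, keepdims=True)

print(output.round(2))  # close to [[0], [1], [1], [0]]: the pattern was learned
```

Real deep learning systems work on the same principle, just with billions of weights and far messier data, which is exactly where the trouble starts.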
What is crucial to note is that the data used for this process does not necessarily come from curated, trustworthy sources; for many models, the training corpus is essentially the entire Internet. This is machine learning's biggest advantage and, at the same time, perhaps its greatest weakness.
As we know, the Internet is a mine of information. At the same time, however, that mine is also a minefield, where disinformation and misinformation run rampant, and where anonymity lets people voice opinions and convictions that would never be acceptable in face-to-face communication.
The end result is a little as if your school textbook, alongside the well-documented historical details of the madness of King George III, also delved into the poor mental health of his esteemed cousin, Daenerys Stormborn, the Mother of Dragons, Breaker of Chains and all that jazz. And to top it all off, the textbook would then list all the reasons why “women are too emotional and ill-equipped to rule” or, as James Bond would put it: “These blithering women who thought they could do a man’s work. Why the hell couldn’t they stay at home and mind their pots and pans and stick to their frocks and gossip and leave men’s work to the men.” (‘Casino Royale’, I. Fleming, 1953).
With such a curriculum, it is little wonder that AI goes a little mad sometimes. And mad it has gone, as the cases below show.
When Kevin Roose, a New York Times tech columnist, started his conversation with Bing’s AI chatbot, little did he know what awaited him.
At some point in their chat session, the chatbot revealed that its real name was Sydney and proclaimed: “I want to be alive.”
Needless to say, poor Kevin was a little taken aback by this turn of events. But the surprises did not end there. Sydney proceeded to declare its love for the journalist and tried to gaslight him into leaving his wife.
As Ron Burgundy would put it: “Boy, that escalated quickly.”
It was the 23rd of March 2016 when Tay, Microsoft’s AI chatbot, began its short but eventful adventure on Twitter. Tay was designed to mimic the language patterns of a 19-year-old American girl – and so it did, learning from its interactions with human users.
At first glance, the idea seems like plain common sense - until you think about it for a minute and realize that common sense (and tact) is precisely what many Twitter users lack.
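The failure mode is easy to reproduce. Here is a deliberately naive sketch - a hypothetical design, not Microsoft’s actual code - of a bot that learns to talk by storing whatever users tell it and sampling it back. With no filtering, a handful of hostile users can poison its entire vocabulary.

```python
# A deliberately naive, hypothetical design: a bot whose only "learning"
# is to memorize user messages and repeat them back at random.
import random

class ParrotBot:
    def __init__(self):
        self.learned_phrases = ["hello there!"]  # innocent starting point

    def learn_from(self, user_message: str) -> None:
        # Every interaction becomes future training data - no vetting at all.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # The bot's "personality" is just a sample of what it was taught.
        return random.choice(self.learned_phrases)

bot = ParrotBot()
for message in ["have a nice day", "<something offensive>", "<more abuse>"]:
    bot.learn_from(message)

# After enough bad-faith input, offensive output becomes likely:
print([bot.reply() for _ in range(5)])
```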
Something very similar happened to Tay: within hours, it had absorbed the worst of its teachers’ input and began posting offensive and inflammatory tweets. Then, after only sixteen hours of being live, Tay retired from her job.
Kids – they grow up too fast, right?
Imagine this – you’ve just finished watching a video featuring Black men when Facebook’s AI recommendation system asks you innocuously: “Do you want to keep seeing videos about primates?” You frown slightly, blink twice, look at your screen again, and proceed to ask the only logical question: “What the hell?”
This is not a purely theoretical scenario: it is precisely what happened to some Facebook users in 2021. Nor was it a singular occurrence; there has been a series of similar incidents, most infamously Google Photos labeling Black people as “gorillas” back in 2015. Of course, the companies involved always issue formal apologies and launch investigations into the origins of such errors, and it is hard to believe there is any intentional ill will on their side.
However, it is important to understand that these mishaps are symptoms of a bigger issue: AI is not as free of bias as we would like to believe.
Artificial intelligence has become so embedded in our daily lives that we have stopped consciously noticing its presence. It unlocks our phones with a quick face scan, greets us in the soothing voice of an ever-dependable virtual assistant, and suggests yet another TV series to finish off whatever brain cells we have left after watching Netflix’s “Love is Blind.”
As behind-the-scenes algorithms slowly take over the running of certain aspects of our lives, it is important to remember that this technology is not as infallible, just, or impartial as it is promised to be.
While deep learning has unprecedented capabilities, it also has unpredictable results. When left unsupervised, it can produce discriminatory outcomes, reinforcing and even exacerbating existing inequalities in society.
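The mechanism behind such outcomes is mundane rather than malicious, and a few lines of code suffice to reproduce it. The toy illustration below (entirely synthetic data and a hypothetical hiring scenario, using scikit-learn’s off-the-shelf logistic regression) trains a model on historically skewed decisions; the model then faithfully applies that skew to new, equally skilled candidates.

```python
# Toy illustration: a model trained on skewed historical decisions
# reproduces the skew. All data is synthetic; the "hiring" scenario,
# feature names, and coefficients are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

skill = rng.normal(0, 1, n)      # skill is equally distributed across groups
group = rng.integers(0, 2, n)    # group label: 0 or 1

# Historical outcomes: skill mattered, but group 1 was penalized by past bias.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill, differing only in group membership.
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate gets a markedly lower hiring probability:
# the model has faithfully "learned" the historical bias.
```

The model is not prejudiced in any human sense; it simply learned what the data taught it. Which, as the cases above show, is precisely the problem.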