Technology is not a miracle cure for a clean and tidy internet
OPINION: Artificial intelligence promises to transform our world. How much should we let it?
Have you noticed the army of AI chatbots, copilots, and assistants invading your computer and phone software lately? The AI boom is here, and every company seems to want a shiny new toy to present to its customers or integrate into its business.
AI writes, edits, and talks. It organises to-do lists and handles customer service. For some reason, it now even paints, tries to drive cars, and runs social media profiles.
AI promises to change the world, and so it is critical that we ask ourselves what kind of world we want to build. Technology is much like toothpaste: once you squeeze the tube, there's no putting it back.
AI hasn’t delivered on its promises
With each new company, workspace, and service that adopts or rolls out a new AI, the question becomes more important: Is AI always the right tool for the job?
Let’s be honest with ourselves: most of these tools simply do not deliver on their promises. They do not work as advertised. They do not and cannot think. So why do we keep asking them to think for us?
The truth is that automation is, in many ways, a necessity to enable the scale and speed of work that our modern world requires. AI does not need to be perfect; it doesn't even need to be particularly good to entice widespread use.
Back in 2016, Tesla CEO Elon Musk promised that all his company's vehicles would ship with the hardware necessary for 'full self-driving', but nearly a decade later we are nowhere near that goal. ChatGPT couldn't find the third R in 'strawberry' as recently as June of last year. Yet millions of people rely on these technologies.
It seems AI simply needs to be cheaper than manual labour and functional enough to justify its up-front cost.
Important decisions
Consider as well that some of the industries AI has been and will be introduced to are extremely sensitive, and the roles it takes on are highly consequential.
Look no further than the replacement of flight engineers with automated systems dating back to the 1960s and the high-profile failures of Boeing's MCAS flight control system in 2018 and 2019.
But for the sake of discussion, let's go deeper into one contemporary example: the use of AI to automate the detection of hate speech online.
Lessons from hate speech moderation
Monitoring the internet for hate speech is massively challenging. In the shallow streams of individual servers and channels on platforms like Discord, Twitch, and YouTube, human moderators have some success identifying and removing hate. The problem is that this moderation approach is hard to scale up.
The internet is just too vast, like a raging river of posts, comments, and messages. Its current is too fast and strong for any individual to stay afloat. And just like a river, a person can only observe one small section of the internet at a time, which means it is difficult to know if what they see is representative of the whole.
Within their own channels, it is generally not a legal issue that moderators possess total authority to decide what is hateful and how to respond to it. But whose opinions would you trust to govern the entire internet?
Now you may be saying to yourself, 'But so-and-so says AI removes bias, so what's the problem?' The answer is that this is simply incorrect. It has been proven time and again that, whether it comes from the data, the training, the algorithms, or the world around them, bias will inevitably be present in AI systems. Yet neutrality is still often the goal when developing these systems.
In practice, this generally means treating law enforcement as a neutral party and baking their perspective into the algorithms of AI detection tools. That is not truly neutral, but perhaps you believe it to be reasonable. Let's take it a step further: what if someone else's perspective were treated as neutral and objective?
A Brave New World
In recent months, Meta and X have both made major changes to their policies around content moderation.
Two of the largest technology companies in the world have in no uncertain terms made it easier for hateful content to spread on their platforms while hailing these changes as a return to freedom of speech and a rebuke of liberal political bias. Should they oversee the building and use of AI moderation tools in the future?
So, given what you now know, do you want AI doing this job?
Recognise that similar challenges arise with each new role AI is given and each new decision it is asked to make. It should be clear that the technology is not a miracle cure; rather, it must be adopted carefully, responsibly, and conservatively. So I encourage you to build the habit of asking yourself: 'Is this a job I want AI to do?'
Consider whose perspective is prioritised and enforced. Consider whether our tools are supporting or replacing us. Consider what work you want to do and what you don’t. AI, like the printing press and the lightbulb before it, will change our world. It’s already happening.
It’s up to all of us to direct that change purposefully – to shape the world of today into the one we want to live in tomorrow.