Many have come out to say that ChatGPT will replace various jobs: programmers, journalists and creative writers, to name a few. I think we will be fine, Alain Starke writes.

No, the new AI chatbot ChatGPT won't take your job

OPINION: People have been quick to frame the chatbot ChatGPT as technology that is likely to replace many writing jobs. Such a pessimistic perspective completely overlooks how professionals in different domains can take advantage of it.

ChatGPT has had an impressive take-off. The prompt-based chat interface has been able to provide coherent responses in many different domains. From two-day itineraries for your favourite city to recipes based on a list of ingredients and answers to exam questions: its responses cover a lot of ground, which offers many possibilities for applications – and fun.


Due to its high level of trialability (it was free!), it went viral soon after launch. Its uptake has surpassed the introduction of other popular technologies, such as the iPhone, by attaining a million users within a matter of weeks.

Why your job is safe

As is common with new technologies that break through swiftly, it has also triggered prospects of doom and gloom. Many have come out to say that ChatGPT will replace various jobs: programmers, journalists and creative writers, to name a few. Others are concerned about being flooded by AI-generated text whose authenticity cannot be checked, never mind the possible ethical issues.

Since ChatGPT generates consistent and coherent responses, I was initially also in awe. I tested how it would score on my open-ended exam questions – seemingly rather well – as well as whether it could reflect on scientific concepts better than I could. Its consistency across various domains also made me raise the alarm: is this the end of take-home exams?

I think we will be fine. For one, we just need better exam questions that don't reward good-looking but rubbish answers. Right now, when the prompts become a little tough, ChatGPT mainly spews out such polished rubbish.

When questioned about complicated details – for example, when asked to reflect on an argument made in a scientific blog post – it correctly reproduces some of the content, but does not really address the problem. Much like a student at an exam who has not fully engaged with the content of a course, it is able to write a long story that seems sensible, but makes little sense upon closer inspection.

It is safe to say that AI will not take my job, nor yours. It has rarely done so in the past, either. The information technologies of recent decades typically did not replace human jobs or functions, but changed how we did our jobs.

For example, the internet has not stopped us from communicating with each other; it has merely changed the means of doing so. I am inclined to foresee a similar transition in how we write texts and content – much like the shift from landline telephones to social media changed how we communicate with each other.

Polished rubbish

Another reason why ChatGPT is not a job killer is the current quality of its output. ChatGPT is still incredibly limited, and is at times outright hallucinating. When I prompted it to write a few academic paragraphs about a decision-making bias that a student of mine was studying – the asymmetric dominance effect in the food domain – it produced references to academic literature that I did not know, in actual journals.

Upon closer inspection, however, the references did not exist. Moreover, while it had written one correct paragraph, it fabricated two other paragraphs that were filled with misinformation. This is a clear example of polished rubbish: a response to a complicated prompt that seems good and is presented confidently, but actually makes little sense.

This problem has also been noticed by the moderators of Stack Overflow, the Q&A website on which programming problems are shared with a community of programmers. The site has been flooded by ChatGPT-generated responses. These typically look correct, but in reality rarely are – unless the problem is rather straightforward.

Could be a valuable asset

In a sense, ChatGPT does not differ much from us when solving problems, as it also taps into the resources that are easily available. We run to Google; ChatGPT relies on a database skewed more towards Wikipedia-based information than scientific databases. When things become challenging, it starts to fail.

Hence, its applications are still limited. The prospect of ChatGPT writing news articles is still far away. However, it could be used as a creative asset: for example, a journalist could prompt it to write three different headlines for a given news article, and then use the output to fine-tune their own ideas.

In fact, using ChatGPT effectively is a skill in itself: entire subreddits are devoted to bypassing the specific filters that ChatGPT currently has, as well as to getting the best responses out of it. This is not straightforward, and might become part of courses on 'interacting with AI and language models' – who knows.

An optimistic outlook is more valuable

I expect that AI-based language models like ChatGPT will be an asset to us. Because of that, understanding how to interact with algorithms will become an important skill. While human-computer interaction has long been a field of research, human-algorithm interaction or human-model interaction should become a specific subfield.

Consider, for example, applications that let you take a photo of a spot on your skin to check whether it could be malignant. Both patients and doctors should be able to assess how reliable such a diagnosis is; otherwise the AI is just a nuisance or a source of tension.

Similarly with ChatGPT: if it is used incorrectly, it will likely just spread high-quality misinformation. When used correctly, however, it can be a partner in problem-solving or creative writing tasks.

It is up to us to understand its full potential and to educate end users. An optimistic perspective would be much more useful for its development than a dystopian outlook in which the current version of ChatGPT spews out polished rubbish that replaces our jobs.

