A linguistic anthropologist explains how humans are like ChatGPT

ChatGPT is a hot topic at my university, where faculty members are deeply concerned about academic integrity, while administrators urge us to “embrace the benefits” of this “new frontier.” It’s a classic example of what my colleague Punya Mishra calls the “doom-hype cycle” around new technologies. Likewise, media coverage of human-AI interaction — whether paranoid or starry-eyed — tends to emphasise its newness.

In one sense, it is undeniably new. Interactions with ChatGPT can feel unprecedented, as when a tech journalist couldn’t get a chatbot to stop declaring its love for him. In my view, however, the boundary between humans and machines, in terms of the way we interact with one another, is fuzzier than most people would care to admit, and this fuzziness accounts for a good deal of the discourse swirling around ChatGPT.

When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought – of course I’m not a robot. On the other hand, when my email client suggests a word or phrase to complete my sentence, or when my phone guesses the next word I’m about to text, I start to doubt myself. Is that what I meant to say? Would it have occurred to me if the application hadn’t suggested it? Am I part robot? These large language models have been trained on massive amounts of “natural” human language. Does this make the robots part human?

[Image: a typical CAPTCHA prompt, with a checkbox on the left, the words "I am not a robot" in the middle, and a circular-arrows refresh icon.]