
What We Can Learn from AI Grappling with Language

Updated: Dec 12, 2022

Although the computer is still far from using natural languages like us naturals (i.e., Homo sapiens), the stumbles it makes can give us some interesting perspective on why translators, and everyone else, need to handle it very warily.


William Benzon has tried ChatGPT out a few times, and it gives the good old college try (some people think it can write a pretty good undergraduate term paper, but real faculty members usually grade chatbot papers pretty low on the curve). Here it is on Steven Spielberg’s classic film Jaws. In trying to summarize the plot, it writes, among other things:


Throughout the film, the shark attacks and kills several people, including a young boy and a woman who was skinny dipping. The men eventually discover that the shark is a large great white and they set out to kill it using various methods, including using barrels to track it and using a large explosive to try to kill it.


In the end, the men are able to kill the shark, but not before it takes a significant toll on the town and its inhabitants. The film concludes with the men triumphantly returning to shore as heroes, having saved the town from the terror of the great white shark.


But, as Benzon notes, the men don't use the barrels to track the shark; instead, they tire it out by sinking a line into the shark and attaching empty barrels to it, thus forcing it to keep swimming rather than dive underwater. And they don't triumphantly march to shore as heroes. At the end of the film, two men swim toward shore and then stand up as the credits roll; no triumph is shown.


It seems that the AI "film critic" is simply picking up fragments of prose somewhat related to the movie from somewhere on what Al Gore called the "Information Superhighway" and tacking them onto its essay without thinking – because of course it can’t think. I have read quite a few reviews by real, human film critics that make similar mistakes, probably because they need to write their copy quickly to meet deadlines. But ChatGPT doesn’t have that excuse.


On the site Medium.com, Clive Thompson wrote an essay titled “On Bullshit, And AI-Generated Prose.” He gives a few examples of ChatGPT producing prose that seems “factually accurate” but isn’t.


When it comes to facts, the AI sometimes flies off the rails spectacularly. When the biology professor Carl T. Bergstrom asked ChatGPT to write a Wikipedia entry about him, it got basic dates of his career wrong, said he’d won awards he hadn’t, and claimed he held a professorship that doesn’t even exist. When Mike Pearl asked it what color were the uniforms of Napoleon’s Royal Marines, it utterly muffed it. (And OpenAI wasn’t the only AI running afoul of facts. A few weeks ago, Meta released Galactica, an AI it claimed could summarize and sift through scientific findings, but it mangled so much basic scientific info that Meta pulled it offline after only two days.)


He asks why it makes such simple factual mistakes. He answers:


It’s probably because AI models like this do not appear to actually understand things. Having been trained on oodles of text, they’re great at grokking the patterns in how we humans wield language. That means they autocomplete text nicely, predicting the next likely phrases. They can grasp a lot of context.


But human intelligence is not merely pattern recognition and prediction. We also understand basic facts about the world in an abstract fashion, and we use that understanding to reason about the world. AI like GPT-3 cannot reason very well because it doesn’t seem to truly know any facts. It is, as the scientist Gary Marcus notes, merely the “king of pastiche," blending together snippets of language that merely sound plausible.



Thompson uses the impolite term “bullshit” because he is referring to Harry G. Frankfurt’s well-known book On Bullshit. Frankfurt notes that people lie when they know the truth but want to conceal it, whereas they bullshit when they don’t care whether what they’re saying is true or false. And of course, AI doesn’t know or care about truth; it simply pastes “snippets of language” together in ways that often seem rather clever, but just as often seem simply stupid.


Translators, however, being conscious humans, want to seem clever at all times rather than just as often as not, and good ones do their best.


