Pink Lexical Goop: The Dark Side of Autocorrect

 

By Dmitry Mazin.

Illustrations by Sean McOmber.

We’ve awaited the age of artificial intelligence for decades. In our fantasies, AI is usually humanoid, straight out of the Jetsons. But while we anticipate the great arrival of the robotic butlers, AI has, in fact, already quietly permeated the fabric of our daily lives — from shopping, to driving, to communication.

Consider autocorrect, an AI-driven input assistant so ubiquitous that you likely don’t even realize how much it impacts your life. Without it, typing on a smartphone would be exceedingly difficult. That utility comes with a price, however, as autocorrect has begun to significantly alter the way we communicate.

Though you probably first encountered autocorrect as telltale squiggly red lines under your spelling mistakes, its breakthrough came with the smartphone. As you mash the tiny keys on your phone’s virtual keyboard, a sophisticated language model, working behind the scenes, determines which keys you actually intended to press. The iPhone, for example, invisibly enlarges those keys you are likely to hit next, so they are harder to miss[1]. Naturally, spelling is automatically checked in the process. This hybrid of input assistance and spellchecking is what we now know as autocorrect.
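
To make that mechanism concrete, here is a toy sketch in Python of probability-weighted key targets. It is illustrative only: the next_char_probability function and its bigram probabilities are invented stand-ins for a real language model, and Apple’s actual implementation, described in the patent cited in [1], is far more sophisticated.

```python
# Toy sketch: grow the invisible touch target of keys the user is likely
# to press next. The probabilities below are invented for illustration.

def next_char_probability(prefix: str, char: str) -> float:
    """Hypothetical language model: probability that `char` follows `prefix`."""
    common_bigrams = {("t", "h"): 0.30, ("q", "u"): 0.90, ("t", "z"): 0.001}
    return common_bigrams.get((prefix[-1:], char), 0.05)

def resize_key_targets(prefix: str, keys: list[str], base_area: float = 1.0) -> dict[str, float]:
    """Return a hit-area multiplier for each key, given what was typed so far."""
    targets = {}
    for key in keys:
        p = next_char_probability(prefix, key)
        # Likelier keys get a proportionally larger hit area (capped at 2x).
        targets[key] = base_area * min(2.0, 1.0 + p)
    return targets

print(resize_key_targets("q", ["u", "w", "z"]))
# After a "q", the target for "u" ends up nearly twice as large as "z"'s.
```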

Prior to autocorrect, spellcheck was confined to word processors. Its impact was limited, affecting primarily formal documents like letters and essays. Now, thanks to autocorrect, which mediates everything typed on a smartphone — casual and formal speech included — spellcheck is essentially universal. While the Standard English that spellcheck enforces may be preferable in the context of a formal document, this isn’t necessarily the case elsewhere.

Autocorrect’s insistence on “ducking” (instead of the much coarser exclamation) is infamous, but its rigidity goes beyond cursing. If you actually prefer the spelling “miniscule,” you must wrestle with autocorrect. And because humans adapt quickly to change (and even anticipate it), a human-edited dictionary like Merriam-Webster includes words that autocorrect doesn’t, such as “abridgement.”

Autocorrect fundamentally alters English. Since there are many ways to spell most English sounds, English spelling naturally tends to drift. Autocorrect slows this evolution, enforcing Standard English in spaces where novel or informal spellings would previously have gone unmolested. Indeed, a 2011 study concluded that in the two decades prior to the introduction of autocorrect, spellcheck was already largely responsible for an accelerating death rate of English words, while the birth rate of new words contracted sharply, shrinking the English lexicon outright[2].
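
To see what “word death” means in practice, here is a minimal sketch of how births and deaths might be flagged from yearly corpus frequencies. The words, frequencies, and threshold below are invented for illustration; the methodology of the study cited in [2] is considerably more involved.

```python
# Illustrative only: flag a word as "born" or "dead" depending on whether its
# relative frequency crosses a (hypothetical) extinction threshold over time.

yearly_frequency = {
    "whilom": {1990: 2e-7, 2000: 4e-8, 2010: 0.0},   # falls out of use
    "blog":   {1990: 0.0,  2000: 6e-6, 2010: 3e-5},  # enters the lexicon
}

EXTINCTION_THRESHOLD = 1e-7  # hypothetical relative-frequency cutoff

def status(word: str) -> str:
    freqs = yearly_frequency[word]
    first, last = freqs[min(freqs)], freqs[max(freqs)]
    if first >= EXTINCTION_THRESHOLD and last < EXTINCTION_THRESHOLD:
        return "died"
    if first < EXTINCTION_THRESHOLD and last >= EXTINCTION_THRESHOLD:
        return "born"
    return "stable"

for word in yearly_frequency:
    print(word, status(word))   # whilom died, blog born
```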

Nevertheless, autocorrect undeniably provides a net benefit. Using our smartphones would simply be impractical without it. However, a new class of input-assistant AIs operates on a level beyond spelling, affecting the very way we choose our words. These AIs cross into dangerous territory, threatening to render the English language into lexical pink slime.

In 2014, years after the iPhone’s initial release, typing on a smartphone apparently remained too slow[3]. With iOS 8, Apple launched a new feature called QuickType, a small bar above the keyboard which automatically suggests the next word in a sentence, dramatically reducing the need for typing itself. “Typing as you know it might soon be a thing of the past,” Apple promised[4]. For simple phrases like “on my way,” QuickType works perfectly, and for more complex phrases, its suggestions are often good enough.
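
For the technically curious, a toy next-word suggester captures the basic idea behind that suggestion bar: count which words tend to follow which, then surface the most likely candidates. The tiny corpus and bigram counts below are purely illustrative; QuickType’s real model is vastly larger and more sophisticated.

```python
# Toy next-word suggester in the spirit of a suggestion bar.
from collections import Counter, defaultdict

corpus = "on my way home on my way to work see you on my way".split()

# Count which word follows which.
bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(previous_word: str, k: int = 3) -> list[str]:
    """Return up to k most likely next words for the suggestion bar."""
    return [word for word, _ in bigrams[previous_word].most_common(k)]

print(suggest("my"))   # ['way']
print(suggest("on"))   # ['my']
```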

But that seemingly unimportant difference between the words QuickType suggests and the words you might otherwise have chosen is significant indeed. The many dimensions of our language — for example, the meaning, mood, personality, and bias implied by the nuances of the words we choose ourselves — defy quantification by AI. If QuickType homogenizes our choice of words the way autocorrect homogenizes our spelling, we become less expressive as a result. Our language will be limited to those factors which computers can detect and encode.

To understand how QuickType interacts with our language, consider its built-in biases. On a brand-new iPhone, the suggestions for “you are a good…” are “friend,” “person,” “man.” Similarly, the suggestions for “Filipino…” are “food,” “people,” “girl.” The suggestions for “homosexuality is a…” are “sin,” “good,” “crime.” Having learned our biases, QuickType amplifies their impact by suggesting them right back to us.

Meanwhile, Google has gone beyond the realm of mere words. Smart Reply, released in 2015, offers up entire suggested responses to a variety of emails, from event plans to proclamations of love. If a friend wants to schedule a call, Smart Reply will offer up three diverse replies to choose from: “Sure, what time?”; “Yeah, I’m down.”; and “Sorry, I can’t.” The chat app Allo, released in 2016, even uses Smart Reply to respond to photos — for example, in response to a photo of a smiling friend: “You look so happy.” When on the go, Smart Reply’s suggestions are clearly more tempting than those tiny virtual keys. As of 2017, 12% of emails sent through Google’s Inbox email app were written by Smart Reply[5].
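
One way to think about those “three diverse replies” is as ranking with a diversity constraint: score many candidate responses, but surface at most one per intent so the user sees genuinely different options. The sketch below is a guess at that general shape; the candidates, intents, and scores are invented, and Google’s actual pipeline is far more elaborate.

```python
# Illustrative sketch: pick the top-scoring reply from each distinct intent,
# so the three suggestions are not three variations of "yes."

candidates = [
    ("Sure, what time?",     "accept",     0.91),
    ("Yeah, I'm down.",      "accept",     0.88),
    ("Sounds good!",         "accept",     0.85),
    ("Sorry, I can't.",      "decline",    0.64),
    ("Can we do next week?", "reschedule", 0.52),
]

def pick_diverse_replies(cands, k: int = 3) -> list[str]:
    """Greedily take the highest-scoring reply from each unseen intent."""
    chosen, seen_intents = [], set()
    for text, intent, _score in sorted(cands, key=lambda c: -c[2]):
        if intent not in seen_intents:
            chosen.append(text)
            seen_intents.add(intent)
        if len(chosen) == k:
            break
    return chosen

print(pick_diverse_replies(candidates))
# ["Sure, what time?", "Sorry, I can't.", "Can we do next week?"]
```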

Smart Reply’s responses, though often technically correct in their meaning, are almost always markedly different from what we might have said on our own. Just like QuickType, Smart Reply homogenizes language.


Beyond that, Smart Reply raises serious existential questions. Whereas autocorrect and QuickType function while you type, Smart Reply steps in before you come up with a response — indeed, before you even see what you’re responding to. All you need to do is choose a response and press send. In fact, like sitting in the driver’s seat of a self-driving car, your involvement is largely optional — the first suggestion is generally correct enough (and will only improve as the AI learns), and Google certainly doesn’t need your help to press the send button. To put it bluntly, when you use Smart Reply, you are effectively voting for your own agency to be automated away.

 

These existential concerns extend to Smart Reply’s email recipients, as well. Given Gmail’s popularity, you may have already received emails generated by Smart Reply. Could you tell if you did? What does that mean for you, the reader, and for the Smart Reply user? If tools like Smart Reply gain more traction, our daily communications will become perverted Turing Tests for which we didn’t volunteer. Did your friend truly intend to call your baby photo cute, or were they simply accepting an AI’s suggestion? Increasingly, we will feel a nagging uncertainty as to whether we are interacting with a human or not.


The process described in this essay is not new. Technology has always exerted influence on language. The printing press, the railroad, and the internet have all contributed to the destruction of entire dialects and languages. What’s different today is the level of sophistication of the mediating technology. QuickType and Smart Reply would not have been possible without the computing power, advances in research, and the infinitudes of data available in the 21st century. More than ever, when AI researchers ask, “Can we?” the answer is “We can.”

 

We are increasingly able to bless AI with more and more agency (or more precisely, what we believe passes for agency). In many cases, this brings us great benefits. In employing AI, we might achieve previously unthinkable goals. We could end alienating, dangerous labor, or we could significantly improve medicine. But we must be extremely careful.

 

Like all intellectual endeavors, AI research is driven by curiosity rather than necessity. We must approach AI breakthroughs with skepticism. Not only does technology wield great power over us, it advances at an inhuman pace. Unless we anticipate its coming challenges, we will realize them only in hindsight.

 

 

Notes and References

[1] According to Apple’s patent 8,232,973, “Method, device, and graphical user interface providing word recommendations for text input.”

 

[2] Petersen, A. M., Tenenbaum, J., Havlin, S., & Stanley, H. E., Statistical Laws Governing Fluctuations in Word Use from Word Birth to Word Death.

 

[3] Indeed, Google found that “[t]he average user is roughly 35% slower typing on a mobile device than on a physical keyboard.” Françoise Beaufays, The Machine Intelligence Behind Gboard.

 

[4] Apple QuickType marketing page.

 

[5] Brian Strope, Efficient Smart Reply, now for Gmail.
