How Stranger Things can make you smarter about OpenAI's GPT-3 language model

OpenAI’s GPT-3 continues to generate press as developers figure out how to build apps on top of its API.

I want to present a thought experiment that I hope helps characterize the limits of GPT-3's abilities. It is a Stranger Things version of a thought experiment created by Emily Bender and Alexander Koller (starts at the 5:44 mark). The original version is more concise, but mine is cooler because I use ’80s pop culture.

To be clear, my purpose with this essay is not to indulge in the increasingly popular pastime of bashing deep learning. My personal view is that what makes models like GPT-3 so valuable is that they act like a sort of oracle. Like the oracles of myth, one presents a payment and receives insights that are valuable yet inevitably lead to more questions. The price paid to GPT-3 is an unreasonably large text corpus and considerable fees for computation. In return, one gains hard evidence about what's technically possible from language models, where "possible" is typically measured by leaderboards for specific English-language benchmarks. However, once we know what's possible, we rely on thought experiments and other tools to understand how the model achieves those results, and, just as importantly, how it doesn't.

Thought Experiment: Dustin, Suzie, and The Mind Flayer

Two teenagers named Dustin and Suzie are in love. They maintain a long-distance relationship over CB radio.

Dustin and Suzie connect over CB

Dustin has an adversary: a Lawful Evil, extra-dimensional, octopus-headed being called a Mind Flayer. The Mind Flayer wishes to enter the human dimension and subjugate the humans, but Dustin and his allies have foiled its attempts.

The Mind Flayer

The Mind Flayer needs to defeat Dustin and his allies. However, it knows nothing about our world except that it contains teenagers like Dustin and other types of humans. So, to gather enemy intel, the Mind Flayer eavesdrops on the CB radio communications between Dustin and Suzie.

The Mind Flayer eavesdrops on the human teenagers.

After listening long enough, the hyper-intelligent Mind Flayer performs an extremely sophisticated statistical analysis of the numerous conversations shared between Dustin and Suzie. It uncovers statistical patterns in word usage far more nuanced and complex than any mere human algorithm could hope to capture.

Finally, the Mind Flayer uses its otherworldly abilities to block the transmissions between the couple. It then pretends to be Suzie in an effort to deceive Dustin.

The question is, can the Mind Flayer successfully deceive Dustin into thinking that it is Suzie? Whether or not the Mind Flayer can do this depends on what Dustin wants to discuss.

Construx, Autobots, and meaning

The couple has previously had many casual conversations about their various STEM-related hobbies. So when Dustin says he made a robot from a kit, the Mind Flayer can say, “Wow, cool! You made an electronic man! You know what? I made a geodesic dome out of Construx!”

“Electronic man” is a good colloquial definition of “robot.” Talk of “geodesic domes” and “Construx” is very much on theme. The Mind Flayer’s analysis revealed patterns among such words in the couple's previous conversations, and it used those patterns to generate this response. Dustin accepts the response as meaningful and doesn't suspect anything. The sophistication of the response, and the fact that it successfully deceives Dustin, demonstrates the Mind Flayer's advanced statistical learning abilities.
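To make the Mind Flayer's "statistical analysis" concrete, here is a deliberately crude sketch in Python. The transcript snippets and the bigram table are my own illustrative stand-ins for its far more sophisticated analysis; the point is only that a model of which word tends to follow which can already produce on-theme chatter.

```python
import random
from collections import defaultdict

# Toy stand-in for the Mind Flayer's analysis: record which word tends to
# follow which in the overheard transcripts, then babble on-theme replies.
transcripts = [
    "i made a robot from a kit",
    "i made a geodesic dome out of construx",
    "wow cool you made an electronic man",
]

# Count word-to-next-word transitions (a bigram table).
following = defaultdict(list)
for line in transcripts:
    words = line.split()
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)

def babble(start_word, length=8):
    """Generate an 'on-theme' reply purely from co-occurrence statistics."""
    word, reply = start_word, [start_word]
    for _ in range(length):
        if word not in following:
            break
        word = random.choice(following[word])
        reply.append(word)
    return " ".join(reply)

print(babble("i"))  # e.g. "i made a geodesic dome out of construx"
```

Nothing in that table refers to toys, physics, or anyone's intentions; it records only which words follow which. That is the sense in which the reply is "on theme" without being about anything.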

Now, suppose that Dustin is excited because he has learned how to construct a DIY version of Optimus Prime that can transform automatically. He wants to explain to Suzie how she can make her own DIY Autobot using the materials she has on hand. He asks the Mind Flayer, who he believes is Suzie, for her thoughts.

Now, the Mind Flayer will have trouble coming up with a technically meaningful response. The Mind Flayer knows that words like “Construx,” “kit,” “robot,” and many others have a subtle pattern of co-occurrence. However, these words cluster together because the couple is talking about “hobbies.” To say actionable things about “hobbies,” the Mind Flayer would need to understand human psychology and how humans physically interact with their world.

Worse yet, the Mind Flayer’s dimension has eldritch physics fundamentally different from the physics of the humans' dimensional plane. So the very idea of constructing a toy may be utterly unfamiliar. Indeed, the idea of robots that transform their shape (and why that’s cool) relies on the human world's physics and human culture to make any sense.

If it were truly Suzie on the other end, she would ask Dustin questions that gave her a high-level idea of how to go about constructing the Autobot. The Mind Flayer, lacking the cultural and physical frame of reference, cannot articulate such questions.

So instead, the Mind Flayer will fall back on those subtle statistical conversational patterns it has learned. In the past, when either party said things like “I have an idea for a cool project, here is how to do it... What do you think?” the other party said something like “Oh my gosh, this is so cool! I can’t wait to try it out!” If this is what the Mind Flayer says back to Dustin, it's not inconceivable that Dustin will accept it as a meaningful response from Suzie. However, that is not because the Mind Flayer had meaning it was trying to communicate; it couldn't. Rather, Dustin is projecting meaning onto the response, based on his beliefs and experiences with Suzie and other humans.

Finally, let’s raise the stakes. Suppose Dustin asks the Mind Flayer (who is posing as Suzie) to help construct a novel device to hack the computer systems within a high-security Russian military facility. He needs this information immediately so he can help his friends. He’s getting some engineering fundamentals wrong and needs Suzie’s help.

Now the Mind Flayer is in trouble. As before, it lacks the human frames of reference needed to inject actionable meaning into its response to Dustin. Falling back onto a statistical answer, as it did previously, would lead Dustin to detect the deception. There is no response the Mind Flayer could give based on statistical knowledge alone that the boy would find meaningful, given the stakes.

GPT-3 cannot understand meaning. If it were more like the Mind Flayer, it could.

Research from cognitive science and linguistics shows that humans understand the meaning of language within the context of conversations.  Transmission through the medium of conversation relies on interlocutors' shared context, beliefs about each other's beliefs and intents, and similar mental models of how the world works.  These conditions facilitate the transmission of meaning even across space and time, such as when a dead author speaks to me through her novels.

When GPT-3 "talks" to you, it has no intention.  It has no causal model of the world and the things it contains like "Russian military facilities"; even the strongest proponents of GPT-3 and similar models don't argue that their transformer network architectures are sufficient to learn causal models.  GPT-3 has no beliefs about your state of mind; states of mind were not part of the training data.  So it cannot understand meaning.

So when we submit something to GPT-3 through its API, it does not respond with intent or meaning.  Even the Mind Flayer had intent (deception).  Rather, we are tempted to imbue the response with meaning because it is grammatically sound and consistent with the input.
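For concreteness, here is roughly what "submitting something" looked like with the openai Python package that shipped around GPT-3's launch (its pre-1.0 interface); the engine name, prompt, and sampling parameters are illustrative assumptions, not a prescription.

```python
import openai  # the 2020-era openai Python package (pre-1.0 interface)

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

# Ask the model to continue a prompt. Under the hood it only samples likely
# next tokens; any "meaning" in the reply is supplied by the reader.
response = openai.Completion.create(
    engine="davinci",  # illustrative choice of GPT-3 engine
    prompt=(
        "Dustin: Suzie, I need help hacking the computers in a Russian base.\n"
        "Suzie:"
    ),
    max_tokens=60,
    temperature=0.7,
)

print(response.choices[0].text)
```

The completion will be fluent and on-theme, but, like the Mind Flayer's fallback reply, it carries no intent and no causal model of Russian military facilities; whatever usefulness Dustin finds in it, he puts there himself.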

I suppose GPT-3's inability to grasp intent, beliefs, and causality is where the analogy to Stranger Things fails.  The Mind Flayer in my example at least had intent (deceiving Dustin).  It could acquire human knowledge, beliefs, and mental models through mind control; in the show, it learns enough from its mind-control victims to successfully manipulate other humans.  It even learns enough about Earthly physics to make limber monsters out of flesh goo.

I'm looking forward to the next season!


Talk is cheap.  Are you interested in turning the ideas discussed here into a product?  We're hiring!
