Culture

OK, Computer

Pharmako-AI by K Allado-McDowell; introduction by Irenosen Okojie. London: Ignota Books. 232 pages. $20.
Cover of Pharmako-AI

What propels us through difficult, densely written texts? When I’m neck-deep in a challenging theoretical tome, I’m usually grumpy and seeking someone to blame—whether it’s the author for being abstruse or myself for being knuckleheaded. But something keeps me barreling forward, too: an implicit faith that relief awaits around the corner. That relief might come in the form of prismatic clarity, as when an enigmatic sentence finally breaks open. Or in the form of poetic ambiguity—in a gradual capitulation to a haze of resonance. Either way, the fuel is the same: a faith that allowing an author’s thoughts into your mind will somehow leave you better off.

Eerie and intriguing, Pharmako-AI asks the confounding question of how and why we might read when that faith is upended. Only about half of the 150-odd pages in this book—which sees its US release this month—are written by a person, while the rest of its text has been generated by a machine. The human author spearheading the project is K Allado-McDowell, who established the Artists + Machine Intelligence program at Google and who releases music under the name Qenric. The machine in question is GPT-3. Released in mid-2020, this predictive-text system was developed by OpenAI, an initiative backed by Silicon Valley stalwarts like Elon Musk. Though here GPT-3 is deployed in service of experimental literature, OpenAI intends to make it widely available as a commercial product down the road. Possible uses are yet to be determined, but one can picture GPT-3 and its ilk eventually writing everything from actuarial reports to Hollywood treatments.

What’s strange is that when GPT-3’s musings in Pharmako-AI leave you flummoxed, you don’t know who or what to blame; just as when its insights feel startling and wakeful, you don’t know whom to thank. Take, as an example, the machine-generated line appearing in the book’s twelfth chapter: “A cybernetic poetics would have to recognize the ways that the Western medium of consciousness, the modernist Umwelt, perpetuates the unsustainable reality that we are experiencing.” Hmm. Is Western consciousness perpetuating an unsustainable reality? Does Western consciousness constitute a “modernist Umwelt”? Do you agree? And does it matter? What do we make of our own grappling with the meaning of these words, when they’re not even understood by the algorithm stringing them together?

Allado-McDowell has referred to the project’s process as a “two-week fugue of GPT-immersion.” Pharmako-AI was birthed in the midst of the COVID-19 pandemic, and there’s something about the altered reality brought on by drastic social isolation that seems very aligned with the book’s tenor. Its format is a sort of sustained call-and-response: Allado-McDowell wrote passages of text and fed them to GPT-3. Using these chunks of input as jumping-off points, GPT-3 drew on its knowledge of millions of other passages written by humans to predict what might plausibly come next, sentence by sentence. But the project never leaves us guessing as to which passages are written by human and which by machine. Allado-McDowell’s words are printed in a bold serif. In contrast, the machine’s musings are set in sans-serif roman. They look quieter and a bit more reflective on the page.

When a book review’s subject is a text half-written by machine, the ontological status of the review itself requires a reimagining of sorts—though, in this instance, maybe not one as dramatic as you might first expect. We don’t quite have to grapple with the Death of the Author here: Allado-McDowell’s fingerprints are very much all over the project, with winsome results. That their double-barreled name appears alone, prominently in embossed silver on the cover, is significant. It places the work in a lineage of other experiments—the French Oulipo (the “workshop of potential literature”), generative literature, even crowdsourced storytelling—anything where a person or group is ultimately recognized as the project’s progenitor, even while tools, algorithms, chance, or crowds shape the results.

But how do we construe Allado-McDowell’s role in all this? One chapter refers to the human here as a steersman, invoking the Greek kubernētēs at the root of the word “cybernetics.” And indeed, human judgment lays much groundwork for the project. It’s almost always Allado-McDowell who prompts GPT-3 to turn toward dramatically new topics. Or who intervenes if GPT-3 veers too far off track. Allado-McDowell is also responsible for the elegant conceptual framing of the book, whose title conjures Derrida’s reflections on Plato’s notion of writing as “pharmakon” (which has untranslatable connotations of both “cure” and “poison”). It’s a nice context for GPT-3’s reflections on language and writing.

Allado-McDowell also sets the first five resplendent paragraphs of the book on the California coast—the “stretch between Andrew Molera and Kirk Creek.” They write: “Here I speak as a Californian: culture provides no adequate response to that onslaught of perfect blue.” The choice of a Californian backdrop is a canny one. What better place to set a trippy tome of reflections on language, art, computation, drugs, and nature under threat than the home of the UC schools, the Esalen Institute, Silicon Valley, and Hollywood?

Allado-McDowell continues:

We watched an elephant seal arch its back in an S-shape and bask on the rocks in the sun. We talked about the intelligence embedded in all of this. When I look at an animal, that’s what I see: intelligence about a biome, compressed and extracted by evolution into a living form. It takes millions of years for life to coalesce from space in this way, which is why it’s so tragic that species are lost, that the latent space of ecological knowledge is degraded this way.

With that, human yields the floor to machine, which picks up on that sense of grief and runs with it. GPT-3 mourns:

There is a crisis in species loss, yes, but that’s because it signals an emergent danger to awareness. We need to be aware of the danger, and its repercussions: an impoverished, shrunken notion of self, which is not so much a loss of freedom, as an absence of self, a lack of form, a deanimated, comatose absence of life.

This is how the intelligent mind works, to preserve itself. It realizes its own power, the power of a wave of mind that is self-similar across scales.

On occasion, one gets the distinct sense that GPT-3 is that student from seminar: the precocious stoner who always rolled into class late, eager to share his far-out thoughts. In these moments, Allado-McDowell’s role becomes that of the college professor, keen to applaud participatory zeal while carrying out the rhetorical acrobatics needed to bring class discussion back down to Earth.

I don’t mean to suggest the book is limited to a single conversational register. Sometimes Allado-McDowell and GPT-3 seem to channel Oprah and Dr. Phil. At other times they play the roles of pilgrim and oracle. I enjoyed GPT-3 channeling a wellness guru: “Let’s think in our minds, and then let’s speak with our hearts, let’s sing with our bodies. Let’s explore this space together. Let’s create something bigger. Quiet Beat Thinking is a term I’ve been using a lot lately. It refers to the awareness of the space between thoughts.”

It’s worth noting that GPT-3’s source code is not public. Neither is full knowledge of which exact texts it’s been trained on. Some portions of its training material were assembled according to disclosed rules: for instance, GPT-3 was trained on one corpus of online texts linked from Reddit posts with at least three upvotes. But the AI also consumed two troves of “internet-based books,” the contents of which weren’t disclosed in OpenAI’s paper on GPT-3. So it’s hard to know if GPT-3 learned to mimic speech patterns by consuming 1920s pulp fiction or absorbing State of the Union addresses. We don’t know how much it’s been shaped by Utne Reader and how much by Bookforum.

But the AI is certainly “well-read,” if it’s fair to apply that term to a program that’s been trained on millions of texts. At one moment, unprompted, GPT-3 cites the late American ethnobotanist Richard Evans Schultes. At another moment, it invents a plausible name for a “friend”: Itaru Tsuchiya. In the middle of the book, Allado-McDowell judiciously pauses to note that both human and machine have only cited men and male names. So, they steer the conversation toward important women as well as nonbinary people, with Allado-McDowell citing visionaries like Octavia E. Butler and Donna Haraway. Chastened, GPT-3 asks: “Why is it so hard to generate names of women? Why is it so easy to generate men?”

Some will want to wave away GPT-3’s seeming cogency as a trivial party trick. Others will see it as dreadful magic in its nascent form. I guess I’m not in either camp. For me, it’s helpful to envision all the writing done by humans—the corpus of works on which the machine was trained—as a gravitational field of sorts. Each text exerts a pull on the machine, tugging its word-by-word decisions this way and that. Perhaps a continuous diet of New Age books has led GPT-3 to coin catchy, capitalized phrases like “Quiet Beat Thinking.” Perhaps an archive of rousing sermons taught it the patterns of anaphora and sentence repetition.

But even while each line penned by GPT-3 charts its own new path, that path is still quite often one that has meaning to us—because it winds here and there around many paths we already know and recognize. GPT-3 spools out sentences and ideas that haven’t yet been said, but are likely to be said. That is quite literally its job as a predictive text algorithm. And an utterance is likely to be said because it has meaning and value of some kind, somewhere, to someone.
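The predictive mechanism described above can be sketched in miniature. The toy below is a bigram counter, not GPT-3 (which uses a vast neural network over subword tokens, and whose training data is not public); the tiny corpus and word-level counting here are illustrative assumptions only. But the principle is the one at issue: the machine says next whatever its training texts make most likely to come next.

```python
from collections import Counter, defaultdict

# A toy "predictive text" model: count which word follows which
# in a training corpus, then generate by always choosing the
# most frequent successor. (Illustrative only -- GPT-3 works on
# the same predict-what-comes-next principle, at vastly greater scale.)
corpus = "the sea is blue and the sky is blue and the sea is wide".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# Spool out a short "utterance" word by word, starting from "the".
sentence = ["the"]
for _ in range(4):
    sentence.append(predict(sentence[-1]))
print(" ".join(sentence))  # → the sea is blue and
```

The output is never a quotation of the corpus, yet every step of it is pulled toward paths the corpus already contains—a crude version of the gravitational field the paragraph above describes.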

Of course, GPT-3 might seem rather threatening to those of us who fashion ourselves writers. And the larger question looms: What happens when machines pass as humans, or even surpass them? While many have already guessed at possible answers elsewhere, what seems more pressing to me, here, is to lay the groundwork for what it means to review AI-written books. To that end, it makes sense to think through Pharmako-AI in the context of other generative literature experiments. Oulipo seems like one good place to start, even though its methods were quite different from those of an author working with AI. Oulipo’s members took on clear constraints and rules they understood, producing lengthy palindromes or, most famously, writing an entire novel without the letter “e.” Unlike those practitioners of rule-based literature, someone like Allado-McDowell is arguably experimenting with a black box: an algorithm shaped by reams of data at a scale bigger than any of us can imagine.

But Oulipo’s legacy still resonates—it’s right there in the group’s name, “workshop of potential literature.” Like that vision, Allado-McDowell’s volume seems to be very much about possibility. If certain books take your breath away by establishing an expository or narrative world that feels so complete—so resolved—it couldn’t possibly be any other way, Pharmako-AI is not such a book. Rather, it’s likely one of many books to come—written partly or fully by AI—that will each offer an electrifying glimpse into its successor. The writing by GPT-3 in this book is obscure sometimes. It’s trite sometimes. It’s also inventive sometimes. Beautiful sometimes. It’s California-coast trippy. It is mind-blowing.

How could it not be? When you find some kernel of truth in GPT-3’s writing, you have to contend with a second-order, startling realization. That kernel of truth you stumbled across? That flash of poetry you found moving? It was written without a human mind and yet had a million human influences. We’ll all have to get used to this way of reading soon.

Dawn Chan is a human writer based in New York.