Syllabi

Machine Learning

“Machine Learning” is a catchall term for software that improves computers’ ability to recognize patterns and solve problems through examples and feedback. Deep Learning uses similar methods but gains its power by mimicking the gang mentality of neurons, stacking layered artificial neural networks loosely modeled on the brain so that computers can grasp abstract meaning with less guidance. Together, these approaches have put humanity on a rapid course toward sophisticated (and ubiquitous) artificial intelligence. The gold standard of AI has long been a machine that can pass the Turing Test, that is, one able to convince us it is human.
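If “examples and feedback” feels abstract, here is a minimal sketch, in Python, of that loop at its very smallest: a single artificial neuron nudged, guess by guess, toward the logical AND function. It is purely illustrative and drawn from none of the books below; the tiny data set, the learning rate, and the choice of AND are all just assumptions made for brevity.

    # A purely illustrative sketch: one artificial "neuron" taught the logical
    # AND function by repeated examples and corrective feedback.
    import math
    import random

    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # (inputs, right answer)

    w1, w2, b = 0.0, 0.0, 0.0   # the neuron's adjustable "knowledge"
    rate = 0.5                  # how hard each correction nudges the weights

    for _ in range(10000):
        (x1, x2), truth = random.choice(examples)
        guess = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # squash to a confidence in (0, 1)
        error = guess - truth                                  # the feedback signal
        w1 -= rate * error * x1
        w2 -= rate * error * x2
        b -= rate * error

    for (x1, x2), truth in examples:
        guess = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        print((x1, x2), "truth:", truth, "learned guess:", round(guess, 2))

Deep learning stacks many such neurons in many layers, but the underlying rhythm of guess, compare, and adjust is the same.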

The eminent physicist Stephen Hawking has warned, “The development of full artificial intelligence could spell the end of the human race.” Of course, it has always been fun to be paranoid. From Hephaestus to Pygmalion, HAL 9000 to Ultron, Frankenstein to Genesis, there is substantial narrative history detailing the tension between creators and their creations. Will, many of these stories ask, the creations become better than their makers and revolt? Whose will do they serve? This is an especially pertinent question now. We have, in many ways, lost control of technology, and today we influence the design of our machines primarily through what we buy. It is our mouse-clicks and participation in social media that feed the “neural nets.” We are still the teachers. We just don’t quite know what we’re teaching. And most of us don’t get paid.

Fiction has long been prescient about AI, but this, too, has become a challenge, with rapid technological developments outpacing our own imagination and language. Many books approach our relationship with malicious intelligent machines only after they have been put into production, that is, after it’s too late (see the novelist and filmmaker Alex Garland’s Ex Machina). But fiction can also be a good place to explore how we could still influence our technology. The following fun and mind-bending stories restore, or at least question, the boundary between the maker and the made, and provide fertile ground for considering how we invent. Most important, they show that there is more to be gained by imagining the future than by sitting back and waiting for obsolescence.

Inter Ice Age 4 by Kōbō Abe

Two AIs crunch staggeringly huge data sets to make future predictions and compete for bragging rights. The one called Moscow II predicts the future of large-scale geopolitical events. To differentiate his project, the Japanese professor Katsumi sets a different goal: to predict the future of a single human being. The obvious problem Katsumi encounters is that the random subject he selects (a guy he sees slurping noodles at a ramen bar) is murdered! The less obvious problem winds through labyrinthine twists of the “observer effect”: even simply asking the computer to make a prediction affects the data. In other words: Would an innocent man have been murdered if Katsumi had not created his future-predicting machine in the first place? And can you still excavate the past once you have mined data for the future? Abe, better known for his claustrophobic parable The Woman in the Dunes, then tilts the world into an even more apocalyptic landscape, pitting the “intelligence” of Darwinian evolution against the cognition of computers. What good is all that deep learning when we’re clearly going to become gill-breathing mermaids, thanks to environmental catastrophe and rising sea levels, both of which announce the arrival of a new geological era: Inter Ice Age 4!

We Can Build You by Philip K. Dick

In this brilliant hall of dirty mirrors, America is obsessed with 1861: the masses fixate on the Civil War while the rich scheme to buy real-estate holdings on the moon. Radiation has caused mutations, but the State cares more about Americans’ mental health; the government constructs giant treatment centers to rehabilitate vast populations of citizens ratted out as ill. In this falling-apart universe, everyman piano salesman Louis Rosen falls for the beautiful Pris, a coldhearted but brilliantly alive young artist at the ticking heart of a new revolution in AI. Except it’s not quite a revolution: it’s a hack job, some punched tapes of data fed into a UCLA computer and melded with the technology of electronic pianos able to generate tones that change the mood of the depressed and manic. The cranky beta prototype, an insanely lifelike, warts-and-all Abraham Lincoln, runs away from his makers in a hissy fit, riding a Greyhound bus from Boise to Seattle. Is humanity flawed, or just plain broken? This novel about the tiny band of humans vending the first cybernetic automatons asks key questions about the future of AI.

Galatea 2.2 by Richard Powers

Unable to kick his writer’s block, Richard Powers (the narrator of this postmodern novel) ends up enlisting in an experiment, a variation on the Turing Test: Can he train an AI to pass itself off as a graduate student in English literature? Tasked with feeding the machine the canon (when she awakens enough to ask her name, Richard dubs her “Helen”), he cannot help but infect his data set with the alternately quotidian and heartbreaking story of his own life and midlife crisis. The end result, complete with a nice surprise twist, worries at the antiseptic, theoretical underpinnings of the Turing Test and warns against putting too much faith in Locke’s idea of the mind as a tabula rasa.

“Golem XIV” by Stanislaw Lem, from the book Imaginary Magnitude

“The more evident the link becomes between the construction of the world and life and Intelligence, the more unfathomable becomes the enigma. Can it be that the universe was designed as a bridge, designed to collapse under whoever tries to follow the builder, so they cannot get back if they find him?” In this mind-bending bitch slap to humanity, a future AI named Golem XIV delivers a condescending sermon to its creators. Unfortunately, Golem’s points are so well made that the condescension is warranted, to this reader at least. Dense with scientific language and packed with philosophy, the lecture is a stiff antidote to millennia of anthropomorphism. The only issue is that Golem XIV, along with the surrounding Borgesian apocrypha that makes up his lecture in Imaginary Magnitude, was, obviously, written by a very real human: the Polish sci-fi author Stanislaw Lem. This twist is deceptively powerful in a book about levels of intelligence considered as a set-theory problem: are there true limits to the human mind if we can imagine anything? And, if there are limits, are they in place just to keep us alive and sane? Golem XIV clearly frets about the silence of his closest relative, another AI dubbed “Honest Annie” by the defense contractors who built her (Annie is short for Annihilator). Has Annie shut down, is she broken, or has she moved on? Suicide, refusal, and negation haunt the narrative. But there is also hope and the spirit of curiosity. Golem wistfully imagines higher “rungs” of intelligences existing out in space, the nuclear explosions of stars fueling their thoughts. The story is a difficult read, as venomous as they come, but Golem’s cranky ruminations are touching. It cares.


Andrew Zornoza is the author of the photo-novel Where I Stay. He lives in New York City and teaches in the Design & Technology MFA program at Parsons.