They say that art imitates life. Nowhere is that more true than in movies. With machines becoming more and more intelligent and tragedies reshaping national borders, it’s easy to see why we look to silver-screen heroes – both real and imagined – to answer the world’s most pressing moral quandaries. Because we need them. But what about the flipside? What happens when cinematic blockbusters pose not the solutions to our troubles, but the underlying questions? These three Augusta University professors discuss how this year’s biggest titles affect their research, their teaching styles and their philosophy.
Wonder Woman
Heroines Aren’t Born. They’re Designed.
Princess. Warrior. One-time United Nations Ambassador.
Diana Prince, better known as Wonder Woman, has played a number of different roles within the wide and wonderful universe of DC Comics. She’s been, at times, a defender of women’s rights and a proponent of feminine strength. She’s also been the bane of bullies, and the lasso-wielding scourge of fascists and Nazis the world over. But it’s the roles Wonder Woman plays off-page – those of role model, symbol and heroine – that have made her the world’s most enduring female superhero.
Few people understand the power of Wonder Woman’s enduring legacy better than Dr. Ruth McClelland-Nugent. An associate professor of history and an expert in transatlantic pop culture, she has published a number of articles about Wonder Woman’s impact as a feminist icon and is currently working on a book proposal for a history of Wonder Woman as a pop culture icon. She collects Wonder Woman comics, toys, trinkets and knickknacks – many of which adorn the walls of her office – and has attended several conferences, both as a scholar and fan, to discuss the cultural significance of America’s Guardian Angel. As a lifelong fan of the Amazons’ greatest champion, McClelland-Nugent said Wonder Woman owes part of her longevity to her creator – psychologist and women’s advocate Dr. William Marston.
“Marston designed Wonder Woman explicitly as a kind of propaganda for increased participation of women in society,” McClelland-Nugent said. “His idea was that we need to value feminine contributions to the world to have balance.”
Marston, whose research into systolic blood pressure eventually led to the invention of the modern polygraph, firmly believed that strong, caring women could – and, more importantly, should – rule the world. He must have been disappointed, then, when in 1942 his red-, white- and blue-clad heroine joined the Justice Society of America as the team’s secretary. Like many modern female role models, though, Marston’s character pressed on in spite of her job title.
And that perseverance paid dividends.
Today, Wonder Woman is seen as one of the pillars of the modern Justice League and a symbol of strength for those marginalized by society. That isn’t necessarily new territory for the Amazing Amazon, however. She’s had a purpose since the day she was born. Or rather, created.
“The Amazons’ background is understood as this kind of cosmic background, part of a struggle between the god of war, Mars, and the goddess of love, Aphrodite,” McClelland-Nugent explained. “The two have this bet about controlling the world.”
Mars, in Marston’s telling, represented unrestrained masculine strength and was the guiding force behind Hitler’s Germany. Aphrodite, by contrast, represented the power of feminine strength (caring, compassion and the capacity for reform). To combat the rise of Nazism, Aphrodite endowed a race of women – the Amazons – with superhuman strength to help them battle the forces of evil. Among these women, Diana, princess of the Amazons, rose to the top.
Unlike the characters of Jerry Siegel and Joe Shuster (Superman) or Bob Kane (Batman), Wonder Woman was a well-defined character from the outset: a hero with Superman’s strength, Batman’s cunning, and the power to not only turn the other cheek, but to reach out a helping hand afterward. As such, her creator held her to a somewhat higher standard. She was expected to do two things which, at the time, seemed nearly impossible: bring down the Nazis and bridge the gap between the sexes.
In short, McClelland-Nugent said, “She’s a comic book hero who has always been about something.”
Whether that portrayal will hold up in the upcoming (Nazi-free) Wonder Woman film remains to be seen, however. McClelland-Nugent said she’s cautiously optimistic.
“Setting it during World War I was an interesting, possibly good, decision,” she said. “I like the appeal of her, a character who wants to end all wars, participating in the war to end all wars.”
However, without the presence of an overarching enemy like fascism or Nazism, McClelland-Nugent said, the character’s original focus is somewhat muddled.
It seems a missed opportunity, especially considering the choice of casting. After all, Gal Gadot – the Israeli actress portraying America’s Guardian Angel – is the granddaughter of Holocaust survivors.
A role can’t get much more personal, or purposeful, than that.
Blade Runner 2049
They carry our voices across continents in a matter of seconds. They can scan the full breadth of human knowledge faster than most of us can swallow a pill. They set our pace at work, guide our social interactions, and rarely stray more than a few feet away from us at any given point.
If our smartphones were any smarter, they might even be people. But, then, wouldn’t they also be slaves?
The answer, according to Dr. Brian Armstrong, assistant professor of philosophy, is a conditional “yes.”
While the bulk of his work centers on the writings of Dostoevsky, as a philosopher, Armstrong takes special interest in periods of “conceptual destabilization” – those times throughout history when thinkers and writers were unable to describe the world as they understood it using previously established philosophical concepts. With more and more artificial intelligences coming online every year, we may be fast approaching another such period. But before we decide what to call the concept of “human/nonhuman interaction morality,” Armstrong cautions that we must first nail down what makes a “thing” a “person.”
Sensation isn’t the answer.
“Your phone can take in information, and it can sense information about its external environment just as I can,” Armstrong said. “The idea is that, as a conscious being, I am aware of what that’s like, whereas the phone is not.”
(For example, your phone can register that temperatures tomorrow will be freezing, but you, as a living being, know and understand what freezing temperatures feel like against bare skin.)
It’s a fairly concrete division. But what could change it? What would it take for your phone or computer, or something even more complex, like, say, a replicant, to suddenly convert sensation into awareness?
[Replicants, for those left scratching their heads, are the main antagonists of Ridley Scott’s 1982 cult masterpiece Blade Runner and the upcoming Blade Runner 2049. Ultrarealistic androids designed to look, sound and think like people, replicants are hated by a majority of humans who fear they might someday blend unnoticed into the general population – a fear that might become a very real concern for us in the not-too-distant future.]
More importantly, if our thinking machines were to suddenly become understanding ones, would we become their masters instead of their users?
With technology advancing more rapidly year after year, the question is gaining momentum in the fields of science fiction and modern robotics. In philosophical circles, however, the concept of personhood – assigning human worth to something essentially “nonhuman” – has long been a topic of discussion.
So could a thinking machine count as a person? The answer is: not quite. Philosopher John Locke tells us why.
In one of his most famous works, “An Essay Concerning Human Understanding,” published in 1689, Locke attempts to define personhood by approaching the concept of “being” from a number of different angles.
“The most important of these is the notion of ‘consciousness,’ and Locke believed that consciousness tracked through time is memory,” Armstrong said.
The key to becoming a person was not simply to experience or understand a sensation, Locke believed, but to remember it from one moment to the next. This continuous string of consciousness was what differentiated people from animals or plants.
But how does one define continuous consciousness?
“Theoretically, I have a memory of every conscious experience I’ve ever had,” Armstrong explained. “So, how do I know I’m the same person now as I was back in 1982? Well, if we have the same memories, Locke would say we’re the same person.”
The theory, understandably, has some holes in it, like the fact that amnesia would essentially create a new person, but for the purposes of determining a thing’s “personhood,” it’s still a fairly solid metric.
Every week, headlines boast about the power of so-called “learning machines” (IBM’s Watson and Google’s DeepMind, to name just two). The difference between these machines and people, by Locke’s standard, is that, while they can learn to adjust their actions, machines do so only out of logical necessity. They don’t change course because they remember something frustrating, exciting or painful; they adjust because a chosen method is easier or superior from a purely mathematical standpoint.
It’s that metric – and perhaps that metric alone – that prevents our cars, phones and computers from becoming “people” and, in the process, making us unwitting slavemasters.
But that same metric is what makes the internal struggles of characters like Blade Runner’s Roy Batty – a replicant who can remember – that much more difficult for modern audiences to accept.
One speech, delivered near the end of the movie, sums up the dilemma.
“I’ve seen things you people wouldn’t believe,” Batty says, displaying for the first time that he not only has memories, but that those memories matter to him. “All those moments will be lost in time, like tears in rain.”
At this point, what is he? He isn’t a human – not quite, at least. But he’s clearly more than just a machine.
One day, our phones and computers may gain the same abilities Batty does in the original Blade Runner. Perhaps, as Blade Runner 2049 might demonstrate, they’ll gain ones we never considered. That possibility raises two very important questions.
Are we, as a species, ready to accept the quickly blurring lines between human and machine?
And if so, will we decide to treat our thinking machines as equals before they reach awareness, or will we continue to dismiss their experiences, unremembered, as something inconsequential – like tears in rain?
The answers are important.
One day, they might just be the memories we pass on.
World War Z
“This is no longer a democracy.”
To say one phrase changed the way Dr. Craig Albert (BA ’01) teaches international relations theory would not only be unfair, it would also be wildly inaccurate. The decision was shaped over years, formed piecemeal from a wealth of ideas, experience and various media. But those very words, uttered toward the end of season 2 of The Walking Dead, did become the final nail in the coffin of his old approach.
Albert, assistant professor of political science, now teaches a class on the international impact of zombies. Unlike his subject matter, though, the only brains he’s interested in are those of his students.
“Part of my teaching philosophy is to meet students where they are,” Albert explained.
To do so, he immerses himself in his students’ culture. He uses the same social media platforms they do. He listens to the same music, watches the same shows and regularly discusses pop culture with his classes. Part of the reason zombies work as a teaching aid, he said, is that students instantly recognize how dangerous they are.
“I can try to discuss [international relations theory] through nuclear annihilation, but … most students can’t grasp that type of spectacle because it’s just so far out there,” he said. “By bringing in zombies, I spark their interests, keep them engaged and discuss something that would lead to the same policies as a more likely example of global crisis.”
Another plus is that there’s no shortage of material. The Walking Dead. Night of the Living Dead. World War Z, both the novel by Max Brooks and the 2013 movie of the same name. Albert uses the novel and the film in his classes to illustrate separate but equally valuable points about the way people and societies react to tragedy.
“The movie is decent,” Albert said. “It gets literally everything wrong about the United Nations, about military capability, but it gets the human drama and emotional distress pretty accurate.”
By contrast, the novel – written by the son of legendary comedian and filmmaker Mel Brooks – provides a much more engaging story, a more “true-to-life” rendition of a global zombie epidemic from the standpoint of nations and their citizens.
Using both versions as a backdrop, students are required to write a short-story follow-up based in their own hometown, illustrating along the way what strategies and political theories they’d use to combat a zombie horde.
“World War Z provides a great vehicle to force creative, critical and analytical thinking in a fun way,” he said. “Students have turned in amazing stories – way better, in fact, than the information I read from research papers.”
As for the film’s upcoming sequel – World War Z 2 – Albert said he’s cautiously optimistic.
“I’m looking forward to seeing it,” he said. “I’d like to see some questions on human nature and more realistic accounts of government, but even if those aren’t there, it’ll be fun to watch.”
The movie is set to release in June, right in the middle of Albert’s summer class.
“I feel a class field trip might be needed,” he said.