Almost Human

BY KAREN GUZMAN | PHOTOGRAPHY BY BILL BURKHART AND STEVEN JACARUSO

The era of Hollywood’s fevered imagination has finally arrived. The computers are taking over. Machines, robots, artificial intelligence agents, whatever you call them, these technological “brains” are rivaling—and in some cases surpassing—their human counterparts. The big question today, as Wendell Wallach ’68 sees it, is this: Just how much control should we let them have? And what could go wrong along the way?

It’s not idle worry. Computers have already infiltrated critical sectors of daily life, and not always with the best of results. Computer systems govern trains and ATMs. They run power grids, monitor the stock market, and assist doctors weighing end-of-life decisions. Robotic arms perform delicate surgeries, while robotic vehicles dismantle roadside bombs in Iraq and Afghanistan. More than ever, we need our machines. And as the 21st century picks up steam, scientists in the emerging field of “machine morality” are now taking the next step. They want to imbue computers with the ultimate authority—the power to exercise judgment, make decisions, and act on them.

Skeptics declare such a step impossible. Decisions often call on a moral sense and an ethical framework. They can be tortuously difficult to make. Dependent on cultural and emotional contexts, they are the domain of humanity. A machine could never mimic the conscientious workings of the mind.

Optimistic researchers respond: Wait and see. Computers have already outpaced humankind in so many tasks. This is simply the next frontier.

Technological breakthroughs, of course, have always driven humanity’s progress—from the wheel to the cracking of the human genome. But the milestone looming now could be bigger. It may very well redefine what it is to be human, says Wallach, a recognized leader in the machine morality field and a consultant at Yale University’s Interdisciplinary Center for Bioethics. “We’re in the midst of a radical revisioning of human nature, human consciousness, human decision making, and ethics,” he said in a seminar at Wesleyan’s 2008 Homecoming and Family Weekend. Artificial intelligence has become a laboratory to test which human faculties can be reproduced in machines. And if they can be reproduced, Wallach asked, then what exactly are these faculties and what are we?

The road to reproducing ethical faculties in computers is fraught with pitfalls and uncertainty, but Wallach argues the time to start down it is now. “Within the next few years, we predict there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight,” Wallach and coauthor Colin Allen assert in their new book, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2009). This brings us to Wallach’s other big question: “Does humanity really want computers making morally important decisions?”

On the front lines of battle, an armed military robot detects an enemy soldier up ahead. The soldier is waving a white surrender flag. The robot has been programmed to hold fire when it spots a white flag, but the soldier is also carrying what looks like an assault rifle in his other hand. It is in fact a cane, because the soldier has been injured. Has the computer been designed to tell the difference? It fires.
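The failure mode in this scenario is easy to see in miniature. Here is a minimal, hypothetical sketch, invented for illustration and not drawn from any real weapons system: two perception cues feed a fixed rule order, and the rule order, not any moral judgment, settles the outcome.

```python
# Hypothetical sketch, not any real targeting system: two perceptual
# cues feed fixed rules, and rule order decides the outcome.

def decide(sees_white_flag: bool, sees_weapon: bool) -> str:
    """Return 'fire' or 'hold' from two boolean perception cues."""
    if sees_weapon:          # the weapon rule is checked first...
        return "fire"        # ...so it preempts the surrender rule
    if sees_white_flag:
        return "hold"
    return "hold"

# The injured soldier: a white flag in one hand and a cane the
# vision system misclassifies as a rifle in the other.
print(decide(sees_white_flag=True, sees_weapon=True))  # prints "fire"
```

Nothing in such a routine knows what surrender means; it only knows which rule fires first.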

In November of 1967 Wendell Wallach was navigating his senior year at Wesleyan. The campus, like many nationwide, pulsed with the turbulent energy and raw emotion of the times. The counterculture was in full swing; civil rights crusades shook up the status quo. Wallach’s worldview was changing, too. In a seminar entitled “Saint Augustine and His World,” he met regularly with a select group of campus literary types for probing discussions of Western and Eastern philosophies.

Then a close family member tried to commit suicide. This tragedy, taking place just as Wallach’s intellectual life was in ferment, led him to question the way he had learned to see the world.

“Something shifted in me,” he says, “and I began reflecting on the elasticity of the human personality and mind. While many in my generation asked how they could alter their consciousness, my question was, why, if this elasticity is possible, do we lock into one relatively fixed view of life?”

Unknown to Wallach, the seeds of his future book were planted then, but they would take years and some unexpected career maneuvers to flower. “I was a philosophy type. What’s fascinating is I’m much more science-centric than I ever would have imagined,” he says. “I’m not surprised that I’m in a multidisciplinary field.” A College of Social Studies major, Wallach discovered an interdisciplinary approach to knowledge that would become the foundation for his work in machine morality. In fact, he says, this approach is increasingly in demand as technology, including the specter of decision-making robots, asks us all to rethink key social policy issues.

“Many people don’t quite grasp the profundity of what’s going on,” says Wallach, who designed the first-of-its-kind course on “machine ethics,” which he has taught at Yale. “It’s difficult to detect all the influences of technology on social change.”

His multidisciplinary grounding also has been instrumental in his work at Yale, where he coordinates with researchers from the university’s various schools and disciplines. “I’m comfortable roaming from field to field,” he says.

Computer programs initiate millions of financial transactions on the world markets every day. They make decisions to buy and sell stocks, commodities, and currencies at a lightning-quick pace. The current financial crisis, in fact, is blamed in part on computers driving the markets at warp speed, leaving human observers in the dust, confused as to how we got here from there. It is not difficult, then, to imagine a computer, in its electronic quest for profit, stumbling upon an ingenious—though illegal—way to manipulate the markets. Programmed for profit, how would it know it had crossed a line?

Wallach describes the nascent field of machine morality—also known as roboethics, machine ethics, and friendly artificial intelligence—as two parts philosophy, one part cognitive science, and one part computer science. Programming computers with ethical decision-making abilities poses a real challenge for cognitive scientists: do we understand the human mind well enough to replicate some of its unique abilities? Computer scientists, meanwhile, are beginning to grapple with how to teach computers these abilities—if, in fact, machines can learn them.

The philosophical part drew Wallach to the field. After graduating from Wesleyan, he initially planned a career in law, hoping to specialize in bioethical legal issues. He was wait-listed at Harvard University’s law school but accepted by the university’s divinity school, so he enrolled there. At Harvard, he became immersed in the political activism of the day, taking part in the famed 1969 takeover of the university’s administration building and raising consciousness about social issues. He spent one year at Harvard Divinity School, another year in the Graduate School of Education, and a third year overseeing grants for a think tank addressing ethical issues in higher education.

After graduation, Wallach left for a stint in India. “I came out of the ’70s a quasi-itinerant, spiritual philosopher/therapist,” he says. Exploring processes of thought and cognition still interested him most.

By the end of the ’70s, Wallach was through with the itinerant lifestyle. He knew it was time to start building a career, and he also realized then that he wanted to write a book addressing his major fields of inquiry. Writing led him to try out the new word processors of the day, and his interest in computers was born. A job as a sales consultant for a computer company in Portland, Conn., began a few months before IBM introduced its first personal computer and Time magazine crowned the computer its “Machine of the Year.”

He carved out a career as an educational consultant, selling many school districts in Connecticut their first computer systems. He went on to become a computer consultant, founding two consulting companies with clients that included PepsiCo International, United Aircraft, and the State of Connecticut.

Then, almost 20 years later, the voice of a book called again. In early 2001, Wallach sold his interests in the consulting companies, determined to return to the lines of inquiry he had begun years earlier. Only the landscape had changed drastically. Massive strides had been made in cognitive science and in unlocking the mysteries of how the mind functions. At the same time, the rapid proliferation of computer systems was spurring ever-new technology.

In the 1990s Wallach had begun boning up on the academic disciplines he knew his writings would touch upon. He audited a course in political theory at Wesleyan and attended the university’s summer writing workshop. Political and social issues were sure to figure large in his work, as would their relationship with technological issues.

His studies laid the groundwork for his subsequent interest in machine ethics. He notes, for instance, that future public policy toward robots will be influenced by notions about how to measure their intelligence and moral agency. “But political factors will play the larger role in determining the issues of accountability and rights—and whether some forms of (ro)bot research will be regulated or outlawed,” he writes.

The issues surrounding machine ethics are becoming more pressing as the population of robots grows. By 2007, there were 6.5 million robots in operation worldwide, according to the International Federation of Robotics. This figure includes industrial and service robots. By 2011 the federation predicts more than 18 million robots will populate the world. Areas with strong predicted growth include defense, security, cleaning, and medical robots.

Taking a futuristic view, Wallach notes that as robots grow more sophisticated, two political issues may arise. Can robots themselves, rather than their manufacturers, be held liable for any damage they cause? And do sophisticated robots deserve recognized rights of their own? These questions, however, are contingent on researchers developing the technology to create fully independent robots.

In 2003 Wallach began consulting at Yale’s Interdisciplinary Center for Bioethics. Today he chairs a technology and ethics group, which has become one of the foremost of its kind in the country. At a workshop in Germany, he met Colin Allen, professor of cognitive science, history, and philosophy at Indiana University. Allen would become his coauthor. “Colin and I hit it off from the beginning, and we started to realize there was a real field in computers making moral decisions,” he recalls.

APACHE is a computer-based decision support model used by doctors in intensive care units to help determine treatment procedures. With a database recording the histories of hundreds of thousands of intensive care patients, APACHE recommends care options for doctors to consider for an individual patient. But there are factors—especially quality-of-life judgment calls—that critics say a computer will never be able to fully appreciate or weigh. In our increasingly litigious society, however, it is not difficult to imagine a day when doctors, even if they think otherwise, are reluctant to go against APACHE’s recommendations.

Isn’t this a slippery moral slope? An abdication of human responsibility to a machine?
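To make the critics’ point concrete, here is a toy sketch in the spirit of the description above. It is emphatically not APACHE’s actual method; the data and the nearest-cases approach are invented for illustration. Notice what never appears among the inputs: any quality-of-life measure.

```python
# Toy sketch (not the real APACHE algorithm): recommend treatment by
# matching a new ICU patient against outcomes of similar past cases.
# Quality-of-life factors are simply absent from the features.

past_cases = [
    # (age, systolic_bp, survived_aggressive_treatment)
    (72, 85, False),
    (70, 90, False),
    (45, 110, True),
]

def recommend(age, systolic_bp, k=2):
    """Recommend care based on the k most similar historical cases."""
    nearest = sorted(
        past_cases,
        key=lambda case: abs(case[0] - age) + abs(case[1] - systolic_bp),
    )[:k]
    survivors = sum(1 for case in nearest if case[2])
    return "aggressive treatment" if survivors > k / 2 else "palliative care"

print(recommend(age=71, systolic_bp=88))  # prints "palliative care"
```

Whatever a patient values about living, the model can only echo what happened to statistically similar bodies.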

It didn’t take long for Wallach to realize that he wanted to focus on machine ethics as a way of laying the groundwork for some larger philosophical issues. The book asks to what extent humans are machines, and whether higher-order human faculties can be reproduced in real machines. In particular, the book outlines the ethical components of decision-making and the various methods scientists may use to try to program them into computer systems.

But conveying the social, emotional, and cultural factors that go into moral judgments and decisions poses one of the biggest challenges. Can computers be taught to “understand” them? To appreciate right and wrong in a moral sense? Heidi Hadsell, president of Hartford Seminary and professor of social ethics, doesn’t think so.

“Ethics isn’t just about rules and norms. Ethics is applying them. It’s about contextual and subtle nuances ... things that can’t be programmed into a computer,” she says. “The morality will be lost in translation.”

Wallach calls himself a “friendly skeptic,” that is, friendly toward the can-do engineering spirit of the field while skeptical that we’re ready to reproduce higher-order human thinking. The machine ethics discussion today is long on broad overviews and philosophical debate, but short on the nuts-and-bolts knowledge and technology that can make it happen.

“No one has convinced me that we understand enough about human intelligence to be able to do this,” Wallach says. “We just don’t have the knowledge in our science today.” He doesn’t doubt that computer systems would be able to take in and analyze vast amounts of information. It is duplicating the other capacities, the “gifts of the soul”—consciousness, empathy, etc.—that is the sticking point. Nonetheless, computer systems are already making some independent decisions, and that poses an immediate problem. Observers fear a disaster is in the making, so they argue that imbuing computers with at least some semblance of ethical sense is an urgent necessity.

Wallach sees three major and immediate stumbling blocks. In addition to our limited understanding of what goes into the complex stew of human intelligence and decision-making, there are also technological and bioethical challenges. “As artificial intelligence advances, we’ll have to determine if we like where it’s going, and public policy could go against it,” he says.

The introduction to Wallach’s book depicts a doomsday scenario triggered by computers responding to financial market fluctuations. A series of interlocking, machine-generated decisions follows, resulting in a plane crash, massive blackouts, Homeland Security confusion, and automated machine guns picking off people on the U.S.–Mexico border. “Computer systems are already making choices and taking actions with less and less input from humans,” Wallach says.

Even though full moral capabilities for computers—if possible—are still a long way off, researchers have mapped out several routes to try imparting some basic rules. As a starting point, Wallach points to Isaac Asimov’s famed “Three Laws of Robotics,” which the author prophetically offered more than 50 years ago. They read: “1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
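The Laws invite an obvious computational reading: a strict priority ordering over candidate actions. The sketch below is a hypothetical toy, with every predicate invented for illustration; notice that all the hard work hides inside labels like harms_human, which the code simply takes as given.

```python
# Hypothetical toy: Asimov's Three Laws as an ordered filter over
# candidate actions. The hard part, deciding what counts as "harm,"
# is assumed away in the boolean labels.

def choose_action(candidates):
    """Return the first action surviving the three prioritized filters."""
    # Law 1: never injure a human (or allow harm through inaction).
    candidates = [a for a in candidates if not a["harms_human"]]
    # Law 2: obey human orders, unless that conflicts with Law 1.
    obedient = [a for a in candidates if a["obeys_order"]]
    if obedient:
        candidates = obedient
    # Law 3: self-preservation, unless it conflicts with Laws 1 or 2.
    safe = [a for a in candidates if not a["self_destructive"]]
    if safe:
        candidates = safe
    return candidates[0] if candidates else None

actions = [
    {"name": "fire on target", "harms_human": True, "obeys_order": True, "self_destructive": False},
    {"name": "stand down", "harms_human": False, "obeys_order": False, "self_destructive": False},
]
# The only obedient action would harm a human, so Law 1 removes it
# and the robot disobeys the order.
print(choose_action(actions)["name"])  # prints "stand down"
```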

But these laws, while intriguing, may be too simplistic in the ambiguous, conflicting landscape of real-life decisions. And as Wallach points out, Asimov was writing fiction.

Still, researchers worldwide are aggressively pursuing the necessary scientific and technological know-how. In Japan, the nation with the highest robot density, researchers are developing myriad service robots, and they are also working on an ambitious project to develop robots that look and behave like humans.

Driverless trains are already operating in major cities. Their existence puts a new spin on the classic “trolley car” dilemma introduced by philosopher Philippa Foot in 1967. In short: A runaway trolley approaches a fork in the tracks. If it goes one direction, a work crew of five will be killed. If it goes the other way, a single worker will be killed. There are variations on the theme: What if a bystander, not the trolley driver, can flip the switch and decide which track to take? What if the bystander could push a heavy man onto the tracks, stopping the trolley but killing the unfortunate man? In ethically tortured scenarios like these, what is the right thing to do? And if people are still wondering, how will a computer ever figure it out?
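A machine’s most tractable reading of the dilemma is bluntly utilitarian: count the expected deaths on each branch and pick the minimum. The sketch below is hypothetical and deliberately naive; its point is how much of the dilemma the arithmetic leaves out.

```python
# Hypothetical, deliberately naive sketch: the trolley dilemma as
# pure arithmetic, minimizing the expected death count.

def utilitarian_choice(branches):
    """Pick the branch with the fewest deaths."""
    return min(branches, key=lambda b: b["deaths"])

fork = [
    {"track": "left", "deaths": 5},   # the five-person work crew
    {"track": "right", "deaths": 1},  # the lone worker
]
print(utilitarian_choice(fork)["track"])  # prints "right"

# The same arithmetic endorses pushing the heavy man off the bridge
# (one death instead of five), a conclusion most people reject and
# one a death-count model cannot even register as different.
```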

In the new book Wallach is working on, he returns to the issues that intrigue him most—the big, philosophical ones that first stirred his imagination when he was an undergraduate at Wesleyan. Titled Cybersoul, it concentrates on the ways scientific discoveries are changing our understanding of decision-making, ethics, and the human mind. The first dimension addresses what science is revealing about how the mind works: advances in neuroscience and psychiatry over the last 20 years have revealed a far more complex, chemically nuanced system than researchers previously believed. “The work has only just begun,” he says.

The second dimension of Cybersoul digs into the speculative, philosophical realm where Wallach feels most at home. It asks how ways of knowing that lie beyond the scientific method can be reconciled with actual science. “How do scientific findings come together with the introspective and spiritual insights that people have?” Wallach says. “As someone who has meditated for the past 40 years of my life, it’s very clear to me that there are dimensions to what we call ‘consciousness’ that are not fully being explained by the kinds of science we have out there.”

Wallach believes a paradigm shift in human understanding and contemporary culture is necessary. In order to replicate human decision-making ability, we need to more fully understand all the dimensions of the human mind, including those aspects of consciousness glimpsed through means such as meditation. He contends that such an appreciation is coming, one that will embrace advances on the scientific and technological fronts while forging a new vision of human functioning and its potential.

In the end, the quest to give a part of our humanity to machines may push us to look all the more deeply and profoundly into ourselves. “The possibilities of technology are opening up to us just what we might become,” Wallach says. “Each of us is in the midst of trying to reconcile our beliefs and intuitions about what it means to be human with what science is showing us about how we function.”
