A view From the Bridge

Robots I have known

Posted on behalf of Celeste Biever

Bonding with RoboThespian at London’s Science Museum.

You remember your first robot – at least, if you are as fixated on them as I am. A recent review of three books that explore the implications of artificial intelligence took me back to 2006 — and the machine that lit my obsession. It wasn’t pretty or even cute, though many automatons are. It was creepy: a four-legged metal crawler that could figure out how to limp if one of its legs was shortened.

At the time, I was a technology reporter for New Scientist with an assignment to write a news story about the quadruped. In order to limp, the robot first had to detect that something had changed. To do this, it maintained a software version of itself, which it constantly compared with the position of its real physical body. When the two no longer matched, it knew it had to modify its gait to cope with its new shape.

It seemed neat, even potentially useful, but not the stuff of philosophy — until a computational biologist I spoke to cast the machine in a new light. Because the robot built a model of itself that was distinct from its real physical body, he suggested that its creators had – perhaps inadvertently – given it a sense of self. With at least the semblance of an inner experience, he said, the robot provided a glimmer of what consciousness could look like in a machine. That was it. My world shifted, I understood the power of robots and I was hooked.
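
For the programmers among you, the comparison at the heart of the crawler can be sketched in a few lines. This is my own toy illustration, not the researchers’ code; every name and number in it is invented.

```python
# A minimal sketch of the crawler's trick: keep an internal model of where each
# leg should be, compare it with what the sensors report, and treat a persistent
# mismatch as damage that calls for a new gait.

DAMAGE_THRESHOLD = 0.1  # assumed tolerance, in arbitrary units


def find_damaged_legs(self_model: dict, sensed_pose: dict) -> list:
    """Return the legs whose sensed position no longer matches the internal self-model."""
    return [leg for leg, predicted in self_model.items()
            if abs(predicted - sensed_pose[leg]) > DAMAGE_THRESHOLD]


# Toy example: leg 3 has been shortened, so its sensed foot height disagrees
# with what the self-model predicts.
self_model = {"leg1": 0.0, "leg2": 0.0, "leg3": 0.0, "leg4": 0.0}
sensed_pose = {"leg1": 0.0, "leg2": 0.0, "leg3": 0.4, "leg4": 0.0}

if find_damaged_legs(self_model, sensed_pose):
    print("Body no longer matches the self-model: update it and search for a limping gait")
```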

Trading places with robot arm Abbie at Massachusetts Institute of Technology.

Until then, I had regarded robots as either disappointing gimmicks or cold, destructive Terminators. The crawler represented a third option: a way to figure out how humans work.

My next discovery was Nico, a metal skeleton that could recognise its reflection in a mirror. Dressed in a sweatshirt and baseball cap, he did this in a similar way to the crawler: he compared what he saw in the mirror to the movement commands he had just sent to his body, and looked for a match.

Nico didn’t really recognise himself. He just reproduced the behaviour – or so many biologists I spoke to insisted. But the Turing test challenges us to consider what the difference would be between behaviour that seems human and that of a real human. And undeniably, Nico classified his own reflection differently from the sight of anyone else.
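
The matching step can be sketched in the same spirit. Again, this is my own toy, not the team’s code, and the motions are made-up numbers standing in for whatever Nico actually tracked.

```python
# A rough sketch of the idea: label what the camera sees as 'self' only when the
# observed motion tracks the motor commands that were just issued.

def classify_reflection(commanded_motion, observed_motion, tolerance=0.05):
    """Return 'self' if the observed motion follows the issued commands, else 'other'."""
    if len(commanded_motion) != len(observed_motion):
        return "other"
    mean_error = sum(abs(c - o) for c, o in zip(commanded_motion, observed_motion)) / len(commanded_motion)
    return "self" if mean_error < tolerance else "other"


# The mirror reproduces the commanded arm movement almost exactly...
print(classify_reflection([0.1, 0.3, 0.2], [0.11, 0.29, 0.21]))  # -> self
# ...whereas someone else's movements do not follow the commands.
print(classify_reflection([0.1, 0.3, 0.2], [0.5, 0.0, 0.4]))     # -> other
```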

Nico set me on a roll. Weekly news meetings became a sport in which I competed with biology reporters to discover beings that chipped away at what it means to be human. They had scrub jays, birds clever enough to move their food stores to trick potential pilferers; I countered with a furry robot that passed a test for theory of mind and a wheeled rover that played hide-and-seek by deceiving its opponent.

Sparring with Jedibot at Stanford University in California.

These synthetic creatures had a crucial selling point: people had programmed them and so understood how they worked, making them the ultimate tool for discovering whether simple rules can produce complex behaviours. Unlike with animals, you know exactly what a robot’s psychological ingredients are.

My passion led me to shake hands with a knee-high pearly white humanoid as it stepped off the red carpet at the Robot Film Festival (in TriBeCa, New York, in 2011 – I was a judge). I fenced with an orange robot arm, Jedibot. And I mentally ‘traded places’ with another orange robot arm, Abbie, as we worked together to insert screws into a tabletop. The point was to see if the switch improved our ability to collaborate, a psychological trick that is known to work with human-only teams.

Throughout these adventures, my feelings towards robots have often differed dramatically from other people’s. Elon Musk and Stephen Hawking, for instance, are among those who have recently warned that artificial intelligence is risky because it could lead to Terminator-style killing machines. There’s also the fear, fashionable right now, that robots are on the brink of making human jobs redundant, leaving us with nothing useful to do.

Perhaps I should be more afraid. But I can’t help but think of one further entry in my robot diary: there is a piece of software that falls for the same optical illusion as people. Trained to estimate the lightness of a pixel based on examples of images it had seen, the program classified grey regions of an image as darker when placed on a white background and lighter when on a black one. This highlights the main reason I don’t fear robots: if you believe that we humans are just complex machines — and I do — aren’t we on some level just one big happy family?
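
The program in question learned its judgements from example images; as a stand-in, a hand-coded centre-surround toy of my own shows the same qualitative effect, simply by judging a pixel relative to its local average.

```python
# Toy illustration of simultaneous contrast: the same grey value gets a lower
# estimate when its surround is white and a higher one when its surround is
# black, because lightness is judged relative to the local average.

def estimated_lightness(centre: float, surround_mean: float) -> float:
    """Naive centre-surround estimate: centre value divided by the local average."""
    local_mean = 0.5 * (centre + surround_mean)
    return centre / local_mean


grey = 0.5
print(estimated_lightness(grey, surround_mean=1.0))  # ~0.67: lower estimate, i.e. judged darker
print(estimated_lightness(grey, surround_mean=0.0))  # 2.0: higher estimate, i.e. judged lighter
```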

Celeste Biever is Nature’s chief news editor. She tweets at @celestebiever.

For Nature’s full coverage of science in culture, visit www.nature.com/news/booksandarts.
