
The story behind the story: Legs-11

This week, Futures is very pleased to introduce you to Alfred. No, he’s not an author; he’s the key character in Legs-11, Hugh Cartwright’s latest story. Alfred is up to something — but what? Hugh is a retired Oxford chemist and you can find out more about him at his website. Here he reveals the sinister origins of Alfred — as ever, it pays to read the story first.

Writing Legs-11

In hospital? An AI is checking your X-rays. Bought a Tesla? An AI is your chauffeur.

Impressive stuff, but current AI programs are single-purpose: they can read an X-ray or drive a car, but can’t do both.

By contrast, we humans can tackle many different tasks (although not always successfully). A prime aim of AI research is to replicate human versatility.

So what happens when, some time in the future, we create an autonomous, multi-purpose AI? How do we stop it getting distracted, veering ‘off-task’ or behaving unpredictably? An AI that starts setting its own agenda — and acting upon it — might make decisions that seem to us stupid, random, even vindictive. And if the AI cannot, or will not, explain what it’s doing, its behaviour may be not just an irritation but a significant threat.

Alfred is such an AI. We don’t know why he has been delivered or what his goals are. He may be harmless, but he seems to have an agenda of his own, unknown to us. Why is he so keen to get upstairs? And why is his owner so eager to stop him?

Single-purpose AI tools have value, even without explanations. Once multi-purpose AIs arrive, embedded with their own goals, an inability to explain their operation may be the least of our worries. You’ve got an AI with an agenda, but it won’t tell you what that agenda is? You really are in trouble.

Alfred and friends are outside the door.
