Nature Future Conditional

The story behind the story: reCAPTCHA all over again

This week, Futures is delighted to welcome back Aaron Moskalik with his latest story, reCAPTCHA all over again. Regular readers will recall that Aaron has previously introduced us to eLiza and some Ghosts in the machine. You can find out more about his other work at his website. Here, Aaron reveals what inspired his latest tale — as ever, it pays to read the story first.

Writing reCAPTCHA all over again

Are you a robot?

We’ve all encountered the traffic light CAPTCHA. The first time I was asked to identify traffic lights, I thought it strangely … specific. But it never showed the same images twice, and I began to notice variations on the theme. Sometimes there were nine different images; other times it was one image broken into nine squares. And each time I encountered this strange test, it seemed to get progressively harder. Soon there were images where the traffic light was sideways or even facing away from the camera. Confounding elements were introduced, such as other devices of similar shape hanging from poles, or wires stretched across a street. I would occasionally get the answers wrong and have to prove I was human a second time.

Why make it so hard? Surely bots were not so sophisticated, or so determined to sign up for random accounts. When I heard a news story about Google using us to train its self-driving AI, it all made sense. I have mixed feelings about this. I am an indifferent driver, so the thought of machines taking over this duty while I read a book in the back seat is an appealing one.

My wife is not so sanguine about this prospect, and not just because she enjoys driving. What if your AI is hacked? Who is legally responsible when something goes wrong? We’ve already seen news stories. But there is also that niggling unease many of us feel that this will not stop with the practical tasks we don’t want to do.

It was way back in the 1980s when I first saw a plotter that replaced the work output of a drafter. It was fascinating to watch the pens fly across the table-sized piece of paper to produce a drawing of perfect fidelity that would have taken me hours to do. At this same time, I also saw CNC machines doing the work of a machinist in a similar manner.

It feels as if we are giving up control, step by step, to forces beyond our understanding, and this process had been going on long before AI or automation were even concepts. When was the last time you made something practical? Or even fixed something? Have you ever worried about laying in food stores for the winter? It was not so long ago that these were everyday concerns for most people. Now we place our trust in a globe-spanning system that provides for our every need. All we need do is contribute in some way. But what happens when all the ways we can contribute have been overtaken by more efficient machines?

The common trope that represents this fear is the android, a machine made to look and act just like us. I don’t believe such devices will ever be more than a curiosity akin to the automatons of the machine age. Or self-driving cars that need to understand traffic lights.

We humans are versatile beasts, but the System prizes specialization over versatility, so machines need not emulate us to be competitive. Indeed, computers can already best us at chess and Go, and they make inroads into creative pursuits like composing and writing every day. They beat us at our own games, one by one by one. How long before there aren’t any left?

This is not the most concerning aspect of AI, however. Even as machines compete with us on our own turf, we invest more and more importance in theirs, the virtual worlds of social media, cryptocurrencies, deep fakes … the list is ever-increasing. This is a domain where bots need not be physical to emulate and surpass us.

Given that, it’s only a matter of time until the desired answer to the question “Are you a robot?” will be “Yes”. But hey, at least we have the weekend. See you at the Greyhound. We’ll party like it’s 1999.
