Robots Have No Hearts: And That Could Be a Very Dangerous Thing

One of the biggest fears that pops up when people think about the impending influx of robots, machine learning, and artificial intelligence (AI) is — will I still have a job? After ghostwriting about these issues for more than a year now, I know that’s not the issue we should be wringing our hands over when it comes to AI. Technology and automation have, historically, increased jobs rather than decreased them. A case in point is the ATM — its advent actually increased the number of live human tellers. (By the way, I will be sneaking in a few links to some of the articles I have ghostwritten on this topic — but I can’t tell you which ones they are — because then, I wouldn’t be a very good ghost!)

Before you take a big sigh of relief — on second thought, maybe you should take that sigh of relief and stop reading right now, so you won’t wake up with a panic attack at 4 am thinking about the ramifications of machine learning on our society — and our potential demise. Because that’s really what AI and machine learning could be bringing. How, you ask? Are we not smart enough to build safety mechanisms into those machines to make sure nothing chaotic ever happens?

It’s not that we aren’t proactive — it’s just that we cannot predict how a robot will continue to behave when it is programmed to “learn from its experiences.”

Machine learning is UNPREDICTABLE — and that should make you very frightened.

For example, Tay, a chatbot developed by Microsoft to post tweets, lived only 24 hours before Microsoft had to put an end to its social life. Designed to be a wholesome participant on Twitter, Tay learned in less than 24 hours how to become an “aggressive racist,” by mimicking the racist participants it interacted with. Regardless of what Microsoft intended, it could not prevent the bot from learning and advancing racist comments.

And it gets worse.

As Elon Musk told Vanity Fair, you can train a bot to get rid of spam. Soon, the bot figures out the best way to get rid of spam is to get rid of the cause — the PEOPLE WHO CREATE SPAM! Remember, bots never sleep — we can’t watch them 24×7.

Years ago, I worked with a client that was on the cusp of the Internet of Things (IoT). This involves placing chips into our everyday products, like a smart fridge that sends an alert to our phone (or Amazon) that we’re out of milk. Imagine having these chips embedded in the tools in your toolbox, so that the company could remind you of routine maintenance checks. More importantly, the chip would have geo-tracking capabilities: you would know exactly who you loaned your power drill to, so you could find it when you need it. From there, my mind crafted a story, in a dream that woke me up at 4 am this morning — those tools sneaking out of the basement and bashing our heads in while we’re sleeping, because some robot working on the spam problem sent its friends, “the tools,” on a mission of mass destruction. (OK — it was 4 am, and it was a crazy dream.) But, as Eliezer Yudkowsky convincingly explains:

“How do you encode the goal functions of an A.I. such that it has an Off switch and it wants there to be an Off switch and it won’t try to eliminate the Off switch and it will let you press the Off switch, but it won’t jump ahead and press the Off switch itself?” he asked over an order of surf-and-turf rolls. “And if it self-modifies, will it self-modify in such a way as to keep the Off switch? We’re trying to work on that. It’s not easy.”

But what about self-driving cars? Is there no limit to the destruction and havoc they could wreak on our short lives — if that were their overriding mission?

Or what about AI’s ability to skew our investments and our purchases of goods and services?

What we fail to comprehend is how quickly AI technology is advancing… we can’t see it because we’re not there yet. My work has made me aware that we are developing AI bots much faster now — because we have machine learning. Experts say we are now a few years, not decades, away from self-driving cars.

This is not a science fiction movie where the robots become evil and turn against humanity. Without empathy or hearts, robots cannot be harnessed to unleash anger against us (though yes, an evil programmer could program a bot to act that way). The problem is just that — the bots have no capacity for empathy. With a simple mission to perform a task, they are machines that have no ability to be human, but will stop at nothing to do what they were designed to do, and to continuously do it better — at whatever the cost. Mr. Musk says it best:

 Don’t get sidetracked by the idea of killer robots…“The thing about A.I. is that it’s not the robot; it’s the computer algorithm in the Net. So the robot would just be an end effector, just a series of sensors and actuators. A.I. is in the Net . . . . The important thing is that if we do get some sort of runaway algorithm, then the human A.I. collective can stop the runaway algorithm. But if there’s large, centralized A.I. that decides, then there’s no stopping it.”

Maybe the best policy is to take the approach of Steve Wozniak, who has wondered publicly whether he is destined to be a family pet for robot overlords. “We started feeding our dog filet…Once you start thinking you could be one, that’s how you want them treated.”

He has developed a policy of appeasement toward robots and any A.I. masters. “Why do we want to set ourselves up as the enemy when they might overpower us someday?” he said.

