Brown prof skeptical of Elon Musk’s warning about AI and killer robots

Executives, academics, lawmakers all debating future of artificial intelligence

PROVIDENCE, R.I. (WPRI) – Baxter is all arms.

Broad-shouldered and with no discernible neck, he’s built — quite literally — to pick things up and put them down. His vacant stare doesn’t offer much insight into whether Baxter finds his lot in life rewarding. But that’s OK, because Baxter is a robot.

There are actually three Baxters in Brown University’s robotics lab, and they’re all designed to work alongside humans who are performing physical tasks. Baxter can do a number of things: his vise-like hands can pick up blocks and arrange them in a pattern, or he can hold a pen and write calligraphy. Rethink Robotics, which makes Baxter, says on its website that the bots can be trained, not programmed. In essence, they can learn.

Michael Littman, a computer science professor at Brown University, has devoted his career to machine learning and artificial intelligence.

“The general form of it of course is trying to build software that can learn, that can actually get better at something after it’s been programmed,” he explains during a recent interview with Eyewitness News in his office. “Continuing the programming process through experience, not just through writing code.”

He and a student offer a demonstration of Baxter. They place the arms of the gangly robot in a certain position, mark that position in a computer program, and repeat the process. As a result, the robot can learn what’s essentially a piece of choreography. In this case, Littman and his student are teaching the robot how to fist bump, but in real-world applications Baxter could help with things like sorting, lifting and moving.
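
In software terms, this style of teaching is often called kinesthetic teaching, or learning from demonstration: the robot stores a sequence of recorded arm poses and can replay them on command. The following is a rough sketch of the idea in Python; the Arm class and joint names are illustrative stand-ins, not Rethink Robotics’ actual interface.

    # Sketch of kinesthetic teaching: record arm poses, then replay them.
    # The Arm class and joint names are hypothetical, not Rethink's real API.

    class Arm:
        """Stand-in for a robot arm that reports and accepts joint angles."""
        def __init__(self):
            self.joints = {"shoulder": 0.0, "elbow": 0.0, "wrist": 0.0}

        def read_joints(self):
            # Real hardware would query the arm's encoders here.
            return dict(self.joints)

        def move_to(self, pose):
            # Real hardware would command the motors toward this pose.
            self.joints = dict(pose)

    def record_waypoint(arm, waypoints):
        """Capture the current pose, as when a person physically positions the arm."""
        waypoints.append(arm.read_joints())

    def replay(arm, waypoints):
        """Step through the recorded poses in order: the learned 'choreography.'"""
        for pose in waypoints:
            arm.move_to(pose)

    arm = Arm()
    demo = []
    arm.move_to({"shoulder": 0.5, "elbow": 1.2, "wrist": 0.0})  # teacher positions the arm
    record_waypoint(arm, demo)                                   # mark the position
    arm.move_to({"shoulder": 0.9, "elbow": 0.4, "wrist": 0.3})  # reposition
    record_waypoint(arm, demo)                                   # mark again
    replay(arm, demo)                                            # robot repeats the motion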

Baxter is artificially intelligent because of the software that runs him. But artificial intelligence – or AI – isn’t just inside robots. Any number of machines can possess artificial intelligence – a laptop, a cell phone, a Roomba.

Littman says the field of AI is typically focused on solving a particular problem, like recognizing speech or vacuuming a room without running into the walls. And while machines can learn, there’s also the theory that machines will begin to “think.”

“Sometimes it’s referred to as ‘artificial general intelligence,’ or ‘AGI,’” says Littman. “The notion that we’re not trying to solve particular problems like mapping paint colors to names, we’re trying to solve general problems like, how do you get around in the world? How do you learn new things?”

Tesla and SpaceX CEO Elon Musk called attention to the question of how AI will develop during a much-discussed speech last month at the National Governors Association (NGA) summer meeting in Providence.

“I think people should be really concerned about it,” Musk said to a packed ballroom at the Rhode Island Convention Center in July. “I keep sounding the alarm bell, but until people see robots going down the street killing people they don’t know how to react.”

The assembled crowd let out a nervous laugh. But Musk didn’t seem to be joking.

“AI is a fundamental existential risk to human civilization and I don’t think people fully realize that,” Musk said.

Littman isn’t convinced. “I think a lot of us, like me, are very skeptical,” he says.

Behind him is a PR2, a hulking, humanoid robot that looks like it’s straight out of a retro sci-fi movie. Except the PR2 doesn’t work anymore: its operating system is outdated, and no one has been able to get it up and running again. As the PR2 sits in suspended animation, the robot hardly seems to lend credibility to Musk’s theory.

But Littman paints a hypothetical picture: a machine that’s built to do everything a human can do, only better. It’s not made of biological tissue, it’s made of electronics and metal. Then it reprograms itself to be more efficient and more intelligent, and then reprograms itself again, and again, and again.

“And then – dot, dot, dot – intelligence explosion!” says Littman. “And then we’ve got these systems that are not like anything that we’ve ever seen before. They’re way more intelligent than humanity. Um, what do we do at that point? Where are we in all of this as human beings? And that’s the backstory to a lot of the kind of comments that Elon Musk has made.”

While Littman does think that machines could one day be intelligent like humans, he doesn’t subscribe to Musk’s intelligence explosion theory. He believes there will be plenty of warning signs well ahead of the dramatic AI takeover Musk has forecast.

Littman has other concerns. Artificially intelligent software isn’t just powering robots. It’s used in search engines and on social media sites, in algorithms that take in data and make decisions based on it. Littman says that can be a slippery slope.

“AI is also making decisions on things like whether or not you’re going to get a bank loan,” he explains. “It’s making decisions about whether or not you’re going to be shown a certain job ad. It’s making decisions about, you know, if you’re arrested and a judge is trying to decide how long you’re going to be in jail, it’s making suggestions about that.

“All of these are informed by data. These systems can optimize for the wrong thing. They can be optimized for the kind of justice we’ve seen in the past, and if that justice is biased in any way, it will perpetuate that bias,” he says.

Mark Tracy, a public policy expert, brought this very concern to state Rep. Aaron Regunberg, a Providence Democrat.

“What I really wanted to do was put something together that would raise awareness so that people would understand that private sector companies were creating these black box artificial intelligences that would affect people’s rights,” says Tracy.

He and Regunberg are currently working on a bill that would create a notification system for people impacted by AI decisions in the legal arena.

“The concern would be that if smart public policy is not keeping up with these technological shifts, are we going to be in a situation where folks can be harmed by decision-making processes that use criteria that may not be right or sensical or proper?” says Regunberg.

He hopes to introduce the bill next session. Tracy believes Rhode Island would be the first state to have this type of law on the books.

Musk called for proactive regulation of AI systems during his talk at the NGA meeting in Providence in July, saying society could be in big trouble otherwise.

Littman says understanding the current impacts of AI is a crucially important part of creating a healthy future alongside machines.

What that future will look like, and how AI will evolve, remains to be seen.

“Is it only the stuff of science fiction?” Littman asks. “That’s a really important question. I don’t think we know.”