3 Fascinating Things I Learned About AI From This Reddit AMA

Like Westworld, only a lot more complex.

Artificial intelligence has come a long way since its fanciful portrayals in science fiction and Hollywood films. As scientific advances accelerate and devices' capabilities grow in scope and accuracy, AI in even its basic form has penetrated nearly every facet of our daily life, from optimizing the results of your "election 2016" Google search to ordering dollhouses on Amazon's Echo Dot device. 

But the seemingly unrelenting pace at which AI is developing does little to assuage concerns about its potential threat to humanity. It also raises philosophical questions about the ethics of creating intelligence in artificial entities. These are topics that have been subject to plenty of discussion since the dawn of the technology, and if Joanna Bryson's Reddit AMA session on Friday is any indication, they still are.

Bryson is a professor of artificial intelligence at the University of Bath who also consults for the Institute of Electrical and Electronics Engineers (IEEE), the European Parliament, and the Organization for Economic Co-operation and Development (OECD). She invited Redditors to shoot her their burning questions about the science, engineering, and ethics of AI.

"While I was doing my PhD, I noticed people were way too eager to say that a robot — just because it was shaped like a human — must be owed human obligations. This is basically nuts; people think it's about the intelligence, but smart phones are smarter than the vast majority of robots and no one thinks they are people," she wrote.

As someone with only a rudimentary understanding of AI, here are the three most interesting things I learned.


1. On regarding robots as humans and granting them rights.

The anthropomorphizing of robots that look and behave like humans is often depicted in popular culture (think Westworld). Bryson took issue with our penchant for dehumanizing humans and anthropomorphizing machines. 

"I'm very worried about the fact that we can treat people like they are not people, but cute robots like they are people," she wrote. "You need to ask yourself — what are ethics for? What do they protect? I wouldn't say it's 'self-awareness.' Computers have access to every part of their memory, that's what RAM means, but that doesn't make them something we need to worry about."

Bryson noted how people sometimes identify with things they have nothing in common with, like robots, if those things look like us. "Even if we assumed we had a robot that was otherwise exactly like a human (I doubt we could build this, but let's pretend like [Isaac Asimov, popular sci-fi author] did), since we built it, we could make sure that its 'mind' was backed up constantly by wifi, so it wouldn't be a unique copy. We could ensure it didn't suffer when it was put down socially. We have complete authorship. So my line isn't 'torture robots!' My line is 'we are obliged to build robots we are not obliged to.'"

2. Designing human-like robots to serve us hints at an appetite for subjugating humans.

"So far the AI that is here now changing the world looks nothing like humans — web search, GPS, mass surveillance, recommender systems, etc.," Bryson wrote. "Vibrators have been giving physical pleasure for years, but some people want to dominate something that looks like a person. It's not good, but it's very complicated."

3. A robot's "morals" depend on the morals of the person or corporation that built it.

When a Reddit user asked Bryson how we should define "friendly" vs. "unfriendly" AI, she replied:

I would talk about in-group and out-group rather than friendly and unfriendly, because the real problem is humans, and who we decide we want to help. At least for now, we are the only moral agents — the only ones we've attributed responsibility to for their actions. Animals don't know (much) about responsibility, and computers may "know" about it but since they are constructed the legal person who owns or operates them has the responsibility.

"So whether a device is 'evil' depends on who built it," Bryson wrote, and who owns it. "AI is no more evil or good than a laptop."

Cover image via Oleg Doroshin / Shutterstock.com
