
This article was originally published on the Tech Futures Lab blog December 30, 2017. Steven Skoczen is a writer and technologist with a depth of experience and expertise in artificial intelligence technologies. Find out more about Steven here.
A term coined by American computer scientist John McCarthy in 1956, Artificial Intelligence is used to describe everything from robotic process automation to actual robotics. It has received more attention recently in part due to the rise of big data, and the increasing ability of computers to extract actionable insights from data.
Tech Futures Lab sat down with AI expert Steven Skoczen to debunk some weighty myths about artificial intelligence (AI). We got to the bottom of the technology behind the buzzword, how it’s perceived and what we can expect in the future.
Myth 1: AI is just one thing
What is AI really? According to Steven, AI has become a buzzword to encompass many different streams of technology including data, code, business intelligence, cognitive computing and machine learning.
He says, “AI is a buzzword that encapsulates 70 different fields of computational processing. There is a huge array of fields that get lumped together as ‘AI’.”
Myth 2: AI is fundamentally different from computing as we know it
When it comes down to it, AI isn’t ‘new’, Steven says. In fact, the technologies behind this phenomenon have been around for decades.
“AI is just code and data, and we’ve had code and data for a long time. What we consider computing for business intelligence now was around in the 80s and 90s. In reality, we’ve had ‘intelligent systems’ for a long time,” he says.
Myth 3: AI means human intelligence
There is a misconception that AI is synonymous with human intelligence, and this is fundamentally not the case, Steven says.
“Intelligence isn’t a generalisable concept. We think of intelligence as human intelligence, and this is why AI gets lumped together with this picture of a person. However, there are some tasks - some levels of computing power - that we can’t and have never been able to match; and on the flip side, the more primal and instinctual something is for us, the harder it is to teach computers,” Steven says.
Moravec’s paradox explains this, he says. The concept came from AI and robotics researchers who discovered that high-level reasoning requires little computation, but low-level sensorimotor skills require huge computational resources. Hans Moravec, one of the founding researchers, wrote, "It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
Steven says, “There are things like empathy and connection that come easily to people, things that are generally deeply wired into human brains and are seemingly impossible to teach computers to do.”
Myth 4: Humans are going to be replaced by robots
“AI augments people, it doesn’t replace them - in the same way humans weren’t replaced by screwdrivers, hammers or even language,” Steven says.
Steven references a video from Vox, a news and media site focused on 21st century stories. The video presents an economist’s and a technologist’s views on how the rise of technology is shaping the future of work, and looks at how automation has affected jobs in the past. In the video the economist points out that research showing the decline of blue-collar jobs neglects the rise of new industries, higher productivity, and the opportunities workers gain that hadn’t even been considered before.
Steven says that if you look at the last 100 years, every decade we thought we were going to be replaced by robots, yet overall employment has stayed at roughly the same level throughout. While there is an argument to be made about Moore’s Law and the fact that we are in a time of exponential change, there is no data to support the claim that unemployment rates will skyrocket, and in the past decade the employment rate has not changed.
“We focus on the part where technology comes in and disrupts industries and people lose their job. But what we miss, the bigger picture, is that this disruption means business expands and new companies and opportunities are created.
“It comes back to the point that AI is just code and data, it’s not the second coming, it’s just computers. Your iPhone is magical but it doesn’t replace you,” he says.
Myth 5: The best technology has already been invented
“Today AI is where we were with PCs in the 1980s. It would be a mistake to look at AI as a done deal, as a finished set of technologies. AI is a combination of software and hardware, and in the same way that the end game of the PC was the iPhone, we have no idea where this technology will go next. Those who are going to invent the iPhone of AI technologies are in primary school,” Steven says.
“In a lot of ways AI is still in really early stages of development - it’s mainly used by big businesses, we’re still largely exploring what’s possible, and we’re asking a lot of questions about what it means to be ‘intelligent’,” Steven says.
AI: Mixed perceptions and a lot of potential
AI is instigating many conversations around the world about our own intelligence, what it means to be a human, and how new technologies will change our lives.
“We have questions about our own intelligence - what does it mean if computers can do what we can do? This is existentially scary for a lot of people. If computers get really good at what we consider human skills - if an AI avatar seems reasonably natural - what does that mean for us? This opens up interesting questions about our existence that other computer functionalities, such as spreadsheets, don’t,” Steven says.
“I see it as computers simply getting better, faster and smarter. Artificial Intelligence doesn’t have a technology problem, it has a marketing problem. If we had a different term, such as matrix computing, which is what a lot of it is, people wouldn’t be so scared.”
Steven says, “For me, AI is really exciting. We are seeing how relatively simple it can be to augment what we do, giving us new capabilities we don’t have. We all have cognitive biases - our brains are really bad at some things and have blind spots - and technology can be used to fill in those blind spots, in the same way our rear-view mirrors help us drive.”
“I deeply believe technology can make us better people - fairer, more equitable, and build a better world. The potential is there,” Steven says.