Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a machine that would have previously been considered to require human intelligence.
That’s obviously a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.
Modern definitions of what it means to create intelligence are more specific. Francois Chollet, an AI researcher at Google and creator of the machine-learning software library Keras, has said intelligence is tied to a system’s ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios.
“Intelligence is the efficiency with which you acquire new skills at tasks you didn’t previously prepare for,” he said.
“Intelligence is not skill itself; it’s not what you can do; it’s how well and how efficiently you can learn new things.”
It’s a definition under which modern AI-powered systems, such as virtual assistants, would be characterised as having demonstrated ‘narrow AI’, the ability to generalise their training when carrying out a limited set of tasks, such as speech recognition or computer vision.
Typically, AI systems demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem-solving, knowledge representation, perception, and motion and manipulation, and, to a lesser extent, social intelligence and creativity.
What are the uses for AI?
AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognise who and what is in a photo, to spot spam, or to detect credit card fraud.
What are the different types of AI?
At a very high level, artificial intelligence can be split into two broad types:
Narrow AI
Narrow AI is what we see all around us in computers today — intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so.
This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, or in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do defined tasks, which is why they are called narrow AI.
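To make the idea concrete, a narrow AI component can be as small as a classifier that learns one defined task from labelled examples rather than from hand-written rules. The sketch below is a hypothetical, minimal naive Bayes spam filter in plain Python; the training phrases, function names, and labels are all invented for illustration and are far simpler than any production system.

```python
import math
from collections import Counter, defaultdict

# Hypothetical toy training data for one narrow task: spam detection.
TRAIN = [
    ("win money now", "spam"),
    ("free prize claim now", "spam"),
    ("cheap meds win big", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch next week", "ham"),
    ("project status update", "ham"),
]

def train(examples):
    """Count word frequencies per label (a tiny naive Bayes model)."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Pick the label with the highest log-probability, using add-one smoothing."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        n_words = sum(word_counts[label].values())
        for word in text.split():
            score += math.log(
                (word_counts[label][word] + 1) / (n_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(TRAIN)
print(classify("claim your free prize", *model))        # -> spam
print(classify("status update for the project", *model))  # -> ham
```

The point of the sketch is the narrowness: the model is competent only at the single task its training data describes, and retargeting it to anything else means starting again with new labelled examples.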
General AI
General AI is very different and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets or reasoning about a wide variety of topics based on its accumulated experience.
This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn’t exist today – and AI experts are fiercely divided over how soon it will become a reality.
What can Narrow AI do?
There are a vast number of emerging applications for narrow AI:
- Interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines.
- Organizing personal and business calendars.
- Responding to simple customer-service queries.
- Coordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location.
- Helping radiologists to spot potential tumors in X-rays.
- Flagging inappropriate content online.
- Detecting wear and tear in elevators from data gathered by IoT devices.
- Generating a 3D model of the world from satellite imagery.

The list goes on and on.
New applications of these learning systems are emerging all the time. Graphics card designer Nvidia recently revealed an AI-based system, Maxine, which allows people to make good-quality video calls almost regardless of the speed of their internet connection. The system reduces the bandwidth needed for such calls by a factor of 10 by not transmitting the full video stream over the internet, instead animating a small number of static images of the caller in a manner designed to reproduce the caller’s facial expressions and movements in real time and to be indistinguishable from the video.
However, as much untapped potential as these systems have, ambitions for the technology sometimes outstrip reality. A case in point is self-driving cars, which themselves are underpinned by AI-powered systems such as computer vision. Electric car company Tesla is lagging some way behind CEO Elon Musk’s original timeline for upgrading the cars’ Autopilot system from its more limited assisted-driving capabilities to “full self-driving”, with the Full Self-Driving option only recently rolled out to a select group of expert drivers as part of a beta testing program.
What can General AI do?
A survey of four groups of experts conducted in 2012 and 2013 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50% chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90% by 2075. The respondents went even further, predicting that so-called ‘superintelligence’ – which Bostrom defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” – was expected some 30 years after the achievement of AGI.
However, recent assessments by AI experts are more cautious. Pioneers in the field of modern AI research such as Geoffrey Hinton, Demis Hassabis and Yann LeCun say society is nowhere near developing AGI. Given the scepticism of leading lights in the field of modern AI and the very different nature of modern narrow AI systems to AGI, there is perhaps little basis to fears that a general artificial intelligence will disrupt society in the near future.
Indeed, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and argue that AGI is still centuries away.