What is AI?

Artificial intelligence (AI) is intelligence that’s demonstrated by machines, as opposed to the natural intelligence inherent in humans. While AI has been around for decades, it’s come to the fore in recent years and is now a hot tech topic with basic commercial applications readily available and more complex AI systems moving into the commercial arena.

What is AI?

The term artificial intelligence (AI) was coined in the mid-1950s, and for decades it remained in the research realm (much of it academic) without commercial applications. However, significant progress has been made in AI research in recent years, fuelled by the availability of huge amounts of computing power (in both individual computers and cloud-based systems) as well as vast amounts of data. AI has come into the mainstream.

John McCarthy (who is credited with coining the term AI and is widely regarded as the father of AI) spearheaded the Dartmouth Summer Research Project on Artificial Intelligence, whose 1955 proposal defined AI as “making a machine behave in ways that would be called intelligent if a human were so behaving”.

Fast forward over 60 years and there’s still no single accepted definition of AI to replace the very broad one outlined then. ESCP Europe professors Andreas Kaplan and Michael Haenlein published a succinct and not overly technical definition in the January-February 2019 edition of Business Horizons: “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”.

In a nutshell, computers use algorithms and data to simulate human intelligence processes (cognitive intelligence), like learning from experience, reasoning and problem-solving.

AI can use data collected through IoT devices or big data sources like social media applications or company databases. AI is also incorporated into these as well as many other types of technology, such as machine learning (more below), automation, robotics, autonomous vehicles, smart cities, machine vision (ie the science of allowing computers to see) and natural language processing.

AI terms

AI is a general term that’s widely and sometimes erroneously used. It is also known as machine intelligence, but machine learning is something different.

Machine learning is the science of getting a computer to act without being explicitly programmed: the algorithms learn through training, so the quality of the computer’s predictions improves with experience. It is not synonymous with AI but an application of AI.
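As a loose illustration (a minimal sketch in Python; the fruit measurements are invented for the example, and the open-source scikit-learn library is assumed), the program below is never given a rule for telling apples from oranges. It infers one from labelled examples:

    from sklearn.tree import DecisionTreeClassifier

    # Each example: [weight in grams, texture: 0 = smooth, 1 = bumpy].
    # The data is invented purely for illustration.
    features = [[140, 0], [130, 0], [150, 1], [170, 1]]
    labels = ["apple", "apple", "orange", "orange"]   # human-provided labels

    model = DecisionTreeClassifier()
    model.fit(features, labels)            # 'training': learn from the examples
    print(model.predict([[160, 1]]))       # -> ['orange'], a fruit it never saw

With more (and more varied) labelled examples, the model’s predictions would improve, which is the ‘learning from experience’ described above.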

Deep learning is a subset of machine learning that mimics the activity of the human brain. It uses artificial neural networks, algorithms that are inspired by the structure and function of the brain. As ever larger neural networks are built and trained with more and more data, their performance continues to increase – unlike other machine learning techniques whose performance reaches a plateau.
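To give a flavour of what an artificial neural network actually is, here is a minimal sketch in plain Python/NumPy (no deep learning framework) of a tiny two-layer network learning the XOR function. It is far smaller than the deep networks the term usually describes, but the ingredients are the same: layers of weighted connections, a non-linear activation, and repeated small weight adjustments to reduce the error:

    import numpy as np

    # Inputs and targets for XOR: the output is 1 only when the inputs differ.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer of 8 'neurons'
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))                 # non-linear activation

    for step in range(5000):
        h = sigmoid(X @ W1 + b1)                    # forward pass
        out = sigmoid(h @ W2 + b2)
        # Backpropagation: move each weight slightly downhill on the error.
        g_out = (out - y) * out * (1 - out)
        g_h = (g_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ g_out); b2 -= 0.5 * g_out.sum(axis=0)
        W1 -= 0.5 * (X.T @ g_h);   b1 -= 0.5 * g_h.sum(axis=0)

    # The predictions should approach [0, 1, 1, 0].
    print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2).ravel())

Deep learning stacks many more such layers and trains them on vastly more data.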

There are those in academia and industry who feel that the term AI has negative connotations. They believe the use of ‘artificial’ can cause the average person to have unrealistic fears and improbable expectations about AI, and a number of alternative terms are being bandied about.

Augmented intelligence is suggested as a more neutral term which emphasises that the technology is designed to enhance human intelligence rather than replace it. However, the fact that augmented intelligence has the same abbreviation as artificial intelligence isn’t ideal. IBM has suggested the term intelligence augmentation (IA) while other alternatives include machine-augmented intelligence and cognitive augmentation.

But the term AI is now so freely used, it’s unlikely that any alternative will gain widespread acceptance and traction. Will the average person appreciate the nuance, or even care what the ‘a’ in AI stands for?

Categorisations

While there are multiple classifications of AI, it is typically broken down into three categories:

  • Artificial Narrow Intelligence (ANI, aka weak or narrow AI) matches or exceeds human intelligence only in the specific area it’s been programmed to handle; in every other area it falls short of a human. Using the analogy of a virtual assistant, Alexa or Siri can recognise your voice and turn on the coffee pot (if it’s been properly set up!), but can’t carry out the full task of making a cup of coffee.
  • Artificial General Intelligence (AGI, aka strong or general AI) has human cognitive abilities and is able to find a solution to an unfamiliar task without human intervention. It therefore has human-level intelligence and can equal or outperform humans in several areas: Alexa/Siri has evolved into a humanoid robot that’s able to recognise your voice and make a cup of coffee.
  • Artificial Super Intelligence (ASI) is the ultimate AI that outperforms humans in all areas. Alexa/Siri is conscious and self-aware with superhuman abilities like being able to instantly solve complex maths problems or write a bestseller.

Kaplan and Haenlein also classify AI systems into three groups according to their characteristics:

  • Analytical AI only has characteristics consistent with cognitive intelligence. Such AI systems generate a cognitive representation of the world and use learning based on past experience to inform future decisions. Most AI systems in use today fall into this category.
  • Human-inspired AI has cognitive as well as emotional intelligence. The system can understand human emotions and take them into account in its decision making. As an example, MIT spin-off Affectiva uses advanced vision systems to recognise emotions including joy, surprise and anger as well as or better than humans. Companies can use this to recognise emotions during customer interactions and step in to improve the customer experience as necessary, or to assess candidates in job interviews.
  • Humanised AI has cognitive, emotional and social intelligence. As with ASI above, such an AI system would be self-conscious and self-aware in its interactions with others. Progress has been made in recognising and mimicking humans, but viable systems are a long way off.

Machine learning methods

A defining feature of AI systems is the ability to learn from past data, and most sources identify three broad types of machine learning process:

  • Supervised learning is a method of teaching by example. The AI system is trained on large datasets in which each input has been labelled (often by humans) with the correct output, so that once trained it can predict the correct output for inputs it has never seen; the fruit classifier sketched under ‘AI terms’ above is a simple supervised example. More recently, Generative Adversarial Networks (GANs) can be trained on a relatively small amount of data and then generate the vast amounts of new data they need to teach themselves. This can be thought of as semi-supervised learning.
  • In unsupervised learning the AI system tries to identify patterns in unlabelled data: it looks for similarities in the data (for example fruits that weigh a similar amount, or articles on similar topics) that can be used to categorise it. A famous example is Google’s 2012 experiment, in which a system exposed to 10 million randomly selected, unlabelled YouTube video thumbnails over three days taught itself to recognise images of cats. A minimal clustering sketch follows this list.
  • Reinforcement learning can be viewed as a process of trial and error: the AI system takes actions, receives rewards (or nothing) depending on the outcome, and gradually learns which actions lead to the best possible result. Microsoft uses reinforcement learning to select headlines on MSN.com; the system is ‘rewarded’ with a higher score when more visitors click on a given link. There are many examples of AI systems beating video games through reinforcement learning. In the case of Pac-Man, the system would simply be given the information that the character can move up, down, left and right and that the object of the game is to maximise the score. A reinforcement learning sketch also follows this list.
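Picking up the fruit example from the unsupervised learning bullet, here is a minimal clustering sketch (invented weights, using scikit-learn’s KMeans). No labels are provided; the algorithm simply groups similar items together:

    import numpy as np
    from sklearn.cluster import KMeans

    # Unlabelled data: just the weight of each fruit in grams (invented values).
    weights = np.array([[82], [85], [90], [148], [152], [155]])

    # Ask for two clusters; the algorithm decides the groupings itself.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(weights)
    print(kmeans.labels_)   # e.g. [0 0 0 1 1 1]: light fruit vs heavy fruit

Note that the system never learns what the groups are called, only that the items fall into two clusters; attaching meaning to them is left to a human.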
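And as a sketch of reinforcement learning, here is tabular Q-learning on a toy, made-up ‘corridor’ environment (not the Pac-Man or MSN.com systems described above). As with Pac-Man, the agent is told only which actions exist and when it is rewarded; trial and error does the rest:

    import random

    # Toy 'corridor' of 5 states; the only reward is for reaching state 4.
    N_STATES, GOAL = 5, 4
    ACTIONS = (-1, +1)                       # step left or step right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

    for episode in range(500):
        s = 0
        while s != GOAL:
            if random.random() < epsilon:    # occasionally explore at random...
                a = random.choice(ACTIONS)
            else:                            # ...otherwise exploit what's known
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s_next = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s_next == GOAL else 0.0
            # Trial and error: nudge Q towards reward + discounted future value.
            best_next = max(Q[(s_next, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s_next

    # The learned policy for each non-goal state: +1 means 'move right'.
    print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])

After enough episodes the policy settles on ‘always move right’, the behaviour that maximises the reward, without that rule ever being programmed in.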

The future

It’s clear that AI has come a long way in recent years, and equally clear that it still has a long way to go. There are many potential applications for AI that could revolutionise our lives in meaningful ways, but there is also considerable debate and disagreement about the extent to which AI should be regulated, developed and applied. Many well-regarded people have concerns about the possibility of weaponised AI or the rise of superhuman robots that take over the world, Terminator-style. One of them was Stephen Hawking, who said in 2016: “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity”.

About Sacha Kavanagh

Research Analyst / Technical Writer

Sacha has more than 20 years’ experience researching and writing about enterprise tech, telecoms, data centres, cloud and IoT. She is a researcher, writer and analyst, and a regular contributor to 5G.co.uk writing guides and articles on all aspects of 5G.
