Also known as:
Machine learning, deep learning, neural networks, automation, automated decision-making, ‘smart’ technologies.
Artificial Intelligence (AI) is a field of computer science that seeks to create machines and software that can learn, reason, make decisions and act on their own to achieve their goals.
The term ‘artificial intelligence’ was coined by US computer scientist John McCarthy in 1956, for a project to clarify and develop the concepts around ‘thinking machines’. McCarthy described artificial intelligence as the ability of a machine or program to perform a task that, if a human carried out the same activity, we would say the human had to ‘apply intelligence’ to accomplish.
Since the 1980s, a significant strand of research – often referred to as ‘machine learning’ – has used human reasoning as a guide to provide better services and products. It aims to give computers the ability to learn from data and build models, so that they can perform activities such as prediction within specific domains.
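As a concrete illustration of what ‘learning a model from data to make predictions’ means, the sketch below fits a straight line to a handful of example points by ordinary least squares and then predicts an unseen value. The data and function names are invented for illustration; real machine-learning systems use far richer models and much more data, but the learn-then-predict pattern is the same.

```python
# A minimal 'learning' sketch: fit a straight line y = slope*x + intercept
# to example data by ordinary least squares, then use the learned model
# to make a prediction. All names and data here are illustrative.

def fit_line(xs, ys):
    """Return the slope and intercept minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# 'Training' data: hours of study vs. exam score (made up for illustration).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

slope, intercept = fit_line(hours, scores)

def predict(x):
    """Apply the learned model to a new input."""
    return slope * x + intercept

print(round(predict(6), 1))  # predicted score after 6 hours of study
```

Having ‘learned’ the relationship from five examples, the model can generalise to inputs it has never seen; this is the core idea that scales up, with more complex models, to the systems described below.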
The field is advancing rapidly, driven by growth in processing power, large increases in available data, and significant investment. The long-term potential of AI is virtually limitless: it offers significant scope to automate procedural and administrative tasks, and it underpins numerous other technologies. Current examples of AI in practice include voice recognition; ‘smart’ assistants such as Siri, Alexa, Cortana and Google Now; spam filters; automated and assisted cancer detection; automated image captioning; robotics; facial recognition; and smart devices.