AI comes in many forms, each at its own stage of development with its own definition, techniques and capabilities. Some forms – such as Artificial General Intelligence, AI super-intelligence or Strong AI, the kind of AI that might someday automate all work and that we might lose control of – live in the future and may never (some say will never) happen. Other forms of AI are doing valuable work today and are driving growth in the high-performance sector of the technology industry.
A key distinction is between “strong” and “narrow” AI. The latter is limited in scope to specific tasks and specific problems; that’s the current state of AI. Strong AI would enable machines and robots to handle multiple tasks and to integrate learning from multiple disciplines simultaneously, what’s referred to as an “emergent property.” Strong AI could, theoretically, take on human-like powers of intuition, emotion and empathy. R&D groups around the world are working on breakouts into early forms of strong AI, but for now, narrow AI is the rule of the day.
Another interesting point: AI has undergone several hype cycles since its emergence some 60 years ago, and each cycle ended with AI discredited, the bane of entrepreneurs and investors. As hot as AI is today, we hear talk that another “AI winter” may be in the offing. Some of these predictions rest on the unlikelihood of AI super-intelligence, but it’s also true that AI’s relative lack of technological maturity makes it extremely difficult to implement, requiring the specialized and expensive skills of data scientists under the direction of AI-savvy IT managers experienced in guiding AI projects to fruition. AI democratization, the integration of tools, techniques, technologies and support services that help overcome AI’s complexity, is critical to forestalling even a limited form of AI winter. It’s incumbent on technology vendors to bring AI within the skill range of more companies.
The following AI definitions aren’t meant to be the final word on AI terminology; the industry is growing and changing so fast that terms will change and new ones will be added. Instead, this is an attempt to frame the language we currently use. We invite your feedback in the hope of encouraging discussion and greater clarity, and we plan to update this list over time.
Our source for many of these definitions is a company well versed in AI: Pegasystems of Cambridge, MA, for more than 30 years a developer of operations and customer engagement software, and a company that studies the implications and impacts of AI in the workplace.
Artificial Intelligence – in Pegasystems’ definition, “is a broad term that covers many sub-fields of computer science that aim to build machines that can do things that require intelligence when done by humans. AI sub-fields include:
Machine Learning – rooted in data science, computational statistics, algorithms and mathematical optimization, machine learning is the ability of computer systems to improve their performance through exposure to data, relying on patterns and inference rather than explicitly programmed instructions. It is the process of automatically spotting patterns in large amounts of data that can then be used to make predictions: using sample training data, ML algorithms build a model for identifying likely outcomes.
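The train-then-predict pattern described above can be sketched in a few lines. This is a minimal, illustrative example, not any vendor’s method: it fits a one-variable linear model to invented sample data (the “training” step), then uses the learned model to estimate an unseen input.

```python
# A minimal sketch of supervised machine learning: fit a simple model
# to sample ("training") data, then use it to predict. The data points
# here are invented for illustration.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b to the training pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    a, b = model
    return a * x + b

# "Training": spot the pattern in historical data...
model = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
# ...then generalize to an input the model has never seen.
estimate = predict(model, 5)  # → roughly 10
```

Real ML systems use far richer models and much more data, but the workflow is the same: learn parameters from examples rather than hand-coding the rule.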
Deep Learning – a powerful machine learning technique, DL involves a family of algorithms that process information in multi-layered “neural” networks in which the output from one layer becomes the input for the next (hence the term “deep”). In so doing, each layer transforms data into a more comprehensive representation of the overall object. Deep learning algorithms have proved successful in, for example, detecting cancerous cells or forecasting disease – but with one huge caveat: there’s no way to identify which factors the deep learning program uses to reach its conclusion. This can lead to problems involving AI bias, data ethics and “algorithm accountability.” It has led some regulated industries and public-sector organizations, which must comply with anti-bias regulations, to abandon DL in favor of more transparent ML techniques.
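The layered structure that gives deep learning its name can be shown with a tiny forward pass. This is a hand-rolled sketch with fixed, arbitrary weights; a real network has many more layers and learns its weights from data.

```python
import math

# A minimal sketch of the "deep" in deep learning: data flows through
# stacked layers, and each layer's output becomes the next layer's
# input. The weights and biases below are arbitrary illustrative
# values; a real network would learn them from training data.

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1 / (1 + math.exp(-z)))  # squash into (0, 1)
    return outputs

x = [0.5, -1.2]                                       # raw input features
h = layer(x, [[0.8, -0.4], [0.3, 0.9]], [0.1, -0.2])  # hidden layer
y = layer(h, [[1.5, -1.1]], [0.05])                   # output layer
```

Note how `h`, the hidden layer’s output, is fed straight into the next `layer` call; stacking more such calls is what makes a network “deep.”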
Computer Vision / Image Recognition – the ability of computers to identify objects, scenes and activities in images, using techniques that decompose the task of analyzing an image into manageable pieces: detecting edges and textures, and comparing images to known objects for classification.
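One of those “manageable pieces,” edge detection, can be sketched with plain lists standing in for pixels. This toy example (the 4×4 “image” is invented) compares each pixel with its right-hand neighbor, so sharp brightness changes stand out.

```python
# A minimal sketch of one building block of computer vision: edge
# detection. Each pixel is compared with its right-hand neighbor;
# a large difference marks a vertical edge. The tiny grayscale
# "image" below is invented for illustration (0 = dark, 9 = bright).

image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def horizontal_edges(img):
    """Absolute brightness difference between neighboring pixels."""
    return [
        [abs(row[c + 1] - row[c]) for c in range(len(row) - 1)]
        for row in img
    ]

edges = horizontal_edges(image)
# The dark-to-bright boundary between columns shows up as a strong
# response (9) in the middle of every row: [0, 9, 0]
```

Production vision systems use learned convolutional filters rather than a single hand-written difference, but the idea of turning raw pixels into local features is the same.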
Natural Language Processing / Speech Recognition – the ability of computers to work with text and language the way humans do to, for instance, extract meaning from text/speech or generate text that is readable, stylistically natural and grammatically correct.
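A first step toward “extracting meaning from text” can be sketched with simple token counting. This is an illustrative toy, not a production NLP pipeline: it splits a sentence into tokens, drops common “stop words,” and surfaces the most telling terms.

```python
from collections import Counter

# A minimal sketch of an early step in natural language processing:
# tokenize text, filter out common "stop words", and surface the
# most frequent content words. The stop-word list and sentence are
# invented for illustration; real NLP goes far beyond counting.

STOP_WORDS = {"the", "a", "of", "to", "and", "is", "in", "that"}

def keywords(text, top_n=3):
    tokens = [w.strip(".,!?").lower() for w in text.split()]
    content = [t for t in tokens if t and t not in STOP_WORDS]
    return [word for word, _ in Counter(content).most_common(top_n)]

sentence = "The model reads the text, and the model ranks the text."
top = keywords(sentence)  # → ['model', 'text', 'reads']
```

Modern systems replace counting with statistical and neural language models, but tokenization and normalization of raw text remain the usual starting point.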
Cognitive Computing – a term favored by IBM, cognitive computing applies knowledge from cognitive science to build an architecture of multiple AI subsystems – including machine learning, natural language processing (NLP), vision, and human-computer interaction – to simulate human thought processes with the aim of making high level decisions in complex situations. According to IBM, the goal is to help humans make better decisions, not make decisions for them.
Robotic Process Automation (RPA) – software configured to automatically capture and interpret existing applications for processing a transaction, manipulating data, triggering responses and communicating with other digital systems, often used to handle repetitive office tasks, such as forms processing. The key difference…from enterprise automation tools like business process management (BPM) is that RPA uses software or cognitive robots to perform and optimize process operations rather than human operators.”
Artificial General Intelligence – as discussed above, this is strong AI that could ultimately result in some form of consciousness. Related to this is “the singularity,” another futuristic concept around the idea that super AI could trigger “runaway technological growth…, resulting in a powerful super-intelligence far surpassing all human intelligence.”
Note: an earlier version of this article was published on January 19, 2018.