Artificial intelligence, or AI, refers to software used by computers to mimic aspects of human intelligence. This could include perceiving, reasoning, learning, interacting with an environment, problem-solving, creativity, and more. AI is a wide-ranging branch of computer science that aims to build smart machines capable of tasks that previously required human intelligence. An interdisciplinary field with many approaches, AI has seen major advances, particularly in machine learning and deep learning, which have the potential to transform industries across many sectors.
Specific use cases of AI include expert systems, natural language processing, speech recognition, and machine vision. Although AI is a broad subject, at its simplest it is a field that combines computer science and robust datasets to enable problem-solving. AI systems can reason about information and take actions that improve the chances of achieving a specific goal. AI is finding widespread use across the business world, with companies applying it to improve efficiency and profitability.
In 2022, AI became much more popular due to the introduction of generative AI models, such as OpenAI's ChatGPT. The release of powerful generative AI models offers consumers, developers, businesses, and researchers a new approach to many of the tasks they regularly perform. This includes generating text, code, and images.
The expansion of AI has also brought criticism over the possible loss of jobs, the potential of AI models to produce biased recommendations, and the difficulty of understanding how they arrive at an output. Researchers have found that AI models pick up the biases within the datasets they are trained on. AI systems are also opaque, meaning it is difficult to analyze how they made a decision or offered a recommendation.
While it is a rapidly growing industry, AI does not have a separate industry code in either the Standard Industrial Classification (SIC) system or the North American Industry Classification System (NAICS). However, the NAICS Code 541715 (Research and Development in the Physical, Engineering, and Life Sciences) includes fields that use artificial intelligence in research and development.
There are a number of ways to divide AI systems. A popular method includes four types, starting with the task-specific systems, already widely in use, through to a fully sentient artificial system:
- Type 1: Reactive machines—task-specific AI systems with no memory.
- Type 2: Limited memory—AI systems with memory that are capable of using past experiences to inform future decisions.
- Type 3: Theory of mind—a psychology term, when applied to AI refers to a system that has the social intelligence to understand emotions. This type of AI infers human intentions and predicts behavior.
- Type 4: Self-awareness—AI systems that have a sense of self, giving them consciousness. Machines with self-awareness understand their own current state. This type of AI does not yet exist.
Another classification divides AI into two categories: weak AI, or narrow AI, and strong AI, or artificial general intelligence (AGI).
Weak AI refers to AI trained and focused to perform specific tasks, and it makes up most of the AI in use today. These systems operate within a limited context, simulating human intelligence for only a narrowly defined problem, such as transcribing human speech or curating content on a website. The following are specific examples of weak AI:
- Siri, Alexa, and other smart assistants
- Self-driving cars
- Google search
- Conversational bots
- Email spam filters
- Netflix’s recommendations
Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). AGI is a theoretical form of AI where a machine would have an intelligence equal to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI)—also known as superintelligence—would surpass the intelligence and ability of the human brain. Strong AI has no practical examples in use today. The creation of strong AI is the goal of many AI researchers. However, some believe AGI should be limited, due to the potential risks of creating a powerful AI without appropriate guardrails.
Machine learning and deep learning are both subfields of artificial intelligence. Machine learning is a form of AI based on algorithms that detect patterns in data and make predictions or recommendations, learning from the data itself rather than following explicit programming instructions. Machine learning algorithms adapt to new data and experiences, improving their performance over time, and they take advantage of volumes of data far beyond what a human could comprehend at once.
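As a minimal sketch of how a machine learning model infers patterns from data rather than following hand-written rules, the following Python example (assuming scikit-learn is installed; the dataset and model choice are purely illustrative) trains a small classifier and scores it on examples it has not seen.

```python
# Minimal supervised machine learning sketch: the model learns decision rules
# from labeled examples instead of being given them explicitly.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)            # patterns are learned from the training data
print(model.score(X_test, y_test))     # accuracy on unseen examples
```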
Deep learning is a type of machine learning that processes a wider range of data resources, requiring even less human intervention and often producing more accurate results than traditional machine learning. Deep learning uses neural networks to make determinations about data and find the optimal response based on the data provided.
Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), consist of node layers, including an input layer, one or more hidden layers, and an output layer. Each node connects to another and has an associated weight and threshold. Outputs from any individual node above a specified threshold mean the node is activated, sending data to the next layer of the network.
Neural networks are a subset of machine learning and are the key component of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another.
Neural networks rely on training data to learn and improve their accuracy over time. During training, a network's weights and thresholds are initially set to random values. Data is fed through the input layer and passes through each succeeding layer, combining in complicated ways until it arrives, transformed, at the output layer. Weights and thresholds are continually adjusted until training examples with the same labels consistently yield similar outputs. By ingesting data and processing it through multiple iterations, neural networks can discover and learn increasingly complicated features within the dataset.
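The training loop described above can be sketched in a few lines of Python. The toy example below (using NumPy; the XOR dataset, layer sizes, learning rate, and iteration count are arbitrary assumptions) initializes the weights randomly, passes the data forward through the layers, and repeatedly adjusts the weights and biases to reduce the prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a pattern a single-layer model cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases (the "thresholds") start as random values.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass: data flows input layer -> hidden layer -> output layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: adjust weights and biases to reduce the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # predictions should approach [0, 1, 1, 0]
```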
Fine-tuned neural networks are a powerful tool in computer science and artificial intelligence, allowing researchers to classify and cluster data quickly. Common types of neural networks, each generally used for a different purpose, include those below; a brief code sketch of each appears after the list.
- Feedforward neural networks, or multi-layer perceptrons (MLPs), consist of an input layer, one or more hidden layers, and an output layer. Data is fed into these models to train them, and they are the foundation for computer vision, natural language processing, and other neural networks.
- Convolutional neural networks (CNNs) are similar to feedforward networks, but they are usually used for image recognition, pattern recognition, and computer vision. CNNs harness principles from linear algebra, particularly matrix multiplication, to identify patterns within an image.
- Recurrent neural networks (RNNs) are identified by their feedback loops. These learning algorithms are primarily leveraged when using time-series data to make predictions about future outcomes, such as stock market predictions or sales forecasting.
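The brief PyTorch sketch below shows minimal versions of the three network types listed above; the layer sizes and input shapes are illustrative assumptions rather than recommended architectures.

```python
import torch
import torch.nn as nn

mlp = nn.Sequential(                     # feedforward network (MLP)
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

cnn = nn.Sequential(                     # convolutional network for 28x28 grayscale images
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),
)

rnn = nn.RNN(input_size=8, hidden_size=32, batch_first=True)   # recurrent network for sequences

print(mlp(torch.randn(1, 784)).shape)          # torch.Size([1, 10])
print(cnn(torch.randn(1, 1, 28, 28)).shape)    # torch.Size([1, 10])
out, h = rnn(torch.randn(1, 20, 8))            # a sequence of 20 time steps, 8 features each
print(out.shape)                               # torch.Size([1, 20, 32])
```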
Introduced by Google in 2017, transformer neural network architecture has become the basis of leading generative AI models. In comparison to previous neural network architectures, transformers are better able to apply context and make use of parallel computation for faster throughput. Transformers learn context by tracking relationships across sequential data using a mathematical technique called attention or self-attention. These attention mechanisms make it possible to detect subtle ways distant data elements within a series influence or depend on each other by weighing the importance of each part of the input data differently.
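A minimal NumPy sketch of the attention computation described above follows; the sequence length, embedding size, and random projection matrices are illustrative assumptions. Each position's output is a weighted mix of every position's values, with the weights derived from query-key similarity.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # how strongly each position attends to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: each row sums to 1
    return weights @ V                                # context-weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16
X = rng.normal(size=(seq_len, d_model))               # stand-in for token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)            # (5, 16)
```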
Computer vision is a field of AI that aims to enable computers to identify and understand objects and people in images and videos. Like other types of AI, computer vision seeks to perform and automate tasks that replicate human capabilities. In this case, computer vision seeks to replicate both the way humans see and the way humans make sense of what they see. Computer vision has a range of practical applications, making it a central component of many modern innovations and solutions. It uses inputs from sensing devices, together with artificial intelligence, machine learning, and deep learning, to recognize patterns in visual data and then uses those patterns to determine the content of other images.
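As a hedged example of computer vision in practice, the sketch below classifies a local image with a pretrained network, assuming torchvision 0.13 or later with its bundled ImageNet weights and a hypothetical image file named photo.jpg.

```python
import torch
from PIL import Image
from torchvision import models

# Load a pretrained image classifier and its matching preprocessing pipeline.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()
preprocess = weights.transforms()

image = Image.open("photo.jpg").convert("RGB")   # "photo.jpg" is a hypothetical local image
batch = preprocess(image).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

label = weights.meta["categories"][probs.argmax(dim=1).item()]
print(label, round(probs.max().item(), 3))       # predicted ImageNet class and confidence
```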
Natural language processing (NLP) is a branch of AI that helps computers understand, interpret, and manipulate human language. NLP combines computational linguistics (rule-based modeling of human language) with statistical, machine learning, and deep learning models. NLP technologies enable computers to process human language in the form of text or voice data, understanding its full meaning, complete with the speaker or writer’s intent and sentiment. This includes taking into account the ambiguities of human languages, such as homonyms, homophones, sarcasm, idioms, metaphors, grammar and usage exceptions, and variations in sentence structure.
Common NLP tasks include the following:
- Speech recognition
- Part-of-speech tagging
- Word sense disambiguation
- Named entity recognition
- Sentiment analysis
- Natural language generation
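As a toy illustration of one of the tasks listed above, the pure-Python sketch below performs lexicon-based sentiment analysis; the word lists and scoring rule are simplistic assumptions rather than a production NLP approach.

```python
# Toy lexicon-based sentiment analysis: count positive and negative words.
POSITIVE = {"great", "good", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "awful"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The service was great and the food was excellent"))  # positive
print(sentiment("Terrible experience, the support was awful"))        # negative
```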
Generative AI is a field based on producing content using AI models, including the creation of new text, images, video, audio, code, or synthetic data. Generative AI models are trained on vast datasets to learn, for example, the patterns within natural language, the connections between natural language and images, and the links between natural languages and programming languages. Commonly used generative models include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and transformers.
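To make one of these model families concrete, the following PyTorch sketch trains a toy generative adversarial network on one-dimensional synthetic data; the network sizes, learning rates, and target distribution are illustrative assumptions. The same adversarial setup, scaled up with convolutional networks and image data, underlies image-generating GANs.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "data" to generate: samples from a 1-D Gaussian centered at 3.0.
def real_batch(n=64):
    return torch.randn(n, 1) + 3.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator: sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real, fake = real_batch(), G(torch.randn(64, 8))

    # Train the discriminator to tell real samples (label 1) from generated ones (label 0).
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator to produce samples the discriminator labels as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())   # mean of generated samples; should drift toward 3.0
```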
Advances in deep learning are expected to increase understanding of quantum mechanics, and quantum computers are in turn expected to accelerate AI. Quantum computers have the potential to surpass conventional ones in machine learning tasks such as data pattern recognition.
Semantic computing deals with the derivation, description, integration, and use of semantics (meaning, context, and intention) for resources including data, documents, tools, devices, processes, and people. Semantic computing includes analytics, semantics description languages, integration of data and services, interfaces, and applications. In AI, semantic computing involves the creation of ontologies that are combined with machine learning to help computers create new knowledge. Semantic technology helps cognitive computing extract useful information from unstructured data in pattern recognition and natural language processing.
The Internet of Things (IoT) refers to objects that connect and transfer data via the internet and to the sharing of information between devices. IoT-based smart systems generate large volumes of data, including sensor data valuable to researchers in healthcare, bioinformatics, information sciences, policy, decision-making, government, and enterprise. AI and machine learning can be applied to analyze this data and make predictions.
Some lines of AI research aim to simulate the human brain. Artificial life, or the animate approach, is concerned with the conception and construction of artificial animals as simulations or actual robots. It aims to explain how certain faculties of the human brain might be inherited from the simplest adaptive abilities of animals. Evolutionary computation is a generic optimization technique that draws inspiration from the theory of evolution by natural selection.
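A minimal sketch of evolutionary computation follows, evolving a bit string toward a target pattern through selection, crossover, and mutation; the population size, mutation rate, and target are arbitrary assumptions.

```python
import random

# Toy evolutionary search: evolve a bit string toward a target pattern.
TARGET = [1] * 20
POP, GENS, MUT = 30, 100, 0.05

def fitness(ind):
    return sum(a == b for a, b in zip(ind, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]                    # selection: keep the fittest half
    children = []
    while len(parents) + len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))
        child = a[:cut] + b[cut:]                       # crossover: combine two parents
        child = [bit ^ 1 if random.random() < MUT else bit  # mutation: occasionally flip bits
                 for bit in child]
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET))                 # best fitness found
```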
As AI technology has matured, it has seen significant integration and adoption within the field of radiology, the science of using x-rays or other high-energy radiation for the diagnosis and treatment of diseases. Reliance on AI-driven applications in radiology has risen substantially; in 2020, clinical adoption of AI by radiologists was 30 percent, up from zero in 2015. AI applications for radiology are mainly used for two reasons: to improve workflow and to provide clinical decision support. With workflow applications, radiologists use AI apps to gather patient reports and exams in one place in order to analyze patient information, making it easier to interpret. Clinical decision applications are able to do a wide variety of analyses, including data analytics, image reconstruction, disease and anatomy identification, and advanced visualization. Challenges with AI in radiology include concerns about integrating AI applications, especially with an influx of applications gaining regulatory approval in recent years.
The concept of artificial intelligence has existed since ancient times. In China, the mechanical engineer Yan Shi is said to have presented a life-sized human automaton around 1000 BC. In Egypt, the Greek mathematician Hero of Alexandria produced works on programmable automata that presaged the modern discipline of cybernetics. In 1637, René Descartes identified the division between machines that might one day learn to perform a specific task and those that would be able to adapt to any job, anticipating the modern distinction between narrow and general AI.
In 1956, the term "artificial intelligence" was coined by John McCarthy for his proposed research project to investigate whether features of intelligence could be simulated by computers.
Since the twentieth century, AI has developed continually and is still advancing. Technological advances have deepened the understanding of its underlying theories and forged the techniques essential to artificial intelligence. AI is now an integral part of the technology industry, and AI techniques continue to help solve challenges across many fields of science and technology.