Glossary

We make Smart Manufacturing a Reality

Here are short definitions of the most frequently used terms related to AI, in alphabetical order. If we missed any, please let us know!

Accuracy

In the context of manufacturing, and especially quality control, accuracy is defined as the ratio of correctly predicted instances (both true positives and true negatives) to the total number of instances.
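As an illustration, a minimal Python sketch of the calculation (the counts are made up, and, following the definitions of True/False Positives further down this page, "positive" means a defective part):

```python
# Hypothetical inspection counts; "positive" = defective part (see True Positives below)
tp, tn, fp, fn = 45, 930, 10, 15

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"Accuracy: {accuracy:.3f}")  # 0.975 -> 97.5% of all parts were classified correctly
```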

Artificial Intelligence (AI)

A branch of computer science that aims to create machines capable of intelligent behavior, including learning, problem-solving, and decision-making.

Autonomous Systems

Machines or devices that can perform tasks or make decisions without direct human intervention. These systems utilize sensors, algorithms, and often artificial intelligence to perceive their environment, analyze data, and autonomously adapt their behavior to achieve specific goals. Examples include autonomous vehicles, drones, and robots that can operate, navigate, and make decisions independently, based on their programming and real-time input from the surrounding environment.

Cloud Computing

A technology paradigm that involves the delivery of computing services, including storage, processing power, and software applications, over the internet. Instead of relying on local servers or personal devices for data storage and computing tasks, cloud computing allows users to access and utilize these resources through remote servers hosted in data centers. Characteristics include on-demand access, resource pooling, easy scalability, and payment based on actual usage.

Computer Vision

The field of AI that enables machines to interpret and understand visual information from the world, often used in tasks like image recognition and object detection.

Confusion Matrix

A table used to describe the performance of a classification model. It compares the actual target values with those predicted by the model, allowing you to see not just the number of correct predictions but also where the model is making errors.
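As a sketch, and assuming scikit-learn is available, the matrix can be built directly from actual and predicted labels (the label values below are made up):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical inspection results: 1 = defective, 0 = good
y_actual    = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_predicted = [0, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Rows are actual classes, columns are predicted classes
cm = confusion_matrix(y_actual, y_predicted, labels=[1, 0])
print(cm)
# [[3 1]   3 true positives, 1 false negative
#  [1 5]]  1 false positive, 5 true negatives
```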

Convolutional Neural Network

A neural network designed specifically for processing and analyzing visual data, such as images and videos. CNNs are particularly effective in tasks like image recognition, object detection, and classification.

Find more information about the algorithms used in visual inspection in our blog.
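For illustration only, here is a minimal convolutional network in PyTorch (assuming PyTorch is installed; the layer sizes are arbitrary) that classifies 64×64 grayscale inspection images as good or defective:

```python
import torch
import torch.nn as nn

# Minimal CNN sketch: two convolution/pooling stages followed by a classifier head
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 input channel (grayscale image)
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                  # two classes: good / defective
)

dummy_batch = torch.randn(8, 1, 64, 64)          # 8 fake images
print(model(dummy_batch).shape)                  # torch.Size([8, 2])
```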

Deep Learning

A subfield of machine learning that uses neural networks with multiple layers (deep neural networks) to model and analyze complex patterns.

Descriptive AI

Descriptive AI, also known as descriptive analytics, is a type of data analysis that looks at past data to give an account of what has happened. It involves summarizing and analyzing large datasets to identify patterns and relationships that may exist within the data. Descriptive AI is used in various applications, including natural language processing, computer vision, and machine learning.

Digital Twin

A virtual representation of a physical object, system, or process. It mirrors the real-world counterpart and is continuously updated with data from sensors, devices, and other sources. Digital Twins enable real-time monitoring, analysis, and simulation, offering insights into the performance, status, and behavior of the corresponding physical entity. Digital twins find applications in various industries, such as manufacturing, healthcare, and smart cities for optimizing processes, predicting maintenance needs, and improving overall efficiency.

Edge Computing

Edge computing refers to the practice of processing and analyzing data near the source of generation, rather than relying on a centralized cloud-based system. In edge computing, computing resources, including data storage and processing power, are located closer to the devices or sensors producing the data, typically at the “edge” of the network. Characteristics include reduced latency, real-time processing, bandwidth efficiency, and enhanced privacy and security.

Explainable AI (XAI)

Development of artificial intelligence systems that can provide understandable and interpretable explanations for their decisions and predictions. The aim of XAI is to enhance transparency and trust in AI models by making their decision-making processes more accessible and comprehensible to humans. This is particularly important in critical applications where the ability to understand and trust AI decisions is crucial.

False Negatives

In the context of quality control in manufacturing, a “false negative” refers to a situation where a product is incorrectly identified as passing a quality test and meeting all required standards when, in fact, it has defects or does not meet specifications. This means that the quality control system fails to detect a defective product, allowing it to pass through to the next stage of production or even to the customer.

False Positives

In the context of quality control in manufacturing, a “false positive” refers to a situation where a product is incorrectly identified as defective or failing a quality test when, in fact, it meets all the required standards and specifications. The quality control system mistakenly flags a good product as bad, which can result in the product being scrapped or undergoing unnecessary rework.

Generative Adversarial Network

An artificial intelligence model that consists of two neural networks, a generator, and a discriminator, trained simultaneously through adversarial training. The generator creates synthetic data, and the discriminator evaluates whether the generated data is real or fake. Over time, the generator aims to create increasingly realistic data, and the discriminator improves its ability to differentiate between real and generated data. GANs are widely used for generating realistic images, data augmentation, style transfer, and other tasks in generative modeling.
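A highly simplified PyTorch sketch of the two networks (the sizes are arbitrary; a real GAN also needs a full adversarial training loop):

```python
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to synthetic data
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: scores how likely a sample is to be real rather than generated
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

# During training the discriminator is optimized to separate real from generated
# samples, while the generator is optimized to fool the discriminator; the two
# objectives pull in opposite directions, hence "adversarial".
```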

Generative AI

This term refers to systems that can produce new content such as text, images, or other media. Unlike descriptive AI, which analyzes existing data to describe what has happened, generative AI has the ability to create new and original content. This technology operates on the principles of machine learning and deep neural networks, enabling it to comprehend patterns and produce novel content autonomously.

Human-Machine Interface (HMI)

The technology and tools that enable communication and interaction between humans and machines. In various systems and devices, HMIs provide a user-friendly interface, often visual, through which humans can monitor and control the operation of machines or processes. Examples include touchscreens, control panels, and graphical user interfaces (GUIs) that facilitate the exchange of information and commands between users and machines in industrial, automotive, and other applications.

HMI Simplification

The process of streamlining and optimizing the Human-Machine Interface (HMI) in a system or device. This involves making the user interface more intuitive, user-friendly, and efficient by reducing complexity, eliminating unnecessary features, and improving overall usability. The goal is to enhance the user experience, making it easier for individuals to interact with and control machines or systems, leading to improved efficiency and reduced potential for errors.

AI can play an important role in HMI simplification by reducing the number of adjustments a human has to make to a few critically important ones and then adjusting the rest automatically.

Internet of Things (IoT)

IoT is the network of interconnected physical devices (sensors, actuators, etc.) embedded with software to collect and exchange data. In manufacturing, the IoT is relevant to 1) quality control, where sensors integrated into the manufacturing process enable continuous monitoring of product quality, 2) predictive maintenance, where real-time monitoring of equipment health makes it possible to predict likely failures and to proactively schedule maintenance, as well as other applications, e.g., inventory management, energy efficiency, and supply chain optimization.

Large Language Models (LLMs)

A type of language model notable for its ability to achieve general-purpose language understanding and generation. LLMs acquire these abilities by learning billions of parameters from massive amounts of data, which requires large computational resources during both training and operation. ChatGPT is the most well-known example.
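As a sketch, a small open-source LLM can be queried in a few lines with the Hugging Face transformers library (assuming it is installed; the model name and prompt are arbitrary examples):

```python
from transformers import pipeline

# Load a small open-source language model for text generation
generator = pipeline("text-generation", model="gpt2")

result = generator("Predictive maintenance helps manufacturers", max_new_tokens=30)
print(result[0]["generated_text"])
```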

Machine Learning (ML)

A subset of AI that involves the development of algorithms that allow computers to learn patterns and make predictions or decisions without being explicitly programmed.

Natural Language Processing (NLP)

A subfield of artificial intelligence (AI) that focuses on the interaction between computers and human language. It involves the development of algorithms and computational models that enable machines to understand, interpret, and generate human-like text or speech. NLP encompasses a wide range of tasks, including language translation, sentiment analysis, text summarization, speech recognition, and question-answering systems. The goal of NLP is to bridge the gap between human communication and computer understanding, enabling machines to effectively process, analyze, and respond to natural language data.

Neural Network

A computational model inspired by the human brain’s structure, composed of interconnected nodes (neurons) organized in layers to process information.
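A minimal NumPy sketch of a forward pass through a two-layer network (the weights are random and untrained, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)                           # 4 input features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # hidden layer: 8 neurons
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)    # output layer: 2 neurons

hidden = np.maximum(0, W1 @ x + b1)              # ReLU activation
output = W2 @ hidden + b2                        # raw scores for two classes
print(output)
```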

Precision

In the context of manufacturing, and especially quality control, precision is defined as the ratio of correctly predicted positive instances (true positives) to the total number of instances predicted as positive (both true positives and false positives). With defective parts counted as positives (see True Positives below), high precision means that a product flagged as defective is indeed highly likely to be defective, so few good products are rejected by mistake.
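A minimal scikit-learn sketch (the labels are made up; 1 = defective):

```python
from sklearn.metrics import precision_score

y_actual    = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = defective, 0 = good
y_predicted = [1, 0, 1, 0, 0, 1, 1, 0]

# Of all parts flagged as defective, how many really are defective?
print(precision_score(y_actual, y_predicted))  # 0.75
```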

Recurrent Neural Network

A neural network designed for processing sequential data, where the output not only depends on the current input but also on previous inputs in the sequence. RNNs are commonly used in tasks such as natural language processing, speech recognition, and time series prediction.
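An illustrative PyTorch sketch using an LSTM, a common RNN variant, on a short, made-up sensor time series:

```python
import torch
import torch.nn as nn

# One LSTM layer: 3 sensor channels per time step, 16 hidden units
rnn = nn.LSTM(input_size=3, hidden_size=16, batch_first=True)

sequence = torch.randn(1, 20, 3)      # 1 sequence, 20 time steps, 3 sensors
outputs, (h_n, c_n) = rnn(sequence)

print(outputs.shape)                  # torch.Size([1, 20, 16]) -- one hidden state per step
print(h_n.shape)                      # torch.Size([1, 1, 16])  -- final hidden state
```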

Reinforcement Learning

Reinforcement Learning (RL) is a machine learning paradigm where an agent learns to make decisions by interacting with an environment. The agent takes actions, receives feedback (rewards or penalties) from the environment, and adjusts its strategy over time to maximize cumulative reward. The goal of reinforcement learning is for the agent to discover an optimal policy—a set of actions that yields the highest overall reward in a given context. RL is commonly used in applications such as game playing, robotics, and autonomous systems.
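A minimal sketch of tabular Q-learning, a classic RL algorithm (the environment below is a made-up stand-in):

```python
import random

n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.2            # learning rate, discount, exploration
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: nudge the estimate toward reward + discounted future value
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state
```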

Sensitivity (Recall)

In the context of quality control, sensitivity is defined as the ratio of correctly predicted positive instances (true positives) to the total number of actual positive instances (both true positives and false negatives). With defective parts counted as positives, sensitivity therefore measures the percentage of defective products that are correctly identified as defective; the remainder are false negatives that slip through as good.
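A minimal scikit-learn sketch (the labels are made up; 1 = defective); in scikit-learn this metric is called recall:

```python
from sklearn.metrics import recall_score

y_actual    = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # 1 = defective, 0 = good
y_predicted = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]

# Of all truly defective parts, how many did the system catch?
print(recall_score(y_actual, y_predicted))  # 0.8
```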

SHAP Value

SHAP (SHapley Additive exPlanations) values are a concept from cooperative game theory applied to machine learning. In the context of model interpretability, SHAP values provide a way to fairly distribute the contribution of each feature to the prediction made by a machine learning model. They offer insights into how much each feature contributes to the model’s output for a specific instance, allowing for a more understandable and transparent interpretation of model predictions. SHAP values are widely used in explaining the output of complex models, such as ensemble methods and deep neural networks.
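As a sketch, assuming the open-source shap package is installed and using a tree-based model trained on a small made-up feature table:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: 200 parts, 4 process features, binary defect label
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree-based models;
# the returned per-feature contributions explain each individual prediction
# (the exact output format varies slightly between shap versions)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
```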

Specificity (True Negative Rate)

In the context of quality control, specificity is defined as the ratio of true negatives (TN) to the total number of actual negative instances (both true negatives and false positives). It therefore measures the proportion of actual non-defective products that are correctly identified as non-defective.
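A minimal sketch (labels are made up; 1 = defective, so the negatives are the good parts); scikit-learn has no dedicated specificity function, so it is computed from the confusion matrix:

```python
from sklearn.metrics import confusion_matrix

y_actual    = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # 1 = defective, 0 = good
y_predicted = [1, 0, 0, 1, 1, 0, 0, 0, 0, 0]

tn, fp, fn, tp = confusion_matrix(y_actual, y_predicted).ravel()
specificity = tn / (tn + fp)
print(specificity)   # share of good parts correctly confirmed as good (here 6/7)
```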

Supervised Learning

A type of machine learning paradigm where a model is trained on a labeled dataset, which means that each input in the training data is associated with a corresponding target or output. The goal of supervised learning is for the model to learn the mapping or relationship between the input data and the desired output by generalizing from the labeled examples.
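A minimal scikit-learn sketch (the data is random and purely illustrative); the labels y are what make this supervised:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # inputs: 3 measured features per part
y = (X[:, 0] > 0).astype(int)            # labels: 1 = defective, 0 = good

model = LogisticRegression().fit(X, y)   # learn the mapping from inputs to labels
print(model.predict(X[:5]))              # predictions for (here: the first 5) parts
```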

Training Set

A subset of a dataset used to train a machine learning model. It consists of a collection of input-output pairs or examples, where the input data is used to teach the model, and the corresponding output (or target) provides the expected result. During the training process, the model learns patterns, relationships, and features from the training set, enabling it to make predictions or classifications on new, unseen data. The quality and representativeness of the training set significantly impact the performance and generalization ability of the trained model.

Transfer Learning

A machine learning technique where a model trained on one task is repurposed or adapted for a second related task. Instead of training a model from scratch for the new task, transfer learning leverages the knowledge gained from solving a different but related problem. This approach is particularly useful when the amount of labeled data for the target task is limited, as the pre-trained model has already learned useful features from a larger dataset.
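An illustrative PyTorch/torchvision sketch (the exact weights argument depends on the torchvision version): reuse an ImageNet-pretrained backbone and retrain only a new classification head for a two-class defect task:

```python
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet (argument name varies across torchvision versions)
backbone = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained feature extractor so its learned features are reused as-is
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for a 2-class defect task;
# only this layer is trained on the (small) target dataset
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
```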

True Negatives

In the context of quality control in manufacturing, a true negative occurs when the quality control system correctly identifies a non-defective product as non-defective. This means the product meets all the required standards and specifications, and the system accurately confirms it as good.

True Positives

In the context of quality control in manufacturing, a true positive occurs when the quality control system correctly identifies a defective product as defective. This means the product has a defect, and the system accurately flags it as such.

Unsupervised Learning

A machine learning paradigm where a model is trained on unlabeled data, meaning there are no predefined output labels. The objective of unsupervised learning is to uncover patterns, relationships, or structures within the data without explicit guidance on what the model should learn.
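A minimal scikit-learn sketch (the data is random and purely illustrative); note that no labels are provided:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))            # unlabeled sensor readings

# k-means simply groups similar readings together without any labels
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10])                     # cluster index assigned to each reading
```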

Validation Set

A portion of a dataset, separate from the training set, that is used to tune and evaluate the performance of a machine learning model during the training process. The validation set is not used to train the model but serves as an independent dataset to assess how well the model generalizes to new, unseen data. It helps prevent overfitting by providing a measure of the model’s performance on data it hasn’t seen before. The model’s hyperparameters and architecture can be adjusted based on the validation set’s performance to improve overall generalization to new data.
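A minimal scikit-learn sketch of holding out a validation set (the data is random and purely illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 5)), rng.integers(0, 2, size=1000)

# Hold out 20% of the data as a validation set; the model never trains on it
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
print(len(X_train), len(X_val))   # 800 200
```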