The Role of AI Models in Visual Inspection

If you are looking for a perfect application to start deploying AI in manufacturing, look no further than visual inspection. AI significantly outperforms conventional methods by bringing automation, speed and accuracy to visual inspection processes.

In this blog, we’ll talk about why AI is superior to both manual and machine vision-based quality inspection and then have a (non-technical) look at the different types of algorithms that underlie AI-based quality inspection.

The Need for AI-Based Visual Inspection

Visual inspection in manufacturing involves scrutinizing products for defects, inconsistencies or deviations from the quality standard.

Traditionally, companies perform visual inspection in one of two ways:

  • Human inspection – Using staff members to inspect products has many disadvantages: it is slow, expensive and inconsistent. Humans cannot concentrate for eight hours on a repetitive task like checking a widget for dents or scratches. They get tired, bored and inattentive, which leads to mistakes, high staff turnover and difficulty filling these positions. In many manufacturing plants, the sheer volume of products is far too large to inspect every unit manually. In these cases, the only option is to inspect a small sample, with no guarantee that a defect is caught before a large number of products have been made that must be scrapped.
  • Automated inspection using machine vision – Machine vision systems require the characteristics of a product to be defined and hard-coded. Take the example of a label on a bottle or can: a machine vision system needs exact rules for the minimum and maximum distance the label may sit from the edges and for the maximum degree of rotation allowed. While this is doable, albeit fairly involved, the approach does not work at all for things like detecting a scratched or torn label. How does one define and code something as amorphous and variable as a scratch or tear?

The same characteristics that make visual inspection challenging for humans and for conventional automated solutions like machine vision make it a perfect task for AI.

AI models can be trained to tell the difference between good and defective products and, based on that learning, detect and flag defective products and even categorize the defects. They can do so 24/7, with consistently high accuracy, at drastically lower hardware cost and with greater flexibility when products or processes change.

The Algorithms that Drive AI-Based Visual Inspection

Let’s get (a bit) technical here and have a look at the three types of algorithms used for visual inspection and what they do. Please read on even if you are not familiar with AI models; this blog was written for non-experts!

Convolutional Neural Networks (CNNs)

CNNs are a powerful tool for image recognition tasks, making them well-suited for visual inspection. Neural networks are like computerized brains made up of interconnected artificial neurons. The “convolutional” part refers to the way the network processes visual data: instead of looking at the entire image at once, CNNs break it down into smaller, overlapping pieces. Each piece is analyzed separately, and the information is then combined to understand the whole picture.

In the manufacturing context, CNNs can analyze product images and identify defects with remarkable accuracy.

How CNNs work:

  • CNNs process images through convolutional layers, capturing patterns and features at different levels of abstraction in the different layers.
  • The outputs of these layers are then pooled, which reduces their size while retaining the essential information. This makes the data more manageable.
  • A CNN is trained by exposing it to a large dataset of labeled images, allowing it to learn and recognize specific defect patterns.
  • Once trained, the CNN can quickly and accurately classify new images, identifying defects and anomalies in real-time.
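
For readers who like to see things concretely, here is a minimal sketch of what such a network can look like in code. It uses the open-source PyTorch library purely for illustration; the layer sizes, the 224x224 image resolution and the two-class ("good" vs. "defective") setup are assumptions for this example, not a description of any particular production system.

```python
# Minimal illustrative CNN for good/defective classification (PyTorch).
# Layer sizes, image resolution and class names are assumptions for this sketch.
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):  # e.g. "good" vs. "defective"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer: local patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling: shrink while keeping key info
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer: more abstract features
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 input images

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # extract spatial features layer by layer
        x = torch.flatten(x, 1)    # flatten for the final classification layer
        return self.classifier(x)  # raw score per class

# Training would loop over a labeled dataset of product images and minimize a
# classification loss; once trained, the highest-scoring class is the prediction.
model = DefectClassifier()
scores = model(torch.randn(1, 3, 224, 224))  # one dummy 224x224 RGB image
print(scores.shape)  # torch.Size([1, 2])
```

In a real deployment the network would be deeper and trained on thousands of labeled images, but the structure is the same: convolution, pooling, then classification.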

Benefits of CNNs:

  • High accuracy in defect detection. We have seen accuracy of over 99.996% in real-life applications.
  • Adaptability to various manufacturing environments and product types. This is an important point that differentiates AI models from machine vision. For example, CNNs learn that the lighting conditions in a plant can change, either because of changes in natural light (time of day, season) or because light bulbs get dimmer over time. Unlike machine vision, they can adjust and learn that, for example, a brighter- or dimmer-looking label is not a defect.

Recurrent Neural Networks (RNNs)

While CNNs excel at image recognition, RNNs specialize in processing sequential data, making them valuable for visual inspection tasks that involve video streams or time-series data.

How RNNs work:

  • RNNs process data sequentially, making them suitable for tasks where the order of information matters, such as tracking the production process over time.
  • The model can learn temporal dependencies and detect subtle changes or deviations from the norm.
  • RNNs can be combined with CNNs to create hybrid models that leverage both spatial and temporal information.
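
To illustrate the hybrid idea in the last bullet, the sketch below runs a small CNN over each frame of a short clip and feeds the per-frame features to an RNN (here a GRU). It is an assumption-laden toy example in PyTorch, shown only to make the spatial-plus-temporal idea concrete; the frame count, feature sizes and GRU choice are arbitrary.

```python
# Illustrative CNN + RNN hybrid: per-frame CNN features, temporal modeling with a GRU.
# Frame count, feature sizes and the GRU choice are assumptions for this sketch (PyTorch).
import torch
import torch.nn as nn

class SequenceInspector(nn.Module):
    def __init__(self, feature_dim: int = 64, hidden_dim: int = 32, num_classes: int = 2):
        super().__init__()
        # Tiny per-frame CNN encoder (spatial information).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, feature_dim),
        )
        # GRU over the sequence of frame features (temporal information).
        self.rnn = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip shape: (batch, frames, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.encoder(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last_hidden = self.rnn(feats)   # learn dependencies across frames
        return self.head(last_hidden[-1])  # classify the whole clip

model = SequenceInspector()
scores = model(torch.randn(1, 10, 3, 64, 64))  # one dummy clip of 10 frames
print(scores.shape)  # torch.Size([1, 2])
```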

Benefits:

  • RNNs are highly effective at analyzing video streams for continuous inspection, which makes them suitable for applications that require the analysis of sequential data.

RNNs play a secondary role in visual inspection, however, because for the vast majority of use cases, even on fast-moving production lines, static pictures taken at very short intervals are generally sufficient.

Generative Adversarial Networks (GANs)

GANs are a class of AI models known for their ability to generate new or synthetic data that is indistinguishable from real data. In visual inspection, GANs can be employed to augment datasets, simulate defects, or generate realistic images for training purposes.

How GANs work:

  • GANs consist of a generator and a discriminator. The generator creates synthetic data, while the discriminator tries to distinguish between real and synthetic data.
  • Through adversarial training, GANs improve the quality of synthetic data, making it challenging to differentiate from real-world examples.
  • Augmented datasets can enhance the robustness of visual inspection models.
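
To make the generator/discriminator back-and-forth concrete, here is a heavily simplified training-loop sketch in PyTorch. The tiny image size, network shapes and step count are assumptions chosen only to show the adversarial structure; a real data-augmentation GAN would be far larger and trained on actual defect images.

```python
# Heavily simplified GAN sketch (PyTorch): a generator makes fake defect patches,
# a discriminator learns to tell real from fake. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, img_pixels = 16, 28 * 28  # assumed sizes for this toy example

generator = nn.Sequential(            # random noise -> synthetic image patch
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, img_pixels), nn.Tanh())
discriminator = nn.Sequential(        # image patch -> "real or fake" score
    nn.Linear(img_pixels, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(32, img_pixels) * 2 - 1  # stand-in for real defect patches

for step in range(3):  # a few steps, just to show the loop structure
    # 1) Train the discriminator: real patches -> label 1, generated patches -> label 0.
    fake_batch = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator: try to make the discriminator call its output "real".
    fake_batch = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generator(torch.randn(n, latent_dim)) yields synthetic patches
# that can supplement a small library of real defect examples.
```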

Benefits:

  • GANs augment data for improved model training. This is especially critical for applications like the categorization of rare defects, where there may not be enough real examples available to properly train the model. Synthetic data can supplement these training libraries.
  • GANs can also simulate various types of defects that might occur in manufacturing processes. This helps in training robust models that can identify a wide range of potential issues.

AI solves a real and pressing issue in manufacturing, namely how to inspect 100% of products cheaply, quickly, reliably and with very high accuracy. For manufacturing leaders who want to dip their toes into AI implementation, visual inspection is a great first application that can prove the value of AI and help the organization get familiar and comfortable with these new tools.

We are here to help you take that step. Please get in touch to discuss this in more detail.