By Mike Mohseni
Early developments in industrial AI followed the same path as established applications such as facial recognition: the school of "more data, better prediction." But data scarcity is a significant hurdle in industrial applications, especially in production lines involving materials processing, where data generation is costly and extremely slow.
Another key difference between AI for commerce and AI for industry is the availability of a priori knowledge. In contrast to consumer behavior, decades of research and knowledge development have unearthed the nature of even intrinsically complicated processes such as steel production.
Data scarcity and an established knowledge base are the two main differences between manufacturing applications and conventional use cases of AI. Recently, many studies have focused on incorporating these characteristics when developing AI models, in order to deliver valuable business cases for industrial applications.
Synthetic data for digital twins. In digital twin applications, the production line or the whole manufacturing facility is simulated in computer programs, providing a robust platform for low-cost trials before actual production. In materials processing applications such as welding, physical models of the processes are often available and can be incorporated into the digital twin platform. However, the computational cost of running these models is too high to provide real-time intelligence. AI models in these applications can therefore serve as fast replacements for classic physics-based models. For this purpose, trials run with the physics-based models generate synthetic data, compensating for the cost and time of acquiring real-life data.
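As a minimal sketch of this surrogate idea (all names and numbers below are hypothetical), a simple analytical cooling law stands in for an expensive physics simulation; running it offline produces synthetic data, and a cheap polynomial surrogate is fit to answer queries quickly:

```python
import numpy as np

# Hypothetical stand-in for an expensive physics model: Newtonian cooling
# of a weld, T(t) = T_amb + (T0 - T_amb) * exp(-k*t). In a real digital
# twin this would be a costly FEM/CFD simulation.
T_AMB, T0, K = 25.0, 1500.0, 0.05  # illustrative values (deg C, 1/s)

def slow_physics_model(t):
    return T_AMB + (T0 - T_AMB) * np.exp(-K * t)

# 1) Generate synthetic training data by running the slow model offline.
t_train = np.linspace(0.0, 100.0, 200)
T_train = slow_physics_model(t_train)

# 2) Fit a cheap surrogate (a polynomial here; often a neural network in
#    practice). Time is rescaled to [0, 1] to keep the fit well conditioned.
coeffs = np.polyfit(t_train / 100.0, T_train, deg=8)

def surrogate(t):
    return np.polyval(coeffs, t / 100.0)

# 3) The surrogate now answers queries far faster than the physics model,
#    with only a small approximation error inside the training range.
err = abs(surrogate(42.0) - slow_physics_model(42.0))
```

The design choice here is the standard surrogate-modeling workflow: pay the simulation cost once, offline, then deploy the fast approximation for real-time use.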
Physics-based model tuning. Complex machine learning frameworks (e.g., deep learning) are highly non-linear models with many parameters to be set during training. A large training dataset is therefore required to find these parameters and ensure the model produces meaningful predictions during deployment. In manufacturing and materials processing, a large body of knowledge is available about the physical nature of the processes. This knowledge can be used to constrain machine learning parameters so that less data is needed for training. The interpretation of this approach is that by incorporating physics-based knowledge, the number of unknown relations and correlations the machine learning model must learn is reduced.
A simple way to incorporate physical knowledge into AI model training is thresholding. For example, an upper bound on melt temperature can be set based on physical knowledge of the metal casting process to guide training and avoid predicting unrealistic temperatures. More sophisticated approaches use material properties and processing equations to update the internal variables of the machine learning algorithm, e.g., updating the activation function of a neural network that predicts the temperature field based on a heat transfer model.
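The thresholding idea can be sketched in a few lines. In this hypothetical example (the temperature bounds and penalty weight are illustrative, not taken from any real casting process), predictions are clipped to a physically admissible range, and a physics-based penalty is added to an ordinary squared-error loss:

```python
import numpy as np

# Illustrative physically plausible bounds for molten-steel temperature;
# real bounds would come from domain knowledge of the casting process.
T_MIN, T_MAX = 1400.0, 1700.0  # degrees C

def constrained_predict(raw_prediction):
    """Clamp a model's raw output to the physically admissible range."""
    return np.clip(raw_prediction, T_MIN, T_MAX)

def physics_informed_loss(pred, target, weight=10.0):
    """Squared error plus a penalty for physically impossible outputs."""
    mse = np.mean((pred - target) ** 2)
    violation = np.maximum(pred - T_MAX, 0.0) + np.maximum(T_MIN - pred, 0.0)
    return mse + weight * np.mean(violation ** 2)

raw = np.array([1350.0, 1550.0, 1820.0])  # one plausible, two impossible
print(constrained_predict(raw))  # -> [1400. 1550. 1700.]
```

During training, the penalty term steers the optimizer away from parameter settings that produce impossible temperatures, which is one simple way physics knowledge reduces what the model has to learn from data alone.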
Physics-based input selection. Inputs to a machine learning model are selected based on its expected outcome. However, the type of input data also determines how much data is needed for training; in other words, it is important to consider how the model will learn to perform a specific operation. Consider, for example, a machine learning model that determines whether cracks exist on the surface of a part based on an image of that surface. If a standard 860×640-pixel image is used as input, then at least 550 thousand parameters must be found during training, treating each pixel as an input (assuming a grayscale image, a single-layer network, and no other post-processing). An expert with domain knowledge knows that every surface crack has two elongated edges; with this knowledge, the model simply needs to learn to detect specific patterns of edges on the surface. A simple preprocessing step can therefore convert the original image to a frame that preserves only edges. The resulting image can use a lower resolution, since it still carries the important information, i.e., the edge patterns. Lower resolution and the removal of unnecessary features significantly reduce the input size, so less training data is needed for machine learning model development.
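To make the parameter-count argument concrete, here is a minimal numpy sketch. The gradient-magnitude edge detector and the 4× max-pooling downsampling are illustrative choices, not a specific production pipeline:

```python
import numpy as np

def edge_map(image):
    """Simple gradient-magnitude edge detector (a stand-in for e.g. Sobel)."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def downsample(image, factor):
    """Max-pool over factor x factor blocks so thin edges survive."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.max(axis=(1, 3))

# A synthetic 860x640 grayscale "surface" with a thin crack-like line.
img = np.zeros((860, 640))
img[430, 100:500] = 1.0

edges = downsample(edge_map(img), factor=4)

full_inputs = img.size       # 550,400 inputs when feeding raw pixels
reduced_inputs = edges.size  # 16x fewer after edge extraction + pooling
print(full_inputs, reduced_inputs)  # -> 550400 34400
```

With each pixel treated as an input, shrinking the input from 550,400 values to 34,400 directly shrinks the first-layer parameter count by the same factor of 16, which is the mechanism the paragraph above describes.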
The raw data in manufacturing and materials processing often carry many different kinds of information, or features. Physics-based knowledge can guide the selection of which features to use in machine learning development.
Machine learning model selection. Selecting a proper model for a machine learning application is always important, but it becomes critical when data are limited. Simple models such as random forests have far fewer parameters than deep learning models, so the amount of data needed to develop them is lower. Model selection depends on the expected outcome or application of the machine learning solution; however, it is important to acknowledge that more complex models do not necessarily offer better performance.
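A quick back-of-the-envelope comparison illustrates why: counting the trainable parameters of a small fully connected network versus a linear model on the same inputs (the layer sizes below are illustrative, not a recommended architecture):

```python
def dense_param_count(layer_sizes):
    """Weights plus biases in a fully connected network of the given sizes."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

# Illustrative comparison: a small deep network vs. a linear model,
# both taking the same 10 input features.
deep_params = dense_param_count([10, 256, 256, 1])  # two hidden layers
linear_params = dense_param_count([10, 1])          # single linear layer

print(deep_params, linear_params)  # -> 68865 11
```

Since the amount of training data needed grows with the number of parameters to be estimated, a gap of several thousand-fold in parameter count translates directly into the data-requirement gap the paragraph above describes.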
The approaches mentioned are only examples of recent developments in applied AI for industrial applications. Stay tuned to learn how AutoMetrics is incorporating machine learning to enhance the scalability of its products and enable robust automated inspection.