G06T2207/30164

GROUND ENGAGING TOOL WEAR AND LOSS DETECTION SYSTEM AND METHOD

An example wear detection system receives a plurality of images from a plurality of sensors associated with a work machine. Individual sensors of the plurality of sensors have respective fields-of-view different from other sensors of the plurality of sensors. The wear detection system identifies a first region of interest and a second region of interest associated with at least one ground engaging tool (GET) of the work machine. The wear detection system determines a first set of image points and a second set of image points for the at least one GET based on geometric parameters associated with the GET, and determines a GET measurement from the image points. The wear detection system determines a wear level or loss for the at least one GET based on the GET measurement.
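As a minimal sketch of the measurement step, the function below converts two image points (tooth tip and tooth base) into a physical length and a wear fraction. The names `get_wear`, `px_per_mm`, and `nominal_len_mm` are illustrative assumptions, not terms from the patent; the pixel-to-millimeter scale stands in for the "geometric parameters associated with the GET".

```python
import numpy as np

def get_wear(tip_px, base_px, px_per_mm, nominal_len_mm):
    """Estimate GET wear from two image points (hypothetical helper).

    tip_px, base_px : (x, y) pixel coordinates of the tooth tip and base.
    px_per_mm       : scale assumed known from the GET's geometric parameters.
    nominal_len_mm  : length of an unworn tooth.
    Returns (measured_len_mm, wear_fraction); a wear fraction near 1.0
    suggests a heavily worn or lost tooth.
    """
    length_px = np.linalg.norm(
        np.asarray(tip_px, dtype=float) - np.asarray(base_px, dtype=float)
    )
    measured_mm = length_px / px_per_mm
    wear = max(0.0, 1.0 - measured_mm / nominal_len_mm)
    return measured_mm, wear
```

A real system would obtain the two point sets per region of interest and per camera; this sketch covers only a single tooth in one view.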

DATACENTER DASHBOARD WITH TEMPORAL FEATURES

A system and method for monitoring performance of an industrial process includes an input port for receiving signals representative of one or more performance parameters generated by the industrial process, a user interface including a display, and a controller that is operably coupled with the input port and the user interface. The controller is configured to repeatedly receive signals over time via the input port representative of the one or more performance parameters of the industrial process and to generate a plurality of snapshots, wherein each snapshot includes a graphical representation of the one or more performance parameters of the industrial process at a corresponding time. The controller is configured to generate an animatable heat map including two or more of the plurality of snapshots arranged temporally and to display the animatable heat map on the display.
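The snapshot-stacking idea can be sketched as follows. `build_heatmap_frames` is a hypothetical helper, not from the patent; it arranges per-time 2D parameter grids into a temporally ordered stack that a dashboard could play back frame by frame as an animatable heat map.

```python
import numpy as np

def build_heatmap_frames(snapshots):
    """Stack per-time snapshots (each a 2D parameter grid) along a
    leading time axis, normalized over the whole sequence so that
    colors stay comparable from frame to frame."""
    frames = np.stack([np.asarray(s, dtype=float) for s in snapshots], axis=0)
    lo, hi = frames.min(), frames.max()
    if hi > lo:
        return (frames - lo) / (hi - lo)
    return np.zeros_like(frames)
```

A real dashboard would hand each `frames[t]` to a heat-map widget on a timer; the normalization choice (global rather than per-frame) is one assumed design option.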

SYSTEM AND METHOD FOR DETERMINATION OF A 3D INFORMATION AND OF A MODIFICATION OF A METALLURGICAL VESSEL
20230051041 · 2023-02-16

Method, imaging system (5), data processing device (60) and system (10) for determination of 3D information (90), especially of a point cloud (80) or of a 3D surface reconstruction (81) or of a 3D object (82), of an inner part (55) of a metallurgical vessel (50) or of a modification, the method comprising the steps of providing (100) a metallurgical vessel (50); capturing (110) a first optical image (21) of at least one first inner part (51) of the metallurgical vessel (50), from a first imaging device position (22) outside of the metallurgical vessel (50), with a first optical axis (23), by a first imaging device (20); capturing (120) a second optical image (31) of at least one second inner part (52) of the metallurgical vessel (50), from a second imaging device position (32) outside of the metallurgical vessel (50), with a second optical axis (33), by a second imaging device (30); calculating (130) 3D information (90), such as a point cloud (80) or a 3D surface reconstruction (81) or a 3D object (82), of at least one inner part (55) of the metallurgical vessel (50) from at least the first optical image (21) and the second optical image (31), wherein the first optical image (21) is captured from a first fixed imaging device position (22) with a first fixed optical axis (23) and wherein the second optical image (31) is captured from a second fixed imaging device position (32) with a second fixed optical axis (33).
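A minimal sketch of the calculating step (130), assuming the two fixed imaging devices are calibrated pin-hole cameras with known 3x4 projection matrices, is classical two-view linear (DLT) triangulation of one point; the function name and inputs are illustrative, and a full reconstruction would repeat this over matched points to form the point cloud.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single 3D point from two views.

    P1, P2 : 3x4 projection matrices of the two fixed imaging devices.
    x1, x2 : (u, v) normalized image coordinates of the same point.
    Each view contributes two rows of the homogeneous system A X = 0;
    the solution is the right singular vector of A with the smallest
    singular value.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With noisy detections this linear solution is usually refined nonlinearly; that refinement is omitted here.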

METHOD OF IN-PROCESS DETECTION AND MAPPING OF DEFECTS IN A COMPOSITE LAYUP

A method of detecting defects in a composite layup includes capturing, using an infrared camera, reference images of a reference layup being laid up by a reference layup head. The method also includes manually reviewing the reference images for defects, and generating reference defect masks indicating defects in the reference images. The method further includes training, using the reference images and reference defect masks, a neural network, creating a machine learning model that, given a production image as input, outputs a production defect mask indicating the defect location and the defect type of each defect. The method also includes capturing, using an infrared camera, production images of a production layup being laid up by a production layup head, and applying the model to the production images to automatically generate production defect masks indicating each defect in the production images.
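The patent's model is a trained neural network; as a hedged stand-in that only illustrates the input/output contract (production image in, binary defect mask out), the toy function below flags pixels whose infrared intensity deviates strongly from assumed defect-free reference statistics. `defect_mask`, `ref_mean`, and `ref_std` are illustrative names, not from the patent.

```python
import numpy as np

def defect_mask(production_img, ref_mean, ref_std, k=3.0):
    """Toy stand-in for the trained model: mark pixels whose IR
    intensity lies more than k standard deviations from the
    defect-free reference statistics. Returns a boolean mask with
    the same shape as the input image."""
    img = np.asarray(production_img, dtype=float)
    return np.abs(img - ref_mean) > k * ref_std
```

Unlike this threshold, the trained network in the abstract also classifies each defect's type; a per-pixel class map would replace the boolean output.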

GROUND ENGAGING TOOL WEAR AND LOSS DETECTION SYSTEM AND METHOD

An example wear detection system receives first image data related to at least one ground engaging tool (GET) of a work machine from one or more sensors at a first time instance in a dig-dump cycle of the work machine. The wear detection system processes the first image data to determine a first wear measurement and first wear level for the at least one GET. The wear detection system determines whether the first wear level is indicative of a GET replacement condition. The wear detection system generates an alert when the first wear level is indicative of the GET replacement condition. The wear detection system receives second image data related to the at least one GET at a second time instance different from the first time instance when the first wear level is not indicative of the GET replacement condition and determines a second wear measurement and second wear level for the at least one GET. The wear detection system generates an alert indicative of the first wear level and the second wear level based on determining that the first wear level and the second wear level are indicative of the GET replacement condition.
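The two-instance decision logic can be sketched as a small function. The threshold value and the names `wear_alert` and `replacement_threshold` are assumptions for illustration; the patent does not specify how the replacement condition is parameterized.

```python
def wear_alert(first_level, second_level=None, replacement_threshold=0.8):
    """Sketch of the alerting logic: alert on the first measurement if
    it already meets the assumed replacement condition; otherwise fall
    back to the second measurement taken later in the cycle.
    Returns an alert string, or None when no alert is warranted."""
    if first_level >= replacement_threshold:
        return "GET replacement condition at first measurement"
    if second_level is not None and second_level >= replacement_threshold:
        return "GET replacement condition: wear %.2f -> %.2f" % (
            first_level, second_level)
    return None
```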

SYSTEMS AND METHODS FOR VISUAL INSPECTION AND 3D MEASUREMENT

Systems and methods for inspecting the outer skin of a honeycomb body are provided. The inspection system comprises a rotational sub-assembly configured to rotate the honeycomb body, a camera sub-assembly configured to image at least a portion of the outer skin of the honeycomb body as it rotates, a three-dimensional (3D) line sensor sub-assembly configured to obtain height information from the outer skin of the honeycomb body, and an edge sensor sub-assembly configured to obtain edge data from the circumferential edges of the honeycomb body. In some examples, the inspection system utilizes a universal coordinate system to synchronize or align the data obtained from each of these sources to prevent redundant or duplicative detection of one or more defects on the outer skin of the honeycomb body.
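A minimal sketch of the deduplication idea, assuming defect detections from each sub-assembly have already been mapped into a shared (axial, circumferential) coordinate frame in millimeters: detections closer than a tolerance are treated as the same physical defect. The function name and tolerance are illustrative, and the sketch ignores circumferential wrap-around for simplicity.

```python
def merge_defects(detections, tol_mm=2.0):
    """Merge defect detections expressed in a universal
    (axial_mm, circumferential_mm) frame so that the same skin defect
    seen by two sensors is reported only once. Returns the kept
    detections; detections within tol_mm of an already-kept one on
    both axes are dropped as duplicates."""
    merged = []
    for d in sorted(detections):
        duplicate = any(
            abs(d[0] - m[0]) <= tol_mm and abs(d[1] - m[1]) <= tol_mm
            for m in merged
        )
        if not duplicate:
            merged.append(d)
    return merged
```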

Quality inspection of laser material processing

A method for quality inspection of laser material processing includes performing laser material processing on a workpiece and generating, by a sensor, raw image data of secondary emissions during the laser material processing of the workpiece. The method also includes determining a quality of the laser material processing by analyzing the raw image data of the secondary emissions.
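The abstract leaves the analysis of the raw emission images open; as one toy illustration of the determining step, the function below checks whether the mean emission intensity of a frame stays within an assumed stable process window. The name `assess_quality` and the window limits are assumptions, and real analyses would use far richer spatial and temporal features.

```python
import numpy as np

def assess_quality(raw_frame, low=50.0, high=200.0):
    """Toy quality check on one raw image of secondary emissions:
    a mean intensity outside the assumed stable window [low, high]
    flags a processing defect."""
    mean = float(np.mean(raw_frame))
    return "ok" if low <= mean <= high else "defect"
```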

Deep learning-based method and device for calculating overhang of battery

A deep learning-based method for calculating an overhang of a battery includes the following steps: obtaining a training sample image set; training a neural network according to the training sample image set to obtain a segmentation network model; segmenting an image of the battery to be detected with the segmentation network model to obtain a corresponding first binarized image; obtaining top coordinates of each of a positive electrode and a negative electrode of the battery to be detected according to the first binarized image; and calculating the overhang of the battery to be detected according to the top coordinates.
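The last two steps, extracting top coordinates from the binarized image and computing the overhang, can be sketched directly. This assumes the segmentation step yields one binary mask per electrode and that image rows grow downward; the helper names are illustrative.

```python
import numpy as np

def top_row(binary_mask):
    """Row index of the topmost foreground pixel in a binarized
    electrode mask (smaller row index = higher in the image)."""
    rows = np.where(np.asarray(binary_mask).any(axis=1))[0]
    return int(rows[0])

def overhang_px(negative_mask, positive_mask):
    """Overhang in pixels: how far the negative electrode's top edge
    extends above the positive electrode's top edge. A calibration
    factor would convert this to millimeters."""
    return top_row(positive_mask) - top_row(negative_mask)
```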

Control device and method of sectors for the assembly of the turbine stators of a turbine

A control device controls sectors for the assembly of turbine stators of a turbine. Each turbine stator is formed of an assembly of sectors juxtaposed to one another, and each sector has a reference. The control device includes an automated system for identifying the sector with means for reading the sector reference, a database of the references of the sectors that form the turbine stators of the turbine, and means for associating the read reference of the sector with a determined turbine stator of the turbine.
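The association step reduces to a database lookup keyed by the read sector reference; a minimal sketch, with an in-memory dictionary standing in for the reference database and illustrative names throughout:

```python
def associate_sector(read_reference, stator_db):
    """Map a sector reference read by the automated identification
    system to its assigned turbine stator; returns None when the
    reference is not in the database."""
    return stator_db.get(read_reference)
```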

System and method for automated surface assessment

Embodiments described herein provide a system for assessing the surface of an object for detecting contamination or other defects. During operation, the system obtains an input image indicating the contamination on the object and generates a synthetic image using an artificial intelligence (AI) model based on the input image. The synthetic image can indicate the object without the contamination. The system then determines a difference between the input image and the synthetic image to identify an image area corresponding to the contamination. Subsequently, the system generates a contamination map of the contamination by highlighting the image area based on one or more image enhancement operations.
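The difference step can be sketched as follows, assuming the AI model has already produced the synthetic contamination-free image; thresholding the absolute per-pixel difference yields the image area to highlight in the contamination map. The function name and threshold are illustrative.

```python
import numpy as np

def contamination_map(input_img, synthetic_img, thresh=10.0):
    """Boolean map of the contaminated image area: pixels where the
    input image and the AI-generated clean (synthetic) image differ
    by more than thresh. A downstream enhancement step would overlay
    or colorize these pixels for display."""
    diff = np.abs(
        np.asarray(input_img, dtype=float) - np.asarray(synthetic_img, dtype=float)
    )
    return diff > thresh
```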