G06V10/803

Multi-sensor spatial data auto-synchronization system and method
11367204 · 2022-06-21

A multi-sensor spatial data auto-synchronization system and method are provided. The method may include: collecting laser point cloud data through a laser radar and pre-processing the laser point cloud data; collecting image point cloud data through a monocular camera and obtaining the intrinsic parameters of the camera by using a calibration board; performing plane fitting on the pre-processed laser point cloud data to determine coordinates of the fitted laser point cloud; performing image feature extraction on the image point cloud data; calculating a pose transformation matrix between the laser point cloud coordinates and the image feature extraction result by using a PnP (Perspective-n-Point) algorithm; and optimizing the rotation vector and the translation vector in the pose transformation matrix. The present invention achieves automatic calculation of the spatial synchronization relationship between the two sensors and greatly improves the data synchronization precision of the laser radar and the monocular camera.
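The synchronization relation the method recovers can be sketched as a pinhole projection u = K(R·X + t): once the PnP step yields a rotation R and translation t, every LiDAR point X can be mapped into pixel coordinates and compared against the extracted image features. A minimal pure-Python sketch, in which the intrinsics, pose, and point values are illustrative assumptions rather than anything from the patent:

```python
import math

# Hypothetical pinhole intrinsics (fx, fy, cx, cy); values are illustrative.
K = (500.0, 500.0, 320.0, 240.0)

def project(point, R, t, K):
    """Map a 3D LiDAR point into pixel coordinates via u = K(R X + t)."""
    x = sum(R[0][j] * point[j] for j in range(3)) + t[0]
    y = sum(R[1][j] * point[j] for j in range(3)) + t[1]
    z = sum(R[2][j] * point[j] for j in range(3)) + t[2]
    fx, fy, cx, cy = K
    return (fx * x / z + cx, fy * y / z + cy)

def reprojection_error(points3d, pixels, R, t, K):
    """Mean distance between projected LiDAR points and matched image features."""
    total = 0.0
    for p, (u, v) in zip(points3d, pixels):
        pu, pv = project(p, R, t, K)
        total += math.hypot(pu - u, pv - v)
    return total / len(points3d)

# With an identity pose, points project straight through the intrinsics.
R_id = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t0 = [0.0, 0.0, 0.0]
pts = [(0.0, 0.0, 2.0), (0.5, -0.2, 3.0)]
px = [project(p, R_id, t0, K) for p in pts]
```

The abstract's final optimization step corresponds to adjusting the rotation and translation vectors to drive this reprojection error toward zero.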

COLORED CONTACT LENS, MANUFACTURING METHOD OF COLORED CONTACT LENS, AND IRIS RECOGNITION SYSTEM
20220187624 · 2022-06-16

Provided is a colored contact lens having a lens and a colored region formed in the lens, wherein at least a part of the colored region is arranged at a position overlapping an iris of a wearer when the colored contact lens is worn, and the colored region has infrared transparency.

Merging LiDAR Information and Camera Information
20220185324 · 2022-06-16

Among other things, techniques are described for merging LiDAR information and camera information for autonomous annotation. The techniques include a vehicle that includes: at least one LiDAR device configured to detect electromagnetic radiation; at least one camera configured to generate camera information of objects proximate to the vehicle; at least one computer-readable medium storing computer-executable instructions; at least one processor communicatively coupled to the at least one LiDAR device and the at least one camera; and a control circuit communicatively coupled to the at least one processor, wherein the control circuit is configured to operate the vehicle based on a location of one of the objects.

MAINTENANCE COMPUTING SYSTEM AND METHOD FOR AIRCRAFT WITH PREDICTIVE CLASSIFIER
20220188670 · 2022-06-16

A computing system includes a processor and a non-volatile memory storing executable instructions that, in response to execution by the processor, cause the processor to execute an inspection classifier including at least a first artificial intelligence model, the inspection classifier being configured to receive run-time event input data from a plurality of data sources associated with an aircraft, the data sources including structural health monitoring sensors instrumented on the aircraft; extract features of the run-time event input data; determine a predicted inspection classification based upon the extracted features, the predicted inspection classification being one of a plurality of candidate inspection classifications; and output the predicted inspection classification.
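The extract-then-classify pipeline above can be illustrated with a toy stand-in for the AI model: a pair of summary features over run-time structural-health events, and a nearest-centroid choice among candidate inspection classifications. Every name, feature, and label here is a hypothetical illustration, not the patent's model:

```python
def extract_features(events):
    """Toy features from run-time strain readings: peak and mean."""
    readings = [e["strain"] for e in events]
    return (max(readings), sum(readings) / len(readings))

def classify(features, centroids):
    """Nearest-centroid stand-in for the predictive inspection classifier.

    centroids maps each candidate inspection classification to a
    representative feature vector; the closest one is predicted.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

# Illustrative candidate classifications and run-time event data.
centroids = {"no_action": (1.0, 1.0), "detailed_inspection": (3.0, 2.0)}
events = [{"strain": 1.0}, {"strain": 3.0}]
prediction = classify(extract_features(events), centroids)
```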

LATERAL SAFETY AREA
20220185288 · 2022-06-16

Techniques for determining a safety area for a vehicle are discussed herein. In some cases, a first safety area can be based on a vehicle travelling through an environment and a second safety area can be based on a steering control or a velocity of the vehicle. A width of the safety areas can be updated based on a position of a bounding box associated with the vehicle, where the position is based on the vehicle traversing along a trajectory. Sensor data can be filtered based on whether it falls within the safety area(s).
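One way to read the width-update and filtering steps: model each safety area as a simple region, grow its width from the vehicle's bounding box, and keep only sensor returns that land inside some area. A minimal sketch with axis-aligned rectangles, where the geometry and the margin value are assumptions for illustration:

```python
def widen(area, bbox_half_width, margin=0.5):
    """Grow an (xmin, ymin, xmax, ymax) area laterally around the bounding box."""
    xmin, ymin, xmax, ymax = area
    d = bbox_half_width + margin
    return (xmin - d, ymin, xmax + d, ymax)

def in_area(point, area):
    """True if a 2D sensor return lies inside the rectangular safety area."""
    xmin, ymin, xmax, ymax = area
    x, y = point
    return xmin <= x <= xmax and ymin <= y <= ymax

def filter_sensor_data(points, areas):
    """Keep only returns that fall within at least one safety area."""
    return [p for p in points if any(in_area(p, a) for a in areas)]
```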

Action recognition in videos using 3D spatio-temporal convolutional neural networks

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing video data. An example system receives video data and generates optical flow data. An image sequence from the video data is provided to a first 3D spatio-temporal convolutional neural network to process the image sequence in at least three space-time dimensions and to provide a first convolutional neural network output. A corresponding sequence of optical flow image frames is provided to a second 3D spatio-temporal convolutional neural network to process the optical flow data in at least three space-time dimensions and to provide a second convolutional neural network output. The first and second convolutional neural network outputs are combined to provide a system output.
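The core operation of each stream, a convolution over three space-time dimensions, and the final combination of the two stream outputs can be sketched naively in pure Python. Shapes and the averaging fusion below are illustrative choices (real systems use a deep-learning framework):

```python
def conv3d(volume, kernel):
    """Naive valid 3D convolution over a (T, H, W) space-time volume."""
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    kt, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for t in range(T - kt + 1):
        plane = []
        for i in range(H - kh + 1):
            row = []
            for j in range(W - kw + 1):
                s = sum(volume[t + a][i + b][j + c] * kernel[a][b][c]
                        for a in range(kt) for b in range(kh) for c in range(kw))
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

def fuse(logits_rgb, logits_flow):
    """Late fusion of the two streams: average their per-class outputs."""
    return [(a + b) / 2 for a, b in zip(logits_rgb, logits_flow)]

vol = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]   # a 2x2x2 space-time volume of ones
ker = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]   # a 2x2x2 all-ones kernel
out = conv3d(vol, ker)
```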

Machine-learning-assisted self-improving object-identification system and method

A system and method of identifying and tracking objects comprises: registering an identity of a person who visits an area designated for holding objects; capturing an image of the area; submitting a version of the image to a deep neural network trained to detect and recognize objects like those held in the designated area; detecting an object in the version of the image; associating the registered identity of the person with the detected object; retraining the deep neural network using the version of the image if the deep neural network is unable to recognize the detected object; and tracking a location of the detected object while it remains in the designated area.
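The associate-or-retrain decision in this workflow can be sketched as a confidence gate: detections the network recognizes are linked to the registered person, while images it cannot recognize are queued as retraining examples. The names and threshold below are hypothetical:

```python
def process_capture(person_id, detections, confidence_threshold=0.5):
    """Route detections: detections is a list of (label, confidence, image)."""
    associated, retrain_queue = [], []
    for label, confidence, image in detections:
        if confidence >= confidence_threshold:
            associated.append((person_id, label))   # link object to the visitor
        else:
            retrain_queue.append(image)             # unrecognized: save for retraining
    return associated, retrain_queue

links, queue = process_capture(
    "visitor-42",
    [("bag", 0.9, "img-a"), ("unknown", 0.2, "img-b")],
)
```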

Method for dynamically monitoring content of rare earth element component based on time-series feature

The disclosure provides a method for dynamically monitoring the content of a rare earth element (REE) component based on a time-series feature. The method includes: using an image information acquisition device to periodically acquire a time-series image of a rare earth (RE) solution to be monitored; extracting a time-series feature of the time-series image in a mixed color space; and determining whether the time-series feature value of the time-series image is in an expected interval of the mixed color space. If it is not, a histogram intersection distance is calculated between the time-series image and each sample image in a sample data set in the HSV color space, and the content of the REE component corresponding to the time-series image is determined from the component content of the sample image with the largest histogram intersection; otherwise, the method simply waits for the acquisition of a time-series image at the next sampling time point.
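The matching step rests on histogram intersection: two normalized HSV histograms are more alike the larger the sum of their bin-wise minima. A minimal sketch, where the hue-only binning and the sample data are illustrative assumptions:

```python
def hsv_histogram(pixels, bins=8):
    """Coarse normalized hue histogram; pixels are (h, s, v) with h in [0, 360)."""
    hist = [0] * bins
    for h, s, v in pixels:
        hist[int(h // (360 / bins)) % bins] += 1
    total = sum(hist) or 1
    return [c / total for c in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] for normalized histograms; larger means more alike."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def match_component(query_hist, samples):
    """samples: (component_content, histogram) pairs; pick the closest sample."""
    return max(samples,
               key=lambda s: histogram_intersection(query_hist, s[1]))[0]
```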

System for mapping and monitoring emissions and air pollutant levels within a geographical area
11360236 · 2022-06-14

The present disclosure provides a system comprising a plurality of autonomous units within a geographical region, each configured with a sensor array and a cognitive emission and air pollutant mapping module that enables the unit to map its surrounding environment and to sense and overlay pollutant and emissions data onto that map; the sensor array includes cameras and object detection algorithms for tracking and photographing pollutant sources. Each unit securely transmits the fused map and pollutant source data to one or more servers that compile a complete 3D map of the geographical area overlaid with pollution data, which is updated in real time, and that also notify relevant third parties to act on pollutant sources within the area. The system can further comprise a plurality of smart light poles for displaying pollution data and advisory notices to citizens within sub-regions of the area.

CORRELATED IMAGE ANALYSIS FOR 3D BIOPSY

The present invention relates to image analysis of pathology images. In order to improve reliability in image analysis of pathology images, a method is provided for supporting identification of at least one feature of a tissue sample in a microscopic image. The method comprises the steps of: providing a first image of a first microscopy modality representing an area of the tissue sample; providing a second image of a second microscopy modality representing the same area of the tissue sample; generating a first high intensity image by applying a first high intensity filter to the first image, or a first low intensity image by applying a first low intensity filter to the first image, to obtain first information of the at least one feature; generating a second high intensity image by applying a second high intensity filter to the second image, or a second low intensity image by applying a second low intensity filter to the second image, to obtain second information of the at least one feature; calculating a correlation of an image pair comprising one of the first high intensity image and the first low intensity image and one of the second high intensity image and the second low intensity image, for correlating the first information and the second information of the at least one feature; and outputting the calculated correlation to support identification of the at least one feature of the tissue sample.
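The filter-then-correlate idea can be sketched in a few lines: threshold each modality's image to isolate its high- or low-intensity content, then score the resulting pair with a Pearson correlation. A minimal pure-Python illustration, where the simple thresholding filter and the cutoff values are assumptions, not the patent's filters:

```python
import math

def intensity_filter(image, cutoff, keep_high=True):
    """High- (or low-) intensity filter: keep pixels on one side of the cutoff, zero the rest."""
    return [[p if (p >= cutoff) == keep_high else 0 for p in row] for row in image]

def correlation(img_a, img_b):
    """Pearson correlation of two equally sized images, flattened."""
    a = [p for row in img_a for p in row]
    b = [p for row in img_b for p in row]
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b) if sd_a and sd_b else 0.0
```

A high output correlation between the two filtered modalities is what flags the same feature appearing in both images of the pair.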