G06V10/80

MULTI-CHANNEL OBJECT MATCHING
20230011829 · 2023-01-12

A method may include obtaining first sensor data captured by a first sensor system and second sensor data captured by a second sensor system of a different type from the first sensor system. The method may include detecting a first object included in the first sensor data and a second object included in the second sensor data. The method may include assigning a first label to the first object and a second label to the second object after comparing the first and the second sensor data. The first and second labels may indicate degrees to which the first and the second objects match. Responsive to the first and second labels indicating that the first and the second objects match, the method may include designating a matched object representative of the first object and the second object and sending the matched object to a downstream computing system of an autonomous vehicle.

Apparatus and method for detecting elements of an assembly
20230215154 · 2023-07-06 ·

The disclosure relates to apparatuses and methods for detecting elements of an assembly, such as electrical components in a printed circuit board. First and second artificially intelligent classifiers are provided for detecting elements in a high-resolution image of the assembly, wherein the first artificially intelligent classifier is pre-trained to detect first elements and the second artificially intelligent classifier is pre-trained to detect second elements, each of the first elements having a size within a first size range, and each of the second elements having a size within a second size range, in which the first size range includes elements having a size that is greater than the size of elements included within the second size range. The second artificially intelligent classifier can be prevented from subsequently searching for elements within bounding boxes previously obtained by the first artificially intelligent classifier.
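The exclusion rule at the end of the abstract can be sketched as follows. The detector internals are stubbed out; only the logic of the second (small-element) classifier skipping regions already claimed by the first (large-element) classifier follows the description.

```python
# Two-pass sketch: the second detector ignores any candidate region that lies
# entirely inside a bounding box already obtained by the first detector.

def contains(outer, inner):
    """True if box `inner` lies entirely within box `outer` (x1, y1, x2, y2)."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def second_pass(candidates, first_pass_boxes):
    """Keep only candidates not covered by any first-pass bounding box."""
    return [c for c in candidates
            if not any(contains(b, c) for b in first_pass_boxes)]
```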

DRIVER ATTENTION AREA PREDICTION SYSTEM
20230222756 · 2023-07-13 ·

A driver attention area prediction method includes: S1, acquiring an original driving video of a driver attention area and preprocessing the original driving video, thereby obtaining a processed driving video sequence; S2, constructing a deep learning model through the Keras deep learning framework and training the deep learning model to obtain a trained deep learning model; S3, performing area prediction on the processed driving video sequence through the trained deep learning model, thereby obtaining a driver attention area prediction result; and S4, outputting the driver attention area prediction result. Moreover, a driver attention area prediction system includes a driving video acquisition and preprocessing module, a model training module, a model application module and a result output module. Differentiated training can be carried out on driver attention in left-hand-traffic (LHT) and right-hand-traffic (RHT) scenes, so that driver attention can be accurately predicted for each scene and condition.
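The S1-S4 flow can be shown schematically. The deep network (Keras in the abstract) is replaced here by a trivial placeholder predictor, and the frame representation is invented; only the pipeline structure mirrors the method.

```python
# Schematic of steps S1-S4 with preprocessing and the model stubbed out.
# Frames are represented as (x, y) gaze coordinates for illustration only.

def preprocess(frames):                       # S1: normalize each frame
    return [tuple(v / 255.0 for v in f) for f in frames]

def train_model(sequences):                   # S2: placeholder "training"
    # Stand-in for deep-learning training: remember the mean attention point.
    xs = [f[0] for seq in sequences for f in seq]
    ys = [f[1] for seq in sequences for f in seq]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def predict_area(model, sequence):            # S3: per-sequence prediction
    return model

def run_pipeline(raw_sequences):              # S1 -> S4 end to end
    processed = [preprocess(seq) for seq in raw_sequences]
    model = train_model(processed)
    return [predict_area(model, seq) for seq in processed]   # S4: output
```

In the patented system, `train_model` would be trained separately on LHT and RHT driving videos to realize the differentiated training the abstract describes.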

CAMERA-RADAR SENSOR FUSION USING LOCAL ATTENTION MECHANISM
20230213643 · 2023-07-06

Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for processing sensor data. In one aspect, a method includes obtaining image data representing a camera sensor measurement of a scene; obtaining radar data representing a radar sensor measurement of the scene; generating a feature representation of the image data; generating a respective initial depth estimate for each of a subset of the pixels of the image data; generating a feature representation of the radar data; for each pixel of the subset, generating an adjusted depth estimate using the initial depth estimate for the pixel and the radar feature vectors for a corresponding subset of the radar reflection points; generating a fused point cloud that includes a plurality of three-dimensional data points; and processing the fused point cloud to generate an output that characterizes the scene.
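The per-pixel depth adjustment can be sketched with distance-based attention weights. The Gaussian kernel, its width, and the 50/50 blend rule are illustrative assumptions, not the patent's actual local attention mechanism.

```python
# Hedged sketch: blend a pixel's initial (camera-derived) depth estimate with
# the depths of nearby radar reflection points, weighted by a softmax-like
# attention score that favors closer radar returns.
import math

def adjust_depth(pixel_xy, initial_depth, radar_points, sigma=10.0):
    """radar_points: list of ((x, y), depth) reflections near the pixel."""
    if not radar_points:
        return initial_depth
    # Attention score: closer radar returns get exponentially higher weight.
    scores = [math.exp(-math.dist(pixel_xy, p) ** 2 / (2 * sigma ** 2))
              for p, _ in radar_points]
    total = sum(scores)
    weights = [s / total for s in scores]
    radar_depth = sum(w * d for w, (_, d) in zip(weights, radar_points))
    # Blend the camera's initial estimate with the attended radar depth.
    return 0.5 * initial_depth + 0.5 * radar_depth
```

Each adjusted pixel depth, back-projected through the camera model, would contribute one three-dimensional point to the fused point cloud.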

Automatic sensor conflict resolution for sensor fusion system

A system and method that automatically resolves conflicts among sensor information in a sensor fusion robot system. Such methods can accommodate converging ambiguous and divergent sensor information in a manner that can allow continued, and relatively accurate, robotic operations. The processes can include handling sensor conflict via sensor prioritization, including, but not limited to, prioritization based on the particular stage or segment of the assembly operation when the conflict occurs, overriding sensor data that exceeds a threshold value, and/or prioritization based on evaluations of recent sensor performance, predictions, system configuration, and/or historical information. The processes can include responding to sensor conflicts through comparisons of the accuracy of workpiece location predictions from different sensors during different assembly stages, in connection with arriving at a determination of which sensor(s) are providing accurate and reliable predictions.
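Stage-based prioritization with a threshold override can be sketched as follows. The stage names, priority table, and plausibility threshold are illustrative assumptions.

```python
# Sketch: pick one reading per conflict based on a per-stage sensor priority
# list, skipping (overriding) any reading that exceeds a plausibility bound.

STAGE_PRIORITY = {
    "coarse_approach": ["vision", "force"],   # vision leads far from contact
    "insertion":       ["force", "vision"],   # force leads during contact
}

def resolve(stage, readings, max_plausible=1000.0):
    """Pick one reading from {sensor_name: value} for the current stage.

    Readings beyond `max_plausible` are treated as faulty and overridden.
    """
    for sensor in STAGE_PRIORITY[stage]:
        value = readings.get(sensor)
        if value is not None and abs(value) <= max_plausible:
            return sensor, value
    raise ValueError("no trusted sensor reading available")
```

A fuller implementation would also weigh the recent-performance and historical factors the abstract lists before settling on a sensor.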

Neural network categorization accuracy with categorical graph neural networks

Neural network-based categorization can be improved by incorporating graph neural networks that operate on a graph representing the taxonomy of the categories into which a given input is to be categorized. The output of a graph neural network, operating on a graph representing the taxonomy of categories, can be combined with the output of a neural network operating upon the input to be categorized, such as through an interaction of multidimensional output data, such as a dot product of output vectors. In such a manner, information conveying the explicit relationships between categories, as defined by the taxonomy, can be incorporated into the categorization. To recapture information, incorporate new information, or reemphasize information, a second neural network can also operate upon the input to be categorized, with the output of such a second neural network being merged with the output of the interaction.
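The interaction structure can be shown in miniature. All networks are stubbed as fixed vectors; only the dot-product interaction and the merge of a second network's output are illustrated, and the additive merge is an assumption.

```python
# Sketch: score each category by the dot product of the input network's
# embedding with that category's graph-network embedding, then merge a
# second network's per-category output additively.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def categorize(input_embedding, category_embeddings, residual):
    """Return per-category scores: interaction term plus second-network term."""
    scores = [dot(input_embedding, c) for c in category_embeddings]
    return [s + r for s, r in zip(scores, residual)]
```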

Gesture recognition using multiple antenna

Various embodiments wirelessly detect micro gestures using multiple antennas of a gesture sensor device. At times, the gesture sensor device transmits multiple outgoing radio frequency (RF) signals, each outgoing RF signal transmitted via a respective antenna of the gesture sensor device. The outgoing RF signals are configured to help capture information that can be used to identify micro-gestures performed by a hand. The gesture sensor device captures incoming RF signals generated by the outgoing RF signals reflecting off of the hand, and then analyzes the incoming RF signals to identify the micro-gesture.
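A toy version of the analysis step might reduce the reflected signal at each antenna to a feature and match the resulting vector against per-gesture templates. The signal model, the mean-amplitude feature, and the templates are all invented for illustration; the patent's actual signal processing is far richer.

```python
# Toy sketch: per-antenna mean-amplitude features, nearest-template gesture.
import math

def antenna_features(incoming_signals):
    """incoming_signals: one amplitude sample list per receive antenna."""
    return [sum(s) / len(s) for s in incoming_signals]

def classify_gesture(incoming_signals, templates):
    """templates: {gesture_name: feature vector}; return the closest gesture."""
    feats = antenna_features(incoming_signals)
    return min(templates, key=lambda g: math.dist(feats, templates[g]))
```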

Object detection and image cropping using a multi-detector approach
11694456 · 2023-07-04

Systems, methods and computer program products for detecting objects using a multi-detector are disclosed, according to various embodiments. In one aspect, a computer-implemented method includes defining an analysis profile comprising an initial number of analysis cycles dedicated to each of a plurality of detectors, where each detector is independently configured to detect objects according to a unique set of analysis parameters and/or a unique detector algorithm. The method also includes: receiving digital video data that depicts at least one object; analyzing the digital video data using some or all of the detectors in accordance with the analysis profile, where the analyzing produces an analysis result for each detector used in the analysis. Further, the method includes updating the analysis profile by adjusting the number of analysis cycles dedicated to at least one of the detectors based on the analysis results.
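The profile update can be sketched as a cycle reallocation. The scoring of analysis results and the one-cycle step size are assumptions made for illustration.

```python
# Sketch: shift analysis cycles in the profile toward the detector whose
# analysis results scored best, taking them from the weakest detector.

def update_profile(profile, results):
    """profile: {detector: cycles}; results: {detector: score in [0, 1]}."""
    best = max(results, key=results.get)
    worst = min(results, key=results.get)
    if best != worst and profile[worst] > 1:
        profile = dict(profile)
        profile[worst] -= 1          # take one cycle from the weakest detector
        profile[best] += 1           # give it to the strongest detector
    return profile
```

Iterating this update over successive video frames gradually concentrates analysis cycles on whichever detector algorithm is performing best on the current footage.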

Spoofing detection apparatus, spoofing detection method, and computer-readable recording medium
11694475 · 2023-07-04

A spoofing detection apparatus obtains, from an image capture apparatus, a first image frame that includes the face of a subject person captured while a light-emitting apparatus is emitting light, and a second image frame that includes the face of the subject person captured while the light-emitting apparatus is turned off; extracts, from each of the first and second image frames, information specifying a face portion of the subject person; extracts, from the first image frame, a portion that includes a bright point formed by reflection in an iris region of an eye of the subject person; extracts the corresponding portion from the second image frame; calculates a feature that is independent of the position of the bright point; and determines the authenticity of the subject person based on the feature.
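The liveness cue can be sketched simply: a genuine eye produces a specular bright point in the lit frame that disappears in the unlit frame, so the lit-minus-unlit intensity difference over the iris patch is large. The flat-patch representation and the threshold are illustrative assumptions; note that a mean absolute difference is independent of where in the patch the bright point falls, matching the position-independence the abstract requires.

```python
# Sketch: compare the iris-region patch under active illumination with the
# same patch when the light is off; a large difference indicates a real eye.

def iris_difference(lit_patch, unlit_patch):
    """Mean absolute intensity difference over the iris region patches."""
    diffs = [abs(a - b) for a, b in zip(lit_patch, unlit_patch)]
    return sum(diffs) / len(diffs)

def is_live(lit_patch, unlit_patch, threshold=30.0):
    """Authentic if the bright point appears only while the light is on."""
    return iris_difference(lit_patch, unlit_patch) >= threshold
```

A printed photo or screen replay reflects the probe light diffusely (or not at all), so the lit and unlit iris patches look nearly identical and the check fails.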