G06V10/811

SENSOR TRANSFORMATION ATTENTION NETWORK (STAN) MODEL

A sensor transformation attention network (STAN) model including sensors configured to collect input signals, attention modules configured to calculate attention scores of feature vectors corresponding to the input signals, a merge module configured to calculate attention values of the attention scores, and generate a merged transformation vector based on the attention values and the feature vectors, and a task-specific module configured to classify the merged transformation vector is provided.
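As an illustrative sketch only (not the patented implementation), the attention-based merge described in this abstract can be pictured as scoring each sensor's feature vector, softmaxing the scores into attention values, and taking the attention-weighted sum. All names (`attention_scores`, `merge`, the weight vector `w`) are assumptions for illustration.

```python
# Sketch of an attention-weighted merge over per-sensor feature vectors,
# assuming each sensor yields a fixed-length feature vector and the
# attention module produces one scalar score per sensor.
import numpy as np

def attention_scores(features, w):
    """Score each sensor's feature vector with a shared weight vector w."""
    return np.array([f @ w for f in features])

def merge(features, scores):
    """Softmax the scores into attention values, then weight-sum the features."""
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    values = exp / exp.sum()              # attention values, sum to 1
    merged = sum(v * f for v, f in zip(values, features))
    return values, merged

features = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # two sensors
w = np.array([2.0, 0.0])                                  # favors sensor 0
values, merged = merge(features, attention_scores(features, w))
```

The merged transformation vector would then be passed to a task-specific classifier.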

ASSOCIATION OF CAMERA IMAGES AND RADAR DATA IN AUTONOMOUS VEHICLE APPLICATIONS
20230038842 · 2023-02-09

The described aspects and implementations enable fast and accurate object identification in autonomous vehicle (AV) applications by combining radar data with camera images. In one implementation, disclosed is a method and a system to perform the method that includes obtaining a radar image of a first hypothetical object in an environment of the AV, obtaining a camera image of a second hypothetical object in the environment of the AV, and processing the radar image and the camera image using one or more machine-learning models (MLMs) to obtain a prediction measure representing a likelihood that the first hypothetical object and the second hypothetical object correspond to a same object in the environment of the AV.
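A minimal sketch of the association step, assuming each detection has been reduced to a feature vector: a logistic head stands in here for the abstract's machine-learning models, and every name and weight is invented for illustration.

```python
# Sketch: map a radar-derived and a camera-derived feature vector to a
# prediction measure in [0, 1] for "same object". The logistic regression
# head is a stand-in for a learned MLM.
import math

def prediction_measure(radar_feat, camera_feat, weights, bias=0.0):
    """Likelihood that both hypothetical objects are the same real object."""
    x = radar_feat + camera_feat              # concatenate the two features
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))         # sigmoid -> [0, 1]

# Radar supplies range/velocity-like features, camera supplies appearance.
p = prediction_measure([0.9, 0.1], [0.8, 0.2], weights=[1.0, -1.0, 1.0, -1.0])
```

A real system would threshold or rank such measures to fuse radar and camera detections into single tracked objects.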

Method and system for distributed learning and adaptation in autonomous driving vehicles

The present teaching relates to a system, method, and medium for in-situ perception in an autonomous driving vehicle. A plurality of types of sensor data acquired continuously by a plurality of types of sensors deployed on the vehicle are first received, where the plurality of types of sensor data provide information about the surroundings of the vehicle. Based on at least one model, one or more items are tracked from a first of the plurality of types of sensor data acquired by one or more of a first type of the plurality of types of sensors, wherein the one or more items appear in the surroundings of the vehicle. At least some of the one or more items are then automatically labeled on-the-fly via either cross-modality validation or cross-temporal validation of the one or more items and are used to locally adapt, on-the-fly, the at least one model in the vehicle.
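As a hedged sketch of the cross-modality validation idea (not the patented method): a tracked item is auto-labeled only when detections from two different sensor types agree spatially, here via box IoU. The threshold and all names are assumptions.

```python
# Sketch: auto-label an item on-the-fly only when two modalities
# (e.g., camera and lidar) agree on its location.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def auto_label(camera_box, lidar_box, label, threshold=0.5):
    """Return the label if the two modalities agree, else None (unvalidated)."""
    return label if iou(camera_box, lidar_box) >= threshold else None

lbl = auto_label((0, 0, 10, 10), (1, 1, 11, 11), "vehicle")
```

Validated labels like these could then drive local, on-the-fly adaptation of the on-vehicle model.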

METHOD AND SYSTEM OF MULTI-ATTRIBUTE NETWORK BASED FAKE IMAGERY DETECTION (MANFID)
20230040237 · 2023-02-09

A method for detecting fake images includes: obtaining an image for authentication, and hand-crafting a multi-attribute classifier to determine whether the image is authentic. Hand-crafting the multi-attribute classifier includes fusing at least an image classifier, an image spectrum classifier, a co-occurrence matrix classifier, and a one-dimensional (1D) power spectrum density (PSD) classifier. The multi-attribute classifier is trained by pre-processing training images to generate an attribute-specific training dataset to train each of the image classifier, the image spectrum classifier, the co-occurrence matrix classifier, and the 1D PSD classifier.
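One of the fused attributes, the 1D power spectrum density, can be sketched as a radial average of an image's 2D FFT power spectrum; GAN-generated images often show characteristic high-frequency artifacts in this profile. The function name and binning scheme are illustrative assumptions, not the patent's.

```python
# Sketch: radially averaged 1D PSD attribute of a square grayscale image.
import numpy as np

def psd_1d(image):
    """Radially averaged 1D power spectrum density of a square image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)  # integer radius per pixel
    # Mean power at each radius gives the 1D profile (DC component at index 0).
    sums = np.bincount(r.ravel(), weights=spectrum.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

profile = psd_1d(np.random.default_rng(0).random((32, 32)))
```

A classifier trained on such profiles would be one input to the fused multi-attribute decision.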

Sensor fusion for precipitation detection and control of vehicles

An apparatus includes a processor configured to be disposed with a vehicle and a memory coupled to the processor. The memory stores instructions to cause the processor to receive at least two of: radar data, camera data, lidar data, or sonar data. The sensor data is associated with a predefined region of a vicinity of the vehicle while the vehicle is traveling during a first time period. At least a portion of the vehicle is positioned within the predefined region during the first time period. The instructions also cause the processor to detect that no other vehicle is present within the predefined region. An environment of the vehicle during the first time period is classified as one state from a set of states that includes at least one of dry, light rain, heavy rain, light snow, or heavy snow, based on at least two types of the sensor data, to produce an environment classification. An operational parameter of the vehicle is then modified based on the environment classification.
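A toy sketch of the classify-then-adjust flow, under loud assumptions: the features (radar noise, camera contrast), thresholds, and speed factors are all invented stand-ins for the trained classifier and the actual operational parameter.

```python
# Sketch: two sensor modalities vote on an environment state, which then
# scales an operational parameter such as maximum speed.
STATES = ("dry", "light rain", "heavy rain", "light snow", "heavy snow")

def classify_environment(radar_noise, camera_contrast):
    """Toy rule-based stand-in for the trained environment classifier."""
    if radar_noise < 0.2 and camera_contrast > 0.7:
        return "dry"
    if radar_noise < 0.5:
        return "light rain"
    return "heavy rain"

def adjust_max_speed(base_kph, state):
    """Scale an operational parameter from the environment classification."""
    factor = {"dry": 1.0, "light rain": 0.85, "heavy rain": 0.6}.get(state, 0.6)
    return base_kph * factor

state = classify_environment(radar_noise=0.4, camera_contrast=0.5)
speed = adjust_max_speed(100.0, state)
```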

DEVICE AND METHOD FOR RECOGNIZING FINGERPRINT

A device for recognizing a fingerprint, includes: a fingerprint sensor; at least two moisture detection electrodes disposed within a preset range of the fingerprint sensor; and a processing module coupled to the fingerprint sensor and the at least two moisture detection electrodes. The fingerprint sensor is configured to output a fingerprint signal to the processing module when a user touches the fingerprint sensor and the at least two moisture detection electrodes with a finger. The processing module is configured to acquire a characteristic value which is positively related to an impedance between the at least two moisture detection electrodes when the user touches the fingerprint sensor and the at least two moisture detection electrodes with the finger; determine a fingerprint recognition parameter which matches the characteristic value; and perform fingerprint recognition according to the determined fingerprint recognition parameter and the fingerprint signal.
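The parameter-selection step can be sketched as a simple mapping: the characteristic value rises with the impedance between the moisture electrodes (a dry finger gives high impedance), and a matching recognition parameter is chosen before matching. The gain/mode values below are invented for illustration.

```python
# Sketch: choose a fingerprint recognition parameter from the
# impedance-related characteristic value.
def select_recognition_parameter(characteristic_value):
    """Map the characteristic value to an illustrative sensor gain and mode."""
    if characteristic_value > 0.8:       # high impedance: very dry finger
        return {"gain": 1.5, "mode": "dry"}
    if characteristic_value > 0.4:       # moderate impedance: normal finger
        return {"gain": 1.0, "mode": "normal"}
    return {"gain": 0.7, "mode": "wet"}  # low impedance: wet finger

param = select_recognition_parameter(0.9)
```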

LOCALIZATION AND MAPPING METHOD
20180012105 · 2018-01-11

A method comprising: obtaining a three-dimensional (3D) point cloud of an object; obtaining binary feature descriptors for feature points in a two-dimensional (2D) image of the object; assigning a plurality of index values to each feature point as multiple bits of the corresponding binary feature descriptor; storing the binary feature descriptor in a table entry of a plurality of hash key tables of a database image; obtaining query binary feature descriptors for feature points in a query image; matching the query binary feature descriptors to the binary feature descriptors of the database image; reselecting one bit of the hash key of the matched database image; and re-indexing the feature points in the table entries of the hash key table of the database image.
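The indexing idea can be sketched as multi-table hashing of binary descriptors: selected bits of each descriptor serve as a hash key into each table, so a query descriptor only needs a Hamming comparison against the few candidates sharing a key. The bit positions, table count, and distance threshold below are illustrative choices, not the patent's.

```python
# Sketch: index binary feature descriptors in several hash-key tables,
# then match a query by Hamming distance over the hashed candidates.
def make_key(descriptor, bit_positions):
    """Hash key = the descriptor's bits at the chosen positions."""
    return tuple((descriptor >> b) & 1 for b in bit_positions)

def build_tables(descriptors, tables_bits):
    """One dict per hash-key table, mapping key -> list of feature indices."""
    tables = []
    for bits in tables_bits:
        table = {}
        for idx, d in enumerate(descriptors):
            table.setdefault(make_key(d, bits), []).append(idx)
        tables.append(table)
    return tables

def match(query, descriptors, tables, tables_bits, max_hamming=2):
    """Collect candidates from every table; keep the closest in Hamming distance."""
    candidates = set()
    for table, bits in zip(tables, tables_bits):
        candidates.update(table.get(make_key(query, bits), []))
    best = min(candidates, default=None,
               key=lambda i: bin(query ^ descriptors[i]).count("1"))
    if best is None or bin(query ^ descriptors[best]).count("1") > max_hamming:
        return None
    return best

descs = [0b10110010, 0b01001101, 0b10110000]
tables_bits = [(0, 1, 2, 3), (4, 5, 6, 7)]   # low nibble, high nibble
tables = build_tables(descs, tables_bits)
hit = match(0b10110011, descs, tables, tables_bits)
```

The abstract's bit-reselection and re-indexing steps would then adapt which bits form the keys as matches accumulate.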

Apparatus for Q-learning for continuous actions with cross-entropy guided policies and method thereof

An apparatus for performing continuous actions includes a memory storing instructions, and a processor configured to execute the instructions to obtain a first action of an agent, based on a current state of the agent, using a cross-entropy guided policy (CGP) neural network, and control to perform the obtained first action. The CGP neural network is trained using a cross-entropy method (CEM) policy neural network for obtaining a second action of the agent based on an input state of the agent, and the CEM policy neural network is trained using a CEM and trained separately from the training of the CGP neural network.
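The cross-entropy method at the core of this training scheme can be sketched as follows: iteratively sample actions from a Gaussian, keep the elites ranked by a Q-function, and refit the Gaussian. A CGP network would be trained to imitate the resulting action; the quadratic `q_fn` below is a stand-in for a learned critic, and all hyperparameters are illustrative.

```python
# Sketch: cross-entropy search for a continuous action that maximizes
# a critic q_fn(state, action).
import random, statistics

def cem_action(q_fn, state, iters=5, samples=50, elites=10, seed=0):
    """Return an action approximately maximizing q_fn via CEM."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    for _ in range(iters):
        actions = [rng.gauss(mu, sigma) for _ in range(samples)]
        top = sorted(actions, key=lambda a: q_fn(state, a), reverse=True)[:elites]
        mu, sigma = statistics.mean(top), statistics.stdev(top) + 1e-6
    return mu

# Toy critic whose optimum action equals the state value, here 0.3.
q = lambda s, a: -(a - s) ** 2
best = cem_action(q, state=0.3)
```

Since CEM search is expensive at inference time, the abstract's CGP network learns to output such actions directly from the state.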

Image-based techniques for stabilizing positioning estimates
11711565 · 2023-07-25

A device implementing a system for estimating device location includes at least one processor configured to receive a first estimated position of the device at a first time. The at least one processor is further configured to capture, using an image sensor of the device, images during a time period defined by the first time and a second time, and determine, based on the images, a second estimated position of the device, the second estimated position being relative to the first estimated position. The at least one processor is further configured to receive a third estimated position of the device at the second time, and estimate a location of the device based on the second estimated position and the third estimated position.
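A minimal sketch of the fusion step, under stated assumptions: the image-derived relative motion between the two fixes is added to the first fix, and the result is blended with the second fix. The fixed 0.5 blend weight and function name are illustrative, not the patent's.

```python
# Sketch: stabilize a position estimate by blending a visual-odometry
# propagated position with a fresh positioning fix.
def estimate_location(first_pos, relative_motion, third_pos, weight=0.5):
    """Blend (first fix + image-derived displacement) with the second fix."""
    propagated = tuple(p + d for p, d in zip(first_pos, relative_motion))
    return tuple(weight * a + (1 - weight) * b
                 for a, b in zip(propagated, third_pos))

# First fix at (0, 0); images show 3 m east, 4 m north; second fix at (3.2, 4.2).
loc = estimate_location((0.0, 0.0), (3.0, 4.0), (3.2, 4.2))
```

A production system would likely weight the blend by each source's uncertainty rather than a constant.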

Control method, terminal, and system using environmental feature data and biological feature data to display a current movement picture

A control method includes obtaining feature data acquired by the terminal using at least one sensor, generating an action instruction based on the feature data and a decision-making mechanism of the terminal, and executing the action instruction. In this application, various aspects of feature data are acquired using a plurality of sensors, data analysis is performed on the feature data, and a corresponding action instruction is then generated based on a corresponding decision-making mechanism to implement interactive control.
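The decision-making mechanism can be pictured as a mapping from fused environmental and biological feature data to an action instruction. The feature names, thresholds, and actions below are invented purely for illustration.

```python
# Sketch: rule-based decision-making mechanism turning sensor feature
# data into an action instruction.
def decide(feature_data):
    """Generate an action instruction from fused sensor feature data."""
    if feature_data.get("heart_rate", 0) > 150:
        return "slow_playback"            # biological feature triggers pacing
    if feature_data.get("ambient_light", 1.0) < 0.2:
        return "increase_brightness"      # environmental feature adjusts display
    return "show_movement_picture"        # default: display current movement

action = decide({"heart_rate": 120, "ambient_light": 0.1})
```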