Patent classifications
G06V10/803
Distributed vector-raster fusion
In some examples, a method of vector-raster data fusion includes receiving vector data for a geographical location and statistically analyzing the vector data to obtain vector statistics. In some examples, the method further includes rasterizing the vector statistics and storing at least one of the vector data and the rasterized vector statistics in a key-value store together with previously stored raster data for the geographical location. In some examples, the vector data further includes metadata, and the method further includes storing the metadata in at least one of the key-value store or a separate vector database.
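The rasterize-and-store step above can be sketched in Python. The grid layout, cell size, and the `loc-42` key are illustrative assumptions, and a plain dict stands in for the key-value store:

```python
def rasterize_vector_stats(points, grid_size, cell):
    """Rasterize a simple vector statistic (per-cell point count) onto a grid."""
    grid = [[0] * grid_size for _ in range(grid_size)]
    for x, y in points:
        gx = min(int(x // cell), grid_size - 1)
        gy = min(int(y // cell), grid_size - 1)
        grid[gy][gx] += 1
    return grid

def fuse(store, location, vector_data, raster_grid):
    """Store vector data and rasterized statistics under the same location key,
    alongside any previously stored raster data for that location."""
    entry = store.setdefault(location, {})
    entry["vector"] = vector_data
    entry["raster_stats"] = raster_grid
    return store

# Key-value store already holding raster data for the location (assumed).
store = {"loc-42": {"raster": "previously-stored imagery tile"}}
points = [(0.5, 0.5), (1.5, 0.5), (1.7, 1.2)]
grid = rasterize_vector_stats(points, grid_size=2, cell=1.0)
fuse(store, "loc-42", points, grid)
```

Keeping the fused entries under one location key is what lets a later query pull vector data, derived statistics, and raster imagery in a single lookup.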
VEHICLE DRIVER ASSIST SYSTEM
A vehicle driver assist system includes an expert evaluation system to fuse information acquired from various data sources. The data sources can correspond to conditions associated with the vehicle as a unit as well as external elements. The expert evaluation system monitors and evaluates the information from the data sources according to a set of rules by converting each data value into a metric value, determining a weight for each metric, assigning the determined weight to the metric, and generating a weighted metric corresponding to each data value. The expert evaluation system compares each weighted metric (or a linear combination of metrics) against one or more thresholds. The results from the comparison provide an estimation of a likelihood of one or more traffic features occurring.
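A minimal sketch of the rule evaluation described above, assuming hypothetical metric names, weights, and a single threshold on the linear combination of weighted metrics:

```python
def weighted_metrics(readings, weights):
    """Convert each raw data value into a weighted metric per the rule set."""
    return {name: weights[name] * value for name, value in readings.items()}

def estimate_likelihood(metrics, threshold):
    """Compare a linear combination of weighted metrics against a threshold,
    yielding an estimate of whether a traffic feature is likely to occur."""
    score = sum(metrics.values())
    return score, score >= threshold

# Hypothetical vehicle and external data sources, weights, and threshold.
readings = {"wheel_slip": 0.8, "rain_sensor": 0.5, "traffic_density": 0.3}
weights = {"wheel_slip": 0.5, "rain_sensor": 0.3, "traffic_density": 0.2}
metrics = weighted_metrics(readings, weights)
score, likely = estimate_likelihood(metrics, threshold=0.5)
```

In practice each metric could also be compared against its own threshold individually, as the abstract allows for either per-metric or combined comparisons.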
Identifying objects within images from different sources
Techniques are disclosed for providing a notification that a person is at a particular location. For example, a resident device may receive, from a user device, a first image showing the face of a first person, the first image being captured by a first camera of the user device. The resident device may also receive, from another device having a second camera, a second image showing a portion of the face of a second person, the second camera having a viewable area covering a particular location. The resident device may determine a score indicating a level of similarity between a first set of characteristics associated with the face of the first person and a second set of characteristics associated with the face of the second person. The resident device may then provide to the user device a notification based on the determined score.
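The similarity score could, for instance, be a cosine similarity over face-characteristic vectors. The feature values and the 0.9 notification threshold below are illustrative assumptions, not the patent's actual scoring method:

```python
import math

def similarity_score(feats_a, feats_b):
    """Cosine similarity between two face-characteristic vectors
    (a stand-in for the patent's scoring function)."""
    dot = sum(a * b for a, b in zip(feats_a, feats_b))
    norm_a = math.sqrt(sum(a * a for a in feats_a))
    norm_b = math.sqrt(sum(b * b for b in feats_b))
    return dot / (norm_a * norm_b)

def maybe_notify(score, threshold=0.9):
    """Send a notification only when the score clears the threshold."""
    return "person recognized at location" if score >= threshold else None

enrolled = [0.2, 0.7, 0.1]   # characteristics from the user-device image
observed = [0.2, 0.7, 0.1]   # characteristics from the second camera's image
score = similarity_score(enrolled, observed)
```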
TECHNIQUES FOR FINGERPRINT DETECTION AND USER AUTHENTICATION
Several techniques are disclosed for using touch sensor arrays to detect fingerprints and authenticate a user.
Systems and methods for determining a risk score using machine learning based at least in part upon collected sensor data
A system and method for analyzing risk and providing risk mitigation instructions. The system receives and analyzes sensor data and other data corresponding to a user to determine a test group. The system uses the test group to determine a risk score and, subsequently, a risk mitigation strategy. Machine learning techniques are implemented to refine how the test group, the risk score, and the mitigation strategy are each selected.
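One way the selection pipeline might look, with a nearest-centroid grouping rule, an absolute-deviation score, and a fixed mitigation threshold all assumed for illustration (the patent leaves these choices to the machine learning components):

```python
def assign_test_group(sensor_data, groups):
    """Pick the test group whose centroid is nearest the user's sensor profile
    (an assumed grouping rule)."""
    return min(groups,
               key=lambda g: sum((a - b) ** 2 for a, b in zip(sensor_data, groups[g])))

def risk_score(sensor_data, group_baseline):
    """Score the user's deviation from the group baseline (assumed metric)."""
    return sum(abs(a - b) for a, b in zip(sensor_data, group_baseline))

def mitigation(score):
    """Map the risk score to a mitigation strategy (hypothetical threshold)."""
    return "reduce exposure" if score > 1.0 else "monitor"

groups = {"low": [0.1, 0.1], "high": [0.9, 0.8]}  # hypothetical centroids
user = [0.85, 0.9]
group = assign_test_group(user, groups)
strategy = mitigation(risk_score(user, groups[group]))
```

In the patented system, each of these three hand-written rules would instead be refined by a learned model.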
VERIFICATION SYSTEM
A device includes memory and a processor. The device receives biometric information and location information. The device compares the received biometric information with stored biometric information, and compares the received location information with stored location information. The device determines whether the received biometric information matches the stored biometric information, and whether the received location information matches the stored location information. The device then sends an electronic communication that indicates whether the received biometric information matches the stored biometric information and whether the received location information matches the stored location information.
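The matching flow can be sketched as follows; the exact-match rule for biometrics and the coordinate tolerance for locations are illustrative assumptions:

```python
def verify(received_bio, stored_bio, received_loc, stored_loc, loc_tolerance=0.01):
    """Compare received biometric/location data with stored records and
    report both match results, as the electronic communication would."""
    bio_match = received_bio == stored_bio           # assumed exact-match rule
    loc_match = all(abs(a - b) <= loc_tolerance       # assumed coordinate tolerance
                    for a, b in zip(received_loc, stored_loc))
    return {"biometric_match": bio_match, "location_match": loc_match}

# Hypothetical stored record vs. incoming data.
result = verify("fp-hash-123", "fp-hash-123",
                (40.7128, -74.0060), (40.7127, -74.0061))
```

Reporting both results separately (rather than a single pass/fail) lets the receiving system decide how to weigh a biometric match against a location mismatch.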
SYSTEM AND METHODS TO OPTIMIZE NEURAL NETWORKS USING SENSOR FUSION
A method for optimizing a neural network is provided, including: (1) capturing, via a first sensor group having a first field of view, a first sample set having a first sensor domain corresponding to the first field of view; (2) capturing, via a second sensor group having a second field of view, a second sample set having a second sensor domain corresponding to the second field of view; (3) generating regions of interest of the second sample set; (4) translating the regions of interest to the first sensor domain; (5) identifying nodes of the neural network which correspond to the translated regions; and (6) optimizing the neural network by at least one of (a) increasing the weight value of the nodes corresponding to the one or more translated regions and (b) decreasing the weight value of the nodes not corresponding to the one or more translated regions.
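Steps (5) and (6) amount to scaling node weights according to whether they correspond to a translated region of interest. The boost/damp factors and node names below are assumptions for illustration:

```python
def optimize_weights(weights, roi_nodes, boost=1.1, damp=0.9):
    """Increase the weight of nodes corresponding to translated regions of
    interest and decrease the weight of nodes that do not (assumed factors)."""
    return {node: w * (boost if node in roi_nodes else damp)
            for node, w in weights.items()}

# Hypothetical node weights; "n2" was identified as corresponding to a
# region of interest translated from the second sensor domain to the first.
weights = {"n1": 1.0, "n2": 2.0, "n3": 0.5}
updated = optimize_weights(weights, roi_nodes={"n2"})
```

A real network would apply this at the level of tensors of weights rather than named scalar nodes, but the boost/damp asymmetry is the core of the optimization.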
METHOD AND SYSTEM FOR ANNOTATING SENSOR DATA
A computer-implemented method for annotating driving scenario sensor data, including the steps of receiving raw sensor data, the raw sensor data comprising a plurality of successive LIDAR point clouds and/or a plurality of successive camera images, recognizing objects in each image of the camera data and/or each point cloud using one or more neural networks, correlating objects within successive images and/or point clouds, removing false positive results on the basis of plausibility criteria, and exporting the annotated sensor data of the driving scenario.
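The false-positive removal step might, for example, combine a cross-frame correlation count with a confidence floor. Both plausibility criteria below are assumed, not taken from the patent:

```python
def remove_false_positives(detections, min_hits=2, min_confidence=0.5):
    """Drop detections that fail the plausibility criteria: too few
    correlations across successive frames, or too low a confidence."""
    return [d for d in detections
            if d["hits"] >= min_hits and d["confidence"] >= min_confidence]

# Hypothetical detections after cross-frame correlation; "hits" counts how
# many successive images/point clouds the object was correlated across.
detections = [
    {"label": "car", "hits": 5, "confidence": 0.9},
    {"label": "car", "hits": 1, "confidence": 0.8},        # seen in one frame only
    {"label": "pedestrian", "hits": 3, "confidence": 0.3}, # low confidence
]
kept = remove_false_positives(detections)
```

Only the surviving detections would then be exported as the annotated sensor data of the driving scenario.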
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, COMPUTER PROGRAM PRODUCT, AND RECORDING MEDIUM
An information processing apparatus according to one embodiment includes a memory and one or more hardware processors. The memory stores order information in which an order of pieces of meta-information for a character to be recognized is defined. The one or more hardware processors are connected to the memory and function as a recognition unit and an update unit. The recognition unit performs character recognition on an image including a character string by using first meta-information specified from the pieces of meta-information. The update unit updates the first meta-information to second meta-information in accordance with the order information when a confidence score of the character recognition satisfies a predetermined condition. The character recognition is then performed using the second meta-information.
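The fallback order can be sketched as follows, with a stub recognizer standing in for the real OCR engine; the meta-information labels and confidence threshold are hypothetical:

```python
def recognize_with_fallback(image, meta_order, recognize, min_confidence=0.8):
    """Try recognition with each piece of meta-information in the defined
    order, moving to the next piece while the confidence score is too low."""
    for meta in meta_order:
        text, confidence = recognize(image, meta)
        if confidence >= min_confidence:
            return text, meta
    return text, meta  # last attempt, even if no result was confident

# Hypothetical recognizer stub: a digits-only hint misreads the string,
# while an alphanumeric hint reads it with high confidence.
def fake_recognize(image, meta):
    return ("AB12", 0.95) if meta == "alphanumeric" else ("4812", 0.4)

text, used_meta = recognize_with_fallback(
    "plate.png", ["digits", "alphanumeric"], fake_recognize)
```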
SENSOR FUSION
A plurality of images can be acquired from a plurality of sensors, and a plurality of flattened patches can be extracted from the plurality of images. An image location and a sensor type token, identifying the type of sensor used to acquire the image from which the respective flattened patch was extracted, can be added to each of the plurality of flattened patches. The flattened patches can be concatenated into a flat tensor, and a task token indicating a processing task can be added to the flat tensor, wherein the flat tensor is a one-dimensional array that includes two or more types of data. The flat tensor can be input to a first deep neural network that includes a plurality of encoder layers and a plurality of decoder layers and outputs transformer output. The transformer output can be input to a second deep neural network that determines an object prediction indicated by the task token, and the object prediction can be output.
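The patch flattening and token concatenation can be sketched with plain lists. The token codes and 2x2 patch size are illustrative; a real implementation would use tensors and learned embeddings rather than raw integer tags:

```python
def flatten_patches(images, patch_size):
    """Extract flattened patches from each image, tagging each patch with its
    image location (row, col) and a sensor-type token."""
    patches = []
    for sensor_token, image in images:  # image: 2-D list of pixel values
        for r in range(0, len(image), patch_size):
            for c in range(0, len(image[0]), patch_size):
                flat = [image[r + i][c + j]
                        for i in range(patch_size) for j in range(patch_size)]
                patches.append(flat + [r, c, sensor_token])
    return patches

def build_flat_tensor(patches, task_token):
    """Concatenate the tagged patches into one 1-D array holding two or more
    types of data, and append the task token."""
    tensor = [v for patch in patches for v in patch]
    tensor.append(task_token)
    return tensor

CAMERA, LIDAR, DETECT = 100, 101, 200  # assumed token codes
images = [(CAMERA, [[1, 2], [3, 4]]), (LIDAR, [[5, 6], [7, 8]])]
patches = flatten_patches(images, patch_size=2)
tensor = build_flat_tensor(patches, task_token=DETECT)
```

The resulting flat tensor is what would be fed to the encoder-decoder network; the task token at the end tells the downstream network which prediction to produce.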