Patent classifications
G06V10/809
System And Method For Exercise Type Recognition Using Wearables
The present disclosure provides for using multiple inertial measurement units (IMUs) to recognize particular user activity, such as particular types of exercises and repetitions of such exercises. The IMUs may be located in consumer products, such as smartwatches and earbuds. Each IMU may include an accelerometer and a gyroscope, each with three axes of measurement, for a total of 12 raw measurement streams. A training image is generated that includes a plurality of subplots or tiles, each depicting a separate one of the data streams. The training image is then used to train a machine learning model to recognize IMU data as corresponding to a particular type of exercise.
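A minimal sketch of how the 12 IMU streams might be tiled into a single training image is given below; the window length, grid layout, and placeholder data are illustrative assumptions rather than details taken from the disclosure.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical example: two IMUs (watch, earbud) x two sensors (accel, gyro) x three axes
# gives 12 raw measurement streams over a fixed window of samples.
WINDOW = 256
rng = np.random.default_rng(0)
streams = rng.standard_normal((12, WINDOW))  # placeholder data standing in for real IMU samples

labels = [f"{imu}-{sensor}-{axis}"
          for imu in ("watch", "earbud")
          for sensor in ("accel", "gyro")
          for axis in ("x", "y", "z")]

# Tile the 12 streams into a 4x3 grid of subplots and render one training image.
fig, axes = plt.subplots(4, 3, figsize=(6, 8))
for ax, series, label in zip(axes.flat, streams, labels):
    ax.plot(series, linewidth=0.8)
    ax.set_title(label, fontsize=6)
    ax.axis("off")  # the classifier only needs the curve shapes, not axis decoration
fig.tight_layout()
fig.savefig("training_image.png", dpi=100)  # image fed to the machine learning model
```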
SYSTEM AND METHOD FOR GENERATING REALISTIC SIMULATION DATA FOR TRAINING AN AUTONOMOUS DRIVER
A method for training a model for generating simulation data for training an autonomous driving agent, comprising: analyzing real data, collected from a driving environment, to identify a plurality of environment classes, a plurality of moving agent classes, and a plurality of movement pattern classes; generating a training environment according to one environment class; and, in at least one training iteration: generating, by a simulation generation model, a simulated driving environment according to the training environment and according to a plurality of generated training agents, each associated with one of the plurality of moving agent classes and one of the plurality of movement pattern classes; collecting simulated driving data from the simulated driving environment; and modifying at least one model parameter of the simulation generation model to minimize a difference between a simulation statistical fingerprint, computed using the simulated driving data, and a real statistical fingerprint, computed using the real data.
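The training iteration can be sketched as a moment-matching loop; the network architecture, the choice of per-feature mean and standard deviation as the "statistical fingerprint", and the synthetic stand-in data are assumptions for illustration only.

```python
import torch

# Illustrative assumption: a "statistical fingerprint" is the per-feature mean and
# standard deviation of driving data (e.g. speeds, gap distances) collected either
# from the real environment or from the simulator.
def fingerprint(data: torch.Tensor) -> torch.Tensor:
    return torch.cat([data.mean(dim=0), data.std(dim=0)])

# Hypothetical simulation generation model: maps sampled agent/movement-class codes
# to simulated driving measurements. A real model would drive a full simulator.
class SimulationGenerator(torch.nn.Module):
    def __init__(self, n_classes=8, n_features=4):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_classes, 32), torch.nn.ReLU(),
            torch.nn.Linear(32, n_features))

    def forward(self, agent_codes):
        return self.net(agent_codes)

real_data = torch.randn(1000, 4) * 2.0 + 1.0        # stand-in for collected real driving data
real_fp = fingerprint(real_data)

model = SimulationGenerator()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):                              # "at least one training iteration"
    agent_codes = torch.randn(256, 8)                # generated training agents (class samples)
    simulated = model(agent_codes)                   # simulated driving data
    loss = torch.nn.functional.mse_loss(fingerprint(simulated), real_fp)
    optim.zero_grad()
    loss.backward()                                  # modify model parameters to shrink the
    optim.step()                                     # fingerprint difference
```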
AUTOMATIC INSPECTION USING ARTIFICIAL INTELLIGENCE MODELS
An inspection method includes receiving a plurality of training images and an image of a target object obtained from inspection of the target object. The method further includes generating, by one or more training codes, a plurality of inference codes. The one or more training codes are configured to receive the plurality of training images as input and output the plurality of inference codes. The one or more training codes and the plurality of inference codes include computer-executable instructions. The method further includes selecting one or more inference codes from the plurality of inference codes based on a user input and/or one or more characteristics of at least a portion of the received plurality of training images. The method also includes inspecting the received image using the selected one or more inference codes.
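One hedged reading of the "training codes produce inference codes" arrangement is sketched below, where a training routine returns several callable inspection routines and one is selected by user input or by a characteristic of the image; the brightness-based selection rule and all thresholds are purely hypothetical.

```python
import numpy as np

# Illustrative assumption: a "training code" consumes training images and emits several
# "inference codes" (callable inspection routines), e.g. one tuned per brightness regime.
def training_code(training_images):
    dark = [im for im in training_images if im.mean() < 128]
    bright = [im for im in training_images if im.mean() >= 128]

    def make_inference(reference_set):
        mean_ref = np.mean([im.mean() for im in reference_set]) if reference_set else 128.0
        def inference(image):
            # Flag the target object as anomalous if it deviates from the reference mean.
            return abs(image.mean() - mean_ref) > 20.0
        return inference

    return {"dark": make_inference(dark), "bright": make_inference(bright)}

def select_inference(inference_codes, target_image, user_choice=None):
    # Select by explicit user input, otherwise by a characteristic of the image itself.
    if user_choice is not None:
        return inference_codes[user_choice]
    return inference_codes["dark" if target_image.mean() < 128 else "bright"]

training_images = [np.random.randint(0, 256, (64, 64)) for _ in range(10)]
target = np.random.randint(0, 256, (64, 64))
codes = training_code(training_images)
is_anomalous = select_inference(codes, target)(target)   # inspect the received image
```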
Automated analysis of cellular samples having intermixing of analytically distinct patterns of analyte staining
Systems and methods discussed herein include, among other things, a method comprising quantifying analyte staining of a biological compartment in a region in which said staining is intermixed with analyte staining of an analytically distinct biological compartment. Disclosed systems and methods include, for example, a system and method for identifying membrane staining of an analyte of interest in regions where diffuse membrane staining is intermixed with cytoplasmic staining and/or punctate staining. Disclosed systems and methods also include, for example, a system and method for quantifying membrane staining of an analyte of interest in tissue or cytological samples having regions in which membrane staining is intermixed with cytoplasmic staining and/or punctate staining.
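A rough sketch of compartment-wise quantification, assuming the membrane compartment can be approximated as a thin ring at the cell boundary, might look as follows; the ring width and the synthetic stain and mask data are illustrative only.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch: given a stain-intensity image and a binary cell mask, approximate
# the membrane compartment as a thin ring at the cell boundary and quantify stain
# intensity there separately from the intermixed cytoplasmic interior.
rng = np.random.default_rng(0)
stain = rng.random((128, 128))                     # stand-in for analyte staining intensity
cell_mask = np.zeros((128, 128), dtype=bool)
cell_mask[32:96, 32:96] = True                     # stand-in for a segmented cell

interior = ndimage.binary_erosion(cell_mask, iterations=3)
membrane_ring = cell_mask & ~interior              # membrane compartment (boundary ring)
cytoplasm = interior                               # remaining interior compartment

membrane_score = stain[membrane_ring].mean()       # membrane staining quantified on its own
cytoplasm_score = stain[cytoplasm].mean()          # intermixed cytoplasmic staining, reported separately
print(f"membrane={membrane_score:.3f} cytoplasm={cytoplasm_score:.3f}")
```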
Annotation cross-labeling for autonomous control systems
An annotation system uses annotations for a first set of sensor measurements from a first sensor to identify annotations for a second set of sensor measurements from a second sensor. The annotation system identifies reference annotations in the first set of sensor measurements that indicate a location of a characteristic object in the two-dimensional space. The annotation system determines a spatial region in the three-dimensional space of the second set of sensor measurements that corresponds to a portion of the scene represented in the reference annotations of the first set of sensor measurements. The annotation system then determines annotations within the spatial region of the second set of sensor measurements that indicate a location of the characteristic object in the three-dimensional space.
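A minimal sketch of carrying a 2D reference annotation into the 3D measurements, assuming a pinhole camera with hypothetical intrinsics and lidar points already expressed in the camera frame:

```python
import numpy as np

# Hypothetical camera intrinsics; a real system would use calibrated sensor parameters.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def points_in_2d_box(points_cam, box_xyxy):
    """Return the subset of 3D points whose image projection falls inside the 2D box."""
    x0, y0, x1, y1 = box_xyxy
    in_front = points_cam[:, 2] > 0.1                      # keep points in front of the camera
    proj = (K @ points_cam[in_front].T).T
    uv = proj[:, :2] / proj[:, 2:3]                        # perspective division to pixel coordinates
    inside = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    return points_cam[in_front][inside]

lidar_points = np.random.uniform([-10, -2, 1], [10, 2, 40], size=(5000, 3))
box_2d = (300, 200, 360, 280)                              # reference annotation from the first sensor
region = points_in_2d_box(lidar_points, box_2d)            # spatial region in the second sensor's data
if len(region):
    annotation_3d = (region.min(axis=0), region.max(axis=0))  # axis-aligned 3D box for the object
```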
Neural networks for coarse- and fine-object classifications
Aspects of the subject matter disclosed herein include methods, systems, and other techniques for training, in a first phase, an object classifier neural network with a first set of training data, the first set of training data including a first plurality of training examples, each training example in the first set of training data being labeled with a coarse-object classification; and training, in a second phase after completion of the first phase, the object classifier neural network with a second set of training data, the second set of training data including a second plurality of training examples, each training example in the second set of training data being labeled with a fine-object classification.
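The two-phase schedule might be sketched as follows, with a shared backbone and separate coarse and fine heads; the layer sizes, label hierarchy, and placeholder data are assumptions.

```python
import torch
from torch import nn

# Hypothetical shared backbone and two classification heads: 3 coarse classes, 9 fine classes.
backbone = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU())
coarse_head = nn.Linear(128, 3)
fine_head = nn.Linear(128, 9)

def train(head, dataset, epochs=5):
    opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(epochs):
        for x, y in dataset:
            loss = nn.functional.cross_entropy(head(backbone(x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

coarse_set = [(torch.randn(32, 64), torch.randint(0, 3, (32,))) for _ in range(20)]
fine_set = [(torch.randn(32, 64), torch.randint(0, 9, (32,))) for _ in range(20)]

train(coarse_head, coarse_set)   # phase 1: examples labeled with coarse-object classifications
train(fine_head, fine_set)       # phase 2: examples labeled with fine-object classifications
```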
Traveling environment recognition apparatus
A traveling environment recognition apparatus includes a first detector that detects an object in a first search region outside a vehicle, a second detector that detects an object in a second search region that at least partly overlaps the first search region, a determiner that determines, in the overlapping region of the two search regions, whether the two objects respectively detected by the two detectors are the same object, and a recognizer that integrates the detected objects and recognizes them as one fusion object. The recognizer compares a threshold with a parameter based on the distances from the vehicle to the detected objects, recognizing the fusion object using the worst value of the detection results when the detected objects are near the vehicle, and using the mean value of the detection results when the detected objects are far from the vehicle.
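The near/far fusion rule reduces to a small function; the threshold value, the interpretation of "worst value" as the more conservative (shorter) distance, and the scalar detection format are assumptions for illustration.

```python
# Hypothetical threshold separating "near the vehicle" from "far from the vehicle".
NEAR_THRESHOLD_M = 20.0

def fuse_distance(dist_sensor1: float, dist_sensor2: float) -> float:
    """Fuse two distance estimates of the same object into one fusion-object distance."""
    nearest = min(dist_sensor1, dist_sensor2)
    if nearest < NEAR_THRESHOLD_M:
        # Near the vehicle: use the worst (most conservative) detection result.
        return nearest
    # Far from the vehicle: use the mean of the detection results.
    return (dist_sensor1 + dist_sensor2) / 2.0

print(fuse_distance(12.0, 14.5))   # near case: 12.0
print(fuse_distance(48.0, 52.0))   # far case: 50.0
```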
METHOD AND DEVICE FOR SELECTING A FINGERPRINT IMAGE IN A SEQUENCE OF IMAGES FOR AUTHENTICATION OR IDENTIFICATION
A method for selecting an image of a fingerprint for identifying an individual is described. The method includes: acquiring a current image comprising a fingerprint and segmenting said fingerprint; determining a value representing a stability of said current image; determining a value representing a sharpness of said current image; determining a score, said score being a combination of the stability value, the sharpness value, and a number of segmented fingerprints; and selecting said current image for identifying said individual in the case where said score is higher than a first threshold value, otherwise storing said current image in memory as the best image in the case where its score is higher than a best score value, and repeating the method.
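The selection loop could be sketched as below; the stability and sharpness measures, the score weights, both thresholds, and the stubbed-out segmentation step are illustrative assumptions.

```python
import numpy as np

SCORE_THRESHOLD = 0.6   # hypothetical first threshold value

def sharpness(img):
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx * gx + gy * gy))          # gradient energy as a sharpness proxy

def stability(img, prev):
    if prev is None:
        return 0.0
    return float(1.0 / (1.0 + np.mean(np.abs(img.astype(float) - prev.astype(float)))))

def segment_fingerprints(img):
    return [img]                                       # stub: assume one segmented fingerprint

def select_image(frames):
    best_img, best_score, prev = None, -np.inf, None
    for img in frames:
        prints = segment_fingerprints(img)
        score = 0.5 * stability(img, prev) + 0.4 * sharpness(img) / 100.0 + 0.1 * len(prints)
        if score > SCORE_THRESHOLD:
            return img                                 # good enough: use it for identification
        if score > best_score:                         # otherwise remember the best image so far
            best_img, best_score = img, score
        prev = img
    return best_img

frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(5)]
chosen = select_image(frames)
```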
INTELLIGENT LIDAR SCANNING
An intelligent three-dimensional scanner system mountable on a movable machine, including a three-dimensional scanner, a camera, and control circuitry. The control circuitry is configured to receive three-dimensional point data from the three-dimensional scanner, receive two-dimensional image data from the camera, input the two-dimensional image data to a machine learning model that identifies objects in the two-dimensional image data, fuse the three-dimensional point data with the identified objects in the two-dimensional image data in order to identify the objects in the three-dimensional point data, and control a scan pattern to direct scanning resolution of the three-dimensional scanner based on the identified objects.
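Focusing only on the scan-pattern control step, a sketch under assumed sector bounds and angular resolutions might look like this:

```python
import numpy as np

# Hypothetical angular resolutions: a coarse default step and a finer step over objects.
BASE_STEP_DEG = 1.0        # default angular resolution
FINE_STEP_DEG = 0.2        # denser resolution over identified objects

def build_scan_pattern(object_sectors, fov=(-60.0, 60.0)):
    """Return the list of azimuth angles the scanner should visit on the next sweep."""
    angles, az = [], fov[0]
    while az <= fov[1]:
        in_object = any(lo <= az <= hi for lo, hi in object_sectors)
        angles.append(az)
        az += FINE_STEP_DEG if in_object else BASE_STEP_DEG
    return np.array(angles)

# e.g. fused camera/lidar detections identified objects around -12..-5 deg and 20..28 deg
pattern = build_scan_pattern([(-12.0, -5.0), (20.0, 28.0)])
```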
Method and apparatus for generating road map, electronic device, and computer storage medium
A method and apparatus for generating a road map, an electronic device, and a non-transitory computer storage medium are disclosed. The method includes: inputting a remote sensing image into a first neural network to extract first road feature information of multiple channels via the first neural network; inputting the first road feature information of multiple channels into a third neural network to extract third road feature information of multiple channels via the third neural network, where the third neural network is trained by using road direction information as supervision information; fusing the first road feature information and the third road feature information; and generating a road map according to a fusion result.
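A minimal sketch of the feature flow, with hypothetical layer sizes and element-wise addition standing in for the unspecified fusion operation:

```python
import torch
from torch import nn

# Hypothetical networks: the first extracts road features from the remote sensing image,
# the third (direction-supervised in the disclosure) refines them, and a small head
# generates the road map from the fused features.
first_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())        # first road features
third_net = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())       # direction-supervised features
map_head = nn.Conv2d(16, 1, 1)                                              # road-map generation head

remote_sensing_image = torch.randn(1, 3, 256, 256)
first_features = first_net(remote_sensing_image)       # multi-channel first road feature information
third_features = third_net(first_features)             # multi-channel third road feature information
fused = first_features + third_features                # fuse the two feature maps (assumed fusion)
road_map = torch.sigmoid(map_head(fused))              # per-pixel road probability map
```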