Patent classifications
G06V10/809
Detection device, detection method, and recording medium for detecting an object in an image
An information processing device is an information processing device including a processor. The processor obtains a detection result of a first detector for detecting a first target in first sensing data; and based on the detection result of the first detector, determines a setting of processing by a second detector for detecting a second target in second sensing data next in an order after the first sensing data, the second target being different from the first target.
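A minimal sketch of the two-detector flow this abstract describes: the first detector's result (here, a vehicle bounding box) determines the settings used by a second detector on the next item of sensing data. The ROI expansion, threshold values, and all names are illustrative assumptions, not taken from the patent.

```python
def settings_for_second_detector(first_result, margin=20):
    """Derive second-detector settings from the first detector's output."""
    x1, y1, x2, y2 = first_result["bbox"]
    # Search near the first target by expanding its box into an ROI.
    roi = (max(0, x1 - margin), max(0, y1 - margin), x2 + margin, y2 + margin)
    # Relax the detection threshold when the first detection was confident.
    threshold = 0.3 if first_result["score"] >= 0.8 else 0.5
    return {"roi": roi, "threshold": threshold}

first_result = {"bbox": (40, 60, 200, 180), "score": 0.91}
print(settings_for_second_detector(first_result))
# → {'roi': (20, 40, 220, 200), 'threshold': 0.3}
```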
LABELING, VISUALIZATION, AND VOLUMETRIC QUANTIFICATION OF HIGH-GRADE BRAIN GLIOMA FROM MRI IMAGES
Systems, methods, and computer program products are provided for segmenting a brain tumor from various MRI sequencing techniques. A plurality of MRI sequences of a head of a patient are received. Each MRI sequence includes a T1-weighted with contrast image, a Fluid Attenuated Inversion Recovery (FLAIR) image, a T1-weighted image, and a T2-weighted image. Each image of the plurality of MRI sequences is registered to an anatomical atlas. A plurality of modified MRI sequences are generated by removing a skull from each image in the plurality of MRI sequences. A tumor segmentation map is determined by segmenting a tumor within a brain in each image in the plurality of modified MRI sequences. The tumor segmentation map is applied to each of the plurality of MRI sequences to thereby generate a plurality of labelled MRI sequences.
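The abstract's pipeline can be sketched stage by stage, with each stage as an illustrative stub (the patent's actual registration, skull-stripping, and segmentation algorithms are not specified here):

```python
MODALITIES = ("t1_contrast", "flair", "t1", "t2")

def register_to_atlas(image):
    return image  # stub: align the image to an anatomical atlas

def remove_skull(image):
    return image  # stub: mask out non-brain tissue

def segment_tumor(sequences):
    return "tumor_mask"  # stub: tumor segmentation map

def label_sequences(mri_sequences):
    registered = [{m: register_to_atlas(s[m]) for m in MODALITIES}
                  for s in mri_sequences]
    modified = [{m: remove_skull(img) for m, img in s.items()}
                for s in registered]
    seg_map = segment_tumor(modified)
    # Apply the single segmentation map back to every original sequence.
    return [dict(s, tumor_mask=seg_map) for s in mri_sequences]

labelled = label_sequences([{m: f"img_{m}" for m in MODALITIES}])
print(labelled[0]["tumor_mask"])  # → tumor_mask
```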
Method for driving assistance and mobile device using the method
A method for assisting a driver to drive a vehicle in a safer manner includes capturing images of the road in front of the vehicle, and identifying a traffic sign in the images. A first image frame is captured at a first time and a second image frame is captured at a later second time when the images comprise the traffic sign. A change in size or other apparent change of the traffic sign from the first image frame to the second image frame is determined, and conformity or non-conformity with a predetermined rule is then determined. The traffic sign can be analyzed and recognized to trigger the vehicle to perform an action accordingly when conformity is found. A device providing assistance with driving is also provided.
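A sketch of the size-change check described above. The growth ratio of the sign's bounding box between two frames approximates how fast the vehicle is approaching the sign; the "predetermined rule" used here (a maximum growth ratio) is an assumption for illustration.

```python
def apparent_width(bbox):
    x1, _, x2, _ = bbox
    return x2 - x1

def conforms_to_rule(bbox_first, bbox_second, max_growth=1.5):
    """True when the sign's apparent growth stays within the allowed ratio."""
    ratio = apparent_width(bbox_second) / apparent_width(bbox_first)
    return ratio <= max_growth

# Sign grows from 40 px to 50 px wide between frames: ratio 1.25, conforming.
print(conforms_to_rule((100, 50, 140, 90), (95, 45, 145, 95)))  # → True
```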
COMPUTER VISION INFERENCING FOR NON-DESTRUCTIVE TESTING
An inspection system is provided and includes a camera and a controller. The controller can include one or more processors in communication with the camera and receive a plurality of images of a target captured by the camera. The controller can also determine, using a first computer vision algorithm, a first prediction and corresponding confidence level for substantially all of the images. The controller can select a subset of the images having the first prediction confidence level greater than or equal to a first prediction threshold value. The controller can additionally determine, using a second computer vision algorithm, a second prediction and corresponding second prediction confidence level for each of the selected images. The second prediction can require more time to determine than the first prediction. The controller can output the second prediction and the second prediction confidence level for each of the selected images.
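A minimal sketch of the two-tier inference the abstract describes: a fast model screens every image, and a slower, more accurate model runs only on images that pass the fast model's confidence threshold. Both models here are illustrative stubs, not the patent's algorithms.

```python
def fast_model(image):
    # Cheap first pass: (prediction, confidence).
    return ("defect", 0.9) if "crack" in image else ("ok", 0.4)

def slow_model(image):
    # Expensive second pass, run only on the pre-screened subset.
    return ("crack", 0.97)

def two_stage_inference(images, threshold=0.7):
    selected = [img for img in images if fast_model(img)[1] >= threshold]
    return {img: slow_model(img) for img in selected}

results = two_stage_inference(["crack_01.png", "clean_02.png"])
print(results)  # → {'crack_01.png': ('crack', 0.97)}
```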
SYSTEMS AND METHODS FOR PROVIDING AND USING CONFIDENCE ESTIMATIONS FOR SEMANTIC LABELING
Systems and methods for processing and using sensor data. The methods comprise: obtaining semantic labels assigned to data points; performing a supervised machine learning algorithm and an unsupervised machine learning algorithm to respectively generate a first confidence score and a second confidence score for each semantic label of said semantic labels, the first and second confidence scores each representing a degree of confidence that the semantic label is correctly assigned to a respective one of the data points; generating a final confidence score for each said semantic label based on the first and second confidence scores; selecting subsets of the data points based on the final confidence scores; and aggregating the data points of the subsets to produce an aggregate set of data points.
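A sketch of the fusion-and-selection step, assuming the supervised and unsupervised confidence scores are already computed per data point; the averaging weight and keep threshold are illustrative choices, not the patent's.

```python
def fuse_and_select(points, supervised, unsupervised, weight=0.5, keep=0.6):
    kept = []
    for point, s, u in zip(points, supervised, unsupervised):
        final = weight * s + (1 - weight) * u   # final confidence score
        if final >= keep:                       # select high-confidence points
            kept.append(point)
    return kept  # aggregate set of retained data points

points = ["p0", "p1", "p2"]
print(fuse_and_select(points, [0.9, 0.5, 0.8], [0.7, 0.4, 0.9]))
# → ['p0', 'p2']
```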
MACHINE LEARNING-BASED POINT CLOUD ALIGNMENT CLASSIFICATION
Provided are methods, systems, and computer program products for machine-learning based point cloud alignment classification. An example method may include: obtaining at least two light detection and ranging (LiDAR) point clouds; processing the at least two LiDAR point clouds using at least one classifier network; obtaining at least one output dataset from the at least one classifier network; determining that the at least two LiDAR point clouds are misaligned based on the at least one output dataset; and performing a first action based on the determining that the at least two LiDAR point clouds are misaligned.
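The patent feeds point-cloud pairs through a classifier network; as a simple stand-in for that network's decision, this sketch flags misalignment when the mean distance between corresponding points exceeds a tolerance. The tolerance value is an assumption.

```python
import math

def misaligned(cloud_a, cloud_b, tolerance=0.1):
    """Decide whether two LiDAR point clouds are misaligned."""
    dists = [math.dist(p, q) for p, q in zip(cloud_a, cloud_b)]
    return sum(dists) / len(dists) > tolerance

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.5, 0.0, 0.0), (1.5, 0.0, 0.0)]  # shifted 0.5 m along x
print(misaligned(a, b))  # → True
```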
Image Processing and Automatic Learning on Low Complexity Edge Apparatus and Methods of Operation
An edge device for image processing includes a series of linked components which can be independently optimized. A specialized change detector which optimizes the events collected at the expense of false positives is accompanied by a trainable module, which uses training feedback to reduce the false positives over time. A “look ahead module” peeks ahead in time and determines whether an inference pipeline needs to run. This allocates a definite amount of time for the validation and training module. The training module is operated in terms of a quantum of time. Processing time during phases of no scene activity is reserved to carry out training. A lightweight detector and the classifier are trainable modules. A site optimizer is made up of rules and sub-modules using spatio-temporal heuristics to handle specific false positives while optimally combining the change detector and inference module results.
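A rough sketch of the per-frame flow the abstract lays out, with every module stubbed: an over-sensitive change detector gates frames, a look-ahead check decides whether the inference pipeline must run, and frames with no scene activity are counted as quanta of time available for training. All module names and behaviors are illustrative.

```python
def change_detector(frame):
    return frame != "static"      # stub: cheap, tolerant of false positives

def needs_inference(frame):
    return frame == "event"       # stub: "look ahead" gating of the pipeline

def run_inference(frame):
    return ("person", 0.8)        # stub: lightweight detector + classifier

def process(frames):
    outputs, training_quanta = [], 0
    for frame in frames:
        if not change_detector(frame):
            training_quanta += 1  # no scene activity: reserve time to train
            continue
        if needs_inference(frame):
            outputs.append(run_inference(frame))
    return outputs, training_quanta

print(process(["static", "event", "noise", "static"]))
# → ([('person', 0.8)], 2)
```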
GRADING APPARATUS AND METHOD BASED ON DIGITAL DATA
A grading apparatus and a method based on digital data are provided. In the method, feature information of an image is obtained through a first model. Content of the image includes a real object, and the first model is trained based on a deep learning algorithm. A first inference result is determined according to a first feature in the feature information. The first feature is a region feature and corresponds to objects, and the first inference result is one or more defects on the real object. A second inference result of a second feature in the feature information is determined through a second model based on a semantic algorithm. The second feature is related to locations, and the second inference result is related to context presented by the real object. The first and the second inference results are fused to obtain a grading result of the real object.
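The fusion step can be sketched as a function taking the two inference results, a defect list and a context label, and mapping them to a grade. The grading rules below are illustrative assumptions, not the patent's.

```python
def fuse_grades(defects, context):
    """Grade a real object from its defect list and its context label."""
    if not defects:
        return "A"
    # Context can excuse minor defects, e.g. defects in a non-critical area.
    if context == "non_critical_region" and len(defects) <= 2:
        return "B"
    return "C"

print(fuse_grades(["scratch"], "non_critical_region"))    # → B
print(fuse_grades(["scratch", "dent", "crack"], "edge"))  # → C
```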
Methods for optical character recognition (OCR)
A method is provided for Optical Character Recognition (OCR). A plurality of OCR decoding results each having a plurality of positions is obtained from capturing and decoding a plurality of images of the same one or more OCR characters. A recognized character in each OCR decoding result is compared with the recognized character that occupies an identical position in each of the other OCR decoding results. A number of occurrences that each particular recognized character occupies the identical position in the plurality of OCR decoding results is calculated. An individual confidence score is assigned to each particular recognized character based on the number of occurrences, with a highest individual confidence score assigned to a particular recognized character having the greatest number of occurrences. Determining which particular recognized character has been assigned the highest individual confidence score determines which particular recognized character comprises a presumptively valid character for the identical position.
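The per-position voting the abstract describes can be sketched directly: for each character position, count how often each recognized character appears across the decoding results and take the most frequent one as the presumptively valid character.

```python
from collections import Counter

def vote_ocr(decodings):
    resolved = []
    for position_chars in zip(*decodings):  # same position in every result
        counts = Counter(position_chars)
        best, _ = counts.most_common(1)[0]  # highest occurrence count wins
        resolved.append(best)
    return "".join(resolved)

# Three decodes of the same label; one misreads 'B' as '8'.
print(vote_ocr(["AB12", "AB12", "A812"]))  # → AB12
```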
Concurrent ensemble model training for open sets
Described are systems and methods for training machine learning models of an ensemble of models that are de-correlated. For example, two or more machine learning models may be concurrently trained (e.g., co-trained) while adding a decorrelation component to one or both models that decreases the pairwise correlation between the outputs of the models. Unlike traditional approaches, in accordance with the disclosed implementations, only the negative results need to be decorrelated.
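A sketch of the decorrelation component under stated assumptions: given two models' outputs on a batch, a penalty proportional to the squared pairwise correlation is computed, but only over the negative examples, as the abstract specifies. Only the penalty term is shown; the surrounding training loop is omitted.

```python
def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

def decorrelation_penalty(out_a, out_b, labels, weight=0.1):
    # Only negatives (label 0) contribute to the decorrelation term.
    neg_a = [a for a, y in zip(out_a, labels) if y == 0]
    neg_b = [b for b, y in zip(out_b, labels) if y == 0]
    return weight * correlation(neg_a, neg_b) ** 2

penalty = decorrelation_penalty([0.9, 0.2, 0.3], [0.8, 0.25, 0.35], [1, 0, 0])
print(round(penalty, 3))  # → 0.1
```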