Patent classifications
G06V10/774
Semantic labeling of point clouds using images
Systems and methods for semantic labeling of point clouds using images. Some implementations may include obtaining a point cloud that is based on lidar data reflecting one or more objects in a space; obtaining an image that includes a view of at least one of the one or more objects in the space; determining a projection of points from the point cloud onto the image; generating, using the projection, an augmented image that includes one or more channels of data from the point cloud and one or more channels of data from the image; inputting the augmented image to a two-dimensional convolutional neural network to obtain a semantically labeled image, wherein elements of the semantically labeled image include respective predictions; and mapping, by reversing the projection, predictions of the semantically labeled image to respective points of the point cloud to obtain a semantically labeled point cloud.
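The projection, augmentation, and reverse-mapping steps described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes a simple pinhole camera model with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`), uses a single depth channel as the point-cloud data, and omits occlusion handling.

```python
import numpy as np

def project_points(points, fx, fy, cx, cy):
    """Project 3D points (N, 3) in camera coordinates onto the image plane
    using a pinhole model; returns integer pixel coordinates (N, 2) as (u, v)."""
    u = fx * points[:, 0] / points[:, 2] + cx
    v = fy * points[:, 1] / points[:, 2] + cy
    return np.stack([u, v], axis=1).astype(int)

def augment_image(image, points, pixels):
    """Append a depth channel built from the projected lidar points to an
    (H, W, 3) image, yielding an (H, W, 4) augmented image."""
    h, w, _ = image.shape
    depth = np.zeros((h, w, 1), dtype=image.dtype)
    for (u, v), p in zip(pixels, points):
        if 0 <= u < w and 0 <= v < h:
            depth[v, u, 0] = p[2]  # store the point's range along z
    return np.concatenate([image, depth], axis=2)

def labels_to_points(label_image, pixels):
    """Reverse the projection: read back each point's predicted label from
    the (H, W) semantically labeled image; -1 marks points outside the view."""
    h, w = label_image.shape
    labels = np.full(len(pixels), -1)
    for i, (u, v) in enumerate(pixels):
        if 0 <= u < w and 0 <= v < h:
            labels[i] = label_image[v, u]
    return labels
```

In a full pipeline, the augmented image would be fed to the 2D CNN between `augment_image` and `labels_to_points`; here that step is elided.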
METHOD FOR DETECTING DEFECT AND METHOD FOR TRAINING MODEL
The present disclosure provides a method and device for detecting an image category. The method includes: acquiring a sample data set including a plurality of sample images labeled with a category, the sample data set including a training data set and a verification data set; training a deep learning model using the training data set to obtain, according to different numbers of training rounds, at least two trained models; testing the at least two trained models using the verification data set to generate a verification test result; generating, based on the verification test result, a verification test index; determining, according to the verification test index, a target model from the at least two trained models; and predicting a to-be-tested image of a target object using the target model to obtain the category of the to-be-tested image.
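The model-selection loop in this abstract (train for several round counts, score each candidate on the verification set, keep the best) can be sketched as below. `train_fn` and `eval_fn` are hypothetical callables standing in for the actual deep-learning training and verification-test code, which the abstract does not specify.

```python
def select_target_model(train_fn, eval_fn, train_set, val_set, round_counts):
    """Train one model per number of training rounds, compute a verification
    test index for each on the verification set, and return the best-scoring
    (target) model together with its index.

    train_fn(data, rounds) -> model; eval_fn(model, data) -> score (higher is better).
    """
    candidates = [train_fn(train_set, r) for r in round_counts]
    indices = [eval_fn(m, val_set) for m in candidates]
    best = max(range(len(candidates)), key=lambda i: indices[i])
    return candidates[best], indices[best]
```

The target model returned here would then be used to predict the category of a to-be-tested image.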
System and method for determining situation of facility by imaging sensing data of facility
Embodiments relate to a method and system for determining a situation of a facility by imaging sensing data of the facility, including: receiving sensing data through a plurality of sensors at a query time; generating a situation image at the query time that shows the situation of the facility based on the sensing data; and determining whether an abnormal situation occurred at the query time by applying the situation image to a pre-learned situation determination model.
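The "situation image" step, in which per-sensor readings at the query time are rendered into an image, might look like the sketch below. This is a hypothetical encoding (one grayscale tile per sensor, intensity proportional to the normalized reading); the patent does not specify the imaging scheme.

```python
import numpy as np

def situation_image(readings, lo, hi, size=8):
    """Render one scalar reading per sensor into a grayscale tile strip:
    sensor i becomes a size x size block whose intensity is the reading
    normalized into [0, 1] over the expected range (lo, hi)."""
    n = len(readings)
    img = np.zeros((size, size * n))
    for i, r in enumerate(readings):
        level = (r - lo) / (hi - lo)
        img[:, i * size:(i + 1) * size] = np.clip(level, 0.0, 1.0)
    return img
```

The resulting array would then be passed to the pre-learned situation determination model to flag abnormal situations.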
System and method for efficiently managing large datasets for training an AI model
Embodiments described herein provide a system for facilitating efficient dataset management. During operation, the system obtains a first dataset comprising a plurality of elements. The system then determines a set of categories for a respective element of the plurality of elements by applying a plurality of AI models to the first dataset. A respective category can correspond to an AI model. Subsequently, the system selects a set of sample elements associated with a respective category of a respective AI model and determines a second dataset based on the selected sample elements.
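The dataset-reduction procedure described above (categorize every element with each AI model, then keep a sample per category) can be sketched as follows. The models are represented here as hypothetical callables mapping an element to a category; the real system would use trained AI models.

```python
import random

def build_second_dataset(elements, models, samples_per_category, seed=0):
    """Determine a set of categories for each element by applying every
    model, bucket elements by (model, category), then select a few sample
    elements per bucket to form the smaller second dataset."""
    rng = random.Random(seed)
    buckets = {}
    for e in elements:
        for j, model in enumerate(models):
            buckets.setdefault((j, model(e)), []).append(e)
    second = []
    for members in buckets.values():
        k = min(samples_per_category, len(members))
        second.extend(rng.sample(members, k))
    return second
```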
Neural network learning device, method, and program
A large amount of training data is typically required to perform deep network learning, making it difficult to achieve with only a few pieces of data. In order to solve this problem, the neural network device according to the present invention is provided with: a feature extraction unit, which extracts features from training data using a learning neural network; an adversarial feature generation unit, which generates an adversarial feature from the extracted features using the learning neural network; a pattern recognition unit, which calculates a neural network recognition result using the training data and the adversarial feature; and a network learning unit, which performs neural network learning so that the recognition result approaches a desired output.
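One common way to generate an adversarial feature is an FGSM-style step: perturb the feature in the direction that increases the loss. The sketch below illustrates the idea for a logistic classifier with weights `w`; it is a simplified stand-in for the patent's adversarial feature generation unit, which operates inside the learning neural network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_feature(x, y, w, eps=0.1):
    """FGSM-style perturbation in feature space: for logistic loss, the
    gradient of the loss w.r.t. the feature x is (sigmoid(w.x) - y) * w;
    step the feature by eps in the sign of that gradient."""
    grad = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad)
```

Training the recognizer on both the original features and such adversarial features is one way to make learning more robust when data is scarce.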
Differentiating between live and spoof fingers in fingerprint analysis by machine learning
The present disclosure relates to a method performed in a fingerprint analysis system for facilitating differentiating between a live finger and a spoof finger. The method comprises acquiring a plurality of time-sequences of images, each of the time-sequences showing a respective finger as it engages a detection surface of a fingerprint sensor. Each of the time-sequences comprises at least a first image and a last image showing a fingerprint topography of the finger, wherein the respective fingers of some of the time-sequences are known to be live fingers and the respective fingers of some other of the time-sequences are known to be spoof fingers. The method also comprises training a machine learning algorithm on the plurality of time-sequences to produce a model of the machine learning algorithm for differentiating between a live finger and a spoof finger.
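The input encoding implied by this abstract (at least a first and a last image per time-sequence, paired with a known live/spoof label) can be sketched as below. This is a simplified, hypothetical encoding: it stacks only the first and last frames as two channels, so a classifier can see how the fingerprint topography develops as the finger engages the sensor.

```python
import numpy as np

def sequence_to_tensor(sequence):
    """Stack the first and last frames of a fingerprint time-sequence into
    a two-channel array of shape (2, H, W)."""
    first, last = sequence[0], sequence[-1]
    return np.stack([first, last], axis=0)

def build_training_set(sequences, labels):
    """Pair each encoded time-sequence with its known label
    (1 = live finger, 0 = spoof finger)."""
    return [(sequence_to_tensor(s), y) for s, y in zip(sequences, labels)]
```

The resulting pairs would then be fed to the machine learning algorithm to produce the differentiating model.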
Computer-implemented interfaces for identifying and revealing selected objects from video
A computer-implemented visual interface for identifying and revealing objects from video-based media provides visual cues to enable users to interact with video-based media. Objects in videos are inferred and identified based upon automatic interpretations of the video and/or audio that is associated with the video. The automatic interpretations may be performed by a computer-implemented neural network. The computer-implemented visual interface is integrated with the video to enable users to interact with the identified objects. User interactions with the visual interface may be through either touch or non-touch means. Information is delivered to users that is based upon the identified objects, including in augmented or virtual reality-based form, responsive to user interactions with the computer-implemented visual interface.
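A core piece of such an interface is the hit-test that maps a user's touch (or pointer) position to an identified object. The sketch below assumes a hypothetical detection format of `(label, x0, y0, x1, y1)` bounding-box tuples produced by the neural network; the patent does not prescribe this representation.

```python
def object_at(detections, x, y):
    """Return the identified object (if any) whose bounding box contains
    the interaction point (x, y); when boxes overlap, later detections
    win, standing in for a front-most drawing order."""
    hit = None
    for label, x0, y0, x1, y1 in detections:
        if x0 <= x <= x1 and y0 <= y <= y1:
            hit = label
    return hit
```

The returned object would drive the information delivered to the user, e.g. an augmented-reality overlay anchored to that object.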
Predictive use of quantitative imaging
The present disclosure provides systems and methods for predicting a disease state of a subject using ultrasound imaging and ancillary information to the ultrasound imaging. At least two quantitative measurements of a subject, including at least one measurement taken using ultrasound imaging, as part of quantified information can be identified. One of the quantitative measurements can be compared to a first predetermined standard, included as part of ancillary information to the quantified information, in order to identify a first initial value. Further, another of the quantitative measurements can be compared to a second predetermined standard, included as part of the ancillary information, in order to identify a second initial value. Subsequently, the quantified information can be correlated with the ancillary information using the first initial value and the second initial value to determine a final value that is predictive of a disease state of the subject.
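The compare-then-correlate flow above can be sketched as follows. The scoring and combination rules here are hypothetical placeholders: each standard is assumed to be a `(lo, hi)` reference range, each initial value a 0/1 in-range indicator, and the final value a weighted combination, since the abstract does not define these.

```python
def initial_value(measurement, standard):
    """Compare one quantitative measurement to its predetermined standard,
    modeled as a (lo, hi) reference range: 1 if in range, else 0."""
    lo, hi = standard
    return 1 if lo <= measurement <= hi else 0

def final_value(m1, s1, m2, s2, w1=0.5, w2=0.5):
    """Correlate the quantified information with the ancillary information:
    combine the two initial values into a single final value predictive of
    the disease state (higher = more consistent with the standards)."""
    return w1 * initial_value(m1, s1) + w2 * initial_value(m2, s2)
```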