Patent classifications
G06V10/72
Medical information processing apparatus and medical information processing method
According to one embodiment, a medical information processing apparatus has processing circuitry. The processing circuitry acquires medical data on a subject, acquires numerical data obtained by digitizing an acquisition condition of the medical data, and applies a machine learning model to input data including the numerical data and the medical data, thereby generating output data based on the medical data.
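The data flow described in this abstract can be illustrated with a minimal Python sketch. All names here (the condition keys, the placeholder linear "model") are invented for illustration and are not taken from the patent:

```python
import numpy as np

def digitize_condition(condition: dict) -> np.ndarray:
    """Digitize a (hypothetical) acquisition-condition dict into numerical data."""
    return np.array([condition["tube_voltage_kv"], condition["exposure_ms"]],
                    dtype=np.float32)

def apply_model(medical_data: np.ndarray, numerical_data: np.ndarray) -> np.ndarray:
    """Stand-in for the machine learning model: concatenate the medical data
    with the digitized acquisition condition and apply a placeholder linear map
    to produce 'output data'."""
    x = np.concatenate([medical_data.ravel(), numerical_data])
    weights = np.ones_like(x) / x.size  # placeholder parameters, not learned
    return np.array([weights @ x])

medical_data = np.array([0.2, 0.4, 0.6])  # toy stand-in for image/signal data
numerical = digitize_condition({"tube_voltage_kv": 120.0, "exposure_ms": 10.0})
output = apply_model(medical_data, numerical)
print(output.shape)  # (1,)
```

The key point the sketch shows is that the model's input combines both the medical data and the digitized acquisition condition, rather than the medical data alone.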
METHODS AND SYSTEMS FOR USE IN PROCESSING IMAGES RELATED TO CROP PHENOLOGY
Systems and methods are provided for use in processing image data of crops associated with one or more plots. One example computer-implemented method includes accessing a data set including images associated with one or more plots. The method then includes, for each plot, comparing a first index value of a first image of the plot at time n to a second index value of a second image of the plot at time n+1; in response to the second index value being greater than the first index value, flagging the second image; and modifying the data set by removing at least part of the second image based on the flag. The method further includes accessing phenotypic data for the one or more plots at a time consistent with the images and training a model based on data including the modified data set and the accessed phenotypic data.
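The flagging-and-removal step from this abstract can be sketched in Python. The index values, timestamps, and image identifiers below are invented for illustration:

```python
def filter_plot_images(indexed_images):
    """indexed_images: list of (timestamp, index_value, image_id) tuples for
    one plot, sorted by time. Compares each image at time n to the image at
    time n+1 and drops (flags and removes) the later image whenever its
    index value is greater than the earlier one."""
    kept = [indexed_images[0]]
    for prev, curr in zip(indexed_images, indexed_images[1:]):
        if curr[1] > prev[1]:  # second index value greater than first -> flag
            continue           # remove the flagged image from the data set
        kept.append(curr)
    return kept

images = [(0, 0.80, "img0"), (1, 0.85, "img1"), (2, 0.70, "img2")]
print(filter_plot_images(images))  # [(0, 0.8, 'img0'), (2, 0.7, 'img2')]
```

The resulting modified data set, paired with phenotypic data collected at a consistent time, would then feed the training step mentioned at the end of the abstract.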
TRAINING METHOD AND DEVICE FOR IMAGE IDENTIFYING MODEL, AND IMAGE IDENTIFYING METHOD
The present disclosure provides a training method and device for an image identifying model, and an image identifying method. The training method comprises: obtaining image samples of a plurality of categories; inputting the image samples of each category into a feature extraction layer of the image identifying model to extract a feature vector of each image sample; calculating statistical characteristic information of an actual distribution function corresponding to each category according to the feature vectors of the image samples of that category; establishing an augmented distribution function corresponding to each category according to the statistical characteristic information; obtaining augmented sample features of each category based on the augmented distribution function; and inputting the feature vectors of the image samples and the augmented sample features into a classification layer of the image identifying model for supervised learning.
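The per-category augmentation step can be sketched as follows, assuming (as one plausible reading of "statistical characteristic information") a Gaussian fitted to each category's feature vectors; the feature dimensions and category names are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_class_features(features_by_class, n_aug=5):
    """For each category, estimate the mean and covariance of the actual
    feature distribution, establish an augmented (Gaussian) distribution
    from those statistics, and sample augmented features from it."""
    augmented = {}
    for label, feats in features_by_class.items():
        mean = feats.mean(axis=0)                   # statistical characteristics
        cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
        augmented[label] = rng.multivariate_normal(mean, cov, size=n_aug)
    return augmented

# toy 4-dimensional feature vectors for two invented categories
feats = {"cat": rng.normal(0, 1, (20, 4)), "dog": rng.normal(3, 1, (20, 4))}
aug = augment_class_features(feats)
print(aug["cat"].shape)  # (5, 4)
```

Both the real feature vectors and the sampled augmented features would then be passed to the classification layer for supervised learning, as the abstract describes.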
COMMUNICATION SYSTEM AND TERMINAL
An object is to provide a communication system and a terminal capable of predicting future communication quality, so that variations in communication quality caused by variations in the environment can be addressed. A communication system and a terminal according to the present invention learn an input-output relationship from terminal information, such as surrounding environment information of the terminal that can be acquired by a camera, a sensor, or the like, and position information of the terminal, together with the current communication quality, to generate a learning model, and then predict future communication quality using the learning model, the surrounding environment information, and the terminal information.
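The learn-then-predict loop described here can be sketched with a toy nearest-neighbour stand-in for the learning model; the feature layout (position, an environment measure, current quality) is invented for illustration:

```python
import math

def train(samples):
    """samples: list of (feature_vector, future_quality) pairs capturing the
    input-output relationship. This toy 'learning model' just memorizes them."""
    return list(samples)

def predict(model, features):
    """Predict future communication quality from surrounding environment and
    terminal information: return the quality of the nearest training sample."""
    return min(model, key=lambda s: math.dist(s[0], features))[1]

# features: (x_position, y_position, obstacle_density, current_quality_mbps)
training = [((0.0, 0.0, 0.1, 50.0), 48.0),
            ((5.0, 5.0, 0.8, 20.0), 12.0)]
model = train(training)
print(predict(model, (0.5, 0.2, 0.15, 49.0)))  # 48.0
```

A deployed system would replace the memorized table with a trained regression model, but the interface is the same: environment and terminal information in, predicted future quality out.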
FULL-AUTOMATIC CLASSIFICATION METHOD FOR THREE-DIMENSIONAL POINT CLOUD AND DEEP NEURAL NETWORK MODEL
A full-automatic classification method for a three-dimensional point cloud, including: acquiring a three-dimensional point cloud dataset; performing down-sampling on a three-dimensional point cloud represented by the three-dimensional point cloud dataset, selecting some points in the three-dimensional point cloud as sampling points, constructing a point cloud area group based on each sampling point, extracting a global feature of each point cloud area group, and replacing the point cloud area group where the sampling point is located with the sampling point; performing up-sampling on the three-dimensional point cloud, and performing splicing fusion on the global features of the point cloud area groups where each point in the three-dimensional point cloud is located; performing category discrimination on each point in the three-dimensional point cloud; counting the number of points contained in each category; and selecting the category with the largest number of points as the category of the three-dimensional point cloud.
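The final majority-vote step, where per-point category discrimination is aggregated into one category for the whole cloud, can be sketched as follows. The toy per-point classifier and category names are invented stand-ins for the trained network:

```python
from collections import Counter

def classify_point_cloud(points, classify_point):
    """Discriminate a category for each point, count the points contained in
    each category, and select the category with the largest count as the
    category of the whole three-dimensional point cloud."""
    labels = [classify_point(p) for p in points]
    return Counter(labels).most_common(1)[0][0]

def toy_classifier(p):
    """Invented per-point classifier: label by the sign of the x coordinate."""
    return "chair" if p[0] >= 0 else "table"

# deterministic toy cloud: x ranges from -0.5 to 1.48, so most points get "chair"
cloud = [(i / 50 - 0.5, 0.0, 0.0) for i in range(100)]
print(classify_point_cloud(cloud, toy_classifier))  # chair
```

In the full method, `classify_point` would be the deep neural network operating on the fused global features produced by the down-sampling, grouping, and up-sampling stages.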
Machine learning based identification of visually complementary item collections
Aspects of the present disclosure relate to machine learning techniques for identifying collections of items, such as furniture items, that are visually complementary. These techniques can rely on computer vision and item imagery. For example, a first portion of a machine learning system can be trained to extract aesthetic item qualities or attributes from pixel values of images of the items. A second portion of the machine learning system can learn correlations between these extracted aesthetic qualities and the level of visual coordination between items. Thus, the disclosed techniques use computer vision machine learning to programmatically determine whether items visually coordinate with one another based on pixel values of images of those items.
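The two-stage structure described here (extract aesthetic attributes from pixel values, then score coordination between items) can be sketched with simple stand-ins; the mean-colour features and cosine-similarity score are invented substitutes for the trained portions of the system:

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_aesthetic_features(image: np.ndarray) -> np.ndarray:
    """First-stage stand-in: reduce an image's pixel values to 'aesthetic'
    attributes. Here: mean colour per channel (a real system would use a
    trained computer-vision model)."""
    return image.reshape(-1, 3).mean(axis=0)

def coordination_score(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Second-stage stand-in: score the level of visual coordination between
    two items from their extracted features, here via cosine similarity."""
    return float(feat_a @ feat_b /
                 (np.linalg.norm(feat_a) * np.linalg.norm(feat_b)))

sofa = rng.uniform(0.4, 0.6, (8, 8, 3))   # toy 8x8 RGB 'item images'
chair = rng.uniform(0.4, 0.6, (8, 8, 3))
score = coordination_score(extract_aesthetic_features(sofa),
                           extract_aesthetic_features(chair))
print(0.0 < score <= 1.0)  # True
```

Collections could then be assembled by keeping only item pairs whose pairwise scores exceed a threshold, mirroring the programmatic determination the abstract describes.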
Apparatus and method of using AI metadata related to image quality
An image providing apparatus is configured to generate, by using a first artificial intelligence (AI) network, AI metadata including class information and at least one class map, in which the class information includes at least one class corresponding to a type of an object, among a plurality of predefined objects, included in a first image, and the at least one class map indicates a region corresponding to each class in the first image; to generate an encoded image by encoding the first image; and to output the encoded image and the AI metadata through an output interface.
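The relationship between the class information and the per-class class maps can be sketched in Python, assuming (as an illustration, not the patent's implementation) that the first AI network outputs a per-pixel segmentation; the class names are invented:

```python
import numpy as np

def build_ai_metadata(segmentation: np.ndarray, class_names: dict):
    """From a per-pixel segmentation (hypothetical output of the first AI
    network), derive the class information (which predefined classes appear
    in the image) and one binary class map per class marking the region
    corresponding to that class."""
    present = sorted(int(c) for c in np.unique(segmentation))
    class_info = [class_names[c] for c in present]
    class_maps = {class_names[c]: (segmentation == c).astype(np.uint8)
                  for c in present}
    return class_info, class_maps

names = {0: "background", 1: "face", 2: "text"}
seg = np.array([[0, 0, 1],
                [0, 2, 1]])          # toy 2x3 segmentation of the first image
info, maps = build_ai_metadata(seg, names)
print(info)                   # ['background', 'face', 'text']
print(maps["face"].tolist())  # [[0, 0, 1], [0, 0, 1]]
```

The AI metadata produced this way would accompany the encoded first image, letting a decoder apply region-specific quality processing per class.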