G06V10/806

Systems and methods for quantitative phenotyping of fibrosis
11798163 · 2023-10-24

Systems and methods are provided for computer-aided phenotyping of fibrosis-related conditions. A digital image indicates the presence of collagens in a biological tissue sample. The image is processed to quantify a plurality of parameters, each describing a feature of the collagens that is expected to differ across phenotypes of fibrosis. The features include tissue-level features that describe macroscopic characteristics of the collagens, morphometric-level features that describe morphometric characteristics of the collagens, and texture-level features that describe the organization of the collagens. At least some of the plurality of parameters are statistics of histograms corresponding to distributions of the associated parameters across at least part of the digital image. At least some of the plurality of parameters are combined to obtain one or more composite scores that quantify a phenotype of fibrosis for the biological tissue sample.
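The reduction from parameter maps to histogram statistics and then to a composite score can be sketched as follows. This is a minimal numpy illustration; the parameter names, the chosen statistics, and the weighted-sum combination are assumptions, not the patent's actual formulas.

```python
import numpy as np

def histogram_stats(param_map, bins=32):
    """Summarize one collagen parameter's distribution across the image."""
    hist, edges = np.histogram(param_map, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return {"mean": param_map.mean(),
            "std": param_map.std(),
            "mode": centers[hist.argmax()]}   # most frequent value

def composite_score(param_maps, weights):
    """Weighted combination of per-parameter histogram means (illustrative)."""
    return sum(weights[name] * histogram_stats(m)["mean"]
               for name, m in param_maps.items())

rng = np.random.default_rng(0)
# Hypothetical per-pixel parameter maps for one tissue image
maps = {"fiber_density": rng.random((64, 64)),
        "fiber_length": rng.random((64, 64))}
score = composite_score(maps, {"fiber_density": 0.6, "fiber_length": 0.4})
```

Any scalar statistic of the histogram (mean, spread, mode) could feed the combination step in the same way.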

Model learning device, model learning method, and program

Simultaneous learning of a plurality of different tasks and domains is enabled at low cost and with high precision. On the basis of learning data, a learning unit 160 uses: a target encoder that takes data of a target domain as input and outputs a target feature expression; a source encoder that takes data of a source domain as input and outputs a source feature expression; a common encoder that takes data of either the target domain or the source domain as input and outputs a common feature expression; a target decoder that takes the outputs of the target encoder and the common encoder as input and outputs the result of executing a task on data of the target domain; and a source decoder that takes the outputs of the source encoder and the common encoder as input and outputs the result of executing a task on data of the source domain. Learning proceeds so that the output of the target decoder matches its training data and the output of the source decoder matches its training data.
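The wiring of the five components can be sketched structurally. This numpy sketch uses plain linear maps as stand-ins for the encoders and decoders and concatenation to combine domain-specific and common features; the dimensions and the concatenation scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_feat, d_out = 8, 4, 3

W_tgt = rng.standard_normal((d_in, d_feat))        # target encoder
W_src = rng.standard_normal((d_in, d_feat))        # source encoder
W_com = rng.standard_normal((d_in, d_feat))        # common encoder
W_dec_t = rng.standard_normal((2 * d_feat, d_out)) # target decoder
W_dec_s = rng.standard_normal((2 * d_feat, d_out)) # source decoder

def target_forward(x):
    # The target decoder sees target features concatenated with common features.
    z = np.concatenate([x @ W_tgt, x @ W_com], axis=-1)
    return z @ W_dec_t

def source_forward(x):
    # The source decoder sees source features concatenated with common features.
    z = np.concatenate([x @ W_src, x @ W_com], axis=-1)
    return z @ W_dec_s

x = rng.standard_normal((5, d_in))   # a batch of 5 examples
y_t, y_s = target_forward(x), source_forward(x)
```

Training would then minimize the mismatch between `y_t`/`y_s` and their respective training data, so that the common encoder captures structure shared across domains.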

Automated extraction of echocardiograph measurements from medical images

Mechanisms are provided to implement an automated echocardiograph measurement extraction system. The automated echocardiograph measurement extraction system receives medical imaging data comprising one or more medical images and inputs the one or more medical images into a deep learning network. The deep learning network automatically processes the one or more medical images to generate an extracted echocardiograph measurement vector output comprising one or more values for echocardiograph measurements extracted from the one or more medical images. The deep learning network outputs the extracted echocardiograph measurement vector output to a medical image viewer.
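The image-to-measurement-vector mapping can be sketched with a trivial stand-in network. Here global average pooling plus a linear regression head (numpy only) replaces the deep learning network, and the measurement names are illustrative assumptions.

```python
import numpy as np

# Hypothetical echocardiograph measurements the vector output might carry
MEASUREMENTS = ["lv_end_diastolic_diameter", "ejection_fraction", "la_area"]

rng = np.random.default_rng(2)
W = rng.standard_normal((1, len(MEASUREMENTS)))  # regression head weights
b = np.zeros(len(MEASUREMENTS))

def extract_measurements(image):
    """image: (H, W) grayscale frame -> vector of measurement values."""
    pooled = np.array([image.mean()])            # stand-in for deep features
    return pooled @ W + b                        # linear head -> measurement vector

vec = extract_measurements(rng.random((128, 128)))
```

A real system would replace the pooling step with a trained convolutional backbone; the fixed-length vector output is what the medical image viewer consumes.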

METHODS AND SYSTEMS FOR EMOTION-CONTROLLABLE GENERALIZED TALKING FACE GENERATION

This disclosure relates generally to methods and systems for emotion-controllable generalized talking face generation from an arbitrary face image. Most conventional techniques for realistic talking face generation cannot efficiently control emotion on the face and have limited ability to generalize to an arbitrary unknown target face. The present disclosure proposes a graph convolutional network that uses a speech content feature along with an independent emotion input to generate emotion- and speech-induced motion on a facial geometry-aware landmark representation. The facial geometry-aware landmark representation is then used by an optical flow-guided texture generation network to produce the texture. A two-branch optical flow-guided texture generation network with motion and texture branches is designed to treat the motion and texture content independently. The optical flow-guided texture generation network then renders emotional talking face animation from a single image of any arbitrary target face.
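The final two-branch combination step can be sketched as a per-pixel blend: the motion branch contributes a flow-warped version of the source image, the texture branch contributes newly generated texture, and a soft mask mixes them. In this numpy sketch the optical-flow warp is replaced by a placeholder shift, and all names are assumptions.

```python
import numpy as np

def blend_branches(warped, generated, mask):
    """Per-pixel convex combination of the motion and texture branches."""
    return mask * warped + (1.0 - mask) * generated

rng = np.random.default_rng(3)
src = rng.random((32, 32, 3))                 # source face image, values in [0, 1]
warped = np.roll(src, shift=2, axis=0)        # stand-in for optical-flow warping
generated = rng.random((32, 32, 3))           # stand-in texture-branch output
mask = rng.random((32, 32, 1))                # soft blending mask, one per pixel
out = blend_branches(warped, generated, mask)
```

Keeping the two branches separate lets the motion branch reuse pixels that merely move, while the texture branch fills regions (e.g. the mouth interior) that the warp cannot supply.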

Point cloud segmentation method, computer-readable storage medium, and computer device

This application relates to a point cloud segmentation method, a computer-readable storage medium, and a computer device. The method includes encoding a to-be-processed point cloud to obtain a shared feature, the shared feature referring to a feature shared at a semantic level and at an instance level; decoding the shared feature to obtain a semantic feature and an instance feature respectively; adapting the semantic feature to an instance feature space and fusing the semantic feature with the instance feature, to obtain a semantic-fused instance feature of the point cloud, the semantic-fused instance feature representing an instance feature fused with the semantic feature; dividing the semantic-fused instance feature of the point cloud, to obtain a semantic-fused instance feature of each point in the point cloud; and determining an instance category to which each point belongs according to the semantic-fused instance feature of each point.
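The decode-adapt-fuse pipeline can be sketched structurally. This numpy sketch uses linear maps for the decoders, a linear adaptation into the instance feature space, and fusion by addition; the dimensions and the fusion-by-addition choice are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_points, d_shared, d_sem, d_inst, n_classes = 100, 16, 8, 8, 5

shared = rng.standard_normal((n_points, d_shared))   # shared per-point feature
W_sem = rng.standard_normal((d_shared, d_sem))       # semantic decoder
W_inst = rng.standard_normal((d_shared, d_inst))     # instance decoder
W_adapt = rng.standard_normal((d_sem, d_inst))       # semantic -> instance space
W_cls = rng.standard_normal((d_inst, n_classes))     # category head

semantic = shared @ W_sem
instance = shared @ W_inst
# Adapt the semantic feature to the instance space, then fuse by addition.
fused = instance + semantic @ W_adapt                # semantic-fused instance feature
labels = (fused @ W_cls).argmax(axis=1)              # per-point instance category
```

Because `fused` is already per-point, "dividing" it into per-point features is just row indexing, and each row independently yields a category.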

Method and apparatus for object detection in image, vehicle, and robot

This application discloses a method and apparatus for object detection in an image, a vehicle, and a robot. The method for object detection in an image is performed by a computing device. The method includes determining an image feature of an image; determining a correlation of pixels in the image based on the image feature; updating the image feature of the image based on the correlation to obtain an updated image feature; and determining an object detection result in the image according to the updated image feature.
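The correlation-based feature update can be sketched as a self-attention-style aggregation: pairwise dot products between pixel features, normalized row-wise, re-weight and sum the features. The softmax normalization here is an assumption about how the correlation is applied.

```python
import numpy as np

def update_feature(feat):
    """feat: (N, C) per-pixel features -> correlation-updated features."""
    corr = feat @ feat.T                                # (N, N) pixel correlation
    corr = np.exp(corr - corr.max(axis=1, keepdims=True))
    corr /= corr.sum(axis=1, keepdims=True)             # row-wise softmax
    return corr @ feat                                  # aggregate correlated pixels

rng = np.random.default_rng(5)
feat = rng.standard_normal((64, 8))   # e.g. an 8x8 feature map, 8 channels
updated = update_feature(feat)        # fed to the detection head downstream
```

Each updated pixel feature is thus a mixture of the features of the pixels it correlates with, which propagates object evidence across the image before detection.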

Method for retrieving footprint images

A method for retrieving footprint images is provided, comprising: pre-training models; cleaning the footprint data, performing expansion pre-processing using the pre-trained models, and dividing the footprint data into multiple data sets; adjusting the fully connected layers and classification layers of the models; training the models again on the data sets, initialized with the parameters of the pre-trained models; saving the twice-trained models and removing the classification layers; extracting features from the images in an image library and a retrieval library to form a feature index library; concatenating the features extracted by the three models to form fused features and establishing a fused-feature vector index library; extracting the features of the images in the image library to be retrieved in advance and establishing a feature vector library; and, when a single footprint image is input, calculating distances between the retrieval library and the image library, thereby outputting the image with the highest similarity.
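The fusion and retrieval steps can be sketched as follows: features from three models are concatenated into one fused vector, and the library entry at the smallest distance to the query is returned. The three-way split, the L2 normalization, and the Euclidean metric are illustrative assumptions.

```python
import numpy as np

def fuse(features):
    """Concatenate per-model feature vectors and L2-normalize the result."""
    v = np.concatenate(features)
    return v / np.linalg.norm(v)

def retrieve(query_feats, library):
    """library: (M, D) fused-feature index; returns index of the nearest image."""
    q = fuse(query_feats)
    dists = np.linalg.norm(library - q, axis=1)
    return int(dists.argmin())

rng = np.random.default_rng(6)
# Fused-feature vector index library built offline from three models' features
library = np.stack([fuse([rng.random(4), rng.random(4), rng.random(4)])
                    for _ in range(10)])
best = retrieve([rng.random(4)] * 3, library)   # index of most similar footprint
```

Building the index library offline means a single input footprint only costs three feature extractions plus one nearest-neighbor search at query time.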

SYSTEMS AND METHODS FOR PREDICTING CROP SIZE AND YIELD

Methods for predicting a yield of fruit growing in an agricultural plot are provided. At a first time, a first plurality of images of a canopy of the agricultural plot is obtained from an aerial view of the canopy of the agricultural plot. From the first plurality of images, a first number of detectable fruit is estimated. At a second time, a second plurality of images of the canopy of the agricultural plot is obtained from the aerial view of the canopy of the agricultural plot. From the second plurality of images, a second number of detectable fruit is estimated. Using at least the first number of detectable fruit, the second number of detectable fruit, and agricultural plot information, the yield of fruit from the agricultural plot is predicted.
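One way the two counts and the plot information could combine is sketched below: average the counts, correct for fruit hidden by the canopy, and scale from the imaged trees to the whole plot. The specific formula and the visibility factor are illustrative assumptions, not the patent's model.

```python
def predict_yield(count_t1, count_t2, n_trees_imaged, n_trees_total,
                  visibility=0.6):
    """Average the two aerial counts, correct for occluded fruit,
    and extrapolate from the imaged trees to the whole plot."""
    visible_per_tree = 0.5 * (count_t1 + count_t2) / n_trees_imaged
    fruit_per_tree = visible_per_tree / visibility   # occlusion correction
    return fruit_per_tree * n_trees_total

# Hypothetical numbers: 1200 then 1500 fruit detected over 30 imaged trees,
# in a plot of 300 trees, assuming 60% of fruit is visible from the air.
y = predict_yield(count_t1=1200, count_t2=1500, n_trees_imaged=30,
                  n_trees_total=300, visibility=0.6)
```

Taking counts at two times also allows growth or drop between the dates to inform the prediction; the simple average here is the crudest such use.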

Method and System for Identifying Objects

The present disclosure provides methods and/or systems for identifying an object. An example method includes: generating a plurality of synthesized images according to a three-dimensional digital model, the plurality of synthesized images having different view angles; respectively extracting eigenvectors of the plurality of synthesized images; generating a first fused vector by fusing the eigenvectors of the plurality of synthesized images; inputting the first fused vector into a classifier to train the classifier; acquiring a plurality of pictures of the object, the plurality of pictures respectively having the same view angles as at least a portion of the plurality of synthesized images; respectively extracting eigenvectors of the plurality of pictures; generating a second fused vector by fusing the eigenvectors of the plurality of pictures; and inputting the second fused vector into the trained classifier to obtain a classification result of the object.
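The symmetry of the method — the same per-view extraction and fusion applied to synthesized training images and to real pictures at matching angles — can be sketched as follows. Fusion by concatenation and a nearest-centroid stand-in for the classifier are assumptions.

```python
import numpy as np

def fuse_views(view_vectors):
    """Fuse per-view feature vectors into one vector by concatenation."""
    return np.concatenate(view_vectors)

def nearest_centroid(fused, centroids):
    """centroids: {label: fused vector}; returns the closest label."""
    return min(centroids,
               key=lambda c: np.linalg.norm(centroids[c] - fused))

rng = np.random.default_rng(7)
# "Training": fused vectors from synthesized images of two 3D models
centroids = {"cup": fuse_views([rng.random(4) for _ in range(3)]),
             "bowl": fuse_views([rng.random(4) for _ in range(3)])}
# "Query": real pictures at the same 3 view angles, fused the same way
query = fuse_views([centroids["cup"][i*4:(i+1)*4] + 0.01 for i in range(3)])
label = nearest_centroid(query, centroids)
```

Because both fused vectors are built from views at matching angles, the classifier trained on synthetic renders can be applied directly to the real pictures.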

Cognitive function estimation device, learning device, and cognitive function estimation method
11810373 · 2023-11-07

Provided are: a vehicle outside information acquiring unit to acquire vehicle outside information; a face information acquiring unit to acquire face information; a biological information acquiring unit to acquire biological information; a vehicle information acquiring unit to acquire vehicle information; a vehicle outside information feature amount extracting unit to extract a vehicle outside information feature amount on the basis of the vehicle outside information; a face information feature amount extracting unit to extract a face information feature amount in accordance with the vehicle outside information feature amount; a biological information feature amount extracting unit to extract a biological information feature amount in accordance with the vehicle outside information feature amount; a vehicle information feature amount extracting unit to extract a vehicle information feature amount in accordance with the vehicle outside information feature amount; and a cognitive function estimation unit to estimate whether a cognitive function of a driver is low on the basis of a machine learning model, the vehicle outside information feature amount, and at least one of the face information feature amount, the biological information feature amount, or the vehicle information feature amount.
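The final estimation step can be sketched with a stand-in machine learning model: the vehicle-outside feature amount and the other feature amounts are concatenated and fed to a logistic model that flags possibly low cognitive function. The weights, feature sizes, and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def estimate_low_cognition(outside_f, face_f, bio_f, vehicle_f, w, b):
    """Concatenate the feature amounts and score them with a logistic model."""
    x = np.concatenate([outside_f, face_f, bio_f, vehicle_f])
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # probability cognition is low
    return p, p > 0.5                        # score and binary estimate

rng = rngs = np.random.default_rng(8)
w = rng.standard_normal(8)                   # stand-in trained model weights
p, flag = estimate_low_cognition(rng.random(2), rng.random(2),
                                 rng.random(2), rng.random(2), w, b=0.0)
```

Conditioning the face, biological, and vehicle feature extraction on the vehicle-outside feature amount (as the abstract describes) lets the model judge driver responses relative to the current driving scene rather than in isolation.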