G06V10/34

Methods And Apparatus For Machine Learning To Analyze Musculo-Skeletal Rehabilitation From Images

A method can include receiving (1) images of at least one subject and (2) at least one total mass value for the at least one subject. The method can further include executing a first machine learning model to identify joints of the at least one subject. The method can further include executing a second machine learning model to determine limbs of the at least one subject based on the joints and the images. The method can further include generating three-dimensional (3D) representations of a skeleton based on the joints and the limbs. The method can further include determining a torque value for each limb based on at least one of (1) a mass value and a linear acceleration value or (2) a moment of inertia value and an angular acceleration value. The method can further include generating a risk assessment report based on at least one torque value being above a predetermined threshold.
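The per-limb torque computation and threshold check described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the two torque formulas (τ = r·m·a and τ = I·α) and the `risk_report` helper are assumptions standing in for the abstract's two alternatives and its risk-assessment step.

```python
def torque_from_force(mass, linear_accel, lever_arm):
    """Torque from a mass and linear acceleration value: tau = r * F,
    with F = m * a (point-mass approximation, lever arm assumed)."""
    return lever_arm * mass * linear_accel

def torque_from_rotation(moment_of_inertia, angular_accel):
    """Torque from a moment of inertia and angular acceleration: tau = I * alpha."""
    return moment_of_inertia * angular_accel

def risk_report(limb_torques, threshold):
    """Flag each limb whose torque magnitude exceeds the predetermined
    threshold, mirroring the risk-assessment step of the method."""
    return {limb: abs(t) > threshold for limb, t in limb_torques.items()}
```

For example, `risk_report({"forearm": 3.0, "shin": 1.0}, 2.5)` would flag only the forearm.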

METHOD AND APPARATUS FOR DETERMINING THE LEVEL OF DEVELOPED FINGERPRINT

A developed fingerprint level determination apparatus includes: a fingerprint image acquisition unit configured to obtain at least two developed fingerprint images according to a fingerprint development technique; and a ridge level determination unit configured to check the ridge development level information of each obtained fingerprint image and compare the respective ridge development level information with each other.
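The ridge-level comparison can be sketched as below. The scoring function is an assumption: the disclosure does not specify how development level is measured, so a crude contrast measure (mean absolute horizontal gradient) stands in for the ridge development level information.

```python
def ridge_level(image):
    """Crude ridge-development score for a 2-D list of grayscale values:
    mean absolute difference between horizontally adjacent pixels.
    Higher local contrast suggests better-developed ridges."""
    total, count = 0, 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count if count else 0.0

def best_developed(images):
    """Score each fingerprint image and return the index of the image
    with the highest ridge-development score, plus all scores."""
    scores = [ridge_level(img) for img in images]
    return max(range(len(scores)), key=scores.__getitem__), scores
```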

QUANTITATIVE, BIOMECHANICAL-BASED ANALYSIS WITH OUTCOMES AND CONTEXT

Systems and methods are disclosed for generating a 3D avatar using a biomechanical analysis of observed actions with a focus on representing actions through computer-generated 3D avatars. Physical quantities of biomechanical actions can be measured from the observations, and the system can analyze these values, compare them to target or optimal values, and use the observations and known biomechanical capabilities to generate 3D avatars.
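The comparison of measured physical quantities against target or optimal values can be sketched as follows. The quantity names, tolerance, and report shape are illustrative assumptions, not part of the disclosure.

```python
def compare_to_targets(measured, targets, tolerance=0.1):
    """For each measured biomechanical quantity, report its relative
    deviation from the target value and whether it is within tolerance."""
    report = {}
    for name, value in measured.items():
        target = targets[name]
        deviation = (value - target) / target
        report[name] = {
            "deviation": deviation,
            "within": abs(deviation) <= tolerance,
        }
    return report
```

A downstream avatar generator could then render limbs color-coded by the `within` flag, for instance.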

DISEASE DETECTION WITH MASKED ATTENTION

A candidate generator generates a set of candidate three-dimensional image patches from an input volume. A candidate classifier classifies the set of candidate three-dimensional image patches as containing or not containing disease. Classifying the set of candidate three-dimensional image patches comprises generating an attention mask for each given candidate three-dimensional image patch within the set of candidate three-dimensional image patches to form a set of attention masks, applying the set of attention masks to the set of candidate three-dimensional image patches to form a set of masked image patches, and classifying the set of masked image patches as containing or not containing the disease. The candidate classifier applies soft attention and hard attention to the three-dimensional image patches such that distinctive image regions are highlighted proportionally to their contribution to classification while completely removing image regions that may cause confusion.
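The combination of soft and hard attention can be sketched as below: soft attention scales each value in proportion to its attention weight, while hard attention removes regions entirely. The sketch uses 2-D patches and a fixed cutoff for brevity; the disclosure operates on 3-D patches, and the threshold value is an assumption.

```python
def apply_attention(patch, attention, hard_threshold=0.2):
    """Soft attention: multiply each value by its attention weight, so
    distinctive regions are highlighted proportionally to their
    contribution. Hard attention: zero out values whose weight falls
    below the threshold, completely removing potentially confusing
    regions."""
    return [
        [v * a if a >= hard_threshold else 0.0 for v, a in zip(vals, atts)]
        for vals, atts in zip(patch, attention)
    ]
```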

ELECTRONIC DEVICE PERFORMING IMAGE INPAINTING AND METHOD OF OPERATING THE SAME
20220375050 · 2022-11-24

An image inpainting method is provided. The image inpainting method includes determining a missing region in an original image, generating an input image to be reconstructed from the original image, based on the missing region, obtaining a mask image indicating the missing region, determining whether to extract a structural feature of the missing region, based on an attribute of the missing region, obtaining structure vectors each consisting of one or more lines and one or more junctions by applying the input image and the mask image to a first model for extracting a structural feature of the input image, and obtaining an inpainted image in which the missing region in the input image is reconstructed by applying the input image, the mask image, and a structure vector image converted from the structure vectors to a second model for reconstructing the input image.
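The final compositing step, in which only the missing region is reconstructed, can be sketched as follows. The `predict_missing` callable stands in for the second (reconstruction) model, which is an assumption for illustration; the structure-vector branch is omitted.

```python
def inpaint(input_image, mask, predict_missing):
    """Keep pixels outside the missing region; fill pixels where the
    mask is True with the reconstruction model's prediction."""
    predicted = predict_missing(input_image, mask)
    return [
        [p if m else x for x, m, p in zip(x_row, m_row, p_row)]
        for x_row, m_row, p_row in zip(input_image, mask, predicted)
    ]
```

For example, with a one-row image `[[1, 0]]` and mask `[[False, True]]`, only the second pixel is replaced by the model's output.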

Methods and systems for detecting a centerline of a vessel

This application discloses a method and system for detecting a centerline of a vessel. The method may include obtaining image data, wherein the image data may include vessel data; selecting two endpoints of the vessel based on the vessel data, the two endpoints including a first endpoint and a second endpoint; transforming the image data to generate a transformed image based on at least one image transformation function; and determining a path of the centerline of the vessel connecting the first endpoint and the second endpoint, based on the transformed image, to obtain the centerline of the vessel.
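A common way to realize the path-finding step is a minimum-cost path search over the transformed image, where the transformation assigns low cost near the vessel center. The Dijkstra-based sketch below is an assumption about how that step could be implemented, not the disclosed algorithm.

```python
import heapq

def centerline_path(cost, start, end):
    """Dijkstra shortest path over a 2-D cost grid (the 'transformed
    image') between the two vessel endpoints. Low-cost cells pull the
    path toward the vessel center, approximating the centerline."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Reconstruct the path from end back to start.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```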

Self-supervised hierarchical motion learning for video action recognition

There are numerous features in video that can be detected using computer-based systems, such as objects and/or motion. The detection of these features, and in particular the detection of motion, has many useful applications, such as action recognition, activity detection, object tracking, etc. The present disclosure provides a neural network that learns motion from unlabeled video frames. In particular, the neural network uses the unlabeled video frames to perform self-supervised hierarchical motion learning. The present disclosure also describes how the learned motion can be used in video action recognition.
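A self-supervised motion signal can be derived from unlabeled frames alone, for example by frame differencing, and made hierarchical by pooling to coarser resolutions. The sketch below illustrates that idea; the differencing and 2x2 average pooling are assumptions standing in for the learned motion representation, not the disclosed network.

```python
def motion_map(frame_a, frame_b):
    """Per-pixel absolute difference between two consecutive unlabeled
    frames: a crude, label-free motion cue."""
    return [
        [abs(a - b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

def hierarchical_motion(frames, levels=2):
    """Build a hierarchy of motion maps: the finest map comes from
    frame differencing, and each coarser level is produced by 2x2
    average pooling of the level below."""
    maps = [motion_map(frames[0], frames[1])]
    for _ in range(levels - 1):
        m = maps[-1]
        pooled = [
            [(m[r][c] + m[r][c + 1] + m[r + 1][c] + m[r + 1][c + 1]) / 4.0
             for c in range(0, len(m[0]) - 1, 2)]
            for r in range(0, len(m) - 1, 2)
        ]
        maps.append(pooled)
    return maps
```

A recognition network could consume these maps as auxiliary prediction targets during pre-training, then be fine-tuned on labeled action classes.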
