Patent classifications
G06V2201/033
LUMBAR SPINE ANATOMICAL ANNOTATION BASED ON MAGNETIC RESONANCE IMAGES USING ARTIFICIAL INTELLIGENCE
A system for automated comprehensive assessment of clinical lumbar MRIs includes an MRI standardization component that reads raw lumbar MRI files and uses an artificial intelligence (AI) model to convert the raw MRI data into a standardized format. A core assessment component automatically generates MRI assessment results, including multi-tissue anatomical annotation, multi-pathology detection, and multi-pathology progression prediction, based on the structured MRI data package. The core assessment component contains a semantic segmentation module that uses a deep learning AI model to generate the multi-tissue anatomical annotation, a pathology detection module that generates the multi-pathology detection, and a pathology progression prediction module that generates the multi-pathology progression prediction. A model optimization component archives clinical MRI data and MRI assessment results, together with comments provided by a specialist, and periodically optimizes the deep learning AI models of the core assessment component.
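The three-component flow described above — standardization, core assessment via three modules, and an optimization component that archives specialist-reviewed cases — can be sketched as follows. All class names, method names, and the stubbed module outputs are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-component pipeline; the module
# outputs below are hard-coded stand-ins for the deep learning models.

@dataclass
class StandardizedMRI:
    series_id: str
    voxels: list  # placeholder for the standardized image volume

class CoreAssessment:
    """Runs the three assessment modules on a standardized MRI package."""
    def assess(self, mri: StandardizedMRI) -> dict:
        return {
            "anatomical_annotation": self.segment(mri),
            "pathology_detection": self.detect(mri),
            "progression_prediction": self.predict(mri),
        }
    def segment(self, mri):  # semantic segmentation module (stub)
        return {"L4-L5 disc": "mask"}
    def detect(self, mri):   # pathology detection module (stub)
        return {"disc herniation": 0.12}
    def predict(self, mri):  # progression prediction module (stub)
        return {"disc herniation": "stable"}

class ModelOptimizer:
    """Archives reviewed cases for periodic retraining of the AI models."""
    def __init__(self):
        self.archive = []
    def record(self, mri, results, specialist_comment):
        self.archive.append((mri.series_id, results, specialist_comment))

mri = StandardizedMRI(series_id="case-001", voxels=[])
results = CoreAssessment().assess(mri)
opt = ModelOptimizer()
opt.record(mri, results, "agree with annotation")
```

In this sketch the optimizer only accumulates (case, result, comment) triples; the patent's periodic retraining step would consume that archive.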
Hand skeleton learning, lifting, and denoising from 2D images
A processor identifies keypoints on a hand in a two-dimensional image that is captured by a camera. A three-dimensional pose of the hand is determined using locations of the keypoints to access lookup tables (LUTs) that represent potential poses of the hand as a function of the locations of the keypoints. In some embodiments, the keypoints include locations of tips of fingers and a thumb, joints that connect phalanges of the fingers and the thumb, palm knuckles that represent a point of attachment of the fingers and the thumb to a palm, and a wrist location that indicates a point of attachment of the hand to a forearm. Some embodiments of the LUTs represent 2D coordinates of the fingers and the thumb in corresponding finger pose planes as a function of the locations of the tips of the fingers or thumb relative to the corresponding palm knuckles.
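The LUT access described above can be illustrated with a toy lookup: the fingertip position relative to its palm knuckle is quantized into bins that index a table of candidate joint angles. The bin count, normalization span, and table contents here are invented for the example; the patent's LUTs store poses learned from hand data.

```python
import numpy as np

# Illustrative LUT sketch: quantize the tip-relative-to-knuckle 2D offset
# and read per-joint angles from a table. Table values are random
# placeholders, not real hand-pose data.

BINS = 8
rng = np.random.default_rng(0)
# lut[i, j] -> (proximal, middle, distal) joint angles in radians (fabricated)
lut = rng.uniform(0.0, np.pi / 2, size=(BINS, BINS, 3))

def lookup_finger_pose(tip_xy, knuckle_xy, lut, span=0.2):
    """Normalize the tip offset to [-1, 1], quantize it, and index the LUT."""
    rel = (np.asarray(tip_xy) - np.asarray(knuckle_xy)) / span
    idx = np.clip(((rel + 1.0) / 2.0 * BINS).astype(int), 0, BINS - 1)
    return lut[idx[0], idx[1]]

angles = lookup_finger_pose([0.15, 0.30], [0.10, 0.25], lut)
```

The clipping step keeps out-of-range detections (e.g. a mislocalized fingertip) inside the table bounds instead of raising an index error.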
DETECTING AND SEGMENTING REGIONS OF INTEREST IN BIOMEDICAL IMAGES USING NEURAL NETWORKS
Systems and methods for detecting regions of interest (ROIs) in biomedical images are described herein. A computing system may identify a biomedical image having an ROI. The computing system may apply an object detection model to the biomedical image. The object detection model may generate a feature map using the biomedical image. The object detection model may generate an anchor box corresponding to a portion of the pixels in the feature map. The computing system may apply an instance segmentation model to identify a segment of the biomedical image within the anchor box corresponding to the ROI. The computing system may provide an output based on the segment to identify the ROI in the biomedical image.
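The key structural idea above — a detection stage proposes an anchor box, and a segmentation stage labels ROI pixels only inside that box — can be sketched with stand-ins for both neural networks (a fixed box and simple thresholding), which are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of the two-stage flow: segmentation is constrained to
# the anchor box produced by the (stubbed) detection model.

def segment_within_box(image, box, threshold=0.5):
    """Return a full-size binary mask that is nonzero only inside `box`."""
    y0, x0, y1, x1 = box
    mask = np.zeros_like(image, dtype=bool)
    mask[y0:y1, x0:x1] = image[y0:y1, x0:x1] > threshold
    return mask

image = np.zeros((8, 8))
image[2:5, 2:5] = 0.9   # bright region standing in for an ROI
image[6, 6] = 0.9       # bright pixel outside the anchor box
box = (1, 1, 6, 6)      # anchor box from the (stubbed) detector
mask = segment_within_box(image, box)
```

Restricting segmentation to the box is what suppresses the spurious bright pixel at (6, 6): only the 3×3 ROI inside the anchor box survives in the mask.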
TOOTH SEGMENTATION APPARATUS AND METHOD FOR TOOTH IMAGE
The tooth segmentation apparatus detects a boundary box of each tooth from dental scan data in the form of a mesh, sets a boundary condition from the boundary box of each tooth, and segments a tooth region of each tooth in the dental scan data based on the boundary condition of each tooth.
PORTABLE MEDICAL EDUCATION DEVICE, MEDICAL EDUCATION PLATFORM, AND MEDICAL EDUCATION METHODS
A portable medical education device, medical education platform, and medical education methods are disclosed. The portable medical education device enables a camera to capture a specific picture to generate an image, extracts several features from the image, converts the features into an identification code, and transmits the identification code to the medical education platform. The medical education platform stores several three-dimensional medical models and finds a specific three-dimensional medical model from among them according to the identification code, wherein a preset code corresponding to the specific three-dimensional medical model is the same as the identification code. After that, the medical education platform transmits the specific three-dimensional medical model to the portable medical education device, and the portable medical education device enables a display screen to present a reality scene and enables the reality scene to show the specific three-dimensional medical model.
DETERMINING RELATIVE 3D POSITIONS AND ORIENTATIONS BETWEEN OBJECTS IN 2D MEDICAL IMAGES
Systems and methods are provided for processing X-ray images, wherein the methods are implemented as a software program product executable on a processing unit of the systems. Generally, an X-ray image is received by the system, the X-ray image being a projection image of a first object and a second object. The first and second objects are classified and a respective 3D model of each object is received. For the first object, a geometrical aspect such as an axis or a line is determined, and for the second object, another geometrical aspect such as a point is determined. Finally, a spatial relation between the first object and the second object is determined based on the 3D model of the first object, the 3D model of the second object, and the information that the point of the second object is located on the geometrical aspect of the first object.
Head mounted display device and operating method thereof
Provided are an HMD device and an operating method thereof. The operating method of an HMD device includes displaying at least one object in a display area of a transparent display, obtaining an image of a hand of a user interacting with the displayed object, determining a direction in which the hand is facing based on the obtained image, and performing a function for the object corresponding to the direction in which the hand is facing.
AUTOMATICALLY SEGMENTING VERTEBRAL BONES IN 3D MEDICAL IMAGES
Disclosed herein are systems and methods for vertebral bone segmentation and vertebral bone enhancement in medical images.
EXPRESSION GENERATION FOR ANIMATION OBJECT
An expression generation method for an animation object is provided. In the method, a first facial expression of a target animation object is acquired by a first animation application from a facial expression set generated by a second animation application. The facial expression set includes different facial expressions of the target animation object. A display parameter of the acquired first facial expression in the first animation application is adjusted based on a first user input to obtain a second facial expression of the target animation object. A target animation of the target animation object that includes an image frame of the second facial expression is generated.
METHOD AND APPARATUS FOR PROCESSING AN IMAGE
Disclosed are a method and an apparatus for processing an image. The method includes: obtaining an initial bone segmentation result; and fusing the initial bone segmentation result based on characteristics of, and correspondences between, a plurality of bone segmentation results in the initial bone segmentation result, to obtain a target bone segmentation result. The initial bone segmentation result includes the plurality of bone segmentation results generated by a plurality of different deep learning models. Methods in the embodiments of the present application can improve the precision of the fusion result of the plurality of bone segmentation results.
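The fusion step above combines per-model bone segmentations into one target result; as a minimal illustration, majority voting across models stands in for the patent's characteristic-and-correspondence-based fusion rule, which is not specified in the abstract.

```python
import numpy as np

# Toy fusion sketch: label a voxel as bone when a strict majority of the
# per-model binary masks agree. Majority voting is an assumed stand-in
# for the fusion rule described in the patent.

def fuse_by_majority(masks):
    """`masks` is a list of same-shape binary arrays, one per model."""
    stacked = np.stack(masks).astype(int)
    return stacked.sum(axis=0) * 2 > len(masks)  # strict majority

m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [1, 1, 0]])
fused = fuse_by_majority([m1, m2, m3])
```

With three models, a voxel is kept only where at least two masks agree, so single-model outliers (such as the lone votes in the bottom row) are discarded.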