Patent classifications
G06V10/755
Method and apparatus for extracting facial feature, and method and apparatus for facial recognition
A method and an apparatus for extracting a facial feature and a method and an apparatus for recognizing a face are provided, in which the apparatus for extracting a facial feature may extract facial landmarks from a current input image, sample a skin region and a facial component region based on the extracted facial landmarks, generate a probabilistic model associated with the sampled skin region and the facial component region, extract the facial component region from a face region included in the input image using the generated probabilistic model, and extract facial feature information from the extracted facial component region.
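The probabilistic-model step described above might be sketched as follows, assuming (hypothetically) a single Gaussian color model fitted to the sampled skin pixels; pixels scored far from the model are candidates for facial-component regions. All function names are illustrative, not from the patent.

```python
import numpy as np

def fit_skin_model(skin_pixels):
    """Fit a Gaussian color model; skin_pixels is an (N, 3) array of samples."""
    mean = skin_pixels.mean(axis=0)
    cov = np.cov(skin_pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularize
    return mean, np.linalg.inv(cov)

def skin_likelihood(pixels, mean, inv_cov):
    """Squared Mahalanobis distance of each pixel to the skin model (lower = more skin-like)."""
    d = pixels - mean
    return np.einsum("ij,jk,ik->i", d, inv_cov, d)

# Usage: pixels scoring far from the skin model are candidate component pixels.
rng = np.random.default_rng(0)
skin = rng.normal([200, 150, 130], 10, size=(500, 3))
mean, inv_cov = fit_skin_model(skin)
scores = skin_likelihood(np.array([[200, 150, 130], [30, 30, 30]]), mean, inv_cov)
assert scores[0] < scores[1]  # the skin-colored pixel scores closer to the model
```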
Device and method for finding cell nucleus of target cell from cell image
The present invention discloses a method for finding a cell nucleus of a target cell from a cell image, wherein the cell image includes the target cell and at least one variation cell, and the target cell includes cytoplasm and the cell nucleus. The method includes steps of: (a) processing the cell image via an image processor such that the cytoplasm, the cell nucleus and the variation cell have different shades of color; (b) demarcating the outlines of the cytoplasm, the cell nucleus and the variation cell; (c) calculating geometrical reference points of the outlines; (d) calculating the distances from the geometrical reference point of the cytoplasm outline to the geometrical reference point of the cell nucleus outline and to the geometrical reference points of the variation cell outlines; and (e) finding the geometrical reference point having the shortest distance, and locating the outline corresponding to that reference point as the cell nucleus.
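Steps (c) through (e) above reduce to a nearest-centroid search. A minimal sketch, assuming each outline is given as an array of 2D points and taking the centroid as the geometrical reference point:

```python
import numpy as np

def centroid(outline):
    """Geometrical reference point of an outline, taken here as its centroid."""
    return np.asarray(outline, dtype=float).mean(axis=0)

def find_nucleus(cytoplasm_outline, candidate_outlines):
    """Return the index of the candidate outline (nucleus or variation cells)
    whose reference point lies closest to the cytoplasm reference point."""
    c = centroid(cytoplasm_outline)
    dists = [np.linalg.norm(centroid(o) - c) for o in candidate_outlines]
    return int(np.argmin(dists))

cytoplasm = [(0, 0), (10, 0), (10, 10), (0, 10)]     # centroid (5, 5)
nucleus = [(4, 4), (6, 4), (6, 6), (4, 6)]           # centroid (5, 5), inside
variation = [(20, 20), (24, 20), (24, 24), (20, 24)] # centroid (22, 22), far away
assert find_nucleus(cytoplasm, [variation, nucleus]) == 1
```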
SYSTEMS AND METHODS FOR CAPTURING, TRANSFERRING, AND RENDERING VIEWPOINT-ADAPTIVE THREE-DIMENSIONAL (3D) PERSONAS
Systems and methods relate to receiving a plurality of video streams captured of a subject by a plurality of video cameras, each video stream including video frames time-synchronized according to a shared frame rate, each video camera having a known vantage point in a predetermined coordinate system; obtaining at least one three-dimensional (3D) mesh of the subject at the shared frame rate, the 3D mesh time-synchronized with the video frames of the video streams, the at least one mesh including a plurality of vertices with known locations in the predetermined coordinate system; calculating one or more lists of visible-vertices at the shared frame rate, each list including a subset of the plurality of vertices of the at least one 3D mesh of the subject, the subset being a function of the location of the known vantage point associated with at least one of the plurality of video cameras; generating one or more time-synchronized data streams at the shared frame rate, the one or more time-synchronized data streams including: one or more video streams encoding at least one of the plurality of video streams; and one or more geometric-data streams including the calculated one or more visible-vertices lists; and transmitting the one or more time-synchronized data streams to a receiver for rendering of a viewpoint-adaptive 3D persona of the subject.
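The per-camera visible-vertices calculation above can be approximated with a back-facing test: a vertex is kept when its normal points toward the camera's vantage point. This is only a sketch of the subset idea (a full implementation would also handle occlusion); the function names and the normal-based criterion are assumptions, not taken from the patent.

```python
import numpy as np

def visible_vertices(vertices, normals, vantage_point):
    """Return indices of vertices whose normals face the vantage point.

    vertices, normals: (N, 3) arrays; vantage_point: (3,) camera position
    in the predetermined coordinate system."""
    to_camera = vantage_point - vertices
    facing = np.einsum("ij,ij->i", normals, to_camera) > 0.0
    return np.nonzero(facing)[0]

verts = np.array([[0.0, 0, 0], [1.0, 0, 0]])
norms = np.array([[0.0, 0, 1], [0.0, 0, -1]])  # one faces +z, one faces -z
cam = np.array([0.5, 0.0, 5.0])                # camera above the mesh
assert list(visible_vertices(verts, norms, cam)) == [0]
```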
SYSTEMS AND METHODS FOR COMPRESSION, TRANSFER, AND RECONSTRUCTION OF THREE-DIMENSIONAL (3D) DATA MESHES
Systems and methods relate to encoded video streams including geometric-data streams transmitted to a receiver for rendering of a viewpoint-adaptive 3D persona. A method includes obtaining at least one triangle-based three-dimensional (3D) submesh of a subject, wherein the obtained triangle-based 3D submesh includes a plurality of submesh vertices that define a plurality of submesh triangles, identifying a plurality of strips of the submesh triangles, generating triangle-strip data representing the identified strips of submesh triangles, generating compressed-submesh data that includes the triangle-strip data, and transmitting the compressed-submesh data to a receiver for reconstruction of the triangle-based 3D submesh of the subject.
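The compression benefit of the triangle-strip data above is that a strip of n triangles needs only n + 2 vertex indices instead of 3n. A minimal decoder sketch, assuming the common convention that each consecutive index triple forms a triangle with alternating winding (the patent does not fix an encoding):

```python
def strip_to_triangles(strip):
    """Expand a triangle-strip index list into explicit triangles,
    normalizing the alternating winding of odd-numbered triangles."""
    tris = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        if i % 2:          # odd triangles are wound the other way
            a, b = b, a
        tris.append((a, b, c))
    return tris

strip = [0, 1, 2, 3, 4]    # 5 indices encode 3 triangles (vs. 9 indices raw)
assert strip_to_triangles(strip) == [(0, 1, 2), (2, 1, 3), (2, 3, 4)]
```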
SYSTEMS AND METHODS FOR REFERENCE-MODEL-BASED MODIFICATION OF A THREE-DIMENSIONAL (3D) MESH DATA MODEL
Systems and methods relate to encoded video streams including geometric-data streams transmitted to a receiver for rendering of a viewpoint-adaptive 3D persona. A method includes obtaining a three-dimensional (3D) mesh of a subject generated from depth-camera-captured information about the subject, obtaining a facial-mesh model, locating a facial portion of the obtained 3D mesh of the subject, computing a geometric transform based on the facial portion and the facial-mesh model, the geometric transform determined in response to one or more aggregated error differences between a plurality of feature points on the facial-mesh model and a plurality of corresponding feature points on the facial portion of the obtained 3D mesh, generating a transformed facial-mesh model using the geometric transform and generating a hybrid mesh of the subject at least in part by combining the transformed facial-mesh model and at least a portion of the obtained 3D mesh.
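A geometric transform minimizing aggregated error between corresponding feature points, as described above, can be sketched as a least-squares rigid alignment (the Kabsch algorithm); whether the patent uses a rigid, similarity, or other transform class is not stated, so this is one plausible instance.

```python
import numpy as np

def rigid_align(src, dst):
    """Return rotation R and translation t minimizing sum ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 3) arrays of corresponding feature points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Usage: recover a known rotation + translation from exact correspondences.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1.0]])
dst = src @ Rz.T + np.array([2.0, -1.0, 0.5])
R, t = rigid_align(src, dst)
assert np.allclose(src @ R.T + t, dst)
```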
SYSTEMS AND METHODS FOR RECONSTRUCTION AND RENDERING OF VIEWPOINT-ADAPTIVE THREE-DIMENSIONAL (3D) PERSONAS
Systems and methods relate to receiving video streams captured of a subject by video cameras, each video stream including time-synchronized video frames, each video camera having a known vantage point in a predetermined coordinate system; obtaining at least one three-dimensional (3D) mesh of the subject, the mesh being time-synchronized and including a plurality of mesh vertices with known locations; identifying a user-selected viewpoint, and identifying a viewpoint-specific subset of the mesh vertices visible from that viewpoint; generating 3D submeshes of the subject by calculating visible-vertices lists from the vantage point of each video camera from which the viewpoint-specific subset of mesh vertices is visible; projecting mesh vertices from the calculated visible-vertices lists onto video pixels; and rendering viewpoint-adaptive 3D personas of the subject by weighting video pixel colors from different video-camera vantage points according to the geometric relationship of each video-camera vantage point to the user-selected viewpoint.
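The final weighting step above might look like the following sketch: each camera's pixel color is weighted by how closely its viewing direction agrees with the user-selected viewpoint direction. The cosine weighting is an assumption for illustration; the patent only requires some geometric relationship.

```python
import numpy as np

def blend_colors(colors, cam_dirs, view_dir):
    """Blend per-camera pixel colors by viewpoint agreement.

    colors: (K, 3) pixel colors from K cameras;
    cam_dirs: (K, 3) unit viewing directions; view_dir: (3,) unit direction."""
    w = np.clip(cam_dirs @ view_dir, 0.0, None)   # cosine weight, no negatives
    w = w / w.sum()                               # normalize to sum to 1
    return w @ colors

cams = np.array([[0.0, 0, 1], [1.0, 0, 0]])       # one camera ahead, one to the side
colors = np.array([[255.0, 0, 0], [0.0, 0, 255]])
view = np.array([0.0, 0, 1.0])                    # user looks along the first camera
assert np.allclose(blend_colors(colors, cams, view), [255.0, 0, 0])
```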
SYSTEMS AND METHODS FOR CONTOURING A SET OF MEDICAL IMAGES
Systems and methods are provided for contouring a set of medical images. Deformation field data is generated between a source image and a target image of the set of medical images. The deformation field data relates structures in the source image to corresponding structures in the target image and is generated in accordance with a deformable registration algorithm. The deformation field data is utilized to generate target contour data associated with the target image from source contour data, associated with the source image, that identifies one or more objects within the source image.
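Once deformation field data exists, transferring a contour reduces to sampling the field at each contour point and adding the displacement. A minimal sketch, assuming the field stores per-pixel displacements toward the target image and using nearest-neighbor sampling (a real registration pipeline would interpolate):

```python
import numpy as np

def warp_contour(contour, field):
    """Transfer a source contour through a deformation field.

    contour: (N, 2) integer (row, col) points on the source image;
    field: (H, W, 2) per-pixel (row, col) displacements to the target."""
    contour = np.asarray(contour)
    disp = field[contour[:, 0], contour[:, 1]]   # sample displacement at each point
    return contour + disp

field = np.zeros((4, 4, 2))
field[:, :, 1] = 1                               # every structure shifts one column right
src_contour = [(1, 1), (1, 2), (2, 1)]
assert warp_contour(src_contour, field).tolist() == [[1, 2], [1, 3], [2, 2]]
```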
Method and system for surgical tool localization during anatomical surgery
Various aspects of a method and system to localize surgical tools during anatomical surgery are disclosed herein. In accordance with an embodiment of the disclosure, the method is implementable in an image-processing engine, which is communicatively coupled to an image-capturing device that captures one or more video frames. The method includes determination of one or more physical characteristics of one or more surgical tools present in the one or more video frames, based on one or more color and geometric constraints. Thereafter, two-dimensional (2D) masks of the one or more surgical tools are detected, based on the one or more physical characteristics of the one or more surgical tools. Further, poses of the one or more surgical tools are estimated, when the 2D masks of the one or more surgical tools are occluded at tips and/or ends of the one or more surgical tools.
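One simple color constraint of the kind mentioned above is that metallic surgical tools tend to appear as low-saturation, moderately bright regions. The sketch below illustrates that single constraint only; the thresholds and the choice of constraint are assumptions for illustration, not the patent's actual criteria.

```python
import numpy as np

def tool_color_mask(rgb):
    """Boolean mask of near-gray, sufficiently bright pixels.

    rgb: (H, W, 3) float image with channel values in [0, 1]."""
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return (saturation < 0.2) & (mx > 0.4)   # near-gray and bright enough

img = np.zeros((2, 2, 3))
img[0, 0] = [0.8, 0.8, 0.8]   # gray, tool-like pixel
img[0, 1] = [0.9, 0.2, 0.2]   # saturated red, tissue-like pixel
mask = tool_color_mask(img)
assert mask[0, 0] and not mask[0, 1] and not mask[1, 1]
```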
DEVICE AND METHOD FOR REPRESENTING AN ANATOMICAL SHAPE OF A LIVING BEING
The present invention relates to a device for encoding an anatomical shape (13) of a living being, comprising a receiving unit (12) for receiving an anatomical shape (13); a shape representation generating unit (14) for generating a shape representation of the anatomical shape (13) by using one or more shape representation models and determining the value of one or more shape representation coefficients of the one or more shape representation models; and a conversion unit (16) for converting the determined value of the one or more shape representation coefficients into a human-readable code comprising one or more human-readable characters.
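The conversion unit (16) described above could, for example, quantize each shape representation coefficient and map it onto a readable alphabet. The Base32-style scheme below is purely an assumed illustration; the patent does not fix any particular encoding.

```python
# 32 characters chosen to avoid ambiguous glyphs (no I, O, 0, 1).
ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"

def encode_coefficients(coeffs, lo=-1.0, hi=1.0):
    """Clamp each coefficient to [lo, hi], quantize to 32 levels,
    and emit one human-readable character per coefficient."""
    chars = []
    for c in coeffs:
        level = round((min(max(c, lo), hi) - lo) / (hi - lo) * 31)
        chars.append(ALPHABET[level])
    return "".join(chars)

code = encode_coefficients([-1.0, 0.0, 1.0])
assert len(code) == 3 and code[0] == "A" and code[-1] == "9"
```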
Systems, methods, and apparatuses for implementing medical image segmentation using interactive refinement
Systems, methods, and apparatuses for implementing medical image segmentation using interactive refinement, in which the trained deep models are utilized for the processing of medical imaging, are described. A method includes operating a two-step deep learning training framework, including: receiving original input images at the deep learning training framework; generating an initial prediction image specifying image segmentation via a base segmentation model; receiving user input guidance signals; routing each of (i) the original input images, (ii) the initial prediction image, and (iii) the user input guidance signals to an InterCNN; generating a refined prediction image specifying refined image segmentation by processing (i), (ii), and (iii) through the InterCNN to render the refined prediction image incorporating the user input guidance signals; and outputting a refined segmentation mask to the deep learning training framework as a guidance signal.
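The two-step routing described above can be sketched with the networks stubbed out as plain callables (placeholders for trained models): the refinement network receives the original image, the initial prediction, and the user guidance stacked as input channels. The stub behaviors are assumptions for illustration only.

```python
import numpy as np

def base_segmentation_model(image):
    """Stub base model: threshold the image to produce the initial prediction."""
    return (image > 0.5).astype(float)

def inter_cnn(image, prediction, guidance):
    """Stub InterCNN: route all three inputs as channels, then override the
    prediction wherever the user supplied guidance (+1 add, -1 remove)."""
    image, prediction, guidance = np.stack([image, prediction, guidance])
    return np.where(guidance != 0, (guidance > 0).astype(float), prediction)

image = np.array([[0.9, 0.2], [0.6, 0.4]])
initial = base_segmentation_model(image)            # step 1: initial prediction
guidance = np.array([[0.0, 1.0], [0.0, -1.0]])      # user adds / removes pixels
refined = inter_cnn(image, initial, guidance)       # step 2: refined prediction
assert refined.tolist() == [[1.0, 1.0], [1.0, 0.0]]
```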