Patent classifications
G06V10/755
Systems and methods for compression, transfer, and reconstruction of three-dimensional (3D) data meshes
An exemplary method includes generating a 3D mesh of a subject based on frames of time-synchronized video streams of a subject, the frames associated with a first time and generating a transformed facial-mesh model based on a facial portion of the 3D mesh and a facial-mesh model. The method further includes generating a hybrid mesh by combining the transformed facial-mesh model and at least a portion of the 3D mesh. The method further includes generating a current 3D mesh based on frames of the time-synchronized video streams associated with a second time that temporally follows the first time. The method further includes generating a deformed historical 3D mesh by applying a non-rigid deformation process to the hybrid mesh based on the current 3D mesh. The method further includes compressing the deformed historical 3D mesh to form at least one triangle-based 3D submesh including a plurality of submesh triangles.
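The pipeline above can be sketched at a high level. This is a toy illustration, not the claimed implementation: meshes are simplified to (vertices, triangles) tuples, the non-rigid deformation is stood in for by a per-vertex blend, and all function names are hypothetical.

```python
# Hypothetical sketch of the hybrid-mesh pipeline described above.
# A mesh is a (vertices, triangles) tuple; names are illustrative only.

def make_hybrid_mesh(body_mesh, transformed_face_mesh):
    """Combine the transformed facial-mesh model with the (non-facial)
    portion of the captured 3D mesh into a single hybrid mesh."""
    body_verts, body_tris = body_mesh
    face_verts, face_tris = transformed_face_mesh
    offset = len(body_verts)
    verts = list(body_verts) + list(face_verts)
    # Re-index face triangles so they reference the appended vertices.
    tris = list(body_tris) + [tuple(i + offset for i in t) for t in face_tris]
    return verts, tris

def deform_toward(historical_verts, current_verts, alpha=0.5):
    """Stand-in for the non-rigid deformation step: blend each
    historical vertex toward its corresponding current vertex."""
    return [
        tuple(h + alpha * (c - h) for h, c in zip(hv, cv))
        for hv, cv in zip(historical_verts, current_verts)
    ]

def split_into_submeshes(tris, max_tris=2):
    """Naive compression-side partition into triangle-based submeshes."""
    return [tris[i:i + max_tris] for i in range(0, len(tris), max_tris)]
```

A real system would drive the deformation by correspondence search against the current mesh and choose submesh boundaries for coding efficiency; the sketch only shows the data flow.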
Methods and devices for determining metrology sites
Methods for determining metrology sites for products include detecting corresponding objects in measurement data of one or more product samples and aligning the detected objects. The methods also include analyzing the aligned objects and determining metrology sites based on the analysis. Devices use such methods to determine metrology sites for products.
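The detect/align/analyze chain can be illustrated with a minimal sketch. This is an assumption-laden toy: alignment is reduced to centroid translation, and "analysis" is reduced to ranking candidate sites by cross-sample positional variance; the patent does not specify either choice.

```python
# Illustrative sketch of the metrology-site workflow: align detected
# object positions across samples, then rank candidate sites by how
# much they vary between samples. All names are hypothetical.

def align_to_reference(samples, reference):
    """Crude alignment: translate each sample's detected points so its
    centroid coincides with the reference centroid."""
    def centroid(pts):
        n = len(pts)
        return tuple(sum(p[i] for p in pts) / n for i in range(2))
    ref_c = centroid(reference)
    aligned = []
    for sample in samples:
        c = centroid(sample)
        shift = (ref_c[0] - c[0], ref_c[1] - c[1])
        aligned.append([(x + shift[0], y + shift[1]) for x, y in sample])
    return aligned

def choose_metrology_sites(aligned_samples, top_k=1):
    """Pick the sites whose positions vary most across samples."""
    n_sites = len(aligned_samples[0])
    scores = []
    for i in range(n_sites):
        pts = [s[i] for s in aligned_samples]
        mx = sum(p[0] for p in pts) / len(pts)
        my = sum(p[1] for p in pts) / len(pts)
        var = sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in pts) / len(pts)
        scores.append((var, i))
    return [i for _, i in sorted(scores, reverse=True)[:top_k]]
```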
Systems and methods for model-based modification of a three-dimensional (3D) mesh
An illustrative method includes obtaining a three-dimensional (3D) mesh of a subject, obtaining a mesh model, and generating a hybrid mesh of the subject. The generating includes replacing a portion of the 3D mesh with the mesh model such that the hybrid mesh includes a non-replaced portion of the 3D mesh represented at a first resolution and the mesh model representing the replaced portion of the 3D mesh at a second resolution.
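The replacement step can be sketched concretely. A hedged toy, assuming a shared indexed-triangle representation: the replaced region's triangles are dropped from the captured mesh and the model's geometry (which may be at a different resolution) is appended in their place.

```python
# Toy sketch of building the hybrid mesh described above. A mesh is a
# (vertices, triangles) pair with triangles as vertex-index triples.

def replace_portion(mesh_verts, mesh_tris, replaced_tri_ids, model_mesh):
    """Drop the triangles being replaced from the captured mesh and
    append the mesh model's geometry, so the hybrid mesh keeps the
    non-replaced portion at its original resolution while the model
    represents the replaced portion at its own resolution."""
    kept_tris = [t for i, t in enumerate(mesh_tris) if i not in replaced_tri_ids]
    offset = len(mesh_verts)
    verts = list(mesh_verts) + list(model_mesh[0])
    # Re-index model triangles to reference the appended vertices.
    tris = kept_tris + [tuple(v + offset for v in t) for t in model_mesh[1]]
    return verts, tris
```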
SYSTEMS, METHODS, AND APPARATUSES FOR IMPLEMENTING MEDICAL IMAGE SEGMENTATION USING INTERACTIVE REFINEMENT
Described herein are means for implementing medical image segmentation using interactive refinement, in which the trained deep models are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured for operating a two-step deep learning training framework including means for receiving original input images at the deep learning training framework; means for generating an initial prediction image specifying image segmentation by processing the original input images through the base segmentation model to render the initial prediction image in the absence of user input guidance signals; means for receiving user input guidance signals indicating user-guided segmentation refinements to the initial prediction image; means for routing each of (i) the original input images, (ii) the initial prediction image, and (iii) the user input guidance signals to an InterCNN; means for generating a refined prediction image specifying refined image segmentation by processing each of (i) the original input images, (ii) the initial prediction image, and (iii) the user input guidance signals through the InterCNN to render the refined prediction image incorporating the user input guidance signals; and means for outputting a refined segmentation mask based on application of the user input guidance signals to the deep learning training framework as a guidance signal. Other related embodiments are disclosed.
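The two-step flow (base prediction, then guidance-conditioned refinement) can be shown without any deep learning machinery. In this hedged sketch, `base_model` and `inter_cnn` are plain functions standing in for the trained networks, and user guidance is a dict of clicked pixels; none of this reflects the actual model architectures.

```python
# Toy sketch of the interactive-refinement loop described above.
# `base_model` and `inter_cnn` are stand-ins for trained networks.

def base_model(image):
    """Initial prediction with no user guidance: threshold the image."""
    return [[1 if px > 0.5 else 0 for px in row] for row in image]

def inter_cnn(image, prediction, guidance):
    """Refined prediction: this stand-in simply overrides the initial
    mask wherever the user clicked (1 = foreground, 0 = background).
    The real InterCNN consumes image, prediction, and guidance jointly."""
    refined = [row[:] for row in prediction]
    for (r, c), label in guidance.items():
        refined[r][c] = label
    return refined

def refine_interactively(image, guidance):
    """Run the two-step pipeline: base prediction, then refinement."""
    initial = base_model(image)
    return inter_cnn(image, initial, guidance)
```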
DRIVE ASSIST APPARATUS, DRIVE ASSIST METHOD, AND DRIVE ASSIST SYSTEM
A drive assist apparatus includes a storage unit that stores a three-dimensional model indicating a moving region, an input unit that receives, from a sensor group installed in the moving region, first height information indicating a first height which is a height of the mobile object and second height information indicating a second height which is a height of an object that satisfies a predetermined distance criterion from the mobile object, an extraction unit that extracts, from the three-dimensional model, a first plan view based on the first height information and a second plan view based on the second height information, a generation unit that generates a combined map for two-dimensionally showing the moving region and assisting the driving of the mobile object by combining the first plan view and the second plan view, and an output unit that transmits a generated combined map to the mobile object.
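The extraction and combination steps can be sketched on a voxelized model. A toy under stated assumptions: the 3D model is a set of occupied (x, y, z) cells, a "plan view" is a horizontal slice at one height, and the combined map labels each 2D cell by which slice(s) occupy it.

```python
# Rough sketch of plan-view extraction and map combination described
# above. The 3D model is a set of occupied (x, y, z) voxels.

def plan_view(model_voxels, height):
    """Extract a horizontal slice of the 3D model at the given height."""
    return {(x, y) for (x, y, z) in model_voxels if z == height}

def combined_map(model_voxels, vehicle_height, obstacle_height):
    """Overlay the slice at the mobile object's height with the slice
    at the nearby object's height into one 2D assistance map."""
    first = plan_view(model_voxels, vehicle_height)
    second = plan_view(model_voxels, obstacle_height)
    cells = {}
    for cell in first:
        cells[cell] = "vehicle-level"
    for cell in second:
        cells[cell] = "both" if cell in cells else "obstacle-level"
    return cells
```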
METHODS AND SYSTEMS FOR EXTRACTING BLOOD VESSEL
A method for determining a centerline of a blood vessel in an image associated with a subject is provided. The method includes obtaining a centerline model used for identifying a centerline of a blood vessel and identifying the centerline of the blood vessel based on the centerline model.
METHOD, APPARATUS, ELECTRONIC DEVICE, COMPUTER PROGRAM AND COMPUTER-READABLE RECORDING MEDIUM FOR DETECTING LANE MARKING BASED ON VEHICLE IMAGE
There is provided a method for detecting a lane marking using a processor including acquiring a drive image captured by an image capturing device of a vehicle which is running, detecting an edge corresponding to a lane marking from the acquired drive image and generating an edge image based on the detected edge, detecting a linear component based on the detected edge and generating a linearly processed edge image based on the detected linear component, detecting a lane marking point corresponding to the lane marking using the generated edge image and the linearly processed edge image, and detecting the lane marking based on the detected lane marking point.
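The edge image → linearly processed edge image → lane-point chain can be sketched on a toy grayscale image. Every operator here is a deliberately crude stand-in (a left-neighbor gradient for edge detection, vertical pixel runs for the linear component) and the thresholds are illustrative, not the patent's.

```python
# Simplified sketch of the edge -> line -> lane-point chain described
# above, on a toy grayscale image given as a list of rows.

def edge_image(img, thresh=0.5):
    """Mark pixels whose intensity jumps relative to the left neighbor
    (a crude stand-in for edge detection)."""
    h, w = len(img), len(img[0])
    return [[1 if c > 0 and abs(img[r][c] - img[r][c - 1]) > thresh else 0
             for c in range(w)] for r in range(h)]

def linear_edge_image(edges, min_run=2):
    """Keep only edge pixels forming a vertical run of sufficient length
    (a crude stand-in for detecting the linear component)."""
    h, w = len(edges), len(edges[0])
    out = [[0] * w for _ in range(h)]
    for c in range(w):
        run = [r for r in range(h) if edges[r][c]]
        if len(run) >= min_run:
            for r in run:
                out[r][c] = 1
    return out

def lane_points(edges, linear):
    """Lane-marking points: pixels present in both images."""
    return [(r, c) for r in range(len(edges)) for c in range(len(edges[0]))
            if edges[r][c] and linear[r][c]]
```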
SYSTEMS AND METHODS FOR RECONSTRUCTION AND RENDERING OF VIEWPOINT-ADAPTIVE THREE-DIMENSIONAL (3D) PERSONAS
An exemplary method includes maintaining a receiver-side mesh-vertices list, receiving duplicative-vertex information from a sender, and responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information, and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject at least in part by weighting video pixel colors from different video-camera vantage points of video cameras that capture video streams of the subject, the weighting being performed according to a respective geometric relationship of each video-camera vantage point to a user-selected viewpoint.
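The color-weighting step can be illustrated in isolation. A hedged sketch, assuming each camera's "geometric relationship" to the user-selected viewpoint is summarized by the cosine between direction vectors; the patent leaves the exact weighting function unspecified.

```python
# Sketch of view-dependent color weighting: each camera's pixel color
# is weighted by how closely its vantage direction agrees with the
# user-selected viewpoint direction (clamped cosine, an assumption).
import math

def blend_colors(camera_dirs, camera_colors, view_dir):
    """Blend per-camera RGB colors, weighting by the clamped cosine
    between each camera direction and the viewpoint direction."""
    def norm(v):
        m = math.sqrt(sum(x * x for x in v))
        return tuple(x / m for x in v)
    view = norm(view_dir)
    weights = []
    for d in camera_dirs:
        d = norm(d)
        weights.append(max(0.0, sum(a * b for a, b in zip(d, view))))
    total = sum(weights) or 1.0
    return tuple(
        sum(w * col[i] for w, col in zip(weights, camera_colors)) / total
        for i in range(3)
    )
```

A camera facing the same way as the viewpoint dominates the blend, while cameras at or beyond 90 degrees contribute nothing.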
Method for determining cellular Nuclear-to-Cytoplasmic Ratio
The present disclosure provides a computer-aided cell segmentation method for determining cellular Nuclear-to-Cytoplasmic ratio, which comprises acts of obtaining a cytological image using a non-invasive in vivo biopsy technique; performing a nuclei segmentation process to identify a position and a contour of each identified nucleus in the cytological image; performing a cytoplasmic segmentation process with an improved active contour model to obtain a cytoplasmic region for each identified nucleus; and determining a cellular Nuclear-to-Cytoplasmic ratio based on the obtained nucleus and cytoplasmic regions.
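The final computation reduces to an area ratio once the regions are segmented. A minimal sketch, assuming regions are given as pixel sets and the ratio is defined as nuclear area over cytoplasmic area (definitions of N/C ratio vary; some use nuclear area over total cell area).

```python
# Minimal sketch of the last step described above: compute the
# nuclear-to-cytoplasmic ratio from segmented pixel regions.

def nc_ratio(nucleus_pixels, cytoplasm_pixels):
    """N/C ratio as nuclear area divided by cytoplasmic area,
    with areas taken as pixel counts of the segmented regions."""
    if not cytoplasm_pixels:
        raise ValueError("empty cytoplasmic region")
    return len(nucleus_pixels) / len(cytoplasm_pixels)
```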