Patent classifications
G06V10/754
Pattern inspection apparatus and pattern inspection method
According to one aspect of the present invention, a pattern inspection apparatus includes reference outline creation processing circuitry configured to create a reference outline of a reference figure pattern, which serves as a reference, by using pattern data of a design pattern that serves as a base of a figure pattern formed on a substrate; outline extraction processing circuitry configured to extract, from a measurement image, an outline of the figure pattern using, as starting points, a plurality of points positioned on the reference outline; and comparison processing circuitry configured to compare the reference outline with the outline of the figure pattern.
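As a rough illustration (not the patented circuitry), the comparison step could be sketched as a nearest-point deviation check between the two outlines; the tolerance value, the nearest-point matching, and all names below are assumptions:

```python
import numpy as np

def compare_outlines(reference_outline, measured_outline, tolerance=2.0):
    """Compare a reference outline with a measured figure-pattern outline.

    Both outlines are (N, 2) arrays of (x, y) points. For each reference
    point, the distance to the nearest measured point is taken as the local
    edge deviation; points exceeding `tolerance` are flagged as defects.
    """
    ref = np.asarray(reference_outline, dtype=float)
    mea = np.asarray(measured_outline, dtype=float)
    # Pairwise distances between all reference and measured points.
    d = np.linalg.norm(ref[:, None, :] - mea[None, :, :], axis=2)
    nearest = d.min(axis=1)                 # per-reference-point deviation
    defects = np.nonzero(nearest > tolerance)[0]
    return nearest, defects
```

A production system would use a spatial index rather than the full distance matrix, but the pass/fail logic is the same.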
AUTOMATED IMPLANT MOVEMENT ANALYSIS SYSTEMS AND RELATED METHODS
Methods, systems, workstations, and computer program products that provide automated implant analysis of batches of image data sets of a plurality of different patients having an implant coupled to bone, using a first data set of a first patient from the batch of image data sets, the first data set comprising a first image stack and a second image stack, and allowing a user to select parameter settings for implant movement analysis of the implant, including selecting a first object of interest and a second reference object. Measurements of movement of the implant and/or coupled bone can be automatically calculated, the selected parameter settings can be automatically propagated to other image data sets of other patients of the batch of image data sets, and measurements for the image data sets of others of the different patients can be automatically calculated.
AUTOMATED IMPLANT MOVEMENT ANALYSIS SYSTEMS AND RELATED METHODS
Methods, systems, workstations, and computer program products that provide automated implant analysis using first and second sets of patient image stacks of a patient having at least one metallic implant coupled to bone. Relevant image stack pairs are selected from the first and second patient image stacks, the image stack pairs having at least one common target object or part of a target object for analysis therein. Bone and the at least one metallic implant are segmented in the first and second image stacks to define segmented objects and/or segmented parts of objects. Selected relevant image stack pairs from the first and second patient image stacks can be registered using the selected segmented objects and/or the segmented parts of objects. Measurements of movement of the implant and/or coupled bone after the registration can be calculated using the selected segmented objects and/or the segmented parts of objects.
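As a hedged sketch of the register-then-measure idea (not the patented method), one could rigidly register the two stacks on the segmented bone and then measure how far the implant centroid moved in the bone-aligned frame; the Kabsch solver and all names here are assumptions:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch algorithm) mapping src onto dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def implant_movement(bone1, bone2, implant1, implant2):
    """Register the stacks on segmented bone points, then report the
    displacement of the implant centroid in that bone-aligned frame."""
    R, t = rigid_register(np.asarray(bone1, float), np.asarray(bone2, float))
    implant1_in_2 = np.asarray(implant1, float) @ R.T + t
    return float(np.linalg.norm(
        np.asarray(implant2, float).mean(0) - implant1_in_2.mean(0)))
```

When the bone is unchanged between stacks, the registration is the identity and the returned value is the raw implant displacement.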
SYSTEMS AND METHODS FOR MODIFYING LABELED CONTENT
Systems and methods are disclosed for modifying labeled target content for a capture device. A computer-implemented method may use a computer system that includes non-transient electronic storage, a graphical user interface, and one or more physical computer processors. The computer-implemented method may include: obtaining labeled target content, the labeled target content including one or more facial features that have been labeled; modifying the labeled target content to match dynamically captured content from a first capture device to generate modified target content; and storing the modified target content. The dynamically captured content may include the one or more facial features.
Device, System, and Method of Generating a Reduced-Size Volumetric Dataset
Device, system, and method of generating a reduced-size volumetric dataset. A method includes receiving a plurality of three-dimensional volumetric datasets that correspond to a particular object; and generating, from that plurality of three-dimensional volumetric datasets, a single uniform mesh dataset that corresponds to that particular object. The size of that single uniform mesh dataset is less than 1/4 of the aggregate size of the plurality of three-dimensional volumetric datasets. The resulting uniform mesh is temporally coherent, and can be used for animating that object, as well as for introducing modifications to that object or to clothing or garments worn by that object.
Virtualization of Tangible Interface Objects
An example system includes a stand configured to position a computing device proximate to a physical activity surface. The system further includes a video capture device, a detector, and an activity application. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects physically interactable with by a user. The detector is executable to detect motion in the activity scene by processing the video stream and, responsive to detecting the motion, process the video stream to detect one or more interface objects included in the activity scene of the physical activity surface. The activity application is executable to present virtual information on a display of the computing device based on the one or more detected interface objects.
Unsupervised asymmetry detection
Asymmetries are detected in one or more images by partitioning each image to create a set of patches. Salient patches are identified, and an independent displacement for each patch is computed. The saliency and displacement cues for each patch are combined in a function to generate a score for each patch. The scores can be used to identify possible asymmetries.
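A minimal sketch of the patch-scoring idea, under loose assumptions (saliency taken as patch variance, displacement as the offset to the best-matching patch in the mirrored image, and a fixed weighted sum as the combining function; none of these specifics come from the abstract):

```python
import numpy as np

def asymmetry_scores(image, patch=4, w_sal=0.5, w_disp=0.5):
    """Score non-overlapping patches of `image` for possible asymmetry."""
    img = np.asarray(image, float)
    flipped = img[:, ::-1]                         # horizontal mirror
    ph, pw = img.shape[0] // patch, img.shape[1] // patch
    scores = np.zeros((ph, pw))
    # Pre-cut the mirrored image into patches for matching.
    fpatches = [(i, j, flipped[i*patch:(i+1)*patch, j*patch:(j+1)*patch])
                for i in range(ph) for j in range(pw)]
    for i in range(ph):
        for j in range(pw):
            p = img[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            sal = p.var()                          # saliency cue
            # Independent displacement: offset to best match in the mirror.
            best = min(fpatches, key=lambda f: ((p - f[2]) ** 2).sum())
            disp = np.hypot(best[0] - i, best[1] - j)
            scores[i, j] = w_sal * sal + w_disp * disp
    return scores
```

High-scoring patches are the candidates for asymmetry; a real detector would learn the combining function rather than fix the weights.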
Method for microscopic image acquisition based on sequential section
A method for microscopic image acquisition based on a sequential slice. The method includes: acquiring a sample of the sequential slice and a navigation image thereof; identifying and labeling the sample of the sequential slice in the navigation image by using image processing and machine learning methods; placing the sample of the sequential slice in a microscope, establishing a coordinate transformation matrix between the navigation image and the microscope's actual sampling space coordinates, and navigating and locating any pixel point in the navigation image to the center of the microscope's visual field; locating the sample of the sequential slice under a low-resolution visual field and binding a sample acquisition parameter; and, based on the bound sample acquisition parameter, recording the relative locations of the center point of a high-resolution acquisition region and the center point obtained after matching with a sample template.
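The coordinate transformation matrix between navigation-image pixels and microscope stage coordinates could, as one plausible reading, be an affine transform fitted from landmark correspondences; this least-squares sketch and its names are assumptions, not the patented procedure:

```python
import numpy as np

def fit_stage_transform(nav_px, stage_xy):
    """Fit an affine navigation-image -> stage-coordinate transform from
    at least three non-collinear landmark correspondences."""
    nav = np.asarray(nav_px, float)
    stage = np.asarray(stage_xy, float)
    A = np.hstack([nav, np.ones((len(nav), 1))])    # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, stage, rcond=None)   # 3x2 transform matrix
    return M

def navigate_to(pixel, M):
    """Map a navigation-image pixel to the stage position that centers it
    in the microscope's visual field."""
    x, y = pixel
    return np.array([x, y, 1.0]) @ M
```

With an exact affine relationship between the two spaces, the fit recovers it exactly; in practice extra landmarks average out stage and imaging error.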
Image deformation processing method and apparatus, and computer storage medium
A method and system are provided. The method includes positioning facial feature base points in a face image in an obtained image. A deformation template is obtained, the deformation template carrying configuration reference points and configuration base points. Among the facial feature base points, a current reference point corresponding to a configuration reference point is determined, and a to-be-matched base point corresponding to a configuration base point is determined. A target base point corresponding to the configuration base point is determined in a to-be-processed image. The target base point and the corresponding to-be-matched base point form a mapping point pair. A to-be-processed image point is mapped to a corresponding target location according to a location relationship between the target base point and the to-be-matched base point, and a location relationship between the mapping point pair and the to-be-processed image point.
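One simple way to realize such point-pair-driven mapping (an illustrative assumption, not the claimed method) is to move each image point by an inverse-distance-weighted average of the pair displacements, so that nearby base points dominate:

```python
import numpy as np

def map_point(p, matched_pts, target_pts, eps=1e-6):
    """Map a to-be-processed image point toward its target location.

    Each (to-be-matched base point, target base point) pair defines a local
    displacement; `p` is moved by the inverse-distance-weighted average of
    those displacements, so pairs close to `p` contribute the most.
    """
    p = np.asarray(p, float)
    m = np.asarray(matched_pts, float)    # to-be-matched base points
    t = np.asarray(target_pts, float)     # corresponding target base points
    disp = t - m                          # per-pair displacement vectors
    w = 1.0 / (np.linalg.norm(m - p, axis=1) + eps)
    w /= w.sum()                          # normalized distance weights
    return p + w @ disp
```

Applying this to every pixel of the to-be-processed image produces the warped (deformed) result.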
Image-based global registration system and method applicable to bronchoscopy guidance
A global registration system and method identifies bronchoscope position without the need for significant bronchoscope maneuvers, technician intervention, or electromagnetic sensors. Virtual bronchoscopy (VB) renderings of a 3D airway tree are obtained including VB views of branch positions within the airway tree. At least one real bronchoscopic (RB) video frame is received from a bronchoscope inserted into the airway tree. An algorithm according to the invention is executed on a computer to identify the several most likely branch positions having a VB view closest to the received RB view, and the 3D position of the bronchoscope within the airway tree is determined in accordance with the branch position identified in the VB view. The preferred embodiment involves a fast local registration search over all the branches in a global airway-bifurcation search space, with the weighted normalized sum of squares distance metric used for finding the best match.
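The weighted normalized sum-of-squares matching named in the abstract might look roughly like the following; the zero-mean/unit-variance normalization and all names are assumptions about its exact form:

```python
import numpy as np

def weighted_nssd(rb, vb, weights):
    """Weighted normalized sum-of-squares distance between a real
    bronchoscopic (RB) frame and a virtual bronchoscopy (VB) rendering."""
    rb, vb = np.asarray(rb, float), np.asarray(vb, float)
    rbn = (rb - rb.mean()) / (rb.std() + 1e-9)   # normalize out illumination
    vbn = (vb - vb.mean()) / (vb.std() + 1e-9)
    w = np.asarray(weights, float)
    return float((w * (rbn - vbn) ** 2).sum() / w.sum())

def best_branch(rb_frame, vb_views, weights):
    """Index of the VB branch view closest to the RB frame, i.e. the most
    likely bronchoscope position in the airway-bifurcation search space."""
    scores = [weighted_nssd(rb_frame, v, weights) for v in vb_views]
    return int(np.argmin(scores))
```

The normalization makes the distance invariant to global brightness and contrast differences between video and rendering, which is why a VB view that matches the RB frame up to such a transform scores near zero.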