Patent classifications
G06T7/337
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
An information processing apparatus according to an embodiment of the present technology includes a line-of-sight estimator, a correction amount calculator, and a registration determination section. The line-of-sight estimator calculates an estimation vector obtained by estimating a direction of a line of sight of a user. The correction amount calculator calculates a correction amount related to the estimation vector on the basis of at least one object that is within a specified angular range that is set using the estimation vector as a reference. The registration determination section determines whether to register, in a data store, calibration data in which the estimation vector and the correction amount are associated with each other, on the basis of a parameter related to the at least one object within the specified angular range.
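The abstract's selection-and-registration logic (objects within an angular range around the gaze estimate, a correction toward a candidate object, and a registration decision based on a parameter of the candidates) might be sketched as follows. This is a minimal illustrative sketch, not the patented method; all function names, the dictionary layout, the 5° range, and the "register only when exactly one candidate" rule are assumptions.

```python
import math

def angle_between(v1, v2):
    """Angle in radians between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def correction_candidates(estimation_vector, objects, max_angle_deg=5.0):
    """Objects whose direction lies within the angular range set
    around the estimated line-of-sight vector."""
    limit = math.radians(max_angle_deg)
    return [o for o in objects
            if angle_between(estimation_vector, o["direction"]) <= limit]

def correction_amount(estimation_vector, candidates):
    """Correction = offset from the estimate to the nearest candidate
    object's direction (a simple stand-in for the calculator)."""
    best = min(candidates,
               key=lambda o: angle_between(estimation_vector, o["direction"]))
    return tuple(b - e for e, b in zip(estimation_vector, best["direction"]))

gaze = (0.0, 0.0, 1.0)
objects = [{"direction": (0.05, 0.0, 0.999)},   # ~2.9 deg off the estimate
           {"direction": (0.7, 0.0, 0.7)}]      # ~45 deg off the estimate
cands = correction_candidates(gaze, objects, max_angle_deg=5.0)
# A hypothetical registration rule: store calibration data only when the
# angular range contains exactly one unambiguous object.
register = (len(cands) == 1)
```

Here the "parameter related to the at least one object" is simply the candidate count; the patent leaves the parameter open.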
IMAGE REGISTRATION METHOD AND ELECTRONIC DEVICE
An image registration method includes: acquiring a target image comprising a target object; inputting the target image to a preset network model, and outputting position information and rotation angle information of the target object; obtaining a reference image comprising the target object by querying a preset image database according to the position information and the rotation angle information; and performing image registration on the target image and the reference image to obtain a corresponding position of the target object of the target image in the reference image.
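The database-query step (finding a reference image from predicted position and rotation angle) could be approximated by quantizing the pose into a lookup key, so nearby poses hit the same stored reference. This is a toy sketch under assumed names and step sizes, not the patented query scheme.

```python
def pose_key(position, angle_deg, pos_step=10.0, ang_step=15.0):
    """Quantize an (x, y) position and a rotation angle into a key,
    so nearby poses map to the same reference image."""
    return (round(position[0] / pos_step),
            round(position[1] / pos_step),
            round(angle_deg / ang_step) % (360 // int(ang_step)))

# A toy "preset image database" keyed by quantized pose.
image_db = {pose_key((120.0, 40.0), 30.0): "ref_img_A",
            pose_key((300.0, 90.0), 90.0): "ref_img_B"}

def query_reference(position, angle_deg):
    """Look up the reference image for a pose; in the full method the
    position and angle would come from the network model's output."""
    return image_db.get(pose_key(position, angle_deg))
```

A query with a slightly perturbed pose still resolves to the same reference, which is the point of quantizing before the registration step.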
SYSTEMS AND METHODS FOR IMAGE PROCESSING BASED ON OPTIMAL TRANSPORT AND EPIPOLAR GEOMETRY
Systems and methods for image processing determine a registration map between a first image of a scene and a second image of the scene by solving an optimal transport (OT) problem. The registration map is produced by optimizing a cost function that minimizes a ground cost distance between the first and second images, modified with an epipolar geometry-based regularizer: a distance that quantifies the violation of an epipolar geometry constraint between corresponding points defined by the registration map. The ground cost compares features extracted from the first image with features extracted from the second image.
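The cost structure described here — a feature-matching ground cost plus a penalty on epipolar-constraint violations — can be sketched with a small entropic-OT (Sinkhorn) solver. This is an illustrative toy, not the patented system: the Sinkhorn solver, the `|x2ᵀ F x1|` violation measure, and the weight `lam` are standard textbook choices assumed here.

```python
import math

def sinkhorn(cost, n_iters=200, eps=0.05):
    """Entropic OT (Sinkhorn iterations) for uniform marginals
    on a small cost matrix; returns the transport plan."""
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-c / eps) for c in row] for row in cost]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(n_iters):
        u = [(1.0 / n) / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [(1.0 / m) / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

def epipolar_violation(p1, p2, F):
    """|x2^T F x1| for homogeneous 2D points: zero when the pair
    satisfies the epipolar constraint of fundamental matrix F."""
    x1 = (p1[0], p1[1], 1.0)
    x2 = (p2[0], p2[1], 1.0)
    Fx1 = [sum(F[r][c] * x1[c] for c in range(3)) for r in range(3)]
    return abs(sum(x2[r] * Fx1[r] for r in range(3)))

def regularized_cost(feat1, feat2, pts1, pts2, F, lam=1.0):
    """Ground cost on features plus the epipolar-geometry regularizer."""
    return [[math.dist(f1, f2) + lam * epipolar_violation(p1, p2, F)
             for f2, p2 in zip(feat2, pts2)]
            for f1, p1 in zip(feat1, pts1)]
```

For rectified stereo, F = [[0,0,0],[0,0,-1],[0,1,0]] makes the violation term `|y1 - y2|`, so the plan is pushed toward matches on the same scanline even when feature distances are ambiguous.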
CONSTRUCTION SITE DIGITAL FIELD BOOK FOR THREE-DIMENSIONAL SCANNERS
A method, system, and computer product for tracking scanning data acquired by a three-dimensional (3D) coordinate scanner are provided. The method includes storing a digital representation of an environment in memory of a mobile computing device. A first scan is performed with the 3D coordinate scanner in an area of the environment. A location of the first scan is determined on the digital representation. The first scan is registered with the digital representation. The location of the 3D coordinate scanner is indicated on the digital representation at the time of the first scan.
REGISTRATION CHAINING WITH INFORMATION TRANSFER
A registration chaining system provides information transfer along a chain of registrations of images of same or different modalities. A registration at each link is based on a shared feature readily distinguished in a pair of images. The information is transferred using the registration.
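Transferring information along a chain of registrations amounts to composing the per-link transforms. A minimal sketch, assuming each link's registration is reduced to a 2D scale-plus-translation transform (the patent is not limited to this form; rotation and nonrigid links are omitted for brevity):

```python
def compose(t_ab, t_bc):
    """Chain two 2D transforms given as (scale, tx, ty):
    the result applies t_ab first, then t_bc."""
    s1, x1, y1 = t_ab
    s2, x2, y2 = t_bc
    return (s1 * s2, s2 * x1 + x2, s2 * y1 + y2)

def apply_t(t, p):
    """Map a point through a (scale, tx, ty) transform."""
    s, tx, ty = t
    return (s * p[0] + tx, s * p[1] + ty)
```

An annotation placed in image A can then be carried to image C by `apply_t(compose(t_ab, t_bc), point)`, with each link having been estimated from a feature shared by its image pair.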
Multi-imaging mode image alignment
Methods and systems for aligning images of a specimen generated with different modes of an imaging subsystem are provided. One method includes separately aligning first and second images generated with first and second modes, respectively, to a design for the specimen. For a location of interest in the first image, the method includes generating a first difference image for the location of interest and the first mode and generating a second difference image for the location of interest and the second mode. The method also includes aligning the first and second difference images to each other and determining information for the location of interest from results of the aligning.
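The final step — aligning the two difference images to each other — is, at its core, an offset search between two small images. A brute-force sketch under assumed names, using sum-of-squared-differences over a small search window (the patent does not specify this particular criterion):

```python
def shift(img, dy, dx, fill=0):
    """Shift a 2D list-of-lists image by (dy, dx), filling vacated cells."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                out[yy][xx] = img[y][x]
    return out

def align_offset(a, b, search=2):
    """Find the (dy, dx) that best aligns image b to image a by
    minimizing the sum of squared differences over a search window."""
    h, w = len(a), len(a[0])
    best, best_off = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sb = shift(b, dy, dx)
            ssd = sum((a[y][x] - sb[y][x]) ** 2
                      for y in range(h) for x in range(w))
            if best is None or ssd < best:
                best, best_off = ssd, (dy, dx)
    return best_off
```

The recovered offset between the two mode-specific difference images is the "information for the location of interest" in the simplest reading of the abstract.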
System and method for predictive fusion
An image fusion system provides a predicted alignment between images of different modalities and synchronization of the alignment, once acquired. A spatial tracker detects and tracks a position and orientation of an imaging device within an environment. A predicted pose of an anatomical feature can be determined, based on previously acquired image data, with respect to a desired position and orientation of the imaging device. When the imaging device is moved into the desired position and orientation, a relationship is established between the pose of the anatomical feature in the image data and the pose of the anatomical feature imaged by the imaging device. Based on tracking information provided by the spatial tracker, the relationship is maintained even when the imaging device moves to various positions during a procedure.
METHOD OF DETECTING MEASUREMENT ERROR OF SEM EQUIPMENT AND METHOD OF ALIGNING SEM EQUIPMENT
Provided are a method of accurately detecting a measurement error of SEM equipment by comparing and aligning a design image with an SEM image, and a method of accurately aligning SEM equipment based on a detected measurement error. The method of detecting a measurement error of SEM equipment includes acquiring SEM images of a measurement target, performing pre-processing on the SEM images and the design images corresponding thereto, selecting training SEM images from among the SEM images, performing training by using the training SEM images and training design images to generate a conversion model between the SEM images and the design images, converting the SEM images into conversion design images by using the conversion model, extracting an alignment coordinate value by comparing and aligning the conversion design images with the design images, and determining a measurement error of the SEM equipment based on the alignment coordinate value.
Sensor alignment
Described herein are systems, methods, and non-transitory computer readable media for performing an alignment between a first vehicle sensor and a second vehicle sensor. Two-dimensional (2D) data indicative of a scene within an environment being traversed by a vehicle is captured by the first vehicle sensor, such as a camera or a collection of multiple cameras within a sensor assembly. A three-dimensional (3D) representation of the scene is constructed using the 2D data. 3D point cloud data also indicative of the scene is captured by the second vehicle sensor, which may be a LiDAR. A 3D point cloud representation of the scene is constructed based on the 3D point cloud data. A rigid transformation is determined between the 3D representation of the scene and the 3D point cloud representation of the scene, and the alignment between the sensors is performed based at least in part on the determined rigid transformation.
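Determining a rigid transformation between two point sets has a classical closed-form solution (the Kabsch/Procrustes alignment). As a compact illustration, here is the 2D analogue of that step, in pure Python with hypothetical names; the patent's 3D case follows the same centroid-then-rotation structure but uses an SVD for the rotation.

```python
import math

def rigid_2d(src, dst):
    """Closed-form 2D rigid transform (rotation theta + translation t)
    that best maps point set src onto dst (least squares)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # Rotation angle from centered cross- and dot-product sums.
    num = den = 0.0
    for (sx, sy), (dx_, dy_) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx_ - cdx, dy_ - cdy
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

def apply_rigid(theta, t, p):
    """Map a point through the recovered rigid transform."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

Given corresponding points from the camera-derived 3D representation and the LiDAR point cloud, the analogous 3D routine yields the extrinsic alignment between the two sensors.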
Subsurface formation imaging
A method includes generating a set of sub-images of a subsurface formation based on measurement values acquired by a plurality of sensors corresponding to one or more signals that have propagated through the subsurface formation, wherein each sub-image in the set corresponds to one of the plurality of sensors. The plurality of sensors are on a tool in a borehole, wherein each of the plurality of sensors is at a different spatial position with respect to the others. The method also includes generating a combined image by aligning the set of sub-images based on the measurement values, wherein the aligning of the set of sub-images is independent of acceleration of the tool during tool motion.
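Aligning sub-images from the measurement values themselves, rather than from tool motion, can be illustrated by correlating each sensor's trace against a reference trace and summing. A deliberately simplified 1D sketch with assumed names (real sub-images are 2D and the alignment criterion is not specified in the abstract):

```python
def best_shift(ref, trace, max_shift=3):
    """Integer shift that maximizes correlation of trace against ref."""
    n = len(ref)
    def score(s):
        return sum(ref[i] * trace[i - s]
                   for i in range(n) if 0 <= i - s < n)
    return max(range(-max_shift, max_shift + 1), key=score)

def combine(traces, max_shift=3):
    """Align each sensor's sub-image to the first by correlation and
    average them into a combined image; no accelerometer input is used."""
    ref = traces[0]
    n = len(ref)
    aligned = []
    for t in traces:
        s = best_shift(ref, t, max_shift)
        aligned.append([t[i - s] if 0 <= i - s < n else 0.0
                        for i in range(n)])
    return [sum(col) / len(traces) for col in zip(*aligned)]
```

Because the shift is estimated from the data, irregular tool motion between sensor positions does not enter the alignment, matching the abstract's acceleration-independence claim in spirit.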