Patent classifications
G06T2207/30244
System and method for predictive fusion
An image fusion system provides a predicted alignment between images of different modalities and synchronization of the alignment, once acquired. A spatial tracker detects and tracks a position and orientation of an imaging device within an environment. A predicted pose of an anatomical feature can be determined, based on previously acquired image data, with respect to a desired position and orientation of the imaging device. When the imaging device is moved into the desired position and orientation, a relationship is established between the pose of the anatomical feature in the image data and the pose of the anatomical feature imaged by the imaging device. Based on tracking information provided by the spatial tracker, the relationship is maintained even when the imaging device moves to various positions during a procedure.
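The maintained relationship described above can be pictured as a fixed feature-to-device transform composed with the live tracked device pose. A minimal sketch, assuming simple 2D homogeneous transforms and made-up poses (not the patented implementation):

```python
# Sketch: once a registration T_device_feature is established at the desired
# pose, the feature's pose in the world frame is maintained by composing it
# with the device pose reported by the spatial tracker.

def matmul3(a, b):
    """Multiply two 3x3 homogeneous transforms given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

# Registration captured once: feature sits at (2, 1) in the device frame.
T_device_feature = translation(2.0, 1.0)

# The spatial tracker later reports the device at (5, -3) in the world frame.
T_world_device = translation(5.0, -3.0)

# The relationship is maintained under device motion by composition.
T_world_feature = matmul3(T_world_device, T_device_feature)
print(T_world_feature[0][2], T_world_feature[1][2])  # prints 7.0 -2.0
```

As the tracker reports new device poses during a procedure, only `T_world_device` changes; the registration `T_device_feature` stays fixed.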
Realistic virtual/augmented/mixed reality viewing and interactions
The present invention discloses systems and methods for both viewing and interacting with a virtual reality (VR), an augmented reality (AR), or a mixed reality (MR). More specifically, the systems and methods allow the user to interact with aspects of such realities, including virtual items presented in such realities or environments, by manipulating a control device that has an inside-out camera mounted on board. The apparatus or system uses two distinct representations, including a reduced representation, in determining the pose of the control device, and uses these representations to compute an interactive pose portion of the control device to be used for interacting with the virtual item. The reduced representation is consonant with a constrained motion of the control device.
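One way to picture a reduced representation consonant with constrained motion: if the control device is known to move on a tabletop plane, only a subset of the full pose parameters is needed. A purely illustrative sketch (the parameterization and values are assumptions, not the patented scheme):

```python
# Sketch: a "reduced representation" for a control device whose motion is
# constrained to a plane keeps (x, y, yaw) out of the full 6-DoF pose
# (x, y, z, roll, pitch, yaw). Both forms are then available when computing
# the interactive pose portion.

def reduce_pose(full_pose):
    """Project a 6-DoF pose onto a planar-motion reduced representation."""
    x, y, z, roll, pitch, yaw = full_pose
    return (x, y, yaw)

full = (0.3, -0.1, 0.02, 0.01, -0.005, 1.2)  # hypothetical tracked pose
reduced = reduce_pose(full)
print(reduced)  # prints (0.3, -0.1, 1.2)
```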
Map display method, device, storage medium and terminal
The present disclosure discloses a map display method performed at a terminal. The method includes: obtaining a real scene image of a current location and target navigation data for navigation from the current location to a destination; determining, according to the current location and the target navigation data, virtual navigation prompt information to be overlaid on the real scene image; determining, according to a device configuration parameter of a target device capturing the real scene image, a first location at which the virtual navigation prompt information is overlaid on the real scene image; performing verification detection on the current device configuration parameter; and overlaying the virtual navigation prompt information at the first location when the current device configuration parameter passes the verification detection. According to the present disclosure, AR technology is applied to the navigation field, making map display manners more diverse.
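Determining where the prompt lands in the image can be sketched as a pinhole projection, if one assumes (illustratively, the abstract does not say) that the "device configuration parameter" includes camera intrinsics:

```python
# Sketch: a virtual navigation arrow at a 3D point in the camera frame is
# overlaid at its projected pixel location. The intrinsics (fx, fy, cx, cy)
# and the point are hypothetical values for a 640x480 image.

def project(point_cam, fx, fy, cx, cy):
    """Project a 3D camera-frame point to pixel coordinates."""
    x, y, z = point_cam
    if z <= 0:
        return None  # behind the camera: nothing to overlay
    return (fx * x / z + cx, fy * y / z + cy)

uv = project((0.5, 0.0, 2.0), fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(uv)  # first location for the overlay: (445.0, 240.0)
```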
METHOD, APPARATUS, SYSTEM, AND STORAGE MEDIUM FOR 3D RECONSTRUCTION
A method, device, computer system and computer readable storage medium for 3D reconstruction are provided. The method comprises: performing a 3D reconstruction of an original 2D image of a target object to generate an original 3D object corresponding to the original 2D image; selecting a complementary view of the target object from candidate views based on a reconstruction quality of the original 3D object at the candidate views; obtaining a complementary 2D image of the target object based on the complementary view; performing a 3D reconstruction of the complementary 2D image to generate a complementary 3D object corresponding to the complementary 2D image; and fusing the original 3D object and the complementary 3D object to obtain a 3D reconstruction result of the target object.
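The view-selection step above can be sketched as picking the candidate view where the original reconstruction is weakest. A minimal sketch; the names and the scoring scheme are illustrative assumptions, not the patented method:

```python
# Sketch: score each candidate view by the reconstruction quality of the
# original 3D object there, and choose the view with the lowest quality as
# the complementary view -- the extra image should help most there.

def select_complementary_view(quality_by_view):
    """Return the candidate view with the poorest reconstruction quality."""
    return min(quality_by_view, key=quality_by_view.get)

qualities = {"front": 0.91, "left": 0.78, "back": 0.42, "top": 0.66}
view = select_complementary_view(qualities)
print(view)  # prints "back"
```

A complementary 2D image captured (or rendered) from that view then feeds the second reconstruction pass before fusion.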
PURE POSE SOLUTION METHOD AND SYSTEM FOR MULTI-VIEW CAMERA POSE AND SCENE
A pure pose solution method and system for a multi-view camera pose and scene are provided. The method includes: a pure rotation recognition (PRR) step: performing PRR on all views and marking views having a pure rotation abnormality, to obtain marked views and non-marked views; a global translation linear (GTL) calculation step: selecting one of the non-marked views as a reference view, constructing a constraint t_r = 0, constructing a GTL constraint, solving a global translation (I), reconstructing a global translation of the marked views according to t_r and (I), and screening out a correct solution of the global translation; and a structure analytical reconstruction (SAR) step: performing analytical reconstruction on the coordinates of all 3D points according to a correct solution of a global pose. The method and system can greatly improve the computational efficiency and robustness of multi-view camera pose and scene structure reconstruction.
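The PRR and reference-selection steps can be pictured as follows. This is an illustrative sketch only (the detection criterion and data are assumptions; the patent's actual recognition test is not given in the abstract):

```python
# Sketch: views whose estimated relative translation is (near) zero are
# marked as pure-rotation abnormalities; one non-marked view then serves as
# the reference, with its global translation constrained to zero (t_r = 0).

def mark_pure_rotation(translations, eps=1e-3):
    """Split views into marked (near-zero translation) and non-marked."""
    marked, unmarked = [], []
    for view, t in translations.items():
        norm = sum(c * c for c in t) ** 0.5
        (marked if norm < eps else unmarked).append(view)
    return marked, unmarked

translations = {0: (0.0, 0.0, 0.0), 1: (0.4, 0.1, 0.0), 2: (1e-5, 0.0, 0.0)}
marked, unmarked = mark_pure_rotation(translations)
reference = unmarked[0]          # reference view for the constraint t_r = 0
print(marked, reference)         # prints [0, 2] 1
```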
IMAGE DISPLAY METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
An image display method includes: obtaining a homography matrix between a first target plane and a second target plane according to the first target plane and the second target plane; obtaining a target displacement according to the homography matrix and an attitude; obtaining a target pose according to the target displacement, the target pose including a position and an attitude of a camera coordinate system of a current frame image in a world coordinate system; and displaying an AR image according to the target pose.
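The homography obtained in the first step relates corresponding points on the two target planes. A minimal sketch of applying one (recovering the displacement and full pose from it is beyond this sketch); the matrix below is a made-up pure translation, not values from the patent:

```python
# Sketch: a 3x3 homography H maps homogeneous 2D points from the first
# target plane to the second; applying it and renormalizing by w is the
# core operation behind recovering the target displacement.

def apply_homography(H, p):
    """Map a 2D point through a 3x3 homography (nested-list matrix)."""
    x, y = p
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

H = [[1.0, 0.0, 3.0], [0.0, 1.0, 4.0], [0.0, 0.0, 1.0]]  # translate by (3, 4)
print(apply_homography(H, (2.0, 5.0)))  # prints (5.0, 9.0)
```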
A METHOD AND SYSTEM FOR GENERATING A COLORED TRIDIMENSIONAL MAP
A method for generating a colored tridimensional map of an environment surrounding a device, including the steps of: receiving a first stream including a plurality of N point cloud frames from a tridimensional sensor; receiving a second stream including a plurality of M images from a camera; generating a global tridimensional map by merging the plurality of N point cloud frames in a reference coordinate system; and determining the colored tridimensional map by assigning a calculated color to each tridimensional data point of the global tridimensional map, the calculated color being determined from color values of pixels of the plurality of M images.
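The coloring step can be sketched as projecting each map point into one of the M images and sampling the pixel there. A hedged sketch, assuming pinhole intrinsics and nearest-pixel sampling (the abstract does not specify the color calculation); the tiny 2x2 "image" is illustrative only:

```python
# Sketch: a 3D point expressed in a camera's frame is projected with assumed
# pinhole intrinsics, and the color of the nearest pixel under the
# projection is assigned to the point.

def color_point(point_cam, image, fx, fy, cx, cy):
    """Return the image color under the point's projection, or None."""
    x, y, z = point_cam
    if z <= 0:
        return None
    u = int(round(fx * x / z + cx))
    v = int(round(fy * y / z + cy))
    if 0 <= v < len(image) and 0 <= u < len(image[0]):
        return image[v][u]
    return None  # point falls outside this image; try another of the M images

image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]  # 2x2 RGB image
color = color_point((0.0, 0.0, 1.0), image, fx=1.0, fy=1.0, cx=1.0, cy=1.0)
print(color)  # prints (255, 255, 255)
```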
OWN-POSITION ESTIMATING DEVICE, MOVING BODY, OWN-POSITION ESTIMATING METHOD, AND OWN-POSITION ESTIMATING PROGRAM
An own-position estimating device estimates the own-position of a moving body by matching a feature extracted from an acquired image against a database in which position information and the feature are associated with each other in advance. The device includes an estimating unit that estimates the own-position of the moving body by matching the feature extracted by the extracting unit with the database, and a determination threshold value adjusting unit that adjusts a determination threshold value for extracting the feature. The determination threshold value adjusting unit acquires the database in a state in which the determination threshold value is adjusted, and adjusts the determination threshold value on the basis of the determination threshold value linked to each of the position information items in the database; the extracting unit then extracts the feature from the image by using the adjusted determination threshold value.
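The threshold-adjusting step can be pictured as a per-location lookup. An illustrative sketch only: the database contents and the nearest-position rule are assumptions, not the patented adjustment logic:

```python
# Sketch: the database links each stored position to the determination
# threshold value that worked there (e.g. lower in a dark corridor), and the
# extractor uses the threshold linked to the nearest stored position.

def adjusted_threshold(position, database):
    """Pick the threshold linked to the nearest database position."""
    nearest = min(database,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(p, position)))
    return database[nearest]

# (x, y) position -> threshold tuned for that place (hypothetical values).
database = {(0.0, 0.0): 30, (10.0, 0.0): 55, (10.0, 10.0): 40}
thr = adjusted_threshold((9.0, 1.0), database)
print(thr)  # prints 55: threshold used to extract features near (9, 1)
```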
UNDERWATER ORGANISM IMAGING AID SYSTEM, UNDERWATER ORGANISM IMAGING AID METHOD, AND STORAGE MEDIUM
An underwater organism imaging aid system according to an aspect of the present disclosure includes at least one memory configured to store instructions, and at least one processor configured to execute the instructions to: detect an underwater organism from an image acquired by a camera; determine a positional relationship between the detected underwater organism and the camera; and output, based on the positional relationship, auxiliary information for moving the camera in such a way that a side face of the underwater organism and an imaging face of the camera face each other.
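One minimal geometric reading of the auxiliary information: a side-on view is reached when the organism's heading is perpendicular to the camera's optical axis, so the output could be the remaining rotation angle. This is an assumed 2D geometry for illustration, not the disclosed method:

```python
# Sketch: the organism's heading and the camera's optical axis are 2D unit
# vectors; the auxiliary output here is the signed angle still to rotate the
# camera so that it views the organism's side face.
import math

def auxiliary_rotation(heading, optical_axis):
    """Angle (radians) to rotate the camera axis for a side-on view."""
    ah = math.atan2(heading[1], heading[0])
    ac = math.atan2(optical_axis[1], optical_axis[0])
    diff = (ah + math.pi / 2) - ac   # side-on: axis perpendicular to heading
    return math.atan2(math.sin(diff), math.cos(diff))  # wrap to (-pi, pi]

# Fish swimming along +x, camera already looking along +y: side-on, no move.
print(auxiliary_rotation((1.0, 0.0), (0.0, 1.0)))  # prints 0.0
```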
PUPIL DETECTION DEVICE, LINE-OF-SIGHT DETECTION DEVICE, OCCUPANT MONITORING SYSTEM, AND PUPIL DETECTION METHOD
A pupil detection device includes an eye area image obtaining unit that obtains image data representing an eye area image in a captured image obtained by a camera; a luminance gradient calculating unit that calculates luminance gradient vectors corresponding to respective individual image units in the eye area image, using the image data; an evaluation value calculating unit that calculates evaluation values corresponding to the respective individual image units, using the luminance gradient vectors; and a pupil location detecting unit that detects a pupil location in the eye area image, using the evaluation values.
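The evaluation step can be sketched in the spirit of gradient-based pupil localization (e.g. Timm and Barth's means-of-gradients objective, named here as an analogy, not as the patented formula): a candidate center scores high when unit displacement vectors to the other pixels align with the luminance gradient vectors there. The synthetic radial gradient field below is illustrative data only:

```python
# Sketch: evaluation value = mean squared dot product of displacement
# directions (from the candidate center to each pixel) with the unit
# luminance gradients at those pixels; the pupil location is the candidate
# with the highest evaluation value.

def evaluation_value(center, gradients):
    """Mean squared alignment of displacement directions with gradients."""
    cx, cy = center
    total, count = 0.0, 0
    for (x, y), (gx, gy) in gradients.items():
        dx, dy = x - cx, y - cy
        norm = (dx * dx + dy * dy) ** 0.5
        if norm == 0:
            continue
        dot = (dx / norm) * gx + (dy / norm) * gy
        total += dot * dot
        count += 1
    return total / count if count else 0.0

# Unit gradients pointing radially away from a true pupil center at (2, 2),
# as around a dark pupil on a brighter iris.
true = (2, 2)
gradients = {}
for x in range(5):
    for y in range(5):
        dx, dy = x - true[0], y - true[1]
        n = (dx * dx + dy * dy) ** 0.5
        if n > 0:
            gradients[(x, y)] = (dx / n, dy / n)

candidates = [(0, 0), (2, 2), (4, 1)]
best = max(candidates, key=lambda c: evaluation_value(c, gradients))
print(best)  # prints (2, 2)
```

At the true center every displacement direction matches the gradient exactly, so the evaluation value reaches its maximum of 1.0 there.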