Patent classifications
G06T2215/08
IMAGE PROCESSING APPARATUS AND METHOD
An image processing apparatus and method are provided. The image processing apparatus acquires a target image including a depth image of a scene, determines three-dimensional (3D) point cloud data corresponding to the depth image based on the depth image, and, based on the 3D point cloud data, extracts an object included in the scene to acquire an object extraction result.
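The depth-image-to-point-cloud step described above is conventionally done by back-projecting each pixel through a camera model. A minimal sketch, assuming a pinhole camera whose intrinsics (focal lengths `fx`, `fy` and principal point `cx`, `cy`) are known — these parameters are illustrative and not specified in the abstract:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3D point cloud using an
    assumed pinhole camera model with intrinsics fx, fy, cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # One (x, y, z) point per pixel, flattened to shape (h*w, 3).
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Object extraction would then operate on the resulting (h*w, 3) array, e.g. by clustering or plane removal.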
System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view
The present invention relates to a system and method for capturing video of a real-world scene over a field of view that may exceed the field of view of a user, manipulating the captured video, and then stereoscopically displaying the manipulated image to the user in a head mounted display to create a virtual environment having length, width, and depth in the image. By capturing and manipulating video for a field of view that exceeds the field of view of the user, the system and method can quickly respond to movement by the user to update the display allowing the user to look and pan around, i.e., navigate, inside the three-dimensional virtual environment.
PROJECTION OF 3D IMAGE DATA ONTO A FLAT SURFACE
In a method for transformation of a 3D representation of a part of a body, a central axis of the part of the body to be represented is defined, and a transformation surface that is axis-symmetrical to the central axis is defined. The transformation surface is transformed into a plane and a transformation of the voxels representing the part of the body oriented to the transformed transformation surface is carried out.
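For the simplest axis-symmetric transformation surface, a cylinder around the central axis, flattening into a plane amounts to unrolling: each (angle, axial position) pair on the cylinder becomes one pixel of the plane. A minimal sketch, assuming the central axis is the z axis through the volume center and using nearest-neighbor sampling (both simplifying assumptions not fixed by the abstract):

```python
import numpy as np

def unroll_cylinder(volume, radius, n_theta):
    """Flatten a cylindrical transformation surface of the given radius,
    axis-symmetric to the z axis, into a plane by sampling the voxel
    volume at n_theta evenly spaced angles (nearest-neighbor)."""
    zs = np.arange(volume.shape[2])
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    xs = np.round(radius * np.cos(thetas)).astype(int) + volume.shape[0] // 2
    ys = np.round(radius * np.sin(thetas)).astype(int) + volume.shape[1] // 2
    # Rows of the unrolled image are angles, columns are axial positions.
    return volume[xs[:, None], ys[:, None], zs[None, :]]
```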
GENERATING EQUIRECTANGULAR IMAGERY OF A 3D VIRTUAL ENVIRONMENT
Examples are disclosed that relate to generating an equirectangular image of a three-dimensional (3D) virtual environment in a computer-automated fashion. In one example, a 3D virtual position of a virtual camera in a 3D virtual environment is specified. For each of a plurality of different yaw angles rotated about an axis extending through the 3D virtual position, the virtual camera is used to acquire an image strip of pixels parallel to the axis of rotation. Image strips of pixels of the 3D environment acquired at the different yaw angles are assembled to form an equirectangular image of the 3D virtual environment from the specified 3D virtual position.
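The strip-assembly step above can be sketched in a few lines: sweep the yaw angle once around the axis, render one vertical strip per angle, and place the strips left to right. The rendering callback `render_strip(yaw)` is a hypothetical stand-in for the virtual-camera acquisition, and each strip is taken to be one pixel wide:

```python
import numpy as np

def assemble_equirectangular(render_strip, width, height):
    """Assemble an equirectangular image from vertical strips, one per
    yaw angle. render_strip(yaw) is an assumed callback returning a
    (height,) column rendered by the virtual camera at that yaw."""
    yaws = np.linspace(-np.pi, np.pi, width, endpoint=False)
    columns = [np.asarray(render_strip(yaw)) for yaw in yaws]
    # Stack columns left to right: yaw maps linearly to the x axis.
    return np.stack(columns, axis=1)
```

In an equirectangular image the horizontal axis is linear in yaw (longitude), which is why simple column stacking suffices.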
ANALYZING AORTIC VALVE CALCIFICATION
A system and a method are provided for analyzing an image of an aortic valve structure to enable assessment of aortic valve calcifications. The system comprises an image interface for obtaining an image of an aortic valve structure, the aortic valve structure comprising aortic valve leaflets and an aortic bulbus. The system further comprises a segmentation subsystem for segmenting the aortic valve structure in the image to obtain a segmentation of the aortic valve structure. The system further comprises an identification subsystem for identifying a calcification on the aortic valve leaflets by analyzing the image of the aortic valve structure. The system further comprises an analysis subsystem configured for determining a centerline of the aortic bulbus by analyzing the segmentation of the aortic valve structure, and for projecting the calcification from the centerline of the aortic bulbus onto the aortic bulbus, thereby obtaining a projection indicating a location of the calcification as projected onto the aortic bulbus. The system further comprises an output unit for generating data representing the projection. Provided information on the accurate location of calcifications after a valve replacement may be advantageously used, for example, to effectively analyze the risk of paravalvular leakages of Transcatheter aortic valve implantation (TAVI) interventions for assessing the suitability of a patient for TAVI procedure.
View direction determination
Among other things, one or more techniques and/or systems are provided for defining a view direction for a texture image used to texture a geometry. That is, a geometry may represent a multi-dimensional surface of a scene, such as a city. The geometry may be textured using one or more texture images depicting the scene from various view directions. Because more than one texture image may contribute to texturing portions of the geometry, a view direction for a texture image may be selectively defined based upon a coverage metric associated with an amount of non-textured geometry pixels that are textured by the texture image along the view direction. In an example, a texture image may be defined according to a customized configuration, such as a spherical configuration, a cylindrical configuration, etc. In this way, redundant texturing of the geometry may be mitigated based upon the selectively identified view direction(s).
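The coverage metric described above can be read as: of the geometry pixels not yet textured, what fraction would this texture image cover along the candidate view direction? A minimal sketch over boolean masks; the names are illustrative, not from the patent:

```python
import numpy as np

def coverage_metric(textured_mask, candidate_mask):
    """Fraction of currently non-textured geometry pixels that a
    candidate texture image (along one view direction) would cover."""
    newly_covered = candidate_mask & ~textured_mask
    untextured = int((~textured_mask).sum())
    # Avoid division by zero when everything is already textured.
    return newly_covered.sum() / untextured if untextured else 0.0
```

A view direction would then be kept only when its metric exceeds some threshold, mitigating the redundant texturing the abstract mentions.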
Transform method for rendering post-rotation panoramic images
A transform method applied in an image processing system is disclosed, comprising: when the image capture module is rotated, respectively performing inverse rotation operations over post-rotation space coordinates of three first vertices from an integral vertex stream according to rotation angles of the image capture module to obtain their pre-rotation space coordinates; calculating pre-rotation longitudes and latitudes of the three first vertices according to their pre-rotation space coordinates; selecting one of a pre-rotation panoramic image, a south polar image and a north polar image as a texture image to determine a texture ID for the three first vertices according to their pre-rotation latitudes; and calculating pre-rotation texture coordinates according to the texture ID and the pre-rotation longitudes and latitudes to form a first complete data structure for each of the three first vertices.
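The first two steps above, inverse rotation followed by longitude/latitude computation, can be sketched for a single vertex. This sketch assumes the capture module is rotated only in yaw about the z axis (the patent handles general rotation angles) and normalizes the vertex onto the panoramic sphere:

```python
import numpy as np

def vertex_to_lonlat(vertex, yaw):
    """Undo a yaw rotation of the capture module (other rotation axes
    omitted for brevity), then convert the pre-rotation space coordinate
    of the vertex to longitude/latitude on the panoramic sphere."""
    c, s = np.cos(-yaw), np.sin(-yaw)
    inv_rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    x, y, z = inv_rot @ np.asarray(vertex, dtype=float)
    lon = np.arctan2(y, x)
    lat = np.arcsin(z / np.linalg.norm([x, y, z]))
    return lon, lat
```

The latitude value would then drive the texture-ID selection (polar image near the poles, panoramic image elsewhere).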
JOINT IMAGE UNFOLDING APPARATUS, JOINT IMAGE UNFOLDING METHOD, AND JOINT IMAGE UNFOLDING PROGRAM
A joint image unfolding apparatus, a joint image unfolding method, and a non-transitory computer readable recording medium storing a joint image unfolding program are provided to make it possible to check information regarding the entire cartilage in a joint with high accuracy. An image obtaining unit (21) obtains a three-dimensional image of a joint having cartilage. An unfolding unit (23) unfolds the cartilage included in the three-dimensional image with reference to a specific reference axis in the joint to generate an unfolded image.