Patent classifications
G06T2215/08
Method and Apparatus for Mapping Virtual-Reality Image to a Segmented Sphere Projection Format
Methods and apparatus of processing spherical images related to segmented sphere projection (SSP) are disclosed. According to one method, a North Pole region of the spherical image is projected to a first circular image and a South Pole region of the spherical image is projected to a second circular image using a mapping process selected from a mapping group comprising equal-area mapping, non-uniform mapping and cubemap mapping. Methods and apparatus of processing spherical images related to rotated sphere projection (RSP) are also disclosed. According to this method, the spherical image is projected into a first part of rotated sphere projection corresponding to a region of the spherical image and a second part of rotated sphere projection corresponding to a remaining part of the spherical image using equal-area mapping.
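The abstract does not fix a particular equal-area mapping, but one plausible reading is a Lambert azimuthal equal-area projection of each pole cap onto a disk. The sketch below assumes a North Pole cap of colatitude `theta0` mapped to a circular image of radius `radius`; the function name and parameters are illustrative, not from the patent.

```python
import math

def pole_to_circle_equal_area(theta, phi, theta0=math.pi / 4, radius=1.0):
    """Map a point in the North Pole cap (colatitude theta <= theta0,
    longitude phi) onto a circular image, using a Lambert azimuthal
    equal-area projection as one possible 'equal-area mapping'.

    r is proportional to sin(theta / 2), normalized so the cap
    boundary theta0 lands exactly on the edge of the circle.
    """
    r = radius * math.sin(theta / 2.0) / math.sin(theta0 / 2.0)
    return r * math.cos(phi), r * math.sin(phi)
```

The pole center (theta = 0) maps to the disk center, and the cap boundary maps to the disk boundary; the South Pole cap would be handled symmetrically.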
Image processing apparatus and method
An image processing apparatus and method are provided. The image processing apparatus acquires a target image including a depth image of a scene, determines three-dimensional (3D) point cloud data corresponding to the depth image, and, based on the 3D point cloud data, extracts an object included in the scene to acquire an object extraction result.
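The depth-image-to-point-cloud step in the abstract is conventionally done by back-projecting each pixel through a pinhole camera model. A minimal sketch, assuming known camera intrinsics `fx`, `fy`, `cx`, `cy` (not specified in the abstract) and depth given as nested lists in metres:

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (rows of depth values in metres)
    into 3D camera-frame points with the standard pinhole model.
    fx, fy, cx, cy are assumed camera intrinsics."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip invalid or missing depth
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```

Object extraction would then operate on `points`, e.g. by clustering or plane removal; those later stages are not sketched here.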
Converting imagery and charts to polar projection
Embodiments relate to converting imagery to a polar projection. Initially, a map request that specifies the polar projection for a geographic area is obtained. The geographic area is divided into a number of image regions. A first source image is obtained for a first image region, where the first source image is at a first target resolution, and a second source image is obtained for a second image region, where the second source image is at a second target resolution that is determined based on a geographic location of the second image region. The first source image and the second source image are projected into the polar projection to obtain a single output image. At this stage, a polar coordinate system that corresponds to the polar projection is used to render the single output image in a spatial map.
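The abstract does not name a specific polar projection; a common choice is the north-polar azimuthal equidistant projection, where distance from the pole is proportional to colatitude and the angle is the longitude. A hedged sketch of that coordinate mapping (function name and `R` scale are illustrative):

```python
import math

def latlon_to_polar(lat_deg, lon_deg, R=1.0):
    """Project (lat, lon) in degrees into north-polar azimuthal
    equidistant coordinates: radius proportional to colatitude,
    angle equal to longitude. One common 'polar projection'."""
    colat = math.radians(90.0 - lat_deg)  # 0 at the North Pole
    theta = math.radians(lon_deg)
    return R * colat * math.cos(theta), R * colat * math.sin(theta)
```

Projecting a source image then amounts to evaluating this mapping (or its inverse) per pixel before compositing the regions into the single output image.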
Image forming device executing drawing process and recording medium
Provided are an image forming device and a recording medium that can execute plural drawing processes in parallel without using different data lists for the drawing processes. The image forming device can include plural drawing process sections operating simultaneously, each of which renders lines one by one based on a display list. The image forming device causes the plural drawing process sections, each of which renders a specific region that includes plural lines within a page, to render different lines while skipping a specific number of lines each time, the specific number being determined by subtracting 1 from the total number of the plural drawing process sections that render the specific regions.
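One plausible reading of the line-skipping scheme is a round-robin interleave: with N drawing sections, each section renders every N-th line (skipping N - 1 lines between its own), so all sections work in parallel from the same display list. A minimal sketch under that assumption:

```python
def lines_for_section(section_index, num_sections, total_lines):
    """Assign page lines to one drawing section by round-robin
    interleave: section i renders line l when l % N == i, i.e. it
    skips N - 1 lines between each of its own lines.
    (An interpretation of the abstract, not the patent's exact rule.)"""
    return [l for l in range(total_lines)
            if l % num_sections == section_index]
```

Because every line belongs to exactly one section, the sections jointly cover the page with no shared state beyond the read-only display list.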
Analyzing aortic valve calcification
A system and a method are provided for analyzing an image of an aortic valve structure to enable assessment of aortic valve calcifications. The system comprises an image interface for obtaining an image of an aortic valve structure, the aortic valve structure comprising aortic valve leaflets and an aortic bulbus. The system further comprises a segmentation subsystem for segmenting the aortic valve structure in the image to obtain a segmentation of the aortic valve structure. The system further comprises an identification subsystem for identifying a calcification on the aortic valve leaflets by analyzing the image of the aortic valve structure. The system further comprises an analysis subsystem configured for determining a centerline of the aortic bulbus by analyzing the segmentation of the aortic valve structure, and for projecting the calcification from the centerline of the aortic bulbus onto the aortic bulbus, thereby obtaining a projection indicating a location of the calcification as projected onto the aortic bulbus. The system further comprises an output unit for generating data representing the projection. The provided information on the accurate location of calcifications after a valve replacement may be used, for example, to analyze the risk of paravalvular leakage in transcatheter aortic valve implantation (TAVI) interventions and to assess the suitability of a patient for the TAVI procedure.
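The projection step can be pictured geometrically: cast a ray from the nearest centerline point through the calcification and intersect it with the bulbus wall. A much-simplified sketch, assuming the wall is locally a sphere of known radius around the centerline point (the real system would intersect with the segmented surface instead):

```python
import math

def project_onto_bulbus(calc_point, centerline_point, bulbus_radius):
    """Project a calcification point radially from a centerline point
    onto an assumed locally spherical bulbus wall of the given radius.
    A simplified stand-in for projecting onto the segmented surface."""
    dx = [p - c for p, c in zip(calc_point, centerline_point)]
    norm = math.sqrt(sum(d * d for d in dx))
    if norm == 0.0:
        raise ValueError("calcification coincides with the centerline")
    scale = bulbus_radius / norm
    return tuple(c + d * scale for c, d in zip(centerline_point, dx))
```

The resulting surface location is what the output unit would encode as "the calcification as projected onto the aortic bulbus".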
SYSTEM AND METHOD FOR CREATING A NAVIGABLE, THREE-DIMENSIONAL VIRTUAL REALITY ENVIRONMENT HAVING ULTRA-WIDE FIELD OF VIEW
The present invention relates to a system and method for capturing video of a real-world scene over a field of view that may exceed the field of view of a user, manipulating the captured video, and then stereoscopically displaying the manipulated image to the user in a head mounted display to create a virtual environment having length, width, and depth in the image. By capturing and manipulating video for a field of view that exceeds the field of view of the user, the system and method can quickly respond to movement by the user to update the display allowing the user to look and pan around, i.e., navigate, inside the three-dimensional virtual environment.
Redundant pixel mitigation
Among other things, one or more techniques and/or systems are provided for mitigating redundant pixel texture contribution for texturing a geometry. That is, the geometry may represent a multidimensional surface of a scene, such as a city. The geometry may be textured using one or more texture images (e.g., an image comprising color values and/or depth values) depicting the scene from various view directions (e.g., a top-down view, an oblique view, etc.). Because more than one texture image may contribute to texturing a pixel of the geometry (e.g., due to overlapping views of the scene), redundant pixel texture contribution may arise. Accordingly, a redundant textured pixel within a texture image may be knocked out (e.g., in-painted) from the texture image to generate a modified texture image that may be relatively efficient to store and/or stream to a client due to enhanced compression of the modified texture image.
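The knockout idea can be sketched with a first-come-first-kept rule: once a geometry pixel has a texture contribution, the same pixel in any later texture image is redundant and is blanked so the image compresses better. The sketch below zeroes redundant pixels rather than in-painting them, and uses parallel nested lists for images and per-pixel geometry coverage; both simplifications are assumptions, not the patent's method.

```python
def knock_out_redundant(texture_images, coverage_maps):
    """For each geometry pixel covered by more than one texture image,
    keep the first covering image's contribution and knock out (zero)
    the pixel in later images. texture_images and coverage_maps are
    parallel 2D lists; coverage entries are geometry-pixel ids or None."""
    claimed = set()  # geometry pixels that already have a contributor
    out = []
    for img, cov in zip(texture_images, coverage_maps):
        new_img = [row[:] for row in img]  # leave the input untouched
        for r, row in enumerate(cov):
            for c, geom_pixel in enumerate(row):
                if geom_pixel is None:
                    continue  # this texel does not cover the geometry
                if geom_pixel in claimed:
                    new_img[r][c] = 0  # redundant contribution: knock out
                else:
                    claimed.add(geom_pixel)
        out.append(new_img)
    return out
```

In practice the knocked-out regions would be in-painted with smooth filler (as the abstract suggests) so the modified texture images compress well while the kept contributions are unchanged.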