Patent classifications
G06T2215/06
Determining lighting information for rendering a scene in computer graphics using illumination point sampling
Rendering system combines point sampling and volume sampling operations to produce rendering outputs. For example, to determine color information for a surface location in a 3-D scene, one or more point sampling operations are conducted in a volume around the surface location, and one or more sampling operations of volumetric light transport data are performed farther from the surface location. A transition zone between point sampling and volume sampling can be provided, in which both point and volume sampling operations are conducted. Data obtained from point and volume sampling operations can be blended in determining color information for the surface location. For example, point samples are obtained by tracing a ray for each point sample to identify an intersection between the ray and another surface, which is then shaded, and volume samples are obtained from nested 3-D grids of volume elements expressing light transport data at different levels of granularity.
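The blending in the transition zone can be pictured with a small sketch. This is an illustration only, not the patent's implementation: the function name, the linear weight, and the `near`/`far` zone boundaries are assumptions.

```python
# Hypothetical sketch of blending point-sampled and volume-sampled
# lighting in a transition zone. Inside `near` only the point sample is
# used; beyond `far` only the volume sample; in between, the two
# estimates are linearly blended by distance from the surface location.

def blend_lighting(distance, near, far, point_sample, volume_sample):
    """Blend two lighting estimates for a surface location."""
    if distance <= near:
        w = 0.0
    elif distance >= far:
        w = 1.0
    else:
        w = (distance - near) / (far - near)
    return (1.0 - w) * point_sample + w * volume_sample

# Halfway through the transition zone the two samples average out.
print(blend_lighting(1.5, 1.0, 2.0, 0.2, 0.8))  # 0.5
```

A smoother falloff (e.g. smoothstep) could replace the linear weight without changing the overall scheme.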
ALLOCATION OF PRIMITIVES TO PRIMITIVE BLOCKS
An application sends primitives to a graphics processing system so that an image of a 3D scene can be rendered. The primitives are placed into primitive blocks for storage and retrieval from a parameter memory. Rather than simply placing the first primitives into a primitive block until the primitive block is full and then placing further primitives into the next primitive block, multiple primitive blocks can be open such that a primitive block allocation module can allocate primitives to one of the open primitive blocks to thereby sort the primitives into primitive blocks according to their spatial positions. By grouping primitives together into primitive blocks in accordance with their spatial positions, the performance of a rasterization module can be improved. For example, in a tile-based rendering system this may mean that fewer primitive blocks need to be fetched by a hidden surface removal module in order to process a tile.
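The idea of keeping several primitive blocks open and steering each incoming primitive to a spatially close block can be sketched as follows. This is a minimal illustration, not the patent's allocation module: the centroid-distance heuristic, the squared-distance threshold, and the capacity limits are all assumptions.

```python
# Hypothetical sketch: allocate each primitive (here reduced to a 2-D
# centroid) to the nearest non-full open block, opening a new block when
# the nearest one is too far away and the open-block budget allows it.

def allocate(primitive, open_blocks, capacity=8, max_open=4, far_sq=100.0):
    """Append `primitive` to a spatially suitable block in `open_blocks`."""
    best, best_d = None, float("inf")
    for block in open_blocks:
        if len(block) >= capacity:
            continue
        cx = sum(p[0] for p in block) / len(block)
        cy = sum(p[1] for p in block) / len(block)
        d = (primitive[0] - cx) ** 2 + (primitive[1] - cy) ** 2
        if d < best_d:
            best, best_d = block, d
    if best is None or (best_d > far_sq and len(open_blocks) < max_open):
        best = []                      # open a fresh primitive block
        open_blocks.append(best)
    best.append(primitive)

blocks = []
for prim in [(0, 0), (1, 0), (50, 50), (51, 50)]:
    allocate(prim, blocks)
print(len(blocks))  # 2 -- the two spatial clusters end up in separate blocks
```

Grouping by position in this way means a tile-based rasterizer later touches fewer blocks per tile, which is the performance benefit the abstract describes.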
Procedural world generation using tertiary data
Procedural world generation using tertiary data is described. In an example, a computing device can receive road network data associated with a real environment and a road mesh associated with the real environment. The computing device can associate the road network data with the road mesh to generate a simulated environment. Additionally, the computing device can associate supplemental data with the road network data and the road mesh to enhance the simulated environment (e.g., supplementing information otherwise unavailable to the sensor data due to an occlusion). The computing device can output the simulated environment for at least one of testing, validating, or training an algorithm used by an autonomous robotic computing device for at least one of navigating, planning, or decision making.
HOLOGRAM STREAMING MACHINE
Example systems and methods perform streaming of volumetric media and accommodate high user interactivity. A device is configured to access and render streaming holograms and may implement a window as a buffer. In addition, a hologram streaming machine can be configured to stream full or partial holograms in the form of 3D blocks, where different 3D blocks represent a same portion of hologram but may have different resolutions depending on where the user is positioned and looking relative to each 3D block, thus saving network capacity by focusing on what the user is looking at. Since many 3D blocks may be empty much of the time, may be occluded or far away from the user's viewing position, or may be numerous within a 3D space, the device can be configured to request 3D blocks based on their utility, which may be calculated based on bitrate, visibility, or distance.
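Requesting 3D blocks "based on their utility, which may be calculated based on bitrate, visibility, or distance" can be sketched with an assumed scoring function. The weighting below is an illustration only; the patent does not specify this formula, and all names are hypothetical.

```python
# Hypothetical utility score for streamable 3D blocks: favor blocks the
# user can see, that are close to the viewer, and that are cheap to
# stream. Occluded or empty blocks score zero and need not be fetched.

def block_utility(bitrate_kbps, visible_fraction, distance_m):
    if visible_fraction <= 0.0:
        return 0.0
    closeness = 1.0 / (1.0 + distance_m)
    cost = 1.0 / (1.0 + bitrate_kbps / 1000.0)
    return visible_fraction * closeness * cost

blocks = [
    ("near_visible", 500, 0.9, 2.0),
    ("far_visible", 500, 0.9, 40.0),
    ("occluded", 500, 0.0, 2.0),
]
ranked = sorted(blocks, key=lambda b: block_utility(*b[1:]), reverse=True)
print([name for name, *_ in ranked])  # near_visible first, occluded last
```

A streaming client would then spend its bandwidth budget on the highest-utility blocks first, which matches the abstract's goal of focusing network capacity on what the user is looking at.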
Rib developed image generation apparatus using a core line, method, and program
A rib extraction unit extracts a rib from a three-dimensional image, a core line setting unit sets a core line of the rib, and a specific axis direction determination unit determines a specific axis direction in a cross section crossing the core line of the rib. An image generation unit moves a position of the core line in the specific axis direction according to an instruction to move the core line in the specific axis direction and generates a developed image of at least one rib based on a cross section along the core line at the moved position.
Method for capturing images of a preferably structured surface of an object and device for image capture
The invention relates to a method for capturing images of a preferably structured surface of an object, using at least one line-scan camera for scanning the surface, wherein the surface is illuminated in a structured manner and wherein, for reconstruction of the surface, a time-oriented and/or spatial evaluation of the acquired images is performed, optionally taking into account a relative movement between the line-scan camera and the surface. The method is carried out by a device for capturing images of a preferably structured surface of an object.
GEOMETRY BUFFER SLICE TOOL
A method for visualizing a three-dimensional volume for use in a virtual reality environment is performed by uploading two-dimensional images for evaluation, creating planar depictions of the two-dimensional images, and using thresholds to determine if voxels should be drawn. A voxel volume is created from the planar depictions and voxels. A user defines a plane to be used for slicing the voxel volume, and sets values of the plane location and plane normal. The slice plane is placed within the voxel volume and defines a desired remaining portion of the voxel volume to be displayed. All but the desired remaining portion of the voxel volume is discarded and the remaining portion is displayed.
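The slicing step reduces to a plane-side test: a voxel survives if it lies on the normal-facing side of the user-defined plane. The sketch below is an assumed illustration of that test, not the tool's actual implementation.

```python
# Hypothetical sketch: keep only the voxels on the normal-facing side of
# a plane given by a point and a normal vector.

def slice_voxels(voxels, plane_point, plane_normal):
    """Return the voxels with non-negative signed distance from the plane."""
    px, py, pz = plane_point
    nx, ny, nz = plane_normal
    kept = []
    for (x, y, z) in voxels:
        # signed distance of the voxel centre from the plane
        d = (x - px) * nx + (y - py) * ny + (z - pz) * nz
        if d >= 0.0:
            kept.append((x, y, z))
    return kept

volume = [(x, y, z) for x in range(3) for y in range(3) for z in range(3)]
half = slice_voxels(volume, (1, 0, 0), (1, 0, 0))  # keep voxels with x >= 1
print(len(half))  # 18 of the 27 voxels remain
```

Changing the plane location or flipping the normal selects a different remaining portion, matching the user-controlled `plane location` and `plane normal` values in the abstract.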
2D IMAGE CONSTRUCTION USING 3D DATA
A 2D image is constructed from constituent 2D images that show different views of the same object. Construction is performed by taking image tiles, referred to as tonal triangles, from the constituent 2D images and combining them using 3D data for the object. The 3D data define a wireframe model comprising triangles, called contour triangles. Two tonal triangles are combined based on neighbor relationships between the contour triangles that correspond to those two tonal triangles. Additional tonal triangles may be combined as desired, until the 2D constructed image is of a size that shows the subject of interest. Compared to conventional processes for stitching and montaging, the process generates a 2D constructed image that is a more accurate presentation of the true area, shape, and/or size of the subject.
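One way to picture "combining tonal triangles based on neighbor relationships between contour triangles" is a breadth-first walk over the contour-triangle adjacency, so each tonal triangle is attached next to one already placed. This traversal order is an assumption for illustration; the patent does not prescribe it.

```python
# Hypothetical sketch: derive a combining order for tonal triangles from
# the neighbor relationships of their corresponding contour triangles.

from collections import deque

def combine_order(neighbors, seed):
    """neighbors: dict mapping a contour-triangle id to its adjacent ids.
    Returns triangle ids in the order they would be combined."""
    order, seen = [], {seed}
    queue = deque([seed])
    while queue:
        tri = queue.popleft()
        order.append(tri)
        for nb in neighbors.get(tri, []):
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return order

# Four contour triangles forming a strip: 0-1-2-3
strip = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(combine_order(strip, 1))  # [1, 0, 2, 3]
```

The walk can stop once the constructed image is large enough to show the subject of interest, mirroring the "additional tonal triangles may be combined as desired" step.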
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
An information processing apparatus is configured to paste a full-spherical panoramic image along an inner wall of a virtual three-dimensional sphere; calculate an arrangement position for arranging a planar image closer to the center point of the virtual three-dimensional sphere than the inner wall, in such an orientation that a line-of-sight direction from the center point to the inner wall and a perpendicular line of the planar image are parallel to each other, the planar image being obtained by pasting an embedding image, to be embedded in the full-spherical panoramic image, onto a two-dimensional plane; and display a display image on a display unit. The display image is a two-dimensional image viewed from the center point in the line-of-sight direction in a state in which the full-spherical panoramic image is pasted along the inner wall of the virtual three-dimensional sphere and the planar image is arranged at the arrangement position.
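The geometric constraint above (planar image between the center and the inner wall, with its perpendicular parallel to the line of sight) reduces to simple vector math. The sketch below uses assumed conventions; the depth fraction and the convention that the normal points along the viewing direction are illustrative choices, not the apparatus's specification.

```python
# Hypothetical sketch: place the planar image partway along the viewing
# ray from the sphere centre, oriented so its normal is parallel to the
# line-of-sight direction.

import math

def arrange_planar_image(center, look_dir, sphere_radius, depth_fraction=0.5):
    """Return (position, normal) for the planar image inside the sphere."""
    length = math.sqrt(sum(c * c for c in look_dir))
    unit = tuple(c / length for c in look_dir)
    position = tuple(center[i] + unit[i] * sphere_radius * depth_fraction
                     for i in range(3))
    normal = unit   # perpendicular of the plane, parallel to line of sight
    return position, normal

pos, n = arrange_planar_image((0, 0, 0), (0, 0, 2), 10.0)
print(pos, n)  # (0.0, 0.0, 5.0) (0.0, 0.0, 1.0)
```

Because `depth_fraction` is below 1, the image always sits closer to the center point than the inner wall, as the abstract requires.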