Patent classifications
G06T15/503
METHOD AND APPARATUS FOR INTERACTIVE VOLUMETRIC VISUALIZATION
Disclosed is a method and apparatus for enabling interactive visualization of three-dimensional volumetric models. The method involves maintaining three-dimensional volumetric models represented by explicit surfaces. In accordance with an embodiment of the disclosure, the method also involves, for a current point of view, generating and displaying images of the volumetric models in a manner that clarifies internal structures by accounting for light attenuation inside the volumetric models as a function of spatial positions of the explicit surfaces. The method also involves, upon receiving user input that adjusts a display variable, repeating the generating and the displaying of the images in accordance with the display variable that has been adjusted, thereby enabling interactive visualization of the volumetric models while simultaneously clarifying the internal structures by accounting for the light attenuation inside the volumetric models.
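The light-attenuation idea described above resembles standard front-to-back compositing along a viewing ray: each surface crossing absorbs part of the light, so deeper structures contribute less. A minimal sketch of that accumulation (the function name and the per-surface `(color, alpha)` representation are illustrative assumptions, not taken from the patent):

```python
def composite_surfaces(surface_hits):
    """Front-to-back compositing of surface crossings along one ray.

    surface_hits: list of (color, alpha) pairs sorted near-to-far,
    where alpha is the fraction of light absorbed at that surface.
    Returns the composited color and the remaining transmittance.
    """
    color = 0.0
    transmittance = 1.0  # fraction of light still unabsorbed
    for c, a in surface_hits:
        color += transmittance * a * c   # deeper surfaces are attenuated
        transmittance *= (1.0 - a)
    return color, transmittance
```

For two half-opaque white surfaces, the nearer one contributes 0.5 and the farther one only 0.25, which is the clarifying effect on internal structures that the abstract describes.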
Systems and methods for generating dynamic real-time high-quality lighting for digital animation
Systems, methods, and non-transitory computer-readable media can receive a first set of static lighting information associated with a first static lighting setup and a second set of static lighting information associated with a second static lighting setup. The first set of static lighting information and the second set of static lighting information are associated with a scene to be rendered. A first set of global illumination information is precomputed based on the first set of static lighting information. A second set of global illumination information is precomputed based on the second set of static lighting information. The first and second sets of global illumination information are blended to derive a blended set of global illumination information. The scene is rendered in a real-time application based on the blended set of global illumination information.
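The blending step above can be sketched as a per-texel linear interpolation between the two precomputed global-illumination buffers. This is a minimal sketch assuming the buffers are flat lists of scalar radiance values and a single blend weight; the function name and data layout are illustrative assumptions:

```python
def blend_gi(gi_a, gi_b, weight):
    """Linearly blend two precomputed global-illumination buffers.

    gi_a, gi_b: equally sized lists of precomputed radiance values.
    weight: 0.0 yields purely gi_a, 1.0 yields purely gi_b.
    """
    assert len(gi_a) == len(gi_b), "buffers must cover the same texels"
    return [(1.0 - weight) * a + weight * b for a, b in zip(gi_a, gi_b)]
```

Because both inputs are precomputed offline, only this cheap blend runs per frame, which is what makes the lighting transition feasible in a real-time application.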
NEURAL OPACITY POINT CLOUD
A method of rendering an object is provided. The method comprises: encoding a feature vector for each point in a point cloud for an object, wherein the feature vector comprises an alpha matte; projecting each point in the point cloud and the corresponding feature vector to a target view to compute a feature map; and using a neural rendering network to decode the feature map into an RGB image and the alpha matte and to update the feature vector.
Digital Content View Control System
Digital content view control is described that leverages a hierarchical structure of objects defined within the digital content to control how those objects are rendered in a user interface. In one example, a user input is received to display a view of objects within digital content displayed in a user interface. In response, a data query module is configured to fetch data describing a hierarchical structure of the digital content. From this, a z-order determination module determines a z-order of objects included in the digital content. An object view generation module is also configured to generate object views depicting the objects included in the digital content. The object views, once rendered, support viewing the positioning of objects within the hierarchy.
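One common way to derive a z-order from a content hierarchy is a depth-first traversal in which later siblings paint on top of earlier ones. A minimal sketch under that assumption (the dict-based tree shape and function name are hypothetical, not taken from the patent):

```python
def z_order(node, order=None):
    """Flatten a hierarchy of content objects into back-to-front z-order.

    node: dict with a 'name' and an optional 'children' list; siblings
    listed earlier render behind siblings listed later (painter's order).
    Returns the object names in render order, back to front.
    """
    if order is None:
        order = []
    order.append(node["name"])          # a parent renders behind its children
    for child in node.get("children", []):
        z_order(child, order)
    return order
```

For example, a page containing a background and then a group holding text yields the order page, bg, group, text, so the text paints on top.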
COMPOSITOR LAYER EXTRAPOLATION
In one embodiment, a method may obtain, from an application, (a) an image and (b) a layer frame having a first pose in front of the image. The method may generate, for a first viewpoint associated with a first time, a first display frame by separately rendering the image and the layer frame having the first pose into a display buffer. The method may display the first display frame at the first time. The method may determine an extrapolated pose for the layer frame based on the first pose of the layer frame and a second pose of a previously submitted layer frame. The method may generate, for a second viewpoint associated with a second time, a second display frame by separately rendering the image and the layer frame having the extrapolated pose into the display buffer. The method may display the second display frame at the second time.
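The extrapolation step above can be sketched under a constant-velocity assumption: the next pose continues the motion observed between the previous and current submitted poses. This is a minimal sketch; the flat `(x, y, angle)` pose representation and function name are illustrative assumptions:

```python
def extrapolate_pose(prev_pose, curr_pose):
    """Linearly extrapolate the next layer pose from the last two.

    Poses are (x, y, angle) tuples. Constant-velocity assumption:
    next = curr + (curr - prev), applied component-wise.
    """
    return tuple(2.0 * c - p for p, c in zip(prev_pose, curr_pose))
```

This lets the compositor re-render the layer at a plausible pose for the second viewpoint even before the application submits a fresh frame.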
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
An information processing apparatus divides a plurality of objects into objects of interest and objects of non-interest, and generates a projection image of the objects of non-interest integrated into one image and a projection image of the objects of interest. The information processing apparatus composes the projection image of the objects of interest and the projection image of the objects of non-interest, and displays a resultant composed image on a display device.
Automatic composition of composite images or videos from frames captured with moving camera
A processing device generates composite images from a sequence of images. The composite images may be used as frames of video. A foreground/background segmentation is performed at selected frames to extract a plurality of foreground object images depicting a foreground object at different locations as it moves across a scene. The foreground object images are stored to a foreground object list. The foreground object images in the foreground object list are overlaid onto subsequent video frames that follow the respective frames from which they were extracted, thereby generating a composite video.
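The overlay step above amounts to pasting each stored foreground object image into a later frame wherever its segmentation mask is set. A minimal sketch using grayscale 2D lists (the function name, mask convention, and offset parameters are illustrative assumptions):

```python
def overlay(frame, fg_image, mask, top, left):
    """Paste a segmented foreground object into a video frame.

    frame: 2D list of pixel values (grayscale for brevity).
    fg_image, mask: equally sized 2D lists; mask is 1 where the
    extracted object is present, 0 elsewhere.
    (top, left): destination offset of the object within the frame.
    Returns a new composited frame; the input frame is not modified.
    """
    out = [row[:] for row in frame]
    for i, (frow, mrow) in enumerate(zip(fg_image, mask)):
        for j, (f, m) in enumerate(zip(frow, mrow)):
            if m:  # copy only object pixels, leaving the background intact
                out[top + i][left + j] = f
    return out
```

Repeating this for every entry in the foreground object list over a subsequent frame produces the "motion trail" composite the abstract describes.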
LIGHT FIELD VOLUME RENDERING SYSTEM AND METHODS
A system and method for volume rendering a light field, wherein the light field data is subjected to a layering scheme that partitions the hogels into subsets, each subset corresponding to a sub-volume of the layer volume and to a sub-region of the layer. The novel partitioning of the data is combined with an efficient local-memory caching technique, plenoptic downsampling strategies that reduce memory bandwidth requirements, and a volume rendering algorithm to produce a rendered light field image. A reduction in the total number of samples required can be obtained while still maintaining the quality of the resulting image. A method is also provided to order memory accesses aligned with ray calculations in order to maximize access coherency. Real-time layered scene decomposition can be combined with a surface rendering method to create a hybrid real-time rendering method that supports rendering of scenes containing superimposed volumes and surfaces.
SYSTEM AND METHOD FOR PROVIDING AUGMENTED VIRTUALITY
A system and method for providing an augmented virtuality solution. A real environment video frame of a human and background is captured. The human is removed from the real environment frame using an RGBA array, a depth array, and a stencil array, and composited onto a virtual environment frame using an occlusion depth array and a refined depth mask array.
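The compositing step above can be sketched as a per-pixel decision: keep the real pixel only where the stencil marks the person and the person is closer than the virtual geometry (the occlusion test); otherwise keep the virtual pixel. This is a minimal sketch using flat per-pixel lists; the function name and array layout are illustrative assumptions:

```python
def composite_av(real_rgba, real_depth, stencil, virtual_rgb, virtual_depth):
    """Composite a stenciled person from a real frame over a virtual frame.

    All inputs are flat, equally sized per-pixel lists. stencil is 1
    where the person was segmented; smaller depth values are closer.
    """
    out = []
    for rp, rd, s, vp, vd in zip(real_rgba, real_depth, stencil,
                                 virtual_rgb, virtual_depth):
        # person pixel wins only if stenciled AND in front of virtual geometry
        out.append(rp if (s and rd < vd) else vp)
    return out
```

In the second pixel of the example below, the person is behind the virtual geometry, so virtual content correctly occludes them.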
METHOD AND APPARATUS FOR VIRTUAL SPACE CONSTRUCTING BASED ON STACKABLE LIGHT FIELD
The electronic apparatus includes a memory storing a multiple light field unit (LFU) structure in which a plurality of light fields is arranged in a lattice structure, and a processor configured to, based on a view position within the lattice structure being determined, generate a 360-degree image for the view position by using the multiple LFU structure. The processor is configured to select the LFU to which the view position belongs from among the multiple LFU structure, allocate a rendering field-of-view (FOV) of predetermined degrees based on the view position, generate a plurality of view images based on the plurality of light fields comprised in the selected LFU and the allocated FOV, and generate the 360-degree image by combining the generated plurality of view images.
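Two of the steps above, selecting the LFU cell that contains the view position and allocating fixed-width rendering FOVs that cover the full panorama, can be sketched as follows. This is a minimal sketch assuming a uniform 2D lattice of square LFU cells and evenly divided yaw sectors; all names and the cell-size parameter are illustrative assumptions:

```python
def select_lfu(view_pos, cell_size):
    """Return the lattice (column, row) indices of the LFU cell
    containing a 2D view position, for a uniform square lattice."""
    x, y = view_pos
    return int(x // cell_size), int(y // cell_size)

def allocate_fovs(fov_degrees):
    """Split the 360-degree panorama into fixed-width rendering FOVs.

    Returns the starting yaw (in degrees) of each sector; rendering one
    view image per sector and concatenating them covers the full circle.
    """
    assert 360 % fov_degrees == 0, "FOV must evenly divide 360 degrees"
    return [i * fov_degrees for i in range(360 // fov_degrees)]
```

A view image would then be generated per sector from the selected LFU's light fields and the results stitched into the final 360-degree image.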