
IMAGE RENDERING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
20230033306 · 2023-02-02

Provided is an image rendering method performed by a computer device, including: selecting, from a plurality of pre-compiled shaders associated with a rendering engine, a target shader corresponding to the graphics interface of an application program; acquiring scene data corresponding to a render target according to a rendering instruction triggered by the application program, and writing the scene data into a cache block; determining a plurality of render passes corresponding to the render target; merging, in the cache block, the plurality of render passes into a merged render pass based on the target shader; and performing image rendering on the scene data in the cache block in the merged render pass to obtain an image rendering result in the cache block.
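The flow described in this abstract can be sketched in plain Python; all names below (the shader table, pass dictionaries, cache-block layout) are hypothetical stand-ins, not the patented implementation:

```python
# Illustrative sketch: pick a pre-compiled shader for the app's graphics
# interface, write scene data into a cache block, merge several render
# passes into one, and "render" against the merged pass.

PRECOMPILED_SHADERS = {
    "vulkan": "shader_spv",   # hypothetical SPIR-V binary
    "metal": "shader_msl",    # hypothetical Metal library
    "opengl": "shader_glsl",  # hypothetical GLSL program
}

def select_target_shader(graphics_api):
    """Select the pre-compiled shader matching the app's graphics interface."""
    return PRECOMPILED_SHADERS[graphics_api]

def merge_render_passes(passes):
    """Fold a list of render passes into a single merged pass description."""
    merged = {"name": "merged", "ops": []}
    for p in passes:
        merged["ops"].extend(p["ops"])
    return merged

def render(scene_data, graphics_api, passes):
    shader = select_target_shader(graphics_api)
    cache_block = {"scene": list(scene_data)}  # write scene data into the cache block
    merged = merge_render_passes(passes)       # merge the passes in the cache block
    # "Rendering" here just tags each datum with the shader and merged op count.
    cache_block["result"] = [(shader, d, len(merged["ops"])) for d in cache_block["scene"]]
    return cache_block

result = render(
    scene_data=["mesh0", "mesh1"],
    graphics_api="vulkan",
    passes=[{"ops": ["geometry"]}, {"ops": ["lighting", "tonemap"]}],
)
```

Merging passes this way avoids repeated round trips between passes, which is the stated point of performing the merge inside the cache block.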

METHOD, APPARATUS AND DEVICE FOR PROCESSING SHADOW TEXTURE, COMPUTER-READABLE STORAGE MEDIUM, AND PROGRAM PRODUCT

A method for processing a shadow texture can compute data concurrently to improve computation density, reduce data transmission batches, and improve shadow texture processing efficiency. Model data and light source information of at least one object in a virtual scene are acquired. A shadow with a first resolution corresponding to each object is acquired. A second resolution is determined based on the model data of each object. A shadow edge formed by each object under the influence of the light source information is determined in parallel by utilizing rasterized pixels of the model data. The distance from the rasterized pixels to the shadow edge corresponding to each object is computed, and the corresponding distance data is stored.
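The edge-detection and distance steps can be illustrated with a toy CPU version (the patent computes this in parallel on the GPU; the mask, grid size, and distance metric here are assumptions):

```python
# Toy sketch: given a rasterized shadow mask, find the shadow edge, then
# store, for each pixel, the distance to the nearest edge pixel.

import math

def shadow_edge(mask):
    """Edge pixels: shadowed pixels with at least one unshadowed 4-neighbour."""
    h, w = len(mask), len(mask[0])
    edge = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]:
                    edge.append((y, x))
                    break
    return edge

def distance_field(mask):
    """For each pixel, Euclidean distance to the nearest shadow-edge pixel."""
    edge = shadow_edge(mask)
    return [
        [min(math.hypot(y - ey, x - ex) for ey, ex in edge) for x in range(len(mask[0]))]
        for y in range(len(mask))
    ]

# 4x4 mask: a 2x2 shadow in the top-left corner.
mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
dist = distance_field(mask)
```

Because each pixel's distance depends only on the shared edge list, the per-pixel loop is trivially parallelizable, which matches the abstract's emphasis on concurrent computation.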

Virtual object display method and apparatus, electronic device, and storage medium

The present disclosure provides a virtual object display method and apparatus, an electronic device, and a storage medium, relating to the field of computer technologies. The method includes: obtaining a plurality of animation frames corresponding to each of a plurality of virtual objects and a weight of each animation frame; blending the plurality of animation frames corresponding to the plurality of virtual objects in parallel through an image processor according to the weight of each animation frame, to obtain target position and pose data of each bone in bone models of the plurality of virtual objects; and displaying the plurality of virtual objects in a graphical user interface according to the target position and pose data of each bone in the bone models of the plurality of virtual objects.
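The weighted blend described above can be sketched for a single object (the patent performs this per object in parallel on the GPU; bone names, frame layout, and the use of positions rather than full pose transforms are simplifying assumptions):

```python
# Hypothetical sketch: each animation frame supplies a position for every
# bone, and the blended pose is the weight-normalized sum per bone.

def blend_frames(frames, weights):
    """frames: list of {bone: (x, y, z)}; weights: parallel list of floats."""
    total = sum(weights)
    blended = {}
    for bone in frames[0]:
        blended[bone] = tuple(
            sum(w * f[bone][i] for f, w in zip(frames, weights)) / total
            for i in range(3)
        )
    return blended

walk = {"spine": (0.0, 1.0, 0.0), "arm": (1.0, 0.0, 0.0)}
run = {"spine": (0.0, 2.0, 0.0), "arm": (3.0, 0.0, 0.0)}
pose = blend_frames([walk, run], weights=[0.75, 0.25])
```

Each bone's blend is independent of every other bone and object, which is what makes the GPU-parallel formulation in the abstract straightforward.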

Multi-sample stereo renderer

An embodiment of a parallel processor apparatus may include a sample pattern selector to select a sample pattern for a pixel, and a sample pattern subset selector communicatively coupled to the sample pattern selector to select a first subset of the sample pattern for the pixel corresponding to a left eye display frame and to select a second subset of the sample pattern for the pixel corresponding to a right eye display frame, wherein the second subset is different from the first subset. Other embodiments are disclosed and claimed.
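The claimed split can be illustrated with a small sketch (the pattern name, sample offsets, and even/odd partitioning are hypothetical choices, not the claimed selector logic):

```python
# Sketch: choose a multi-sample pattern for a pixel, then give the left-eye
# frame one subset of the samples and the right-eye frame a different,
# disjoint subset, so the two eyes together cover the full pattern at half
# the per-eye sample cost.

SAMPLE_PATTERNS = {
    # 4x MSAA-style offsets within the pixel (x, y in [0, 1)).
    "rotated_grid_4x": [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)],
}

def select_subsets(pattern_name):
    pattern = SAMPLE_PATTERNS[pattern_name]
    left = pattern[0::2]   # even-indexed samples for the left-eye frame
    right = pattern[1::2]  # odd-indexed samples for the right-eye frame
    return left, right

left, right = select_subsets("rotated_grid_4x")
```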

NEURAL OPACITY POINT CLOUD
20230071559 · 2023-03-09 ·

A method of rendering an object is provided. The method comprises: encoding a feature vector to each point in a point cloud for an object, wherein the feature vector comprises an alpha matte; projecting each point in the point cloud and the corresponding feature vector to a target view to compute a feature map; and using a neural rendering network to decode the feature map into an RGB image and the alpha matte and to update the feature vector.
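The projection step can be sketched in isolation (the pinhole model, z-buffer splatting, and feature layout are assumptions; the neural decoder that turns the feature map into RGB and alpha is omitted):

```python
# Simplified sketch: each point carries a feature vector (including an alpha
# channel); points are projected into a target view and splatted into a
# per-pixel feature map, keeping the closest point per pixel.

def project_to_feature_map(points, width, height, focal=1.0):
    """points: list of ((x, y, z), feature); pinhole projection onto a grid."""
    depth = [[float("inf")] * width for _ in range(height)]
    fmap = [[None] * width for _ in range(height)]
    for (x, y, z), feature in points:
        if z <= 0:
            continue  # behind the camera
        u = int(focal * x / z + width / 2)
        v = int(focal * y / z + height / 2)
        if 0 <= u < width and 0 <= v < height and z < depth[v][u]:
            depth[v][u] = z       # z-buffer: keep the nearest point
            fmap[v][u] = feature  # its feature vector (with alpha matte)
    return fmap

points = [
    ((0.0, 0.0, 2.0), {"alpha": 0.9, "feat": [0.1, 0.2]}),
    ((0.0, 0.0, 4.0), {"alpha": 0.5, "feat": [0.3, 0.4]}),  # occluded by the first
]
fmap = project_to_feature_map(points, width=4, height=4)
```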

LATENCY COMPENSATION FOR IMAGE PROCESSING DEVICES
20230069292 · 2023-03-02 ·

The present invention relates to latency compensation for image processing devices. In order to assign overlay data to frames of a raw image stream (210), past frames within a selection of frames of the raw image stream are considered, which past frames have already undergone image processing. A current frame of the raw image stream (210) is compared to the past frames contained in the selection of frames, and the past frame most similar to the current frame is identified. Overlay data from the identified past frame are chosen and assigned to the current frame. Thus, the current frame can be presented together with the assigned overlay data chosen from the most similar past frame, without the need to wait for the result of a computationally expensive and time-consuming image processing of the current frame.
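The similarity search at the heart of this can be sketched minimally (the frame representation and the sum-of-absolute-differences metric are assumptions; the patent does not specify a particular similarity measure):

```python
# Minimal sketch: frames are plain intensity lists, and the overlay of the
# most similar already-processed past frame is reused for the current frame
# instead of waiting for fresh image processing.

def frame_distance(a, b):
    """Sum of absolute differences as a stand-in similarity metric."""
    return sum(abs(x - y) for x, y in zip(a, b))

def assign_overlay(current_frame, past_frames):
    """past_frames: list of (frame, overlay) pairs that finished processing."""
    best_frame, best_overlay = min(
        past_frames, key=lambda fo: frame_distance(current_frame, fo[0])
    )
    return best_overlay

past = [
    ([10, 10, 10], "overlay_A"),
    ([50, 52, 49], "overlay_B"),
]
overlay = assign_overlay([51, 50, 50], past)
```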

GRAPHICS PROCESSOR AND INFORMATION PROCESSING SYSTEM
20230066833 · 2023-03-02

A graphics processor has a command processor and a geometry engine and is connected to a memory and to another graphics processor. It includes a bus fabric that delivers and receives data to and from the memory connected to it, and a first interconnect, connected to the command processor and the geometry engine, that delivers and receives data to and from the command processor and geometry engine of the other graphics processor. Via a second interconnect, the bus fabric delivers and receives data to and from the bus fabric of the other graphics processor and is accessibly connected to the memory connected to the other graphics processor.
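A rough software model of the described topology may help (purely illustrative; class names, addresses, and the read API are invented for the sketch and model only the memory-access paths, not the first interconnect):

```python
# Model: each GPU has its own memory behind a bus fabric; a read either goes
# to local memory or hops over the second interconnect to the peer GPU's
# bus fabric and the memory attached to it.

class GraphicsProcessor:
    def __init__(self, name, memory):
        self.name = name
        self.memory = memory  # memory connected to this GPU
        self.peer = None      # the other graphics processor

    def connect(self, other):
        """Model the second interconnect linking the two bus fabrics."""
        self.peer = other
        other.peer = self

    def bus_fabric_read(self, address, remote=False):
        """Local read, or a read routed to the peer's memory over the link."""
        if remote:
            return self.peer.memory[address]
        return self.memory[address]

gpu0 = GraphicsProcessor("gpu0", {0x10: "vertex_data"})
gpu1 = GraphicsProcessor("gpu1", {0x20: "index_data"})
gpu0.connect(gpu1)
local = gpu0.bus_fabric_read(0x10)
remote = gpu0.bus_fabric_read(0x20, remote=True)
```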

Fully-fused neural network execution

A fully-connected neural network may be configured for execution by a processor as a fully-fused neural network by limiting slow global memory accesses to reading and writing inputs to and outputs from the fully-connected neural network. The computational cost of a fully-connected neural network scales quadratically with its width, whereas its memory traffic scales linearly. Modern graphics processing units typically have much greater computational throughput compared with memory bandwidth, so that for narrow, fully-connected neural networks, the linear memory traffic is the bottleneck. The key to improving performance of the fully-connected neural network is to minimize traffic to slow “global” memory (off-chip memory and high-level caches) and to fully utilize fast on-chip memory (low-level caches, “shared” memory, and registers), which is achieved by the fully-fused approach. A real-time neural radiance caching technique for path-traced global illumination is implemented using the fully-fused neural network for caching scattered radiance components of global illumination.
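The fully-fused idea can be shown conceptually in pure Python (no GPU; local variables stand in for registers/shared memory, and the in/out lists stand in for global memory; the layer shapes and ReLU choice are assumptions):

```python
# Conceptual sketch: the whole narrow MLP is evaluated in one function so
# that hidden activations live only in local variables, and "global memory"
# is touched only to read the input and write the output.

def relu(x):
    return x if x > 0.0 else 0.0

def fused_mlp(global_in, global_out, layers):
    """layers: list of (weight_matrix, bias) for a narrow fully-connected net."""
    act = list(global_in)                  # one read from "global" memory
    for weights, bias in layers:           # all layers fused into one kernel
        act = [
            relu(sum(w * a for w, a in zip(row, act)) + b)
            for row, b in zip(weights, bias)
        ]                                  # activations stay "on chip"
    global_out[:] = act                    # one write to "global" memory

out = [0.0, 0.0]
fused_mlp(
    [1.0, 2.0],
    out,
    layers=[
        ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),   # identity layer
        ([[2.0, 0.0], [0.0, -1.0]], [0.0, 0.0]),  # scale / negate (ReLU clamps)
    ],
)
```

On a real GPU the point is that no intermediate activation ever round-trips through off-chip memory between layers, which is exactly what separately launched per-layer kernels would force.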

Power efficient attribute handling for tessellation and geometry shaders
11663767 · 2023-05-30

Attributes of graphics objects are processed in a plurality of graphics processing pipelines. A streaming multiprocessor (SM) retrieves a first set of parameters associated with a set of graphics objects from a first set of buffers. The SM performs a first set of operations on the first set of parameters according to a first phase of processing to produce a second set of parameters stored in a second set of buffers. The SM performs a second set of operations on the second set of parameters according to a second phase of processing to produce a third set of parameters stored in a third set of buffers. One advantage of the disclosed techniques is that work is redistributed from a first phase to a second phase of graphics processing without having to copy the attributes to and retrieve the attributes from the cache or system memory, resulting in reduced power consumption.
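The two-phase buffer flow can be sketched with plain lists (the phase operations here are toy stand-ins for the actual tessellation/geometry work; the point is that attributes pass buffer-to-buffer without a simulated trip through system memory):

```python
# Hypothetical sketch: read parameters from a first set of buffers, run
# phase one into a second set, then run phase two into a third set, never
# round-tripping the attributes through cache or system memory.

def phase_one(params):
    """First phase of processing (toy op standing in for e.g. tessellation)."""
    return [p * 2 for p in params]

def phase_two(params):
    """Second phase of processing (toy op standing in for e.g. geometry shading)."""
    return [p + 1 for p in params]

def process_attributes(first_buffers):
    second_buffers = [phase_one(buf) for buf in first_buffers]  # phase 1
    third_buffers = [phase_two(buf) for buf in second_buffers]  # phase 2
    return third_buffers

out = process_attributes([[1, 2], [3]])
```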

HYBRID HIERARCHY OF BOUNDING AND GRID STRUCTURES FOR RAY TRACING

Methods and ray tracing units are provided for performing intersection testing for use in rendering an image of a 3-D scene. A hierarchical acceleration structure may be traversed by traversing one or more upper levels of nodes of the hierarchical acceleration structure according to a first traversal technique, the first traversal technique being a depth-first traversal technique; and traversing one or more lower levels of nodes of the hierarchical acceleration structure according to a second traversal technique, the second traversal technique not being a depth-first traversal technique. Results of traversing the hierarchical acceleration structure are used for rendering the image of the 3-D scene. The upper levels of the acceleration structure may be defined according to a spatial subdivision structure, whereas the lower levels of the acceleration structure may be defined according to a bounding volume structure.
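The hybrid traversal can be illustrated on a toy one-dimensional "scene" (intervals stand in for bounding boxes, a query point stands in for a ray, and breadth-first search stands in for the unspecified non-depth-first second technique; all of this is a simplifying assumption):

```python
# Toy sketch: upper levels are walked depth-first with an explicit stack;
# below a cutoff depth, each subtree is walked breadth-first with a queue.
# A "hit" means the query point lies inside a leaf's interval.

from collections import deque

def make_node(lo, hi, children=(), prim=None):
    return {"lo": lo, "hi": hi, "children": list(children), "prim": prim}

def traverse_hybrid(root, point, cutoff_depth):
    hits = []
    stack = [(root, 0)]                   # depth-first over upper levels
    while stack:
        node, depth = stack.pop()
        if not (node["lo"] <= point <= node["hi"]):
            continue
        if depth < cutoff_depth and node["children"]:
            for child in reversed(node["children"]):
                stack.append((child, depth + 1))
        else:
            queue = deque([node])         # non-depth-first over lower levels
            while queue:
                n = queue.popleft()
                if not (n["lo"] <= point <= n["hi"]):
                    continue
                if n["children"]:
                    queue.extend(n["children"])
                elif n["prim"] is not None:
                    hits.append(n["prim"])
    return hits

leaf_a = make_node(0.0, 1.0, prim="A")
leaf_b = make_node(1.0, 2.0, prim="B")
leaf_c = make_node(3.0, 4.0, prim="C")
lower = make_node(0.0, 2.0, children=[leaf_a, leaf_b])
root = make_node(0.0, 4.0, children=[lower, leaf_c])
hits = traverse_hybrid(root, point=1.0, cutoff_depth=1)
```

The cutoff depth plays the role of the boundary between the spatial-subdivision upper levels and the bounding-volume lower levels described in the abstract.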