Method and apparatus for enhanced graphics rendering in a video game environment
11534688 · 2022-12-27
Assignee
Inventors
- Alexandre Hadjadj (Edinburgh, GB)
- Raymond Kerr (Oceanside, CA, US)
- Steve Reed (San Diego, CA, US)
- Tyler Robertson (Carlsbad, CA, US)
- Owen Shepherd (Edinburgh, GB)
- Flavius Alecu (Carlsbad, CA, US)
- Rick Stirling (Edinburgh, GB)
CPC classification
A63F13/57
HUMAN NECESSITIES
A63F13/65
HUMAN NECESSITIES
G06T17/20
PHYSICS
International classification
G06T17/20
PHYSICS
A63F13/65
HUMAN NECESSITIES
Abstract
A system and method for improved graphics rendering in a video game environment.
Claims
1. A method for graphics rendering dynamic terrain areas in a gaming environment, the method comprising: accessing, via a processor at runtime, a world level map of the gaming environment, wherein the world level map correlates each of multiple locations of the gaming environment with a corresponding dynamic terrain type; determining, via the processor, for a given location within the gaming environment, a corresponding dynamic terrain type based on the world level map of the gaming environment; storing for a later time, via the processor, where the given location is not proximate a virtual camera, record of marks left on a surface of the given location within the gaming environment by characters within the gaming environment; updating, via the processor, where the given location within the gaming environment is subsequently determined to be proximate the virtual camera, a trail map of the given location within the gaming environment, wherein the updating utilizes the stored record, and wherein the updating is based on the determined dynamic terrain type; selecting, via the processor, where the given location within the gaming environment is subsequently determined to be proximate the virtual camera, for the given location within the gaming environment, a surface shader based on the determined dynamic terrain type; and rendering, via the processor, where the given location within the gaming environment is subsequently determined to be proximate the virtual camera, the given location within the gaming environment with the selected surface shader, wherein the rendering includes a depiction of the trail map, and wherein the given location within the gaming environment is rendered with a first surface shader where the world level map specifies a first dynamic terrain type for the given location, and wherein the given location within the gaming environment is rendered with a second surface shader where the world level map specifies a second dynamic terrain type for the given location.
2. The method of claim 1, wherein said determining the corresponding dynamic terrain type comprises determining a deep snow surface terrain type; said updating the trail map comprises updating a medium detail trail map with large footprints; said selecting the surface shader comprises selecting a deep snow surface shader; and said rendering the given location within the gaming environment with the selected surface shader comprises: generating, via the processor, a tessellation map; and adding, via the processor, new vertices to a base mesh that is displaced based on the updated trail map via tessellation of a graphics processing unit.
3. The method of claim 2, wherein said updating the medium detail trail map comprises updating a trail map that is projected over 96 meters of the gaming environment using a 2048×2048 trail map texture.
4. The method of claim 1, wherein said determining the corresponding dynamic terrain type comprises determining a shallow mud surface terrain type; said updating the trail map comprises updating a high detail trail map with decals from footprints; said selecting the surface shader comprises selecting a shallow mud surface shader; and said rendering the given location within the gaming environment with the selected surface shader comprises: generating, via the processor, a parallax map based on querying the updated trail map; and displacing, via the processor, texture coordinates at a selected point on a base mesh by a function of a view angle and a value of the updated trail map at the selected point.
5. The method of claim 4, wherein said updating the high detail trail map comprises updating a trail map that is projected over 48 meters of the gaming environment using a 4096×4096 trail map texture.
6. The method of claim 1, wherein said rendering further comprises blurring to soften the marks left on the surface.
7. The method of claim 1, wherein said determining the corresponding dynamic terrain type comprises determining whether the dynamic terrain type is at least one of scree, snow, quicksand, and mud.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
(49) It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are generally represented by like reference numerals for illustrative purposes throughout the figures. It also should be noted that the figures are only intended to facilitate the description of the preferred embodiments. The figures do not illustrate every aspect of the described embodiments and do not limit the scope of the present disclosure.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Dynamic Terrain System
(50) The present disclosure describes a number of methods and computerized systems for graphics rendering. In some embodiments, a shader system 200 is shown in
(51) At runtime, the world level map 210 can be used to determine whether any specific dynamic terrain is active (e.g., will be displayed to the user). If so, the system generates a specific trail map 250 for the dynamic terrain that will affect the terrain. If not, no trail map is generated, and simple shaders having no trail effects can be used, thereby reducing overhead for effects that are not needed at runtime. The trail map 250 records any marks left on the surface by players, non-player characters (NPCs), and/or vehicles in the game. As players go through the game world, they are shown trails relevant to them. The type of the trail map 250 depends on the surface type. Trail maps are top-down projected maps located around the camera. Trail maps have different detail levels or resolutions. Blurring can further be applied to soften the trail or marks left on the surface. If this is done over multiple frames, the trail will flatten out like a liquid, with the viscosity dependent on the strength of the blur. This effect is referred to as the ooze effect.
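The ooze effect described above can be illustrated with a minimal sketch (not the patented implementation; the 5-tap kernel and blend factor are assumptions): a small blur is blended into the trail map each frame, so a sharp mark spreads and shallows like a settling liquid.

```python
import numpy as np

def ooze_blur(trail, strength=0.5, frames=1):
    """Repeatedly blend a small blur into a trail map so marks flatten
    out over time, like a viscous liquid settling. `strength` plays the
    role of viscosity: higher values settle the trail faster."""
    t = trail.astype(float).copy()
    for _ in range(frames):
        # 5-tap box blur (wrapping at edges for simplicity)
        blurred = (np.roll(t, 1, 0) + np.roll(t, -1, 0) +
                   np.roll(t, 1, 1) + np.roll(t, -1, 1) + t) / 5.0
        t = (1.0 - strength) * t + strength * blurred
    return t

# A single deep footprint spreads and shallows over many frames
trail = np.zeros((16, 16))
trail[8, 8] = 1.0
aged = ooze_blur(trail, strength=0.5, frames=50)
```

Because the blur only redistributes displacement, the total "volume" of the mark is conserved while its peak depth decays, which matches the flattening behavior the text describes.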
(52) Turning to
(53) If the player is outside of a trail map area, the trail map 250 for that player is stored in a curve so it can later be added to the trail map 250 if the camera comes within range. To render into the trail map, 'decals' are rendered where a player's foot or an animal's hoof lands, imprinting the displacement offset.
(54) In some embodiments, trails are aged over time on soft terrain types such as snow or soft mud. For example, as described above, blurring can be applied to soften the trail or marks left on the surface of snow or soft mud. When done over multiple frames, the trail will flatten out like a liquid, with the viscosity dependent on the strength of the blur. In other words, the trail map 250 includes a blur shader applied each frame. Since the blur is repeatedly applied, it spreads over several frames to age and soften the trails.
(55) As shown in
(56) In some embodiments, surface shaders receive the trail map texture, the trail map normal map texture, the location of the trail area, and displacement height for trails.
(57) By way of example, two surface shaders can be used for a typical terrain with the associated trail maps.
(58) Shallow Mud Surface Shader (or Parallax Surface Shader)—A shallow mud surface shader can be used for displacements that are visibly a maximum of 5 inches. The shallow mud surface shader uses parallax mapping to create the illusion that the surface is displaced. In other words, parallax surface shaders can render a displacement or difference in the apparent position of an object viewed along two different lines of sight, which is measured by the angle or semi-angle of inclination between those two lines.
(59) With reference to
(60) Additionally and/or alternatively, multiple samples can be taken along the view direction in texture space. For example, after the parallax map 310 displaces the texture coordinate at a selected point, a new sample is taken at the selected point to determine whether the sample intersects the ray along the view direction. If the sample does not intersect, additional samples can be taken. Parallax mapping can be used to create the illusion of high-detail deformation where footprints are on the surface of the terrain. Although the surface is warped so it visibly looks as though it is displaced, warping is applied in the pixel shader so no actual movement of the polygon is applied. Advantageously, the shallow mud surface shader can use very high detailed displacements up to a predetermined height.
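As a rough CPU-side illustration of the parallax technique described above (the tangent-space convention and `scale` value are illustrative assumptions, not the shader's actual code), the texture coordinate is shifted opposite the view direction in proportion to the trail-map height, so steeper viewing angles produce larger apparent displacement:

```python
import numpy as np

def parallax_offset(uv, view_dir_ts, height, scale=0.05):
    """Offset a texture coordinate by the trail-map height along the
    view direction (tangent space), creating the illusion of depth
    without moving any geometry. `scale` caps the apparent
    displacement (e.g., a few inches of mud)."""
    v = np.asarray(view_dir_ts, dtype=float)
    v = v / np.linalg.norm(v)
    # Classic parallax mapping: shift UV by the view's xy slope
    return np.asarray(uv, dtype=float) - (v[:2] / v[2]) * height * scale

# Looking straight down: no offset regardless of height
uv0 = parallax_offset((0.5, 0.5), (0.0, 0.0, 1.0), height=1.0)
# Grazing view: the same height produces a larger UV shift
uv1 = parallax_offset((0.5, 0.5), (0.7, 0.0, 0.3), height=1.0)
```

The multi-sample variant in the text would repeat this lookup along the view ray until the sampled height intersects it, trading extra texture fetches for accuracy at grazing angles.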
(61) Deep Snow Surface Shader—The deep snow surface shader can use the tessellation hardware of the GPU to add new vertices to the mesh, which are displaced using the contents of the trail map. A tessellation of a flat surface is the tiling of a plane using one or more geometric shapes (tiles) with no overlaps and no gaps. This shader has the most interaction with the scene but is the most expensive to render. The deep snow surface shader uses adaptive tessellation to add more detail where the trail map 250 has detail and is close to the camera. Tessellation cannot support detail as high as parallax, as it causes artifacts and is prohibitively expensive at those detail levels.
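A plausible sketch of the adaptive-tessellation heuristic described above (the 50 m falloff range and maximum factor of 16 are illustrative assumptions, not values from the patent): subdivision increases where the trail map has detail and the surface is near the camera, and falls to none for flat or distant terrain.

```python
def tessellation_factor(distance_m, trail_detail, max_factor=16.0,
                        falloff_range_m=50.0):
    """Adaptive tessellation factor: more subdivision where the trail
    map has detail (trail_detail in [0, 1]) and the surface is close
    to the camera; 1.0 means no extra vertices are added."""
    falloff = max(0.0, 1.0 - distance_m / falloff_range_m)
    return 1.0 + (max_factor - 1.0) * trail_detail * falloff
```

On real hardware this factor would be emitted per patch by the hull-shader stage; the sketch only shows the scalar heuristic.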
(62) In some embodiments, the tessellation surface shader generates a tessellation map, such as a tessellation map 320 shown in
(63) Both shaders support per-vertex control to reduce the amount of displacement for parallax and tessellation. This allows artists to paint out the displacement at the end of the trail areas, between different trail types, and adjacent to simple shaders without any displacement. This gives a seamless transition to and from trail areas in the terrain.
(64) The shallow mud surface shader and deep snow surface shader can render a scene in any means described herein, such as by an exemplary process 4000, shown in
(65) If the determined terrain is closest to shallow mud, at decision block 4030, the trail map 250 is updated, with decals from footprints added, at 4031. The shallow mud surface shader then runs with parallax, at 4032. Specifically, parallax mapping can query the trail map 250, at 4033, to determine the displacement of the terrain.
(66) If the determined terrain is neither shallow mud nor deep snow, at decision blocks 4020 and 4030, the terrain is rendered with a standard terrain shader, at 4040.
(67) By way of example, a virtual world including deep snow can be rendered with a medium detail trail map 250 and the deep snow surface shader. In some embodiments, a medium detail trail map 250 is projected over 96 meters using a 2048×2048 trail map texture. The medium detail trail map 250 provides a texel for every 5 cm. Similarly, a high detail trail map 250 is projected over 48 meters using a 4096×4096 trail map texture, resulting in a texel every 1 cm. Additionally and/or alternatively, a virtual world including deep snow can be rendered with a low detail trail map 250 where a medium detail trail map 250 would take too long to generate. The low detail trail map 250 is projected over 96 meters using a 1024×1024 trail map texture.
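The texel densities quoted above follow directly from the projected area and the texture resolution; this sketch checks the arithmetic (96 m over 2048 texels is about 4.7 cm per texel, which the text rounds to 5 cm):

```python
def texel_size_cm(projected_meters, texture_resolution):
    """World-space size of one trail-map texel, in centimetres."""
    return projected_meters * 100.0 / texture_resolution

medium = texel_size_cm(96, 2048)   # medium detail trail map, ~4.7 cm
high   = texel_size_cm(48, 4096)   # high detail trail map, ~1.2 cm
low    = texel_size_cm(96, 1024)   # low detail fallback, ~9.4 cm
```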
(68) An exemplary video game scene 501 showing a rendered deep snow scene is shown in
(69) For an extra layer of fine snow, a snow map can be used to apply a layer of procedural snow over the terrain. The snow map adds an extra pass to the terrain where a fine snow layer is added where snow is painted in a world sized snow map. This allows for changing seasons by reloading the snow map rather than changing the terrain.
Ambient Occlusion
(70) In some embodiments, the disclosed graphics system provides an advanced efficient rendering system with ambient mask volumes (AMV). These AMVs advantageously describe how large objects in the environment occlude the ambient light, thereby efficiently creating a more realistic lighting model.
(71) An ambient mask volume stores ambient occlusion values in a three-dimensional texture, such as an ambient mask volume cube 600 shown in
(72) For large areas, such as a terrain, a large ambient mask volume is projected onto the surface of the terrain, such as shown in
(73) In other words, the world height map can represent a two-dimensional top-down map of height over the entire terrain in the virtual world. For each point on the surface of the on-screen terrain, the cell location height is equal to the world position height minus the world height map value at that location. The ambient cube sample is equal to the terrain ambient mask as a function of the world position and the cell location height.
(74) By way of example, projecting the terrain ambient mask volume as shown in
(75) For each cell in a texture, since the ambient occlusion is dependent on direction, the direction is stored in an ambient cube basis, such as shown in
(76) In some embodiments, the ambient cube basis is generated offline where the scene at each point in an ambient mask volume is rendered. For each point, a sky visibility cube map is rendered where the surfaces are black and the sky is white. This allows for a simple convolution to get the final value for each side of the ambient cube stored at each cell. The final values are stored in multiple ambient mask volumes. This process is distributed on multiple machines in tiles to allow generation of high detail ambient mask volumes within a predetermined timeframe.
(77) By way of example, for a large terrain, a 3D texture is stored in the ambient cube basis having six values. Each side of the cube stores the sky visibility at the location center for that cell. In other words, each value is stored for each axial direction (e.g., +X, −X, +Y, −Y, +Z, −Z).
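The six axial values described above can be combined at runtime using the surface normal. The sketch below uses a common ambient-cube formulation (blending the axial values by the squared components of the normal); the exact blend used by the system is not specified in the text, so treat this as an assumption:

```python
import numpy as np

def sample_ambient_cube(cube, normal):
    """Blend the six axial sky-visibility values of an ambient cube by
    the squared components of the surface normal. `cube` is ordered
    (pos_x, neg_x, pos_y, neg_y, pos_z, neg_z)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    sq = n * n  # squared weights sum to 1 for a unit normal
    return (sq[0] * (cube[0] if n[0] >= 0 else cube[1]) +
            sq[1] * (cube[2] if n[1] >= 0 else cube[3]) +
            sq[2] * (cube[4] if n[2] >= 0 else cube[5]))

# Fully open sky above, fully occluded below, half-occluded sideways
cube = (0.5, 0.5, 0.5, 0.5, 1.0, 0.0)
up = sample_ambient_cube(cube, (0, 0, 1))     # upward-facing surface
down = sample_ambient_cube(cube, (0, 0, -1))  # downward-facing surface
```

An upward-facing surface receives the full +Z visibility while a downward-facing one receives none, which is the directional behavior the ambient cube basis exists to capture.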
(78) In a preferred embodiment, for a terrain ambient mask volume that follows the terrain, 2.4 billion points are generated with 1 probe every 2 meters.
(79) With reference to
(80) Ambient mask volumes can also be applied in any manner described herein. For example, when generating the runtime ambient occlusion per pixel values, the ambient mask volumes can be applied in layers, such as shown in
(81) For dynamic objects, such as vehicles, a baked ambient mask volume is placed over the dynamic object and moves with the object. For opening and closing doors, a new baked ambient mask volume is placed over the area of the door as it opens.
(82) In some embodiments, blend ambient mask volumes can be used for crossing two different building areas which have a harsh transition.
(83) In a preferred embodiment, the layers of the ambient mask volumes can be applied to the top layer first and exclude the layers below. Accordingly, only one layer is applied per pixel except for dynamic ambient mask volumes or blend ambient mask volumes which are blended with the previous layer. This allows for a plurality of layers while maintaining a cost of roughly one layer per pixel.
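One possible interpretation of this layer resolution, sketched in Python (the top-down ordering and the 50/50 blend weight are assumptions; the patent only states that opaque layers exclude those below while dynamic and blend volumes blend with the previous layer):

```python
def resolve_amv_layers(layers):
    """Resolve one pixel's ambient value from a priority-ordered list
    of (value, blendable) layers, top layer first. The first opaque
    layer excludes everything below it; blendable layers (dynamic
    objects, transition volumes) are averaged onto that result."""
    result = None
    pending_blends = []
    for value, blendable in layers:  # ordered top -> bottom
        if blendable:
            pending_blends.append(value)
        else:
            result = value
            break  # opaque layer excludes all layers below it
    for b in reversed(pending_blends):
        result = b if result is None else 0.5 * (result + b)
    return result
```

Because at most one opaque layer is evaluated per pixel, the cost stays near one layer per pixel regardless of how many volumes overlap, matching the cost claim above.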
(84) When the camera is moved, ambient mask volumes for any close-by interior, building, or high resolution terrain are streamed in, while ambient mask volumes that are no longer visible are released from the system.
(85) By way of additional examples, an interior mask volume can be applied to interiors only for each interior ambient mask volume. Similarly, simple ambient mask volumes can be applied for each building. For all remaining exterior pixels close to the camera, high level of detail terrain ambient mask volumes can be rendered. For all remaining exterior pixels, a low level of detail terrain ambient mask volume is rendered.
(86) For each blendable building ambient mask volume, ambient mask volumes are blended. For each vehicle or doorway, a transformed ambient mask volume is blended.
Global Illumination
(87) The present disclosure can efficiently account for light reflected from the ground. In some embodiments, the system uses a bounce map that is projected in top-down fashion to determine reflected light. The bounce map is converted into a texture that provides an approximation for the expected bounce back of light. The bounce map provides per frame, dynamic shadows, intensity and brightness affecting the surfaces of objects in the scene. Accordingly, the bounce map advantageously simulates the effect that would be achieved rendering the multiple passes of lighting to account for the natural bounce reflections.
(88) In some embodiments, the bounce lighting can be integrated into a lighting pipeline as an extra full screen pass or extra shader code, such as shown in
(89) Turning now to
(90) By way of example, with reference to
(91) To allow for finding the height of the terrain at a point when rendering the top down bounce map, the height of each texel is stored in a depth map (referred to herein as a "bounce depth map") at the same resolution as the bounce map. Rendering to a depth map can be supported by GPU hardware for efficient rendering. In some embodiments, the GPU efficiently renders a depth-only surface by rendering scene geometry and storing only the distance to the geometry. For example, the pixel rendering stage of the pipeline shown in
(92) As shown in
(93) Multiple layers of blurred textures can then be generated for this texture (e.g., a mipmap to prefilter a texture), each layer at different sizes of blur and lower resolutions. Generating multiple layers of blurred textures optimizes efficiency to allow for a reduced number of samples later when the texture map is accessed.
(94) Following the first pass for generating a map storing lighting on the ground, the process 1200 continues with a second pass of determining the bounced light per pixel from the ground, at 1220. This is a pass over every pixel on the screen that uses the pixel position and the normal to the surface at that point. For each pixel, the light reflected from the bounce map is determined to approximate the light reflected from the ground. To get the height of the pixel position over the ground, the bounce depth map is compared to the height of the current pixel. The rest of the ground is treated as a horizontal plane at this height. This approximation allows for a very simple ray intersection test to determine how light hits the surface after being reflected off the ground.
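The plane approximation makes the intersection test trivial, as this sketch shows (a hypothetical helper illustrating the geometry, not the engine's code):

```python
import numpy as np

def bounce_sample_point(pixel_pos, sample_dir, ground_height):
    """Intersect a ray from a pixel's world position, cast along a
    downward sample direction, with the ground approximated as a
    horizontal plane at `ground_height`. Returns the xy point at
    which to sample the bounce map, or None if the ray points away
    from the plane."""
    p = np.asarray(pixel_pos, dtype=float)
    d = np.asarray(sample_dir, dtype=float)
    if d[2] >= 0.0:
        return None  # ray points up or sideways: no ground hit
    t = (ground_height - p[2]) / d[2]  # single division, no loop
    hit = p + t * d
    return hit[:2]

# A wall pixel 2 m above the ground, sampling 45 degrees downward,
# hits the ground plane 2 m out along +x
hit = bounce_sample_point((0.0, 0.0, 2.0), (1.0, 0.0, -1.0), 0.0)
```

With the real terrain replaced by a plane, the per-sample cost reduces to one division and a texture fetch, which is what makes the four-to-sixteen-sample variants described below affordable.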
(95) Following the second pass for approximating bounced light per pixel, at 1220, the bounce map is sampled to access lighting, at 1230. Sampling the bounce map can occur at any predefined frequency depending on quality and available GPU resources. For example, the bounce map can be sampled once for an approximate solution, thereby using few GPU resources. Alternatively, the bounce map can be sampled four to sixteen times for a more physically accurate solution, with each sample including a calculation of a location and a texture sampling.
(96) For example,
(97) The intersection of the ray along this direction is determined with the ground plane. As previously discussed, this is easily identifiable due to the plane approximation. This intersection point is converted to a location on the bounce map. By sampling the bounce map at this point, an approximate value for the bounced light hitting the surface is calculated. With the blurring added in the first pass, at 1210, both the selected point and nearby areas are illuminated.
(98) Turning to
(99) In an even further embodiment, the single area sample of
(100)
(101)
(102) As yet another example,
(103) Exemplary shader code for implementing bounce lighting is found below:
(104) TABLE-US-00001 Sample Shader Code for single sample
float3 CalculateBounceLookup(float3 worldPos, float3 normal)
{
    // Cull if the normal is facing upwards; careful with the threshold as it can create a seam
    if (normal.z > 0.7)
        return 0.0;
    // Find the ground height
    float2 uvPos = mul(float4(worldPos.xyz + normal * 0.5, 1.0f), GIViewProjMtx).xy;
    uvPos.y = 1.0 - uvPos.y;
    float groundHeight = GetGroundHeight(uvPos);
    // Take a sample along the bounce normal
    float3 col = 0.0;
    // Gather additional samples using importance sampling
    float3 sampleDir = BendNormalDownwards(normal);
    if (sampleDir.z < 0.0)
    {
        float t = abs((worldPos.z - groundHeight) / sampleDir.z);
        // Use larger mips based on worldPos.z
        col += SingleRayBounce(worldPos, sampleDir, t) * abs(sampleDir.z);
    }
    return col;
}
Material Tinting
(105) The present disclosure provides systems and methods for creating several in game object variants from a single model.
(106) Conventionally, per-vertex tinting is limited to changing the tint selection at the three vertex positions of a triangle. Additionally and/or alternatively to per-vertex material tinting, per-pixel tinting can be used, such as shown in
(107) An added benefit is that by changing control textures, multiple variations can be included. Depending on the object, multiple layers of per-pixel material tinting can be used by using multiple channels in the control texture or multiple control textures.
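A minimal sketch of per-pixel material tinting with a control texture and a palette (the array shapes and the multiplicative tint are illustrative assumptions): each pixel of the control texture holds an index into a small palette, and swapping either texture produces a new variant of the same model.

```python
import numpy as np

def apply_per_pixel_tint(base_color, control_index, palette):
    """Per-pixel material tinting: a control texture stores, per
    pixel, an index into a palette; the selected palette entry tints
    the base albedo multiplicatively."""
    tint = palette[control_index]  # gather one palette row per pixel
    return base_color * tint

# Two-tone variant of a 2x2 white base texture
base = np.ones((2, 2, 3))
control = np.array([[0, 1], [1, 0]])                       # per-pixel indices
palette_a = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])   # red / blue
variant_a = apply_per_pixel_tint(base, control, palette_a)
```

Extending the palette rows with extra channels (e.g., metalness or a mud/snow/dust layer weight) gives the material-property variations described in the following paragraph without touching the mesh or base textures.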
(108) Furthermore, to add more variations, the per-pixel palettes modify not only an object's colors but also other material properties, such as metalness, lighting parameters, or additional layers such as mud, snow, or dust.
(109) The disclosed embodiments are susceptible to various modifications and alternative forms, and specific examples thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the disclosed embodiments are not to be limited to the particular forms or methods disclosed, but to the contrary, the disclosed embodiments are to cover all modifications, equivalents, and alternatives.