Method For Generating An Acoustic Environment Model
20240367046 · 2024-11-07
Assignee
Inventors
- Loic Couthier (London, GB)
- Alexei Smith (London, GB)
- Nick Ward-Foxton (London, GB)
- Atsushi Suganuma (London, GB)
- Christopher Buchanan (London, GB)
- Lewis Thresh (London, GB)
- Michael Eder (London, AU)
- Lewis Barn (London, GB)
CPC classification
A63F13/54
HUMAN NECESSITIES
A63F2300/538
HUMAN NECESSITIES
H04S5/00
ELECTRICITY
International classification
Abstract
A computer-implemented method for generating an acoustic environment model for a video game level is provided. The method comprises: obtaining graphical data for visually rendering a gameplay environment of the video game level, the graphical data comprising a three-dimensional geometrical model for the gameplay environment, and generating, based on the graphical data, an acoustic environment model corresponding to the gameplay environment and adapted for the simulating of gameplay sounds therein.
Claims
1. A computer-implemented method for generating an acoustic environment model for a video game level, the method comprising: obtaining graphical data for visually rendering a gameplay environment of the video game level, the graphical data comprising a three-dimensional geometrical model for the gameplay environment; and generating, based on the graphical data, an acoustic environment model corresponding to the gameplay environment and adapted for the simulating of gameplay sounds in the gameplay environment.
2. The method according to claim 1, wherein the generating of the acoustic environment model comprises: identifying, in the three-dimensional geometrical model, one or more boundaries of a region of the gameplay environment; and defining, based on the identified boundaries, a set of surfaces arranged to enclose a three-dimensional region within the acoustic environment model.
3. The method according to claim 2, wherein the generating of the acoustic environment model comprises assigning, to the three-dimensional region, one or more audio characteristic parameters.
4. The method according to claim 3, wherein the one or more audio characteristic parameters comprise one or more of: a surface texture parameter; a reverberation parameter; a room acoustics parameter; a room size parameter; or an environment type parameter.
5. The method according to claim 3, further comprising generating the one or more audio characteristic parameters based on the graphical data.
6. The method according to claim 5, wherein the generating an audio characteristic parameter is based on one or more of: analysing the spatial arrangement of features of the three-dimensional geometrical model, inferring material properties of features of the three-dimensional geometrical model, inferring atmospheric conditions, or predicted ambient sound.
7. The method according to claim 2, wherein the identifying of one or more boundaries comprises: applying a boundary prediction function to the three-dimensional geometrical model to predict therefrom one or more further boundaries; and defining at least one further surface of the set of surfaces for each predicted further boundary.
8. The method according to claim 7, wherein the boundary prediction function is configured to: detect boundary structures present in the three-dimensional geometrical model; and based on the detected boundary structures, predict one or more further boundary structures that should exist and are absent from the three-dimensional geometrical model.
9. The method according to claim 7, wherein the boundary prediction function is configured to: identify one or more environment transition zones, each corresponding to an interface between in-game areas having different acoustic properties; and predict one or more further boundaries for each identified transition zone.
10. The method according to claim 1, wherein the generating of the acoustic environment model comprises applying a simplification function to the three-dimensional geometrical model to produce a second three-dimensional geometrical model corresponding to and having a lower level of geometric detail than the first geometrical model, wherein the second three-dimensional geometrical model is used in generating the acoustic environment model.
11. The method according to claim 10, wherein the simplification function comprises applying a low-pass spatial filter to the first three-dimensional geometrical model.
12. The method according to claim 10, wherein the simplification function is configured to exclude, from the second three-dimensional geometrical model, features present in the first three-dimensional geometrical model that have a level of geometric detail greater than a predetermined detail threshold.
13. The method according to claim 10, wherein the simplification function is configured to include, in the second three-dimensional geometrical model, geometrically simplified approximations of objects present in the first three-dimensional geometrical model that have a size greater than a predetermined threshold size.
14. The method according to claim 1, further comprising: applying a portal detection function to the three-dimensional geometrical model, the portal detection function being configured to identify therefrom one or more acoustically transmissive portions for inclusion in the acoustic environment model; and modifying the acoustic environment model to include the identified acoustically transmissive portions.
15. A system configured to perform the method of claim 1, the system comprising: an obtaining unit configured to obtain graphical data for visually rendering a gameplay environment of the video game level, the graphical data comprising a three-dimensional geometrical model for the gameplay environment; and a generating unit configured to generate, based on the graphical data, an acoustic environment model corresponding to the gameplay environment and adapted for the simulating of gameplay sounds in the gameplay environment.
Description
BRIEF DESCRIPTION OF DRAWINGS
Examples of the present invention will now be described, with reference to the accompanying drawings, in which like features are denoted by like reference signs, and in which:
[0067]
[0068]
[0069]
[0070]
[0071]
DETAILED DESCRIPTION
[0072] With reference to the abovementioned figures, examples of methods according to the invention are now described.
[0073] Video game development is a complex process that typically involves the following stages: concept and design, pre-production, production, testing, and release. The placement within the game environment of soundboxes, which can be thought of as geometric volumes in which sound should behave or be perceivable in a particular way, typically occurs during the production stage.
[0074] Prior to the placement of soundboxes the production stage includes the creation of a physics and graphical model for a gameplay environment within a level. The physics model determines the behaviour of objects within the game world, including gravity, friction, collision detection, and the like. The graphical model, on the other hand, determines the visual appearance of the game world, including the placement and look of objects, terrain, and other environmental features.
[0075] Once the physics and graphical models have been created, the next stage of production is to create and configure the soundboxes themselves. Soundboxes are regions within the gameplay environment that define different audio characteristics. For example, one soundbox in an environment may define an area with reverb or echo, while another soundbox may define an area with a specific music track, and another may define a region within which a particular ambient sound or object-associated sound effect is audible.
[0076] During game development, sound designers typically use specialized software tools that allow them to place sound emitters within the game environment. These sound emitters can be placed anywhere within the game world, and can be configured to emit a wide range of audio effects, including music, sound effects, and ambient noise. The placement of soundboxes throughout a level is important as it allows a set of rules, dependent on the location of a player or sound receiver, to be defined so as to determine how those sound emitters should interact with the game world and be perceived by the player or sound receiver. For example, they may define that certain sound emitters should only be active when the player is in a certain area, or that certain sounds should be modulated in different ways in dependence on the acoustic characteristics of a given in-game environment.
[0077] A first example involves a gameplay environment representing a room within a castle. The room contains stone walls, a stone floor partly covered by a rug, a door, two stained glass windows, a large bookshelf covering part of a wall, an alcove in a wall, and a candelabra in the centre of the room.
[0078] A graphical and physics model of the room has been created, and is shown in
[0079] Data comprising the three-dimensional geometrical graphical model 106 depicted in
[0080] At step 202, the computer device applies a boundary detection algorithm to the graphical model. The algorithm is adapted to identify boundaries based on the geometrical data. In particular, it is adapted to locate, within a provided geometrical model, any boundaries that are relevant to the acoustic qualities of the space represented by the model, and to the propagation of virtual sound waves within that space. In the present example implementation, the algorithm uses a combination of geometric and topological information to detect the boundaries. The algorithm firstly identifies the set of all faces in the model. Owing to the high level of geometrical detail needed to depict the complex and realistic three-dimensional objects and surfaces, the number of faces is high, and is typically in the order of thousands of faces or greater. The algorithm then determines the connectivity between adjacent faces to establish their relation. Each face is then classified as being either an exterior face, an interior face, or a boundary face. An exterior face is a face that has no neighbouring face, whereas an interior face is bounded by other faces in the model. A boundary face is a face that separates an interior region from an exterior region.
[0081] Once the algorithm identifies the boundary faces, it determines the boundaries of the space. It does this by tracing along the boundary faces and identifying edges that belong to the same boundary. The algorithm then uses these edges to form the boundaries, which are represented as a set of closed loops.
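The face-classification and loop-tracing logic described in paragraphs [0080] and [0081] can be illustrated with a minimal sketch. The code below is an assumed simplification, not the disclosed implementation: it counts how many faces of a polygon mesh share each edge, then labels each face as exterior (no shared edges), interior (every edge shared), or boundary (a mixture), which approximates the classification described above.

```python
from collections import defaultdict

def classify_faces(faces):
    """Label each face of a polygon mesh by edge connectivity.

    `faces` is a list of vertex-index tuples. An edge shared by two
    faces links them as neighbours; a face with no shared edges is
    'exterior', a face whose edges are all shared is 'interior', and
    a face with a mixture of the two is 'boundary'.
    """
    edge_count = defaultdict(int)
    for face in faces:
        for i in range(len(face)):
            edge = tuple(sorted((face[i], face[(i + 1) % len(face)])))
            edge_count[edge] += 1

    labels = []
    for face in faces:
        n = len(face)
        shared = sum(
            edge_count[tuple(sorted((face[i], face[(i + 1) % n])))] > 1
            for i in range(n)
        )
        if shared == 0:
            labels.append("exterior")
        elif shared == n:
            labels.append("interior")
        else:
            labels.append("boundary")
    return labels
```

A production system would operate on meshes with thousands of faces, as the text notes; here two connected triangles and one isolated triangle suffice to exercise the boundary and exterior labels.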
[0082] In this example, the algorithm classifies six boundaries, namely the walls, floor, and ceiling of the room environment, three of which are shown in the rendering of
[0083] At step 203, the identified boundaries are then used, by a soundbox generation algorithm, to define a set of surfaces defining each soundbox in the environment. In the present simple example, the substantially cuboidal room in the graphical model 106 can have its acoustic properties closely approximated by a cuboid of the same dimensions and aspect ratio. The soundbox generation algorithm accordingly defines, for each of the major boundaries 140a, 140b, 150 in the graphical model 106, a planar face 160a, 160b, 170, as shown in
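As a sketch of the surface-definition step at 203, the code below fits an axis-aligned cuboid to the vertices of the identified boundaries, mirroring the way the substantially cuboidal room is approximated by planar faces of matching dimensions. The patent does not disclose the actual fitting procedure, so this bounding-box approach is an assumption.

```python
def fit_soundbox(boundary_points):
    """Return the (min, max) corners of the axis-aligned cuboid that
    encloses the given boundary vertices, i.e. the six planar faces
    of a cuboidal soundbox."""
    xs, ys, zs = zip(*boundary_points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```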
[0084] After the boundaries are identified, they can be used both to simulate the behaviour of sound waves within the space represented by the model, and also to define locations at which particular sounds are activated or modified in specific ways. Thus an acoustic environment model 111 is generated at step 204. The model 111, which may be referred to as a soundbox model, includes the six major orthogonal faces identified based on the identified boundaries of the room. Additionally, the creation of the model includes further features, based on features of the input graphical model 106, that can be used to improve the simulation of sound in the game environment.
[0085] That is to say, the cuboidal soundbox region generated to represent the main boundaries of the room environment is enhanced in the acoustic environment model 111 with further audio-relevant features extracted from the graphical model 106. An algorithm is applied to the graphical model in order to identify any such features that ought to be included and modify the soundbox geometry accordingly.
[0086] The graphical model includes an alcove 141 recessed into the wall structure 140b. The algorithm identifies this feature as being suited for inclusion in the acoustic model based on its dimensions. In the present example, the depth of the alcove, that is its extent in the y-axis beyond the major face of the wall 140b, its extent in the x and z axes, and its volume overall are found by the algorithm to be sufficiently large, for example by comparison with linear size or volume thresholds, to be relevant to acoustical properties. The threshold is preferably configured so that any volumes between faces such as this, which could reasonably represent spaces in which acoustic reverberation or similar effects could occur in the virtual space, are included. The method therefore modifies the acoustic environment model 111 so that the planar surface 160b includes a cuboidal recess 161.
[0087] The shape and dimensions of the recess are defined so as to approximate the acoustic characteristics of the corresponding feature 141 of the graphical model. However, in order to optimise the use of computational and storage resources, the shape of the void 161 is simplified so as to exclude the gothic arch detail of the geometry present in the graphical model 141, and additionally to exclude the candelabra 142 that is present in the graphical model, on the basis that these details would not substantially affect the propagation of sound waves in the virtual space.
[0088] The exclusion of these geometrical details from the modified feature 161 in the acoustic model 111 is based on a geometrical simplification function that excludes small or highly spatially variant features.
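The simplification function described here, which drops small features and approximates highly detailed ones, can be sketched as a filter over a feature list. The field names and the size and detail thresholds below are illustrative assumptions; the patent does not specify them.

```python
def simplify_features(features, min_size=0.5, max_detail=0.05):
    """Filter graphical features down to an acoustically relevant set.

    Each feature is a dict with 'size' (largest linear extent, in
    metres) and 'detail' (a proxy for spatial variance, e.g. mean
    local curvature). Features below `min_size` are dropped outright;
    features above `max_detail` are kept but flagged for replacement
    by a simplified bounding volume.
    """
    kept = []
    for feature in features:
        if feature["size"] < min_size:
            continue  # too small to affect sound propagation
        kept.append({**feature, "approximate": feature["detail"] > max_detail})
    return kept
```

Applied to the castle-room example, a filter of this kind would drop the candelabra while retaining the alcove, with the gothic arch detail flagged for replacement by a simpler bounding volume.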
[0089] In the graphical model 106, the wall 140b also includes a bookshelf feature, in which a plurality of books 143 are rendered as three-dimensional objects, or as a set of faces representative of the combined surfaces of the arrangement of books. The acoustic model generation algorithm may recognise, for example by way of a machine learning model, or through a deterministic approach, section 143 of the wall 140b as a distinct section suitable for being associated with different audio properties from the surrounding wall section. Accordingly, the algorithm defines a rectangular boundary 164 with a size and position on the soundbox wall 160b corresponding to those of the graphically rendered bookshelf 143 in the wall 140b.
[0090] The distinct planar region 163 of the wall that is enclosed by the boundary 164 can accordingly have acoustic properties associated with it that are different from those associated with the surrounding wall, for example corresponding to the different acoustic absorption and reflection properties of leather and stone respectively.
[0091] The generating of the acoustic environment model can also include the identification of portals within boundaries in the graphical model, that is, interfaces or apertures between one in-game environment and another, such as doorways between rooms. The graphical model 106 comprises three such portals. The first of these is the doorway 155, which is a traversable portal in that it can be used by a player to travel between environments. It is desirable, therefore, for any audio effects attributable to an environment to which such a portal leads to be perceivable as though sound could propagate through that portal in a suitable manner.
[0092] Likewise, the stained glass windows 156a, 156b represented in the graphical model 106, though not configured to be player-traversable by the physics engine, will preferably also be represented in the acoustic environment model in a suitable way.
[0093] The generating of the acoustic environment model 111 may therefore modify the soundbox to include a portal 175 corresponding to the doorway 155. This may be based on the inferred geometrical properties of the graphically represented door 155, for example by way of recognising through its geometry that it represents an aperture or a passageway. Additionally or alternatively the doorway 155 may be recognised as a portal by way of some data or flag identifying it as such in the graphical data or physics data.
[0094] Based on this, the soundbox of the acoustic environment model 111 is modified to include a rectangular aperture with an aspect ratio, position, and size based on those of the graphical doorway 155. The resulting acoustic portal 175 can thereafter be linked to other acoustic environment models corresponding preferably to in-game environments adjoining that from which the present acoustic environment model 111 is generated.
[0095] For example, ambient or environmental noises configured to play within an adjoining room in the castle may be audible when a player is within the soundbox 111 and at a location close to the portal 175. Suitable drop-off distances may likewise be configured to enable this location-dependent audibility.
[0096] The two windows 156a, 156b present in the graphical model 106 can be identified by the algorithm as portals in a similar manner to the doorway 155. Additionally, owing to the geometry, surface texture information, or other characteristics of the windows in the graphical information, the corresponding acoustic portals 176a, 176b can be configured to have appropriate properties, such as a lower level of sound transmission, or the application of an audio filter that muffles sounds simulated as passing through the windows, so as to represent the presence of glass windows covering the apertures 176a, 176b, in contrast with the open doorway representation 175.
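The door and window portals discussed above can be sketched as records carrying a transmission coefficient and an optional muffling filter. The coefficient values and the "lowpass" filter name are illustrative assumptions, not values from the patent.

```python
def make_portal(aperture, kind="door"):
    """Build an acoustic portal from an aperture's 2D bounds in the
    plane of its wall.

    `aperture` is ((x0, y0), (x1, y1)). An open doorway transmits
    sound fully; a glazed window transmits less and muffles what
    passes through (both values are assumed for illustration).
    """
    (x0, y0), (x1, y1) = aperture
    return {
        "width": x1 - x0,
        "height": y1 - y0,
        "centre": ((x0 + x1) / 2, (y0 + y1) / 2),
        "transmission": 1.0 if kind == "door" else 0.3,
        "filter": None if kind == "door" else "lowpass",
    }
```

Linking such a record to the acoustic model of an adjoining room would then let sounds configured there be heard, attenuated and filtered appropriately, near the portal.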
[0097] Additionally, the acoustic environment model may be configured, preferably in dependence on information representative of adjoining or nearby environments within the level, to include appropriate audio effects associated with the types of portals. For example, the modified soundbox may include areas proximal to the window portals 176a, 176b in which a sound effect of wind blowing or raindrops hitting glass during a storm is audible.
[0098] For any of these features, the configured audio effects may be added or modified by game developers easily, with the appropriate identification and placement of soundboxes within the acoustic environment model having been achieved by the algorithm so as to render any such sound effects more accurately and realistically, based upon the input graphical model 106.
[0099] It can be seen from
[0100] However, the algorithm may be configured to include some features from the graphical model in spite of their high spatially variant geometry or small size, if, for example, such features are recognised as being suitable for giving rise to audio effects in the gameplay environment. For instance, the table and candelabra 179 in the centre of the room represented in the graphical model 106 are determined by the algorithm to be irrelevant to the simulation of sound wave propagation within the room, to the extent that they can be excluded from the initially produced and modified model 111. However, the graphical information may include an indication that the candles are alight, for example by way of a flickering animation associated with the graphical objects. The algorithm for producing the acoustic environment model 111 may infer from this that the object produces a sound, and may produce a corresponding feature of the soundbox 199. Once the cuboidal object 199 has been placed within the acoustic model 111, a notification may be provided to a sound designer, or the computer device itself may be configured to associate an appropriate sound effect with the object 199. For instance, the animation data for the candelabra 179 may be recognised by an algorithm as being associated with a flickering flame, and an appropriate sound effect may be sourced and associated with the corresponding region 199 in the acoustic environment model 111. In this way, when a player is in the room during gameplay, the candelabra 179 may be surrounded by a soundbox 199 that adds a warm, flickering sound effect that is audible when the player is sufficiently close to the object.
[0101] Once relevant soundbox boundaries have been defined for the acoustic environment model 111, at step 205 an audio characteristic algorithm is used to calculate the acoustic properties arising from and defined for spaces demarcated by its boundaries 160a, 160b, 170. For example, the algorithm can calculate, for the soundbox faces, reflection coefficients, transmission coefficients, and absorption coefficients based on inferences from the surfaces of the graphical model 106 that correspond to them. These properties can then be used to simulate the behaviour of sound waves within the space represented by the acoustic model 111.
[0102] In particular, the audio characteristic algorithm can be used to identify, using the graphical data 106, the various surfaces present in the room as representing walls, floors, and ceilings, and to calculate their acoustic properties accordingly. For example, the algorithm may identify, based on the texture map pattern applied to the walls 140a, 140b, that those boundaries represent, or are associated with, stone, and have a high reflection coefficient, while the floor, or a portion thereof, comprises a carpeted area and can be ascribed a high absorption coefficient. The boundary detection algorithm can also, in some embodiments, define boundaries based on differences, discontinuities, or interfaces between different texture maps applied to surfaces in the graphical model 106, and create different soundboxes, or partition soundboxes into separate portions, based thereon. For example, the carpeted area 151 can be separated from the stone floor area 152 by a soundbox boundary (not shown). That boundary can be used to define different footstep sound effect behaviour depending on whether the player is on or off the carpet, and can additionally be used to model the reverberation characteristics of the room 111 as a whole, taking account of the contrasting acoustic absorption and reflection characteristics that can be defined for the carpeted and uncovered areas.
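The texture-to-material inference described above can be sketched as a keyword lookup from texture-map names to acoustic coefficients. The material table below is an assumption for illustration; coefficients of this order are typical of room-acoustics references, but the patent does not specify any values.

```python
# Illustrative material table; names and coefficients are assumptions.
MATERIALS = {
    "stone":  {"absorption": 0.05, "reflection": 0.95},
    "carpet": {"absorption": 0.60, "reflection": 0.40},
    "wood":   {"absorption": 0.15, "reflection": 0.85},
    "glass":  {"absorption": 0.10, "reflection": 0.90},
}

def surface_acoustics(texture_name):
    """Infer acoustic coefficients for a surface from a keyword match
    against its texture-map name, falling back to a neutral default."""
    for material, coefficients in MATERIALS.items():
        if material in texture_name.lower():
            return dict(coefficients)
    return {"absorption": 0.20, "reflection": 0.80}
```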
[0103] Based on the audio characteristic calculations, the algorithm can simulate the behaviour of sound waves within the room. For example, it may predict that sound waves originating from a source in one corner of the room will reflect off the stone walls and be amplified, while sound waves originating from the same source in a different corner of the room may be absorbed by the bookshelves or a carpeted section of the floor and therefore be weaker. Such predictions can be used to design acoustic treatments for the room that will optimize its acoustic qualities for a particular application, or various in-game activities.
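One standard way to turn such per-surface absorption values into a room-level reverberation estimate is Sabine's formula, RT60 = 0.161 V / Σ Sᵢαᵢ. The patent does not state which acoustic model its algorithm uses, so Sabine's formula is offered only as a plausible, well-known example.

```python
def rt60_sabine(volume_m3, surfaces):
    """Estimate reverberation time (seconds) with Sabine's formula.

    `surfaces` is an iterable of (area_m2, absorption_coefficient)
    pairs; total absorption is the sum of area times coefficient.
    """
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption
```

A stone-walled room (low absorption) yields a long RT60, consistent with sound reflecting off the walls and reverberating strongly, while carpet or bookshelves shorten it.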
[0104] After the creation of soundboxes for an environment, the development process typically comprises testing their placement and making any necessary adjustments to ensure that they are placed correctly and that they produce the desired effects. This is typically done using specialized testing software that allows developers to play through the game and listen to how the soundboxes interact with the game world.
[0105] During the testing phase, developers will typically make adjustments to the placement of soundboxes and the rules that govern their behaviour based on player feedback and their own observations. The sound designer may need to adjust the size or shape of the soundboxes, as well as the specific sound effects associated with each box, based on the testing results. This iterative process ensures that the placement of soundboxes is optimized for the best possible player experience.
[0106] The soundboxes are then included in the game files and are activated by the game engine when the player enters the relevant areas within the game world during gameplay.
[0107] In the first example illustrated in
[0108] When the boundary identification algorithm is applied to the graphical model 306, therefore, the resulting soundboxes include a tunnel section 334, comprising a geometrical approximation of the tunnelled structure in the graphical and physical model 331, and an uncovered section 335, in which no ceiling boundary has been detected, and so the corresponding soundbox portion 335 includes no enclosing overhead plane or surface. It will be understood that the reverberation characteristics of these two spaces will be modelled differently by the sound engine during gameplay, resulting in appropriately different acoustic qualities when sound emitters and sound receivers are located in these two soundbox sections 334, 335.
[0109] As with the first example, in the present case, the acoustic environment model is defined as a geometrically simplified approximation of the comparatively complex geometrical model comprised by the graphical data 306. In addition to the placement of simple, planar surfaces 360, 370 partly bounding the soundbox regions, and based on the major boundaries identified in the graphical model 306, various features have been omitted. For example, the soundbox modification process has not included a representation in the acoustic environment model 311 of the windows 356a, 356b present in the graphical model 306. A determination to exclude such features may be configured based on a minimum threshold size or area for features or portals to be included or excluded from the acoustic model, for example. This threshold may be set higher for gameplay environments in which subtle acoustic effects such as those arising as a result of the presence of small windows in a wall can be expected to have no significant effect on simulated gameplay audio, such as in a racing game. The structural details 359 of the graphical model are also excluded when the acoustic model 311 is produced. This exclusion may be effected based on either or both of the distance of the graphical features 359 from a playable or traversable part of the graphical model 306 being used to create soundboxes, and the absence of such features 359 from a physics model, despite their presence in the graphical model. That is, the towers 359 may be represented in the graphical model 306 as non-interactive features, or features that have no collision during gameplay, and this information may be used to infer that they are sufficiently far removed from an immediate gameplay area as to be substantially irrelevant to sound simulation in the environment.
[0110] In contrast to the exclusion of certain structures and boundaries from an acoustic model, and the creation of at least partly open soundboxes, in some examples, such as the third example depicted in
[0111] One such example is shown in
[0112] In order to address this issue of incomplete graphical models, in the present example the method further includes a boundary prediction function configured to predict one or more further structures that should exist and are absent from the three-dimensional geometrical model 406. This function may employ, for example, a machine learning model trained on data sets including geometrical representations of realistic architectural structures or other spaces. Based on this, the prediction function may generate a predicted set of geometrical data representing missing sections 438 of a graphical model. In the present example, as shown in
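A simple geometric precursor to such a boundary prediction function is to find the mesh's open edges, that is, edges used by only one face, which outline any missing surface such as an unmodelled ceiling. The sketch below implements just that rim detection; the trained model described in the text, which would then generate the missing geometry, is not reproduced here.

```python
from collections import defaultdict

def open_edges(faces):
    """Return the edges of a polygon mesh that belong to exactly one
    face. A closed (watertight) mesh has none; a mesh that is missing
    a boundary surface exposes the rim of the hole here.
    """
    count = defaultdict(int)
    for face in faces:
        for i in range(len(face)):
            edge = tuple(sorted((face[i], face[(i + 1) % len(face)])))
            count[edge] += 1
    return sorted(edge for edge, c in count.items() if c == 1)
```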
[0113] Once the appropriate boundaries have been predicted, the generating of soundboxes may proceed, including predicted boundaries as well as those present in the graphical information 406. The resulting acoustic environment model 411 is shown in
[0114] The degree of geometrical simplification applied to the graphical model in order to produce the acoustic model 411 can be seen to be less than that in the preceding example. Such a configuration may be used when particularly high levels of sound wave propagation simulation are necessary, for example in a stealth game, and in environments in which room acoustics are particularly noticeable, such as the cathedral environment depicted in
[0115] It will be understood that soundboxes need not only represent physical boundaries or enclosed spaces for the purposes of modelling acoustics. In some cases, soundboxes may be used to control locations and regions in which in-game sounds are active during gameplay. A further example environment is shown in
[0116] As shown in
[0117] The boundary detection algorithm may employ a statistical model or a machine learning model to identify these three zones as being distinct from one another. This may be based, for example, on the density of features such as trees and other landscape details, surface textures and image data used for rendering the graphical model 506, and the shapes of features present in the areas, for example recognising different types of flora. Boundaries separating the identified regions may be identified, and soundboxes 511a, 511b, 511c, 511d can be generated to define these regions, that is their extent and the boundaries between them, in the acoustic model 511. In
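The zone-identification step can be sketched as a classifier over simple per-region statistics such as feature density. The zone names, input statistics, and cut-off values below are all illustrative assumptions standing in for the statistical or machine learning model described in the text.

```python
def classify_zone(tree_density, water_fraction):
    """Assign a terrain region to a zone type from two assumed
    statistics: tree features per square unit, and the fraction of
    the region's surface texels classified as water."""
    if water_fraction > 0.5:
        return "swamp"
    if tree_density > 0.4:
        return "forest"
    return "plains"
```

Each classified region would then receive its own soundbox, with soundbox boundaries placed along the interfaces between differently classified regions.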
[0118] During gameplay, when a player enters a zone, the in-game audio may be adapted in dependence on acoustic or audio characteristics ascribed to the soundbox in which the player is located. These may include environmental sounds such as birdsong and crickets chirping, different musical cues associated with the different soundboxes, and different acoustic characteristics based on inferred or configured sound propagation, ground firmness, and texture characteristics, for example, depending on the type of terrain and features present in or characteristic of the particular type of environment corresponding to a given soundbox.