MESH PROCESSING FOR VIEWABILITY TESTING
20220410002 · 2022-12-29
Inventors
CPC classification
International classification
G06T17/20
PHYSICS
Abstract
A computer-implemented method includes obtaining an input polygon mesh representing at least part of a three-dimensional scene and comprising a plurality of input polygons, and obtaining mapping data for mapping at least part of an image to a region of the input polygon mesh when the three-dimensional scene is rendered. Said region extends at least partway across the plurality of input polygons. The method includes using the mapping data to generate one or more test polygons to match or approximate said region of the input polygon mesh. Each of the generated test polygons is distinct from each of said plurality of input polygons.
Claims
1. A computer-implemented method comprising: obtaining an input polygon mesh representing at least part of a three-dimensional scene and comprising a plurality of input polygons; obtaining mapping data for mapping at least part of an image to a region of the input polygon mesh when the three-dimensional scene is rendered, said region extending at least partway across said plurality of input polygons; and using the mapping data to generate one or more test polygons to match or approximate said region of the input polygon mesh, wherein each of the generated one or more test polygons is distinct from each of said plurality of input polygons.
2. The computer-implemented method of claim 1, wherein: the plurality of input polygons comprises a plurality of triangles; and the generated one or more test polygons comprises one or more quadrilaterals.
3. The computer-implemented method of claim 1, wherein generating the one or more test polygons comprises: generating an intermediate polygon which matches or approximates the plurality of input polygons; and generating a test polygon corresponding to at least a portion of the generated intermediate polygon, wherein dimensions of the at least portion and/or a location of the at least portion relative to the intermediate polygon are dependent on the mapping data.
4. The computer-implemented method of claim 3, wherein: the mapping data indicates a plurality of regions of the input polygon mesh to which respective images are to be mapped; and using the mapping data comprises generating a plurality of test quads corresponding to respective different regions of the input polygon mesh indicated by the mapping data and being respective different portions of the generated intermediate polygon.
5. The computer-implemented method of claim 3, wherein generating the intermediate polygon comprises: determining a texture coordinate bounding box for the plurality of input polygons; for each of one or more input polygons of the plurality of input polygons: determining planar coordinates of each corner of the texture coordinate bounding box with respect to the input polygon; and determining world coordinates of each corner of the texture coordinate bounding box based on the determined planar coordinates and a world space position of each vertex of the input polygon; and determining an average of the determined world coordinates of each corner of the texture coordinate bounding box, whereby to determine world coordinates of each corner of the intermediate polygon.
6. The computer-implemented method of claim 3, wherein the intermediate polygon is a first intermediate polygon, and generating the one or more test polygons comprises: determining a plurality of mutually disconnected parts of the input polygon mesh, each determined part comprising a respective plurality of connected input polygons; and generating a plurality of intermediate polygons, including the first intermediate polygon, each depending on a respective one of the determined parts of the polygon mesh, wherein each of the one or more test polygons is at least a portion of a respective intermediate polygon of the plurality of intermediate polygons.
7. The computer-implemented method of claim 6, further comprising splitting one or more of the determined parts of the input polygon mesh along any edge or chain of edges extending across one of the determined parts and having an isolated substantial angle between input polygons, thereby to determine a plurality of subparts of the input polygon mesh, wherein each of the generated plurality of intermediate polygons matches or approximates a respective one of the determined plurality of subparts.
8. The computer-implemented method of claim 3, wherein the intermediate polygon is a first intermediate polygon, and generating the one or more test polygons comprises: splitting the input polygon mesh along any edge or chain of edges extending across the input polygon mesh and having an isolated substantial angle between polygons, thereby to determine a plurality of subparts of the input polygon mesh; and generating a plurality of intermediate polygons, including the first intermediate polygon, each matching or approximating a respective one of the determined subparts of the input polygon mesh, wherein each of the one or more test polygons is at least a portion of a respective intermediate polygon of the plurality of intermediate polygons.
9. The computer-implemented method of claim 1, comprising: determining that the input polygon mesh has texture coordinates extending beyond a predetermined range, indicating a repeating pattern; and generating a plurality of test polygons corresponding to respective different instances of the repeating pattern.
10. The computer-implemented method of claim 1, further comprising discarding any test polygon smaller than a threshold fraction of said image.
11. The computer-implemented method of claim 1, further comprising grouping test polygons with matching edges to generate one or more test polygon groups.
12. The computer-implemented method of claim 11, further comprising discarding any test polygon group smaller than a threshold fraction of said image.
13. The computer-implemented method of claim 1, wherein: said region extends at least partway across each input polygon of said plurality of input polygons; and the number of generated test polygons is less than the number of input polygons in said plurality of input polygons.
14. The computer-implemented method of claim 1, further comprising: rendering the three-dimensional scene from a perspective of a virtual camera using the input polygon mesh; and processing at least one of the generated one or more test polygons to determine an extent to which said at least one of the generated one or more test polygons is visible from the perspective of the virtual camera.
15. The computer-implemented method of claim 14, wherein: rendering the three-dimensional scene from the perspective of the virtual camera comprises storing, in a depth buffer, depth map data corresponding to a depth map of at least part of the three-dimensional scene and comprising respective depth map values at pixel locations spanning at least part of a field of view of the virtual camera; and processing said at least one of the generated one or more test polygons comprises: generating a plurality of points distributed substantially evenly across a first test polygon of the generated one or more test polygons; for each point of the generated plurality of points lying within said at least part of the field of view of the virtual camera: determining a respective depth map value from the perspective of the virtual camera; and determining, using the depth map data stored in the depth buffer, whether the point is visible from the perspective of the virtual camera based on a comparison between the determined depth map value for the point and a corresponding one or more of the depth map values stored in the depth buffer; and determining an extent to which the first test polygon is visible in dependence on which of the plurality of points are determined to be visible from the perspective of the virtual camera.
16. The computer-implemented method of claim 1, performed during or after loading of the three-dimensional scene into memory for rendering.
17. A data processing system arranged to: obtain an input polygon mesh representing at least part of a three-dimensional scene and comprising a plurality of input polygons; obtain mapping data for mapping at least part of an image to a region of the input polygon mesh when the three-dimensional scene is rendered, said region extending at least partway across the plurality of input polygons; and use the mapping data to generate one or more test polygons to match or approximate said region of the input polygon mesh, wherein each of the generated one or more test polygons is distinct from each of said plurality of input polygons.
18. The data processing system of claim 17, wherein: the plurality of input polygons comprises a plurality of triangles; and the generated one or more test polygons comprises one or more quadrilaterals.
19. The data processing system of claim 17, comprising: a rendering engine for rendering the three-dimensional scene from a perspective of a virtual camera using the input polygon mesh; and a viewability testing module for processing at least one of the generated one or more test polygons to determine an extent to which said at least one of the one or more test polygons is visible from the perspective of the virtual camera.
20. A non-transient storage medium comprising machine-readable instructions which, when executed by a computer, cause the computer to: obtain an input polygon mesh representing at least part of a three-dimensional scene and comprising a plurality of input polygons; obtain mapping data for mapping at least part of an image to a region of the input polygon mesh when the three-dimensional scene is rendered, said region extending at least partway across the plurality of input polygons; and use the mapping data to generate one or more test polygons to match or approximate said region of the input polygon mesh, wherein each of the generated one or more test polygons is distinct from each of said plurality of input polygons.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF CERTAIN INVENTIVE EMBODIMENTS
[0032] Details of systems and methods according to examples will become apparent from the following description with reference to the figures. In this description, for the purposes of explanation, numerous specific details of certain examples are set forth. Reference in the specification to ‘an example’ or similar language means that a feature, structure, or characteristic described in connection with the example is included in at least that one example but not necessarily in other examples. It should be further noted that certain examples are described schematically with certain features omitted and/or necessarily simplified for the ease of explanation and understanding of the concepts underlying the examples.
[0033] Embodiments of the present disclosure relate to processing a polygon mesh representing a three-dimensional scene to generate test polygons for use in viewability testing when the three-dimensional scene is rendered using the polygon mesh. A test polygon has world coordinates substantially corresponding to a surface of an object of interest within the scene, for example a surface on which an advert, or part of an advert, is placed, but unlike the polygons of the polygon mesh, is not intended to be used for rendering the scene. In the present disclosure, viewability testing means determining an extent to which an object (such as an advert) is visible within a scene when the scene is rendered from a perspective of a virtual camera. In particular, embodiments described herein enable more efficient viewability testing when compared with methods making direct use of the polygons used in rendering the scene.
[0035] The server system 104 may be a standalone server or may be a networked system of servers, and in this example is operated by a commercial entity responsible for managing the distribution of adverts to end users (gamers) on behalf of advertisers, though in other examples an equivalent or similar system may be operated directly by an advertiser.
[0036] The gaming device 102 may be arranged to store a video game 118 locally, for example after downloading the video game 118 over the network 106, or may be arranged to read the video game 118 from a removable storage device such as an optical disc or removable flash drive. The video game 118 may be purchased by a user of the gaming device 102 from a commercial entity such as a games developer, license holder or other entity, or may be obtained for free, via a subscription model, or in accordance with any other suitable revenue model. In any of these cases, the commercial entity may obtain additional revenue by selling advertising space within the video game 118 to advertising entities, either directly or via a third party. For example, a video game developer may allocate particular objects, surfaces, or other regions of a scene within the video game 118 as advertising space, such that advertisements appear within said regions when the scene is rendered during gameplay.
[0037] The rendered advertisements may be static images or videos and may be dynamically updated as the user plays the video game 118, for example in response to certain events or certain criteria being satisfied. Furthermore, the rendered advertisements may be updated over time, for example to ensure that the rendered advertisements correspond to active advertising campaigns, and/or in dependence on licensing agreements between commercial entities. The advertisements for rendering are managed at the gaming device 102 by an advert client 120, which communicates with an advert server 122 at the server system 104. For example, the advert server 122 may transmit advert data to the advert client 120 periodically or in response to predetermined events at the gaming device 102 or the server system 104.
[0038] In addition to the advert server 122, the server system 104 includes an analytics engine 124 configured to process impression data received from the gaming device 102 and other gaming devices registered with the server system 104. The impression data may include, inter alia, information regarding how long, and to what extent, an advertisement is visible to users of the gaming devices. The impression data may include information at various levels of detail, for example a simple count of advertising impressions as determined in accordance with a given metric, or more detailed information such as how long a given advertisement is visible to a user during a session, the average on-screen size of the advertisement during that time, and the proportion of the advertisement that is visible during that time.
[0039] The analytics engine may process the impression data for a variety of purposes, for example to match a number of advertising impressions with a number agreed between the distributing party and the advertiser, to trigger the advert server 122 and/or the advert client 120 to update an advert appearing within the video game 118, or to determine an amount of compensation to be paid by the advertiser. It will be appreciated that other uses of impression data are possible, though a detailed discussion of such uses is outside the scope of the present disclosure.
[0040] In order to generate impression data for processing by the analytics engine 124, the gaming device 102 includes a viewability testing module 126. The viewability testing module 126 is responsible for determining the extent to which adverts placed within a scene are visible when the scene is rendered by the rendering engine 116 from a perspective of a virtual camera. In order to do so, the viewability testing module 126 is provided with test polygons corresponding to surfaces on which the adverts are placed, where the test polygons are typically quadrilaterals (referred to hereafter as “quads”), though other types of polygon may be used without departing from the scope of the invention. In accordance with the present disclosure, the test polygons are generated in dependence on a mesh model of the scene. As will be explained in more detail hereinafter, using such test polygons reduces the complexity and processing demands of the viewability testing process, compared with directly using the polygons of the mesh model. In cases where the mesh model is determined at build time, the test polygons may be generated in advance of the game being distributed to end users, and provided as part of the video game 118. Alternatively, if the mesh model is not determined at build time (for example in the case of a procedural mesh), the viewability testing module 126 may be arranged to generate the test polygons at runtime (for example during or after loading of a scene).
[0041] It should be noted that, whilst the viewability testing module 126 is shown separately from the video game 118 in
[0043] The method 200 involves obtaining, at 202, a polygon mesh representing at least part of a three-dimensional scene within a video game, for use in rendering the scene. In this example, the polygon mesh is determined by the game developer at the build stage prior to the video game being distributed to end users. The polygon mesh may exclusively contain triangles, which are particularly convenient for rasterization-based rendering and for which most graphics hardware is optimized, though in some cases the polygon mesh may contain other polygons such as quads in addition to, or instead of, triangles.
[0044] The method 200 proceeds with processing, at 204, the obtained polygon mesh to generate a set of one or more test quads for use in viewability testing when the scene is rendered. The generated test quads may be distinct from the polygons of the obtained polygon mesh, reflecting the different purposes of the test polygons and the polygons used for rendering. For example, test quads may be generated in dependence on respective sets of connected polygons of the polygon mesh, where polygons are considered to be connected if they share at least one vertex, resulting in the number of test quads being less than the number of polygons from which the test quads are generated. The test quads correspond substantially to portions of surfaces on which adverts are intended to be placed in the scene. In cases where an advert covers the entirety of a set of polygons, a test quad may be generated to match or approximate the set of polygons. In cases where an advert covers only parts of the polygons in a set, a test quad may be generated as part of an intermediate polygon which matches or approximates the set of polygons. Exemplary methods of processing a polygon mesh to generate a set of test quads will be described in more detail hereinafter.
[0045] The method 200 proceeds with providing, at 206, the video game to one or more end users. The video game may be provided on a physical medium, downloaded onto gaming devices over a network, or provided by any other suitable means. The video game includes rendering data comprising mesh models for use in rendering individual scenes, along with test quads for any scenes in which adverts are expected or intended to be placed. It is noted that the test quads are not provided for use in rendering, but to enable efficient viewability testing when the scenes are rendered using the unprocessed mesh models.
[0046] At runtime, the gaming device loads, at 208, one or more polygon meshes representing a scene into memory, along with test quads corresponding to any adverts appearing in the scene. On a scene-by-scene basis, the gaming device determines, at 210, a pose of a virtual camera along with various aspects of the scene prior to rendering (such as aspects which may vary dynamically between frames). The gaming device renders, at 212, the scene from the perspective of the virtual camera. Typically, the rendering is based on a graphics pipeline including an application stage, a geometry stage, and a rasterization stage, though alternative graphics pipelines are possible, for example incorporating ray tracing for at least some aspects of the scene. During the application stage, a set of rendering primitives is obtained for a set of models forming the scene. The rendering primitives generally include points, lines, and polygon meshes which collectively represent objects.
[0047] During the geometry stage, coordinates of the rendering primitives are transformed from “model” space to “world” space to “view” space to “clip” space, in dependence on a position and orientation (pose) of the models in the scene, and a pose of the virtual camera. Some primitives may be discarded or clipped, for example primitives falling completely or partially outside the field of view of the virtual camera or outside a predetermined guard band extending beyond the field of view of the virtual camera, along with, optionally, any primitives facing away from the virtual camera, after which the coordinates of surviving primitives are scaled to “normalized device coordinates (NDC)” such that the NDC values for primitives (or portions of primitives) to be displayed within the viewport fall within a predetermined range (usually [−1;1]). Furthermore, depth bias may be introduced to certain polygons to ensure that coplanar polygons (for example representing a surface and a shadow on the surface) are rendered correctly and independently of the rendering order. The resulting output is then scaled to match the size of the viewport in which the scene is to be rendered. The viewport may correspond to the entire display of a display device, or may correspond to only a portion of a display device, for example in the case of split-screen multiplayer, a viewport presented within a decorated frame, or a virtual screen within the computer-generated scene.
[0048] During the rasterization stage, discrete fragments are determined from the rendering primitives, where the size and position of each fragment corresponds to a respective pixel of a frame buffer/viewport. A depth buffer is used for determining which fragments are to be written as pixels to the frame buffer, and at least the fragments to be written to the frame buffer are colored using texture mapping techniques in accordance with pixel shader code. To avoid redundant processing, some video games use a separate initial rendering pass that writes only to the depth buffer, then perform further rasterization steps in a subsequent rendering pass, filtered by the populated depth buffer. Lighting effects may also be applied to the fragments, and further rendering steps such as alpha testing and antialiasing may be applied before the fragments are written to the frame buffer and screen thereafter.
[0049] Returning to the method 200, the gaming device performs, at 214, viewability testing for the frame rendered at 212, using the test quads loaded at 208. The viewability testing may include occlusion detection based, for example, on ray tracing or Z-buffer depth testing. An example of an occlusion detection method based on Z-buffer depth testing involves generating a set of points distributed substantially evenly across a surface of each test quad and, for each of the generated points lying within a field of view of the virtual camera, determining a respective depth map value from the perspective of the virtual camera. A comparison is then performed between the depth map value for the point and a corresponding one or more of the depth map values stored in the Z-buffer, to determine whether the point is visible from the perspective of the virtual camera. An extent to which the test quad is visible is then determined in dependence on which of the points are determined to be visible from the perspective of the virtual camera.
[0050] The extent to which a test quad is visible from the perspective of the virtual camera may refer to a proportion of the test quad that is visible, which may be computed either by (i) dividing the number of visible points on the test quad by the total number of points on the test quad, or (ii) dividing the number of visible points on the test quad by the number of points within the field of view of the virtual camera and multiplying the result by the proportion of the area of the test quad lying within the field of view of the virtual camera. Alternatively, the extent to which the test quad is visible may refer to a proportion of the viewport taken up by the test quad. If the number of points is proportional to the on-screen size of the test quad, then the proportion of the viewport taken up by the object may be calculated by dividing the number of visible points by the number of points which would fit on a surface covering the entire viewport. Alternatively, the proportion of the viewport taken up by the test quad may be determined by dividing the number of visible points by the number of points within the field of view of the virtual camera and multiplying the result by the projected area of the test quad in NDC space divided by the total area of the field of view in NDC space (which is 4, assuming NDC space is normalized to [−1,1]).
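The proportion calculations described above reduce to simple arithmetic, sketched below in Python. The sketch is illustrative and the function names are assumptions; the first two functions correspond to on-quad metrics (i) and (ii), and the third gives the viewport proportion using the projected NDC area (the total NDC area being 4 for the [−1,1]×[−1,1] range).

```python
def fraction_of_quad_visible(num_visible, num_total):
    # Metric (i): visible points divided by all sample points on the quad.
    return num_visible / num_total

def fraction_of_quad_visible_fov(num_visible, num_in_fov, area_fraction_in_fov):
    # Metric (ii): visible points divided by in-FOV points, scaled by the
    # proportion of the quad's area lying within the field of view.
    if num_in_fov == 0:
        return 0.0
    return (num_visible / num_in_fov) * area_fraction_in_fov

def fraction_of_viewport(num_visible, num_in_fov, projected_area_ndc):
    # Viewport metric: visible points divided by in-FOV points, scaled by the
    # quad's projected NDC area over the total NDC area of the field of view
    # (4.0, assuming NDC space is normalized to [-1, 1] in both axes).
    if num_in_fov == 0:
        return 0.0
    return (num_visible / num_in_fov) * (projected_area_ndc / 4.0)
```

For example, a test quad with 30 of 60 in-FOV points visible and half of its area inside the field of view would be 25% visible under metric (ii).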
[0051] Evaluations of either of the metrics described above may be used to generate impression data or other summary data, for example to be transferred to a remote server system as described above with reference to
[0052] Following the rendering of a frame at 212 and the viewability testing at 214, the method 200 returns to 210 to update the pose of the virtual camera along with any dynamic aspects of the scene. The steps 210-214 are performed repeatedly on a frame-by-frame basis while the video game is played, thereby generating near real-time viewability data for the adverts appearing within the video game environment. In other examples, viewability testing may be performed at a lower frequency than the rendering of frames.
[0053] In the example of
[0055] The method 400 proceeds by determining, at 404, mutually disconnected parts of the polygon mesh. Each part includes a set of polygons directly or indirectly connected to one another, where polygons are deemed to be connected if they share at least one vertex. For a polygon mesh representing several objects that are not in contact with one another, the step of determining mutually disconnected parts of the polygon mesh involves generating a separate mesh part for each object. In cases where all of the polygons of the polygon mesh are directly or indirectly connected to one another, the polygon mesh consists of a single mesh part and the processing at 404 can be omitted. An example of an algorithm for determining parts of a polygon mesh is given below (written in pseudocode, which is to be understood to be illustrative and not prescriptive). The algorithm determines parts of the polygon mesh by iteratively generating new parts, starting from no generated parts, and merging the generated parts whenever a polygon from one part is determined to be connected to a polygon from another part:

[0056] for each polygon j in the polygon mesh:
[0057]     maintain a mesh part reference “j added to”, initialized to <none>
[0058]     for each generated part:
[0059]         if the polygon j is connected to the part:
[0060]             if “j added to” is not <none>:
                       update the mesh part reference “k added to” for each polygon k in the current part to match “j added to”
                       remove the current part
[0061]             otherwise: set “j added to” to the current part
[0062]     if “j added to” is <none>:
[0063]         generate a new part and set “j added to” to the new part
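The grouping produced by the merging procedure above can equivalently be sketched in Python with a union-find structure. This sketch is illustrative and not part of the disclosed method; the function name and the representation of polygons as tuples of vertex indices are assumptions for the example.

```python
def mesh_parts(polygons):
    """Group polygons into mutually disconnected parts.

    `polygons` is a list of vertex-index tuples; polygons sharing at least
    one vertex are deemed connected. Returns a list of sets of polygon
    indices, one set per mutually disconnected part.
    """
    parent = list(range(len(polygons)))

    def find(i):
        # Find the representative of polygon i, with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Union any two polygons that share a vertex.
    first_seen = {}  # vertex index -> first polygon index using that vertex
    for poly_idx, poly in enumerate(polygons):
        for v in poly:
            if v in first_seen:
                parent[find(poly_idx)] = find(first_seen[v])
            else:
                first_seen[v] = poly_idx

    # Collect polygons by representative.
    parts = {}
    for i in range(len(polygons)):
        parts.setdefault(find(i), set()).add(i)
    return list(parts.values())
```

For example, two triangles sharing a vertex form one part, while a triangle with no shared vertices forms its own part.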
[0065] The method 400 proceeds by splitting, at 406, any part of the polygon mesh that includes an edge between a pair of connected polygons, or a contiguous chain of edges between respective pairs of connected polygons, which extends across said part of the polygon mesh and has an isolated substantial angle. In the present disclosure, an edge is defined as a straight line connecting two vertices. An edge or chain of edges is said to extend across a mesh part if the extrema of the edge or chain are at respective sides of the mesh part, such that the edge or chain connects the respective sides of the mesh part. As a result, one or more of the parts determined at 404 may be split into several subparts. A shared edge between two connected polygons is said to have a substantial angle if the angle between the planes of the two connected polygons is greater than a predetermined threshold value. The threshold value may be set to zero or to a small value greater than zero to account for small errors caused by limited precision computing, which may otherwise result in a substantial angle being erroneously detected. An edge having a substantial angle is said to have an isolated substantial angle if none of its neighboring edges have a substantial angle. Neighboring edges may be defined as edges which both belong to a common polygon, though other definitions are possible, for example further requiring that the edges share a vertex with one another.
[0066] The splitting at 406 ensures that for a mesh part comprising several substantially flat sections, a separate subpart will be generated for each substantially flat section. It is common for such geometries to appear in video game environments, for example where a curved surface is approximated by a series of flat sections. In the example of
[0067] An example of an algorithm for splitting a part of a polygon mesh is given below (understood to be illustrative and not prescriptive):

[0068] generate polygon adjacency data indicating, for each edge of each polygon, which polygon the edge is shared with
[0069] determine polygon normals (the normalized cross product of two edges of the polygon, taken consistently so that the normals consistently indicate which sides of the polygons are forward-facing)
[0070] determine edge angles (these can be stored directly as angles, or indirectly, for example as the dot product of the determined normals of the polygons sharing the edge, which is equal to the cosine of the angle)
[0071] generate edge splits for edges having an isolated substantial angle
[0072] flood-fill the polygons into new subparts, each subpart containing a set of polygons connected to one another by unsplit edges
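The normal and edge-angle steps above can be illustrated with a Python sketch. The cosine threshold of 0.999 (an angle of roughly 2.5 degrees) is an assumed example of the "small value greater than zero" discussed above, and the function names and triangle representation are illustrative.

```python
def normal(tri, pos):
    # Unit normal of a triangle: the normalized cross product of two edges,
    # taken in winding order so normals consistently indicate the
    # forward-facing sides of the polygons.
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (pos[i] for i in tri)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

def edge_has_substantial_angle(tri_a, tri_b, pos, cos_threshold=0.999):
    # The shared edge has a "substantial angle" if the angle between the two
    # polygon planes exceeds the threshold, i.e. the dot product of the unit
    # normals (the cosine of the angle) falls below cos_threshold.
    na, nb = normal(tri_a, pos), normal(tri_b, pos)
    dot = sum(x * y for x, y in zip(na, nb))
    return dot < cos_threshold
```

Two coplanar triangles give a dot product of 1 (no substantial angle), whereas folding one triangle out of the plane drops the dot product below the threshold.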
[0073] In the present disclosure, “flood-fill” refers to any algorithm for determining and labelling a set of mutually connected nodes. Well-known examples of algorithms that may be used for this purpose include stack-based recursive flood-fill algorithms, graph algorithms in which nodes are pushed onto a node stack or a node queue for consumption, and other connected-component labelling (CCL) algorithms.
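As an illustration of the final step, the flood-fill into subparts might be implemented with a breadth-first search, one of the connected-component labelling approaches mentioned above. The adjacency representation (polygon index mapped to (neighbour, edge) pairs) and the function name are assumptions for this sketch.

```python
from collections import deque

def flood_fill_subparts(num_polygons, adjacency, split_edges):
    """Label polygons into subparts connected by unsplit edges.

    `adjacency` maps a polygon index to its edge-sharing neighbours as
    (neighbour, edge) pairs; edges in `split_edges` are not traversed.
    Returns a list giving each polygon's subpart label.
    """
    label = [None] * num_polygons
    next_label = 0
    for start in range(num_polygons):
        if label[start] is not None:
            continue  # already assigned to a subpart
        label[start] = next_label
        queue = deque([start])
        while queue:
            poly = queue.popleft()
            for neighbour, edge in adjacency.get(poly, ()):
                # Only traverse edges that were not split.
                if edge not in split_edges and label[neighbour] is None:
                    label[neighbour] = next_label
                    queue.append(neighbour)
        next_label += 1
    return label
```

For a chain of three polygons whose second shared edge was split, the first two polygons receive one label and the third receives another.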
[0074] The method 400 continues by generating, at 408, an intermediate quad for each determined part of the polygon mesh (or each subpart if splitting took place at 406). Each intermediate quad matches or approximates a set of connected polygons of the polygon mesh. The position, orientation and dimensions of the intermediate quad generally depend on the orientations and positions of the polygons forming the mesh part, as well as the texture mapping used to apply adverts and other decoration to the mesh part. The intermediate quad substantially corresponds to a region of texture space covered by the mesh part. In cases where a mesh part or subpart does not correspond to an axis-aligned rectangular region of texture space, the intermediate quad may extend beyond the outer edges of polygons forming the mesh part. The intermediate quads may be generated using an averaging process based on at least some of the polygons of the mesh part or subpart, for example as described below with reference to
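One such averaging process can be sketched in Python: for each triangle, each corner of the part's texture coordinate bounding box is expressed in barycentric coordinates (serving as the planar coordinates) with respect to the triangle's texture coordinates, mapped into world space using the triangle's vertex positions, and the results are averaged over the triangles. The sketch is illustrative; the data layout (dicts of world positions and UV coordinates keyed by vertex index) is an assumption.

```python
def quad_corners_from_part(triangles, world, uv):
    # Texture coordinate bounding box of the part, as four UV corners.
    us = [uv[v][0] for tri in triangles for v in tri]
    vs = [uv[v][1] for tri in triangles for v in tri]
    corners_uv = [(min(us), min(vs)), (max(us), min(vs)),
                  (max(us), max(vs)), (min(us), max(vs))]
    corners_world = []
    for cu, cv in corners_uv:
        acc = [0.0, 0.0, 0.0]
        for tri in triangles:
            (u0, v0), (u1, v1), (u2, v2) = (uv[v] for v in tri)
            det = (u1 - u0) * (v2 - v0) - (u2 - u0) * (v1 - v0)
            # Barycentric weights of the corner (cu, cv) in UV space.
            b1 = ((cu - u0) * (v2 - v0) - (cv - v0) * (u2 - u0)) / det
            b2 = ((cv - v0) * (u1 - u0) - (cu - u0) * (v1 - v0)) / det
            b0 = 1.0 - b1 - b2
            # Map to world space using the triangle's vertex positions.
            for axis in range(3):
                acc[axis] += (b0 * world[tri[0]][axis]
                              + b1 * world[tri[1]][axis]
                              + b2 * world[tri[2]][axis])
        # Average over the triangles to obtain the quad corner.
        corners_world.append(tuple(a / len(triangles) for a in acc))
    return corners_world
```

For a flat mesh part whose UVs match its world-space layout, each averaged corner coincides with the corresponding corner of the part.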
[0075] In the example of
[0076] The method 400 proceeds with generating, at 410, test quads from the intermediate quads determined at 408. Each test quad is formed as a part (or whole) of a corresponding intermediate quad, and thereby corresponds to a particular region of a texture to be applied to the corresponding mesh part or subpart, for example a region corresponding to an advert (possibly clipped if the advert covers multiple subparts). In cases where an advert covers an entire mesh part and an intermediate quad matches the mesh part, the test quad may be identical to the intermediate quad. In cases where an advert does not cover the entirety of a mesh part, or where a mesh part does not match an intermediate quad, a test quad may cover only a portion of the intermediate quad. In order to determine which mesh part or subpart is covered by the advert, mapping data is provided, including for example the texture coordinates of the advert, from which the position and dimensions of the test quad can be interpolated using the vertex positions of the intermediate quad. Texture coordinates define how an image (or portion of an image) is mapped to a geometry during rendering, and typically include u and v coordinates which are orthogonal in two-dimensional texture space, and optionally w and q coordinates to enable additional texture mapping functionality. Nevertheless, other types of mapping data may be used to specify where adverts are located, for example indicating that an advert covers the entirety of a specified mesh part, or by defining offsets, rotations, scalings and the like. In any case, the information conveyed by the mapping data is resolvable to a two-dimensional texture mapping (equivalent to each vertex having UV coordinates), which corresponds to the texture mapping used for rendering (up to data/calculation precision limits).
[0077] In some examples, a single mesh part may include multiple adverts (or parts of adverts) corresponding to multiple regions of the texture, or multiple instances of a repeating advert. Accordingly, a given intermediate quad may result in several test quads being generated. In other examples, a mesh part may include no adverts, resulting in no test quads being generated. In the example of
[0078] In some cases, a mesh part may have texture coordinates extending beyond a default maximum range (usually [0;1]). When such a mesh part is rendered, the texture may repeat in accordance with a specified texture wrapping mode, enabling adverts which repeat in one or both directions (as is often seen, for example, on advertising boards around a stadium at a sporting event). For viewability testing purposes, it is desirable to generate a single test quad for each instance of the repeating advert. Instances of a repeating advert extending beyond an edge of the mesh part may be clipped, in which case the corresponding test quads should also be clipped. An example of a method for generating a set of test quads corresponding to instances of one or more specified texture rectangles is given below (understood to be illustrative and not prescriptive):
[0079] round the texture coordinate bounding box for the intermediate quad outwards (apply the floor function to the minimum values of the UV coordinates and the ceiling function to the maximum values of the UV coordinates)
[0080] for each cell (1×1) of the rounded texture coordinate bounding box:
[0081] for each specified texture rectangle:
[0082] offset the coordinates of the texture rectangle by integer values to move the texture rectangle into the cell
[0083] clip the texture rectangle against the original texture coordinate bounding box for the intermediate quad
[0084] if there is no overlap between the texture rectangle and the cell, ignore the current combination of cell and texture rectangle
[0085] bilinearly interpolate the world coordinates of the intermediate quad's vertices to generate a test quad corresponding to the texture rectangle (the interpolation factors being determined by reverse interpolating the clipped texture rectangle against the intermediate quad's texture coordinate bounding box).
[0086] calculate normalized texture coordinates for the test quad's vertices by reverse interpolating the clipped texture rectangle (moved back to the [0;1] cell) against the original texture rectangle.
[0087] Note that, in the present disclosure, “interpolate” refers to the linear interpolation function lerp(a, b, t)=a+(b−a)*t, and “reverse interpolate” refers to the inverse operation invlerp (a, b, v)=(v−a)/(b−a). For the algorithm above, the texture rectangles are assumed to be aligned with the texture coordinate axes, though in other examples this may not be the case.
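The cell-enumeration and clipping steps of the algorithm above can be sketched as follows. This is an illustrative sketch only: the function and parameter names are assumptions, and the final world-coordinate interpolation of each resulting test quad (step [0085]) is omitted for brevity:

```python
import math

def tile_texture_rects(bbox, rects):
    """Enumerate clipped instances of axis-aligned texture rectangles
    across the wrapped (1x1) texture cells covered by bbox.

    bbox  -- (u_min, v_min, u_max, v_max) for the intermediate quad
    rects -- advert rectangles (u0, v0, u1, v1) within the [0;1] cell
    Yields (clipped_rect, cell) pairs; test quad world coordinates would
    then follow by bilinear interpolation as described in the text.
    """
    u_min, v_min, u_max, v_max = bbox
    # Round the bounding box outwards to whole cells.
    for cu in range(math.floor(u_min), math.ceil(u_max)):
        for cv in range(math.floor(v_min), math.ceil(v_max)):
            for (r0, s0, r1, s1) in rects:
                # Offset the rectangle by integers into the current cell.
                a0, b0, a1, b1 = r0 + cu, s0 + cv, r1 + cu, s1 + cv
                # Clip against the original (unrounded) bounding box.
                c0, d0 = max(a0, u_min), max(b0, v_min)
                c1, d1 = min(a1, u_max), min(b1, v_max)
                if c0 >= c1 or d0 >= d1:
                    continue  # no overlap: skip this cell/rect pair
                yield (c0, d0, c1, d1), (cu, cv)
```

For instance, a quad whose texture coordinates span three cells horizontally (bounding box (0, 0, 3, 1)) with a single advert rectangle per cell yields three test quad instances, one per repeat.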
[0088] The final step of the above algorithm is optional, but may be useful for two purposes. Firstly, for viewability testing, it may be important to know what fraction of an advert is covered by a given test polygon. A test quad (or group of test quads as discussed below) covering less than a predetermined fraction of an advert (for example, ½) may never generate a viewability event or impression, and may therefore be discarded from the set of test quads. The fraction of the texture rectangle covered by the test quad is given by the area enclosed by the normalized texture coordinates of the test quad's vertices (or equivalently by dividing the area of the clipped texture rectangle by the area of the original texture rectangle corresponding to the advert). Secondly, the determined normalized texture coordinates may be used for the purpose of grouping test quads, as discussed below.
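The coverage-fraction check described above can be sketched as a ratio of rectangle areas in texture space (illustrative names; the ½ threshold is the example given in the text):

```python
def rect_area(rect):
    # Area of an axis-aligned texture rectangle (u0, v0, u1, v1).
    u0, v0, u1, v1 = rect
    return max(0.0, u1 - u0) * max(0.0, v1 - v0)

def covered_fraction(clipped_rect, original_rect):
    # Fraction of the advert's texture rectangle covered by the test quad.
    return rect_area(clipped_rect) / rect_area(original_rect)

# A test quad covering only the left half of a full-cell advert:
frac = covered_fraction((0.0, 0.0, 0.5, 1.0), (0.0, 0.0, 1.0, 1.0))
# A test quad (or group) with frac below the threshold (e.g. 0.5) can
# never produce a valid impression and may be discarded.
```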
[0089] The method 400 concludes with an optional step of grouping, at 412, test quads having matching edges into test quad groups. In some examples, a single advert may span multiple test quads, and in such cases, it is desirable for the viewability testing to generate data pertaining to the advert as a whole, rather than for respective portions of the advert. It is therefore advantageous for test quads to be grouped automatically to facilitate this functionality. An example of an algorithm for connecting test quads to generate test quad groups is given below (understood to be illustrative and not prescriptive):
[0090] for each test quad qA:
[0091] for each corner cA:
[0092] for each matching corner cB of another test quad qB:
[0093] check corners forward and backward for both test quads (i.e. in opposite directions around the test quads)
[0094] if there are two corner matches total: add a bidirectional link between qA and qB, connecting the quads
[0095] flood-fill the connected test quads to form test quad groups
[0096] This algorithm results in test quad groups in which test quads are connected along matching edges. Corners are considered to match if their normalized texture coordinates (as calculated above) match and their world coordinates match within a predetermined tolerance (where the tolerance may be introduced to avoid errors caused by limited precision computation). In some examples, bidirectional links are not generated at edges corresponding to minimum/maximum values of the texture coordinates (typically 0 and 1), because such edges typically represent boundaries between separate adverts or between instances of a repeating advert, in which cases it is desirable for separate test quad groups to be generated. In order to avoid erroneous connection of test quads corresponding to different adverts, the test quads may be labelled according to which advert they correspond to. Alternatively, the entire method of generating and connecting test quads may be run separately for different adverts.
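A simplified sketch of the linking and flood-fill steps is given below. It is illustrative only: the data layout is an assumption, matching edges are detected by counting matching corners rather than by walking forward/backward around each quad, and the exclusion of links at texture coordinate boundaries is omitted:

```python
from collections import defaultdict, deque

def group_test_quads(quads, tol=1e-5):
    """Group test quads that share a matching edge (two matching corners).

    quads -- list of quads, each a list of four corners; a corner is
             ((x, y, z), (u, v)) with world and normalized texture coords.
    Returns a list of sets of quad indices (the test quad groups).
    """
    def corners_match(ca, cb):
        (wa, ta), (wb, tb) = ca, cb
        # Texture coordinates must match exactly; world coordinates
        # within a tolerance (to absorb limited-precision computation).
        return ta == tb and all(abs(x - y) <= tol for x, y in zip(wa, wb))

    links = defaultdict(set)
    n = len(quads)
    for a in range(n):
        for b in range(a + 1, n):
            matches = sum(1 for ca in quads[a] for cb in quads[b]
                          if corners_match(ca, cb))
            if matches == 2:  # a shared edge: link the quads bidirectionally
                links[a].add(b)
                links[b].add(a)

    # Flood-fill the link graph to form test quad groups.
    groups, seen = [], set()
    for start in range(n):
        if start in seen:
            continue
        group, queue = set(), deque([start])
        while queue:
            q = queue.popleft()
            if q in group:
                continue
            group.add(q)
            queue.extend(links[q] - group)
        seen |= group
        groups.append(group)
    return groups
```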
[0097] Due to the implicit nested loop over test quads qA and qB needed to identify matching corners cA and cB, the above algorithm results in a computational cost proportional to the square of the number of test quads, which can be very high when a large number of test quads is generated. In order to mitigate this issue, acceleration methods such as sweep-and-prune (also known as sort-and-sweep) can be utilized. In a particular example, a separate data structure is generated for each of the three coordinate axes, into which the test quad corners are sorted in order of the respective coordinate values. To find matching corners cB for a given corner cA, a binary search is performed on each axis for upper and lower bounds of a tolerated range, yielding three lists of results containing possibly different numbers of entries. Each corner appearing in the shortest list of results (or one of the shortest lists of results if there is no unique shortest list of results) is then compared with the given corner cA to determine whether that corner is a matching corner. Corners identified as matching are assessed to determine whether they are part of a matching edge.
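The per-axis sorted structures and binary searches described above can be sketched as follows (an illustrative sketch with assumed names, not the claimed implementation):

```python
import bisect

def build_axis_indexes(corners):
    """Sort corner indices by each world coordinate axis.

    corners -- list of (x, y, z) world positions of test quad corners.
    Returns, for each of the three axes, a list of (coordinate, index)
    pairs sorted by coordinate value.
    """
    return [sorted((c[axis], i) for i, c in enumerate(corners))
            for axis in range(3)]

def candidate_matches(corners, indexes, i, tol=1e-5):
    """Find matching corners for corner i via per-axis binary search.

    On each axis, all corners within +/- tol of corner i's coordinate
    are located by binary search; only the shortest of the three
    candidate lists is then checked exactly against corner i.
    """
    per_axis = []
    for axis in range(3):
        keys = [k for k, _ in indexes[axis]]
        lo = bisect.bisect_left(keys, corners[i][axis] - tol)
        hi = bisect.bisect_right(keys, corners[i][axis] + tol)
        per_axis.append([j for _, j in indexes[axis][lo:hi] if j != i])
    shortest = min(per_axis, key=len)
    # Full three-axis comparison is performed on the short list only.
    return [j for j in shortest
            if all(abs(corners[i][a] - corners[j][a]) <= tol
                   for a in range(3))]
```

The binary searches reduce the cost of finding candidates from linear to logarithmic per corner, so the overall cost is closer to n log n than to n squared.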
[0098] Although in the present example it is assumed that test quads of a given test quad group are generated from a single polygon mesh (namely, the polygon mesh received at 402), in other cases a scene may be composed of several meshes, each of which is transformed by a respective mesh matrix to give the correct position, orientation and scale in the scene. In cases where an advert spans multiple meshes, the method 400 may be augmented to include transforming the separate meshes by the corresponding mesh matrices such that positions and orientations of the resulting test quads reflect the relative positions and orientations of the meshes within the scene, ensuring that the appropriate test quad groups are still generated.
[0099] As mentioned above, certain test quad groups (or individual test quads) generated using the above method may be too small to generate viewability events during viewability testing, according to predetermined criteria. An example of such a criterion is that the total fraction of an advert covered by a quad group is less than a predetermined threshold, for example ½, which would result in <50% of the advert being visible from the test quad group at all times. Such test quad groups may be discarded to avoid redundant processing being performed at the viewability testing stage.
[0100]
[0101] For each polygon of the relevant part of the polygon mesh (or for a subset comprising one or more polygons of the relevant part of the polygon mesh), the method 700 proceeds with determining, at 704, planar coordinates of each corner of the texture coordinate bounding box with respect to the polygon. This generally involves converting the Cartesian UV coordinates of the bounding box corners to a coordinate system in which coordinates are specified with respect to fixed points on the polygon. For example, if the polygon is a triangle, then the planar coordinates may be barycentric coordinates. If the polygon is a quadrilateral or a higher-order polygon, then the polygon may be triangulated and barycentric coordinates determined for each resultant triangle. Other types of planar coordinate systems may be used, for example a non-orthogonal coordinate system with axes aligned with diagonals of a quad.
[0102] For each polygon of the relevant part of the polygon mesh, the method 700 proceeds with determining, at 706, world coordinates of each corner of the texture coordinate bounding box, based on the determined planar coordinates of the texture coordinate bounding box and the world space positions of the vertices of the polygon. For example, if the planar coordinates are barycentric coordinates, then the world space coordinates can be determined by interpolating between the world space coordinates of the triangle vertices.
[0103] The processing at 704 and 706 may be performed for each polygon of the relevant part of the polygon mesh, or for a subset of the polygons of the mesh part (such as a predetermined subset or a randomly sampled subset). For each such polygon, the world space coordinates determined at 706 define vertices of an intermediate quad which is coplanar with the polygon (assuming the polygon is planar), and in the case of a flat mesh part, may correspond to the desired intermediate quad. However, for a non-flat mesh part, different polygons will result in different intermediate quads, at least some of which will not be coplanar with one another. The desired intermediate quad is generated by determining, at 708, an average of the determined world coordinates for each corner of the texture coordinate bounding box. The average world coordinates determined for each corner of the texture coordinate bounding box give the world coordinates of the corresponding corner of the intermediate quad. The average may be a simple average or may be a weighted average, for example where the weightings depend on the relative sizes of the polygons.
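For the triangle case, the processing at 704 to 708 can be sketched as follows (an illustrative sketch with assumed names and a simple, unweighted average; barycentric weights may fall outside [0, 1], which is how the intermediate quad can extend beyond the polygons of the mesh part):

```python
def barycentric(p, a, b, c):
    # Barycentric coordinates of 2D point p with respect to triangle abc.
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return w0, w1, 1.0 - w0 - w1

def intermediate_quad(triangles, bbox):
    """Average, over a mesh part's triangles, the world position each
    triangle implies for each corner of the texture coordinate bbox.

    triangles -- list of (world_verts, uv_verts), three of each per triangle
    bbox      -- (u_min, v_min, u_max, v_max) in texture space
    Returns the four world-space corners of the intermediate quad.
    """
    u0, v0, u1, v1 = bbox
    corners_uv = [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]
    quad = []
    for uv in corners_uv:
        acc = [0.0, 0.0, 0.0]
        for world, uvs in triangles:
            # Planar (barycentric) coordinates of this bbox corner (704),
            # then the implied world position for this triangle (706).
            w = barycentric(uv, *uvs)
            for k in range(3):
                acc[k] += sum(w[j] * world[j][k] for j in range(3))
        # Simple average over the triangles (708).
        quad.append(tuple(v / len(triangles) for v in acc))
    return quad
```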
[0104] The method of
[0105]
[0106] The above embodiments are to be understood as illustrative examples. Further embodiments are envisaged. For example, although the above examples have discussed the generation of test quads, the methods discussed herein could be used to generate other types of test polygon, for example to cater for situations in which adverts are not adequately modelled using quadrilaterals. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.