VIDEO ANALYTICS SYSTEM
20230222798 · 2023-07-13
CPC classification: G06V10/774; G06V20/46; G06V20/49
Abstract
A computer-implemented method for sampling and analyzing data from at least one image frame from at least one series of image frames captured by at least one sensor comprises: defining at least one sampling model, wherein the sampling model is defined in a virtual 3D-vector space and is based on one or more predetermined shapes in the virtual 3D-vector space; applying the at least one sampling model to at least one part of the at least one image frame of the at least one series of image frames, wherein the applying of the at least one sampling model defines at least one area of the at least one image frame from which data is to be extracted; extracting data from the at least one area of the at least one image frame defined by the sampling model; and analyzing the extracted data.
Claims
1. A computer-implemented method for sampling and analyzing data from at least one image frame from at least one series of image frames captured by at least one sensor, the method comprising: defining at least one sampling model, wherein the at least one sampling model is defined in a virtual 3D-vector space and is based on one or more predetermined shapes in the virtual 3D-vector space; applying the at least one sampling model to at least one part of the at least one image frame of the at least one series of image frames, wherein the applying of the at least one sampling model defines at least one area of the at least one image frame from which data is to be extracted; extracting data from the at least one area of the at least one image frame defined by the at least one sampling model; and analyzing the extracted data.
2. The method of claim 1, wherein the one or more predetermined shapes in the virtual 3D-vector space are selected from at least one of the following shapes: 3D-shapes, 2D-shapes, 1D-shapes or 0D-shapes.
3. The method of claim 2, wherein the 3D-shapes are parallelepipeds and/or polyhedrons and/or spheres and/or cylinders and/or wherein the 2D-shapes are planar or curved surfaces and/or parallelograms and/or wherein the 1D-shapes are line segments and/or wherein the 0D-shapes are points.
4. The method of claim 1, wherein applying the at least one sampling model to the at least one part of the at least one image frame of the at least one series of image frames comprises correlating the at least one sampling model with one or more reference points in the at least one image frame of the at least one series of image frames.
5. The method of claim 4, wherein the correlating further comprises carrying out a mapping transformation between one or more points of the at least one sampling model and the one or more reference points in the at least one image frame of the at least one series of image frames.
6. The method of claim 1, wherein the one or more predetermined shapes in the virtual 3D-vector space on which the at least one sampling model is based are divided into one or more elements or blocks that constitute the one or more predetermined shapes.
7. The method of claim 1, wherein extracting data from the at least one part of the at least one image frame of the at least one series of image frames onto which the at least one sampling model was applied, comprises extracting data from image frame pixels that are in an image frame area contained in or covered by a shape of the at least one sampling model applied to the at least one part of the at least one image frame and saving the extracted data in an array.
8. The method of claim 7, further comprising extracting data from image frame pixels that are in an image frame area contained in or covered by an element or block of a shape of the at least one sampling model applied to the at least one part of the at least one image frame and storing the extracted data in at least one array.
9. The method of claim 1, wherein the same at least one sampling model is applied to different parts of the at least one image frame of the at least one series of image frames and/or wherein the same at least one sampling model is applied to a plurality of images of the at least one series of image frames or wherein the same at least one sampling model is applied to all of the plurality of images of the at least one series of image frames.
10. The method of claim 1, wherein the extracting data from the at least one part of the at least one image frame onto which the at least one sampling model was applied, comprises transforming the data.
11. The method of claim 1, wherein the at least one image frame onto which the at least one sampling model is applied, or the at least one part of the at least one image frame onto which the at least one sampling model is applied, is subjected to a pre-treatment before the extracting of data.
12. The method of claim 1, wherein analyzing the extracted data comprises: analyzing the extracted data to detect a desired pattern, wherein the desired pattern comprises a predetermined situation and/or movement and/or behavior and/or action of objects and/or subjects within a real 3D-scene that is represented in the at least one part of the at least one image frame of the at least one series of image frames captured by the at least one sensor, and providing a notification or alarm upon detection of the desired pattern; and/or using the extracted data as input for a machine learning system to train the machine learning system to detect a desired pattern, wherein the desired pattern comprises a predetermined situation and/or movement and/or behavior and/or action of objects and/or subjects within a real 3D-scene that is represented in the at least one part of the at least one image frame of the at least one series of image frames captured by the at least one sensor; and/or using the extracted data as input for a trained machine learning system for detecting a desired pattern, wherein the desired pattern comprises a predetermined situation and/or movement and/or behavior and/or action of objects and/or subjects within a real 3D-scene that is represented in the at least one part of the at least one image frame of the at least one series of image frames captured by the at least one sensor, and providing a notification or alarm upon detection of the desired pattern.
13. The method of claim 1, wherein applying the at least one sampling model to at least one part of the at least one image frame of the at least one series of image frames takes into account a movement of the at least one sensor during capturing of image frames from the at least one series of image frames, and/or the method further comprises: sampling and analyzing data from image frames from a plurality of different series of image frames taken by a plurality of sensors with different viewpoints for capturing image frames and taking into account the different viewpoints of the plurality of sensors when applying the at least one sampling model to image frames taken by the plurality of sensors.
14. A computer-readable storage medium having stored therein instructions that, when executed by one or more processors, direct the one or more processors to perform a method according to claim 1.
15. A video analytic system, comprising: at least one sensor configured for capturing image frames; and at least one computing system comprising one or more processors configured to carry out the method according to claim 1 for sampling and analyzing data from image frames captured by the at least one sensor.
16. The method of claim 6, wherein the one or more predetermined shapes in the virtual 3D-vector space on which the at least one sampling model is based are divided evenly or non-evenly in any or all of their geometric dimensions into one or more elements or blocks that constitute said one or more predetermined shapes.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0095] The following figures illustrate example embodiments of the present disclosure.
DETAILED DESCRIPTION
[0102] An exemplary image frame 100 comprises a plurality of image frame pixels 103 and an exemplary image frame coordinate system with coordinate axes 101, 102.
[0103] An exemplary projection 104 of a predetermined shape, e.g., of an exemplary 2D-shape in the form of a parallelogram defined in a (virtual) 3D-vector space, is applied to the image frame 100, wherein the image frame pixels 105 covered by the projection 104 lie on or within the perimeter 106 of the projection 104.
[0104] In other words, the 2D-shape can define an exemplary sampling model for sampling and analyzing data from the image frame 100, wherein the application or projection of the sampling model, i.e., the projection of the 2D-shape, i.e., of a parallelogram defined in a (virtual) 3D-vector space, defines an exemplary area 107 of the at least one image frame from which data is to be extracted, i.e., an exemplary region of interest.
[0105] Stated differently, in order to sample and analyze data from the image frame 100, data is extracted only from the image frame pixels 105 that lie within the area 107 and/or on or within the perimeter 106 of the projection 104 of the sampling model, i.e., the perimeter 106 of the projection 104 of the exemplary parallelogram 2D-shape.
[0106] For completeness, it is noted that the reference numerals 101, 102 denote, by way of example, possible coordinate axes, e.g., an X-axis 101 and a Y-axis 102, of the image frame.
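Purely as a minimal, non-limiting illustration, the following Python/NumPy sketch shows one possible way to select the image frame pixels lying on or within such a projected perimeter; the function name and the corner coordinates are hypothetical and chosen only for this sketch.

```python
import numpy as np

def pixels_in_convex_polygon(frame_shape, vertices):
    """Return a boolean mask of the pixels lying inside a convex
    polygon, e.g., inside the projected perimeter 106 of a
    parallelogram-shaped sampling model.

    vertices: (x, y) corners in consistent winding order; for the
    opposite winding, invert the sign of the test below.
    """
    h, w = frame_shape
    ys, xs = np.mgrid[0:h, 0:w]                 # per-pixel coordinates
    mask = np.ones((h, w), dtype=bool)
    v = np.asarray(vertices, dtype=float)
    for i in range(len(v)):
        x0, y0 = v[i]
        x1, y1 = v[(i + 1) % len(v)]
        # Keep pixels on the interior side of each directed edge.
        cross = (x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0)
        mask &= cross >= 0
    return mask

# Hypothetical parallelogram corners projected onto a 480x640 frame;
# data would then be extracted only from frame[mask], cf. area 107.
mask = pixels_in_convex_polygon(
    (480, 640), [(100, 100), (300, 120), (340, 300), (140, 280)])
```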
[0107] A further example relates to an exemplary sampling model 210 based on an exemplary 3D-shape 200 in the form of an exemplary parallelepiped, e.g., an exemplary 3D-cuboid 211, defined in an exemplary (virtual) 3D-vector space 212.
[0108] The exemplary sampling model 210 or exemplary 3D-shape 200, i.e., the exemplary cuboid 211, is, for example, defined by four points, e.g., an exemplary set 209 of four reference points P_1 (201), P_2 (202), P_3 (203), P_4 (204), with coordinates P_1 (x_1^m, y_1^m, z_1^m), P_2 (x_2^m, y_2^m, z_2^m), P_3 (x_3^m, y_3^m, z_3^m) and P_4 (x_4^m, y_4^m, z_4^m), wherein x, y, z are coordinates along the orthogonal coordinate axes X (205), Y (206), Z (207), the superscript m denotes the exemplary sampling model 210, and the subscripts 1, 2, 3 and 4 denote the number of the reference point.
[0109] Stated differently, the coordinates 209 of the exemplary reference points P_1 (201), P_2 (202), P_3 (203), P_4 (204) are provided in coordinates of the exemplary (virtual) 3D-vector space spanned by the exemplary orthogonal coordinate axes X (205), Y (206), Z (207).
[0110] In the illustrated exemplary case, P_1 (x_1^m, y_1^m, z_1^m) is located in the origin of the exemplary (virtual) 3D-vector space, i.e., P_1 (x_1^m, y_1^m, z_1^m) = P_1 (0, 0, 0), and the other points are located on the exemplary coordinate axes, i.e., P_2 (x_2^m, y_2^m, z_2^m) = P_2 (x_2^m, 0, 0) with x_2^m ≠ 0, P_3 (x_3^m, y_3^m, z_3^m) = P_3 (0, y_3^m, 0) with y_3^m ≠ 0, and P_4 (x_4^m, y_4^m, z_4^m) = P_4 (0, 0, z_4^m) with z_4^m ≠ 0.
[0111] The exemplary sampling model 210 or exemplary 3D-shape 200 may, for example, represent a model of an object in real 3D-space, such as, for example, a control gate or a fare gate or a corridor or a volume in real 3D-space.
[0112] The dimensions of the exemplary sampling model 210 or exemplary 3D-shape 200 may inter alia be adjusted to better match or approximate specific dimensions and scales of different instances or realizations of the object in real 3D-space that the sampling model 210 or exemplary 3D-shape 200 is supposed to represent.
[0113] This way similar objects in real 3D-space can be sampled with the same sampling model(s) and/or the same model(s) can be applied to different viewing perspectives of the same object in real 3D-space, e.g., when captured in image frames from different sensors having different points of view of the object or scene in real 3D-space. In this context, the expression of “same sampling models” can inter alia be understood as sampling models having the same topology, i.e., comprising pre-determined shapes with the same topologies.
[0114] Furthermore, the same sampling model(s) and/or the same model(s) can be applied to different objects in real 3D-space that have the same or similar shape or topology, e.g., different realizations or different instances of a control gate or fare gate at different physical locations, e.g., different metro stations, captured in separate series of image frames by different sensors.
[0115] As described in general above, the predetermined shapes in the (virtual) 3D-vector space on which the at least one sampling model can be based can themselves be divided into one or more elements or blocks or sub-shapes that constitute the shapes.
[0116] For example, the shapes in the 3D-vector space on which the sampling model can be based can be divided evenly or non-evenly in any or all of their geometric dimensions into one or more elements or blocks that constitute the shapes.
[0117] The elements or blocks that can constitute a shape in (virtual) 3D-vector space can also be defined as the smallest non-divisible units of a shape and may also be referred to as "shape atoms" or "primitives" or "voxels."
[0118] In the exemplary case illustrated in the figure, the exemplary 3D-shape 200, i.e., the exemplary 3D-cuboid 211, is, by way of example, divided or sliced evenly along the three coordinate axes 205, 206, 207 into 4, 3 and 2 parts, respectively.
[0119] This exemplary division or exemplary slicing of the exemplary 3D-shape 200 then generates 4*3*2=24 smaller 3D-shapes, i.e., smaller 3D-cuboids, i.e., exemplary voxels 208.
[0120] Each of the voxels 208 can then, for example, be associated with an element and/or value in a data structure such as a multi-dimensional array, e.g., a tensor of dimensions (4, 3, 2).
[0121] It is emphasized that the herein and above-described exemplary division or slicing of the exemplary 3D-shape 200 is just an example, and other division or slicing schemes can also be applied to divide or slice an exemplary shape of an exemplary sampling model, e.g., an exemplary 3D-shape may be divided into voxels that can be associated with an element and/or value in a data structure such as a multi-dimensional array, e.g., a tensor of dimensions (i, j, k) with i, j, k being integers greater than 0.
[0122] The same holds for other shapes, e.g., 2D-shapes and/or 1D-shapes of a sampling model.
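As a minimal, non-limiting illustration of such a division, the following Python/NumPy sketch slices an axis-aligned cuboid into i*j*k voxels and allocates a multi-dimensional array, e.g., a tensor of dimensions (4, 3, 2), with one element per voxel; the helper name and the cuboid dimensions are hypothetical.

```python
import numpy as np

def slice_cuboid(corner_min, corner_max, divisions):
    """Divide an axis-aligned cuboid into voxels and allocate a
    matching data tensor, cf. the division of 3D-shape 200 into
    voxels 208.

    divisions: (i, j, k) numbers of slices along X, Y, Z.
    Returns the grid planes per axis and a tensor of shape (i, j, k).
    """
    lo = np.asarray(corner_min, dtype=float)
    hi = np.asarray(corner_max, dtype=float)
    # i slices along an axis are bounded by i + 1 grid planes.
    planes = [np.linspace(lo[d], hi[d], divisions[d] + 1) for d in range(3)]
    tensor = np.zeros(divisions)  # one element/value per voxel
    return planes, tensor

# Exemplary cuboid divided into 4*3*2 = 24 voxels as in the text.
planes, tensor = slice_cuboid((0, 0, 0), (4.0, 3.0, 2.0), (4, 3, 2))
print(tensor.shape)  # (4, 3, 2)
```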
[0123] In order to apply or map or project the exemplary sampling model 210, i.e., the exemplary 3D-shape 200, i.e., the exemplary 3D-cuboid 211, and its voxels 208 onto a two-dimensional image frame, an exemplary procedure can comprise, for example, identifying four image reference points I_1, I_2, I_3, I_4 in the image frame(s) that are easily identifiable and can be replicated in different scenes in the real 3D-space, and then determining a/the mathematical transformation from the (virtual) 3D-vector space into a/the 2D-image-frame space.
[0124] The exemplary image frame space may be spanned, for example, by exemplary orthogonal coordinate axes X^f, Y^f, wherein 'f' refers to frame.
[0125] For example, a parallel projection can be simply defined with four image reference points being located at vertices of a fare gate box (see also the fare gate examples described further below).
[0126] Once these four exemplary image reference points are identified and located on the two-dimensional image frame to be sampled and analyzed, their pixel coordinates can be taken or identified.
[0127] For example, let us denote the coordinates of the exemplary image reference points I_1, I_2, I_3, I_4 as (x_1^f, y_1^f), (x_2^f, y_2^f), (x_3^f, y_3^f), (x_4^f, y_4^f), where 'f' again refers to frame and the coordinates are provided with respect to the exemplary orthogonal coordinate axes X^f, Y^f of the image frame.
[0128] If we associate these exemplary image reference point coordinates in the image frame with the corresponding reference point coordinates of the exemplary sampling model 210, i.e., the exemplary 3D-shape 200, i.e., the exemplary 3D-cuboid 211, i.e., with P_1 (x_1^m, y_1^m, z_1^m), P_2 (x_2^m, y_2^m, z_2^m), P_3 (x_3^m, y_3^m, z_3^m) and P_4 (x_4^m, y_4^m, z_4^m), a parallel projection from the (virtual) 3D-vector space of the sampling model 210 into a/the two-dimensional image frame can, for example, be defined by solving the following linear equations using the four pairs of reference points mentioned above, which define a system of eight equations that allows determining the values of the mapping or projection transformation coefficients a_j and b_j:
x_i^f = a_0 + a_1 x_i^m + a_2 y_i^m + a_3 z_i^m
y_i^f = b_0 + b_1 x_i^m + b_2 y_i^m + b_3 z_i^m
Herein, 'j' is an integer from 0 to 3, 'i' is an integer from 1 to 4, 'f' again refers to the image frame and 'm' again refers to the sampling model or predetermined shape.
[0129] Hence eight variables or unknowns are to be/can be determined from the eight equations generated by the four corresponding pairs of points in the 3D-vector space of the sampling model and the 2D-image frame space.
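As a minimal, non-limiting illustration, the following Python/NumPy sketch solves these eight equations as two decoupled 4x4 linear systems, one for the coefficients a_j and one for the coefficients b_j; the function names and the pixel coordinates of the image reference points are hypothetical.

```python
import numpy as np

def fit_parallel_projection(model_pts, frame_pts):
    """Determine the coefficients a_j and b_j of the parallel projection
        x^f = a_0 + a_1*x^m + a_2*y^m + a_3*z^m
        y^f = b_0 + b_1*x^m + b_2*y^m + b_3*z^m
    from four model reference points P_1..P_4 and the pixel coordinates
    of the four image reference points I_1..I_4."""
    M = np.asarray(model_pts, dtype=float)    # shape (4, 3)
    F = np.asarray(frame_pts, dtype=float)    # shape (4, 2)
    # One row [1, x^m, y^m, z^m] per pair of reference points.
    A = np.hstack([np.ones((4, 1)), M])       # shape (4, 4)
    a = np.linalg.solve(A, F[:, 0])           # four equations for a_0..a_3
    b = np.linalg.solve(A, F[:, 1])           # four equations for b_0..b_3
    return a, b

def project(point, a, b):
    """Map a model-space point into image-frame coordinates."""
    v = np.concatenate([[1.0], np.asarray(point, dtype=float)])
    return float(a @ v), float(b @ v)

# P_1 in the origin and P_2..P_4 on the axes, as in the text; the pixel
# coordinates of I_1..I_4 are hypothetical.
model = [(0, 0, 0), (2, 0, 0), (0, 1, 0), (0, 0, 1)]
frame = [(100, 400), (420, 430), (110, 240), (95, 300)]
a, b = fit_parallel_projection(model, frame)
print(project((1.0, 0.5, 0.5), a, b))
```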
[0130] In this example, once the values a_j and b_j are determined, the sampling model 210, i.e., the 3D-shape 200, i.e., the 3D-cuboid 211 and its voxels 208, can be drawn/projected onto a given image frame, thereby defining at least one area of the image frame from which data is/data values are to be extracted, which can, for example, be assigned to values of corresponding elements of a multi-dimensional array, e.g., a tensor.
[0131] In particular, each voxel of a predetermined shape, e.g., each voxel 208 of the cuboid 211, can be associated with a projected voxel on the image frame, and each data value extracted from the image frame pixels covered by the projected voxel can be assigned to a value of the corresponding multi-dimensional array element, i.e., a value of the corresponding tensor element.
[0132] In other words, voxels can represent elements of, or be associated with elements of, a multi-dimensional array, e.g., a tensor, in which data extracted from an image frame can be stored and further processed for data analysis.
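As a minimal, non-limiting illustration, the following Python/NumPy sketch assigns data sampled at the projection of each voxel to the corresponding tensor element, reusing coefficients a_j and b_j such as those computed in the previous sketch; as a simplification, one pixel per projected voxel center is read instead of aggregating over all pixels covered by the projected voxel.

```python
import numpy as np

def extract_voxel_data(image, voxel_centers, a, b):
    """Sample image data at the projection of each voxel center and
    return a tensor with one value per voxel.

    voxel_centers: array of shape (i, j, k, 3) in model space.
    Simplification: one pixel per projected voxel is read instead of
    aggregating over all pixels covered by the projected voxel.
    """
    ones = np.ones(voxel_centers.shape[:-1] + (1,))
    V = np.concatenate([ones, voxel_centers], axis=-1)  # rows [1, x, y, z]
    xs = np.clip((V @ a).round().astype(int), 0, image.shape[1] - 1)
    ys = np.clip((V @ b).round().astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]  # tensor of shape (i, j, k)
```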
[0133] As indicated before, the same or similar sampling model 210, i.e., the same or a similar 3D-shape 200 can, for example, be used to sample and analyze other fare gates of the same fare gate type or of similar geometry within the same real scene (e.g., video stream flow from the same sensor/same camera) or from other real scenes (video stream flows from other sensors/other cameras).
[0134] The different fare gates captured in the image frames can then be considered as posing the same detection problem, which can be solved with a single model/single modeling approach (e.g., using the same neural network, in the case of machine learning), thereby providing a general solution for many fare gates of the same type and with the same way of functioning, without the need to generate (and, in the case of machine learning, train) additional specific solution models for additional fare gates.
[0135] This can inter alia greatly speed up and facilitate the solving of detection problems in video analytics, in particular of problems such as the ones described above.
[0136] A further exemplary sampling model 300 can be based on an exemplary set 305 of predetermined shapes, e.g., exemplary 2D-shapes 301, 302, 303 and 304, defined in an exemplary (virtual) 3D-vector space.
[0137] All of the exemplary 2D-shapes are exemplary parallelograms, with the 2D-shapes 301 and 303 being exemplary rectangles.
[0138] However, the number and form of 2D-shapes 301, 302, 303 and 304 is merely exemplary. Any other number and form of 2D-shapes orientable and positionable in an exemplary (virtual) 3D-vector space can be used as well to define/build up an exemplary sampling model 300.
[0139] The exemplary 3D-vector space in which these predetermined shapes 301, 302, 303 and 304 are positioned is denoted with reference numeral 310.
[0140] Similar to the previous example, the exemplary 2D-shapes 301, 302, 303 and 304 can be divided into one or more elements or blocks that constitute the shapes.
[0141] The elements or blocks can also be defined as the smallest non-divisible units of a shape and may also be referred to as "shape atoms" or "primitives" or "voxels."
[0142] In the example illustrated here, each shape 301, 302, 303 and 304 is divided evenly into voxels of the same size for a given shape, e.g., 2D-shape 301 is split into 2*2 voxels, i.e., four voxels 306 of the same size, 2D-shape 302 is split into 4*4 voxels, i.e., sixteen voxels 307 of the same size, 2D-shape 303 is split into 2*4 voxels, i.e., eight voxels 308 of the same size, and 2D-shape 304 is split into 2*4 voxels, i.e., eight voxels 309 of the same size.
[0143] The exemplary sampling model 300 based on 2D-shapes can inter alia be used as an alternative to the exemplary sampling model 210 described above.
[0144] For example, the exemplary 2D-shapes of the exemplary sampling model 300 can be projected onto surfaces deemed relevant for the detection of a specific problem or situation or behavior to be detected, for example, surfaces of control gates or fare gates in order to detect tailgating or other fare evasion practices.
[0145] Herein and in general, it is to be noted that the surfaces or objects in the real physical scene observed by the sensor of the video analytic system, onto which a sampling model is applied/mapped/projected, do not necessarily have to correspond to actual physical surfaces.
[0146] For example, it is conceivable that artificial surfaces or objects or lines in the scene may be defined based on their relationships to physical surfaces, objects or lines in the scene, for example, a plane that comprises or is parallel to the plane in which the exemplary sliding doors of a fare gate are moving, or a line that is in alignment with or parallel to an axis of an exemplary tripod turnstile of a fare gate.
[0147] It is also conceivable that a sampling model or a predetermined shape of a sampling model can be projected onto the scene captured by a sensor without having a direct relation to physical surfaces, objects or lines in the observed scene.
[0148] Depending on the complexity or specific geometry of a problem or situation to be detected, the exemplary sampling model 300 based on 2D-shapes may be preferred over the sampling model 210 based on 3D-shapes, since the processing of 2D-shapes typically generates multi-dimensional arrays, e.g., tensors, of smaller sizes for a given covered data extraction area (region of interest) as compared to 3D-shapes.
[0149] Hence, the processing of 2D-shapes in the context of the herein described video analytics method steps and system can inter alia be carried out faster and with less computational resources than the processing of 3D-shapes.
[0150] A further example illustrates exemplary schematic relations 400 between an exemplary real 3D-space 401, an exemplary two-dimensional projected space 402, an exemplary (virtual) 3D-vector space 403 and an exemplary abstract or numerical multi-dimensional data space 404.
[0151] For example, an exemplary real scene 410 in real 3D-space 401 may comprise real objects such as a street 405, a house 408 with a garage 407, a driveway 406 to the garage 407, and a tree 409.
[0152] A sensor, e.g., a camera, can capture 411 this exemplary real scene 410 in at least one image frame 412 from at least one series of image frames. This exemplary image frame 412 is a two-dimensional image frame in a two-dimensional projected space (that can also be referred to as “projected 3D-space”) representing a projected reality, e.g., the projection of real scene 410 or the projection of at least a part of the real scene 410. The image frame 412 is in an exemplary digital format comprising a plurality of image pixels (not shown).
[0153] Reference numeral 413 denotes an exemplary sampling model in the form of an exemplary 3D-shape 414, e.g., a 3D-cuboid 415, defined in an exemplary (virtual) 3D-vector space 403.
[0154] The exemplary 3D-shape 414, i.e., exemplary 3D-cuboid 415 is exemplary subdivided or sliced or partitioned along the axes of the 3D-vector space 403 into a plurality of voxels 416, e.g., into 4*2*4=32 voxels 416.
[0155] The sampling model 413 is applied to/mapped to/projected 417 onto the image frame 412. This projection 417 may be carried out, for example, according to any of the above described steps, in particular by identifying reference points in the image frame to be matched with reference points of the sampling model 413 and solving a system of linear equations to determine corresponding transformation coefficients for establishing a transformation between the sampling model 413 and the image frame 412.
[0156] In the exemplary case illustrated, the same sampling model 413 is applied twice, i.e., to two different parts of the image frame 412. In this case, two separate systems of linear equations are to be solved to determine corresponding transformation coefficients for establishing a transformation between the sampling model 413 and the two different parts of the image frame 412.
[0157] This creates two instances or realizations 418, 419 of the sampling model 413 applied to the image frame 412 and defines two exemplary areas 421, 422 or regions of interest from which data is to be extracted for analysis to detect a specific problem or situation.
[0158] In the illustrated case, the sampling model 413, or the two instances or realizations 418, 419 of the sampling model 413, can be used to sample and extract data from two exemplary corridors or spatial volumes or segments 426, 427 along the street 405.
[0159] The data to be extracted can, for example, be extracted per projected voxel or projected voxel area 420 and can be extracted 423 into one or more multi-dimensional arrays 424, 425, e.g., tensors, in an abstract or numerical data space 404 and in a format suitable for digital or computational processing by a processor, e.g., a graphics processing unit (GPU) or a central processing unit (CPU).
[0160] In other words, data of image frame pixels that are covered by the projected voxels 420 of the sampling model 413 can be extracted 423 into and stored in multi-dimensional arrays 424, 425, e.g., tensors.
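As a minimal, non-limiting illustration, the following Python/NumPy sketch combines the two previous sketches to apply the same sampling model to two different parts of one image frame, fitting one transformation per instance 418, 419 and extracting one tensor 424, 425 per instance; the model points and all pixel coordinates are hypothetical.

```python
import numpy as np

# Reuses fit_parallel_projection() and extract_voxel_data() from the
# sketches above; coordinates below are hypothetical placeholders.
model = [(0, 0, 0), (10, 0, 0), (0, 3, 0), (0, 0, 3)]  # P_1..P_4
instances = {
    "corridor_426": [(50, 300), (220, 310), (55, 260), (48, 210)],
    "corridor_427": [(300, 300), (470, 310), (305, 260), (298, 210)],
}
image = np.zeros((480, 640))  # placeholder for image frame 412

# Voxel centers of a 4*2*4 division of the 10 x 3 x 3 cuboid.
cx, cy, cz = np.meshgrid(np.linspace(1.25, 8.75, 4),
                         np.linspace(0.75, 2.25, 2),
                         np.linspace(0.375, 2.625, 4), indexing="ij")
centers = np.stack([cx, cy, cz], axis=-1)

tensors = {}
for name, frame_pts in instances.items():
    a, b = fit_parallel_projection(model, frame_pts)  # one system per instance
    tensors[name] = extract_voxel_data(image, centers, a, b)
print({k: v.shape for k, v in tensors.items()})  # both (4, 2, 4)
```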
[0161] The extracted data can then be analyzed, e.g., by applying machine learning techniques as described above, to detect a specific problem or situation or behavior.
[0162] The analysis of the extracted data may be carried out separately for different areas or regions 421, 422 of the image frame 412 covered by the sampling model 413, e.g., per instance 418, 419 of the sampling model 413, or may be carried out jointly for all areas or regions 421, 422 of the image frame 412 covered by the sampling model 413.
[0163] The extracted data can then, for example, be analyzed in order to detect the presence or transit of people and/or vehicles in the analyzed area(s) 421, 422 of the image frame 412, the direction and/or speed of such transit, individually or as a swarm (e.g., determination of flow, slowdowns, congestions and jams), the detection of objects left behind, the detection of panic situations, disorders or riots (people or vehicles moving at abnormal speeds or in abnormal directions, or being present in abnormal quantities) or fights, the detection of loitering or oversized objects, speed monitoring, estimation and/or determination of the occupancy level, or other situations.
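As a deliberately simple, non-limiting illustration of one possible analysis of such tensors (and not of the machine learning analysis described above), the following Python/NumPy sketch flags per-voxel changes between two consecutive frames, which could serve as a crude presence/transit indicator; the threshold value and the helper name are hypothetical.

```python
import numpy as np

def voxel_activity(tensor_t0, tensor_t1, threshold=10.0):
    """Flag voxels whose extracted data changed notably between two
    consecutive frames; a crude stand-in for the analysis step."""
    delta = np.abs(tensor_t1.astype(float) - tensor_t0.astype(float))
    return delta > threshold  # boolean activity mask per voxel

# E.g., presence/transit detection in region 421: any active voxel could
# trigger a notification; a direction of transit could be inferred from
# which voxel indices activate over successive frames.
active = voxel_activity(np.zeros((4, 2, 4)), np.full((4, 2, 4), 20.0))
print(active.any())  # True
```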
[0164] As indicated in the general part above, the problems or situations or behaviors that can be detected based on the extracted data can be manifold and are not limited to the examples presented herein.
[0165] It is further noted that the skilled person, in view of the data extracted according to the steps and techniques described herein, is fully capable of defining appropriate detection criteria for a chosen specific problem or situation or behavior.
[0166] The examples of problems or situations or behaviors that can be detected based on the extracted data merely serve to illustrate how the herein described steps and video analytics techniques for sampling and analyzing data from at least one image frame can be used to build a more accurate representation of physical three-dimensional objects as compared to state-of-the-art techniques that do not take into account the three-dimensionality of physical reality and are restricted to a representation of reality based on a flat extraction of data from an image.
[0167] For completeness, it is further noted that it would also be possible, for example, to define a single sampling model comprising a set of two 3D-shapes, e.g., a set of two 3D-cuboids, and then apply the sampling model only once to the image frame, i.e., establishing only a single system of linear equations to determine corresponding transformation coefficients for establishing a transformation between the sampling model 413 and the image frame 412. It is also conceivable then to store the extracted data in a single multi-dimensional array or tensor.
[0168] A further example relates to an exemplary image frame 531 capturing an exemplary real scene 508 of an exemplary fare gate system 532 with an exemplary fare gate 524 to be monitored.
[0169] Superimposed on the image frame 531 is an exemplary sampling model 501, or an exemplary instance or realization or application or projection of the sampling model applied to the image frame 531.
[0170] In this example, the sampling model 501 is based on/defined by an exemplary 3D-shape in the form of an exemplary 3D-cuboid 505 defined in an exemplary (virtual) 3D-vector space 521.
[0171] As previously described in general and/or specifically, exemplary reference points P_4 (509), P_1 (510), P_2 (511), P_3 (512) of the sampling model 501 have been correlated or matched with the geometry of the exemplary fare gate 524 and projected onto the image frame 531, e.g., by establishing and solving a system of linear equations between the reference points of the sampling model and reference points in the image frame to determine corresponding mapping or projection transformation coefficients. The exemplary reference points in the image frame 531 are not explicitly shown for better readability, but can, for example, be assumed to lie at the positions marked by the exemplary reference points P_4 (509), P_1 (510), P_2 (511), P_3 (512) of the sampling model 501.
[0172] Reference numeral 504 denotes an exemplary possible voxel or projected voxel 504 of the sampling model 501. For easier readability of the figure, only this single voxel is drawn explicitly.
[0173] Also, the image pixels of the image frame 531 have not been drawn or marked explicitly, but it can be assumed that the image frame 531 is in a digital format comprising a plurality of pixels.
[0174] The sampling model 501 applied to/mapped to/projected onto the image frame 531 then defines an exemplary area 529 of the image frame 531 from which data is to be extracted.
[0175] As also previously described above in general and/or by way of example, data can then be extracted from image pixels that are covered by the sampling model 501, e.g., that are covered by the projected voxels 504.
[0176] The extracted data can be saved into multi-dimensional arrays, e.g., tensors.
[0177] Based on the extracted data an analysis can then be carried out in order to detect a specific problem or situation or behavior.
[0178] For example, an analysis can be carried out to detect fraudulent access, e.g., fare evasion by tailgating, at the fare gate 524.
[0179] A further example relates to an exemplary image frame 530 capturing an exemplary real scene 527 of an exemplary fare gate system 533 with exemplary fare gates 525, 526 to be monitored.
[0180] The exemplary fare gate system 533 can be identical or analogous to the exemplary fare gate system 532 of the previous example.
[0181] As in the previous example, exemplary sampling models are shown superimposed on the image frame 530.
[0182] Reference numerals 502, 503 denote exemplary sampling models or instances or realizations or applications or projections of a/the sampling model applied to the image frame 530.
[0183] The exemplary sampling models 502, 503 comprise exemplary 3D-cuboids 506, 507 as exemplary 3D-shapes defined in exemplary (virtual) 3D-vector spaces 522, 523, and are shown superimposed on the image frame 530.
[0184] As previously described in general and/or specifically, exemplary reference points P_1^1 (513), P_2^1 (514), P_3^1 (515), P_4^1 (516), P_1^2 (517), P_2^2 (518), P_3^2 (519), P_4^2 (520) of the sampling models 502, 503 have been correlated or matched with the geometry of the exemplary fare gates 525, 526 and projected onto the image frame 530, e.g., by establishing and solving a corresponding system or corresponding systems of linear equations between the reference points of the sampling models and reference points in the image frame to determine corresponding mapping or projection transformation coefficients. The exemplary reference points in the image frame 530 are not explicitly shown for better readability, but can, for example, be assumed to lie at the positions marked by the exemplary reference points P_1^1 (513), P_2^1 (514), P_3^1 (515), P_4^1 (516), P_1^2 (517), P_2^2 (518), P_3^2 (519), P_4^2 (520) of the sampling models 502, 503.
[0185] For better readability of the figure, the exemplary elements or blocks or voxels of the sampling models 502, 503 have not been drawn or marked explicitly.
[0186] Also the image pixels of the image frame 530 have not been drawn or marked explicitly, but it can again be assumed that the image frame 530 is in a digital format comprising a plurality of pixels.
[0187] The sampling models 502, 503 applied to/mapped to/projected onto the image frame 530 then exemplary define an area or areas 528 of the image frame 530 from which data is to be extracted.
[0188] As also previously described above in general and/or exemplary, data can then be extracted from image pixels that are covered by the sampling models 502, 503, e.g., that are covered by projected elements or blocks or voxels of the sampling models.
[0189] The extracted data can be saved into multi-dimensional arrays, e.g., tensors.
[0190] Based on the extracted data an analysis can then be carried out in order to detect a specific problem or situation or behavior.
[0191] For example, an analysis can be carried out to detect fraudulent access, e.g., fare evasion by tailgating, at both of the fare gates 525 and 526.
[0192] As indicated previously, it is further conceivable to define a single sampling model based on a set of predetermined shapes, e.g., on the two predetermined 3D-shapes 506, 507, instead of treating them as separate sampling models 502, 503.
[0193] It is further noted that for both exemplary real scenes 508 and 527, depicted in image frames 531 and 530, the same sampling model(s) can be used, wherein "same" can mean identical and/or having the same topology.
[0194] The following reference numerals identify the following exemplary components in the figures.
[0195] 100 Exemplary image frame
[0196] 101 Exemplary first coordinate axis, e.g., X-axis of an exemplary image frame coordinate system
[0197] 102 Exemplary second coordinate axis, e.g., Y-axis of an exemplary image frame coordinate system
[0198] 103 Exemplary pixels of image frame
[0199] 104 Exemplary projection of a predetermined shape onto the image frame, exemplary projection of an exemplary sampling model projected onto the image frame, exemplary region of interest
[0200] 105 Exemplary pixels of image frame covered by the projection of the predetermined shape
[0201] 106 Exemplary perimeter of the projection 104 of the predetermined shape onto the image frame
[0202] 107 Exemplary area of the image frame from which data is to be extracted, exemplary region of interest
[0203] 200 Exemplary 3D-shape in (virtual) 3D-vector space, exemplary parallelepiped, exemplary cuboid
[0204] 201 Exemplary (first) reference point with exemplary reference coordinates
[0205] 202 Exemplary (second) reference point with exemplary reference coordinates
[0206] 203 Exemplary (third) reference point with exemplary reference coordinates
[0207] 204 Exemplary (fourth) reference point with exemplary reference coordinates
[0208] 205 Exemplary (first) coordinate axis, exemplary first (virtual) 3D-vector space axis
[0209] 206 Exemplary (second) coordinate axis, exemplary second (virtual) 3D-vector space axis
[0210] 207 Exemplary (third) coordinate axis, exemplary third (virtual) 3D-vector space axis
[0211] 208 Exemplary element or block or voxel of exemplary 3D-shape in (virtual) 3D-vector space
[0212] 209 Exemplary set of reference points, exemplary (vertex) points of 3D-shape in (virtual) 3D-vector space
[0213] 210 Exemplary sampling model
[0214] 211 Exemplary cuboid, exemplary 3D-cuboid
[0215] 212 Exemplary (virtual) 3D-vector space
[0216] 300 Exemplary alternative sampling model
[0217] 301 Exemplary (first) 2D-shape, exemplary parallelogram, exemplary rectangle
[0218] 302 Exemplary (second) 2D-shape, exemplary parallelogram
[0219] 303 Exemplary (third) 2D-shape, exemplary parallelogram, exemplary rectangle
[0220] 304 Exemplary (fourth) 2D-shape, exemplary parallelogram
[0221] 305 Exemplary set of predetermined shapes, exemplary set of exemplary 2D-shapes
[0222] 306 Exemplary voxel of (first) 2D-shape 301
[0223] 307 Exemplary voxel of (second) 2D-shape 302
[0224] 308 Exemplary voxel of (third) 2D-shape 303
[0225] 309 Exemplary voxel of (fourth) 2D-shape 304
[0226] 310 Exemplary (virtual) 3D-vector space
[0227] 400 Exemplary schematic relations between spaces
[0228] 401 Exemplary real 3D-space, exemplary real physical 3D-space, exemplary real space
[0229] 402 Exemplary two-dimensional projected space/exemplary projected 3D-space
[0230] 403 Exemplary virtual 3D-vector space, exemplary 3D-vector space
[0231] 404 Exemplary abstract or numerical multi-dimensional data space
[0232] 405 Exemplary street
[0233] 406 Exemplary driveway
[0234] 407 Exemplary garage
[0235] 408 Exemplary house
[0236] 409 Exemplary tree
[0237] 410 Exemplary real scene
[0238] 411 Exemplary act/step of capturing or recording the exemplary real scene by a sensor, e.g., camera
[0239] 412 Exemplary image frame, e.g., image frame in digital format comprising a plurality of pixels
[0240] 413 Exemplary sampling model
[0241] 414 Exemplary predetermined shape, exemplary 3D-shape
[0242] 415 Exemplary 3D-cuboid, exemplary cuboid
[0243] 416 Exemplary element or exemplary block or exemplary voxel of shape 414
[0244] 417 Exemplary act/step of applying/mapping/projecting the sampling model to/onto the image frame
[0245] 418 Exemplary (first) instance or realization of the sampling model 413 applied to/mapped to/projected onto the image frame
[0246] 419 Exemplary (second) instance or realization of the sampling model 413 applied to/mapped to/projected onto the image frame
[0247] 420 Exemplary projected elements or projected blocks or projected voxels, exemplary projected area of projected elements or projected blocks or projected voxels
[0248] 421 Exemplary (first) area of image frame covered by the exemplary sampling model
[0249] 422 Exemplary (second) area of image frame covered by the exemplary sampling model
[0250] 423 Exemplary act/step of extracting data from the image frame
[0251] 424 Exemplary (first) multi-dimensional data structure, e.g., multi-dimensional array, e.g., tensor
[0252] 425 Exemplary (second) multi-dimensional data structure, e.g., multi-dimensional array, e.g., tensor
[0253] 426 Exemplary (first) corridor or spatial volume or segment along street 405
[0254] 427 Exemplary (second) corridor or spatial volume or segment along street 405
[0255] 501 Exemplary sampling model, exemplary instance of sampling model
[0256] 502 Exemplary sampling model, exemplary first instance of sampling model
[0257] 503 Exemplary sampling model, exemplary second instance of sampling model
[0258] 504 Exemplary voxel, exemplary projected voxel
[0259] 505 Exemplary 3D-shape, exemplary 3D-cuboid
[0260] 506 Exemplary (first) 3D-shape, exemplary 3D-cuboid
[0261] 507 Exemplary (second) 3D-shape, exemplary 3D-cuboid
[0262] 508 Exemplary real scene of a fare gate, exemplary image frame captured
[0263] 509 Exemplary reference point P_4
[0264] 510 Exemplary reference point P_1
[0265] 511 Exemplary reference point P_2
[0266] 512 Exemplary reference point P_3
[0267] 513 Exemplary reference point P_1^1
[0268] 514 Exemplary reference point P_2^1
[0269] 515 Exemplary reference point P_3^1
[0270] 516 Exemplary reference point P_4^1
[0271] 517 Exemplary reference point P_1^2
[0272] 518 Exemplary reference point P_2^2
[0273] 519 Exemplary reference point P_3^2
[0274] 520 Exemplary reference point P_4^2
[0275] 521 Exemplary (virtual) 3D-vector space
[0276] 522 Exemplary (virtual) 3D-vector space
[0277] 523 Exemplary (virtual) 3D-vector space
[0278] 524 Exemplary fare gate to be monitored
[0279] 525 Exemplary fare gate to be monitored
[0280] 526 Exemplary fare gate to be monitored
[0281] 527 Exemplary real scene of a fare gate, exemplary image frame captured
[0282] 528 Exemplary area or region in image frame defined by sampling model(s)
[0283] 529 Exemplary area or region in image frame defined by sampling model
[0284] 530 Exemplary image frame
[0285] 531 Exemplary image frame
[0286] 532 Exemplary fare gate system with an exemplary plurality of fare gates
[0287] 533 Exemplary fare gate system with an exemplary plurality of fare gates