SYSTEM AND METHOD FOR VIDEO SURVEILLANCE OF A FOREST
20170244935 · 2017-08-24
Assignee
Inventors
- Ivan Sergeevich SHISHALOV (Nizhny Novgorod, RU)
- Oleg Andreevich GROMAZIN (Nizhny Novgorod, RU)
- Yaroslav Sergeevich Solovyev (Nizhny Novgorod, RU)
- Aleksandr Vladimirovich Romanenko (Nizhny Novgorod, RU)
- Ivan Vasiilevich Esin (Nizhny Novgorod, RU)
CPC classification
H04N7/18
ELECTRICITY
H04N7/181
ELECTRICITY
G08B17/005
PHYSICS
G06V20/52
PHYSICS
International classification
Abstract
The invention relates to forest video monitoring. A method and system are provided for automatically binding a video camera to an absolute coordinate system and for determining changes in the video camera binding. In one aspect, the method comprises the steps of: at each of at least two predetermined moments in time, aiming the video camera at an object whose position in the absolute coordinate system, centered at the point where the video camera resides, is known at that moment, and determining an orientation of the video camera in a native coordinate system of the video camera; and, based on the determined orientations of the video camera and the positions of the object, calculating a rotation of the native coordinate system of the video camera in the absolute coordinate system. The calculated rotation of the video camera's native coordinate system is used to recalculate coordinates of an observed object from the video camera's native coordinate system into the absolute coordinate system. The technical result is improved accuracy of locating the observed object.
Claims
1-17. (canceled)
18. A forest fire video monitoring system comprising: at least one remotely controlled video monitoring point which includes a high-rise building and transmitting side equipment located on the high-rise building, comprising a video camera on a rotating device and a camera control unit configured to determine the current spatial orientation of the video camera in the inherent coordinate system of the video camera; at least one computer-assisted operator workstation for operation of the said video monitoring point; a storing device for storing the orientations of the video camera in the video camera's inherent coordinate system, determined by the camera control unit after pointing the video camera at each of at least two distinct specified fixed objects on the terrain; and a computer-integrated module configured: to obtain, for each of said objects, the current orientation of the video camera in the video camera's inherent coordinate system, determined by the camera control unit after re-pointing the video camera at the object, and, based on the current orientations of the video camera and the corresponding stored orientations of the video camera, to calculate the rotation of the video camera's inherent coordinate system.
19. The system of claim 18, wherein the computer-integrated module is further configured, when comparing the current orientation of the video camera pointed at an observed object with a stored orientation of the video camera pointed at a previously analyzed object, to adjust this stored orientation of the video camera on the basis of the calculated rotation of the video camera's inherent coordinate system.
20. In a forest fire video monitoring system comprising at least one remotely controlled video monitoring point including: a video camera with a rotating device residing on a high-rise construction; and a camera control unit configured to determine a current spatial orientation of the video camera in a native coordinate system of the video camera, a method for determining changes in binding of the video camera, the method comprising the steps of: aiming the video camera at each of at least two predetermined discernible still objects in the terrain, and determining an orientation of the video camera in the native coordinate system of the video camera; storing the determined orientations of the video camera; for each of said objects: aiming the video camera according to the stored orientation of the video camera corresponding to said object, and, in case of deviation of the video camera from the object, re-aiming the video camera at the object, and determining a current orientation of the video camera in the native coordinate system of the video camera; and based on the determined current orientations of the video camera and the respective stored orientations of the video camera, calculating a rotation of the native coordinate system of the video camera.
21. The method of claim 20, wherein, when comparing the current orientation of the video camera pointed at an observed object with the stored orientation of the video camera pointed at a previously analyzed object, this stored orientation of the video camera is corrected based on the calculated rotation of the video camera's inherent coordinate system.
22. (canceled)
Description
BRIEF DESCRIPTION OF THE FIGURE DRAWINGS
[0057] The above and other aspects and advantages of the present invention are disclosed in the following detailed description, made with reference to the figure drawings, in which:
DETAILED DESCRIPTION OF THE INVENTION
[0063] In the following disclosure of the present invention, reference will be made to the forest fire video monitoring system 100 of
[0064] Further, the disclosure of the invention is provided in sections corresponding to the objectives stated above.
[0065] 1. Automatic binding of the video camera's inherent coordinate system to the absolute coordinate system
[0066] Referring to
[0067] In astronomy, methods are known for determining with high accuracy the horizontal (topocentric) coordinates of various celestial bodies, such as the Sun (i.e., the azimuth, the angle from the north direction, and the angular height, the angle from the mathematical horizon), on the basis of the geographical coordinates of the observer and the exact time of day. These methods are widely used, in particular, in marine navigation.
[0068] In step 202, the video camera 115 is pointed at a known astronomical object (such as the Sun, the Moon, etc.).
[0069] The video camera 115 can be pointed manually; that is, the operator detects an astronomical object when viewing the area and points the video camera at it, such that the center of the image received from the video camera coincides with the center of the object in question. In an exemplary embodiment, this is accomplished by the operator interacting, with the use of input devices, with the corresponding elements of the graphical user interface shown on the display device. The computer of the operator workstation 120 thereby generates control commands sent over the network 130 (possibly through the server 140 and with its direct participation) to the appropriate transmitting side equipment 111, where these commands arrive through the communication module 114 at the camera control unit 118, which, based on them, generates the control signals for driving the rotating device 117 to set a spatial orientation of the video camera 115 such that it is pointed at the corresponding astronomical object.
[0070] This procedure can also be performed automatically, that is in the automatic mode, the video camera 115 views the area and, by means of special computer vision algorithms implemented in the system 100, as noted above, detects an object with specified characteristics (for example, in the case of the Sun it will be a bright circle).
[0071] In so doing, when the video camera 115 is pointed at an astronomical object, this astronomical object is preferably zoomed in on as much as possible (for this purpose, in the exemplary embodiment, the camera control unit 118 appropriately controls the zoom 116), and the center of the image obtained from the video camera is manually or automatically aligned with the center of the astronomical object.
[0072] The location of the astronomical object can thereby be estimated with an accuracy of up to several tenths or even hundredths of a degree. Thus, the angular size of the Sun is about 31′27″. Modern video cameras can achieve a magnification at which the viewing angle is 1°. Computer vision techniques allow determining the position of the center of the circle in the image with an accuracy of up to a few pixels. That is, if the video camera has a resolution of one or more megapixels, the accuracy of determining the direction to the Sun may be about 0.05°.
[0073] After pointing the camera at an astronomical object, in step 204 the orientation of the camera in its inherent coordinate system is determined, that is, the orientation of the camera in its inherent coordinate system associated with the mechanics of the video camera. As mentioned above, this functional capability is provided in modern controlled video cameras and implemented, for example, by the camera control device 118.
[0074] In step 206, on the basis of the known exact location of the video monitoring point 110, the position of the astronomical object in the video camera's absolute coordinate system at the current time is determined. As mentioned above, methods and formulas are known in astronomy by which, knowing the geographic coordinates of the observer and the exact time of day, it is possible to determine the azimuth and the altitude of the astronomical object, that is, the coordinates of the object in the horizontal coordinate system.
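As a rough illustration of the determination in step 206, the horizontal coordinates of the Sun can be estimated from the observer's geographic coordinates and the UTC time. The sketch below uses textbook low-accuracy approximations (a cosine fit for the solar declination and mean solar time, with the equation of time ignored); the function and variable names are illustrative and not part of the described system, and a production system would rely on a precise ephemeris instead.

```python
import math

def sun_position(lat_deg, lon_deg, day_of_year, utc_hours):
    """Approximate solar azimuth/altitude (degrees) for an observer.

    Low-accuracy sketch: declination from a cosine fit, hour angle from
    mean solar time (equation of time ignored).
    """
    lat = math.radians(lat_deg)
    # Approximate solar declination (degrees -> radians).
    decl = math.radians(
        -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10))))
    # Mean solar time and hour angle (15 degrees per hour from solar noon).
    solar_time = utc_hours + lon_deg / 15.0
    hour_angle = math.radians(15.0 * (solar_time - 12.0))
    # Altitude above the mathematical horizon.
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    alt = math.asin(max(-1.0, min(1.0, sin_alt)))
    # Azimuth measured clockwise from north.
    cos_az = ((math.sin(decl) - math.sin(lat) * math.sin(alt))
              / (math.cos(lat) * math.cos(alt)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if math.sin(hour_angle) > 0:  # afternoon: Sun is west of the meridian
        az = 2.0 * math.pi - az
    return math.degrees(az), math.degrees(alt)
```

For example, for an observer at 55° N, 0° E around solar noon on the summer solstice, the sketch yields an altitude of roughly 58° with the Sun close to due south.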
[0075] The procedure of steps 202-206 should be performed several times, waiting some period of time between successive runs. The minimum number of runs is two, as follows from the detailed discussion below. At the same time, in order to improve accuracy, the procedure may be performed additional times: however precise the devices might be, each measurement carries a certain error, which can be reduced by repeating the said measurements/determinations.
[0076] It is most convenient to carry out this procedure with the Sun as the astronomical object. In this case, the video camera can be pointed at the Sun twice: just after sunrise and just before sunset, when the brightness is sufficient for detection but not too high, in order to avoid the formation of various glares on the lens and not to damage the electronics of the video camera. It should be emphasized once again that at each of these two points in time the video camera is pointed at the Sun (step 202), its orientation is determined in its inherent coordinate system (step 204), and the position of the astronomical object in the absolute coordinate system is determined, knowing the exact time corresponding to this point in time and the precise geographical coordinates of the video camera (step 206).
[0077] In step 208, on the basis of the video camera orientations and the positions of the astronomical object determined in steps 204 and 206, the rotation of the video camera's inherent coordinate system in the absolute coordinate system associated with it is calculated. The calculated rotation of the video camera's inherent coordinate system allows determining the recalculation rates for transforming the coordinates of an observed object from the video camera's inherent coordinate system into the absolute coordinate system.
[0078] In fact, the following correspondence is achieved: the vertical and horizontal (panoramic) angles in the video camera's inherent coordinate system correspond, respectively, to the height of an astronomical object (for example, the Sun) above the mathematical horizon (the vertical angle in the absolute coordinate system, linked to the location of the video camera/observer) and to the azimuth (the horizontal angle in the absolute coordinate system, bound to the location of the video camera).
[0079] For further recalculation from the video camera coordinate system to the absolute (horizontal) coordinate system, it is necessary, on the basis of the correspondence data, to determine the rotation of the inherent coordinate system in the absolute coordinate system; to this end, it is possible, for example, to determine the Euler angles. The Euler angles are the angles describing the rotation of a perfectly rigid body in three-dimensional Euclidean space.
[0080] Having defined the Euler angles, it is possible for each point in the inherent coordinate system, to get the value in the absolute coordinate system, which means that for every visible object we can recalculate the obtained object direction in the absolute coordinate system associated with the location of the camera, i.e., in fact, eliminate the effect of the above factors on the accuracy of determining the direction of the observed object.
[0081] Mathematically, this is expressed as follows.
[0082] We obtain two correspondences between two observation points in the horizontal coordinate system (azimuth (a), height (v)) and two points in the video camera's inherent coordinate system (panoramic angle (p), vertical angle (t)):
(a1, v1)-(p1, t1),
(a2, v2)-(p2, t2).
[0083] According to the example above, these points can correspond to the two pointings of the video camera to the Sun.
[0084] Based on this, it is necessary to obtain three Euler angles e1, e2, e3, that is
e1=f1(a1, v1, p1, t1, a2, v2, p2, t2),
e2=f2(a1, v1, p1, t1, a2, v2, p2, t2),
e3=f3(a1, v1, p1, t1, a2, v2, p2, t2).
[0085] Then, knowing the three Euler angles, we obtain for each (p, t) the correspondence (a, v), that is
a=f1(p, t, e1, e2, e3),
v=f2(p, t, e1, e2, e3).
[0086] Let us dwell on the problem of coordinate recalculation and determination of Euler angles.
Direct Problem
[0087] The correspondence between the coordinates of the visible object in the video camera's inherent coordinate system (p, t), where p is the panoramic angle and t is the vertical angle, and the absolute coordinates (a, v), where a is the azimuth and v is the angular height above the mathematical horizon, is defined by the Euler angles α, β, γ, which in this case describe the rotation (orientation) of the video camera's inherent coordinate system in the absolute coordinate system.
[0088] The Euler angles allow bringing any position of the system to the current position. Let us denote the initial coordinate system as (x, y, z), and the final one as (X, Y, Z). The intersection of the coordinate planes xy and XY is called the nodal line N, where: [0089] α is the angle between the x axis and the nodal line; [0090] β is the angle between the z and Z axes; [0091] γ is the angle between the X axis and the nodal line.
[0092] The rotations of the coordinate system by these angles are called precession, nutation and intrinsic rotation (spin). These rotations are non-commutative, and the final position of the system depends on the order in which the rotations are performed. In the case of Euler angles, it is a sequence of 3, 1, 3 (Z, X, Z) (See
[0093] Having determined the Euler angles, the rotation matrix is calculated, on the basis of which the angles (a, v) are uniquely determined for each pair (p, t). The rotation matrix of Cartesian coordinates for the Euler angles α, β, γ in the Z-X-Z convention is R = Rz(α)·Rx(β)·Rz(γ), that is:

R = | cos α cos γ − sin α cos β sin γ   −cos α sin γ − sin α cos β cos γ    sin α sin β |
    | sin α cos γ + cos α cos β sin γ   −sin α sin γ + cos α cos β cos γ   −cos α sin β |
    | sin β sin γ                        sin β cos γ                        cos β       |

[0094] To use this matrix, the angles (p, t) must be converted to a Cartesian coordinate system. The angle p corresponds to the azimuthal angle φ in spherical coordinates, and the polar angle is θ = π/2 + t. The radius of the sphere is not important, because its size does not change when rotating, so r = 1. The coordinates in the Cartesian coordinate system are defined as:

x = sin θ cos φ, y = sin θ sin φ, z = cos θ.

[0095] Multiplying the rotation matrix by the column vector (x, y, z)ᵀ, the column vector (x′, y′, z′)ᵀ is determined. From this column vector, the new spherical coordinates are identified:

φ₁ = atan2(y′, x′), θ₁ = arccos(z′).

[0096] Thus, the azimuth of the point corresponds to φ₁, and the angle of inclination corresponds to θ₁ − π/2.

[0097] That is, knowing p and t and the Euler angles α, β, γ, the values a and v are identified.
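The direct problem can be sketched in Python as follows, assuming a Z-X-Z Euler convention and the spherical-coordinate conventions stated above (φ = p, θ = π/2 + t, r = 1); the function names are illustrative only.

```python
import math

def zxz_matrix(alpha, beta, gamma):
    """Rotation matrix R = Rz(alpha) * Rx(beta) * Rz(gamma) (Z-X-Z Euler angles)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    return [
        [ca * cg - sa * cb * sg, -ca * sg - sa * cb * cg,  sa * sb],
        [sa * cg + ca * cb * sg, -sa * sg + ca * cb * cg, -ca * sb],
        [sb * sg,                 sb * cg,                  cb],
    ]

def camera_to_absolute(p, t, alpha, beta, gamma):
    """Recalculate a camera direction (p, t) into absolute angles (a, v)."""
    # Spherical -> Cartesian with phi = p, theta = pi/2 + t, r = 1.
    theta, phi = math.pi / 2 + t, p
    x = math.sin(theta) * math.cos(phi)
    y = math.sin(theta) * math.sin(phi)
    z = math.cos(theta)
    # Apply the rotation of the camera's inherent coordinate system.
    r = zxz_matrix(alpha, beta, gamma)
    xr = r[0][0] * x + r[0][1] * y + r[0][2] * z
    yr = r[1][0] * x + r[1][1] * y + r[1][2] * z
    zr = r[2][0] * x + r[2][1] * y + r[2][2] * z
    # Cartesian -> spherical; invert theta = pi/2 + t.
    phi1 = math.atan2(yr, xr)
    theta1 = math.acos(max(-1.0, min(1.0, zr)))
    return phi1, theta1 - math.pi / 2
```

With zero Euler angles the mapping is the identity, and a pure precession by α shifts the azimuthal angle by α while leaving the inclination unchanged.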
Inverse Problem
[0098] In order to determine the Euler angles basing on the known relations of orientations in the inherent and absolute coordinate systems, it is necessary to solve the problem which is inverse to the above direct problem.
[0099] For the inverse problem, namely for finding the Euler angles, let us describe the rotation of the system by use of quaternions. In this case, a quaternion is a quadruple of numbers (x, y, z, w), where (x, y, z) is a vector and w is a scalar. In this representation of the quaternion, the first three components form a vector which lies on the axis of rotation, the length of the vector depending on the angle of rotation. The fourth component depends only on the angle of rotation. The dependence is quite simple: if we take the unit vector V for the axis of rotation and the angle alpha for rotation about this axis, then the quaternion representing this rotation can be written as follows:
q=[V*sin(alpha/2), cos(alpha/2)].
[0100] The initial data of the inverse problem are the two pairs of mutually related vectors, namely (p₁, t₁) → (a₁, v₁) and (p₂, t₂) → (a₂, v₂), where (p, t) are coordinates in the video camera's inherent coordinate system, and (a, v) are coordinates in the video camera's absolute coordinate system. Let us translate each of these vectors into the Cartesian coordinate system, and we obtain the corresponding vectors: (xᵢ, yᵢ, zᵢ) → (uᵢ, vᵢ, wᵢ), i = 1, 2.
[0101] Now we will find the rotation quaternion of the camera initial coordinate system, transforming the vector (x₁, y₁, z₁) into (u₁, v₁, w₁). This quaternion will describe the rotation within the shortest distance. To find it, we use the following formulas: [0102] (x₁, y₁, z₁) × (u₁, v₁, w₁) = (a, b, c) is the cross product of the input vector and the final vector; [0103] q = [a, b, c, (x₁, y₁, z₁)·(u₁, v₁, w₁)] is the required quaternion (here (x₁, y₁, z₁)·(u₁, v₁, w₁) is the scalar product of the vectors).
[0104] In the last step, let us normalize the quaternion q; for this, we divide the x, y, z, w included in it by n = √(x² + y² + z² + w²).
[0105] The quaternion, obtained in this step, sets the rotation of the Cartesian coordinate system, translating (x.sub.1, y.sub.1, z.sub.1) into (u.sub.1, v.sub.1, w.sub.1).
[0106] Let us rotate the coordinate system; for this, we use the rotation V = q·v·q⁻¹, where v is the original vector, and V is the vector after the rotation. Thus, after the rotation of the Cartesian coordinate system by the quaternion q, we have two vectors: (u₁, v₁, w₁), obtained from (x₁, y₁, z₁), and a certain vector (uₚ, vₚ, wₚ), obtained from (x₂, y₂, z₂). Now, it is necessary to rotate the Cartesian coordinate system such that the vector (u₁, v₁, w₁) remains in place, and the vector (uₚ, vₚ, wₚ) passes into the vector (u₂, v₂, w₂) (in general, this is not possible for arbitrary vectors, but as we take their values from the real system, we believe that such a rotation is possible).
[0107] Obviously, for such a rotation it is necessary that the axis of rotation pass through the point (u₁, v₁, w₁) and the origin of coordinates. To find the rotation angle, we find the angular distance between the two points (uₚ, vₚ, wₚ) and (u₂, v₂, w₂) on the sphere. To this end, let us translate them into the spherical coordinate system: (φ₁, θ₁) and (φ₂, θ₂), respectively. Then we use the formula for finding the angular distance (this formula is widely used, for example, in astronomy):

cos Δσ = cos θ₁ cos θ₂ + sin θ₁ sin θ₂ cos Δφ,

where Δφ is the difference between the longitude coordinates, that is, the difference in the angle coordinates φ, and Δσ is the angular difference.
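The angular-distance formula can be checked numerically: for two points given by spherical coordinates (φ, θ), with θ the polar angle, the value arccos(cos θ₁ cos θ₂ + sin θ₁ sin θ₂ cos Δφ) agrees with the angle between the corresponding unit vectors. A small sketch (illustrative names only):

```python
import math

def angular_distance(phi1, theta1, phi2, theta2):
    """Angular distance between two points on the unit sphere.

    phi: longitude-like angle; theta: polar angle measured from the pole.
    """
    c = (math.cos(theta1) * math.cos(theta2)
         + math.sin(theta1) * math.sin(theta2) * math.cos(phi1 - phi2))
    return math.acos(max(-1.0, min(1.0, c)))

def to_cartesian(phi, theta):
    """Unit vector for spherical coordinates (phi, theta)."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```

The distance equals the arccosine of the dot product of the two unit vectors, which is exactly how the rotation angle between (uₚ, vₚ, wₚ) and (u₂, v₂, w₂) is used below.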
[0108] Using the obtained angle and knowing the rotation axis, we get the rotation quaternion transforming the vector (uₚ, vₚ, wₚ) into (u₂, v₂, w₂): q₂ = [(u₁, v₁, w₁)·sin(Δσ/2), cos(Δσ/2)].
[0109] Thus, we have obtained two rotation quaternions, q and q₂, whose successive application transforms the original points (xᵢ, yᵢ, zᵢ) into (uᵢ, vᵢ, wᵢ), i = 1, 2. According to the definition and properties of quaternions, the rotation quaternion which is equivalent to two consecutive applications of q and q₂ is equal to q₂·q. We denote it as Q = [X, Y, Z, W].
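The two-step quaternion construction can be sketched as follows. The sketch builds the first quaternion from the axis-angle form (equivalent, after normalization, to the cross/dot construction), and, instead of taking only the magnitude of the angular distance, computes the signed angle of the second rotation about (u₁, v₁, w₁) directly; all names are illustrative and the inputs are assumed to be unit vectors with v1 not parallel to u1.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def qmul(q1, q2):
    """Quaternion product, components ordered (x, y, z, w)."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return (w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
            w1*w2 - x1*x2 - y1*y2 - z1*z2)

def qrotate(q, v):
    """Rotate vector v by unit quaternion q: V = q * v * q^-1."""
    x, y, z, w = q
    res = qmul(qmul(q, (v[0], v[1], v[2], 0.0)), (-x, -y, -z, w))
    return res[:3]

def axis_angle_quat(axis, angle):
    """q = [axis * sin(angle/2), cos(angle/2)] for a unit axis."""
    s = math.sin(angle / 2.0)
    return (axis[0]*s, axis[1]*s, axis[2]*s, math.cos(angle / 2.0))

def binding_rotation(v1, v2, u1, u2):
    """Quaternion taking unit vectors v1 -> u1 and v2 -> u2 (two-step method)."""
    # Step 1: shortest-arc rotation taking v1 to u1 (assumes v1 not parallel to u1).
    c = cross(v1, u1)
    n = math.sqrt(dot(c, c))
    q = axis_angle_quat((c[0]/n, c[1]/n, c[2]/n), math.atan2(n, dot(v1, u1)))
    # Step 2: rotate about u1 so that the image of v2 lands on u2
    # (signed projected angle about the axis u1).
    up = qrotate(q, v2)
    sigma = math.atan2(dot(u1, cross(up, u2)),
                      dot(up, u2) - dot(u1, up) * dot(u1, u2))
    q2 = axis_angle_quat(u1, sigma)
    return qmul(q2, q)  # apply q first, then q2
```

Because two non-collinear vector correspondences determine a rotation uniquely, the composed quaternion reproduces any rotation from which the pairs were generated.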
[0110] Next, we use the familiar formula for translating the quaternion into the Euler angles: the quaternion Q = [X, Y, Z, W] is converted into the rotation matrix R, from which, for the Z-X-Z convention used above, the Euler angles are obtained as

β = arccos(R₃₃), α = atan2(R₁₃, −R₂₃), γ = atan2(R₃₁, R₃₂).
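A sketch of this final conversion, assuming the same Z-X-Z convention as the rotation matrix given earlier: the quaternion Q = [X, Y, Z, W] is first expanded into a rotation matrix, from which the three Euler angles are read off. Names are illustrative, and sin β ≠ 0 is assumed.

```python
import math

def quat_to_matrix(q):
    """Rotation matrix of a unit quaternion (x, y, z, w)."""
    x, y, z, w = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ]

def quat_to_euler_zxz(q):
    """Euler angles (alpha, beta, gamma) in the Z-X-Z convention.

    Uses R13 = sin(a)sin(b), R23 = -cos(a)sin(b), R33 = cos(b),
    R31 = sin(b)sin(g), R32 = sin(b)cos(g); assumes sin(b) != 0.
    """
    r = quat_to_matrix(q)
    beta = math.acos(max(-1.0, min(1.0, r[2][2])))
    alpha = math.atan2(r[0][2], -r[1][2])
    gamma = math.atan2(r[2][0], r[2][1])
    return alpha, beta, gamma
```

A quaternion assembled from known Z-X-Z angles round-trips through this extraction.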
[0111] This calculation can be done with sufficient accuracy, since the location of an astronomical object can be estimated to within a few tenths/hundredths of a degree, according to the above.
[0112] It should be noted that the above-described method of automatic binding does not have to rely solely on an astronomical object (or objects). From the above description, it should be apparent to those skilled in the art that the role of the astronomical object may in fact be performed by any perceptible object whose position in the video camera's absolute coordinate system is known, or can be determined at a given point (or points) of time with sufficient accuracy. Any immovable object visible in the area may serve as such an object; an example is given below.
[0113] The above procedure 200 is implemented, at least partially (i.e., at least its steps 206, 208), in the form of a computer-integrated module. The computer-integrated module is preferably software that can be, in a local or distributed manner, stored in memory units and run by processors at the server 140, and/or the computer of the operator workstation 120, and/or the transmitting side equipment 111, depending on the design of the system 100. At the same time, the computer-integrated module may also be a hardware unit or a combined software and hardware unit (e.g., a blade server in a blade server bay or a single computing device), as should be obvious to those skilled in the art. The above software can be developed using an appropriate programming technology chosen from a plurality of available technologies.
[0114] 2. Correction to the video camera binding
[0115] To be able to screen out false objects when the automatic detection algorithms are actuated, it is necessary to accurately determine the direction to the detected object when it is detected again. This task is complicated by the fact that the video camera, for various reasons, changes its binding; that is, the orientation of the video camera's inherent coordinate system (rotating device) in the absolute (horizontal) coordinate system related to the video camera location changes.
[0116] To eliminate the influence of this effect on the accuracy of determining the direction to the object when it is detected again, it is necessary to determine how the orientation of the video camera's inherent coordinate system in the absolute coordinate system has changed over the period elapsed between the repeated detections.
[0117] To this end, it is proposed to use the objects visible on the terrain, of which it is known for certain that they do not change their position or change it slightly (i.e. they are substantially fixed). These objects can be any visible stationary objects (e.g., window shutters, road signs, etc.).
[0118] Below with reference to
[0119] In step 402, the video camera is pointed at each of at least two specified distinct fixed objects on the terrain and the orientation of the video camera in the video camera's inherent coordinate system is determined.
[0120] In step 404, the video camera orientations, determined in step 402, are stored. Certain orientation can be stored in the storing device comprised by the server 140 and/or the computer of the operator workstation 120.
[0121] If a check is necessary, the following sub-steps are performed in step 406 for each of the said fixed objects. In sub-step 406-1, the video camera is pointed according to the orientation, stored in step 404, that corresponds to the object. In sub-step 406-2, if during the period between the pointings at the object the video camera has moved (for example, due to the possible causes noted above), the video camera is re-pointed at the object and its current orientation in the video camera's inherent coordinate system is determined.
[0122] In step 408, on the basis of the current orientations of the video camera determined in sub-step 406-2 and the corresponding orientations of the video camera stored in step 404, the rotation of the video camera's inherent coordinate system is calculated, which determines the aforementioned adjustment for the video camera movement. In fact, the rotation of the video camera's inherent coordinate system at the time of performing step 406 relative to the video camera's inherent coordinate system at the time of performing step 402 is determined.
[0123] Subsequently, for example, when the video camera detects a potentially dangerous object, its current orientation is compared with the orientation determined on the basis of the previously stored orientation of the video camera toward an object marked before as non-hazardous, adjusted on the basis of the calculated rotation of the video camera's inherent coordinate system.
[0124] The possible implementation of steps 402-408 by means of the components of the system 100 is indicated above or follows from its description in an obvious manner.
[0125] For those skilled in the art, it is evident that more than two stationary objects can be used when implementing the above procedure 400. Their number is generally determined by the required accuracy: as the number of objects increases, the number of independent measurement results increases, which, with appropriate processing, leads to an increased accuracy of direction determination.
[0126] As described above, for determining the rotation, it is necessary to determine the Euler angles, which in this case will describe the rotation of the video camera's inherent coordinate system for the current moment relative to the video camera's inherent coordinate system fixed during the preceding procedure of saving the orientation.
[0127] As regards the essence of the matter and the complexity, the mathematical problem is identical to that described above.
[0128] Suppose that the video camera has moved slightly; that is, the coordinates of a visible object in the video camera's inherent coordinate system, which had previously been saved as (p11, t11), became different, namely (p12, t12). We obtain the correspondences:
(p11, t11)-(p12, t12),
(p21, t21)-(p22, t22)
[0129] On the basis of these correspondences, similar to the above, we define the Euler angles, that is,
e1=f1(p11, t11, p12, t12, p21, t21, p22, t22),
e2=f2(p11, t11, p12, t12, p21, t21, p22, t22),
e3=f3(p11, t11, p12, t12, p21, t21, p22, t22).
[0130] After finding the Euler angles (e1, e2, e3), for each of the previously marked objects (marked, for example, as non-hazardous) whose saved video camera orientation is (p1, t1), it is possible to determine the adjusted orientation of the video camera toward the given object (pn, tn), that is, the orientation adjusted for the video camera deviation:
pn=f1(e1, e2, e3, p1, t1),
tn=f2(e1, e2, e3, p1, t1)
These adjusted orientations are then used, according to the method described above, in the comparison with the current orientation of the video camera.
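The adjustment of paragraph [0130] can be sketched by reusing the rotation of the direct problem: the saved orientation (p1, t1) is converted to a unit vector, rotated by the Euler angles (e1, e2, e3) found for the binding change, and converted back. The Z-X-Z convention and the mapping φ = p, θ = π/2 + t follow the assumptions used earlier; names are illustrative.

```python
import math

def adjust_orientation(p1, t1, e1, e2, e3):
    """Adjust a stored camera orientation (p1, t1) by Z-X-Z Euler angles."""
    # Stored orientation as a unit vector (phi = p, theta = pi/2 + t).
    th, ph = math.pi / 2 + t1, p1
    v = (math.sin(th) * math.cos(ph), math.sin(th) * math.sin(ph), math.cos(th))
    c, s = math.cos, math.sin
    # R = Rz(e1) * Rx(e2) * Rz(e3).
    r = [
        [c(e1)*c(e3) - s(e1)*c(e2)*s(e3), -c(e1)*s(e3) - s(e1)*c(e2)*c(e3),  s(e1)*s(e2)],
        [s(e1)*c(e3) + c(e1)*c(e2)*s(e3), -s(e1)*s(e3) + c(e1)*c(e2)*c(e3), -c(e1)*s(e2)],
        [s(e2)*s(e3),                      s(e2)*c(e3),                      c(e2)],
    ]
    w = [sum(r[i][j] * v[j] for j in range(3)) for i in range(3)]
    # Back to pan/tilt angles.
    pn = math.atan2(w[1], w[0])
    tn = math.acos(max(-1.0, min(1.0, w[2]))) - math.pi / 2
    return pn, tn
```

With zero Euler angles the stored orientation is returned unchanged; a pure precession shifts only the panoramic angle.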
[0131] For the most accurate operation of the system, the binding procedure 400 can be performed before each patrol; this procedure is expected to take little time (a few seconds).
[0132] Referring to
[0133] During the patrol, the video camera inspects the territory; the automatic algorithm for detection of dangerous objects identifies a potentially dangerous object (step 501) in the video image. Upon this, the current orientation of the video camera is identified (step 502). Further, the detected object is validated in any manner: for example, the operator himself performs the validation, which may consist in a visual representation of the potentially dangerous object to the operator, who should confirm or refute the danger of the detected object (step 503). After that, the system continues to patrol along the specified path (step 504).
[0134] After some time, determined by the class of fire danger and the requirements for the rate of detection (for example, every 15 minutes), the system starts travelling the path once again (step 506). Before this, the procedure of determining the changes in binding of the video camera is performed according to the above method 400 (step 505).
[0135] When a potentially dangerous object is detected, the orientation of the video camera toward it is determined and compared with the stored video camera orientation toward the object(s) designated as non-hazardous during past patrol(s), the stored orientation being adjusted on the basis of the computed rotation of the video camera's inherent coordinate system (step 507). If the orientations coincide with a specified accuracy determined by the system settings (e.g., up to 0.01°), then the system does not create a new alarm for the operator, does not require validation, and continues to patrol (step 508). This method can significantly reduce the operator workload, increase the number of video cameras per operator and reduce the number of false alarms.
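The coincidence check of step 507 can be sketched as an angular-separation test between two directions with a configurable threshold (0.01° in the example above); the function name and conventions (φ = p, θ = π/2 + t) are illustrative assumptions.

```python
import math

def same_direction(p1, t1, p2, t2, tol_deg=0.01):
    """True if two camera orientations point within tol_deg of each other."""
    def unit(p, t):
        # Orientation (pan p, tilt t) as a unit direction vector.
        th = math.pi / 2 + t
        return (math.sin(th) * math.cos(p), math.sin(th) * math.sin(p), math.cos(th))
    u, w = unit(p1, t1), unit(p2, t2)
    d = sum(a * b for a, b in zip(u, w))
    # Angular separation in degrees (clamped for floating-point safety).
    sep = math.degrees(math.acos(max(-1.0, min(1.0, d))))
    return sep <= tol_deg
```

Comparing full directions rather than the pan and tilt angles separately avoids spurious mismatches near the poles, where a large pan difference can correspond to a tiny angular separation.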
[0137] The invention has been described above with reference to specific embodiments thereof. Other embodiments of the invention may be obvious to those skilled in the art without departing from the spirit thereof as disclosed herein. Accordingly, the invention should be considered as limited in scope only by the following claims.