Forest fire video monitoring system and method
09686513 · 2017-06-20
Assignee
Inventors
- Ivan Sergeevich SHISHALOV (Nizhny Novgorod, RU)
- Oleg Andreevich GROMAZIN (Nizhny Novgorod, RU)
- Yaroslav Sergeevich Solovyev (Nizhny Novgorod, RU)
- Aleksandr Vladimirovich Romanenko (Nizhny Novgorod, RU)
- Ivan Vasilievich Esin (Nizhny Novgorod, RU)
CPC classification
H04N7/18
ELECTRICITY
H04N7/181
ELECTRICITY
G08B17/005
PHYSICS
G06V20/52
PHYSICS
H04N23/69
ELECTRICITY
International classification
G08B17/12
PHYSICS
H04N7/18
ELECTRICITY
Abstract
The invention relates to forest video monitoring. A method and system are provided for automatically binding a video camera to an absolute coordinate system and for determining changes in the video camera binding. In one aspect, the method comprises the steps of: in each of at least two predetermined time moments, aiming the video camera at an object whose position in the absolute coordinate system centered at the point in which the video camera resides is known at said moment, and determining an orientation of the video camera in a native coordinate system of the video camera; and, based on the determined orientations of the video camera and positions of the object, calculating a rotation of the native coordinate system of the video camera in the absolute coordinate system. The calculated rotation of the video camera's native coordinate system is used to recalculate coordinates of an observed object from the video camera's native coordinate system into the absolute coordinate system. The technical result is improved accuracy in locating the observed object.
Claims
1. A forest fire video monitoring system comprising: at least one remotely controlled video monitoring point which includes a high-rise construction and a transmission-side equipment residing on the high-rise construction, the transmission-side equipment comprising: a video camera on a rotating device; and a camera control unit configured to determine a current spatial orientation of the video camera in a native coordinate system of the video camera; at least one computerized operator workstation for operating said video monitoring point; and a computer-implemented module configured: in each of at least two predetermined time moments, to obtain an orientation of the video camera aimed at a known astronomical object, said orientation determined by the camera control unit in the native coordinate system of the video camera, and to determine, based on a predetermined location of the video monitoring point and said at least two predetermined time moments, a position of the astronomical object in an absolute coordinate system centered in a point in which the video camera resides, and to calculate, based on the determined orientations of the video camera and positions of the astronomical object, a rotation of the native coordinate system of the video camera in the absolute coordinate system.
2. The system of claim 1, wherein the transmission-side equipment of said video monitoring point further comprises a communication device, wherein the system further comprises a server, and wherein the video monitoring point, the server, and the operator workstation are communicatively connected to each other.
3. The system of claim 2, wherein the video camera of the transmission-side equipment of said video monitoring point is equipped with a zoom.
4. The system of claim 3, wherein the video camera is aimed at the astronomical object by manually matching the center of an image obtained from the video camera with the center of the astronomical object.
5. The system of claim 3, further comprising a computer-implemented intelligent subsystem configured, based on computer vision technologies, to aim the video camera at the astronomical object by automatically detecting the astronomical object based on analysis of an image obtained from a video camera, and automatically matching the center of the image obtained from the video camera with the center of the astronomical object.
6. The system of claim 4, wherein, when aiming the video camera at the astronomical object, the zoom is used to zoom in the astronomical object to the maximum possible extent.
7. The system of claim 5, wherein, when aiming the video camera at the astronomical object, the zoom is used to zoom in the astronomical object to the maximum possible extent.
8. The system of claim 2, wherein said computer-implemented module resides at the server, and/or said operator workstation, and/or the transmission-side equipment of said video monitoring point.
9. The system of claim 1, wherein the astronomical object is the Sun.
10. The system of claim 1, wherein the native coordinate system of the video camera is defined by a manufacturer of the video camera.
11. The system of claim 1, wherein the location of said video monitoring point is defined by its geographical coordinates, and the position of the astronomical object is defined by its azimuth and angular altitude above the horizon.
12. The system of claim 1, wherein the calculated rotation of the native coordinate system of the video camera is used to recalculate coordinates of an observed object from the native coordinate system of the video camera into the absolute coordinate system.
13. In a forest fire video monitoring system comprising at least one remotely controlled video monitoring point comprising: a video camera on a rotating device residing on a high-rise construction; and a camera control unit configured to determine a current spatial orientation of the video camera in a native coordinate system of the video camera, a method for automatically binding the native coordinate system of the video camera to an absolute coordinate system, the method comprising the steps of: in each of at least two predetermined time moments aiming the video camera at a known astronomical object, and determining an orientation of the video camera in the native coordinate system of the video camera, and determining, based on a predetermined location of the video monitoring point and said at least two predetermined time moments, a position of the astronomical object in the absolute coordinate system centered in a point in which the video camera resides; and calculating, based on the determined orientations of the video camera and positions of the astronomical object, a rotation of the native coordinate system of the video camera in the absolute coordinate system.
14. The method of claim 13, further comprising, based on the calculated rotation of the native coordinate system of the video camera, recalculating coordinates of an observed object from the native coordinate system of the video camera into the absolute coordinate system.
15. The method of claim 13, wherein the video camera is aimed at the astronomical object by manually matching the center of an image obtained from the video camera with the center of the astronomical object.
16. The method of claim 13, wherein the video camera is aimed at the astronomical object by automatically detecting the astronomical object based on analysis of an image obtained from the video camera and automatically matching the center of the image obtained from the video camera with the center of the astronomical object, based on computer vision technologies.
17. A forest fire video monitoring system comprising: at least one remotely controlled video monitoring point which includes a high-rise construction and a transmission-side equipment residing on the high-rise construction, the transmission-side equipment comprising: a video camera on a rotating device; and a camera control unit configured to determine a current spatial orientation of the video camera in a native coordinate system of the video camera; at least one computerized operator workstation for operating said video monitoring point; and a computer-implemented module configured: in each of at least two predetermined time moments, to obtain an orientation of the video camera in the native coordinate system of the video camera, wherein the orientation is determined by the camera control unit when the video camera is aimed at a known astronomical object, wherein a position of the astronomical object in an absolute coordinate system centered in a point in which the video camera resides is known at said at least two predetermined time moments, to calculate, based on the determined orientations of the video camera and positions of the astronomical object, a rotation of the native coordinate system of the video camera in the absolute coordinate system.
18. In a forest fire video monitoring system comprising at least one remotely controlled video monitoring point comprising: a video camera with a rotating device residing on a high-rise construction; and a camera control unit configured to determine a current spatial orientation of the video camera in a native coordinate system of the video camera, a method for automatically binding the native coordinate system of the video camera to an absolute coordinate system, the method comprising the steps of: in each of at least two predetermined time moments, aiming the video camera at an object a position of which in the absolute coordinate system centered at a point in which the video camera resides is known at said at least two predetermined time moments, and determining an orientation of the video camera in the native coordinate system of the video camera; and based on the determined orientations of the video camera and positions of the object, calculating a rotation of the native coordinate system of the video camera in the absolute coordinate system.
Description
BRIEF DESCRIPTION OF THE FIGURE DRAWINGS
(1) The above and other aspects and advantages of the present invention are disclosed in the following detailed description, which refers to the accompanying drawings.
DETAILED DESCRIPTION OF THE INVENTION
(7) In the following disclosure of the present invention, reference will be made to the forest fire video monitoring system 100 shown in the accompanying drawings.
(8) Furthermore, the disclosure of the present invention is presented in subsections corresponding to the objectives stated above.
(9) 1. Automatic Binding of the Video Camera's Native Coordinate System to the Absolute Coordinate System
(10) Referring to the accompanying drawings, a procedure 200 for automatically binding the native coordinate system of the video camera to the absolute coordinate system is described below.
(11) In astronomy, methods are known for determining, with high accuracy, the horizontal (topocentric) coordinates (i.e., the azimuth (the angle from the north direction) and the angular altitude (the angle from the mathematical horizon)) of various celestial bodies, such as the Sun, based on the geographical coordinates of the observer and the exact time of day. These methods are widely used, in particular, in marine navigation.
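For illustration only, the following Python sketch shows how such horizontal coordinates of the Sun can be computed from the observer's geographical coordinates and the exact time. It assumes the third-party astropy package; the function name and the example coordinates are hypothetical and are not part of the claimed system.

```python
# Minimal sketch (not part of the claimed system): horizontal coordinates of the Sun
# for a given observer location and UTC time. Assumes the third-party "astropy" package;
# the example coordinates below are hypothetical.
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, get_sun
from astropy.time import Time

def sun_azimuth_altitude(lat_deg, lon_deg, height_m, utc_iso):
    """Return (azimuth, altitude) of the Sun in degrees for the given observer and time."""
    location = EarthLocation(lat=lat_deg * u.deg, lon=lon_deg * u.deg, height=height_m * u.m)
    time = Time(utc_iso)                                  # exact time of observation (UTC)
    sun_altaz = get_sun(time).transform_to(AltAz(obstime=time, location=location))
    return sun_altaz.az.deg, sun_altaz.alt.deg            # azimuth from north, altitude above horizon

# Hypothetical monitoring point, shortly after sunrise
az, alt = sun_azimuth_altitude(56.3, 44.0, 150.0, "2017-06-20 02:00:00")
print(f"Sun azimuth: {az:.2f} deg, altitude: {alt:.2f} deg")
```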
(12) In step 202, the video camera 115 is aimed at a known astronomical object (such as the Sun, the Moon, etc.).
(13) The camera 115 can be aimed manually; that is, the operator detects the astronomical object when viewing the area and points the video camera thereto, such that the center of the image acquired from the camera matches the center of said object. In an exemplary embodiment, this is accomplished by interaction of the operator, using input devices, with the respective elements of the graphical user interface displayed on the display device. In response, the computer of the operator workstation 120 generates control commands that are sent over the network 130 (possibly through the server 140 and with its direct participation) to the appropriate transmission-side equipment(s) 111, where those commands are delivered through the communication module 114 to the camera control unit 118. Based on said commands, the camera control unit 118 generates the control signals for driving the rotating device 117 so as to set a spatial orientation of the camera 115 such that it is aimed at the respective astronomical object.
(14) This procedure can also be performed automatically; that is, in the automatic mode the video camera 115 views the area and, by means of the special computer vision algorithms implemented in the system 100, as noted above, detects an object with characteristics known in advance (for example, in the case of the Sun, a bright circle).
(15) When aiming the video camera 115 at the astronomical object, the astronomical object is preferably zoomed in to the maximum possible extent (to this end, in the exemplary embodiment, the camera control device 118 appropriately controls the zoom 116) and the center of the image obtained from the camera is manually or automatically aligned with the center of the astronomical object.
(16) The location of the astronomical object can be estimated with an accuracy of up to several tenths or hundredths of a degree. In particular, the angular size of the Sun is about 31′27″. Modern cameras can achieve a magnification at which the viewing angle is only a few degrees or less. Computer vision techniques make it possible to determine the position of the center of the circle in the image with an accuracy of a few pixels. That is, if the video camera has a resolution of one or more megapixels, then the accuracy of determining the direction to the Sun may be about 0.05°.
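As a purely illustrative check of this estimate (the field-of-view, resolution, and pixel-error values below are hypothetical, not measured characteristics of any particular camera), the angular accuracy can be approximated as follows:

```python
# Rough estimate of the angular accuracy of the direction to the astronomical object.
# All numeric values are hypothetical examples.
def direction_accuracy_deg(fov_deg, image_width_px, center_error_px):
    """Angular error (degrees) corresponding to a given pixel error of the detected center."""
    deg_per_pixel = fov_deg / image_width_px
    return center_error_px * deg_per_pixel

# e.g. a 2-degree field of view at full zoom, a 1000-pixel-wide image,
# and the circle center detected to within ~3 pixels:
print(direction_accuracy_deg(fov_deg=2.0, image_width_px=1000, center_error_px=3))  # 0.006
```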
(17) Upon aiming the camera at the astronomical object, step 204 determines the orientation of the camera in its native coordinate system, i.e., the direction of the camera in the coordinate system associated with the mechanics of the video camera. As mentioned above, such functionality is provided in modern controlled video cameras and is implemented, for example, by the camera control device 118.
(18) In step 206, based on the known exact location of the video monitoring point 110, the position of the astronomical object in the video camera's absolute coordinate system is determined for the current time. As mentioned above, in astronomy, there are known methods and formulas which make it possible to determine, from the known geographic coordinates of the observer and the exact time of day, the azimuth and altitude of the astronomical object, that is, the coordinates of the object in the horizontal coordinate system.
(19) The procedure according to steps 202-206 should be performed repeatedly, where successive performances are separated by a certain period of time. The minimum number of performances is two, which follows from the detailed discussion presented hereinbelow. At the same time, in order to improve accuracy, said procedure may be performed a greater number of times, since, despite the possibly high precision of the devices, each measurement involves a certain error, which can be reduced by repeatedly performing said measurements/determinations.
(20) It is most convenient to carry out this procedure on such an astronomical object as the Sun. In this case, the video camera can be aimed at the Sun twice: just after sunrise and just before sunset, when brightness is sufficient for detection but not so high as to cause glare on the lens and damage the electronics of the video camera. It should be emphasized once again that in each of these two time moments the video camera is aimed at the Sun (step 202), its orientation is determined in the native coordinate system (step 204), and the position of the astronomical object is determined in the absolute coordinate system, knowing the exact time corresponding to said time moment and the precise geographical coordinates of the video camera (step 206).
(21) Based on the video camera orientations and the positions of the astronomical object, as determined in steps 204 and 206, step 208 calculates a rotation of the video camera's native coordinate system in the absolute coordinate system associated with it. The calculated rotation of the video camera's native coordinate system makes it possible to determine recalculation coefficients for converting coordinates of an observed object from the video camera's native coordinate system into the absolute coordinate system.
(22) In fact, the following correspondence is established: the vertical and horizontal (panoramic) angles in the video camera's native coordinate system correspond to the altitude of the astronomical object (for example, the Sun) above the mathematical horizon (the vertical angle in the absolute coordinate system), which is bound to the location of the camera (observer), and to the azimuth (the horizontal angle in the absolute coordinate system), which is likewise bound to the location of the video camera.
(23) For further recalculation from the video camera coordinate system into the absolute (horizontal) coordinate system, it is necessary, based on the correspondence data, to determine the rotation of the native coordinate system in the absolute coordinate system, and, to this end, the Euler angles can be determined, for example. The Euler angles are the angles describing rotation of a perfectly rigid body in the three-dimensional Euclidean space.
(24) Upon determination of the Euler angles, it is possible to obtain, for each point in the native coordinate system, the value in the absolute coordinate system, which means that for every visible object we can recalculate the obtained direction to said object in the absolute coordinate system associated with the location of the camera, i.e., in fact, eliminate the influence of the above factors on the accuracy of determination of the observed object direction.
(25) Mathematically, this is expressed as follows.
(26) We obtain two correspondences of two observation points in the horizontal coordinate system (azimuth (a), altitude (v)) to two points in the video camera's native coordinate system (panoramic angle (p), vertical angle (t)):
(a1, v1) ↔ (p1, t1),
(a2, v2) ↔ (p2, t2).
(27) According to the example given above, these points can correspond to the two aimings of the video camera at Sun.
(28) Based on this, it is necessary to obtain three Euler angles e1, e2, e3, i.e.
e1=f1(a1,v1,p1,t1,a2,v2,p2,t2),
e2=f2(a1,v1,p1,t1,a2,v2,p2,t2),
e3=f3(a1,v1,p1,t1,a2,v2,p2,t2).
(29) Then, knowing the three Euler angles, we obtain for each (p, t) the correspondence (a, v), i.e.
a=f1(p,t,e1,e2,e3),
v=f2(p,t,e1,e2,e3).
(30) Let us dwell on the problem of coordinate recalculation and determination of the Euler angles.
(31) Direct Problem
(32) The correspondence between the visible object coordinates in the video camera's native coordinate system (p, t), where p is the panoramic angle and t is the vertical angle, and the absolute coordinates (a, v), where a is the azimuth and v is the angular altitude above the mathematical horizon, is defined by the Euler angles α, β, γ, which in this case describe the rotation (orientation) of the video camera's native coordinate system in the absolute coordinate system.
(33) The Euler angles make it possible to map any position of the coordinate system to the current position. Let us denote the initial coordinate system as (x, y, z), and the final one as (X, Y, Z). The intersection of the coordinate planes xy and XY is called the nodal line N, where:
(34) the angle α is the angle between the x axis and the nodal line;
(35) the angle β is the angle between the z and Z axes;
(36) the angle γ is the angle between the X axis and the nodal line.
(37) The rotations of the coordinate system by these angles are called precession, nutation and intrinsic rotation (spin). These rotations are non-commutative, and the final position of the system depends on the order in which the rotations are performed. In the case of the Euler angles, the rotation is represented by the sequence 3, 1, 3 (Z, X, Z) (see the accompanying drawings).
(38) Upon determination of the Euler angles, the rotation matrix is calculated, based on which the angles (a, v) are uniquely determined for each pair (p, t). The matrix of rotation of the Cartesian coordinates for the Euler angles α, β, γ (Z-X-Z sequence) is as follows:
(39) R =
| cos α cos γ − sin α cos β sin γ    −cos α sin γ − sin α cos β cos γ     sin α sin β |
| sin α cos γ + cos α cos β sin γ    −sin α sin γ + cos α cos β cos γ    −cos α sin β |
| sin β sin γ                         sin β cos γ                          cos β      |
(40) To use this matrix, the angles (p, t) must be converted to the Cartesian coordinate system. The angle p corresponds to the angle φ in the spherical coordinate system, and the angle θ = π/2 + t. The radius of the sphere is not important, because its size does not change when rotating, so r = 1. The coordinates in the Cartesian coordinate system are determined as:
(41) x = sin θ cos φ, y = sin θ sin φ, z = cos θ.
(42) Multiplying the rotation matrix by the column vector
(43) (x, y, z)ᵀ,
the column vector
(44) (x′, y′, z′)ᵀ = R·(x, y, z)ᵀ
is determined. Based on the determined column vector, the new spherical coordinates are determined as:
(45) φ₁ = atan2(y′, x′), θ₁ = arccos(z′).
(46) Thus, the azimuth of the point corresponds to φ₁, and the angle of inclination corresponds to
(47) θ₁ − π/2.
That is, a and v are determined from the known p and t and the Euler angles α, β, γ.
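A minimal Python sketch of this direct recalculation (assuming the Z-X-Z Euler convention and the spherical-coordinate conversion given above; the function names are illustrative and not part of the patent):

```python
import math

def zxz_rotation_matrix(alpha, beta, gamma):
    """Rotation matrix for Z-X-Z Euler angles (precession, nutation, intrinsic rotation)."""
    c1, s1 = math.cos(alpha), math.sin(alpha)
    c2, s2 = math.cos(beta), math.sin(beta)
    c3, s3 = math.cos(gamma), math.sin(gamma)
    return [
        [c1 * c3 - s1 * c2 * s3, -c1 * s3 - s1 * c2 * c3,  s1 * s2],
        [s1 * c3 + c1 * c2 * s3, -s1 * s3 + c1 * c2 * c3, -c1 * s2],
        [s2 * s3,                 s2 * c3,                  c2],
    ]

def native_to_absolute(p, t, alpha, beta, gamma):
    """Recalculate a direction from the camera's native system (p, t) into the absolute system (a, v).

    All angles are in radians; p is the panoramic angle, t is the vertical angle.
    """
    # spherical -> Cartesian on the unit sphere (phi = p, theta = pi/2 + t, r = 1)
    theta, phi = math.pi / 2 + t, p
    x = math.sin(theta) * math.cos(phi)
    y = math.sin(theta) * math.sin(phi)
    z = math.cos(theta)
    # rotate by the Euler angles that bind the native system to the absolute one
    r = zxz_rotation_matrix(alpha, beta, gamma)
    xr, yr, zr = (sum(r[i][j] * (x, y, z)[j] for j in range(3)) for i in range(3))
    # Cartesian -> spherical: azimuth a and angular altitude v
    a = math.atan2(yr, xr) % (2 * math.pi)
    v = math.acos(max(-1.0, min(1.0, zr))) - math.pi / 2
    return a, v

# Example: an identity binding (all Euler angles zero) leaves the direction unchanged
print(native_to_absolute(math.radians(30.0), math.radians(10.0), 0.0, 0.0, 0.0))
```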
(48) Inverse Problem
(49) In order to determine the Euler angles based on the known correspondences between orientations in the native and absolute coordinate systems, it is necessary to solve the problem which is inverse to the above direct problem.
(50) For the inverse problem, namely for finding the Euler angles, let us describe the rotation of the system by using quaternions. In this case, a quaternion is a quadruple of numbers (x, y, z, w), where (x, y, z) is a vector and w is a scalar. In this representation of the quaternion, the first three components represent a vector which belongs to the rotation axis, where the vector length depends on the rotation angle. The fourth component depends only on the value of the rotation angle. The dependence is quite simple: if we take the unit vector V for the rotation axis and the angle alpha for the rotation around this axis, then the quaternion representing this rotation can be written as follows:
q = [V·sin(alpha/2), cos(alpha/2)].
(51) The initial data of the inverse problem are two pairs of mutually related vectors, namely (p₁, t₁) → (a₁, v₁) and (p₂, t₂) → (a₂, v₂), where (p, t) are the coordinates in the video camera's native coordinate system, and (a, v) are the coordinates in the video camera's absolute coordinate system. Let us translate each of these vectors into the Cartesian coordinate system, and we obtain the corresponding vectors: (xᵢ, yᵢ, zᵢ) → (uᵢ, vᵢ, wᵢ), i = 1, 2.
(52) Now we will find the rotation quaternion of the camera initial coordinate system that transforms the vector (x.sub.1, y.sub.1, z.sub.1) into (u.sub.1, v.sub.1, w.sub.1). This quaternion will describe the shortest distance rotation. To find it, we use the following formulas:
(53) (x₁, y₁, z₁) × (u₁, v₁, w₁) = (a, b, c) is the cross product of the input vector and the final vector;
(54) q = [a, b, c, 1 + (x₁, y₁, z₁)·(u₁, v₁, w₁)] is the sought quaternion (here (x₁, y₁, z₁)·(u₁, v₁, w₁) is the scalar product of the vectors; adding one to the scalar part yields the half-angle form, so that the resulting rotation maps the first vector exactly onto the second).
(55) And in the last step let us normalize the quaternion q: to this end, we divide its components x, y, z, w by n = √(x² + y² + z² + w²).
(56) The quaternion obtained in this step defines the rotation of the Cartesian coordinate system that translates (x.sub.1, y.sub.1, z.sub.1) into (u.sub.1, v.sub.1, w.sub.1).
(57) Let us rotate the coordinate system, and, to this end, we use the rotation formula V = q·v·q⁻¹, where v is the original vector and V is the vector after the rotation. Thus, after the rotation of the Cartesian coordinate system by the quaternion q we have two vectors: (u₁, v₁, w₁) obtained from (x₁, y₁, z₁), and a certain vector (u_p, v_p, w_p) obtained from (x₂, y₂, z₂). Now, it is necessary to rotate the Cartesian coordinate system in such a way that the vector (u₁, v₁, w₁) remains in place, and the vector (u_p, v_p, w_p) transitions into the vector (u₂, v₂, w₂) (in general, such a rotation is not possible for arbitrary vectors, but since their values are taken from the real system, we assume that such a rotation is possible).
(58) Obviously, in order to perform such a rotation, it is necessary that the rotation axis pass through the point (u₁, v₁, w₁) and the origin of coordinates. To find the rotation angle, we will find the angular distance on the sphere between the two points (u_p, v_p, w_p) and (u₂, v₂, w₂). To this end, let us translate them into the spherical coordinate system: (u_p, v_p, w_p) → (φ₁, θ₁) and (u₂, v₂, w₂) → (φ₂, θ₂), respectively. Then, we use the formula for finding the angular distance (this formula is widely used, for example, in astronomy):
(59) cos σ = cos θ₁ cos θ₂ + sin θ₁ sin θ₂ cos Δφ,
(60) where Δφ is the difference between the coordinates in longitude, that is, the difference between the coordinates in the angle φ, and σ is the angular difference (angular distance).
(61) Using the obtained angle σ and knowing the rotation axis, we obtain the rotation quaternion that transforms the vector (u_p, v_p, w_p) into (u₂, v₂, w₂): q₂ = [(u₁, v₁, w₁)·sin(σ/2), cos(σ/2)].
(62) Thus, we have obtained two rotation quaternions, q and q₂, which, when applied sequentially, transform the original points (xᵢ, yᵢ, zᵢ) into (uᵢ, vᵢ, wᵢ), i = 1, 2. According to the definition and properties of quaternions, the rotation quaternion which is equivalent to two successive applications of q and q₂ is equal to q₂·q. We denote it as Q = [X, Y, Z, W].
(63) Next, we use the known formula for translating the quaternion into the Euler angles to obtain:
(64) α = atan2(X·Z + Y·W, X·W − Y·Z), β = arccos(1 − 2·(X² + Y²)), γ = atan2(X·Z − Y·W, Y·Z + X·W).
(65) This calculation can be performed with sufficient accuracy, since the position of the astronomical object can be estimated to within a few tenths or hundredths of a degree, as noted above.
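The quaternion operations used above can be sketched in Python as follows (a minimal illustration, not the patented implementation; it uses the standard shortest-arc construction and a quaternion-to-Euler conversion in the Z-X-Z convention, and all names are illustrative):

```python
import math

def q_mul(q1, q2):
    """Hamilton product of quaternions given as (x, y, z, w)."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return (w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
            w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2)

def q_normalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def q_rotate(q, v):
    """Rotate vector v by unit quaternion q: V = q * v * q^-1."""
    x, y, z, w = q
    rotated = q_mul(q_mul(q, (v[0], v[1], v[2], 0.0)), (-x, -y, -z, w))
    return rotated[:3]

def shortest_arc(v1, v2):
    """Unit quaternion rotating unit vector v1 onto unit vector v2."""
    cross = (v1[1] * v2[2] - v1[2] * v2[1],
             v1[2] * v2[0] - v1[0] * v2[2],
             v1[0] * v2[1] - v1[1] * v2[0])
    dot = v1[0] * v2[0] + v1[1] * v2[1] + v1[2] * v2[2]
    return q_normalize((cross[0], cross[1], cross[2], 1.0 + dot))

def zxz_euler_from_quaternion(Q):
    """Z-X-Z Euler angles (alpha, beta, gamma) of the rotation quaternion Q = (X, Y, Z, W).

    Degenerate when beta is close to 0 or pi (only alpha + gamma is then determined).
    """
    X, Y, Z, W = Q
    beta = math.acos(max(-1.0, min(1.0, 1.0 - 2.0 * (X * X + Y * Y))))
    alpha = math.atan2(X * Z + Y * W, X * W - Y * Z)
    gamma = math.atan2(X * Z - Y * W, Y * Z + X * W)
    return alpha, beta, gamma

# Example: a pure rotation about the x axis by 30 degrees
q = (math.sin(math.radians(15.0)), 0.0, 0.0, math.cos(math.radians(15.0)))
print([math.degrees(e) for e in zxz_euler_from_quaternion(q)])  # ~[0, 30, 0]
```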
(66) It should be noted that the above-described automatic binding method does not have to rely solely on an astronomical object(s). From the above description, it should be apparent to those skilled in the art that substantially any discernible object, whose position in the video camera's absolute coordinate system is known or can be determined at a given time(s) with sufficient accuracy, can act as the astronomical object. Any discernible still object visible in the terrain may serve as such object, which is exemplified below.
(67) The above procedure 200 is implemented, at least partially (i.e., at least, its steps 206, 208), in the form of a computer-implemented module. The computer-implemented module is preferably software that can be, in a local or distributed manner, stored in memory units and run by processors at the server 140, and/or the computer of the operator workstation 120, and/or the transmission-side equipment 111, depending on the design of the system 100. At the same time, the computer-implemented module may also represent a hardware or firmware unit (e.g., a blade server in a blade server chassis or a single computing device), which should be obvious for those skilled in the art. The above software can be developed using an appropriate programming technology chosen from a plurality of available technologies.
(68) 2. Correction to the Video Camera Binding
(69) In order to provide the capability of filtering out false objects when the automatic detection algorithms are triggered, it is necessary to accurately determine the direction to a previously detected object when it is detected again. This task is complicated by the fact that the camera binding changes for various reasons, that is, the orientation of the native coordinate system of the video camera (rotating device) changes in the absolute (horizontal) coordinate system associated with the video camera location.
(70) In order to eliminate the influence of this effect on the accuracy of determining the direction to an object when it is detected again, it is necessary to determine how the orientation of the video camera's native coordinate system in the absolute coordinate system has changed for the period elapsed between the repeated detections of the object.
(71) To this end, it is proposed to use the objects visible in the terrain which are certainly known as not changing their position or changing it slightly (i.e. they are substantially still). These objects can be any visible stationary objects (e.g., window shutters, road signs, etc.).
(72) Below, with reference to the accompanying drawings, a procedure 400 for determining changes in the video camera binding (i.e., a correction to the binding) is described.
(73) In step 402, the video camera is aimed at each of at least two predefined discernible still objects in the terrain, and an orientation of the video camera in the video camera's native coordinate system is determined.
(74) In step 404, the video camera orientations determined in step 402 are stored. The determined orientations can be stored in the storage device comprised by the server 140 and/or the computer of the operator workstation 120.
(75) If verification is necessary, the following sub-steps are performed in step 406 for each of said still objects. In sub-step 406-1, the video camera is aimed according to the video camera orientation stored in step 404 that corresponds to said object. In sub-step 406-2, if during the period between the aimings at the object the video camera has been displaced (for example, due to the possible reasons indicated above), then the video camera is re-aimed at the object and its current orientation in the video camera's native coordinate system is determined.
(76) In step 408, based on the current video camera orientations determined in sub-step 406-2 and the respective video camera orientations stored in step 404, the rotation of the video camera's native coordinate system is calculated that defines the correction for the aforementioned video camera displacement. In fact, the rotation of the video camera's native coordinate system at the time of performing step 406 relative to the video camera's native coordinate system at the time of performing step 402 is substantially determined.
(77) Subsequently, for example, when the video camera detects a potentially dangerous object, the current camera orientation is compared with the orientation obtained from the previously stored camera orientation to an object previously marked as non-dangerous, adjusted by the calculated rotation of the video camera's native coordinate system.
(78) The possible implementation of steps 402-408 by means of the components of the system 100 is indicated above or obviously follows from the description of said system.
(79) It should be obvious to those skilled in the art that more than two stationary objects can be used in the implementation of the above procedure 400. The number of such objects is generally defined by the required accuracy: in particular, increasing the number of objects increases the number of independently obtained measurement results, which, with appropriate processing, leads to increased accuracy of direction determination.
(80) As described above, in order to determine the rotation, it is necessary to determine the Euler angles which in this case will describe the rotation of the video camera's native coordinate system for the current time moment relative to the video camera's native coordinate system fixed during the preceding procedure of saving the orientation.
(81) The mathematical problem is identical to that described above, both in essence and in the sense of complexity.
(82) Let us suppose that the camera has been slightly displaced, i.e. a visible object that previously had certain coordinates in the video camera's native coordinate system, namely (p11, t11) (which have been preliminarily saved), now has other coordinates, namely (p12, t12). We thus obtain the correspondences:
(p11, t11) → (p12, t12),
(p21, t21) → (p22, t22).
(83) Based on these correspondences, the Euler angles are determined similar to the above, i.e.
e1=f1(p11,t11,p12,t12,p21,t21,p22,t22),
e2=f2(p11,t11,p12,t12,p21,t21,p22,t22),
e3=f3(p11,t11,p12,t12,p21,t21,p22,t22).
(84) After finding the Euler angles (e1, e2, e3), for each of the previously marked objects (e.g., marked as non-dangerous), given the stored video camera orientation to this object (p1, t1), the adjusted orientation of the video camera to said object (pn, tn), i.e. the orientation adjusted for the video camera displacement, can be determined as follows:
pn=f1(e1,e2,e3,p1,t1),
tn=f2(e1,e2,e3,p1,t1)
(85) according to the method described above; the adjusted orientation is then compared with the current orientation of the camera.
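A minimal Python sketch of this adjustment and comparison (the function names and the tolerance value are illustrative; the Z-X-Z rotation is the same as in the direct-problem sketch above):

```python
import math

def adjust_orientation(p1, t1, e1, e2, e3):
    """Apply the correction rotation (Z-X-Z Euler angles e1, e2, e3) to a stored orientation (p1, t1).

    All angles are in radians; returns the adjusted orientation (pn, tn).
    """
    c1, s1 = math.cos(e1), math.sin(e1)
    c2, s2 = math.cos(e2), math.sin(e2)
    c3, s3 = math.cos(e3), math.sin(e3)
    r = [[c1 * c3 - s1 * c2 * s3, -c1 * s3 - s1 * c2 * c3,  s1 * s2],
         [s1 * c3 + c1 * c2 * s3, -s1 * s3 + c1 * c2 * c3, -c1 * s2],
         [s2 * s3,                 s2 * c3,                  c2]]
    theta, phi = math.pi / 2 + t1, p1
    v = (math.sin(theta) * math.cos(phi), math.sin(theta) * math.sin(phi), math.cos(theta))
    vr = [sum(r[i][j] * v[j] for j in range(3)) for i in range(3)]
    pn = math.atan2(vr[1], vr[0]) % (2 * math.pi)
    tn = math.acos(max(-1.0, min(1.0, vr[2]))) - math.pi / 2
    return pn, tn

def matches_stored_object(current_p, current_t, pn, tn, tol_deg=0.01):
    """True if the current orientation coincides with the adjusted stored one within tol_deg degrees."""
    dp = abs((current_p - pn + math.pi) % (2 * math.pi) - math.pi)   # wrap-around safe azimuth difference
    dt = abs(current_t - tn)
    return math.degrees(dp) <= tol_deg and math.degrees(dt) <= tol_deg
```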
(86) For the most accurate operation of the system, the binding procedure 400 can be performed before each patrolling; this procedure is not expected to take much time (a few seconds).
(87) Referring to the accompanying drawings, an exemplary patrolling process employing the above-described correction of the video camera binding is described below.
(88) During the patrolling, the video camera inspects the territory; the automatic algorithm for detection of dangerous objects identifies a potentially dangerous object in the video image (step 501). Upon this, the current orientation of the camera is identified (step 502). Further, the detected object is validated in some manner: for example, the operator himself performs the validation, which may consist in visual representation of the potentially dangerous object to the operator, who should confirm or refute the danger of the detected object (step 503). After that, the system continues to patrol along the specified path (step 504).
(89) After some time defined by the class of fire danger and requirements for the detection rate (for example, every 15 minutes), the system once again starts travelling the path (step 506). Before this, the procedure of determining changes in binding of the video camera is performed according to the above method 400 (step 505).
(90) When a potentially dangerous object is detected, the orientation of the video camera to said object is determined and compared with the stored video camera orientation to an object(s) designated as non-dangerous in the past patrolling(s), adjusted based on the computed rotation of the video camera's native coordinate system (step 507). If the orientations coincide with the pre-specified accuracy defined by the system settings (e.g., up to 0.01°), then the system does not generate a new alarm for the operator, does not require validation, and continues to patrol (step 508). This method can significantly reduce the operator workload, increase the number of cameras per operator, and reduce the number of false alarms.
(91) The above procedure 400 is implemented, at least partially (i.e. at least, its step 408), in the form of a computer-implemented module. The computer-implemented module is preferably software that can be, in a local or distributed manner, stored in memory units and run by processors at the server 140, and/or the computer of the operator workstation 120, and/or the transmission-side equipment 111, depending on the design of the system 100. At the same time, the computer-implemented module may also be a hardware or firmware unit (e.g., a blade server in a blade server chassis or a single computing device), which should be obvious for those skilled in the art. The above software can be developed using an appropriate programming technology chosen from a plurality of available technologies.
(92) The invention has been described above with reference to specific embodiments thereof. Other embodiments of the invention that fall within the spirit and scope of the present invention, as disclosed herein, should be obvious to those skilled in the art. Accordingly, the invention should be regarded as limited in scope only by the following claims.