Method for calibrating a camera
10425566 · 2019-09-24
CPC classification: H04N23/633 (Electricity); G06T7/80 (Physics); H04N23/695 (Electricity)
International classification: H04N17/00 (Electricity); G06T7/80 (Physics); H04N3/22 (Electricity); G06F3/048 (Physics)
Abstract
A method calibrates a camera having an image sensor with a center position. An image view projected onto the image sensor by a lens is captured. At least one image view boundary of the captured image view is detected. A projection center position, corresponding to the center of the projected image view, is determined in at least one dimension. An offset, defined in at least one dimension, is determined between the projection center position and the sensor center position corresponding to the center of the image sensor capturing the projected image view. The image sensor is moved in relation to the lens based on the offset in order to arrive at a substantially zero offset, in at least one dimension, between the center of the image sensor and the center of the projected image view.
Claims
1. A method for calibrating offset between a selected position in an overview image and a center position in a detailed image, the overview image being captured by an overview camera and the detailed image being captured by a detailed view camera, these two cameras being different cameras, the method comprising: capturing the overview image by the overview camera; displaying the captured overview image with a predetermined overlay pattern that comprises a plurality of overlaid indicators, wherein each overlaid indicator represents a suggested selection area, to provide data for offset calibration; and for each suggested selection area: receiving a selection of a point in the suggested selection area at a feature in the overview image, transforming coordinates representing the point into a pan angle and a tilt angle for the detailed view camera to view a detailed view of the scene, the pan angle comprising an angle that the detailed view camera is panned with respect to a rotational axis of the detailed view camera, the tilt angle comprising an angle that the detailed view camera is tilted with respect to a tilt axis of the detailed view camera, panning and tilting the detailed view camera in accordance with the pan angle and the tilt angle, displaying a live detailed image captured using the detailed view camera after the detailed view camera has been panned and tilted in accordance with the pan angle and the tilt angle, while displaying the live detailed image adjusting the pan angle and the tilt angle, so that after adjusting the pan angle and the tilt angle, the feature is substantially in the center position of a detailed image captured by the detailed view camera to obtain an adjusted pan angle and an adjusted tilt angle, and storing information indicating a difference between the transformed pan and tilt angles and the adjusted pan and tilt angles as an offset value for calibrating offset between the selected position in the overview image and the center position in the live detailed image; wherein the method further comprises estimating functions associated with the stored offset values, the functions representing a pan error and a tilt error for the detailed view camera, wherein the steps of receiving, transforming, panning and tilting, displaying, adjusting, and storing are performed for each of the suggested selection areas.
2. The method of claim 1, further comprising adjusting orientation of the plurality of overlaid indicators in relation to the overview image.
3. The method according to claim 1, further comprising adjusting orientation of at least one of the plurality of overlaid indicators in relation to the overview image, by repositioning the at least one of the plurality of overlaid indicators by uniformly moving the at least one of the plurality of overlaid indicators closer to or further away from a center point of the overview image.
4. The method according to claim 2, wherein the step of adjusting orientation of the plurality of overlaid indicators is performed as a drag and drop operation on the overlay pattern.
5. The method according to claim 1, wherein the number of overlaid indicators is equal to or less than ten.
6. The method according to claim 1, wherein the number of indicators is equal to or higher than four.
7. The method according to claim 1, further comprising estimating the pan error function based on the pan angle transformed from the coordinates of selected points and the resulting offset and estimating the tilt error function based on the tilt angle transformed from the coordinates of the selected points and the resulting offset.
8. The method according to claim 2, wherein the step of adjusting orientation of the overlaid indicators includes rotation of the overlaid indicators around a center point of the overview image.
9. The method according to claim 3, wherein the step of adjusting orientation of the at least one of the plurality of overlaid indicators is performed as a drag and drop operation on the overlay pattern.
10. The method of claim 1, wherein the plurality of indicators are uniformly distributed around a center point of the overview image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Other features and advantages of the present invention will become apparent from the following detailed description of a presently preferred embodiment, with reference to the accompanying drawings.
(21) Further, in the figures, like reference characters designate like or corresponding parts throughout the several figures.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(22) The present invention relates to calibration of the positioning of a camera head in a monitoring camera. Referring to the accompanying figures, the monitoring camera 10 of one embodiment is a dome camera comprising a camera head 12 having a lens 18, a transparent dome cover 14, and a dome base 16.
(23) The dome camera further comprises a wide angle lens 20 mounted on the transparent dome cover 14 and extending from the dome cover 14 away from the camera head 12. The wide angle lens 20 is mounted in a direction making the optical axis 22 of the wide angle lens substantially coincide with a rotational axis 24 around which the camera head 12 is turned during panning, hereinafter referred to as the panning axis 24. The viewing angle of the wide angle lens 20 is wider than the viewing angle of the lens 18 in the camera head 12. In one embodiment, the viewing angle of the wide angle lens 20 is substantially wider than the viewing angle of the lens 18 of the camera head 12. The viewing angle of the wide angle lens may be more than 180 degrees; however, depending on the application, it may be smaller or larger. The angle should at least be selected to provide a reasonable overview image.
(24) Accordingly, the wide angle lens 20 is mounted so that the optical axis 26 of the camera head 12 is aligned with the optical axis 22 of the wide angle lens 20 when the camera head 12 is directed for capturing an image through the wide angle lens 20.
(25) Due to the positioning of the wide angle lens 20 and the fact that the camera head 12 is moveable, it is possible to capture overview images through the wide angle lens 20, as depicted in the figures.
(26) In one embodiment, the viewing angle or the focal length of the lens 18 of the camera head 12 may be selected so that the images captured by the camera head 12, when not captured through the wide angle lens 20, are adequate for providing relevant surveillance information. Examples of relevant surveillance information may for instance be the registration number of a car, an identifiable face of a person, detailed progress of an event, etc. The viewing angle of the wide angle lens 20 may be selected so that the camera head 12 may capture an image view of at least the floor of an entire room in which the monitoring camera is installed when directed to capture images through the wide angle lens 20.
(27) Alternatively, the viewing angle of the wide angle lens 20 is selected so that the camera head 12 will capture an overview image of the monitored area when the camera head 12 is directed to capture images through the wide angle lens 20. Then an operator or an image analysis process may identify events or features of interest in the overview and redirect the camera head 12 for direct capture of the scene including the event or feature of interest. Direct capture in the above sentence should be understood as capturing an image by means of the camera head 12 when not directed to capture images through the wide angle lens 20.
(28) In order to facilitate the understanding of the function of the camera, an example scenario will be described below. In this example scenario, a monitoring camera 10 according to one embodiment of the invention is installed in the ceiling of a room 30, see the figures.
(29) According to one embodiment, see the block diagram in the figures, the monitoring camera 10 comprises an image sensor 50, an image processing unit 52, a general processing unit 54, a volatile memory 56, a non-volatile memory 58, a network interface 60, a camera position controller 61, a panning motor 62 with a pan motor controller 64, and a tilting motor 66 with a tilt motor controller 68.
(30) The image sensor 50 may be any known image sensor able to capture light representing an image view and convert the light to electrical signals, which then may be processed into digital images and/or digital image streams by the image processing unit 52. Thus, the image sensor 50 may be arranged to capture visible light or infrared light, depending on the application of the camera. The image data from the image sensor 50 is sent to the image processing unit 52 via connection 70. The image processing unit 52 and the general processing unit 54 may be the same device, may be implemented as separate units on the same chip, or may be separate devices. Moreover, many functions described below as being performed in the image processing unit 52 may be performed in the general processing unit 54, and vice versa.
(31) The processing units 52, 54 are connected to the volatile memory 56, for use as a work memory, via, for instance, a bus 72. Moreover, the volatile memory 56 may be used as temporary storage for image data during processing, and the volatile memory 56 may therefore be connected to the image sensor 50 as well. The non-volatile memory 58 may store program code required for the processing units 52, 54 to operate, and may store settings and parameters that are to be preserved for a longer time period, even through power outages. The processing units 52, 54 are connected to the non-volatile memory 58 via, for instance, the bus 72.
(32) The network interface 60 includes an electrical interface to the network 74 to which the monitoring camera is to be connected. Further, the network interface 60 also includes those parts of the logic interface that are not implemented as code executed by the processing unit 54. The network 74 may be any known type of LAN (Local Area Network), WAN (Wide Area Network), or the Internet. The person skilled in the art is well aware of how to implement a network interface using any of a plurality of known implementations and protocols.
(33) The panning motor 62 and the tilting motor 66 are controlled by the processing unit 54 via their respective motor controllers 64, 68. The motor controllers are arranged to convert instructions from the camera position controller 61 into electrical signals compatible with the motors. The camera position controller 61 may be implemented by means of code stored in memory 58 or by logical circuitry. The tilt motor 66 may be arranged within or very close to a panable/tiltable camera head 12, whereas the pan motor 62 is in many cases arranged further away from the camera head 12, in particular where the joint for panning is the second joint, counted from the camera head 12. Control messages for pan and tilt may be received via the network 74 and processed by the processing unit 54 before being forwarded to the motor controllers 64, 68.
(34) Other implementations of the monitoring camera 10 are evident to the person skilled in the art.
(35) The above described function of redirecting the camera head from capturing overview images to capturing detailed images of positions indicated in an image captured in overview mode may be implemented by transforming the coordinates of the indicated position within the overview image to pan and tilt angles for positioning the camera in detailed mode to capture an image of the indicated position.
(36) Now referring to the figures, the transformation from a position x, y selected in the overview image to a pan angle and a tilt angle for the camera head may be performed as follows:
(37) d = √((x − x_c)² + (y − y_c)²) (Equation 1)
θ = (d/r) · (α/2) (Equation 2)
φ = arctan((y − y_c)/(x − x_c)) (Equation 3)
where x_c, y_c are the coordinates of the center of the circular projected overview image, r is the radius of the circular projected image, α is the total view angle of the wide angle lens, φ is the pan angle, and θ is the tilt angle.
(38) Using the above equations, the pan angle φ is simply calculated by applying trigonometry to the Cartesian coordinates, see Equation 3. The tilt angle θ, according to this specific embodiment, is an approximation in which the calculation treats the features of the captured image as being positioned on a spherical surface 702, thus arriving at Equation 2, in which the tilt angle is calculated by applying the ratio of the distance d to the distance r to the total view angle α. This embodiment is not necessarily limited to this transformation scheme; any similar transformation scheme may be used.
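For illustration, a minimal sketch of this transformation in Python, assuming the reconstructed Equations 1-3 above (the function name and parameters are illustrative, not taken from the patent):

```python
import math

def overview_to_pan_tilt(x, y, x_c, y_c, r, alpha):
    """Transform overview-image coordinates (x, y) into pan/tilt angles.

    x_c, y_c : center of the circular projected image view (pixels)
    r        : radius of the projected image circle (pixels)
    alpha    : total view angle of the wide angle lens (radians)
    """
    dx, dy = x - x_c, y - y_c
    d = math.hypot(dx, dy)              # Equation 1: radial distance from center
    tilt = (d / r) * (alpha / 2.0)      # Equation 2: spherical-surface approximation
    pan = math.atan2(dy, dx)            # Equation 3: pan angle from trigonometry
    return pan, tilt
```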
(39) Depending on the application of the camera, the level of precision required of the transformation may vary. However, even in applications not requiring very high precision, a user will expect the point selected in the overview to appear quite close to the center of the image in the detailed view. The result of a lower-quality transformation is illustrated in the figures.
(40) The offset, e_x and e_y, may result from the act of mounting the wide angle lens 20 on the dome 14, the mounting of the dome 14 on the dome base 16, the mounting of the camera head 12 in the housing, etc. The camera assembly 10 requires tight mechanical tolerances in order not to introduce offset problems. For example, offset problems may occur if a mounting screw for any one of the dome, the camera, the wide angle lens, etc. is overtightened. Hence, one common problem is that the optical axis 22 of the wide angle lens 20 and the optical axis 26 of the camera head 12 are offset, m_off, when the camera head is arranged with its optical axis 26 coinciding with the rotational axis 24 for panning the camera head, see the figures.
(41) One example of the image sensor not being effectively utilized is depicted in the figures.
(42) According to one embodiment, the problem of the image sensor not being effectively utilized is addressed by calibrating the camera system. The calibration in this embodiment includes the act of tilting the camera head 12 and moving the center point 1002 of the projected image view 1008 to a position on an imaginary center-line 1006 dividing the image sensor into two halves, see the figures.
(43) As shown in the flowchart in the figures, the calibration process captures an image of the view projected through the wide angle lens 20, step 1106, detects at least one boundary of the projected image view, and determines the center CCy of the projected image along the y-axis.
(44) This center CCy of the projected image is then related to the center CSy of the image sensor, also along the y-axis, and the process checks, step 1112, if the center CCy of the projected image along the y-axis is substantially centered on the image sensor, i.e., if CCy = CSy ± tolerance. In step 1114, a determination is made as to whether the center CCy of the projected image corresponds to the center CSy of the image sensor in the y-direction, or whether the counter C, counting the number of times this check and repositioning of the camera head has been performed, has reached n. According to one embodiment, the value of n is three. The value n defines how many times the repositioning and checking of the camera head is allowed to be performed. One reason for this parameter is to avoid the system getting stuck in a loop, for instance if the system for some reason is not able to determine the center. Hence, the value of n may be any number, as long as it is small enough not to result in a perception of deteriorated performance.
(45) If the check in step 1114 is false, the camera head is tilted based on the offset between the center CCy of the projected image along the y-axis and the center of the image sensor along the y-axis, step 1116. According to one embodiment, the angle α_tilterr that the camera head is to be tilted is calculated based on Equation 4 below:
(46) α_tilterr = (e_y / h) · α_totch (Equation 4)
where α_tilterr is the angle that the camera head should be tilted, α_totch is the total viewing angle of the camera head alone, e_y is the distance from the center CSy of the image sensor to the center CCy of the captured circular image, and h is the height of the image sensor.
Then the counter C is incremented, step 1118, and the process is returned to step 1106 for capturing a new image at the new position of the camera head and for checking if the new position is more accurate.
(47) If the check in step 1114 is true, then the position of the camera head is stored in memory in order to be used as the position to return the camera head to when entering overview mode, step 1120, i.e., the overview mode coordinates or angles are set for the camera head. Then, when the overview position for the camera head has been determined, the calibration process for positioning of the camera head in overview mode is concluded.
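The loop of steps 1106-1120 might be sketched as follows; `camera` and `find_projection_center` are hypothetical stand-ins for hardware-specific operations, and the tilt correction uses the reconstructed Equation 4:

```python
def center_projection_on_sensor(camera, cs_y, h, alpha_totch,
                                tolerance=2.0, n=3):
    """Tilt the camera head until the projected circle is vertically
    centered on the image sensor, giving up after n attempts.

    cs_y        : y-coordinate of the image sensor center (pixels)
    h           : height of the image sensor (pixels)
    alpha_totch : total viewing angle of the camera head alone (radians)
    """
    for _ in range(n):                              # counter C, step 1118
        image = camera.capture_image()              # step 1106
        cc_y = find_projection_center(image)        # center CCy of projection
        e_y = cc_y - cs_y
        if abs(e_y) <= tolerance:                   # step 1112: CCy = CSy +/- tol
            break
        alpha_tilterr = (e_y / h) * alpha_totch     # Equation 4
        camera.tilt(alpha_tilterr)                  # step 1116
    camera.store_overview_position()                # step 1120
```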
(48) According to one embodiment, the center of the projected image is found using a Hough transform, the projected image being substantially circular. Further information on the Hough transform may be found in Computer Vision by Linda Shapiro and George Stockman, Prentice-Hall, 2001.
(49) In order to apply the Hough transform for finding the parameters of the circle represented by the image projection, the boundary of the image projection has to be determined. This may be performed using any standard edge detection algorithm. When the edge of the projected image has been detected, the Hough transform is applied to the detected edge. The processing may be accelerated by defining a limited range of possible circle radii for the Hough transform analysis. This is possible because the size of the circle-shaped image projection is known quite accurately in advance. In one particular embodiment, the radius used in the Hough transform is in the range of 0.25-0.35 times the width of the image sensor. For a system having a sensor that is 1280 pixels wide, the radius is thus selected in the range of 320-448 pixels.
(50) In order to speed up the process even more while still getting reliable results, the radius may be selected from the range of 0.2921875-0.3125 times the width of the image sensor in pixels. For a system having a sensor that is 1280 pixels wide, the radius may then be selected in the range of 374-400 pixels.
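One possible implementation of this circle search, sketched with OpenCV's cv2.HoughCircles (the patent does not mandate a particular library; the parameter values other than the radius range are illustrative):

```python
import cv2

def find_projection_circle(gray_image):
    """Locate the circular image projection on the sensor.

    cv2.HoughCircles performs edge detection internally (param1 is the
    upper Canny threshold) before the Hough circle fit. The radius range
    is restricted to 0.2921875-0.3125 times the sensor width, as above.
    """
    width = gray_image.shape[1]
    circles = cv2.HoughCircles(
        gray_image, cv2.HOUGH_GRADIENT, dp=1,
        minDist=width,                      # we expect a single projection circle
        param1=150, param2=30,
        minRadius=int(0.2921875 * width),   # 374 px on a 1280 px wide sensor
        maxRadius=int(0.3125 * width),      # 400 px on a 1280 px wide sensor
    )
    if circles is None:
        return None
    cx, cy, radius = circles[0][0]          # strongest candidate first
    return cx, cy, radius
```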
(51) The above process of centering the image projection on the image sensor may be part of a calibration scheme for decreasing the offset between a selected position in the overview image and the center position in the detailed image view, see the discussion of the calibration scheme below.
(52) A calibration scheme for decreasing the offset is described in the flowchart in the figures. An overview image is captured by the camera and displayed with the predetermined overlay pattern of suggested selection areas. The operator selects a point at a feature in a suggested selection area, and the coordinates representing the point are transformed into a pan angle and a tilt angle for the camera head.
(53) Thereafter, the camera is panned and tilted in accordance with the pan angle and the tilt angle, entering the camera into detailed mode, and a detailed image is captured when the camera is in detailed mode, after the panning and tilting in accordance with the pan angle and the tilt angle is completed, step 1212.
(54) In a system not calibrated for transition errors, the transition to detailed mode seldom results in the selected feature occurring at the center of the captured image in the detailed image view. The operator may manually adjust the pan angle and tilt angle until the camera presents the feature selected in the overview image substantially at the center of the image captured in detailed mode, step 1214. The camera captures images frequently during this panning and tilting in order to present the result to the operator. Information indicating the difference between the manually adjusted pan and tilt angles and the pan and tilt angles resulting from the transformation of the position of the feature is then saved, step 1216, for later use. The steps of selecting features and registering the error in the transition from overview mode to detailed mode may then be repeated for at least two further selection areas, i.e., n=3 in step 1218. The value of n may be any number equal to or greater than three. The value of n should, however, not be too large, because the calibration would then become quite time consuming. In order to balance accuracy and speed, the value of n may be in the range of 4-10.
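The information saved in step 1216 could be held in a record like the following sketch (layout and names are illustrative; the patent does not prescribe a data structure):

```python
from dataclasses import dataclass

@dataclass
class CalibrationSample:
    """One stored offset measurement from step 1216."""
    pan_calc: float   # pan angle transformed from the overview coordinates
    tilt_calc: float  # tilt angle transformed from the overview coordinates
    pan_adj: float    # pan angle after the operator centered the feature
    tilt_adj: float   # tilt angle after the operator centered the feature

    @property
    def pan_error(self) -> float:
        return self.pan_adj - self.pan_calc

    @property
    def tilt_error(self) -> float:
        return self.tilt_adj - self.tilt_calc
```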
(55) When steps 1208-1216 have been performed n times, the process estimates a function φ_err(φ_calc), representing a pan error, in which φ_calc is the pan angle directly transformed from the overview coordinates, and a function θ_err(φ_calc), representing a tilt error, which also depends on the pan angle φ_calc. These functions φ_err(φ_calc) and θ_err(φ_calc) represent the errors based on the saved information, step 1220, and the calibration process is then ended. The estimated functions may now be used for compensation in transformations between positions in overview mode and pan and tilt angles in detailed mode, i.e., the functions may be applied in operations that include transforming coordinates from the overview image to pan and tilt angles.
(56) In the figures, the overview image is shown with the predetermined overlay pattern of suggested selection areas overlaid on it.
(57) According to another embodiment, see the flowchart in the figures, the suggested selection areas are generated iteratively: after a calibration position has been selected, step 1508, the process determines the largest angle α_max between two circumferentially neighbouring selected calibration positions and compares it with a predetermined threshold value α_thresh; if α_max is smaller than the threshold, no further suggested selection area is needed.
(58) On the other hand, if α_max is not smaller than the predetermined threshold value α_thresh, step 1520, then a suggested selection area is positioned, in the circumferential direction, between the two positions presenting the largest angle α_max between them. The new suggested selection area is positioned substantially at an angle of α_max/2 from either one of the two adjacent positions. Then the process returns to step 1508 for selection of another calibration position.
(60) Then the operator may select a new calibration position 1604b over a suitable feature. It should be noted that the suggested selection areas 1602a-d are not necessarily the only areas within which calibration positions may be selected; rather, they indicate suitable areas for selecting calibration positions. After selection of the second calibration position 1604b, the selection process continues and eventually presents a new suggested selection area 1602c to the operator. The position of this new suggested selection area 1602c is once more calculated as half the largest angle, α_max/2, between two circumferentially neighbouring selected calibration positions. In the figures, the largest angle α_max is the angle formed between the lines 1606 and 1612 through the previously selected calibration positions.
(61) The next suggested selection area 1602d is then positioned substantially at an angle marginally less than 90 degrees from the lines 1606 or 1612 forming the largest angle α_max. The operator may then select the next calibration position 1604d, and the process continues, unless this is the last calibration position to be recorded.
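The placement rule for the next suggested selection area can be sketched as below, representing each selected calibration position by its angle around the center of the overview image (names illustrative):

```python
import math

def next_suggested_angle(selected_angles):
    """Return the angle (radians) of the next suggested selection area:
    halfway across the largest gap, alpha_max, between circumferentially
    neighbouring selected calibration positions."""
    angles = sorted(a % (2 * math.pi) for a in selected_angles)
    if len(angles) == 1:
        return (angles[0] + math.pi) % (2 * math.pi)   # opposite side
    # Gap after each position, including the wrap-around gap.
    gaps = [(angles[(i + 1) % len(angles)] - angles[i]) % (2 * math.pi)
            for i in range(len(angles))]
    i_max = max(range(len(gaps)), key=gaps.__getitem__)
    return (angles[i_max] + gaps[i_max] / 2) % (2 * math.pi)
```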
(62) From the positional errors discovered above when directing the camera towards the calibration positions, the error functions may be estimated. The error functions φ_err(φ_calc) and θ_err(φ_calc) may be estimated by a trigonometric function or a polynomial. The advantage of using a polynomial estimation is that a linear least squares estimation (LLSE) may be used to find the coefficients of the functions. How to estimate a polynomial function using LLSE is well known to the person skilled in the art.
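A sketch of the polynomial estimation by linear least squares, here using numpy.polyfit and reusing the CalibrationSample records sketched above (the polynomial degree is an illustrative choice):

```python
import numpy as np

def estimate_error_functions(samples, degree=2):
    """Fit polynomial error functions pan_err(pan_calc) and
    tilt_err(pan_calc) to the saved calibration samples using LLSE."""
    pan_calc = np.array([s.pan_calc for s in samples])
    pan_coeffs = np.polyfit(pan_calc, [s.pan_error for s in samples], degree)
    tilt_coeffs = np.polyfit(pan_calc, [s.tilt_error for s in samples], degree)
    # np.poly1d wraps the coefficients as callable polynomial functions.
    return np.poly1d(pan_coeffs), np.poly1d(tilt_coeffs)
```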
(63) When the error functions have been estimated, an error estimate for the pan-tilt angles at any calculated pan-tilt position may be retrieved using the values produced by the transform from coordinates x, y to the pan-tilt position φ_calc, θ_calc. Accordingly, the error in each pan-tilt position may be calculated according to Equations 5 and 6:
e_φ = φ_err(φ_calc) (Equation 5)
e_θ = θ_err(φ_calc) (Equation 6)
Then, the error compensated pan-tilt position φ, θ may be calculated according to Equations 7 and 8:
φ = φ_calc + e_φ (Equation 7)
θ = θ_calc + e_θ (Equation 8)
(64) The resulting pan-tilt position, i.e., the one determined by adjusting the calculated position using the error functions, is of substantially higher accuracy than a position calculated without the error compensation.
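Continuing the sketches above, applying Equations 5-8 in the transformation could look like this:

```python
def compensated_pan_tilt(x, y, x_c, y_c, r, alpha, pan_err_fn, tilt_err_fn):
    """Transform overview coordinates to pan/tilt and add the
    estimated errors (Equations 5-8)."""
    pan_calc, tilt_calc = overview_to_pan_tilt(x, y, x_c, y_c, r, alpha)
    e_pan = pan_err_fn(pan_calc)                  # Equation 5
    e_tilt = tilt_err_fn(pan_calc)                # Equation 6
    return pan_calc + e_pan, tilt_calc + e_tilt   # Equations 7 and 8
```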
(65) Transformations from a pan-tilt position φ, θ to an overview position x, y may also be necessary. One case in which such a transformation is of interest is when an object in the detailed view has to be masked and the operator returns the camera to the overview mode. In this case the masked object should be masked in the overview mode as well as in the detailed mode. In order to achieve such a result, the position in the detailed mode has to be transformed to a position in the overview.
(66) According to one embodiment, the transformation from pan-tilt positions φ, θ to overview positions x, y is based on equations inverting the transformation described in relation to Equations 1-3. According to one embodiment, the transformation includes calculating the distance d in the overview, see the figures, and the offsets dx and dy from the center point:
(67) d = 2rθ/α (Equation 9)
dx = d · cos(φ) (Equation 10)
dy = d · sin(φ) (Equation 11)
(68) In order to turn the values of dx and dy into true overview coordinates, the process further adjusts for the coordinates of the center point x_c, y_c, used as reference for the pan angle φ, for the distance d, and for dx and dy. Moreover, in this transform, as in the transform from overview to detailed view, misalignment errors are present, and therefore error functions for x and y, respectively, have to be determined. In one embodiment, these functions are estimated from substantially the same calibration process and data as described above. More specifically, the calibration process may be identical to any of the calibration processes described in connection with the figures.
(69) Hence, the error in the transformation operation for a calibrated system may be expressed as:
e_x = x_err(φ) (Equation 12)
e_y = y_err(φ) (Equation 13)
(70) Accordingly, the transformed and compensated coordinates in overview mode may be expressed as:
x = x_c + dx + e_x (Equation 14)
y = y_c + dy + e_y (Equation 15)
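A sketch of the inverse transformation with compensation, following the reconstructed Equations 9-15 (the error functions x_err and y_err would be estimated from the same calibration data, as described above):

```python
import math

def pan_tilt_to_overview(pan, tilt, x_c, y_c, r, alpha, x_err_fn, y_err_fn):
    """Transform a pan/tilt position back into overview coordinates.

    Inverts Equations 1-3 (as Equations 9-11) and adds the estimated
    compensation of Equations 12-15.
    """
    d = 2.0 * r * tilt / alpha       # Equation 9: invert Equation 2
    dx = d * math.cos(pan)           # Equation 10
    dy = d * math.sin(pan)           # Equation 11
    e_x = x_err_fn(pan)              # Equation 12
    e_y = y_err_fn(pan)              # Equation 13
    return x_c + dx + e_x, y_c + dy + e_y   # Equations 14 and 15
```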
(71) As previously mentioned, this transformation from the detailed mode to the overview mode may be used for correctly positioning masks. For instance, masks for pan-tilt cameras are often defined in pan-tilt angles, and with a more precise transformation method these masks may simply be recalculated for the overview instead of being re-entered in overview coordinates, thus facilitating the setting up of the monitoring camera. Another use of this transformation is that it enables improvements to the interface; for example, when the camera is in detailed mode, as depicted in the figures, the part of the scene currently shown in the detailed view may be indicated in the overview image.