SHADOW REMOVAL METHOD AND SYSTEM FOR A MOBILE ROBOT CONTROL USING INDOOR SURVEILLANCE CAMERAS
20170249517 · 2017-08-31
Inventors
CPC classification
G06V20/58
PHYSICS
G06V20/52
PHYSICS
International classification
Abstract
A mobile robot to which the shadow removal method and system for surveillance camera-based mobile robot control according to the present invention is applied acquires images from two closely installed indoor surveillance cameras and performs shadow removal at an improved speed compared to conventional methods, so that it can recognize obstacles in the images and travel while avoiding them.
It is anticipated that through the mobile robot using the surveillance camera-based shadow removal method and system of the present invention, the practical use of an intelligent image surveillance system that can automatically analyze images and recognize a dangerous situation to take quick action may be accelerated.
Claims
1. A shadow removal system for surveillance camera-based mobile robot control comprising: a control unit for recognizing objects and obstacles in a surveillance camera image transmitted through a shadow removal technique, generating a movement path for traveling so that the objects and obstacles are avoided, and generating control signals according to the movement path to control components other than itself; a sensor unit for detecting a steering angle and a rotation of a driving motor and transmitting them to the control unit; a traveling unit for generating a driving force according to the control signals; a steering unit operated to move along the movement path according to the control signals; a communication unit for transmitting the image acquired from the surveillance camera to the control unit; and a power supply unit for supplying electric power to components other than itself.
2. The system of claim 1, further comprising a proximity sensor or a distance sensor for detecting the proximity of the obstacle or for measuring the distance to the obstacle.
3. The system of claim 1, wherein the traveling unit is provided with a BLDC motor and the steering unit is provided with a stepping motor.
4. The system of claim 1, wherein a communication protocol applied to the communication unit is a Zigbee wireless communication protocol.
5. The system of claim 1, wherein the communication unit is characterized in that a plurality of wireless communication protocols are used simultaneously.
6. The system of claim 1, wherein the traveling unit is provided with a wheel or a caterpillar or a leg for walking as a moving means.
7. The system of claim 1, wherein the sensor unit further comprises a camera or a visual sensor.
8. The system of claim 2, wherein the sensor unit further comprises a camera or a visual sensor.
9. The system of claim 3, wherein the sensor unit further comprises a camera or a visual sensor.
10. The system of claim 4, wherein the sensor unit further comprises a camera or a visual sensor.
11. The system of claim 5, wherein the sensor unit further comprises a camera or a visual sensor.
12. The system of claim 6, wherein the sensor unit further comprises a camera or a visual sensor.
13. A shadow removal method for surveillance camera-based mobile robot control comprising: detecting a binary image in which a background has been removed from an original image acquired from adjacently installed indoor surveillance cameras; detecting an object region in a resultant image of the binary image detecting step using an HSV color space; and detecting a final object region in which noise of the object region detected through shadow removal and threshold application is reduced.
14. The method of claim 13, wherein in the binary image detection step, the background is removed from the original image acquired from the indoor monitoring cameras using the following Equation 1,
15. The method of claim 14, wherein in the object region detecting step, the object region is detected using the S space and the V space in the HSV color space, as well as the following Equation 2 and Equation 3,
16. The method of claim 15, wherein the object region is compensated by applying Equations 4 and 5 to the image that has undergone the object region detection step,
17. The method of claim 16, wherein after a shadow mask image SM(x, y) is obtained by applying the following Equation 6 to the object region compensated image, a median filter is applied to the shadow mask image SM(x, y),
SM(x, y) = D(x, y) − ObjectArea(x, y) [Equation 6].
18. The method of claim 17, wherein in the final object region detection step, a shadow removed image is obtained through the following Equation 7, and then the background image is removed from the shadow removed image through the following Equation 8, and noise of the object region image is reduced by applying a threshold value,
OD(x, y) = |SR(x, y) − I_B(x, y)| ≥ th_OD [Equation 8].
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
[0041] Exemplary embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
[0042] Before describing the present invention in detail, the terms and words used herein should not be construed in their ordinary or dictionary sense. On the principle that an inventor may properly define the concepts of terms in order to describe his or her invention in the best way possible, these terms and words should be construed with meanings and concepts consistent with the technical idea of the present invention.
[0043] That is, the terms used herein are used only to describe preferred embodiments of the present invention and are not intended to limit the contents of the present invention; it should be noted that they are defined in consideration of the many possibilities of the present invention.
[0044] Also, in this specification, the singular forms “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise, and similarly it should be understood that even if they are expressed in plural they may include singular meaning.
[0045] Where a component is referred to as “comprising” another component throughout this specification, unless specified otherwise, this means the component does not exclude any other element and may further include other elements.
[0046] Further, when it is stated that an element is “inside or connected to” another element, this element may be directly connected to or installed in contact with the other element, or may be installed spaced apart from it by a predetermined distance; in the case where an element is installed spaced apart by a predetermined distance, a third component or means for fixing or connecting the element to the other element may be present, and it should be noted that the description of the third component or means may be omitted.
[0047] On the other hand, it should be understood that there is no third component or means when an element is described as being “directly coupled” or “directly connected” to another element.
[0048] Likewise, other expressions that describe the relationship between components, such as “between” and “directly between”, or “neighboring” and “directly adjacent to”, should be understood in the same spirit.
[0049] Further, in this specification, when terms such as “one surface”, “other surface”, “one side”, “other side”, “first”, “second” and such are used, it is to clearly distinguish one component from another, and it should be understood that the meaning of the component is not limited by such term.
[0050] It is also to be understood that terms related to positions such as “top”, “bottom”, “left”, “right” in this specification are used to indicate relative positions in the drawings for the respective components, and unless an absolute position is specified for these positions, it is not to be understood that these position-related terms refer to absolute positions.
[0051] Furthermore, in the specification of the present invention, the terms “part”, “unit”, “module”, “device” and the like mean a unit capable of handling one or more functions or operations, and may be implemented as hardware, software, or a combination of hardware and software.
[0052] In addition, in this specification, the same reference numerals are used for the respective constituent elements of the drawings, and the same constituent elements are denoted by the same reference numerals even if they are shown in different drawings, that is, the same reference numerals indicate the same components throughout this specification.
[0053] It is to be understood that the size, position, coupling relationships and such, of each component constituting the present invention in the accompanying drawings, may be partially exaggerated or reduced or omitted to be able to sufficiently clearly convey the scope of the invention or for convenience of describing, and therefore the proportion or scale thereof may not be rigorous.
[0054] Also, in the following description of the present invention, a detailed description of a configuration that is considered to unnecessarily obscure the gist of the present invention, for example, a known technology including the prior art, may be omitted.
[0056] As shown in the accompanying drawings, the shadow removal system 1000 for surveillance camera-based mobile robot control according to a preferred embodiment of the present invention includes a control unit 100, a sensor unit 200, a traveling unit 300, a steering unit 400, a communication unit 500, and a power unit 600.
[0057] The control unit 100 receives an image captured by a surveillance camera (not shown) through the communication unit 500, removes the shadow included in the image, and recognizes an obstacle included in the image to configure a traveling path that enables the shadow removal system 1000 to travel while avoiding the obstacle. To drive along this path while avoiding obstacles, it controls the sensor unit 200, the traveling unit 300, the steering unit 400, the communication unit 500, and the driving unit 450.
[0058] The sensor unit 200 includes an encoder (not shown) provided in the traveling unit 300 for counting the rotations of a driving motor 360, and a turret encoder 420 provided in the steering unit 400 for measuring the steering angle. As an additional component, the sensor unit 200 may further include a proximity sensor, a distance sensor, or the like.
[0059] The traveling unit 300 is the portion that serves as the moving means of the shadow removal system 1000, and in the preferred embodiment of the present invention it is configured to include a motor driver 310, a driving wheel 320, driving pulleys 330 and 350, a drive belt 340, and a drive motor 360 (see the accompanying drawings).
[0060] Here, the motor driver 310 controls the rotation direction and the rotation amount of the drive motor 360 under the control of the control unit 100, and
[0061] the driving wheel 320, which is directly coupled to the driving pulley 330, receives the driving force from the driving pulley 350, which is directly coupled to the driving motor 360, via the driving belt 340, thereby moving the shadow removal system 1000.
[0062] The steering unit 400 is configured to include a turret pulley 410, a turret encoder 420, and a turret motor (not shown) (see the accompanying drawings).
[0063] The driving unit 450 collectively refers to the traveling unit 300 and the steering unit 400, and is the portion that functions to move the shadow removal system 1000.
[0064] The communication unit 500 functions to receive an image taken by a surveillance camera (not shown) and to transfer it to the control unit 100. Here, it is preferable to use the Zigbee method as the communication protocol between the surveillance camera and the communication unit 500; however, a communication protocol other than Zigbee may be applied depending on the purpose of the embodiment, the embodiment environment, and the like.
[0065] The power unit 600 functions to supply power to the components other than itself, that is, the control unit 100, the sensor unit 200, the traveling unit 300, the steering unit 400, and the communication unit 500. The control unit 100, the sensor unit 200, the steering unit 400, and the communication unit 500 are directly connected to the power unit 600 to receive power. The traveling unit 300 is not directly connected to the power unit 600 and instead receives power through its connection to the steering unit 400. Of course, depending on the embodiment, the traveling unit 300 may be implemented to be directly connected to the power unit 600.
[0066] The shadow removal system 1000 includes a control unit 100, a sensor unit 200, a traveling unit 300, a steering unit 400, a communication unit 500, and a power unit 600, and preferably will be implemented in the form of a mobile robot that can travel while avoiding obstacles.
[0067] In addition, through a configuration further including a proximity sensor or a distance sensor on a side surface of the exterior of the shadow removal system for surveillance camera-based mobile robot control according to a preferred embodiment of the present invention, the distance between the shadow removal system or mobile robot and an obstacle may be measured, or the proximity of the shadow removal system or mobile robot to an obstacle may be detected.
[0068] Further, it is possible to increase the service life of the traveling unit and to improve the fine steering capability of the steering unit through the provision of a BLDC motor in the traveling unit and a stepping motor in the steering unit.
[0069] Further, in the embodiment of the present invention, the shadow removal system for surveillance camera-based mobile robot control according to the preferred embodiment of the present invention has been described and shown as using wheels as its means of moving, but the moving means of a robot to which the shadow removal method and system of the present invention are applied is not limited to wheels, and the shadow removal method and system of the present invention may be applied to robots using other moving means such as a caterpillar or legs for walking.
[0070] Further, although the control unit 100 is shown as not sending a control signal to the power unit 600 in the accompanying drawings, depending on the embodiment, the control unit 100 may also be implemented to control the power unit 600.
[0071] In addition, the active range of the shadow removal system and the mobile robot according to the present invention can be extended not only indoors but also outdoors through a configuration further including a camera or a visual sensor in the sensor unit 200.
[0072] In addition, regarding the lines connecting the respective components in the accompanying drawings:
[0073] the lines that do not have an arrow at either end are power lines, which represent lines for supplying power generated by the power unit to each component, and
[0074] the lines having an arrow at one end indicate the direction of transmission of a control signal or data generated in the control unit 100; of the two ends of such a line, the control signal or data is transmitted from the component connected to the end without the arrow to the component connected to the end with the arrow.
[0075] Hereinafter, a shadow removal method and system for surveillance camera-based mobile robot control according to a preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings.
[0076] The shadow removal method for surveillance camera-based mobile robot control according to a preferred embodiment of the present invention includes three steps: a background removed binary image detection step, an object region detection step, and a final object region detection step.
[0078] First, a background removed binary image D(x, y), in which the background has been removed from the original image, is obtained through the threshold operation of Equation 1.
[0079] I_B and I_C are respectively the background image and the original image from the indoor surveillance camera. Here, 50 was applied as the threshold value th, which was determined from previous experiments; according to objective and needs, a value other than 50 may be applied.
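As a concrete illustration, the background difference step above can be sketched in Python with NumPy. Equation 1 itself is not reproduced in the text, so the form D(x, y) = |I_C(x, y) − I_B(x, y)| ≥ th is an assumption inferred from the surrounding description and from the analogous Equation 8; the toy frames below are likewise hypothetical.

```python
import numpy as np

def background_removed_binary(i_c, i_b, th=50):
    # Assumed Equation 1: D(x, y) = 1 where |I_C - I_B| >= th, 0 elsewhere.
    # th = 50 follows the experimentally determined value in the text.
    diff = np.abs(i_c.astype(np.int16) - i_b.astype(np.int16))
    return (diff >= th).astype(np.uint8)

# Hypothetical 4x4 grayscale frames: uniform background, bright "object"
# occupying the lower-right corner.
background = np.full((4, 4), 100, dtype=np.uint8)
current = background.copy()
current[2:, 2:] = 200     # object pixels differ by 100 >= th

d = background_removed_binary(current, background)
print(d.sum())            # prints: 4
```

The cast to a signed type before subtraction avoids unsigned wraparound when the current frame is darker than the background.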
[0081] The second step of the shadow removal method for surveillance camera-based mobile robot control according to the preferred embodiment of the present invention is the step of detecting the object region using an HSV color space. In the HSV color space, the image is divided into corresponding H, S, and V channel images. Since the image from the H space is unstable, the S and V attributes are used for shadow detection.
[0082] F_C^S refers to the S channel of an input image, and F_B^S refers to the S channel of the background image.
[0083] F_C^V refers to the V channel of the input image, and F_B^V refers to the V channel of the background image.
[0084] The SS(x, y) of Equation 2 is a binary image obtained through the threshold operation of Equation 2 on the S channels of the input image and the background image.
[0085] For the value region of the HSV color space, a binary image is obtained through the threshold operation of Equation 3 on the ratio between the background image and the input image in the value region.
[0086] The images of the object regions of the S and V channels are as shown in the accompanying drawings.
[0087] Here, the threshold value th_S of Equation 2 is used for the S-channel image.
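The S- and V-channel tests can be sketched as follows. Since Equations 2 and 3 are not reproduced in the text, the exact forms below are assumptions based on the description (a difference threshold on the S channel and a ratio threshold on the V channel), and the threshold values th_s and th_v are illustrative only.

```python
import numpy as np

def s_object_mask(f_c_s, f_b_s, th_s=50):
    # Assumed Equation 2: flag pixels whose saturation differs strongly
    # from the background S channel.
    diff = np.abs(f_c_s.astype(np.int16) - f_b_s.astype(np.int16))
    return (diff >= th_s).astype(np.uint8)

def v_object_mask(f_c_v, f_b_v, th_v=0.7):
    # Assumed Equation 3: threshold on the ratio between input and
    # background V channels; pixels whose ratio falls below th_v are
    # flagged. The max() guards against division by zero.
    ratio = f_c_v.astype(np.float32) / np.maximum(f_b_v.astype(np.float32), 1.0)
    return (ratio < th_v).astype(np.uint8)

# Toy 2x2 channels: one deviating pixel in S, one darkened pixel in V.
f_b_s = np.full((2, 2), 100, dtype=np.uint8)
f_c_s = f_b_s.copy(); f_c_s[0, 0] = 200
f_b_v = np.full((2, 2), 200, dtype=np.uint8)
f_c_v = f_b_v.copy(); f_c_v[1, 1] = 100

print(s_object_mask(f_c_s, f_b_s).sum(), v_object_mask(f_c_v, f_b_v).sum())
# prints: 1 1
```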
[0089] Equation 4 and Equation 5 are equations for compensating the object region, and applying Equation 4 and Equation 5 yields the result shown in the accompanying drawings.
[0090] A shadow mask image SM(x, y) is obtained, as shown in the following Equation 6, by removing the object region image of Equation 5 from D(x, y) of Equation 1.
SM(x, y) = D(x, y) − ObjectArea(x, y) [Equation 6]
[0092] The shadow mask image is in a state where the object image has been removed but noise is still included. To remove this noise element and obtain a stable shadow region, a 7×7 median filter is applied to the shadow mask image, yielding a refined shadow mask image as shown in the accompanying drawings.
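Equation 6 and the 7×7 median filtering step can be sketched as below. For binary masks, the subtraction in Equation 6 amounts to clearing object pixels from the background-difference image; the median filter here is a minimal stand-in for the 7×7 median the text applies, and the toy masks are hypothetical.

```python
import numpy as np

def shadow_mask(d, object_area):
    # Equation 6: SM(x, y) = D(x, y) - ObjectArea(x, y). For binary masks
    # this removes object pixels from the background-difference image,
    # leaving shadow (and residual noise) pixels.
    return np.clip(d.astype(np.int16) - object_area.astype(np.int16), 0, 1).astype(np.uint8)

def median_filter(img, k=7):
    # Minimal k x k median filter with zero padding; a majority of ones
    # in the window is needed for a pixel to survive.
    p = k // 2
    padded = np.pad(img, p, mode="constant")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

# Toy masks: D flags object and shadow; ObjectArea flags the object only.
d = np.zeros((5, 5), dtype=np.uint8)
d[1:3, 1:3] = 1           # object pixels
d[3, 3] = 1               # shadow pixel
obj = np.zeros_like(d)
obj[1:3, 1:3] = 1
sm = shadow_mask(d, obj)  # only the shadow pixel remains

# The median filter keeps large connected regions and erases lone specks:
sm_demo = np.zeros((15, 15), dtype=np.uint8)
sm_demo[4:11, 4:11] = 1   # 7x7 shadow region: its center survives
sm_demo[0, 14] = 1        # isolated noise pixel: removed
clean = median_filter(sm_demo)
```

Note that a 7×7 median erodes the borders of small regions as well as removing specks, which is why the text describes the result as a stable shadow region rather than a pixel-exact one.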
[0094] The third step is the shadow removal step, in which the input image from the surveillance camera is restored: a finally reliable object region is detected, and shadow removal is performed through the following Equation 7.
[0095] I_B and I_C are respectively grayscale images of the background image and the original image obtained from the surveillance camera.
[0096] The following Equation 8 is an equation for minimizing the noise of the detected object region by applying a threshold value.
OD(x, y) = |SR(x, y) − I_B(x, y)| ≥ th_OD [Equation 8]
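The final step can be sketched as below. Equation 7 is not reproduced in the text, so the form used here (shadow pixels flagged by the mask SM are replaced with the background value to obtain the shadow removed image SR) is an assumption; Equation 8 follows the form given above, with the value of th_OD assumed.

```python
import numpy as np

def remove_shadow(i_c, i_b, sm):
    # Assumed Equation 7: pixels flagged as shadow by the mask SM are
    # replaced with the corresponding background value, yielding SR.
    return np.where(sm.astype(bool), i_b, i_c)

def final_object_region(sr, i_b, th_od=50):
    # Equation 8: OD(x, y) = |SR(x, y) - I_B(x, y)| >= th_OD.
    # The threshold th_OD (value assumed here) suppresses residual noise.
    diff = np.abs(sr.astype(np.int16) - i_b.astype(np.int16))
    return (diff >= th_od).astype(np.uint8)

# Toy grayscale frames: one bright object pixel, one shadowed pixel.
i_b = np.full((3, 3), 120, dtype=np.uint8)
i_c = i_b.copy()
i_c[0, 0] = 250   # object
i_c[2, 2] = 60    # shadow
sm = np.zeros((3, 3), dtype=np.uint8)
sm[2, 2] = 1      # shadow mask from the previous step

sr = remove_shadow(i_c, i_b, sm)
od = final_object_region(sr, i_b)
print(od.sum())   # prints: 1 (only the object pixel survives)
```

After this step the shadow pixel matches the background exactly, so the threshold in Equation 8 retains only the genuine object region.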
[0099] Further, according to a shadow removal method and system for surveillance camera-based mobile robot control according to a preferred embodiment of the present invention, the time required for one cycle is about 55 milliseconds, which corresponds to about 18 frames per second.
[0100] The experiment shows that the shadow removal method for controlling a mobile robot using two indoor surveillance cameras according to a preferred embodiment of the present invention is effective. The floor area observed by the two neighboring cameras is 2.2 meters wide and 6 meters long.
[0102] In addition, the surveillance camera has a resolution of 320×240 pixels and three RGB channels, and the neighboring camera is used for position recognition of the mobile robot. However, it should be understood by those skilled in the art that the specifications of surveillance cameras that can be used in the practice of the present invention are not necessarily limited to the above.
[0104] At this time, shadows are removed through the shadow removal method for surveillance camera-based mobile robot control described above, so that the mobile robot can recognize obstacles and travel while avoiding them.
[0106] The error margin between a path planned by the mobile robot to which the shadow removal method and system for surveillance camera-based mobile robot control according to a preferred embodiment of the present invention are applied and the path that the mobile robot actually travels is ±5 centimeters.
[0107] While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, many alternatives, modifications, and variations will be apparent to those skilled in the art. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
[0108] In addition, the present invention can be embodied in various forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art; the present disclosure will be defined only by the appended claims.