Moving robot capable of recognizing carpet
12259732 · 2025-03-25
Assignee
Inventors
CPC classification
G06V20/58
PHYSICS
A47L2201/06
HUMAN NECESSITIES
A47L11/4061
HUMAN NECESSITIES
A47L9/009
HUMAN NECESSITIES
G06T7/521
PHYSICS
A47L9/2852
HUMAN NECESSITIES
A47L2201/04
HUMAN NECESSITIES
A47L11/4011
HUMAN NECESSITIES
International classification
G05D1/246
PHYSICS
A47L9/00
HUMAN NECESSITIES
A47L9/28
HUMAN NECESSITIES
G05D1/00
PHYSICS
Abstract
There is provided a moving robot including a light projector, an image sensor and a processing unit. The light projector projects a vertical light segment and a horizontal light segment toward a moving direction. The image sensor captures, toward the moving direction, an image frame containing a first light segment image associated with the vertical light segment and a second light segment image associated with the horizontal light segment. The processing unit recognizes a plush carpet in the moving direction when a vibration intensity of the second light segment image is higher than a predetermined threshold, and an obstacle height calculated according to the first light segment image is larger than a height threshold.
Claims
1. A moving robot, comprising: a light projector, configured to project a vertical light segment toward a moving direction; an image sensor, configured to capture an image frame containing a light segment image associated with the vertical light segment; and a processing unit, configured to search a broken point which separates the light segment image into two parts in the image frame, calculate a transverse distance between the two parts of the light segment image, obtain an obstacle height corresponding to the transverse distance, and output a flag signal indicating a plush carpet being confirmed in the moving direction when the obstacle height is smaller than an upper threshold and larger than a lower threshold.
2. The moving robot as claimed in claim 1, wherein the processing unit is further configured to compare a width of the light segment image with a width threshold in confirming the plush carpet.
3. The moving robot as claimed in claim 1, wherein the processing unit is further configured to calculate a segment height of one of the two parts of the light segment image closer to an edge of the image frame, and obtain a distance from the plush carpet corresponding to the segment height.
4. The moving robot as claimed in claim 3, wherein the processing unit is further configured to increase a moving speed of the moving robot when the distance is smaller than or equal to a first predetermined distance.
5. The moving robot as claimed in claim 3, wherein the processing unit is further configured to increase a suction force when the distance is smaller than or equal to a second predetermined distance.
6. The moving robot as claimed in claim 3, wherein the processing unit is further configured to deactivate a brush rotation when the distance is smaller than or equal to a third predetermined distance.
7. The moving robot as claimed in claim 1, further comprising another light projector configured to project another vertical light segment toward the moving direction, wherein the image sensor is configured to capture the image frame containing another light segment image associated with the another vertical light segment, and the processing unit is configured to confirm the plush carpet in the moving direction further according to another obstacle height calculated according to the another light segment image.
8. A moving robot, comprising: a first light projector, configured to project a vertical light segment toward a moving direction; a third light projector, configured to project a horizontal light segment toward the moving direction; an image sensor, configured to capture an image frame containing a first light segment image associated with the vertical light segment and a second light segment image associated with the horizontal light segment; and a processing unit, configured to recognize that there is no plush carpet in the moving direction when a vibration intensity of the second light segment image is not higher than a predetermined threshold.
9. The moving robot as claimed in claim 8, wherein the processing unit is further configured to calculate an obstacle height by searching a broken point which separates the first light segment image into two parts in the image frame, calculating a transverse distance between the two parts of the first light segment image, and obtaining the obstacle height corresponding to the transverse distance, and recognize the plush carpet according to the obstacle height.
10. The moving robot as claimed in claim 8, wherein the processing unit is configured to calculate the vibration intensity by comparing an amplitude of the second light segment image with a baseline in the image frame, and counting a number of times that the amplitude of the second light segment image changes from above the baseline to below the baseline as well as from below the baseline to above the baseline as the vibration intensity.
11. The moving robot as claimed in claim 8, wherein the processing unit is configured to calculate a standard deviation of the second light segment image as the vibration intensity.
12. The moving robot as claimed in claim 8, wherein the processing unit is configured to calculate the vibration intensity by comparing an amplitude of the second light segment image with a first threshold above a baseline in the image frame as well as a second threshold below the baseline in the image frame, and counting a number of times that the amplitude of the second light segment image changes from above the first threshold to below the second threshold as well as from below the second threshold to above the first threshold as the vibration intensity.
13. The moving robot as claimed in claim 8, wherein the processing unit is configured to recognize the plush carpet further by comparing a width of the first light segment image with a width threshold.
14. The moving robot as claimed in claim 8, wherein the processing unit is further configured to search a broken point which separates the first light segment image into two parts in the image frame, calculate a segment height of one of the two parts of the first light segment image closer to an edge of the image frame, and obtain a distance from the plush carpet corresponding to the segment height.
15. The moving robot as claimed in claim 14, wherein the processing unit is further configured to increase a moving speed of the moving robot when the distance is smaller than or equal to a first predetermined distance.
16. The moving robot as claimed in claim 14, wherein the processing unit is further configured to increase a suction force when the distance is smaller than or equal to a second predetermined distance.
17. The moving robot as claimed in claim 14, wherein the processing unit is further configured to deactivate a brush rotation when the distance is smaller than or equal to a third predetermined distance.
18. The moving robot as claimed in claim 9, further comprising a second light projector configured to project another vertical light segment toward the moving direction, wherein the image sensor is configured to capture the image frame containing another light segment image associated with the another vertical light segment, and the processing unit is configured to recognize the plush carpet further according to another obstacle height calculated according to the another light segment image.
19. The moving robot as claimed in claim 18, wherein the processing unit is configured to recognize the plush carpet further according to a width of the another light segment image.
20. The moving robot as claimed in claim 8, wherein the processing unit is further configured to search a broken point which separates the first light segment image into two parts in the image frame, and recognize that there is no plush carpet in the moving direction upon no broken point in the first light segment image being found.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Other objects, advantages, and novel features of the present disclosure will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings.
DETAILED DESCRIPTION OF THE EMBODIMENT
(16) It should be noted that, wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
(17) The moving robot of some embodiments of the present disclosure is to accurately calculate a step distance (also referred to as a cliff distance) in front of a moving direction so as to prevent falling and to accurately construct a working map on various operating surfaces.
(18) The moving robot of other embodiments of the present disclosure is to recognize a plush carpet in front by an optical method.
(19) Referring to
(20) The moving robot 100 includes at least one light projector (e.g.,
(21) It is appreciated that although
(22) Each of the first light projector 1011 and the second light projector 1012 includes a light source and a diffractive optical element (DOE). The light source is preferably a coherent light source emitting light of an identifiable spectrum, e.g., an infrared laser diode, but is not limited thereto; a partially coherent or non-coherent light source is also applicable. After the light emitted by the light source passes through the diffractive optical element, a linear light segment (i.e., having a length much larger than its width) is formed.
(23) The first light projector 1011 and the second light projector 1012 respectively project a vertical (corresponding to an operating surface S shown in
(24) The image sensor 103 is a CCD image sensor, a CMOS image sensor or other sensors for converting light energy to electrical signals. The image sensor 103 has a plurality of pixels arranged in a matrix and operates at a predetermined frame rate toward the moving direction. The image sensor 103 captures, with a field of view FOV, an image frame IF containing light segment images IL associated with the light segments LS1 and LS2 as shown in
(25) The processing unit 105 is, for example, a digital signal processor (DSP), a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC), and is electrically connected to the at least one light projector and the image sensor 103 for controlling the light source to emit light corresponding to the image capturing of the image sensor 103. The processing unit 105 receives the image frame IF outputted by the image sensor 103, and calculates an image feature and a step distance Dr according to the light segment images IL in the image frame IF associated with the light segments LS1 and LS2. The image feature is used to identify whether a current operating surface is a flat surface so as to determine a proper calculation method accordingly. A corresponding step distance Dr is obtainable using a look up table or an algorithm once a vertical length (referred to as a segment height below) of a light segment image IL in the image frame IF is obtained.
(26) For example referring to
(27) In some aspects, the memory of the moving robot 100 pre-stores multiple different width thresholds corresponding to different operating surfaces. The processing unit (105 or an external processing unit outside the sensing chip 1035) identifies a type of an operating surface according to the width W1 of the light segment image IL in the image frame IF. The memory stores the relationship between the segment height Dd and the step distance Dr corresponding to different types of the operating surface.
(28) Referring to
(29) One way to identify the broken line is shown in
(30) In other words, the image feature of the present disclosure includes the segment width W1 of the light segment image IL (as shown in
(31) In the present disclosure, the processing unit 105 outputs a flag signal FS according to the image feature to indicate the type of an operating surface, select a suitable distance calculation algorithm, indicate a confidence level of an image frame and/or indicate a confidence level of an outputted step distance. For example, the sensing chip 1035 has an independent leg (not shown) for exclusively outputting the flag signal FS. Said independent means that the leg is used only for outputting the flag signal FS without outputting other signals (e.g., not outputting the step distance Dr obtained by the processing unit 105).
(32) Referring to
(33) Step S61: The processing unit 105 controls the light projectors 1011 and 1012 to emit light and controls the image sensor 103 to output an image frame IF containing light segment images IL as shown in
(34) Step S63: After receiving the image frame IF from the image sensor 103, the processing unit 105 identifies whether the light segment image IL in the image frame IF is too wide or is a broken line (i.e., having a predetermined feature) so as to identify a confidence level of the image frame IF or the obtained step distance Dr.
(35) S65-S67: These two steps have several implementations. In one aspect, no matter whether the image frame IF includes a predetermined feature, the processing unit 105 firstly calculates and outputs a step distance Dr. The processing unit 105 also outputs a digital signal having at least one bit to indicate whether the obtained step distance Dr is confident or not. The external processing unit (e.g., the CPU or MCU of the moving robot 100) outside the sensing chip 1035 determines whether to use the step distance Dr calculated by the processing unit 105. If the step distance Dr is not used (low confidence level), the moving robot 100 adopts another algorithm or uses a look up table to determine a current step distance. As mentioned above, different look up tables are constructed corresponding to different types of the operating surface.
(36) In another aspect, although the processing unit 105 is arranged to always calculate a step distance Dr, the calculated step distance Dr is outputted only when a high confidence level is identified. When identifying a low confidence level, the processing unit 105 does not output the calculated step distance Dr.
(37) In an alternative aspect, the processing unit 105 calculates a step distance Dr only when a high confidence level is identified. The processing unit 105 does not calculate the step distance Dr when a low confidence level is identified.
(38) As mentioned above, the processing unit 105 identifies a segment height (e.g., Dd in
(39) In one non-limiting aspect, the moving robot 100 further includes a memory (outside the sensing chip 1035) for previously storing a look up table containing the relationship between multiple segment heights of the light segment image IL and multiple step distances regarding special operating surfaces. When the sensing chip 1035 outputs a flag signal FS indicating a low confidence level (i.e. indicating a special operating surface), the external processing unit outside the sensing chip 1035 (e.g., 109 shown in
(40) In another non-limiting aspect, the moving robot 100 further includes a second sensor (e.g., 107 in
(41) That is, the concept of the present disclosure is that although the processing unit 105 may calculate a step distance Dr in all conditions, the calculated step distance Dr deviates from an actual distance and causes an error when the moving robot 100 is operating on a special surface. Therefore, the processing unit 105 further identifies a segment feature of the light segment image IL to determine whether to calculate a correct current step distance in other ways, e.g., using another sensor or a predetermined look up table.
(42) In the present disclosure, before calculating the image feature of the light segment image IL, the processing unit 105 further digitizes the image frame IF based on a digitizing threshold (e.g., setting the pixel position having a gray value larger than the digitizing threshold as 1 and setting the pixel position having a gray value smaller than the digitizing threshold as 0, or vice versa) to facilitate the calculation of the light segment image IL, image feature and the segment height Dd, e.g., a region that is set as 1 is identified as a light segment image IL.
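The digitizing step described in paragraph (42) can be sketched as follows. This is a minimal illustration, not the actual firmware: the frame contents and the threshold value are made up, and the frame is a plain list of rows of gray values.

```python
# Binarization sketch for paragraph (42): pixels brighter than the
# digitizing threshold become 1, all others 0, so that connected runs
# of 1s can be treated as the light segment image IL.

def digitize_frame(frame, threshold):
    """Binarize a 2D gray-level frame (list of rows) against `threshold`."""
    return [[1 if px > threshold else 0 for px in row] for row in frame]

frame = [
    [10, 12, 200, 11],
    [ 9, 11, 210, 10],
    [ 8, 13, 205, 12],
]
binary = digitize_frame(frame, threshold=100)
# The bright third column is isolated as the light segment image.
```

The inverse convention mentioned in the text (bright pixels set to 0) only swaps the two output values.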
(43) In addition, in an aspect using two light projectors and before calculating the image feature of the light segment images IL, the processing unit 105 further divides the image frame IF into a left image frame and a right image frame respectively containing one light segment image IL. The processing unit 105 further calculates one image feature and one step distance of the one light segment image IL respectively in the left image frame and the right image frame. That is, the processing unit 105 calculates two image features and two step distances according to one image frame IF. The processing unit 105 identifies the confidence level according to the two image features. The processing unit 105 outputs two step distances or one average of the two distances according to different applications.
(44) Because the gray level of a front end of the light segment image IL in the image frame IF changes with noise and the environment, the calculated step distance jitters accordingly. To solve this problem, the moving robot 100 of the present disclosure further has a memory storing a gray level threshold TH2 for identifying a pixel position of the front end of the light segment image IL. The gray level threshold TH2 is a predetermined fixed value, or a varied value determined according to gray value sums of one pixel column or one image frame not containing the light segment image IL. In one aspect, the gray level threshold TH2 is determined according to white noise in the image frame.
(45) When the image sensor 103 outputs an image frame IF as shown in
(46) It is seen from
(47) It should be mentioned that although
(48) Similarly, when two light projectors are used, the processing unit 105 respectively performs the pixel interpolation on pixels corresponding to two front ends of two light segment images IL to respectively obtain two segment heights Dd having a sub-pixel level to accordingly calculate two corresponding step distances Dr.
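One plausible form of the pixel interpolation mentioned above is linear interpolation between the two pixels straddling the gray level threshold TH2; the exact interpolation used by the disclosure is not specified here, so this is a sketch under that assumption, with illustrative gray values.

```python
def subpixel_front_end(column, th2):
    """Locate where the gray level falls below th2 along a pixel column,
    with linear interpolation between the two pixels straddling th2,
    yielding a sub-pixel front-end position of the light segment image."""
    for i in range(len(column) - 1):
        a, b = column[i], column[i + 1]
        if a >= th2 > b:
            # Fractional offset where the intensity crosses th2.
            return i + (a - th2) / (a - b)
    return float(len(column))  # no crossing: segment spans the column

column = [200, 180, 150, 60, 20]   # gray values along the segment
pos = subpixel_front_end(column, th2=100)
# pos falls between pixel indices 2 and 3, i.e. a sub-pixel segment height.
```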
(49) The irregular surface herein includes different types of surfaces such as the plush carpet and the mosaic tile. The moving robot 100 is arranged to operate in a different way, e.g., speeding up, increasing a suction force and/or deactivating a brush rotation (or closing a water valve), if a plush carpet is detected in front. Therefore, in an alternative embodiment of the present disclosure, the processing unit (e.g., 105 or 109) further identifies an obstacle height according to the vertical light segment image so as to distinguish a plush carpet from other irregular surfaces (e.g., mosaic tile), since the plush carpet is generally higher than the mosaic tile.
(50) In this alternative embodiment, the light projector 1011 projects a vertical light segment LS1 toward a moving direction as shown in
(51) The processing unit (e.g., 105 or 109) recognizes (e.g., outputting a flag signal FS) a plush carpet in the moving direction when a width W1 of the light segment image IL1 is wider than a width threshold, and an obstacle height calculated according to the light segment image IL1 is larger than a height threshold. As mentioned above, the light segment image IL1 becomes wider when the vertical light segment LS1 is projected on the plush carpet.
(52) The width threshold and the height threshold are previously stored and recorded in the moving robot 100, e.g., in a memory thereof.
(53) In one aspect, the processing unit (e.g., 105 or 109) calculates the obstacle height by searching a broken point which separates the light segment image IL1 into two parts, e.g.,
(54) In this alternative embodiment, the moving robot 100 stores and records the relationship between transverse distances W2 and the actual obstacle heights such that when the processing unit obtains a transverse distance W2, a corresponding obstacle height is obtainable based on the relationship. Preferably, the relationship is previously constructed (e.g., based on the arrangement and light projecting angle of the light projector 1011 as well as a field of view of the image sensor 103) and stored in the moving robot 100 before shipment.
(55) Preferably, the obstacle height is within a predetermined range for specifying the plush carpet. That is, an obstacle height which is too large or too small is not recognized as the plush carpet. For example, the processing unit identifies whether the obtained obstacle height is smaller than an upper threshold and larger than a lower threshold as a condition to confirm the plush carpet.
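The broken-point search, transverse-distance lookup and height-range check of paragraphs (53) to (55) can be sketched as follows. This is a minimal illustration, not the firmware: the segment image is reduced to one lit column per row (None marks the broken point), and the W2-to-height table uses made-up calibration values.

```python
def recognize_plush_carpet(segment_cols, height_table, lower, upper):
    """Sketch of the recognition flow: find the broken point, measure the
    transverse distance W2 between the two parts, look up the obstacle
    height, and confirm the carpet only within a height range."""
    # Search the broken point that separates the segment into two parts.
    breaks = [r for r, c in enumerate(segment_cols) if c is None]
    if not breaks:
        return False  # no broken point: no obstacle, hence no plush carpet
    b = breaks[0]
    # Transverse distance W2 between the two parts of the segment image.
    w2 = abs(segment_cols[b - 1] - segment_cols[b + 1])
    # Obstacle height corresponding to W2, from the pre-stored relationship.
    obstacle_height = height_table.get(w2, 0)
    return lower < obstacle_height < upper

height_table = {1: 5, 2: 12, 3: 25}      # W2 (pixels) -> height (mm), made up
cols = [10, 10, 10, None, 12, 12, 12]    # broken segment with W2 = 2
carpet = recognize_plush_carpet(cols, height_table, lower=8, upper=20)
```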
(56) To improve the user experience, the processing unit (e.g., 105 or 109) further calculates a distance from the obstacle (e.g., the plush carpet) to perform the corresponding control. In one aspect, the processing unit searches a broken point which separates the light segment image IL1 into two parts as shown in
(57) In the case that the light segment image IL1 does not have the broken point, the processing unit confirms there is no plush carpet in front.
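The distance estimation of paragraphs (56) and (57) can be sketched in the same representation. The table values are invented, and taking the part below the break as the one "closer to an edge of the image frame" is an assumption (the bottom of the frame imaging the area nearest the robot).

```python
def distance_from_carpet(segment_cols, distance_table):
    """Return the distance looked up from the segment height of the part
    of the light segment image closer to the frame edge, or None when no
    broken point is found (no plush carpet in front)."""
    breaks = [r for r, c in enumerate(segment_cols) if c is None]
    if not breaks:
        return None
    # Rows below the break form the part assumed closer to the frame edge.
    segment_height = len(segment_cols) - (breaks[-1] + 1)
    return distance_table.get(segment_height)

distance_table = {2: 30, 3: 20, 4: 12}   # segment height (px) -> cm, made up
cols = [10, 10, None, 12, 12, 12]        # part near the edge is 3 rows high
dist = distance_from_carpet(cols, distance_table)
```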
(58) In one aspect, the processing unit (e.g., 105 or 109) further increases a moving speed of the moving robot 100 when the distance from the plush carpet is smaller than or equal to a first predetermined distance so as to be able to climb up the plush carpet.
(59) In another aspect, the processing unit (e.g., 105 or 109) increases a suction force (and turns off a sprinkler if included) when the distance is smaller than or equal to a second predetermined distance such that the moving robot 100 is suitable to operate on the plush carpet.
(60) In an alternative aspect, the processing unit (e.g., 105 or 109) deactivates a brush rotation (used to collect dust and fragments) when the distance is smaller than or equal to a third predetermined distance.
(61) In the present disclosure, the first predetermined distance, the second predetermined distance and the third predetermined distance are identical to or different from each other.
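The three distance-triggered controls of paragraphs (58) to (61) amount to a simple threshold cascade; a sketch follows, with all distances being illustrative values only.

```python
def carpet_actions(distance, d1, d2, d3):
    """Collect the controls triggered as the robot approaches the carpet;
    the three predetermined distances may be identical or different."""
    actions = []
    if distance <= d1:
        actions.append("increase moving speed")   # to climb up the carpet
    if distance <= d2:
        actions.append("increase suction force")
    if distance <= d3:
        actions.append("deactivate brush rotation")
    return actions

actions = carpet_actions(15, d1=20, d2=10, d3=10)  # only speed-up triggers
```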
(62) As shown in
(63) The method of calculating the width of said another light segment image IL2 and the obstacle height according to said another light segment image IL2 is identical to that of calculating the width of the light segment image IL1 and the obstacle height according to the light segment image IL1, and thus details thereof are not repeated herein.
(64) As mentioned above, the second light projector 1012 is arranged for increasing the detection range of the moving robot 100.
(65) Please refer to
(66) That is, the moving robot 100 includes a first light projector 1011, a third light projector 1013, an image sensor 103 and a processing unit (e.g., 105 or 109).
(67) The first light projector 1011 projects a vertical light segment LS1 toward a moving direction. The third light projector 1013 projects a horizontal light segment toward the moving direction. The third light projector 1013 also includes a light source and a diffractive optical element (DOE) similar to the first light projector 1011.
(68) The image sensor 103 captures an image frame containing a first light segment image (e.g., IL1 in
(69) The processing unit (e.g., 105 or 109) recognizes a plush carpet in the moving direction when a vibration intensity of the second light segment image IL3 is higher than a predetermined threshold, and an obstacle height calculated according to the first light segment image IL1 is larger than a height threshold. The method of calculating the obstacle height has been illustrated above, and thus details thereof are not repeated herein.
(70) This embodiment recognizes a plush carpet further according to a vibration intensity of the second light segment image IL3, based on the observation that when the horizontal light segment is projected on an irregular surface (e.g., the plush carpet), the captured second light segment image IL3 fluctuates severely.
(71) For example,
(72) In one aspect, the processing unit (e.g., 105 or 109) compares an amplitude of the second light segment image IL3 with a baseline BL in the image frame IF, and counts, as the vibration intensity, a number of times that the amplitude of the second light segment image IL3 changes from above the baseline BL to below the baseline BL as well as from below the baseline BL to above the baseline BL.
(73) For example, a counted number is increased by 1 when the amplitude of the second light segment image IL3 changes from above the baseline BL to below the baseline BL or changes from below the baseline BL to above the baseline BL. For example, a counted number is increased by 1 when the amplitude of the second light segment image IL3 changes from above the baseline BL to below the baseline BL and then changes from below the baseline BL to above the baseline BL, or vice versa. The baseline BL is determined previously based on the arrangement of the third light projector 1013 and light projecting angle as well as a field of view of the image sensor 103.
(74) When the counted number of times is larger than a predetermined value, it means that the horizontal light segment is projected on an irregular surface (e.g., the plush carpet). Otherwise, the moving robot 100 does not recognize a plush carpet in front.
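The baseline-crossing count of paragraphs (72) to (74) can be sketched as follows, using the variant in which each single crossing increments the count; the amplitude samples are illustrative.

```python
def vibration_intensity(amplitudes, baseline):
    """Count transitions of the segment amplitude about the baseline,
    from above to below and from below to above."""
    count = 0
    prev_above = amplitudes[0] > baseline
    for a in amplitudes[1:]:
        above = a > baseline
        if above != prev_above:
            count += 1
        prev_above = above
    return count

# A fluctuating horizontal segment (e.g., projected on a plush carpet)
# crosses the baseline far more often than one imaged on a flat floor.
rough = [5, -3, 4, -2, 6, -1]   # illustrative amplitudes about baseline 0
flat  = [1, 1, 2, 1, 1, 1]
```

Comparing the count against the predetermined value then decides whether the surface is irregular.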
(75) In another aspect, the processing unit (e.g., 105 or 109) calculates a standard deviation of the second light segment image IL3 as the vibration intensity. When the calculated standard deviation is larger than a predetermined value, it means that the horizontal light segment is projected on an irregular surface (e.g., the plush carpet). Otherwise, the moving robot 100 does not recognize a plush carpet in front.
(76) In an alternative aspect, the processing unit (e.g., 105 or 109) compares an amplitude of the second light segment image IL3 with a first threshold (e.g., TH1 as shown in
(77) For example, a counted number is increased by 1 when the amplitude of the second light segment image IL3 changes from above the first threshold TH1 to below the second threshold TH2 or changes from below the second threshold TH2 to above the first threshold TH1. For example, a counted number is increased by 1 when the amplitude of the second light segment image IL3 changes from above the first threshold TH1 to below the second threshold TH2 and then changes from below the second threshold TH2 to above the first threshold TH1, or vice versa.
(78) When the counted number of times is larger than a predetermined value, it means that the horizontal light segment is projected on an irregular surface (e.g., the plush carpet). Otherwise, the moving robot 100 does not recognize a plush carpet in front.
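The two-threshold variant of paragraphs (76) to (78) behaves like a Schmitt trigger: only swings that pass from above TH1 to below TH2 (or back) are counted, so small wiggles near the baseline are ignored. A sketch, again with the single-crossing counting variant and made-up thresholds:

```python
def vibration_intensity_hysteresis(amplitudes, th1, th2):
    """Count transitions from above th1 (over the baseline) to below th2
    (under the baseline) and back, ignoring smaller fluctuations."""
    count = 0
    state = None  # 'high' once above th1, 'low' once below th2
    for a in amplitudes:
        if a > th1 and state != "high":
            if state == "low":
                count += 1
            state = "high"
        elif a < th2 and state != "low":
            if state == "high":
                count += 1
            state = "low"
    return count

signal = [5, 1, -5, -1, 6, -6]   # illustrative amplitudes, th1=3, th2=-3
```

Note that the wiggle sequence `[5, -1, 1, -1, 5]` yields a count of 0 with the same thresholds, whereas the plain baseline-crossing method would count it.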
(79) In an alternative aspect, the moving robot 100 recognizes the plush carpet further when a width W1 of the first light segment image IL1 is wider than a width threshold as mentioned above. That is, the processing unit uses a combination of three conditions, including light segment image width W1, vibration intensity and obstacle height (e.g., corresponding to D shown in
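The three-condition combination of paragraph (79) reduces to a conjunction of threshold tests; a sketch, with all thresholds and measurements being illustrative numbers rather than real calibration data:

```python
def is_plush_carpet(width, vibration, obstacle_height,
                    width_th, vib_th, height_th):
    """Combine the three conditions the text lists: segment image width W1,
    vibration intensity of the horizontal segment, and obstacle height."""
    return (width > width_th
            and vibration > vib_th
            and obstacle_height > height_th)

confirmed = is_plush_carpet(width=12, vibration=7, obstacle_height=15,
                            width_th=8, vib_th=4, height_th=10)
```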
(80) In an alternative aspect, the processing unit (e.g., 105 or 109) further calculates a distance from the plush carpet to perform corresponding controls as mentioned above, e.g., including speeding up, increasing a suction force and/or deactivating a brush rotation (and closing a water valve if included). The method of obtaining the distance from the plush carpet according to the light segment image IL1 or IL2 has been illustrated above and thus details thereof are not repeated herein.
(81) In an alternative aspect, the moving robot 100 includes a second light projector 1012 for projecting another vertical light segment (e.g., LS2 shown in
(82) Similarly, the processing unit (e.g., 105 or 109) recognizes the plush carpet further according to a width of said another light segment image IL2, similar to calculating the width of the first light segment image IL1.
(83) As mentioned above, the conventional cleaning robot has the problem of being unable to accurately identify a step distance on a special operating surface. Accordingly, the present disclosure further provides a moving robot (e.g.,
(84) Although the disclosure has been explained in relation to its preferred embodiment, it is not used to limit the disclosure. It is to be understood that many other possible modifications and variations can be made by those skilled in the art without departing from the spirit and scope of the disclosure as hereinafter claimed.