Control Method for Self-Moving Device and Self-Moving Device
20240415049 · 2024-12-19
CPC classification
G05D2105/15 (PHYSICS)
G05D1/243 (PHYSICS)
Abstract
A control method for a self-moving device controls the self-moving device to move in a designated area to process a predetermined object in the designated area. The method includes steps of: obtaining, during the moving process of the self-moving device, an image of an area in a forward direction of the self-moving device; and determining, according to the image, whether the predetermined object in the area in the forward direction of the self-moving device includes a specific predetermined object, so as to control a moving direction of the self-moving device. The specific predetermined object and the predetermined object have at least one different feature parameter. A related self-moving device is also disclosed.
Claims
1. A control method for a self-moving device for controlling the self-moving device to move in a designated area to process a predetermined object in the designated area, the method comprising the steps of: obtaining, during the moving process of the self-moving device, an image of an area in a forward direction of the self-moving device; and determining, according to the image, whether the predetermined object in the area in the forward direction of the self-moving device includes a specific predetermined object, so as to control a moving direction of the self-moving device, wherein the specific predetermined object and the predetermined object have at least one different feature parameter.
2. The control method according to claim 1, wherein the step of determining, according to the image, whether the predetermined object in the area in the forward direction of the self-moving device includes a specific predetermined object, includes: processing the image and determining, according to predetermined elements included in a first portion and a second portion of the processed image respectively, whether the predetermined object includes a specific predetermined object, wherein the first portion and the second portion are independent of each other and have equal area.
3. The control method according to claim 2, wherein the step of obtaining, during the moving process of the self-moving device, an image of an area in a forward direction of the self-moving device includes: obtaining, during the moving process of the self-moving device, a first image of the area in the forward direction of the self-moving device using a first image module and a second image of the area in the forward direction of the self-moving device using a second image module.
4. The control method according to claim 3, wherein the step of processing the image includes the steps of: obtaining an initial parallax image according to the first image and the second image; obtaining a relative parallax image according to the initial parallax image and a reference parallax image; and performing edge extraction on the relative parallax image to obtain an edge image.
5. The control method according to claim 4, wherein the step of determining, according to predetermined elements included in a first portion and a second portion of the processed image respectively, whether the predetermined object includes a specific predetermined object, includes the steps of: collecting statistics on quantities of pixels having values greater than 0 in the first portion and the second portion of the edge image respectively, wherein the first portion and the second portion are symmetrical about a center of the edge image; and determining whether a difference between the quantities of pixels having values greater than 0 in the first portion and the second portion satisfies a preset condition, and if so, determining that one of the first portion and the second portion that has the larger quantity of pixels having values greater than 0 includes the specific predetermined object.
6. The control method according to claim 4, wherein the processing the image includes the steps of: converting the first image or the second image into an HSV image; and binarizing the HSV image using a parameter threshold corresponding to a predetermined color to obtain a thresholded image, wherein the predetermined color is consistent with a color of the predetermined object.
7. The control method according to claim 6, wherein the step of determining, according to predetermined elements included in a first portion and a second portion of the processed image respectively, whether the predetermined object includes a specific predetermined object, includes the steps of: collecting statistics on quantities of pixels having values greater than 0 in the first portion and the second portion of the edge image respectively, and collecting statistics on quantities of pixels having values greater than 0 in the first portion and the second portion of the thresholded image respectively; determining whether a difference between the quantities of pixels having values greater than 0 in the first portion and the second portion of the edge image satisfies a preset condition, and determining whether a difference between the quantities of pixels having values greater than 0 in the first portion and the second portion of the thresholded image satisfies a preset condition; and determining that one of the first portion and the second portion that has the larger quantity of pixels having values greater than 0 includes the specific predetermined object, if the difference between the quantities of pixels having values greater than 0 in the first portion and the second portion of the edge image satisfies the preset condition, and the difference between the quantities of pixels having values greater than 0 in the first portion and the second portion of the thresholded image satisfies the preset condition.
8. The control method according to claim 5, wherein the preset condition is as follows: the quantity of pixels in one of the first portion and the second portion that has the larger quantity of pixels having values greater than 0 is a preset multiple of the quantity of pixels in the other of the first portion and the second portion that has the smaller quantity of pixels having values greater than 0.
9. The control method according to claim 5, wherein the method includes: controlling the self-moving device to move towards a direction in which one of the first portion and the second portion includes the specific predetermined object.
10. A self-moving device comprising: a visual module; at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, causes the at least one processor to perform the control method for a self-moving device according to claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] In order to illustrate the technical solutions of the embodiments of the present disclosure more clearly, the accompanying drawings required for use in the embodiments will be introduced briefly below. It should be understood that the following drawings show only some embodiments of the present disclosure and therefore should not be regarded as limiting the scope, and other relevant drawings can be derived based on the accompanying drawings by those of ordinary skill in the art without any creative efforts.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0023] In order to make those skilled in the art understand the solutions of the present disclosure better, the following will clearly and completely describe the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings in the embodiments of the present disclosure. Apparently, the described embodiments are merely some rather than all of the embodiments of the present disclosure. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without any creative efforts shall fall within the scope of protection of the present disclosure.
[0024] Notably, the terms "first", "second", and the like in the description, claims, and accompanying drawings of the present disclosure are used for distinguishing similar objects and need not describe a specific order or sequence. Understandably, the data used in such a way are interchangeable in proper circumstances, so that the embodiments of the present disclosure described herein can be implemented in orders other than the order illustrated or described herein. In addition, the terms "include", "have", and any other variant thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that includes a list of steps or units is not necessarily limited to those steps or units that are expressly listed, but may include other steps or units that are not expressly listed or are inherent to the process, method, system, product, or device.
[0026] S1. Obtain, in the moving process of the self-moving device, an image of an area in a forward direction of the self-moving device.
[0027] In this embodiment, the self-moving device is a robot mower as an example. The designated area is an area where the robot mower needs to cut grass. The predetermined object is grass on the area where the robot mower needs to cut grass.
[0028] In this embodiment, the image is captured by a visual module on the self-moving device. The visual module on the self-moving device includes a first image module and a second image module; that is, a binocular visual module is disposed on the self-moving device in this embodiment. In the moving process of the self-moving device, a first image of the area in the forward direction of the self-moving device is obtained using the first image module, and a second image of the area in the forward direction of the self-moving device is obtained using the second image module. The images have multiple characteristics, such as hue (H), saturation (S), and value (V).
[0029] S2. Determine, according to the image, whether the predetermined object in the area in the forward direction of the self-moving device includes a specific predetermined object, so as to control the moving direction of the self-moving device, where the specific predetermined object and the predetermined object have at least one different feature parameter.
[0030] In this embodiment, the specific predetermined object is missed grass on the area where the robot mower needs to cut grass, where the missed grass is higher than surrounding grass.
[0031] Specifically, the first image and the second image obtained above are first processed to obtain a processed image. Statistics on the quantities of pixels with values greater than a set threshold in the first portion and the second portion of the processed image are collected, and whether the predetermined object includes a specific predetermined object is determined according to the statistical results. Finally, the self-moving device is controlled to move towards the one of the first portion and the second portion that includes the specific predetermined object.
[0032] The processing may include parallax processing, edge extraction, color segmentation, and the like. When it is determined that the specific predetermined object is included, the self-moving device is controlled to move directly towards the specific predetermined object for cutting, or to perform fixed-point spiral cutting.
[0034] S21. Process the image.
[0035] Parallax processing, edge extraction, color segmentation, and the like are performed on the first image and the second image to obtain a processed image.
[0036] S22. Determine, according to predetermined elements included in a first portion and a second portion of the processed image respectively, whether the predetermined object includes a specific predetermined object, where the first portion and the second portion are independent of each other and have equal area.
[0037] Specifically, the first portion and the second portion may be two portions divided by a vertical centerline of the processed image as a dividing line. It can be understood that the first portion corresponds to an image on a left side in the forward direction of the robot mower, and the second portion corresponds to an image on a right side in the forward direction of the robot mower. The predetermined element includes a quantity of pixels having values greater than a set threshold. The feature parameters may be parameters of features such as color and contour. Statistics on quantities of pixels having values greater than the set threshold, included in the first portion and the second portion of the processed image respectively, are collected, and whether the predetermined object includes a specific predetermined object is determined according to the statistical results.
[0038] S23. Control the self-moving device to move towards a direction in which one of the first portion and the second portion includes the specific predetermined object.
[0039] If it is determined according to the statistical results that the predetermined object includes the specific predetermined object, that is, if it is determined according to the statistical results that there is missed grass on the area where the robot mower needs to cut grass, the self-moving device is controlled to move towards the direction in which one of the first portion and the second portion includes the specific predetermined object, that is, the robot mower is controlled to move towards missed grass.
[0041] S211. Obtain an initial parallax image according to the first image and the second image.
[0042] In this step, the technical means used to obtain the initial parallax image according to the first image and the second image is a conventional technology, and will not be elaborated here.
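The disclosure treats this step as conventional technology. Purely as an illustrative sketch, a naive block-matching approach (sum-of-absolute-differences cost along scanlines) can be written in Python with NumPy; the function name, block size, and disparity range are assumptions, and a practical implementation would use an established stereo method such as semi-global matching:

```python
import numpy as np

def initial_parallax(left, right, max_disp=8, block=3):
    """Naive SAD block matching: for each pixel of the left image, search
    leftwards in the right image for the best-matching block and record
    the horizontal offset (disparity) of that match."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.uint8)
    pad = block // 2
    L = np.pad(left.astype(np.int32), pad, mode='edge')
    R = np.pad(right.astype(np.int32), pad, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = L[y:y + block, x:x + block]
            best, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + block, x - d:x - d + block]
                cost = int(np.abs(patch - cand).sum())
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

For a textured scene shifted horizontally by two pixels, the recovered disparity in the interior is 2.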
[0043] S212. Obtain a relative parallax image according to the initial parallax image and a reference parallax image.
[0044] Specifically, the reference parallax image is first obtained, and then absolute values of differences between values of corresponding pixels in the initial parallax image and the reference parallax image are calculated to generate the relative parallax image.
[0045] The reference parallax image may be obtained by capturing, by the first image module and the second image module of the visual module, a level ground to obtain level images respectively, and then processing the level images.
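The per-pixel absolute difference of step S212 can be sketched as follows; the function name and the integer dtypes are assumptions:

```python
import numpy as np

def relative_parallax(initial_disp, reference_disp):
    """Step S212 sketch: absolute values of the differences between
    corresponding pixels of the initial and reference parallax images."""
    diff = initial_disp.astype(np.int32) - reference_disp.astype(np.int32)
    return np.abs(diff).astype(np.uint8)
```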
[0046] S213. Perform edge extraction on the relative parallax image to obtain an edge image.
[0047] Notably, image edges are among the most basic features of an image; an edge is a location where the gray-scale values of the image change abruptly within a local neighborhood.
[0048] Specifically, edge extraction is performed on the aforementioned relative parallax image to obtain the edge image.
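The disclosure does not name a specific edge operator. As one hedged illustration, a forward-difference gradient magnitude with a threshold yields a binary edge image (a practical implementation might instead use a Sobel or Canny operator); the function name and threshold are assumptions:

```python
import numpy as np

def edge_image(img, thresh=30):
    """Minimal edge extraction sketch: horizontal and vertical forward
    differences, combined into a gradient magnitude and thresholded so
    that edge pixels become 255 and all other pixels 0."""
    img = img.astype(np.int32)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal gradient
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical gradient
    mag = np.abs(gx) + np.abs(gy)
    return np.where(mag > thresh, 255, 0).astype(np.uint8)
```

A vertical step in intensity produces a single column of edge pixels.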
[0050] S221. Collect statistics on quantities of pixels having values greater than 0 in the first portion and the second portion of the edge image respectively, where the first portion and the second portion are symmetrical about a center of the edge image.
[0051] S222. Determine whether a difference between the quantities of pixels having values greater than 0 in the first portion and the second portion satisfies a preset condition, and if so, determine that one of the first portion and the second portion that has the larger quantity of pixels having values greater than 0 includes the specific predetermined object. The preset condition is as follows: the quantity of pixels in one of the first portion and the second portion that has the larger quantity of pixels having values greater than 0 is a preset multiple of the quantity of pixels in the other of the first portion and the second portion that has the smaller quantity of pixels having values greater than 0.
[0052] Specifically, the edge image is the aforementioned processed image, and statistics on the quantities of pixels having values greater than 0 in the first portion and the second portion of the edge image are collected respectively, where the first portion and the second portion may be divided by the vertical centerline of the processed image as a dividing line, and the first portion and the second portion are symmetrical about the center of the edge image. If the quantity of pixels in one of the first portion and the second portion that has the larger quantity of pixels having values greater than 0 is the preset multiple of the quantity of pixels in the other of the first portion and the second portion that has the smaller quantity of pixels having values greater than 0, it is determined that one of the first portion and the second portion that has the larger quantity of pixels having values greater than 0 includes the specific predetermined object. The preset multiple may be 1.5 times.
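Steps S221 and S222 amount to counting nonzero pixels in the two halves and applying the preset-multiple test. A sketch follows; the function name, the default multiple of 1.5, and the left/right/None return convention are illustrative assumptions:

```python
import numpy as np

def locate_specific_object(processed, multiple=1.5):
    """Split the processed image by its vertical centerline, count pixels
    with values greater than 0 in each half, and report the side with the
    larger count if it is at least `multiple` times the smaller count.
    Returns 'left', 'right', or None when the preset condition is not met."""
    mid = processed.shape[1] // 2
    left = int(np.count_nonzero(processed[:, :mid]))
    right = int(np.count_nonzero(processed[:, mid:]))
    larger, smaller = max(left, right), min(left, right)
    if larger > 0 and larger >= multiple * smaller:
        return 'left' if left > right else 'right'
    return None
```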
[0054] S214. Convert the first image or the second image into an HSV (Hue, Saturation, Value) image.
[0055] In this step, any one of the first image or the second image is first converted into an HSV image.
[0056] S215. Binarize the HSV image using a parameter threshold corresponding to a predetermined color to obtain a thresholded image, where the predetermined color is consistent with a color of the predetermined object.
[0057] Specifically, the color of the predetermined object may be a green color of lawn grass, and the predetermined color consistent with the color of the predetermined object is also green. The parameter threshold corresponding to the predetermined color is obtained, and the HSV image is then binarized based on the parameter threshold to obtain the thresholded image, where the values of pixels falling within the parameter threshold range become 255 and the values of the remaining pixels become 0.
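The binarization of step S215 can be sketched as a range test on the HSV channels. The bounds below are assumed, illustrative values for green (the disclosure does not specify them), using the OpenCV-style hue scale of 0 to 179:

```python
import numpy as np

def binarize_green(hsv, lo=(35, 40, 40), hi=(85, 255, 255)):
    """Step S215 sketch: pixels whose (H, S, V) values all fall within
    the assumed green range become 255; all other pixels become 0."""
    inside = (hsv >= np.array(lo)) & (hsv <= np.array(hi))
    mask = np.all(inside, axis=-1)
    return np.where(mask, 255, 0).astype(np.uint8)
```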
[0059] S223. Collect statistics on quantities of pixels having values greater than 0 in the first portion and the second portion of the edge image respectively, and collect statistics on quantities of pixels having values greater than 0 in the first portion and the second portion of the thresholded image respectively.
[0060] Specifically, first of all, both the edge image and the thresholded image may be used as the aforementioned processed image. Each image is then divided into two portions by the vertical centerline as a dividing line; the portions on the same side of the two images are regarded as a group, so that the whole is divided into a first portion and a second portion.
[0061] Statistics on the quantities of pixels having values greater than 0 in the first portion and the second portion of the edge image are collected, and statistics on the quantities of pixels having values greater than 0 in the first portion and the second portion of the thresholded image are collected, respectively.
[0062] S224. Determine whether a difference between the quantities of pixels having values greater than 0 in the first portion and the second portion of the edge image satisfies a preset condition, and determine whether a difference between the quantities of pixels having values greater than 0 in the first portion and the second portion of the thresholded image satisfies a preset condition.
[0063] The preset condition is as follows: the quantity of pixels in one of the first portion and the second portion that has the larger quantity of pixels having values greater than 0 is a preset multiple of the quantity of pixels in the other of the first portion and the second portion that has the smaller quantity of pixels having values greater than 0.
[0064] S225. Determine that one of the first portion and the second portion that has the larger quantity of pixels having values greater than 0 includes the specific predetermined object, if the difference between the quantities of pixels having values greater than 0 in the first portion and the second portion of the edge image satisfies the preset condition, and the difference between the quantities of pixels having values greater than 0 in the first portion and the second portion of the thresholded image satisfies the preset condition.
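Steps S223 to S225 can be sketched as applying the preset-multiple test to the edge image and the thresholded image independently, reporting a side only when both tests select the same side (the same-side requirement and the return convention are assumptions of this sketch):

```python
import numpy as np

def combined_decision(edge_img, thresh_img, multiple=1.5):
    """Steps S223-S225 sketch: run the left/right preset-multiple test on
    both the edge image and the thresholded image, and report a side only
    when the two tests agree; otherwise return None."""
    def side(img):
        mid = img.shape[1] // 2
        left = int(np.count_nonzero(img[:, :mid]))
        right = int(np.count_nonzero(img[:, mid:]))
        larger, smaller = max(left, right), min(left, right)
        if larger > 0 and larger >= multiple * smaller:
            return 'left' if left > right else 'right'
        return None
    e, t = side(edge_img), side(thresh_img)
    return e if e is not None and e == t else None
```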
[0066] According to the technical solutions of the embodiments of the present disclosure, an image of an area in a forward direction of the self-moving device is obtained during the moving process of the self-moving device, and whether a predetermined object in the area in the forward direction includes a specific predetermined object is determined according to the image, so as to control the moving direction of the self-moving device to process the specific predetermined object, for example, to cut high grass. Missed grass is thus processed in an effective and timely manner and prevented from growing taller and spreading to the point of requiring manual processing.
[0071] It should be understood that steps may be reordered, added, or deleted based on various forms of procedures shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in different orders. As long as the results desired by the technical solutions of the present disclosure can be achieved, no limitation is imposed herein.
[0072] The aforementioned specific implementations do not constitute a limitation on the scope of protection of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and replacements may be conducted according to design requirements and other factors. Modifications, equivalent replacements, improvements, and the like made within the spirit and principle of the present disclosure shall fall within the scope of protection of the present disclosure.