Self-localizing system operative in an unknown environment

12276986 · 2025-04-15

Abstract

A system configured to operate in an unknown, possibly texture-less environment with possibly self-similar surfaces, comprising a plurality of platforms configured to operate as mobile platforms, where each of these platforms comprises an optical depth sensor, and one platform operates as a static platform comprising at least one optical projector. Upon operating the system, the static platform projects a pattern onto the environment, each of the mobile platforms detects the pattern or a part thereof by its respective optical depth sensor while moving, and information obtained by the optical depth sensors is used to determine moving instructions for the mobile platforms within that environment. Optionally, the system operates so that every time period another mobile platform from among the plurality of platforms takes the role of the static platform, while the preceding platform returns to operating as a mobile platform.

Claims

1. A system configured to operate in an unknown environment and comprising a plurality of platforms configured to operate as mobile platforms, where each of said plurality of mobile platforms comprises an optical depth sensor, and one or more different platforms are configured to operate as static platforms, each comprising at least one optical projector, wherein upon operating said system in an unknown environment, the at least one static platform is configured to project a pattern within the unknown environment, wherein each of the plurality of mobile platforms is configured to detect said pattern or a part thereof by its respective optical depth sensor, and wherein information obtained by said optical depth sensors is received by at least one processor and used to determine moving instructions for at least one mobile platform within the unknown environment, wherein the system comprises at least two mobile units which are mechanically linked to each other, wherein at a given time, at least one of the at least two mobile units acts as a static platform, and wherein at least one other of the at least two mobile units is configured to change its position with respect to the mobile unit acting as the static platform.

2. A system configured to operate in an unknown environment and comprising a plurality of platforms configured to operate as mobile platforms, wherein each of said plurality of mobile platforms comprises at least one optical depth sensor and at least one optical projector, wherein upon operating said system in an unknown environment, a first platform is selected from among the plurality of mobile platforms to operate as a static platform and to project a pattern within the unknown environment, wherein each of the remaining mobile platforms is configured to detect said pattern or a part thereof by its respective optical depth sensor, wherein information obtained by said optical depth sensors is received by at least one processor and used to determine moving instructions for at least one mobile platform within the unknown environment, and wherein said system is further adapted to select a second platform from among the plurality of mobile platforms to operate as a static platform and to project a pattern within the unknown environment, and to change the mode of operation of said first platform from operating as a static platform to operating as a mobile platform.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) For a more complete understanding of the present invention, reference is now made to the following detailed description taken in conjunction with the accompanying drawings wherein:

(2) FIG. 1 illustrates a schematic presentation of a system constructed in accordance with an embodiment of the present invention;

(3) FIG. 2 illustrates a schematic presentation of an embodiment of a central platform comprised in the system depicted in FIG. 1; and

(4) FIGS. 3A to 3D exemplify various scenarios while carrying out an embodiment constructed in accordance with the present invention.

DETAILED DESCRIPTION

(5) In this disclosure, the term "comprising" is intended to have an open-ended meaning, so that when a first element is stated as comprising a second element, the first element may also include one or more other elements that are not necessarily identified or described herein, or recited in the claims.

(6) In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a better understanding of the present invention by way of examples. It should be apparent, however, that the present invention may be practiced without these specific details.

(7) FIG. 1 illustrates a schematic presentation of a system 100 constructed in accordance with an embodiment of the present invention. System 100 illustrated in this figure comprises, at each point in time, a stationary robot 110.sub.1 comprising a projector 130, and a plurality of mobile robots 110.sub.2, . . . , 110.sub.n (mobile platforms), each comprising a depth camera 135 operating as an optical depth sensor, wherein the plurality of mobile robots 110.sub.2, . . . , 110.sub.n are set to operate, in this example for the first time, within a confined texture-less space such as a warehouse (not shown in this figure), a space that has not yet been mapped. Robot 110.sub.1 projects a pattern 125 by its projector 130 onto a wall of the warehouse, and each of mobile robots 110.sub.2, . . . , 110.sub.n uses its image-capturing sensor 135 (e.g., a stereo camera) to detect the target that comprises pattern 125 or a part thereof, and to capture its image as seen from the current spot at which the respective mobile robot is located.

(8) One embodiment of carrying out the present invention is that each of the mobile robots 110.sub.2, . . . , 110.sub.n has its own processor, which is adapted to receive data associated with the captured image of pattern 125, analyze it, and determine, based on that analysis, the relative position of the respective mobile robot with respect to the 3D map points derived from the target pattern 125. Once the relative position of the respective mobile robot has been established, the processor issues moving instructions for that mobile robot to enable the latter to move within the warehouse.
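
By way of a non-limiting illustration that is not part of the original disclosure, the per-robot pose computation described above could be sketched as follows, assuming a calibrated camera, known 3D coordinates of the pattern features in the pattern's own frame, and the hypothetical names shown. Any equivalent perspective-n-point solver would serve; OpenCV's solvePnP is used here only as one widely available example.

```python
# Minimal sketch (not from the patent) of a mobile robot localizing itself
# against the 3D map points derived from a projected pattern.
import numpy as np
import cv2

def estimate_robot_pose(pattern_points_3d, detected_points_2d,
                        camera_matrix, dist_coeffs):
    """Estimate the camera pose relative to the projected pattern.

    pattern_points_3d:  (N, 3) array of pattern feature positions in the
                        pattern's coordinate frame (assumed to be known).
    detected_points_2d: (N, 2) array of the same features as detected in
                        the captured image.
    """
    ok, rvec, tvec = cv2.solvePnP(
        pattern_points_3d.astype(np.float32),
        detected_points_2d.astype(np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    # Camera (robot) position in the pattern frame: -R^T * t.
    rot, _ = cv2.Rodrigues(rvec)
    robot_position = -rot.T @ tvec
    return robot_position.ravel(), rot
```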

(9) FIGS. 1 and 2 further illustrate another embodiment of carrying out the present invention. System 100 depicted in FIG. 1 further comprises a central platform (a.k.a. a central unit) 120 that is configured to receive information associated with pattern 125 as captured by each of the mobile robots 110.sub.2, . . . , 110.sub.n, wherein the information is forwarded to central platform 120 by transmitters, each associated with a respective mobile robot. A more detailed schematic view of an example of central platform 120 is illustrated in FIG. 2.

(10) Optionally, one or more of the mobile robots 110.sub.2, . . . , 110.sub.n forwards to the central platform two or more captured images of the target pattern. In such a case, after forwarding the first captured image of the target pattern to the central platform, the respective mobile robot changes its location. This location change may be either a predetermined change (for example, moving 30 cm to the left), after which a further image of the target pattern is captured, or a change that central platform 120 instructs that specific mobile robot to make.

(11) Based on the information retrieved from the different images captured by mobile robots 110.sub.2, . . . , 110.sub.n, processor 210 analyzes the data retrieved from the captured patterns and determines, based on that analysis, the relative position of each of the mobile robots with respect to the 3D map points derived from the target pattern 125. For example, based on the pattern images captured by each of the mobile robots, the processor is able to determine the distance of a respective mobile robot from the target pattern and its orientation (e.g., the angle at which the mobile robot is located with respect to a normal extending from the target pattern). Once the relative position of a mobile robot has been established with respect to the 3D map points derived from the target pattern, processor 210 issues moving instructions for that mobile robot within the warehouse and transmits the instructions by transmitter 220 to the respective mobile robot. A similar process is carried out for each of the other mobile robots.
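
Continuing the same hedged illustration, the distance and orientation angle mentioned in this paragraph could be derived from the estimated position, assuming, purely for this sketch, that the pattern's normal lies along the +Z axis of the pattern frame:

```python
# Hedged illustration of paragraph (11): distance from the target pattern
# and the angle relative to the pattern's normal (assumed here to be +Z),
# given the robot's position in the pattern frame (e.g., from the sketch
# above).
import numpy as np

def distance_and_bearing(robot_position):
    p = np.asarray(robot_position, dtype=float)
    distance = np.linalg.norm(p)
    # Angle between the robot's line of sight to the pattern origin and
    # the assumed pattern normal (+Z).
    angle_deg = np.degrees(np.arccos(p[2] / distance))
    return distance, angle_deg
```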

(12) Optionally, the process by which each of the mobile robots sends captured images of pattern 125 to central platform 120 and receives from central platform 120 updated moving instructions is carried out every pre-defined period of time (e.g., every second). Alternatively, once central platform 120 informs a mobile robot of its initial position relative to the 3D map points derived from the target pattern, a processor comprised in that mobile robot calculates a path along which that mobile robot will be able to move within the unknown environment. Optionally, information related to the various paths calculated by the respective mobile robots' processors is forwarded to the central platform for the latter to confirm that none of the paths might cause collisions between mobile robots.
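
As a minimal sketch of the optional collision confirmation, not taken from the disclosure itself, one could assume each proposed path is discretized into an equal-length sequence of time-aligned 2D waypoints (an illustrative assumption only) and check all pairs of robots:

```python
# Minimal sketch of the central platform's path-confirmation step.
import numpy as np

def paths_conflict(path_a, path_b, safety_radius=0.5):
    """Return True if two time-aligned sampled paths ever come within
    safety_radius of each other. Both paths are (N, 2) arrays."""
    a = np.asarray(path_a, dtype=float)
    b = np.asarray(path_b, dtype=float)
    gaps = np.linalg.norm(a - b, axis=1)
    return bool(np.any(gaps < safety_radius))

def confirm_paths(paths, safety_radius=0.5):
    """Pairwise check of all robots' proposed paths.

    paths: mapping of robot id -> (N, 2) waypoint array.
    Returns (True, None) if no conflict, else (False, offending pair).
    """
    ids = list(paths)
    for i, ra in enumerate(ids):
        for rb in ids[i + 1:]:
            if paths_conflict(paths[ra], paths[rb], safety_radius):
                return False, (ra, rb)
    return True, None
```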

(13) Typically, for real-time navigation, the mobile robot estimates its position at each point in time. Yet, a path in the unknown environment can be estimated with higher accuracy by implementing a post-processing procedure while applying any applicable filtering method that is known in the art per se.
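
As one hedged example among the many applicable filtering methods, a simple moving-average smoother over a recorded sequence of position estimates:

```python
# Illustrative post-processing smoother for paragraph (13); any standard
# filter (Kalman, spline fitting, etc.) could be substituted.
import numpy as np

def smooth_path(positions, window=5):
    """Smooth an (N, 2) array of position estimates with a moving average.

    Note: mode="same" zero-pads at the ends, so the first and last few
    samples are damped toward zero; a real pipeline would handle edges.
    """
    pos = np.asarray(positions, dtype=float)
    kernel = np.ones(window) / window
    return np.column_stack([
        np.convolve(pos[:, k], kernel, mode="same")
        for k in range(pos.shape[1])
    ])
```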

(14) Still, in order to avoid interference between the mobile robots communicating with the central platform, according to the present example, time slots are allocated to the various mobile robots, wherein during such a time slot at least one, but fewer than all, of the mobile robots are allowed to communicate with the central platform. Yet, it should be understood that there are quite a few communication protocols that are known in the art per se that can be used for this purpose, such as time-division multiplexing, frequency-division multiplexing and the like. As will be appreciated by those skilled in the art, the present invention is not limited to any such specific communication protocol.
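
A minimal sketch of such a time-slot allocation, assuming a round-robin frame of fixed-length slots (the slot length and robot identifiers below are illustrative assumptions, not part of the disclosure):

```python
# Illustrative round-robin TDM scheme for paragraph (14): at any time t,
# exactly one robot is permitted to transmit to the central platform.
def slot_owner(robot_ids, t, slot_seconds=0.1):
    """Return the robot allowed to transmit at time t (seconds)."""
    frame_position = int(t / slot_seconds) % len(robot_ids)
    return robot_ids[frame_position]

# Example: with three robots and 0.1 s slots, "r2" owns t = 0.15 s.
assert slot_owner(["r1", "r2", "r3"], 0.15) == "r2"
```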

(15) FIGS. 3A to 3D exemplify various scenarios while carrying out an embodiment constructed in accordance with the present invention. The underlying idea of this example is that, when the system is operative, a first robot projects a pattern while acting as a static robot, and all other robots, being the mobile robots, move based on information derived from capturing an image of that pattern (or part thereof). Then, a second robot assumes the role of acting as a static robot, while the other robots (including the first robot that was previously the static robot) move based on information derived from capturing an image of the pattern (or part thereof) projected by the second robot. The robots may thus proceed within the unknown environment, with a different robot assuming the role of the static robot for a certain period while the other robots move around, and then being replaced by yet another robot that becomes the static one. Obviously, the replacing robot may be the first robot replacing the second robot while assuming the role of the static robot, or it can be a third robot (i.e., any robot selected from among the other robots) that assumes the role of the static robot.

(16) In the current example, two robots are demonstrated: robots 310.sub.1 and 310.sub.2, each comprising a projector (330.sub.1 and 330.sub.2, respectively) and a 3D camera (335.sub.1 and 335.sub.2, respectively). The first scenario (say, at t=t.sub.0) is illustrated in FIG. 3A, by which robot 310.sub.1 projects a pattern onto the unknown environment, and by using 3D camera 335.sub.2, robot 310.sub.2 captures an image of that pattern and forwards the captured image to a memory or to a processor. Next, FIG. 3B demonstrates that after some time (say, at t=t.sub.1), robot 310.sub.2 moves to another location, while robot 310.sub.1 remains stationary during the period that extends between t.sub.0 and t.sub.1. At t.sub.1, robot 310.sub.2, now located at another location, captures a second image of the pattern by its 3D camera 335.sub.2, which is then processed together with the first image previously taken by robot 310.sub.2, and the results derived from the processing of the images enable localization of robot 310.sub.2 within that environment. After some time (say, at t=t.sub.2), as depicted in FIG. 3C, robot 310.sub.2 becomes the now-stationary robot and starts projecting a pattern by its projector 330.sub.2. Robot 310.sub.1 turns off its projector and uses its 3D camera to capture an image of the pattern projected by robot 310.sub.2. After some more time (say, at t=t.sub.3), as depicted in FIG. 3D, robot 310.sub.1 moves to another location and uses its 3D camera 335.sub.1 to capture the pattern projected by robot 310.sub.2 from its new location, thereby enabling determination of the current location of robot 310.sub.1. As will be appreciated by those skilled in the art, by changing the robots' functionalities (mobile/stationary), the various robots assist each other to navigate within the unknown environment, while eliminating the need to use a dedicated stationary platform for carrying out the solution of the present invention.
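
A minimal sketch of this role-rotation protocol, not part of the original disclosure, in which the robot interface (project_on, project_off, localize_and_move) is hypothetical and stands in for whatever hardware control the platforms expose:

```python
# Illustrative rotation of the static (projecting) role among the robots,
# mirroring the sequence of FIGS. 3A-3D.
import itertools

def run_rotation(robots, cycles=6, moves_per_leader=2):
    """Each leader in turn projects while the others localize and move."""
    for leader in itertools.islice(itertools.cycle(robots), cycles):
        leader.project_on()                      # leader becomes the static platform
        for _ in range(moves_per_leader):
            for robot in robots:
                if robot is not leader:          # every other robot stays mobile
                    robot.localize_and_move()    # capture pattern, update pose, move
        leader.project_off()                     # hand the static role onward
```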

(17) In the description and claims of the present application, each of the verbs "comprise," "include" and "have," and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of members, components, elements or parts of the subject or subjects of the verb.

(18) The present invention has been described using detailed descriptions of embodiments thereof that are provided by way of example and are not intended to limit the scope of the invention in any way. The described embodiments comprise different features, not all of which are required in all embodiments of the invention. Some embodiments of the present invention utilize only some of the features or possible combinations of the features. Variations of the embodiments of the present invention that are described, and embodiments of the present invention comprising different combinations of the features noted in the described embodiments, will occur to persons skilled in the art. The scope of the invention is limited only by the following claims.