ROBOT WITH TENNIS BALL GATHERING CAPABILITIES
20260064133 · 2026-03-05
Assignee
Inventors
CPC classification
A63B2047/022
HUMAN NECESSITIES
A63B47/021
HUMAN NECESSITIES
International classification
G05D1/246
PHYSICS
Abstract
A method, apparatus, and system are disclosed for a robot with tennis ball gathering capabilities. A robot is configured to collect, transport, and deposit for storage tennis balls and other light, mobile sports equipment, such as is used in table tennis, badminton, squash, pickleball, golf, basketball, dodgeball, floor hockey, indoor soccer, etc., to assist a player in practice without the need for additional support personnel or manual ball retrieval by the player.
Claims
1. A robotic system comprising: a robot including: a scoop; pusher pad arms with pusher pads; at least one wheel or one track for mobility of the robot; a processor; and a memory storing instructions that, when executed by the processor, allow operation and control of the robot; a base station; a plurality of ball baskets for at least one of storing balls and launching the balls; a robotic control system in at least one of the robot and a cloud server; and logic, to: execute an initialization mode, including: map, by the robot, an environment with a court, including: identify the court, court boundary lines, at least one player on the court, and ball baskets; and localize itself within the environment; receive an operating state from at least one of a user and a timing setting; on condition the operating state is a pick up ball mode: determine if a pickup threshold has been reached; on condition the pickup threshold has not been reached: determine a pickup strategy for picking up the balls; execute the pickup strategy until the pickup threshold has been reached, the pickup strategy including: navigate the environment while following ball pickup area rules; extend the pusher pads out and forward with respect to the pusher pad arms and raise the pusher pads to a grabbing height; approach a target ball, coming to a stop when the target ball is positioned between the pusher pads; push the target ball with the pusher pads onto the scoop to hold the target ball in the scoop; and raise at least one of the scoop and the pusher pads, holding the target ball, to a carrying position; determine if the pickup threshold has been reached; on condition the pickup threshold has been reached: execute a post pickup mode, including: navigate to a post pickup location to at least one of: transfer the balls into the ball baskets; and bring the balls to the at least one player.
2. The robotic system of claim 1, wherein the robot includes: a chassis; a mobility system; at least one first motor configured to actuate the mobility system; a sensing system including cameras; a scoop arm associated with a second motor to rotate the scoop arm; a third motor associated with the scoop arm and configured to rotate the scoop; a linear actuator configured to retract and extend the scoop arm; fourth motors configured to raise, lower, and extend the pusher pad arms; fifth motors configured to rotate the pusher pads horizontally; and the logic further comprising: rotate the pusher pads horizontally against the chassis through action of the fifth motors; approach one of the ball baskets, by a front of the robot, by actuating the at least one first motor and the mobility system; raise the scoop by actuating the second motor and rotating the scoop arm; extend the linear actuator to move a front edge of the scoop over a wall of the ball basket; rotate the scoop to a downward position by actuating the third motor; allow the scoop to remain in the downward position until all of the balls have been deposited in the ball basket; rotate the scoop to a horizontal position by actuating the third motor; retract the linear actuator to move a front edge of the scoop away from the wall of the ball basket; and lower the scoop by actuating the second motor and rotating the scoop arm.
3. The robotic system of claim 2, wherein the robot further includes: a trailer configured to be coupled to a rear of the chassis, wherein the trailer includes trailer wheels and is configured to hold at least one ball basket; and the logic further comprising: extend the linear actuator to move a rear wall of the scoop from an initial position to a resulting position over a wall of the ball basket closest to the chassis; raise the scoop over the chassis, from the front to the rear of the chassis, by actuating the second motor and rotating the scoop arm; rotate the scoop, in a direction lowering the rear wall of the scoop to a downward position, by actuating the third motor; allow the scoop to remain in the downward position until all of the balls have been deposited in the ball basket; lower the scoop over the chassis, from the rear to the front of the chassis, by actuating the second motor and rotating the scoop arm; rotate the scoop to the horizontal position by actuating the third motor; and retract the linear actuator to move the scoop to the initial position.
4. The robotic system of claim 3, the logic further comprising: navigate the robot to a target ball basket or a ball launcher; lift, by the robot, the target ball basket or the ball launcher, with the scoop, into the carrying position; transport, by the robot, the target ball basket or the ball launcher to the trailer; load the target ball basket or the ball launcher onto the trailer; move the robot into a trailer coupling position; and couple the trailer to the robot.
5. The robotic system of claim 2, the logic further comprising a carry basket mode, including: navigate the robot to a target ball basket or a ball launcher; lift, by the robot, the target ball basket or the ball launcher, with the scoop, into the carrying position; and transport the ball basket or the ball launcher to a new location.
6. The robotic system of claim 5, the logic further comprising a place basket mode, including: lower, by the robot, the scoop, placing the ball basket or the ball launcher at a current location.
7. The robotic system of claim 1, the logic further comprising a go to standby location mode, including: on condition the balls have been transferred into the ball baskets or the balls have been removed by the at least one player: navigate the robot to a standby location; and map the environment.
8. The robotic system of claim 1, the logic further comprising a follow person mode, including: navigate the robot to keep a fixed distance from a target player; and pause movement when the target player stops moving or instructs the robot to stop moving.
9. The robotic system of claim 1, the logic further comprising a ready mode, including: after the initialization mode, continue to map the environment, including at least one of: a position of the at least one player; locations of the balls; points scored by the at least one player; and a stage of at least one of a game or a match.
10. The robotic system of claim 9, the logic further comprising a go to location mode, including: navigate the robot to a location on a map to reach a target location; on condition the target location is the base station: dock at a charging dock on the base station; and enter at least one of a charging mode and a sleep mode; and on condition the target location is not the base station: enter the ready mode.
11. A method comprising: executing an initialization mode by a robot, the robot comprising a scoop, pusher pad arms with pusher pads, at least one wheel or one track for mobility of the robot, and a robotic control system including a processor and a memory storing instructions that, when executed by the processor, allow operation and control of the robot, and the initialization mode including: mapping an environment with a court, including: identifying the court, court boundary lines, at least one player on the court, and ball baskets; and localizing itself within the environment; receiving an operating state from at least one of a user and a timing setting; on condition the operating state is a pick up ball mode: determining if a pickup threshold has been reached; on condition the pickup threshold has not been reached: determining a pickup strategy for picking up balls; executing the pickup strategy until the pickup threshold has been reached, the pickup strategy including: navigating the environment while following ball pickup area rules; extending the pusher pads out and forward with respect to the pusher pad arms and raising the pusher pads to a grabbing height; approaching a target ball, coming to a stop when the target ball is positioned between the pusher pads; pushing the target ball with the pusher pads onto the scoop to hold the target ball in the scoop; raising at least one of the scoop and the pusher pads, holding the target ball, to a carrying position; and determining if the pickup threshold has been reached; on condition the pickup threshold has been reached: executing a post pickup mode, including: navigating to a post pickup location to at least one of: transfer the balls into the ball baskets; and bring the balls to the at least one player.
12. The method of claim 11, further comprising: wherein the robot includes a chassis, a mobility system, at least one first motor configured to actuate the mobility system, a sensing system including cameras, a scoop arm associated with a second motor to rotate the scoop arm, a third motor associated with the scoop arm and configured to rotate the scoop, a linear actuator configured to retract and extend the scoop arm, fourth motors configured to raise, lower, and extend the pusher pad arms, and fifth motors configured to rotate the pusher pads horizontally; rotating the pusher pads horizontally against the chassis through action of the fifth motors; approaching one of the ball baskets, by a front of the robot, by actuating the at least one first motor and the mobility system; raising the scoop by actuating the second motor and rotating the scoop arm; extending the linear actuator to move a front edge of the scoop over a wall of the ball basket; rotating the scoop to a downward position by actuating the third motor; allowing the scoop to remain in the downward position until all of the balls have been deposited in the ball basket; rotating the scoop to a horizontal position by actuating the third motor; retracting the linear actuator to move a front edge of the scoop away from the wall of the ball basket; and lowering the scoop by actuating the second motor and rotating the scoop arm.
13. The method of claim 12, further comprising: wherein the robot further includes a trailer configured to be coupled to a rear of the chassis, wherein the trailer includes trailer wheels and is configured to hold at least one ball basket; extending the linear actuator to move a rear wall of the scoop from an initial position to a resulting position over a wall of the ball basket closest to the chassis; raising the scoop over the chassis, from the front to the rear of the chassis, by actuating the second motor and rotating the scoop arm; rotating the scoop, in a direction lowering the rear wall of the scoop to a downward position, by actuating the third motor; allowing the scoop to remain in the downward position until all of the balls have been deposited in the ball basket; lowering the scoop over the chassis, from the rear to the front of the chassis, by actuating the second motor and rotating the scoop arm; rotating the scoop to the horizontal position by actuating the third motor; and retracting the linear actuator to move the scoop to the initial position.
14. The method of claim 13, further comprising: navigating the robot to a target ball basket or a ball launcher; lifting, by the robot, the target ball basket or the ball launcher, with the scoop, into the carrying position; transporting, by the robot, the target ball basket or the ball launcher to the trailer; loading the target ball basket or the ball launcher onto the trailer; moving the robot into a trailer coupling position; and coupling the trailer to the robot.
15. The method of claim 12, further comprising: executing a carry basket mode by the robot, including: navigating the robot to a target ball basket or a ball launcher; lifting, by the robot, the target ball basket or the ball launcher, with the scoop, into the carrying position; and transporting the ball basket or the ball launcher to a new location.
16. The method of claim 15, further comprising: executing a place basket mode by the robot, including: lowering, by the robot, the scoop, placing the ball basket or the ball launcher at a current location.
17. The method of claim 11, further comprising: executing a go to standby location mode by the robot, including: on condition the balls have been transferred into the ball baskets or the balls have been removed by the at least one player: navigating the robot to a standby location; and mapping the environment.
18. The method of claim 11, further comprising: executing a follow person mode by the robot, including: navigating the robot to keep a fixed distance from a target player; and pausing movement when the target player stops moving or instructs the robot to stop moving.
19. The method of claim 11, further comprising: executing a ready mode by the robot, including: after the initialization mode, continuing to map the environment, including at least one of: a position of the at least one player; locations of the balls; points scored by the at least one player; and a stage of at least one of a game or a match.
20. The method of claim 19, further comprising: executing a go to location mode by the robot, including: navigating the robot to a location on a map to reach a target location; on condition the target location is a base station: docking at a charging dock on the base station; and entering at least one of a charging mode and a sleep mode; and on condition the target location is not the base station: entering the ready mode.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0005] To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
DETAILED DESCRIPTION
[0046] A tennis ball retrieval robot as disclosed herein may provide a flexible and intelligent automated solution to fetching tennis balls from in and around an area of play. A tennis ball fetching robot would make it easier to play or practice tennis without having to run around picking up all the balls afterwards. Such a solution may also apply to other similar sports with balls (or similarly light and mobile equipment) such as table tennis, badminton, squash, pickleball, golf, basketball, dodgeball, floor hockey, indoor soccer, etc. In one embodiment, the robot may be constructed and configured for use over uneven terrain and across longer distances and may retrieve golf balls from a driving range or golf course.
[0048] The chassis 102 may support and contain the other components of the robot 100. The mobility system 104 may comprise wheels as indicated, as well as caterpillar tracks, conveyor belts, etc., as is well understood in the art. The mobility system 104 may further comprise motors, servos, or other sources of rotational or kinetic energy to impel the robot 100 along its desired paths. Mobility system 104 components may be mounted on the chassis 102 for the purpose of moving the entire robot without impeding or inhibiting the range of motion needed by the capture and containment system 108. Elements of a sensing system 106, such as cameras, lidar sensors, or other components, may be mounted on the chassis 102 in positions giving the robot 100 clear lines of sight around its environment in at least some configurations of the chassis 102, scoop 110, pusher pad 116, and pusher pad arm 118 with respect to each other.
[0049] The chassis 102 may house and protect all or portions of the robotic control system 2500, (portions of which may also be accessed via connection to a cloud server) comprising in some embodiments a processor, memory, and connections to the mobility system 104, sensing system 106, and capture and containment system 108. The chassis 102 may contain other electronic components such as batteries, wireless communication devices, etc., as is well understood in the art of robotics. The robotic control system 2500 may function as described in greater detail with respect to
[0050] The capture and containment system 108 may comprise a scoop 110, a scoop arm 112, a scoop arm pivot point 114, a pusher pad 116, a pusher pad arm 118, a pad pivot point 120, and a pad arm pivot point 122. In some embodiments, the capture and containment system 108 may include two pusher pad arms 118, pusher pads 116, and their pivot points. In other embodiments, pusher pads 116 may attach directly to the scoop 110, without pusher pad arms 118. Such embodiments are illustrated later in this disclosure.
[0051] The geometry of the scoop 110 and the disposition of the pusher pads 116 and pusher pad arms 118 with respect to the scoop 110 may describe a containment area, illustrated more clearly in
[0052] The point of connection shown between the scoop arms and pusher pad arms is an exemplary position and is not intended to limit the physical location of such points of connection. Such connections may be made in various locations as appropriate to the construction of the chassis and arms, and the applications of intended use.
[0053] In some embodiments, gripping surfaces may be configured on the sides of the pusher pads 116 facing inward toward objects to be lifted. These gripping surfaces may provide cushion, grit, elasticity, or some other feature that increases friction between the pusher pads 116 and objects to be captured and contained. In some embodiments, the pusher pad 116 may include suction cups in order to better grasp objects having smooth, flat surfaces. In some embodiments, the pusher pads 116 may be configured with sweeping bristles. These sweeping bristles may assist in moving small objects from the floor up onto the scoop 110. In some embodiments, the sweeping bristles may angle down and inward from the pusher pads 116, such that, when the pusher pads 116 sweep objects toward the scoop 110, the sweeping bristles form a ramp, allowing the foremost bristles to slide beneath the object and direct it upward toward the pusher pads 116, facilitating capture of the object within the scoop and reducing the tendency of the object to be pressed against the floor, which would increase its friction and make it more difficult to move.
[0055] In one embodiment, the mobility system 104 may comprise a right front wheel 136, a left front wheel 138, a right rear wheel 140, and a left rear wheel 142. The robot 100 may have front-wheel drive, where right front wheel 136 and left front wheel 138 are actively driven by one or more actuators or motors, while the right rear wheel 140 and left rear wheel 142 spin on an axle passively while supporting the rear portion of the chassis 102. In another embodiment, the robot 100 may have rear-wheel drive, where the right rear wheel 140 and left rear wheel 142 are actuated and the front wheels turn passively. In another embodiment, each wheel may be actively actuated by separate motors or actuators.
[0056] The sensing system 106 may further comprise cameras 124 such as the front cameras 126 and rear cameras 128, light detecting and ranging (LIDAR) sensors such as lidar sensors 130, and inertial measurement unit (IMU) sensors, such as IMU sensors 132. In some embodiments, front camera 126 may include the front right camera 144 and front left camera 146. In some embodiments, rear camera 128 may include the rear left camera 148 and rear right camera 150.
[0057] Additional embodiments of the robot that may be used to perform the disclosed algorithms are illustrated in
[0060] Pad arm pivot points 122, pad pivot points 120, scoop arm pivot points 114 and scoop pivot points 502 (as shown in
[0062] The carrying position may involve the disposition of the pusher pads 116, pusher pad arms 118, scoop 110, and scoop arm 112, in relative configurations between the extremes of lowered scoop position and lowered pusher pad position 200a and raised scoop position and raised pusher pad position 200c.
[0066] The point of connection shown between the scoop arms 112/pusher pad arms 118 and the chassis 102 is an exemplary position and is not intended to limit the physical location of this point of connection. Such connection may be made in various locations as appropriate to the construction of the chassis 102 and arms, and the applications of intended use.
[0068] The different points of connection 402 between the scoop arm and chassis and the pusher pad arms and chassis shown are exemplary positions and are not intended to limit the physical locations of these points of connection. Such connections may be made in various locations as appropriate to the construction of the chassis and arms, and the applications of intended use.
[0070] The robot 100 may be configured with a scoop pivot point 502 where the scoop 110 connects to the scoop arm 112. The scoop pivot point 502 may allow the scoop 110 to be tilted forward and down while the scoop arm 112 is raised, allowing objects in the containment area 210 to slide out and be deposited in an area to the front 202 of the robot 100.
[0072] In one embodiment, the scoop 110 may be configured with flexible and/or collapsible sides. For example, the sides of scoop 110 may be constructed of elasticized netting that may expand as the scoop fills up, allowing increased ball storage, but able to collapse when the scoop is empty, facilitating interface with a ball basket such as those illustrated in
[0073] Each pusher pad 116 may be able to raise and lower through the action of the fourth motors 610 upon the pusher pad arms 118 as shown. In one embodiment, the pusher pad arms 118 may incorporate linear actuators allowing them to also extend and retract with respect to their points of attachment either to the robot chassis 102 as shown for ball collection robot 600, or the robot scoop as illustrated with respect to the ball collection robot 700 of
[0074] In one embodiment, the robotic control system 2500 may further include sensors and control logic capable of recognizing radio frequency identification (RFID) tagging or other similar configurations used to individually mark specific balls or pieces of equipment. For example, cameras may allow recognition of different colors of tennis balls or other collected objects, or specific branding logos or other identifying marks. In this manner, the ball collection robot 600 may accurately sort and store equipment based on this data.
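The identification-based sorting described above can be sketched as a simple grouping step. This is a minimal illustration only; the field names `rfid` and `color`, and the dict-based object representation, are assumptions for the sketch and do not come from the disclosure:

```python
from collections import defaultdict

def sort_collected_objects(objects, default_bin="unsorted"):
    """Group collected objects into storage bins by their identifying mark.

    Each object is a dict that may carry an 'rfid' tag (e.g., from an RFID
    reader) or a detected 'color' (e.g., from a camera). Objects with no
    recognizable mark fall into a default bin.
    """
    bins = defaultdict(list)
    for obj in objects:
        # Prefer an explicit RFID tag, then a detected color, then the default.
        key = obj.get("rfid") or obj.get("color") or default_bin
        bins[key].append(obj)
    return dict(bins)
```

In practice the keys would map to physical ball baskets, so that balls branded for one drill or owner are deposited separately from the rest.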
[0075] The ball collection robot 600 may in some embodiments be configured as illustrated with respect to
[0076] As illustrated in
[0077] As indicated in
[0078] In one embodiment, as shown in
[0082] According to some examples, the method includes executing an initialization mode by a robot, including mapping an environment with a court by identifying the court, court boundary lines, at least one player on the court, and ball baskets, and localizing the robot within the environment at block 902. According to some examples, the method includes receiving an operating state from at least one of a user and a timing setting at block 904.
[0083] If the operating state is pick up ball mode at decision block 906, the routine continues to decision block 908. Otherwise, the routine 900 proceeds to additional state routines.
[0084] If the pickup threshold is determined to have not been reached at decision block 908, the routine 900 proceeds to block 910. If the pickup threshold has been reached, the routine 900 proceeds to block 924.
[0085] According to some examples, the method includes determining a pickup strategy for picking up the balls at block 910. According to some examples, the method includes executing the pickup strategy at block 912. The pickup strategy may be performed according to the subroutine beginning at subroutine block 914.
[0086] According to some examples, the method includes navigating the environment while following ball pickup area rules at subroutine block 914. According to some examples, the method includes extending the pusher pads out and forward with respect to the pusher pad arms and raising the pusher pads to a grabbing height at subroutine block 916.
[0087] According to some examples, the method includes approaching a target ball, coming to a stop when the target ball is positioned between the pusher pads at subroutine block 918. According to some examples, the method includes pushing the target ball with the pusher pads onto the scoop to hold the target ball in the scoop at subroutine block 920. According to some examples, the method includes raising at least one of the scoop and the pusher pads, holding the target ball, to a carrying position at subroutine block 922.
[0088] According to some examples, the method includes executing a post pickup mode at block 924. The post pickup mode may be performed according to the subroutine beginning at subroutine block 926. According to some examples, the method includes navigating to a post pickup location at subroutine block 926. According to some examples, the method includes transferring the balls into the ball baskets and/or bringing the balls to the at least one player at subroutine block 928. At this point, the routine 900 may be repeated in whole or part, or additional state routines may be performed.
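The blocks of routine 900 can be sketched as a control loop. The `robot` interface and its method names below are hypothetical stand-ins for the numbered blocks, not an implementation taken from the disclosure:

```python
def pick_up_ball_routine(robot):
    """Sketch of routine 900: initialization, pickup loop, post-pickup.

    Each call corresponds to a block in the flowchart described above;
    all method names on `robot` are illustrative assumptions.
    """
    robot.map_environment()        # block 902: court, lines, players, baskets
    robot.localize()               # block 902: localize within the environment
    state = robot.receive_operating_state()     # block 904
    if state != "pick_up_ball":                 # decision block 906
        return "other_state_routine"
    while not robot.pickup_threshold_reached():  # decision block 908
        robot.determine_pickup_strategy()        # block 910
        # Pickup strategy, subroutine blocks 914-922:
        robot.navigate(follow_area_rules=True)   # block 914
        robot.extend_and_raise_pusher_pads()     # block 916
        robot.approach_target_ball()             # block 918
        robot.push_ball_onto_scoop()             # block 920
        robot.raise_to_carrying_position()       # block 922
    robot.navigate_to_post_pickup_location()     # blocks 924/926
    robot.deliver_balls()                        # block 928
    return "post_pickup_complete"
```

Note that the threshold check both gates entry to the loop and terminates it, matching the two appearances of decision block 908 in the routine.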
[0090] The ball collection robot 600 may explore, map, and operate within the features and landmarks of the tennis court environment 1000. These may include ball baskets 1800, ball launchers, tennis balls 1602 lying on the ground in a number of locations as shown, other ball collection robots 600, and human players 1008 and other personnel. Players 1008 or other personnel, such as ball boys and ball girls, coaches, instructors, spectators, etc., may move about within the tennis court environment 1000, and may present particular challenges to conventional automated ball retrieval systems. The tennis court environment 1000 may also include a base station 1010 at which the ball collection robot 600 may dock for charging.
[0091] While a tennis court environment 1000 shows one environment the disclosed solution may operate in and the attributes that may be expected in such an environment, this specific environment is illustrated for exemplary purposes, and is not intended to limit the operation of the disclosed solution to environments configured for the sport of tennis. One of ordinary skill in the art may readily apprehend how similar gameplay environments such as basketball courts, racquetball courts, football and soccer fields, golf courses, driving ranges, etc., may be mapped and operated within by the robots disclosed herein.
[0093] The ball collection robot 600 may perform some or all states autonomously, based on preconfigured algorithms, which may include machine learning to refine the efficiency and efficacy of the robot's operations. The ball collection robot 600 may also be configured to transition among the ball collection robot operating states 1100 through set up or real-time control using a user interface, such as the user interface 2300 configured on a mobile device as illustrated in
[0094] In the charging mode/sleep mode 1102 state, the ball collection robot 600 may be in a low power or sleep mode to conserve battery power. The ball collection robot 600 may enter this state when it is docked and charging in the charging mode. While these modes may often be entered at the same time, i.e., when the ball collection robot 600 is charging at its base station, in some embodiments they may be two separate states/modes, such that the ball collection robot 600 may remain in a normal power mode while charging, and may enter a low power or sleep mode to conserve energy while away from the charging or base station.
[0095] When a user turns the ball collection robot 600 on, or the robot is automatically activated based on programmatic or environmental conditions, the ball collection robot 600 may enter the initialization mode 1104 state. During initialization mode 1104, the ball collection robot 600 turns on and begins mapping its environment, such as the tennis court environment 1000 illustrated in
[0096] Once initial mapping is complete, the ball collection robot 600 may enter a ready mode 1106 state. In the ready mode 1106 state, the ball collection robot 600 is initialized and may remain stationary while waiting for a command from a player or other personnel. The ball collection robot 600 may continue to map features of its environment. This may include tracking the changing positions of players and other persons and balls it may later retrieve.
[0097] In one embodiment, the ball collection robot 600 may also detect and track aspects of gameplay, such as points scored, stage of the game, such as sets and matches in tennis or periods or quarters in timed sports. In one embodiment, the ball collection robot 600, the ball launcher 2200, or other apparatus that includes cameras or other sensors and controllers described herein, may detect conditions such as a ball or player out of bounds, serving faults, offsides, etc. In this manner, the ball collection robot 600 may be able to determine without additional intervention when a game is concluded, and may recognize that it may begin collecting balls.
[0098] When commanded by a user, through either a user interface 2300 such as that illustrated in
[0099] Based on a manual activation by a user or a conditional activation based on a programmatic or environmental condition detected, and provided the scoop is determined to not be full, the ball collection robot 600 may enter a pick up ball mode 1110 state. In this state, the ball collection robot 600 may navigate its environment and pick balls up off the ground while following ball pickup area rules. These rules may be set forth in preconfigured algorithms, and may depend on characteristics of the environment and gameplay of a particular sport. For example, a ball collection robot 600 configured to pick up tennis balls may be given a sideline rule for its pickup area. The ball collection robot 600 may then navigate and operate within the sideline area, outside of the tennis court bounds.
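A sideline-style pickup area rule can be sketched geometrically: a position is legal when it lies inside the mapped environment but outside the court bounds. The axis-aligned rectangle representation below is an assumption for illustration; a mapped court could use any polygon:

```python
def in_pickup_area(x, y, court, environment):
    """Return True if (x, y) is a legal pickup position under a sideline
    rule: inside the mapped environment, but outside the court bounds.

    `court` and `environment` are (xmin, ymin, xmax, ymax) rectangles;
    this representation is an illustrative assumption.
    """
    def inside(px, py, rect):
        xmin, ymin, xmax, ymax = rect
        return xmin <= px <= xmax and ymin <= py <= ymax

    return inside(x, y, environment) and not inside(x, y, court)
```

A path planner would consult such a predicate at each candidate waypoint so the robot never crosses into active gameplay.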
[0100] When a pickup threshold is reached, the ball collection robot 600 may transition to a post pickup mode 1112 state. A pickup threshold may be met when the ball collection robot 600 determines, using cameras, weight measurements, or other sensor data, that its scoop is full. Alternatively, the pickup threshold may be met when the ball collection robot 600 detects no additional objects needing retrieval in its environment, or within the bounds it may operate in based on the pickup area rules. In the post pickup mode 1112 state, the ball collection robot 600 may navigate to a location where it is configured to bring balls after pickup, such as a ball basket or ball launcher, or a player, coach, or other personnel. The ball collection robot 600 may then perform a drop operation to deposit the contents of its scoop into the desired receptacle, or, if configured to go to a person, may remain in place until it determines that its scoop is empty.
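The pickup threshold described above combines several conditions: scoop full by count, scoop full by weight (when weight sensing is available), or no retrievable balls remaining. A minimal sketch, with parameter names that are illustrative assumptions:

```python
def pickup_threshold_reached(ball_count, capacity, remaining_balls_visible,
                             scoop_weight_kg=None, max_weight_kg=None):
    """Decide whether the robot should stop picking up balls.

    Met when the scoop is full (by count or, optionally, by weight) or
    when no balls needing retrieval remain in the permitted pickup area.
    """
    if ball_count >= capacity:
        return True
    # Weight-based check is optional; skip it when no weight sensor exists.
    if scoop_weight_kg is not None and max_weight_kg is not None:
        if scoop_weight_kg >= max_weight_kg:
            return True
    # Nothing left to retrieve also satisfies the threshold.
    return remaining_balls_visible == 0
```

Either outcome transitions the robot into the post pickup mode 1112 state.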
[0101] Once the scoop is determined to be empty, the ball collection robot 600 may enter a go to standby location mode 1114 state. The ball collection robot 600 may in this state navigate to a preconfigured standby location, such as the sidelines, out of the way of gameplay and associated foot traffic. Once the sideline location is reached, the ball collection robot 600 may transition back to the ready mode 1106 state.
[0102] When manually activated by a user, the ball collection robot 600 may enter a follow person mode 1116 state. In this state, the ball collection robot 600 may navigate to within a predetermined distance of a particular person. For example, the ball collection robot 600 may travel to a target person, stopping at a distance of two meters from that person. The ball collection robot 600 may then pause its movement until or unless the person moves away from the robot. The robot may follow a moving person. In one embodiment, heuristics may be used to determine a distance which the target person may need to move before the robot follows, so that the robot does not expend unnecessary power tracking minor motions made by the target person. The ball collection robot 600 may remain in the follow person mode 1116 state until the state is deactivated by a user, when the ball collection robot 600 may return to the ready mode 1106 state. In one embodiment, the ball collection robot 600 may be capable of exiting the follow person mode 1116 state without manual deactivation. For example, the ball collection robot 600 may exit this state when it detects that it is running low on power, and may transition through the states needed to return to its docking station. (This may be true for any state; the ball collection robot 600 may be programmed to automatically transition through states to return to its docking station based on power level, time intervals without state change, or other programmatic or environmental conditions.)
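The operating-state transitions described above (pick up ball, post pickup, go to standby, follow person, and the low-power override that may apply in any state) may be sketched as a simple table-driven state machine. This is an illustrative sketch only; the class, state, and event names are assumptions, not part of the disclosure.

```python
# Illustrative sketch of the operating-state transitions; class, state,
# and event names are assumptions, not part of the disclosure.

class BallRobotStateMachine:
    """Table-driven state machine for the pickup/standby/follow cycle."""

    TRANSITIONS = {
        ("ready", "activate_pickup"): "pick_up_ball",
        ("pick_up_ball", "threshold_reached"): "post_pickup",
        ("post_pickup", "scoop_empty"): "go_to_standby",
        ("go_to_standby", "standby_reached"): "ready",
        ("ready", "follow_request"): "follow_person",
        ("follow_person", "deactivate"): "ready",
    }

    def __init__(self):
        self.state = "ready"

    def handle(self, event):
        # Low power overrides any state: return to the docking station.
        if event == "low_power":
            self.state = "return_to_dock"
        else:
            self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

sm = BallRobotStateMachine()
sm.handle("activate_pickup")
sm.handle("threshold_reached")
sm.handle("scoop_empty")
print(sm.state)  # go_to_standby
```

Checking the low-power event before the transition table mirrors the note that the robot may automatically transition toward its docking station from any state.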
[0103] Upon user request, the ball collection robot 600 may enter the carry basket mode 1118 state. In this state, the ball collection robot 600 may navigate to a ball basket, ball launcher, or other similar equipment. The ball collection robot 600 may pick this apparatus up with its scoop so that it is ready to be moved to a new location. Once the ball basket is picked up and ready for transport, the ball collection robot 600 may return to its ready mode 1106 state.
[0104] When commanded by a user, the ball collection robot 600 may enter a place basket mode 1120 state, in which the user requests that the basket be placed at the current location of the ball collection robot 600. The ball collection robot 600 lowers its scoop and deposits the basket at that location. In one embodiment, the user may command the ball collection robot 600 to enter its carry basket mode 1118 state, then its follow person mode 1116 state, in which the user is the target person. Once the user has moved to a desired location, followed by the robot, the user may request the place basket mode 1120 state. In one embodiment, the ball collection robot 600 may be preconfigured with appropriate locations for ball baskets and ball launchers, and may be able to transition from the carry basket mode 1118 state to the place basket mode 1120 state without additional commands by the user. For example, the ball collection robot 600 may be configured with a practice setup routine in which it prepares a court for practice by locating a ball launcher and placing it in a desired location if it is not already at that location.
[0105]
[0106] As illustrated in
[0107] As shown in
[0108] While the robot shown in
[0109]
[0110] As illustrated in
[0111] As shown in
[0112] While the robot shown in
[0113]
[0114] During the approach step 1402, the ball collection robot 600 may move toward 1408 the target balls 1418 or other objects for pickup with the pusher pads spread wide enough 1410 to encompass a group of tennis balls or other objects to be picked up using the ball trapping maneuver 1400. In the caging step 1404, the ball collection robot 600 may then close one pusher pad 1412 slightly ahead of the other pusher pad 1414, such that the second pad may trap target balls 1418 or objects that may tend to roll or slide away from the pressure of the first pusher pad.
[0115] Finally, during the securing step 1406, the ball collection robot 600 may eventually close off 1416 the front of the scoop with both pusher pads, trapping the target balls 1418 or other objects within the basket. In one embodiment, the pusher pads may be configured to continue rotating inward in order to press the target balls 1418 captured against the back of the scoop, preventing them from rolling within or becoming dislodged from the scoop during transport, until the ball collection robot 600 acts to deposit the balls or objects at a post pickup location.
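The three-step trapping maneuver (approach 1402, caging 1404, securing 1406) may be sketched as a sequence of pusher pad angle targets. The angle values and the generator name here are illustrative assumptions.

```python
# Illustrative sketch of the trapping maneuver; angle values and the
# generator name are assumptions (0 degrees = pads fully closed).

def ball_trapping_sequence(spread_deg=80.0, stagger_deg=15.0):
    """Yield (left_pad_deg, right_pad_deg) targets for the three steps."""
    # Approach 1402: pads spread wide enough to encompass the ball group.
    yield (spread_deg, spread_deg)
    # Caging 1404: one pad closes slightly ahead of the other so the
    # trailing pad traps balls rolling away from the leading pad.
    yield (spread_deg - stagger_deg, spread_deg)
    # Securing 1406: both pads close off the front of the scoop.
    yield (0.0, 0.0)

steps = list(ball_trapping_sequence())
print(steps)  # [(80.0, 80.0), (65.0, 80.0), (0.0, 0.0)]
```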
[0116] In one embodiment, the pusher pad arms 118 may include linear actuators, as is shown for the scoop arm 112 in
[0117]
[0118] In step 1502, the scoop may be positioned near the ground and tilted slightly back. In this position, the balls in the scoop may be prevented from rolling forward, and balls on the ground may be prevented from rolling under the scoop. The robot may approach additional balls to be picked up with its pusher pads spread open, as described with respect to
[0119] In step 1504, the robot may drive forward until the additional balls are against the edge of the scoop. In step 1506, the robot may begin closing its pusher pads which may hold the balls against the scoop edge, and may prevent the balls from rolling away.
[0120] In step 1508, the robot may lower the scoop to be flat against the ground. The robot may drive backwards slightly (e.g., 1-2 cm) while lowering the scoop to prevent the balls at the scoop edge from catching. The robot may continue closing its pusher pads in a caging maneuver such as was described with respect to
[0121] Finally, in step 1510, the additional balls are captured in the scoop. The robot may hold them in place with the pusher pads. The robot may also return the scoop to the position near the ground and slightly tilted back in which it began at step 1502, again preventing any of the balls in the scoop from rolling out.
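Steps 1502 through 1510 may be sketched as an ordered command sequence. The robot interface, the method names, and the small backward offset are hypothetical stand-ins; the recording stub merely verifies the ordering.

```python
# Procedural sketch of steps 1502-1510; the robot interface, method
# names, and backward offset are hypothetical stand-ins.

def pick_up_with_partial_scoop(robot):
    robot.tilt_scoop_back_near_ground()    # step 1502: held balls stay put
    robot.open_pusher_pads()
    robot.drive_forward_until_contact()    # step 1504: balls at scoop edge
    robot.close_pads_against_scoop_edge()  # step 1506: hold balls in place
    robot.drive_backward(distance_cm=1.5)  # step 1508: avoid catching balls
    robot.lower_scoop_flat()
    robot.close_pads_caging()              # caging maneuver
    robot.tilt_scoop_back_near_ground()    # step 1510: secure for transport

class RecordingRobot:
    """Test stub that records each commanded action in order."""
    def __init__(self):
        self.log = []
    def __getattr__(self, name):
        return lambda **kwargs: self.log.append(name)

robot = RecordingRobot()
pick_up_with_partial_scoop(robot)
print(robot.log[0], robot.log[-1])
```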
[0122]
[0123] The scoop 110 may be rotated vertically with respect to the scoop arm 112 through the action of its third motor 606. As previously described, it may be moved away from or toward the chassis 102 through the action of a linear actuator 608 configured with the scoop arm 112. The scoop 110 may also be raised and lowered by the rotation of the scoop arm 112, actuated by the second motor 604.
[0124]
[0125] Configured thusly, the ball collection robot 600 may perform a forward or front dump of the tennis balls 1602 into a ball basket 1604 as shown. The ball collection robot 600 may approach the ball basket 1604 such that the ball basket 1604 is in front 202 of the ball collection robot 600. The ball collection robot 600 may move a front edge of the scoop 1612 over a wall of the basket 1610. The ball collection robot 600 may then rotate the scoop 110 to a downward position 1614 until all of the tennis balls 1602 have fallen out of the scoop 110 and been deposited in the ball basket 1604, accomplishing the forward dump 1616.
[0126]
[0127] Once the scoop 110 is seated within the slot of the ball basket 1604, the ball collection robot 600 may raise the ball basket 1604 to a carrying position 1706, and may navigate 1708 to a trailer 1710, as shown in
[0128] When in position, with the trailer 1710 to the front of the ball collection robot 600, the ball collection robot 600 may lower its scoop 110, thereby lowering 1716 the ball basket 1604 onto the trailer 1710. The ball collection robot 600 may then back up 1718, withdrawing the scoop 110 from the slot 1608 of the ball basket 1604 as indicated in
[0129] Once the ball basket 1604 is deposited on the trailer 1710, the ball collection robot 600 may navigate around 1720 to a position with the trailer 1710 to the rear 214 of the ball collection robot 600, the trailer coupler 1714 on the side of the trailer 1710 facing the ball collection robot 600. The ball collection robot 600 may then back up 1722 until the trailer coupler 1714 engages with a feature of the ball collection robot 600, thus securely coupling the trailer 1710 to the ball collection robot 600, as shown in
[0130] The trailer coupler 1714 is illustrated as a feature of the trailer 1710 for simplicity, and is not intended to be limited to such. It is well understood by those of skill in the art that the trailer coupler 1714 may comprise any number of configurations, including magnetic coupling, mechanical coupling, etc., which may be designed as a pairing of physical features, one feature on the ball collection robot 600 and one on the trailer 1710, the two configured to engage with and securely attach to each other.
[0131] With the ball basket 1604 residing on the trailer 1710 and the trailer 1710 coupled to the ball collection robot 600, the ball collection robot 600 may proceed to capture and carry target objects such as tennis balls 1602 in its scoop 110 as disclosed elsewhere herein, while towing the trailer 1710 behind itself as it navigates and retrieves objects, as shown in
[0132] When the scoop 110 no longer has the capacity to collect more objects, the ball collection robot 600 may raise the scoop 110 along a path that is an arc 1724 from the front 202 of the ball collection robot 600, over the chassis 102 of the ball collection robot 600 toward the rear 214 of the ball collection robot 600. The ball collection robot 600 may maintain its scoop 110 in this raised position until all of the tennis balls 1602 or other objects have fallen from the scoop 110 into the ball basket 1604 residing on the trailer 1710 to the rear 214 of the ball collection robot 600, completing the rear dump 1726. In this manner, by carrying a ball basket 1604 on a trailer 1710 behind itself, into which it may empty its scoop 110 as needed by performing rear dumps 1726 as shown in
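The rear dump 1726 sweeps the scoop along an arc from the front 202, over the chassis 102, to the rear 214. A minimal sketch of such an arc as evenly spaced scoop-arm angle targets follows; the angle range and step count are assumptions.

```python
# Minimal sketch of the rear-dump arc; angle range and step count are
# assumptions (0 deg = scoop at the front, 180 deg = over the rear).

def rear_dump_arc(steps=5, start_deg=0.0, end_deg=180.0):
    """Return scoop-arm angles sweeping from front, over chassis, to rear."""
    return [start_deg + (end_deg - start_deg) * i / (steps - 1)
            for i in range(steps)]

angles = rear_dump_arc()
print(angles)  # [0.0, 45.0, 90.0, 135.0, 180.0]
```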
[0133]
[0134]
[0135]
[0136]
[0137] In one embodiment, the legs 1906 may provide enough clearance such that slots 1904 are not needed, the scoop is able to pass below the ball storage 1902 and between the legs 1906, and the ball basket 1900 may simply rest atop the outer sides of the scoop when lifted.
[0138]
[0139] In the passive instance shown in
[0140]
[0141] In the ball basket with actively extendable legs 2100 shown in
[0142] A controller 2106 may be provided to control the linear actuators 2102. In one embodiment, the controller 2106 may comprise some or all of the features of the robotic control system 2500 previously illustrated and may communicate wirelessly with a robot, a mobile device, or other computing device. When the robot's scoop is not present (such as after the ball basket is placed on the ground), the linear actuators may automatically extend so that the ball basket is at a height where it may be easily reached by players.
[0143] In one embodiment, the ball basket with actively extendable legs 2100 and robot may be connected via a wireless protocol such as Bluetooth so that the robot may send commands to the ball basket to raise or lower on demand. This may be useful in situations where the robot cannot reach high enough to drop balls into the ball basket. In such a circumstance the robot may approach the ball basket, send a wireless command for the ball basket to lower itself, drop balls into the ball basket, and send a wireless command for the ball basket to raise itself.
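The raise/lower command exchange may be sketched as follows. The command strings, heights, and controller class are illustrative assumptions; the disclosure specifies only that a wireless protocol such as Bluetooth may carry the commands.

```python
# Sketch of the robot-to-basket command exchange; command strings,
# heights, and the controller class are illustrative assumptions.

class ExtendableLegBasket:
    """Basket controller that raises or lowers on received commands."""

    def __init__(self, lowered_cm=30, raised_cm=90):
        self._lowered, self._raised = lowered_cm, raised_cm
        self.height_cm = raised_cm  # player-reachable height by default

    def on_command(self, command):
        if command == "lower":
            self.height_cm = self._lowered
        elif command == "raise":
            self.height_cm = self._raised
        return self.height_cm

basket = ExtendableLegBasket()
basket.on_command("lower")  # robot cannot reach high enough: lower first
# ... robot drops balls into the basket ...
basket.on_command("raise")  # back to player-reachable height
print(basket.height_cm)  # 90
```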
[0144] In one embodiment, the controller 2106 for the ball basket with actively extendable legs 2100 may further include sensors and control logic capable of recognizing radio frequency identification (RFID) tagging or other similar configurations used to individually mark specific balls or pieces of equipment. For example, a camera may be placed allowing recognition of different colors of tennis balls or other collected objects, or specific branding logos or other identifying marks. In this manner, the ball basket with actively extendable legs 2100 may support the ball collection robot 600 to accurately sort and store equipment based on this data.
[0145] The active extension is shown here with telescoping legs, but other ways of extending and retracting these legs, such as folding the leg ends up toward or down away from the ball storage area, folding the legs in a concertina action, etc., may be readily apprehended by one of ordinary skill in the art, and may be powered by actuators other than the simple linear actuators 2102 shown.
[0146]
[0147] Similar to the ball baskets described with respect to
[0148] An alternative to having the robot drop balls it picks up into a ball basket may be to have it drop balls into a ball launcher 2200. The ball launcher 2200 may have ball storage 2202 at the top, and may also have a ball launching mechanism able to launch balls on demand toward players whenever needed. This mechanism may involve pistons, linear actuators, or, as illustrated, compressed air. The linear actuator 2206 may open to allow a ball to drop out from ball storage 2202 to an area accessible to an outlet 2218. An air tank 2212 may be filled with pressurized air 2216 using a compressor 2210. A valve 2214 may prevent air 2216 from escaping until desired.
[0149] When the ball sensor 2208 detects a ball is in an appropriate position, the valve 2214 may be opened, and a burst of pressurized air 2216 may impel a ball out of the outlet 2218.
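The launch cycle (release a ball from storage, wait for the ball sensor 2208, then open the valve 2214 to fire) may be sketched as below; the component classes and the timeout are stand-ins for the hardware described.

```python
# Sketch of one launch cycle: release a ball, wait for the ball sensor,
# open the valve to fire. Component classes are hardware stand-ins.

def launch_one_ball(gate, sensor, valve, timeout_steps=10):
    """Return True if a ball was detected and launched within the timeout."""
    gate.open()  # linear actuator releases one ball from storage
    for _ in range(timeout_steps):
        if sensor.ball_in_position():
            valve.open_burst()  # pressurized air impels the ball out
            return True
    return False

class Gate:
    def open(self):
        pass

class BallSensor:
    def __init__(self, ready_after=2):
        self._polls, self._ready_after = 0, ready_after
    def ball_in_position(self):
        self._polls += 1
        return self._polls >= self._ready_after

class Valve:
    def __init__(self):
        self.fired = False
    def open_burst(self):
        self.fired = True

valve = Valve()
launched = launch_one_ball(Gate(), BallSensor(), valve)
print(launched, valve.fired)  # True True
```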
[0150] These components may be controlled by a controller 2222, which may include some or all of the components of the robotic control system 2500 illustrated with respect to
[0151] In one embodiment, the controller 2222 may further include sensors and control logic capable of recognizing radio frequency identification (RFID) tagging or other similar configurations used to individually mark specific balls or pieces of equipment. For example, a camera may be placed allowing recognition of different colors of tennis balls or other collected objects, or specific branding logos or other identifying marks. In this manner, the ball launcher 2200 may support the ball collection robot 600 to accurately sort and store equipment based on this data.
[0152] In some variations, as shown in
[0153]
[0154] A user may employ this user interface 2300 to create routines for the ball collection robot 600 to follow, to manually request that the ball collection robot 600 transition through the ball collection robot operating states 1100 described with respect to
[0155]
[0156] While the ball collection robot 600 is performing actions in its initialization mode 1104 state, a display such as that shown in
[0157]
[0158] Auto-pickup setting controls 2312 may include controls that display the present settings and provide access to menus to change settings. Settings may include a standby location (see
[0159] Action controls 2314 may include an Auto Pickup On/Off control that allows a user to instruct the robot to automatically pick up balls or other equipment when certain conditions are met. A Stop Activity control may allow a user to stop all of the robot's current activities, including disabling auto pickup. A Start Pickup Balls control may allow a user to request the robot to immediately enter a pick up balls state. A Go To Location control may allow a user to instruct the robot to go to a specified location and wait there (see
[0160] The court map 2316 displayed in
[0161] In addition to court bounds and surrounding areas, obstacles, objects for pickup, post pickup locations, predetermined standby locations, and charging stations may also be mapped and displayed in the court map 2316. General locations for known or detected personnel may be displayed, and personnel may be recognized and marked with a preconfigured designator, or may be generically marked as indicated using, for example, A, B, and C for players and BBG for ball boys and ball girls. In one embodiment, all personnel detected may be marked uniquely for ease of reference regardless of which court is currently selected for operation. In one embodiment, players and personnel within the currently-selected court and no others may be tagged for interaction, or players may be tagged by court rather than using completely unique tags.
[0162] The court map 2316 may also display other robots, ball baskets, ball launchers, designated equipment cabinets, equipment sheds, and all other information for static and mobile objects contained in the maps generated by, provided to, and used by the ball collection robot 600 to perform the operations disclosed herein.
[0163]
[0164]
[0165]
[0166]
[0167]
[0168] Similar to the options shown for standby location in
[0169]
[0170] In one embodiment, this screen may be shown when the robot has detected a low power state and is automatically returning to its charging dock. In one embodiment, a low-power, go to dock operation may not be cancelled, and the cancel button 2338 may be omitted from the screen. This screen may then be replaced with the screen of
[0171] The controls, settings, and options illustrated in
[0172]
[0173]
[0174]
[0175]
[0176]
[0177] Input devices 2504 (e.g., of a robot or companion device such as a mobile phone or personal computer) comprise transducers that convert physical phenomena into machine internal signals, typically electrical, optical, or magnetic signals. Signals may also be wireless in the form of electromagnetic radiation in the radio frequency (RF) range but also potentially in the infrared or optical range. Examples of input devices 2504 are contact sensors which respond to touch or physical pressure from an object or proximity of an object to a surface, mice which respond to motion through space or across a plane, microphones which convert vibrations in the medium (typically air) into device signals, and scanners which convert optical patterns on two- or three-dimensional objects into device signals. The signals from the input devices 2504 are provided via various machine signal conductors (e.g., busses or network interfaces) and circuits to memory 2506.
[0178] The memory 2506 is typically what is known as a first- or second-level memory device, providing for storage (via configuration of matter or states of matter) of signals received from the input devices 2504, instructions and information for controlling operation of the central processing unit or CPU 2502, and signals from storage devices 2510. The memory 2506 and/or the storage devices 2510 may store computer-executable instructions, thus forming logic 2514 that, when applied to and executed by the CPU 2502, implements embodiments of the processes disclosed herein. Logic 2514 may include portions of a computer program, along with configuration data, that are run by the CPU 2502 or another processor. Logic 2514 may include one or more machine learning models 2516 used to perform the disclosed actions. In one embodiment, portions of the logic 2514 may also reside on a mobile or desktop computing device accessible by a user to facilitate direct user control of the robot.
[0179] Information stored in the memory 2506 is typically directly accessible to the CPU 2502 of the device. Signals input to the device cause the reconfiguration of the internal material/energy state of the memory 2506, creating in essence a new machine configuration, influencing the behavior of the robotic control system 2500 by configuring the CPU 2502 with control signals (instructions) and data provided in conjunction with the control signals.
[0180] Second- or third-level storage devices 2510 may provide a slower but higher capacity machine memory capability. Examples of storage devices 2510 are hard disks, optical disks, large-capacity flash memories or other non-volatile memory technologies, and magnetic memories.
[0181] In one embodiment, memory 2506 may include virtual storage accessible through a connection with a cloud server using the network interface 2512, as described below. In such embodiments, some or all of the logic 2514 may be stored and processed remotely.
[0182] The CPU 2502 may cause the configuration of the memory 2506 to be altered by signals in storage devices 2510. In other words, the CPU 2502 may cause data and instructions to be read from storage devices 2510 into the memory 2506, which may then influence the operations of CPU 2502 as instructions and data signals, and which may also be provided to the output devices 2508. The CPU 2502 may alter the content of the memory 2506 by signaling to a machine interface of memory 2506 to alter its internal configuration, and may then convey signals to the storage devices 2510 to alter their internal material configuration. In other words, data and instructions may be backed up from memory 2506, which is often volatile, to storage devices 2510, which are often non-volatile.
[0183] Output devices 2508 are transducers that convert signals received from the memory 2506 into physical phenomena such as vibrations in the air, patterns of light on a machine display, vibrations (i.e., haptic devices), or patterns of ink or other materials (i.e., printers and 3-D printers).
[0184] The network interface 2512 receives signals from the memory 2506 and converts them into electrical, optical, or wireless signals to other machines, typically via a machine network. The network interface 2512 also receives signals from the machine network and converts them into electrical, optical, or wireless signals to the memory 2506. The network interface 2512 may allow a robot to communicate with a cloud server, a mobile device, other robots, and other network-enabled devices.
[0185] In one embodiment, a global database 2518 may provide data storage available across the devices that comprise or are supported by the robotic control system 2500. The global database 2518 may include maps, robotic instruction algorithms, robot state information, static, movable, and tidyable object reidentification fingerprints, labels and other data associated with known static, movable, and tidyable objects, or other data supporting the implementation of the disclosed solution. The term "tidyable object" in this disclosure refers to elements of the scene that may be moved by the robot and put away in a home location. These objects may be of a type and size such that the robot may autonomously put them away, such as toys, clothing, books, stuffed animals, soccer balls, garbage, remote controls, keys, cellphones, etc. The global database 2518 may be a single data structure or may be distributed across more than one data structure and storage platform, as may best suit an implementation of the disclosed solution. In one embodiment, the global database 2518 is coupled to other components of the robotic control system 2500 through a wired or wireless network, and in communication with the network interface 2512.
[0186] In one embodiment, a robot instruction database 2520 may provide data storage available across the devices that comprise or are supported by the robotic control system 2500. The robot instruction database 2520 may include the programmatic routines that direct specific actuators of the ball collection robot, such as are described with respect to
[0187]
[0188] The robot as previously described includes a sensing system 106. This sensing system 106 may include at least one of cameras 124, IMU sensors 132, lidar sensor 130, odometry 2604, and actuator force feedback sensor 2606. These sensors may capture data describing the environment 2602 around the robot 100.
[0189] Image data 2608 from the cameras 124 may be used for object detection and classification 2610. Object detection and classification 2610 may be performed by algorithms and models configured within the robotic control system 2500 of the robot 100. In this manner, the characteristics and types of objects in the environment 2602 may be determined.
[0190] Image data 2608, object detection and classification 2610 data, and other sensor data 2612 may be used for a global/local map update 2614. The global and/or local map may be stored by the robot 100 and may represent its knowledge of the dimensions and objects within its decluttering environment 2602. This map may be used in navigation and strategy determination associated with decluttering tasks.
[0191] The robot may use a combination of camera 124, lidar sensor 130 and the other sensors to maintain a global or local area map of the environment and to localize itself within that. Additionally, the robot may perform object detection and object classification and may generate visual re-identification fingerprints for each object. The robot may utilize stereo cameras along with a machine learning/neural network software architecture (e.g., semi-supervised or supervised convolutional neural network) to efficiently classify the type, size and location of different objects on a map of the environment.
[0192] The robot may determine the relative distance and angle to each object. The distance and angle may then be used to localize objects on the global or local area map. The robot may utilize both forward and backward facing cameras to scan both to the front and to the rear of the robot.
[0193] Image data 2608, object detection and classification 2610 data, other sensor data 2612, and global/local map update 2614 data may be stored as observations, current robot state, current object state, and sensor data 2616. The observations, current robot state, current object state, and sensor data 2616 may be used by the robotic control system 2500 of the robot in determining navigation paths and task strategies.
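The flow from object detections to a global/local map update 2614 may be sketched as follows. The map representation and the reduction of relative distance and angle to a local (dx, dy) offset are simplifying assumptions.

```python
# Sketch of the detection-to-map-update flow; the map structure and the
# reduction of distance/angle to local offsets are simplifying assumptions.

def update_local_map(local_map, detections, robot_pose):
    """Place each detected object on the map in world coordinates."""
    rx, ry = robot_pose
    for label, (dx, dy) in detections:
        # dx, dy: object offset relative to the robot, derived from the
        # estimated distance and angle to the object.
        local_map.setdefault(label, []).append((rx + dx, ry + dy))
    return local_map

detections = [("tennis_ball", (1.0, 0.5)), ("ball_basket", (3.0, -2.0))]
m = update_local_map({}, detections, robot_pose=(4.0, 4.0))
print(m["tennis_ball"])  # [(5.0, 4.5)]
```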
[0194]
[0195] The sequence begins with the robot sleeping (sleep state 2802) and charging at the base station (block 2702). The robot is activated, e.g., on a schedule, and enters an exploration mode (environment exploration state 2804, activation action 2806, and schedule start time 2808). In the environment exploration state 2804, the robot scans the environment using cameras (and other sensors) to update its environmental map and localize its own position on the map (block 2704, explore for configured interval 2810). The robot may transition from the environment exploration state 2804 back to the sleep state 2802 on condition that there are no more objects to pick up 2812, or the battery is low 2814.
[0196] From the environment exploration state 2804, the robot may transition to the object organization state 2816, in which it operates to move the items on the floor to organize them by category 2818. This transition may be triggered by the robot determining that objects are too close together on the floor 2820, or determining that the path to one or more objects is obstructed 2822. If none of these triggering conditions is satisfied, the robot may transition from the environment exploration state 2804 directly to the object pick-up state 2824 on condition that the environment map comprises at least one drop-off container for a category of objects 2826, and there are unobstructed items for pickup in the category of the container 2828. Likewise the robot may transition from the object organization state 2816 to the object pick-up state 2824 under these latter conditions. The robot may transition back to the environment exploration state 2804 from the object organization state 2816 on condition that no objects are ready for pick-up 2830.
[0197] In the environment exploration state 2804 and/or the object organization state 2816, image data from cameras is processed to identify different objects (block 2706). The robot selects a specific object type/category to pick up, determines a next waypoint to navigate to, and determines a target object and location of type to pick up based on the map of environment (block 2708, block 2710, and block 2712).
[0198] In the object pick-up state 2824, the robot selects a goal location that is adjacent to the target object(s) (block 2714). It uses a path planning algorithm to navigate itself to that new location while avoiding obstacles. The robot actuates left and right pusher arms to create an opening large enough that the target object may fit through, but not so large that other unwanted objects are collected when the robot drives forwards (block 2716). The robot drives forwards so that the target object is between the left and right pusher arms, and the left and right pusher arms work together to push the target object onto the collection scoop (block 2718).
[0199] The robot may continue in the object pick-up state 2824 to identify other target objects of the selected type to pick up based on the map of environment. If other such objects are detected, the robot selects a new goal location that is adjacent to the target object. It uses a path planning algorithm to navigate itself to that new location while avoiding obstacles, while carrying the target object(s) that were previously collected. The robot actuates left and right pusher arms to create an opening large enough that the target object may fit through, but not so large that other unwanted objects are collected when the robot drives forwards. The robot drives forwards so that the next target object(s) are between the left and right pusher arms. Again, the left and right pusher arms work together to push the target object onto the collection scoop.
[0200] On condition that all identified objects in the category are picked up 2832, or if the scoop is at capacity 2834, the robot transitions to the object drop-off state 2836 and uses the map of the environment to select a goal location that is adjacent to the bin for the type of objects collected, using a path planning algorithm to navigate itself to that new location while avoiding obstacles (block 2720). The robot backs up towards the bin into a docking position where the back of the robot is aligned with the back of the bin (block 2722). The robot lifts the scoop up and backwards, rotating over a rigid arm at the back of the robot (block 2724). This lifts the target objects up above the top of the bin and dumps them into the bin.
[0201] From the object drop-off state 2836, the robot may transition back to the environment exploration state 2804 on condition that there are more items to pick up 2838, or it has an incomplete map of the environment 2840. The robot resumes exploring, and the process may be repeated (block 2726) for each other type of object in the environment having an associated collection bin.
[0202] The robot may alternatively transition from the object drop-off state 2836 to the sleep state 2802 on condition that there are no more objects to pick up 2812 or the battery is low 2814. Once the battery recharges sufficiently, or at the next activation or scheduled pick-up interval, the robot resumes exploring and the process may be repeated (block 2726) for each other type of object in the environment having an associated collection bin.
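The condition-guarded transitions among the sleep 2802, environment exploration 2804, object organization 2816, object pick-up 2824, and object drop-off 2836 states may be sketched as a single dispatch function. The shortened state names and status flags are illustrative assumptions.

```python
# Condition-guarded transition sketch for the sleep/explore/organize/
# pick-up/drop-off cycle; shortened names and flags are assumptions.

def next_state(state, status):
    if status.get("battery_low") or status.get("no_objects_left"):
        return "sleep"                       # conditions 2812, 2814
    if state == "explore":
        if status.get("objects_too_close") or status.get("path_obstructed"):
            return "organize"                # conditions 2820, 2822
        if status.get("container_mapped") and status.get("unobstructed_items"):
            return "pick_up"                 # conditions 2826, 2828
        return "explore"
    if state == "organize":
        return "pick_up" if status.get("unobstructed_items") else "explore"
    if state == "pick_up":
        if status.get("category_done") or status.get("scoop_full"):
            return "drop_off"                # conditions 2832, 2834
        return "pick_up"
    if state == "drop_off":
        return "explore"                     # conditions 2838, 2840
    return state

print(next_state("explore", {"objects_too_close": True}))  # organize
```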
[0203]
[0204] A path is formed to the starting goal location, the path comprising zero or more waypoints (block 2906). Movement feedback is provided back to the path planning algorithm. The waypoints may be selected to avoid static and/or dynamic (moving) obstacles (objects not in the target group and/or category). The robot's movement controller is engaged to follow the waypoints to the target group (block 2908). The target group is evaluated upon achieving the goal location, including additional qualifications to determine if it may be safely organized (block 2910).
[0205] The robot's perception system is engaged (block 2912) to provide image segmentation for determination of a sequence of activations generated for the robot's manipulators (e.g., arms) and positioning system (e.g., wheels) to organize the group (block 2914). The sequencing of activations is repeated until the target group is organized, or fails to organize (failure causing regression to block 2910). Engagement of the perception system may be triggered by proximity to the target group. Once the target group is organized, and on condition that there is sufficient battery life left for the robot and there are more groups in the category or categories to organize, these actions are repeated (block 2916).
[0206] In response to low battery life the robot navigates back to the docking station to charge (block 2918). However, if there is adequate battery life, and on condition that the category or categories are organized, the robot enters object pick-up mode (block 2920), and picks up one of the organized groups for return to the drop-off container. Entering pickup mode may also be conditioned on the environment map comprising at least one drop-off container for the target objects, and the existence of unobstructed objects in the target group for pick-up. On condition that no group of objects is ready for pick up, the robot continues to explore the environment (block 2922).
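The mode selection of blocks 2918 through 2922 reduces to a short decision function. The condition names below are paraphrases of the text, not an API of the disclosed system.

```python
def choose_mode(battery_low, categories_organized,
                has_dropoff_container, has_unobstructed_group):
    """Select the robot's next mode per blocks 2918-2922.

    Charging takes priority; pickup requires an organized category, a
    mapped drop-off container, and an unobstructed group; otherwise the
    robot keeps exploring.
    """
    if battery_low:
        return "charge"    # block 2918: return to docking station
    if categories_organized and has_dropoff_container and has_unobstructed_group:
        return "pickup"    # block 2920: object pick-up mode
    return "explore"       # block 2922: continue exploring
```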
[0207]
[0208] Once the adjacent location is reached, an assessment of the target object is made to determine if it may be safely manipulated (item 3010). On condition that the target object may be safely manipulated, the robot is operated to lift the object using the robot's manipulator arm, e.g., scoop (item 3012). The robot's perception module may be utilized at this time to analyze the target object and nearby objects to better control the manipulation (item 3014).
[0209] The target object, once on the scoop or other manipulator arm, is secured (item 3016). On condition that the robot does not have capacity for more objects, or it is the last object of the selected category or categories, object drop-off mode is initiated (item 3018). Otherwise, the robot may begin the process again (item 3002).
[0210]
[0211] Scale invariant keypoint or visual keypoint in this disclosure refers to a distinctive visual feature that may be maintained across different perspectives, such as photos taken from different areas. This may be an aspect within an image captured of a robot's working space that may be used to identify a feature of the area or an object within the area when this feature or object is captured in other images taken from different angles, at different scales, or using different resolutions from the original capture.
[0212] Scale invariant keypoints may be detected by a robot or an augmented reality robotic interface installed on a mobile device based on images taken by the robot's cameras or the mobile device's cameras. Scale invariant keypoints may help a robot or an augmented reality robotic interface on a mobile device to determine a geometric transform between camera frames displaying matching content. This may aid in confirming or fine-tuning an estimate of the robot's or mobile device's location within the robot's working space.
[0213] Scale invariant keypoints may be detected, transformed, and matched for use through algorithms well understood in the art, such as (but not limited to) Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), and SuperPoint.
[0214] Objects located in the robot's working space may be detected at block 3104 based on the input from the left camera and the right camera, thereby defining starting locations for the objects and classifying the objects into categories. In one embodiment, a machine learning model may be run on left and right camera frames to generate a panoptic segmentation of the scene and a depth estimation layer.
[0215] At block 3106, re-identification fingerprints may be generated for the objects, wherein the re-identification fingerprints are used to determine visual similarity of objects detected in the future with the objects. The objects detected in the future may be the same objects, redetected as part of an update or transformation of the global area map, or may be similar objects located similarly at a future time, wherein the re-identification fingerprints may be used to assist in more rapidly classifying the objects.
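Re-identification against stored fingerprints, as in block 3106, can be sketched as a nearest-match search over feature vectors. The cosine-similarity metric and the 0.9 threshold are illustrative assumptions; the disclosure does not specify the fingerprint representation.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two fingerprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def reidentify(fingerprint, known, threshold=0.9):
    """Match a newly detected object's fingerprint against stored ones.

    `known` maps a persistent unique identifier to a fingerprint vector.
    Returns the best-matching identifier above `threshold`, or None when
    the detection should be treated as a new object.
    """
    best_id, best_sim = None, threshold
    for obj_id, fp in known.items():
        s = cosine_sim(fingerprint, fp)
        if s > best_sim:
            best_id, best_sim = obj_id, s
    return best_id
```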
[0216] At block 3108, the robot may be localized within the robot's working space. Input from at least one of the left camera, the right camera, light detecting and ranging (LIDAR) sensors, and inertial measurement unit (IMU) sensors may be used to determine a robot location. The robot's working space may be mapped to create a global area map that includes the scale invariant keypoints, the objects, and the starting locations of the objects. The objects within the robot's working space may be re-identified at block 3110 based on at least one of the starting locations, the categories, and the re-identification fingerprints. Each object may be assigned a persistent unique identifier at block 3112.
[0217] At block 3114, the robot may receive a camera frame from an augmented reality robotic interface installed as an application on a mobile device operated by a user, and may update the global area map with the starting locations and scale invariant keypoints using a camera frame to global area map transform based on the camera frame. In the camera frame to global area map transform, the global area map may be searched to find a set of scale invariant keypoints that match those detected in the mobile camera frame by using a specific geometric transform. This transform may maximize the number of matching keypoints and minimize the number of non-matching keypoints while maintaining geometric consistency.
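The maximize-inliers structure of the camera frame to global area map transform can be illustrated with a deliberately simplified RANSAC-style fit. Real systems fit a full similarity or projective transform over descriptor-matched keypoints; this sketch fits only a 2D translation from hypothesized pairings, but the hypothesize-then-count-inliers loop is the same pattern.

```python
import math, random

def estimate_translation(map_pts, frame_pts, tol=0.05, trials=50, seed=0):
    """RANSAC-style fit of a translation aligning frame keypoints to map keypoints.

    `map_pts` and `frame_pts` are lists of (x, y); correspondences are
    unknown. Each trial hypothesizes a translation from one random pairing
    and counts frame points that land within `tol` of some map point; the
    hypothesis with the most inliers wins.
    """
    rng = random.Random(seed)
    best_t, best_inliers = (0.0, 0.0), -1
    for _ in range(trials):
        m = rng.choice(map_pts)
        f = rng.choice(frame_pts)
        tx, ty = m[0] - f[0], m[1] - f[1]   # hypothesis from one pairing
        inliers = sum(
            1 for fx, fy in frame_pts
            if any(math.hypot(mx - (fx + tx), my - (fy + ty)) < tol
                   for mx, my in map_pts)
        )
        if inliers > best_inliers:
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers
```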
[0218] At block 3116, user indicators may be generated for objects, wherein user indicators may include next target, target order, dangerous, too big, breakable, messy, and blocking travel path. The global area map and object details may be transmitted to the mobile device at block 3118, wherein object details may include at least one of visual snapshots, the categories, the starting locations, the persistent unique identifiers, and the user indicators of the objects. This information may be transmitted using wireless signaling such as Bluetooth or Wi-Fi, as supported by the communications 134 module introduced in
[0219] The updated global area map, the objects, the starting locations, the scale invariant keypoints, and the object details, may be displayed on the mobile device using the augmented reality robotic interface. The augmented reality robotic interface may accept user inputs to the augmented reality robotic interface, wherein the user inputs indicate object property overrides including change object type, put away next, don't put away, and modify user indicator, at block 3120. The object property overrides may be transmitted from the mobile device to the robot, and may be used at block 3122 to update the global area map, the user indicators, and the object details. Returning to block 3118, the robot may re-transmit its updated global area map to the mobile device to resynchronize this information.
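Applying the user's object property overrides at block 3122 is, at its core, a keyed record merge. The field names and the dict-of-dicts representation below are illustrative assumptions about the object detail records, not the disclosed data format.

```python
def apply_overrides(object_details, overrides):
    """Apply user property overrides (block 3122) to object records.

    `object_details` maps a persistent unique identifier to that object's
    detail record; `overrides` maps an identifier to the changed fields,
    e.g. {"ball-7": {"category": "toy", "put_away": False}}. Overrides
    for unknown identifiers are ignored.
    """
    for obj_id, changes in overrides.items():
        if obj_id in object_details:
            object_details[obj_id].update(changes)
    return object_details
```

After the merge, the robot would re-transmit the updated map and details (returning to block 3118) so the mobile device resynchronizes.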
[0220] Various functional operations described herein may be implemented in logic that is referred to using a noun or noun phrase reflecting said operation or function. For example, an association operation may be carried out by an associator or correlator. Likewise, switching may be carried out by a switch, selection by a selector, and so on. Logic refers to machine memory circuits and non-transitory machine readable media comprising machine-executable instructions (software and firmware), and/or circuitry (hardware) which by way of its material and/or material-energy configuration comprises control and/or procedural signals, and/or settings and values (such as resistance, impedance, capacitance, inductance, current/voltage ratings, etc.), that may be applied to influence the operation of a device. Magnetic media, electronic circuits, electrical and optical memory (both volatile and nonvolatile), and firmware are examples of logic. Logic specifically excludes pure signals or software per se (however does not exclude machine memories comprising software and thereby forming configurations of matter).
[0221] Within this disclosure, different entities (which may variously be referred to as units, circuits, other components, etc.) may be described or claimed as configured to perform one or more tasks or operations. This formulation, [entity] configured to [perform one or more tasks], is used herein to refer to structure (i.e., something physical, such as an electronic circuit). More specifically, this formulation is used to indicate that this structure is arranged to perform the one or more tasks during operation. A structure may be said to be configured to perform some task even if the structure is not currently being operated. A credit distribution circuit configured to distribute credits to a plurality of processor cores is intended to cover, for example, an integrated circuit that has circuitry that performs this function during operation, even if the integrated circuit in question is not currently being used (e.g., a power supply is not connected to it). Thus, an entity described or recited as configured to perform some task refers to something physical, such as a device, circuit, memory storing program instructions executable to implement the task, etc. This phrase is not used herein to refer to something intangible.
[0222] The term configured to is not intended to mean configurable to. An unprogrammed field programmable gate array (FPGA), for example, would not be considered to be configured to perform some specific function, although it may be configurable to perform that function after programming.
[0223] Reciting in the appended claims that a structure is configured to perform one or more tasks is expressly intended not to invoke 35 U.S.C. 112(f) for that claim element. Accordingly, claims in this application that do not otherwise include the means for [performing a function] construct should not be interpreted under 35 U.S.C. 112(f).
[0224] As used herein, the term based on is used to describe one or more factors that affect a determination. This term does not foreclose the possibility that additional factors may affect the determination. That is, a determination may be solely based on specified factors or based on the specified factors as well as other, unspecified factors. Consider the phrase determine A based on B. This phrase specifies that B is a factor that is used to determine A or that affects the determination of A. This phrase does not foreclose that the determination of A may also be based on some other factor, such as C. This phrase is also intended to cover an embodiment in which A is determined based solely on B. As used herein, the phrase based on is synonymous with the phrase based at least in part on.
[0225] As used herein, the phrase in response to describes one or more factors that trigger an effect. This phrase does not foreclose the possibility that additional factors may affect or otherwise trigger the effect. That is, an effect may be solely in response to those factors, or may be in response to the specified factors as well as other, unspecified factors. Consider the phrase perform A in response to B. This phrase specifies that B is a factor that triggers the performance of A. This phrase does not foreclose that performing A may also be in response to some other factor, such as C. This phrase is also intended to cover an embodiment in which A is performed solely in response to B.
[0226] As used herein, the terms first, second, etc. are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.), unless stated otherwise. For example, in a register file having eight registers, the terms first register and second register may be used to refer to any two of the eight registers, and not, for example, just logical registers 0 and 1.
[0227] When used in the claims, the term or is used as an inclusive or and not as an exclusive or. For example, the phrase at least one of x, y, or z means any one of x, y, and z, as well as any combination thereof.
[0228] As used herein, a recitation of and/or with respect to two or more elements should be interpreted to mean only one element, or a combination of elements. For example, element A, element B, and/or element C may include only element A, only element B, only element C, element A and element B, element A and element C, element B and element C, or elements A, B, and C. In addition, at least one of element A or element B may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. Further, at least one of element A and element B may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B.
[0229] The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this disclosure. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms step and/or block may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
[0230] Having thus described illustrative embodiments in detail, it will be apparent that modifications and variations are possible without departing from the scope of the disclosure as claimed. The scope of disclosed subject matter is not limited to the depicted embodiments but is rather set forth in the following Claims.