SYSTEMS AND METHODS FOR OBJECT DETECTION AND PICK ORDER DETERMINATION
20230182315 · 2023-06-15
Assignee
Inventors
- Lukas Merkle (Cambridge, MA, US)
- Matthew Turpin (Newton, MA, US)
- Samuel Shaw (Somerville, MA, US)
- Andrew Hoelscher (Somerville, MA)
- Rebecca Khurshid (Cambridge, MA, US)
- Laura Lee (Cambridge, MA, US)
- Colin Snow (Sharon, MA, US)
CPC classification
- G05B2219/40006 (Physics)
- B25J9/1687 (Performing Operations; Transporting)
- B25J9/1669 (Performing Operations; Transporting)
International classification
Abstract
Methods and apparatus for object detection and pick order determination for a robotic device are provided. Information about a plurality of two-dimensional (2D) object faces of the objects in the environment may be processed to determine whether each of the plurality of 2D object faces matches a prototype object of a set of prototype objects stored in a memory, wherein each of the prototype objects in the set represents a three-dimensional (3D) object. A model of 3D objects in the environment of the robotic device is generated using one or more of the prototype objects in the set of prototype objects that was determined to match one or more of the 2D object faces.
Claims
1. A method of generating a model of objects in an environment of a robotic device, the method comprising: receiving, by at least one computing device, information about a plurality of two-dimensional (2D) object faces of the objects in the environment; determining, by the at least one computing device, based at least in part on the information, whether each of the plurality of 2D object faces matches a prototype object of a set of prototype objects stored in a memory accessible to the at least one computing device, wherein each of the prototype objects in the set represents a three-dimensional (3D) object; and generating, by the at least one computing device, a model of 3D objects in the environment using one or more of the prototype objects in the set of prototype objects that was determined to match one or more of the 2D object faces.
2. The method of claim 1, further comprising: determining that a first 2D object face of the plurality of 2D object faces does not match any prototype object in the set of prototype objects; creating a new prototype object for the first 2D object face that does not match any prototype object in the set; and adding the new prototype object to the set of prototype objects.
3. The method of claim 2, further comprising: controlling the robotic device to: pick up the object associated with the first 2D object face; and capture one or more images of the picked-up object, wherein the one or more images include at least one face of the object other than the first 2D object face, and wherein the new prototype object is created based, at least in part, on the captured one or more images of the picked-up object.
4. The method of claim 3, further comprising: controlling the robotic device to rotate the picked-up object prior to capturing the one or more images of the picked-up object.
5. The method of claim 4, wherein creating the new prototype object based, at least in part, on the captured one or more images comprises: identifying a first planar surface in a first image of the captured one or more images; and calculating a dimension of the picked-up object based on the first planar surface, wherein the new prototype object includes the calculated dimension.
6. The method of claim 1, further comprising: receiving user input describing prototype objects to include in the set of prototype objects; and populating the set of prototype objects with prototype objects based on the user input.
7. The method of claim 1, further comprising: receiving, from a computing system, input describing prototype objects to include in the set of prototype objects; and populating the set of prototype objects with prototype objects based on the input.
8. The method of claim 1, further comprising: determining a set of pickable objects based, at least in part, on the generated model of 3D objects; selecting a target object from the set of pickable objects; and controlling the robotic device to grasp the target object.
9. The method of claim 8, further comprising: determining interactions between objects in the generated model of 3D objects and another object in the environment of the robotic device; filtering the set of pickable objects based, at least in part, on the determined interactions; and selecting the target object from the filtered set of pickable objects.
10. The method of claim 9, wherein determining interactions between objects in the generated model of 3D objects comprises: determining which objects in the generated model have extraction constraints dependent on extraction of one or more other objects in the generated model, and wherein filtering the set of pickable objects comprises including, in the filtered set of pickable objects, only objects in the generated model that do not have extraction constraints dependent on extraction of one or more other objects in the generated model.
11. The method of claim 9, wherein determining interactions between objects in the generated model of 3D objects and another object is based at least in part on at least one potential extraction trajectory associated with the objects in the generated model.
12. The method of claim 11, further comprising: determining a reserved space through which the at least one potential extraction trajectory will travel; and including, in the filtered set of pickable objects, only objects in the generated model that have a corresponding reserved space in which no other objects are present.
13. The method of claim 9, further comprising: determining based, at least in part, on the information, that a first 2D object face of the plurality of 2D object faces matches multiple prototype objects in the set of prototype objects, and wherein determining interactions between objects in the generated model of 3D objects and another object comprises determining, for each of the multiple prototype objects matching the first 2D object face, interactions between the prototype object and another object.
14. A robotic device, comprising: a robotic arm having disposed thereon, a suction-based gripper configured to grasp a target object; a perception system configured to capture one or more images of a plurality of two-dimensional (2D) object faces of objects in an environment of the robotic device; and at least one computing device configured to: determine based, at least in part, on the captured one or more images, whether each of the plurality of 2D object faces matches a prototype object of a set of prototype objects stored in a memory of the robotic device, wherein each of the prototype objects in the set represents a three-dimensional (3D) object; generate a model of 3D objects in the environment using one or more of the prototype objects in the set of prototype objects that was determined to match one or more of the 2D object faces; select based, at least in part, on the generated model, one of the objects in the environment as a target object; and control the robotic arm to grasp the target object.
15. The robotic device of claim 14, wherein the at least one computing device is further configured to: determine that a first 2D object face of the plurality of 2D object faces does not match any prototype object in the set of prototype objects; create a new prototype object for the first 2D object face that does not match any prototype object in the set; and add the new prototype object to the set of prototype objects.
16. The robotic device of claim 15, wherein the at least one computing device is further configured to: control the robotic arm to pick up the object associated with the first 2D object face; and control the perception system to capture one or more images of the picked-up object, wherein the one or more images include at least one face of the object other than the first 2D object face, and wherein the new prototype object is created based, at least in part, on the captured one or more images of the picked-up object.
17. The robotic device of claim 16, wherein the at least one computing device is further configured to control the robotic arm to rotate the picked-up object prior to capturing the one or more images of the picked-up object by the perception system.
18. The robotic device of claim 14, further comprising: a user interface configured to enable a user to provide user input describing prototype objects to include in the set of prototype objects, wherein the at least one computing device is further configured to: populate the set of prototype objects with prototype objects based on the user input.
19. The robotic device of claim 14, wherein the at least one computing device is further configured to: receive, from a computing system, input describing prototype objects to include in the set of prototype objects; and populate the set of prototype objects with prototype objects based on the input.
20. The robotic device of claim 14, wherein the at least one computing device is further configured to: determine a set of pickable objects based, at least in part, on the generated model of 3D objects; select a target object from the set of pickable objects; and control the robotic arm to grasp the target object.
21. The robotic device of claim 20, wherein the at least one computing device is further configured to: determine a desired orientation of the target object; and place the target object in the desired orientation at a target location.
22. The robotic device of claim 21, wherein determining the desired orientation of the target object is based, at least in part, on the target location.
23. The robotic device of claim 22, wherein the target location includes a conveyor, and wherein determining the desired orientation of the target object comprises determining to align a longest axis of the target object with a length dimension of the conveyor.
24. The robotic device of claim 21, wherein the at least one computing device is further configured to: determine a stability estimate associated with placing a side of the target object on a surface, wherein determining the desired orientation of the target object is based, at least in part, on the stability estimate.
25. The robotic device of claim 24, wherein determining the stability estimate comprises: calculating a ratio of dimensions of the side of the target object; and determining the stability estimate based, at least in part, on the ratio.
26. The robotic device of claim 21, wherein the at least one computing device is further configured to: control the robotic arm to orient the target object based on the desired orientation.
27. A non-transitory computer readable medium encoded with a plurality of instructions that, when executed by at least one computing device, perform a method, the method comprising: receiving information about a plurality of two-dimensional (2D) object faces of the objects in the environment; determining, based at least in part on the information, whether each of the plurality of 2D object faces matches a prototype object of a set of prototype objects stored in a memory accessible to the at least one computing device, wherein each of the prototype objects in the set represents a three-dimensional (3D) object; and generating a model of 3D objects in the environment using one or more of the prototype objects in the set of prototype objects that was determined to match one or more of the 2D object faces.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0088] The accompanying drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. In the drawings:
DETAILED DESCRIPTION
[0118] Robots are typically configured to perform various tasks in an environment in which they are placed. Generally, these tasks include interacting with objects and/or the elements of the environment. Notably, robots are becoming popular in warehouse and logistics operations. Before the introduction of robots to such spaces, many operations were performed manually. For example, a person might manually unload boxes from a truck onto one end of a conveyor belt, and a second person at the opposite end of the conveyor belt might organize those boxes onto a pallet. The pallet may then be picked up by a forklift operated by a third person, who might drive to a storage area of the warehouse and drop the pallet for a fourth person to remove the individual boxes from the pallet and place them on shelves in the storage area. More recently, robotic solutions have been developed to automate many of these functions. Such robots may either be specialist robots (i.e., designed to perform a single task, or a small number of closely related tasks) or generalist robots (i.e., designed to perform a wide variety of tasks). To date, both specialist and generalist warehouse robots have been associated with significant limitations, as explained below.
[0119] A specialist robot may be designed to perform a single task, such as unloading boxes from a truck onto a conveyor belt. While such specialist robots may be efficient at performing their designated task, they may be unable to perform other, tangentially related tasks in any capacity. As such, either a person or a separate robot (e.g., another specialist robot designed for a different task) may be needed to perform the next task(s) in the sequence. As such, a warehouse may need to invest in multiple specialist robots to perform a sequence of tasks, or may need to rely on a hybrid operation in which there are frequent robot-to-human or human-to-robot handoffs of objects.
[0120] In contrast, a generalist robot may be designed to perform a wide variety of tasks, and may be able to take a box through a large portion of the box's life cycle from the truck to the shelf (e.g., unloading, palletizing, transporting, depalletizing, storing). While such generalist robots may perform a variety of tasks, they may be unable to perform individual tasks with high enough efficiency or accuracy to warrant introduction into a highly streamlined warehouse operation. For example, while mounting an off-the-shelf robotic manipulator onto an off-the-shelf mobile robot might yield a system that could, in theory, accomplish many warehouse tasks, such a loosely integrated system may be incapable of performing complex or dynamic motions that require coordination between the manipulator and the mobile base, resulting in a combined system that is inefficient and inflexible. Typical operation of such a system within a warehouse environment may include the mobile base and the manipulator operating sequentially and (partially or entirely) independently of each other. For example, the mobile base may first drive toward a stack of boxes with the manipulator powered down. Upon reaching the stack of boxes, the mobile base may come to a stop, and the manipulator may power up and begin manipulating the boxes as the base remains stationary. After the manipulation task is completed, the manipulator may again power down, and the mobile base may drive to another destination to perform the next task. As should be appreciated from the foregoing, the mobile base and the manipulator in such systems are effectively two separate robots that have been joined together; accordingly, a controller associated with the manipulator may not be configured to share information with, pass commands to, or receive commands from a separate controller associated with the mobile base. 
As such, a poorly integrated mobile manipulator robot may be forced to operate both its manipulator and its base at suboptimal speeds or through suboptimal trajectories as the two separate controllers struggle to work together. In addition to these engineering limitations, further limitations must be imposed to comply with safety regulations. For instance, if a safety regulation requires that a mobile manipulator be able to shut down completely within a certain period of time when a human enters a region within a certain distance of the robot, a loosely integrated mobile manipulator robot may not be able to act quickly enough to ensure that the manipulator and the mobile base (individually and in aggregate) do not pose a threat to the human. To operate within the required safety constraints, such systems are forced to run at even slower speeds or execute even more conservative trajectories than those already imposed by the engineering limitations. As a result, the speed and efficiency of generalist robots performing tasks in warehouse environments have to date been limited.
[0121] In view of the above, the inventors have recognized and appreciated that a highly integrated mobile manipulator robot with system-level mechanical design and holistic control strategies between the manipulator and the mobile base may be associated with certain benefits in warehouse and/or logistics operations. Such an integrated mobile manipulator robot may be able to perform complex and/or dynamic motions that are unable to be achieved by conventional, loosely integrated mobile manipulator systems. As a result, this type of robot may be well suited to perform a variety of different tasks (e.g., within a warehouse environment) with speed, agility, and efficiency.
Example Robot Overview
[0122] In this section, an overview of some components of one embodiment of a highly integrated mobile manipulator robot configured to perform a variety of tasks is provided to explain the interactions and interdependencies of various subsystems of the robot. Each of the various subsystems, as well as control strategies for operating the subsystems, are described in further detail in the following sections.
[0128] To pick some boxes within a constrained environment, the robot may need to carefully adjust the orientation of its arm to avoid contacting other boxes or the surrounding shelving. For example, in a typical “keyhole problem”, the robot may only be able to access a target box by navigating its arm through a small space or confined area (akin to a keyhole) defined by other boxes or the surrounding shelving. In such scenarios, coordination between the mobile base and the arm of the robot may be beneficial. For instance, being able to translate the base in any direction allows the robot to position itself as close as possible to the shelving, effectively extending the length of its arm (compared to conventional robots without omnidirectional drive which may be unable to navigate arbitrarily close to the shelving). Additionally, being able to translate the base backwards allows the robot to withdraw its arm from the shelving after picking the box without having to adjust joint angles (or minimizing the degree to which joint angles are adjusted), thereby enabling a simple solution to many keyhole problems.
[0129] Of course, it should be appreciated that the tasks depicted in
Example Computing Device
[0130] Control of one or more of the robotic arm, the mobile base, the turntable, and the perception mast may be accomplished using one or more computing devices located on-board the mobile manipulator robot. For instance, one or more computing devices may be located within a portion of the mobile base with connections extending between the one or more computing devices and components of the robot that provide sensing capabilities and components of the robot to be controlled. In some embodiments, the one or more computing devices may be coupled to dedicated hardware configured to send control signals to particular components of the robot to effectuate operation of the various robot systems. In some embodiments, the mobile manipulator robot may include a dedicated safety-rated computing device configured to integrate with safety systems that ensure safe operation of the robot.
[0131] The computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.
[0132] In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
[0133] In some examples, the terms “physical processor” or “computer processor” generally refer to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
[0135] During operation, perception module 310 can perceive one or more objects (e.g., boxes) for grasping (e.g., by an end-effector of the robotic device 300) and/or one or more aspects of the robotic device's environment. In some embodiments, perception module 310 includes one or more sensors configured to sense the environment. For example, the one or more sensors may include, but are not limited to, a color camera, a depth camera, a LIDAR or stereo vision device, or another device with suitable sensory capabilities. In some embodiments, image(s) captured by perception module 310 are processed by processor(s) 332 using trained object detection model(s) to extract surfaces (e.g., faces) of boxes or other objects in the image capable of being grasped by the robotic device 300.
[0138] Information about objects in the environment of the robotic device may be determined based on the RGBD image. In some embodiments, the RGBD image is provided as input to a trained statistical model (e.g., a machine learning model) that has been trained to identify one or more characteristics of objects of interest. For instance, the statistical model may be trained to recognize surfaces (e.g., faces) of objects 530 arranged in a stack as shown in
[0139] The inventors have recognized and appreciated that to enable a robot to successfully pick objects from a stack without colliding with other objects in the stack, information about the full dimensions (e.g., height, width, depth) of the object to be grasped can be helpful, whether known, inferred, or estimated. For example, three-dimensional knowledge about an object may be helpful for placing the gripper of the robot above the center of mass of the object and/or for deciding the order in which objects should be picked from the stack. In many cases, only two dimensions (e.g., height and width) of an object, corresponding to the object's front face, are visible from the robot's perspective, with the depth dimension unknown. Knowing only two dimensions of an object may result in collisions with neighboring objects and with other obstacles (e.g., truck walls, ceilings, racks, etc.) in the environment of the robot if the objects are not picked in an order that reduces the chance of such collisions.
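By way of illustration, matching a visible 2D face against stored prototypes may be sketched as follows. This is a hypothetical sketch, not the disclosed implementation: the tolerance value, field names, and example dimensions are all assumptions. A face is treated as matching a prototype if its height and width agree, within tolerance, with some pair of the prototype's three dimensions.

```python
from itertools import permutations

TOL = 0.01  # metres; assumed matching tolerance, not from the disclosure

def face_matches_prototype(face_hw, proto_dims, tol=TOL):
    """Return True if the observed (height, width) face could be a face of
    the box whose full dimensions are proto_dims = (d1, d2, d3)."""
    h, w = face_hw
    for a, b in permutations(proto_dims, 2):
        if abs(a - h) <= tol and abs(b - w) <= tol:
            return True
    return False

def match_face(face_hw, prototypes, tol=TOL):
    """Return every prototype consistent with the observed face."""
    return [p for p in prototypes
            if face_matches_prototype(face_hw, p["dims"], tol)]

# Example prototype set (dimensions are illustrative).
prototypes = [
    {"id": "A", "dims": (0.30, 0.20, 0.10)},
    {"id": "B", "dims": (0.40, 0.30, 0.20)},
]
matches = match_face((0.20, 0.30), prototypes)
```

Note that a single observed face can be consistent with multiple prototypes (here, both A and B), which is why interactions may be evaluated for each candidate prototype, as discussed elsewhere in the description.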
[0144] Although act 740 is shown in process 700 as being performed immediately after determining that an object face did not match any stored prototype in act 730, it should be appreciated that the non-matching object may be picked at some later time. For instance, matching against stored prototypes may be attempted for all detected object faces, and the three-dimensional model of the stack may be generated in act 760 using only the prototypes that matched detected object faces. Then, when a next object is selected to be picked from the stack, if the selected object was determined not to match a prototype, that object can be picked from the stack, and a new prototype can be created at that point in time. In this way, a set of prototypes can be dynamically updated during an object picking process with limited interruption of the normal picking operation of the robotic device.
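The dynamic prototype update described above can be sketched as a simple match-or-measure loop. This is an illustrative sketch only: the exact-match test, the field names, and the stub standing in for in-gripper perception are assumptions, not the disclosed algorithm.

```python
def measure_depth_stub(face):
    # Stand-in for in-gripper perception (picking the object, rotating it,
    # and imaging its other faces); returns an assumed depth measurement.
    return 0.10

def update_model_and_prototypes(detected_faces, prototypes, measure_depth):
    """Faces matching a stored prototype are modeled directly; a
    non-matching face triggers a pick plus in-gripper measurement, and the
    resulting new prototype joins the set for use with later faces."""
    model = []
    for face in detected_faces:
        matched = [p for p in prototypes if p["face"] == face]
        if matched:
            model.append((face, matched[0]))
        else:
            depth = measure_depth(face)
            new_proto = {"face": face, "dims": (*face, depth)}
            prototypes.append(new_proto)
            model.append((face, new_proto))
    return model

protos = [{"face": (0.3, 0.2), "dims": (0.3, 0.2, 0.15)}]
model = update_model_and_prototypes(
    [(0.3, 0.2), (0.4, 0.4), (0.4, 0.4)], protos, measure_depth_stub)
```

In this example the second face creates a new prototype, and the third (identical) face reuses it, so the prototype set grows by exactly one entry.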
[0145] As described above, in some embodiments the set of prototypes stored by the robotic device may initially not include any prototypes, as shown in
[0148] The three-dimensional model of the stack of objects may be used to categorize the objects in the stack, which in turn informs the process of selecting a next object to pick from the stack. For instance, the three-dimensional model of the stack may be used to evaluate interactions between neighboring objects in the stack to assess which objects should be grasped next and, when they are grasped, whether they should be subjected to in-gripper detection to generate a new prototype in the set of prototypes. As shown in
[0149] In some instances, multiple prototypes may match a same detected object face, an example of which is shown in
[0150] As described above, the three-dimensional model output from process 700 shown in
[0152] Factors other than object dependencies may additionally or alternatively be taken into account when generating the filtered set of objects in act 1020. For instance, when a previous extraction of an object failed, a hold may be placed on extracting the object, which enables the robot to remember that there was a previous issue with extracting that object from the stack. Objects with holds placed on them may not be included in the filtered set and as such may not be considered for extraction. Object extraction issues can arise for various reasons including, but not limited to, the extraction force exceeding a threshold (indicating that the object is stuck), motion planning failing during extraction, poor suction on the object being detected, or a timeout value to complete extraction being exceeded. In some embodiments, one or more conditions may occur that result in a hold placed on an object being removed. Such conditions include, but are not limited to, a threshold number of other objects having been extracted from the stack, enough time having passed since the hold was placed, neighboring objects that likely caused the extraction issue having been extracted, or there being no other feasible objects to extract from the stack (e.g., the filtered set of objects being empty absent removal of one or more holds placed on objects in the stack).
[0153] After generating a filtered set of objects (e.g., by eliminating from the original set of objects all objects that are dependent or are otherwise likely to interact with other objects if extracted), process 1000 proceeds to act 1030, where a next object to pick (also referred to herein as a “target” object) is selected based on the filtered set. In some instances, the filtered set of objects may include only a single object in which case that object is selected. In other instances, multiple “non-interacting” objects that could be picked from the stack may be included in the filtered set of objects and one or more heuristics or rules may be used to select one of the objects in the filtered set as the target object. For instance, the object in the filtered set that is closest to the robot may be selected as the target object. If there are multiple objects in the filtered set located at a same or similar distance from the robot, the tallest object may be selected as the target object. It should be appreciated that other criteria may additionally or alternatively be used to select an object from the filtered set as the target object, and embodiments are not limited in this respect.
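The filtering and selection heuristics above can be sketched in a few lines. The field names, the hold mechanism as a simple set of object identifiers, and the tie-breaking thresholds are assumptions for illustration; the disclosure contemplates other criteria as well.

```python
def select_target(objects, holds=frozenset()):
    """Filter out objects whose extraction depends on other objects or that
    carry a hold, then pick the closest remaining object, breaking distance
    ties by choosing the tallest."""
    candidates = [o for o in objects
                  if not o["depends_on"] and o["id"] not in holds]
    if not candidates:
        return None  # e.g., trigger hold removal, per the description
    min_dist = min(o["distance"] for o in candidates)
    tied = [o for o in candidates if o["distance"] - min_dist < 1e-9]
    return max(tied, key=lambda o: o["height"])

# Illustrative stack: object 3 depends on object 1; object 1 has a hold.
stack = [
    {"id": 1, "depends_on": [], "distance": 1.0, "height": 0.3},
    {"id": 2, "depends_on": [], "distance": 1.0, "height": 0.5},
    {"id": 3, "depends_on": [1], "distance": 0.8, "height": 0.4},
]
target = select_target(stack, holds={1})
```

Returning None when the filtered set is empty is one natural hook for the hold-removal conditions described above.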
[0154] As discussed above, a filtered set of objects may be created based, at least in part, on interactions (or likely interactions) between different objects in the stack.
[0156] In some embodiments, possible extraction motions to extract a candidate target object from the stack may be considered when generating a filtered set of objects for extraction. For instance, kinematic and dynamic extraction models may be used to determine whether there are any likely interactions between a potential object for extraction and one or more other objects in the stack or other obstructions in the environment of the robot (e.g., truck walls or ceiling).
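One simple geometric version of this interaction check, in the spirit of the reserved-space filtering recited in the claims, is to sweep each candidate's bounding box along a straight-line extraction direction and test for overlap with other objects. The sweep direction, clearance, and box representation below are illustrative assumptions, not the disclosed kinematic/dynamic models.

```python
def aabb_overlap(a, b):
    """True if two axis-aligned boxes, each given as
    ((min_x, min_y, min_z), (max_x, max_y, max_z)), overlap with
    nonzero volume."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

def reserved_space(box, clearance=0.02):
    """Assumed reserved volume for a straight-line extraction toward the
    robot (here, the -y direction): the region in front of the object's
    face, padded by a small clearance."""
    (x0, y0, z0), (x1, y1, z1) = box
    return ((x0 - clearance, -10.0, z0 - clearance),
            (x1 + clearance, y0, z1 + clearance))

def is_extractable(box, other_boxes):
    """An object is extractable if no other object intrudes into its
    reserved extraction space."""
    space = reserved_space(box)
    return not any(aabb_overlap(space, o) for o in other_boxes)

# Two stacked-in-depth boxes: 'front' sits between the robot and 'back'.
front = ((0.0, 0.0, 0.0), (0.3, 0.3, 0.3))
back = ((0.0, 0.3, 0.0), (0.3, 0.6, 0.3))
```

Under this check, the front box is extractable while the back box is not, since the front box lies in the back box's reserved space.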
[0158] As discussed above, some target objects when selected for extraction may be manipulated by the robotic device to enable a determination of a missing dimension (e.g., the depth dimension) of the object to be able to create a new prototype for the object. Having full dimension information (e.g., width, height, depth) for objects facilitates gripper placement on the object, object pick order determination and collision avoidance, among other things. For instance, as discussed in more detail below, having full dimension information for a picked object may facilitate how the robotic device places the picked object on a pallet, conveyor or other object.
[0161] After an object is grasped by a robot, the robot may determine how to place the object at a target location (e.g., on a conveyor, cart, or pallet). The orientation of the object when placed at the target location may impact, for example, the stability of the object when placed. Accordingly, placing objects at a target location using a desired orientation (e.g., top side up, smallest face up, long side face down, etc.) may be important to ensure or facilitate stability of the object when placed at the target location.
[0162] In some instances, the desired orientation of the object may depend, at least in part, on a particular task that the robot is performing. For example, when tasked with unloading boxes from a truck onto a conveyor, it may be desirable to place the long axis of the box on the surface of the conveyor to facilitate stable placement of the box on the conveyor surface. In some instances, the desired orientation of the object may depend on one or more characteristics of the object. For example, if the object is a box that includes fragile components (e.g., glassware), the desired orientation may be to keep the box in the same orientation (e.g., top up) as it was oriented in the stack to avoid breaking its contents (e.g., by flipping it sideways or upside down). Determining whether an object should be placed top up, for example, because it contains fragile contents, may be performed in any suitable way. For instance, one or more prototypes associated with the object may include information about the object that may be used to determine that the object should be placed top up. In some embodiments, information about the contents of the object may be determined, at least in part, based on a label (e.g., a barcode, a product label, etc.) on the object, and a determination that the object should be placed in a top up orientation may be based on identifying the label. Information about the contents of the object may also be used in some embodiments to change one or more operating parameters (e.g., arm speed, arm acceleration) of the robot.
[0163] As another example, the desired orientation may be determined based, at least in part, on an estimated stability of an object when placed at a target location. In some embodiments, the stability of the object when placed, for example, on a conveyor may be estimated based on information about the object including its dimensions and/or weight distribution (e.g., center of mass). For instance, the stability of the object when placed may be estimated by calculating a ratio of the sides of the object. Based on the calculated ratio, it may be determined whether placement of the object with its longest axis aligned with the surface of the conveyor will result in a stable placement and as such, should be the desired orientation. In some embodiments, the stability estimate may include factors other than or in addition to the dimensions of the object. For example, in some embodiments, a moment of inertia associated with the object may be used, at least in part, to estimate stability of the object when placed.
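The side-ratio estimate described above can be sketched as follows. The function names, the preference ordering, and the threshold `max_ratio` are illustrative assumptions for this sketch; the disclosure does not specify particular values or interfaces.

```python
def placement_is_stable(footprint, height, max_ratio=2.0):
    """Judge a box resting on a (w, d) footprint with the given
    standing height as stable when the height does not exceed
    max_ratio times the smaller footprint dimension.
    max_ratio is a hypothetical tuning threshold."""
    return height / min(footprint) <= max_ratio


def choose_stable_orientation(dims, max_ratio=2.0):
    """Pick which dimension of an (l, w, h) box should point up,
    preferring the smallest dimension up (largest face down) so
    that the longest axis can lie along the conveyor."""
    a, b, c = sorted(dims)  # a <= b <= c
    for up, footprint in ((a, (b, c)), (b, (a, c)), (c, (a, b))):
        if placement_is_stable(footprint, up, max_ratio):
            return up, footprint
    return None  # no orientation passes the ratio test
```

For a 0.5 m x 0.3 m x 0.2 m box, this sketch selects the 0.2 m dimension as the standing height, leaving the 0.5 m axis free to align with the conveyor.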
[0164] In some embodiments, multiple of the above factors (or additional factors) may be taken into consideration when determining a desired orientation of an object to be placed by a robot. For instance, although it may generally be desirable to place an object on a conveyor with its longest dimension in line with the conveyor and its bottom face parallel with the conveyor plane, when the object includes fragile contents and/or if the object has an uneven weight distribution, an orientation other than long side down (e.g., a top up orientation) on the conveyor may be preferable. In some instances, a top up orientation of the object may be achieved while also rotating the object such that the bottom surface of the object is oriented to facilitate stability on the conveyor surface (e.g., by placing the longest of the bottom surface dimensions along the length of a conveyor surface).
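One way to combine these factors might look like the following sketch. The dictionary keys (`fragile`, `uneven_weight`) and the returned labels are hypothetical names used only for illustration, not fields defined by the disclosure.

```python
def desired_orientation(obj):
    """obj: mapping with hypothetical boolean flags 'fragile' and
    'uneven_weight' (e.g., derived from a product label or from a
    prototype associated with the object)."""
    if obj.get("fragile") or obj.get("uneven_weight"):
        # Keep the box top up; it may still be rotated about the
        # vertical axis so its longer bottom edge runs along the
        # conveyor, as described above.
        return "top_up"
    # Default: longest dimension in line with the conveyor and the
    # bottom face parallel with the conveyor plane.
    return "long_side_down"
```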
[0165] After the desired orientation of the object is determined, process 1500 proceeds to act 1520, where the object is oriented (e.g., by movement of the robotic arm) based, at least in part, on the desired orientation. For example, the robot may determine a trajectory that results in the object arriving at the target location in the desired orientation. As described herein, the trajectory may also be determined, at least in part, to avoid collisions with other objects in the environment of the robot (e.g., truck walls, other objects, a conveyor). The orientation of the grasped object in the gripper of the robot may be included in the determined trajectory to ensure that the object arrives at the target location in the desired orientation and that any constraints (e.g., keeping the object with a top up orientation during the trajectory) associated with the trajectory are satisfied. Process 1500 then proceeds to act 1530, where the grasped object is placed at the target location in the desired orientation. For instance, the grasped object may be released from the gripper of the robot such that the object is placed at the target location.
[0166] As described above, when the target location of an object trajectory is associated with a conveyor coupled to or located near the robot, it may be desirable to orient the object such that its longest dimension is in line with the conveyor and the bottom face of the object is parallel with the conveyor plane. Such an orientation may facilitate stable placement of the object on the conveyor by ensuring that a large surface area of the object is placed in contact with the surface of the conveyor and/or that the object is placed in a manner to reduce overhang of the object relative to the conveyor surface.
[0168] Process 1600 then proceeds to act 1620, where the robot is controlled to orient the object such that the longest dimension of the object is in line with the conveyor. The robot may be controlled to orient the object into a desired orientation (e.g., with the longest dimension of the object in line with the conveyor) at any point in time after the robot has grasped the object and before the object is placed on the conveyor. For instance, the robot may be controlled to orient the object into the desired orientation immediately or soon after picking the object, but prior to moving the arm of the robot through a trajectory from a start location (e.g., near where the object is picked) to a target location (e.g., above the conveyor). Alternatively, the robot may be controlled to orient the object into the desired orientation only after it has reached the target location. In yet further instances, the robot may be controlled to orient the object into the desired orientation as the arm of the robot is moved from the start location to the target location. In such instances, orientation of the object into a desired orientation may be integrated with the trajectory planning and execution processes of the robot to ensure smooth pick and place operation of the robot while avoiding collisions with other objects in the robot's environment.
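The three timing options above can be summarized in a short sketch. The enum names and the clearance-based heuristic for choosing among them are assumptions added for illustration; the disclosure does not prescribe a selection rule.

```python
from enum import Enum, auto


class ReorientTiming(Enum):
    AFTER_PICK = auto()   # reorient immediately or soon after picking
    DURING_MOVE = auto()  # blend reorientation into the trajectory
    AT_TARGET = auto()    # reorient only after reaching the target


def choose_timing(clearance_near_pick_m, clearance_at_target_m):
    """Hypothetical heuristic: reorient wherever the grasped object
    has the most free space; when neither location is clearly
    better, fold the rotation into the planned trajectory."""
    if clearance_near_pick_m > clearance_at_target_m:
        return ReorientTiming.AFTER_PICK
    if clearance_at_target_m > clearance_near_pick_m:
        return ReorientTiming.AT_TARGET
    return ReorientTiming.DURING_MOVE
```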
[0169] Process 1600 then proceeds to act 1630 where the grasped object is placed on the conveyor in the desired orientation (e.g., longest dimension oriented in line with the conveyor). In some embodiments, how the object is placed on the conveyor may be determined based, at least in part, on information about possible collisions between the object and the conveyor. As described above, in some embodiments, multiple prototypes may be associated with an object being manipulated by a robot. Whereas the most likely prototype may be used for operations such as grasp placement and dimension determination, the worst case prototype may be used for collision avoidance. One of the objects to be avoided for collision modeling may be the conveyor on which the object is to be placed. To ensure that the object does not collide with the conveyor when the worst case prototype indicates that the object may have a relatively long dimension, the target location may be located some distance from the surface of the conveyor to ensure adequate clearance during the trajectory. To facilitate stable placement of the object on the conveyor, the robot may be controlled to execute a “gentle placement” of the object on the conveyor. For instance, rather than merely dropping the object from a distance, which may result in the object falling off of the conveyor and/or cause damage to the contents of the object, the robot may be controlled to gently lower the object toward the conveyor (e.g., in its desired orientation) prior to releasing its grasp on the object. In some embodiments, a gentle placement may be achieved by tipping the object forward (or backward) prior to releasing its grasp on the object. Tipping the object relative to the conveyor surface reduces the distance between the conveyor surface and a portion of the object that will contact the conveyor surface first.
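The effect of tipping on the leading edge's remaining fall can be illustrated with a simplified geometric sketch. The function name, parameter names, and the tip-about-center simplification are assumptions for illustration; the disclosure does not specify this geometry.

```python
import math


def leading_edge_fall(release_height_m, box_length_m, tip_angle_deg):
    """Remaining free-fall distance for the first-contact edge of a
    box released tipped forward by tip_angle_deg.

    Simplified model: tipping about the box center lowers the
    leading bottom edge by (box_length / 2) * sin(angle), so that
    edge has less distance to fall before contacting the conveyor
    (floored at zero, i.e., the edge is already touching).
    """
    lowered = (box_length_m / 2.0) * math.sin(math.radians(tip_angle_deg))
    return max(0.0, release_height_m - lowered)
```

Under this simplification, a 0.6 m box released 0.2 m above the conveyor with a 30 degree tip has its leading edge fall roughly 0.05 m instead of the full 0.2 m.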
[0173] Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by at least one computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.
[0174] In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. Additionally, or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
[0175] The above-described embodiments can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. It should be appreciated that any component or collection of components that perform the functions described above can be generically considered as one or more controllers that control the above-discussed functions. The one or more controllers can be implemented in numerous ways, such as with dedicated hardware or with one or more processors programmed using microcode or software to perform the functions recited above.
[0176] In this respect, it should be appreciated that embodiments of a robot may include at least one non-transitory computer-readable storage medium (e.g., a computer memory, a portable memory, a compact disk, etc.) encoded with a computer program (i.e., a plurality of instructions), which, when executed on a processor, performs one or more of the above-discussed functions. Those functions, for example, may include control of the robot and/or driving a wheel or arm of the robot. The computer-readable storage medium can be transportable such that the program stored thereon can be loaded onto any computer resource to implement the aspects of the present invention discussed herein. In addition, it should be appreciated that the reference to a computer program which, when executed, performs the above-discussed functions, is not limited to an application program running on a host computer. Rather, the term computer program is used herein in a generic sense to reference any type of computer code (e.g., software or microcode) that can be employed to program a processor to implement the above-discussed aspects of the present invention.
[0177] Various aspects of the present invention may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing and are therefore not limited in their application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments.
[0178] Also, embodiments of the invention may be implemented as one or more methods, of which an example has been provided. The acts performed as part of the method(s) may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
[0179] Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Such terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term).
[0180] The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof, is meant to encompass the items listed thereafter and additional items.
[0181] Having described several embodiments of the invention in detail, various modifications and improvements will readily occur to those skilled in the art. Such modifications and improvements are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description is by way of example only, and is not intended as limiting.