Patent classifications
G05B2219/40507
OFFLINE ROBOT PLANNING WITH ONLINE ADAPTATION
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing online robotic motion planning from pre-generated motion plans. A library of pre-generated motion plans for performing a particular task is maintained. Each pre-generated motion plan comprises a plurality of waypoints and one or more actions. One or more present observations of a robot in a workcell are obtained. The one or more observations are classified. A pre-generated candidate motion plan that matches the labels assigned to the present observations of the robot in the workcell is selected from the library of pre-generated motion plans. The pre-generated candidate motion plan is adapted according to the present observations of the robot in the workcell to generate a final motion plan to be executed by the robot.
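The select-and-adapt loop described above can be sketched as follows. The data model, the stand-in classifier, and the pose-offset adaptation rule are illustrative assumptions, not details from the patent:

```python
from dataclasses import dataclass

@dataclass
class MotionPlan:
    labels: frozenset  # labels the plan was pre-generated for
    waypoints: list    # plurality of waypoints as (x, y) pairs
    actions: list      # one or more actions

def classify(observations):
    # Stand-in classifier: assigns each observation its precomputed label.
    return frozenset(obs["label"] for obs in observations)

def select_and_adapt(library, observations):
    labels = classify(observations)
    # Select the pre-generated candidate whose labels match the
    # labels assigned to the present observations.
    candidate = next(p for p in library if p.labels == labels)
    # Adapt to the present observations: here, simply offset the
    # waypoints by the observed robot pose (a deliberately simple rule).
    ox, oy = observations[0]["pose"]
    adapted = [(x + ox, y + oy) for x, y in candidate.waypoints]
    return MotionPlan(candidate.labels, adapted, candidate.actions)
```

In practice the adaptation step would re-solve local constraints (collision checks, joint limits) rather than translate waypoints, but the select-then-adapt structure is the same.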
MOBILE ROBOT FOR AVOIDING NON-DRIVING AREA AND METHOD FOR AVOIDING NON-DRIVING AREA OF MOBILE ROBOT
According to the present disclosure, when a mobile robot travels, if learning information for an avoiding mark, machine-trained by the mobile robot, corresponds to a mark sensed around a target object, the mobile robot can avoid the sensed avoiding mark. Without such a mark, the mobile robot may collide with an object that needs to be avoided while traveling. Therefore, when an avoiding mark, formed of at least one of a color or a material different from the floor of the driving area, is disposed around an object or an area to be avoided, and the mark is sensed, the mobile robot autonomously avoids that object or area, and is thereby prevented from colliding with the object to be avoided or traveling in the area to be avoided.
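A minimal sketch of the mark test implied above, assuming an RGB color-distance heuristic with a hypothetical threshold (the patent's trained model would replace this):

```python
def is_avoiding_mark(patch_rgb, floor_rgb, threshold=60):
    """Flag a sensed floor patch whose color differs from the learned
    floor color by more than a threshold (Manhattan distance in RGB).
    The threshold value is an assumption for illustration."""
    diff = sum(abs(p - f) for p, f in zip(patch_rgb, floor_rgb))
    return diff > threshold

def plan_step(heading, mark_detected):
    # Turn away when an avoiding mark is detected; otherwise keep heading.
    return "turn_away" if mark_detected else heading
```

A material difference (e.g. tape vs. carpet) would be detected analogously from a different sensor channel; the control decision is the same.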
Systems and methods for distributed autonomous robot interfacing using live image feeds
Described in detail herein is an autonomous fulfillment system. The system includes a first computing system with an interactive display. The first computing system can transmit a request for physical objects from a facility. A second computing system can transmit instructions to autonomous robot devices to retrieve the physical objects from the facility. The second computing system can control the image capturing device of an autonomous robot device to capture a live image feed of at least one physical object picked up by that device. The second computing system can switch an input feed of the first computing system to display the live image feed on the interactive display of the first computing system. The second computing system can instruct the autonomous robot device to discard the physical objects it picked up and to pick up a replacement physical object.
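The request, live-feed inspection, and replacement flow above can be sketched with stub devices. All class and method names here are assumptions for illustration, not identifiers from the patent:

```python
class StubRobot:
    """Stand-in autonomous robot device that records commands."""
    def __init__(self):
        self.log = []
    def retrieve(self, object_ids):
        self.log.append(("retrieve", tuple(object_ids)))
    def camera_feed(self):
        return "live-feed"
    def discard(self, object_id):
        self.log.append(("discard", object_id))

class StubDisplay:
    """Stand-in interactive display of the first computing system."""
    def __init__(self):
        self.input_feed = None
    def switch_input(self, feed):
        self.input_feed = feed

class FulfillmentCoordinator:
    """Plays the role of the second computing system."""
    def __init__(self, robot, display):
        self.robot = robot
        self.display = display
    def fulfill(self, object_ids):
        self.robot.retrieve(object_ids)
        # Show the requester what the robot actually picked up.
        feed = self.robot.camera_feed()
        self.display.switch_input(feed)
        return feed
    def reject(self, object_id, replacement_id):
        # The requester rejected the item shown in the live feed.
        self.robot.discard(object_id)
        self.robot.retrieve([replacement_id])
```

The key design point the abstract describes is that the coordinator, not the requester, owns the robot's camera and the display's input switch.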
Method and Device for Controlling the Motion of One or More Collaborative Robots
A method for controlling the motion of one or more collaborative robots is described, the collaborative robots being mounted on a fixed or movable base and equipped with one or more terminal members and a motion controller. The method includes the following iterative steps: determining the position coordinates of the robots and of one or more human operators collaborating with the robots; determining a set of productivity indices associated with relative directions of motion of the terminal member of the robot, the productivity indices being indicative of the speed at which the robot can move in each of the directions without having to slow down or stop because of the presence of the operator; and supplying the controller of the robot with the set of productivity indices, so that the controller can determine the directions of motion of the terminal member based on the highest values of the productivity index.
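One way to compute such direction-dependent indices is sketched below. The cosine-alignment scoring and the safety-radius parameter are assumptions for illustration, not the scoring method claimed by the patent:

```python
import math

def productivity_indices(directions, operator_bearing, operator_distance,
                         v_max=1.0, safety_radius=0.5):
    """Score each candidate direction of the terminal member.

    Directions pointing away from the operator, or evaluated when the
    operator is far away, score near v_max; directions aimed straight
    at a nearby operator score near zero."""
    indices = {}
    for d in directions:
        align = math.cos(d - operator_bearing)        # 1 = toward operator
        closeness = safety_radius / max(operator_distance, safety_radius)
        indices[d] = v_max * (1.0 - max(align, 0.0) * closeness)
    return indices

def best_direction(indices):
    # The controller favors the direction with the highest index.
    return max(indices, key=indices.get)
```

With the operator dead ahead at the safety radius, moving toward them scores 0 and moving directly away scores v_max, so the controller steers the terminal member away without stopping.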
Distributed autonomous robot interfacing systems and methods
Described in detail herein is an automated fulfillment system including a computing system programmed to receive requests from disparate sources for physical objects disposed at one or more locations in a facility. The computing system can combine the requests and group the physical objects in the requests based on object types or expected object locations. Autonomous robot devices can receive instructions from the computing system to retrieve a group of the physical objects and deposit the physical objects in storage containers.
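The combining-and-grouping step can be sketched as below, assuming each request carries objects tagged with a type and an expected location (the field names are illustrative):

```python
from collections import defaultdict

def group_requests(requests):
    """Combine requests from disparate sources and group the requested
    objects by (object type, expected location), so each group can be
    assigned to one autonomous robot device for retrieval."""
    groups = defaultdict(list)
    for request in requests:
        for obj in request["objects"]:
            groups[(obj["type"], obj["location"])].append(obj["id"])
    return dict(groups)
```

Grouping by expected location lets one robot trip service items from several independent requests, which is the efficiency the abstract is aiming at.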