Patent classifications
B25J9/1656
DISTRIBUTED ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed robotic demonstration learning. One of the methods includes receiving a skill template to be trained to cause a robot to perform a particular skill having a plurality of subtasks. One or more demonstration subtasks defined by the skill template are identified, wherein each demonstration subtask is an action to be refined using local demonstration data. An online execution system uploads sets of local demonstration data to a cloud-based training system. The cloud-based training system generates respective trained model parameters for each set of local demonstration data. The skill template is executed on the robot using the trained model parameters generated by the cloud-based training system.
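As a rough illustration of the workflow described in this abstract, the Python sketch below walks a skill template's demonstration subtasks, hands local demonstration data to a stand-in cloud trainer, and collects per-subtask trained parameters for later execution. All names (SkillTemplate, Subtask, CloudTrainer, train_and_execute) are hypothetical and are not taken from the publication.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Subtask:
    name: str
    is_demonstration: bool = False   # refined using local demonstration data?


@dataclass
class SkillTemplate:
    skill_name: str
    subtasks: List[Subtask] = field(default_factory=list)

    def demonstration_subtasks(self) -> List[Subtask]:
        return [s for s in self.subtasks if s.is_demonstration]


class CloudTrainer:
    """Stand-in for the cloud-based training system."""

    def train(self, subtask: Subtask, demos: List[dict]) -> Dict[str, float]:
        # Placeholder: a real system would fit a policy/model per subtask.
        return {"num_demos": float(len(demos))}


def train_and_execute(template: SkillTemplate,
                      local_demos: Dict[str, List[dict]],
                      trainer: CloudTrainer) -> Dict[str, Dict[str, float]]:
    """Upload local demonstrations per demonstration subtask and collect the
    trained parameters the robot would use when executing the skill template."""
    trained_params = {}
    for subtask in template.demonstration_subtasks():
        demos = local_demos.get(subtask.name, [])
        trained_params[subtask.name] = trainer.train(subtask, demos)
    return trained_params


if __name__ == "__main__":
    template = SkillTemplate(
        "connector_insertion",
        [Subtask("move_to_part"), Subtask("insert", is_demonstration=True)],
    )
    demos = {"insert": [{"joint_trace": [0.1, 0.2]}, {"joint_trace": [0.0, 0.3]}]}
    print(train_and_execute(template, demos, CloudTrainer()))
```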
CONTROL TOWER AND ENTERPRISE MANAGEMENT PLATFORM FOR VALUE CHAIN NETWORKS
A value chain system that provides recommendations for designing a logistics system generally includes a machine learning system that trains machine-learned models that output logistics design recommendations based on training data sets that each respectively defines one or more features of a respective logistics system and an outcome relating to the respective logistics system; an artificial intelligence system that receives a request for a logistics system design recommendation and determines the logistics system design recommendation based on one or more of the machine-learned models and the request; and a digital twin system that generates an environment digital twin of a logistics environment that incorporates the logistics system design recommendation, and one or more physical asset digital twins of physical assets. The digital twin system executes a simulation based on the logistics environment digital twin and the one or more physical asset digital twins.
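A minimal sketch of that pipeline, assuming a simple regression model as the machine-learned component and a stub digital twin; the feature names, the scikit-learn model choice, and simulate_in_twin are all illustrative assumptions rather than the patent's design.

```python
from typing import Dict, List

# scikit-learn is assumed here purely for illustration.
from sklearn.linear_model import LinearRegression


def train_recommendation_model(feature_rows: List[Dict[str, float]],
                               outcomes: List[float]) -> LinearRegression:
    """Fit a model on (logistics features -> outcome) training rows."""
    X = [[row["num_warehouses"], row["fleet_size"]] for row in feature_rows]
    model = LinearRegression()
    model.fit(X, outcomes)
    return model


def recommend_design(model: LinearRegression,
                     candidates: List[Dict[str, float]]) -> Dict[str, float]:
    """Return the candidate logistics design with the best predicted outcome."""
    scored = [(model.predict([[c["num_warehouses"], c["fleet_size"]]])[0], c)
              for c in candidates]
    return max(scored, key=lambda pair: pair[0])[1]


def simulate_in_twin(design: Dict[str, float]) -> Dict[str, float]:
    """Stub environment digital twin; a real twin would simulate asset flows."""
    return {"simulated_throughput": design["fleet_size"] * 10.0}


rows = [{"num_warehouses": 2, "fleet_size": 10},
        {"num_warehouses": 3, "fleet_size": 15}]
model = train_recommendation_model(rows, outcomes=[0.7, 0.9])
best = recommend_design(model, rows)
print(best, simulate_in_twin(best))
```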
AUTOMATED ROBOTIC PROCESS SELECTION AND CONFIGURATION
A system for selection and configuration of an automated robotic process includes a media input module structured to receive at least one functional media; a media analysis module structured to analyze the at least one functional media and identify an action parameter; and a solution selection module structured to select at least one component of an AI solution for use in an automated robotic process, wherein the selection is based, at least in part, on the action parameter.
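The media-to-selection flow might look roughly like the sketch below; the action-parameter extraction and the component catalog are placeholders, and every name here is hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class AISolutionComponent:
    name: str
    supported_actions: List[str]


def analyze_media(functional_media: dict) -> str:
    """Media analysis module: derive an action parameter from the media.
    Here we simply read a labeled action; a real module would use perception."""
    return functional_media.get("labeled_action", "unknown")


def select_component(action_parameter: str,
                     catalog: List[AISolutionComponent]) -> Optional[AISolutionComponent]:
    """Solution selection module: pick a component that supports the action."""
    for component in catalog:
        if action_parameter in component.supported_actions:
            return component
    return None


catalog = [AISolutionComponent("grasp_planner", ["pick", "place"]),
           AISolutionComponent("weld_sequencer", ["weld"])]
action = analyze_media({"labeled_action": "pick"})
print(select_component(action, catalog))
```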
Machine learning methods and apparatus related to predicting motion(s) of object(s) in a robot's environment based on image(s) capturing the object(s) and based on parameter(s) for future robot movement in the environment
Some implementations of this specification are generally directed to deep machine learning methods and apparatus related to predicting motion(s) (if any) that will occur to object(s) in an environment of a robot in response to particular movement of the robot in the environment. Some implementations are directed to training a deep neural network model to predict at least one transformation (if any), of an image of a robot's environment, that will occur as a result of implementing at least a portion of a particular movement of the robot in the environment. The trained deep neural network model may predict the transformation based on input that includes the image and a group of robot movement parameters that define the portion of the particular movement.
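One way to picture such a model is a network conditioned on both an image and robot movement parameters. The PyTorch sketch below predicts a 2x3 affine transformation of the input image, but the architecture, input sizes, and output parameterization are illustrative assumptions rather than the trained model from the publication.

```python
import torch
import torch.nn as nn


class MotionTransformPredictor(nn.Module):
    def __init__(self, num_movement_params: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + num_movement_params, 64), nn.ReLU(),
            nn.Linear(64, 6),  # parameters of a 2x3 affine transformation
        )

    def forward(self, image: torch.Tensor, movement: torch.Tensor) -> torch.Tensor:
        features = self.encoder(image).flatten(1)                  # (B, 32)
        return self.head(torch.cat([features, movement], dim=1))   # (B, 6)


# Example: one 64x64 RGB image plus a 7-dimensional movement command.
model = MotionTransformPredictor()
transform_params = model(torch.randn(1, 3, 64, 64), torch.randn(1, 7))
print(transform_params.shape)
```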
Robot system, robot controller, robot control method, and robot program
It is possible to switch simply between a mode in which one robot performs an operation independently of another robot and a mode in which the one robot and the other robot perform an operation in cooperation. A robot system includes a first-type robot, a first-type control part which takes charge of drive control of the first-type robot, a second-type robot, and a second-type control part which takes charge of drive control of the second-type robot. When the first-type robot and the second-type robot perform an operation on the same object in cooperation, the control part in charge of drive control of the second-type robot is changed from the second-type control part to the first-type control part, and the first-type control part takes charge of drive control of both the first-type robot and the second-type robot.
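The handoff of drive control can be sketched as below; ControlPart and the enter/exit functions are hypothetical names used only to illustrate reassigning the second robot from its own control part to the first robot's control part during cooperation.

```python
class ControlPart:
    """Stand-in for a control part that takes charge of one or more robots."""

    def __init__(self, name: str):
        self.name = name
        self.robots = []

    def take_charge(self, robot: str) -> None:
        self.robots.append(robot)

    def release(self, robot: str) -> None:
        if robot in self.robots:
            self.robots.remove(robot)


def enter_cooperative_mode(first_ctrl: ControlPart, second_ctrl: ControlPart,
                           second_robot: str) -> None:
    """Reassign the second robot's drive control to the first control part."""
    second_ctrl.release(second_robot)
    first_ctrl.take_charge(second_robot)


def exit_cooperative_mode(first_ctrl: ControlPart, second_ctrl: ControlPart,
                          second_robot: str) -> None:
    """Return the second robot to independent operation."""
    first_ctrl.release(second_robot)
    second_ctrl.take_charge(second_robot)


first, second = ControlPart("first_type"), ControlPart("second_type")
first.take_charge("robot_A")
second.take_charge("robot_B")
enter_cooperative_mode(first, second, "robot_B")
print(first.robots, second.robots)   # first now drives both robots
```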
MODULAR ROBOT CONTROL METHOD AND SYSTEM
The present disclosure relates to the field of modular robot control, and more particularly, to a method for controlling a modular robot and a system thereof. The method includes the following steps: T1: providing a plurality of module units; T2: assembling the plurality of module units into an initial entity structure; T3: acquiring initial virtual configuration information of the initial entity structure; T4: generating an initial virtual configuration based on the initial virtual configuration information; T5: setting an action frame to generate preset action control information; and T6: transmitting the preset action control information to the modular robot which executes a motion according to the preset action control information.
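A compact sketch of steps T1 through T6 follows; the ModuleUnit structure, the virtual-configuration format, and the action-frame encoding are assumptions made for illustration only.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ModuleUnit:
    module_id: int
    module_type: str


def acquire_virtual_configuration(entity_structure: List[ModuleUnit]) -> Dict:
    """T3/T4: read how the assembled modules are connected and build a virtual
    configuration that mirrors the initial entity structure."""
    return {"modules": [(m.module_id, m.module_type) for m in entity_structure]}


def build_action_frames(virtual_config: Dict) -> List[Dict[int, float]]:
    """T5: author action frames (here, per-module target angles) against the
    virtual configuration; a single trivial frame for illustration."""
    return [{module_id: 0.0 for module_id, _ in virtual_config["modules"]}]


def transmit_and_execute(frames: List[Dict[int, float]]) -> None:
    """T6: transmit preset action control information to the modular robot."""
    for frame in frames:
        print("executing frame:", frame)


units = [ModuleUnit(1, "joint"), ModuleUnit(2, "joint")]   # T1/T2: assembled units
config = acquire_virtual_configuration(units)
transmit_and_execute(build_action_frames(config))
```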
Augmented Reality System and Method for Conveying To a Human Operator Information Associated With Human-Imperceptible Indicia within an Operating Environment of a Robot
A robotic system comprising a robot, and human-imperceptible indicia associated with an object within an environment in which the robot operates, the human-imperceptible indicia comprising or linking to interaction information pertaining to a predetermined intended interaction of the robot with the object, the interaction information being operable to facilitate interaction with the object by the robot in accordance with the predetermined intended interaction. The system can further comprise at least one sensor operable to sense the human-imperceptible indicia and the interaction information, and an augmented reality system comprising a computer for conveying human-understandable information, associated with at least one of the interaction information or the linking information, to a human operator. The machine-readable indicia can comprise symbols that can be sensed by a sensor of the robot or the augmented reality system and interpreted by the robot or the augmented reality system. The robot can utilize a camera to transmit a real-world view of the operating environment to the augmented reality system, which can be combined with the human-understandable information to provide augmented reality operation of the robot.
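A very rough sketch of the indicia-to-overlay path: decode a marker payload, resolve the interaction information it links to, and render a human-understandable string for the augmented reality display. The marker format and lookup table are hypothetical.

```python
from typing import Dict, Optional

# Hypothetical lookup from marker payloads to interaction information.
INTERACTION_DB: Dict[str, Dict[str, str]] = {
    "marker:valve-17": {"action": "rotate_cw_90", "tool": "gripper_v2"},
}


def resolve_interaction(marker_payload: str) -> Optional[Dict[str, str]]:
    """Return the interaction information the sensed indicia link to, if any."""
    return INTERACTION_DB.get(marker_payload)


def human_readable_overlay(marker_payload: str) -> str:
    """Produce the human-understandable text shown by the AR system."""
    info = resolve_interaction(marker_payload)
    if info is None:
        return "No interaction information found."
    return f"Robot intends: {info['action']} using {info['tool']}"


print(human_readable_overlay("marker:valve-17"))
```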
User-assisted robotic control systems
Exemplary embodiments relate to user-assisted robotic control systems, user interfaces for remote control of robotic systems, vision systems in robotic control systems, and modular grippers for use by robotic systems. Systems, methods, apparatuses and computer-readable media instructions are disclosed for interactions with and control of robotic systems, in particular, pick and place systems using soft robotic actuators to grasp, move and release target objects.
Determining and utilizing corrections to robot actions
Methods, apparatus, and computer-readable media for determining and utilizing human corrections to robot actions. In some implementations, in response to determining a human correction of a robot action, a correction instance is generated that includes sensor data, captured by one or more sensors of the robot, that is relevant to the corrected action. The correction instance can further include determined incorrect parameter(s) utilized in performing the robot action and/or correction information that is based on the human correction. The correction instance can be utilized to generate training example(s) for training one or more model(s), such as neural network model(s), corresponding to those used in determining the incorrect parameter(s). In various implementations, the training is based on correction instances from multiple robots. After a revised version of a model is generated, the revised version can thereafter be utilized by one or more of the multiple robots.
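The correction-instance idea can be sketched as a small data structure plus helpers that turn instances into supervised training examples and pool them across a fleet; the field names and example format are assumptions, not the publication's schema.

```python
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class CorrectionInstance:
    sensor_data: Dict[str, Any]           # sensor data relevant to the corrected action
    incorrect_parameters: Dict[str, Any]  # parameter(s) the model got wrong
    correction_info: Dict[str, Any]       # information derived from the human correction


def to_training_examples(instance: CorrectionInstance) -> List[Dict[str, Any]]:
    """Convert a correction instance into supervised training examples:
    input = sensor data, target = corrected parameters."""
    return [{"input": instance.sensor_data, "target": instance.correction_info}]


def aggregate_from_fleet(instances: List[CorrectionInstance]) -> List[Dict[str, Any]]:
    """Pool correction instances from multiple robots; the revised model trained
    on the pooled examples would then be redistributed to those robots."""
    examples: List[Dict[str, Any]] = []
    for inst in instances:
        examples.extend(to_training_examples(inst))
    return examples
```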
Method and Device for Creating a Robot Control Program
The present disclosure relates to a method for creating a robot control program for operating a machine tool, in particular a bending machine, having the steps of generating image material of a machining operation of a workpiece on the machine tool by means of at least one optical sensor; extracting at least one part of the workpiece and/or at least one part of a hand of an operator handling the workpiece from the image material; generating a trajectory and/or a sequence of movement points of at least one part of the workpiece and/or at least one part of a hand of an operator from the extracted image material; and creating a robot control program by reverse transformation of the trajectory and/or the sequence of movement points.
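Under stated assumptions (a known camera-to-robot transform and a made-up move-command format), the extraction-to-program pipeline might be sketched as follows; the vision step is stubbed out and all names are illustrative.

```python
from typing import List, Tuple

import numpy as np

Point = Tuple[float, float, float]


def extract_trajectory(image_frames: List[np.ndarray]) -> List[Point]:
    """Stand-in for the vision step: a real system would segment the workpiece
    or the operator's hand in each frame and return its 3D position."""
    return [(0.1 * i, 0.0, 0.2) for i, _ in enumerate(image_frames)]


def reverse_transform(points: List[Point],
                      camera_to_robot: np.ndarray) -> List[Point]:
    """Map camera-frame movement points into the robot base frame."""
    robot_points = []
    for p in points:
        homogeneous = np.array([*p, 1.0])
        robot_points.append(tuple((camera_to_robot @ homogeneous)[:3]))
    return robot_points


def to_control_program(points: List[Point]) -> List[str]:
    """Emit one move command per movement point (illustrative command format)."""
    return [f"MOVE_LINEAR {x:.3f} {y:.3f} {z:.3f}" for x, y, z in points]


frames = [np.zeros((480, 640, 3)) for _ in range(3)]
camera_to_robot = np.eye(4)   # assumed extrinsic calibration
print(to_control_program(reverse_transform(extract_trajectory(frames), camera_to_robot)))
```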