Patent classifications
G05B2219/39298
SYSTEMS AND METHODS FOR LEARNING TO EXTRAPOLATE OPTIMAL OBJECT ROUTING AND HANDLING PARAMETERS
A system for object processing is disclosed. The system includes a framework of processes that enable reliable deployment of artificial intelligence-based policies in a warehouse setting to improve the speed, reliability, and accuracy of the system. The system harnesses a vast number of picks to provide data points to machine learning techniques. These machine learning techniques use the data to refine or reinforce in-use policies to optimize the speed and successful transfer of objects within the system. For example, objects in the system are identified at a supply location, and a predetermined set of information regarding each object is retrieved and combined with a set of object information and processing parameters determined by the system. The combined information is then used to determine routing of the object according to an initial policy. This policy is then observed, altered, tested, and re-implemented in an altered form.
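The observe-alter-test loop described in this abstract can be sketched, purely illustratively, as a weighted routing policy that is reinforced or weakened by pick outcomes. All names, weights, and the scoring rule below are hypothetical stand-ins, not the patented system:

```python
# Illustrative sketch only: route an object by scoring candidate routes
# against combined object information, then refine the policy from outcomes.

def route(object_info: dict, observed: dict, policy: dict) -> str:
    """Pick a route by scoring each candidate route's feature weights."""
    features = {**object_info, **observed}          # combine stored + observed data
    scores = {r: sum(w * features.get(k, 0.0) for k, w in weights.items())
              for r, weights in policy.items()}
    return max(scores, key=scores.get)              # highest-scoring route wins

def refine(policy: dict, route_name: str, features: dict,
           success: bool, lr: float = 0.1) -> None:
    """Reinforce (or weaken) the weights of the route that was actually used."""
    for k in policy[route_name]:
        policy[route_name][k] += lr * (1.0 if success else -1.0) * features.get(k, 0.0)
```

In this toy form, successful picks strengthen the association between an object's features and the chosen route, which is one simple reading of "refine or reinforce in-use policies."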
MACHINE LEARNING METHODS AND APPARATUS FOR AUTOMATED ROBOTIC PLACEMENT OF SECURED OBJECT IN APPROPRIATE LOCATION
Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates the likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At many iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action. When at least one release criterion is satisfied, control commands can be provided to cause the end effector to release the object, thereby leading to the object being placed in the target placement location.
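The select-best-action-then-release loop in this abstract can be sketched as follows. The `predict_success` model here is a hypothetical one-dimensional stand-in for the trained network, and the release threshold is an invented example of "at least one release criterion":

```python
# Illustrative sketch only: score candidate end-effector motions with a
# placement-success predictor, apply the best one, release above a threshold.

def predict_success(image: dict, action: float, target: float) -> float:
    """Stand-in model: prefers motions that shrink the distance to target."""
    dist_after = abs((image["effector_pos"] + action) - target)
    return 1.0 / (1.0 + dist_after)

def best_action(image: dict, candidates: list, target: float) -> float:
    """Select the candidate action with the highest predicted success."""
    return max(candidates, key=lambda a: predict_success(image, a, target))

def placement_step(image, candidates, target, release_threshold=0.9):
    """One control iteration: chosen action plus a release decision."""
    action = best_action(image, candidates, target)
    release = predict_success(image, action, target) >= release_threshold
    return action, release
```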
Workpiece picking device and workpiece picking method for improving picking operation of workpieces
A workpiece picking device includes a sensor measuring a plurality of workpieces randomly piled in a three-dimensional space; a robot handling the workpieces; a hand mounted to the robot to hold the workpieces; a holding position posture calculation unit calculating holding position posture data of a position and a posture for the robot to hold the workpieces based on an output of the sensor; a loading state improvement operation generation unit generating loading state improvement operation data for improving a loading state of the workpieces by the robot based on an output of the sensor; and a robot control unit controlling the robot and the hand. The robot control unit controls the robot and the hand based on an output of the holding position posture calculation unit and the loading state improvement operation generation unit to pick the workpieces or perform a loading state improvement operation.
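The pick-or-improve decision this abstract describes can be sketched with sensor output mocked as per-workpiece graspability scores. The scores, threshold, and operation names are illustrative assumptions, not the patented method:

```python
# Illustrative sketch only: if a graspable holding pose exists, pick;
# otherwise generate a loading-state improvement operation (e.g. nudge the pile).

def calculate_holding_pose(scores: list, threshold: float = 0.5):
    """Return the index of the most graspable workpiece, or None if none qualifies."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= threshold else None

def control_step(scores: list):
    """One robot-control decision based on the sensor's graspability scores."""
    pose = calculate_holding_pose(scores)
    return ("pick", pose) if pose is not None else ("improve_loading_state", None)
```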
TEACHING DEVICE, TEACHING METHOD, AND STORAGE MEDIUM STORING TEACHING PROGRAM FOR LASER MACHINING
Provided is a teaching device including a grouping unit which divides machining points into machining point groups so that a machining head can sequentially machine each machining point for its machining time and so that the non-machining time is minimized, a machining path determination unit which determines, for each machining point group, a machining path on which the in-group movement time of a robot is shortest, a teaching process adjustment unit which adjusts a machining order of the machining points and an operation order of the machining point groups so as to minimize the distance between groups, and which optimizes the grouping so as to minimize the total movement time for completing machining, and a teaching data output unit which outputs, as teaching data, machining execution positions on the machining path obtained as a result of the processing of the teaching process adjustment unit.
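The two steps the abstract names, dividing machining points into groups and ordering each group to shorten in-group movement, can be sketched in a simplified one-dimensional form. The consecutive-chunk grouping and greedy nearest-neighbour ordering below are illustrative heuristics, not the patented optimization:

```python
# Illustrative sketch only: group machining points, then order each group
# with a greedy nearest-neighbour tour to shorten in-group movement time.

def group_points(points: list, group_size: int) -> list:
    """Split machining points into consecutive groups of at most group_size."""
    return [points[i:i + group_size] for i in range(0, len(points), group_size)]

def order_group(group: list) -> list:
    """Greedy nearest-neighbour ordering of 1-D machining points."""
    remaining = list(group)
    path = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda p: abs(p - path[-1]))
        remaining.remove(nxt)
        path.append(nxt)
    return path
```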
TEACHING DEVICE FOR LASER MACHINING
A teaching device for a laser machining system which performs laser machining on a workpiece while moving an irradiation position of laser light using a robot includes a graphical user interface processing unit which displays, arranged in time series in a band-like region in a distinguishable manner, machining periods, in each of which machining is performed by irradiating a corresponding one of a plurality of machining points set for the workpiece with the laser light while the robot moves along a machining path, and the non-machining intervals between those machining periods.
TEACHING DEVICE AND TEACHING METHOD FOR TEACHING OPERATION OF LASER PROCESSING DEVICE
A teaching device is provided with a processor. The processor generates a path image showing a moving path MP along which a laser processing device moves laser light with respect to a workpiece during laser processing, generates an input image for inputting a data set of a progress parameter indicating progress of the laser processing and a laser parameter of the laser light, and displays, on the path image, a position on the moving path MP corresponding to the progress parameter.
Task and process mining by robotic process automations across a computing environment
Disclosed herein is a method implemented by a task mining engine. The task mining engine is stored as processor-executable code on a memory. The processor-executable code is executed by a processor that is communicatively coupled to the memory. The method includes receiving recorded user tasks identifying user activity with respect to a computing environment and clustering the recorded user tasks into steps by processing and scoring each recorded user task. The method also includes extracting step sequences that identify similar combinations or repeated combinations of the steps to mimic the user activity.
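The score-cluster-extract pipeline in this abstract can be sketched with a toy scoring function and a simple tolerance-based clustering. The similarity rule and n-gram extraction below are illustrative assumptions about what "clustering into steps" and "extracting step sequences" might look like:

```python
# Illustrative sketch only: cluster scored tasks into steps, then collect
# repeated step n-grams as candidate "step sequences."

def cluster_tasks(tasks: list, score, tolerance: float = 1.0) -> list:
    """Group tasks whose scores fall within `tolerance` of a cluster's seed task."""
    clusters = []
    for task in sorted(tasks, key=score):
        if clusters and abs(score(task) - score(clusters[-1][0])) <= tolerance:
            clusters[-1].append(task)
        else:
            clusters.append([task])
    return clusters

def extract_sequences(steps: list, length: int = 2) -> set:
    """Collect step n-grams that occur more than once (repeated combinations)."""
    grams = [tuple(steps[i:i + length]) for i in range(len(steps) - length + 1)]
    return {g for g in grams if grams.count(g) > 1}
```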
Deep reinforcement learning for robotic manipulation
Implementations utilize deep reinforcement learning to train a policy neural network that parameterizes a policy for determining a robotic action based on a current state. Some of those implementations collect experience data from multiple robots that operate simultaneously. Each robot generates instances of experience data during iterative performance of episodes that are each explorations of performing a task, and that are each guided based on the policy network and the current policy parameters for the policy network during the episode. The collected experience data is generated during the episodes and is used to train the policy network by iteratively updating policy parameters of the policy network based on a batch of collected experience data. Further, prior to performance of each of a plurality of episodes performed by the robots, the current updated policy parameters can be provided (or retrieved) for utilization in performance of the episode.
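The training loop in this abstract, with multiple robots generating episodes under the current policy parameters and a pooled batch updating those parameters, can be sketched in a deliberately tiny form. The one-dimensional "policy", the reward model, and the best-of-batch update are hypothetical stand-ins for the policy neural network and its gradient updates:

```python
# Illustrative sketch only: several simulated robots collect experience under
# the current policy parameter; a batch of that experience updates the parameter.

import random

def run_episode(policy_param: float, rng: random.Random):
    """One exploration episode guided by the current policy parameter."""
    action = policy_param + rng.uniform(-1.0, 1.0)   # policy plus exploration noise
    reward = -abs(action - 5.0)                      # toy task: reach the value 5.0
    return (action, reward)

def update(policy_param: float, batch: list, lr: float = 0.5) -> float:
    """Move the parameter toward the best-rewarded action in the batch."""
    best_action, _ = max(batch, key=lambda e: e[1])
    return policy_param + lr * (best_action - policy_param)

def train(num_robots: int = 4, iterations: int = 30, seed: int = 0) -> float:
    """Iterate: collect a batch from all robots, then update the shared policy."""
    rng = random.Random(seed)
    param = 0.0
    for _ in range(iterations):
        batch = [run_episode(param, rng) for _ in range(num_robots)]
        param = update(param, batch)
    return param
```

Even in this toy form, the structure mirrors the abstract: experience is generated per episode under the current parameters, pooled, and used for the next parameter update.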
Simulation apparatus
A simulation apparatus includes a machine learning device for learning a change in a machining route in machining of a workpiece. The machine learning device observes data indicating the changed machining route and data indicating a machining condition of the workpiece as a state variable, and also acquires determination data for determining whether or not a cycle time obtained by simulation using the changed machining route is appropriate, and learns by associating the machining condition of the workpiece with the change in the machining route, using the state variable and the determination data.
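The learning cycle this abstract describes, observing the changed route and machining condition as a state, labeling it with whether the simulated cycle time is appropriate, and learning the association, can be sketched with a simple tally-based learner. The cycle-time limit and success-rate model are illustrative stand-ins for the machine learning device:

```python
# Illustrative sketch only: learn which (condition, route) pairs tend to
# produce an appropriate simulated cycle time.

def determination(cycle_time: float, limit: float = 10.0) -> bool:
    """Determination data: is the simulated cycle time appropriate?"""
    return cycle_time <= limit

class RouteLearner:
    def __init__(self):
        self.outcomes = {}  # (condition, route) -> (appropriate_count, total_count)

    def learn(self, condition: str, route: str, cycle_time: float) -> None:
        """Associate the machining condition with the route change and its outcome."""
        ok, total = self.outcomes.get((condition, route), (0, 0))
        self.outcomes[(condition, route)] = (ok + determination(cycle_time), total + 1)

    def success_rate(self, condition: str, route: str) -> float:
        ok, total = self.outcomes.get((condition, route), (0, 0))
        return ok / total if total else 0.0
```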
Reduced degree of freedom robotic controller apparatus and methods
Apparatus and methods for training and controlling of, for instance, robotic devices. In one implementation, a robot may be trained by a user using supervised learning. The user may be unable to control all degrees of freedom of the robot simultaneously. The user may interface with the robot via a control apparatus configured to select and operate a subset of the robot's complement of actuators. The robot may comprise an adaptive controller comprising a neuron network. The adaptive controller may be configured to generate actuator control commands based on the user input and output of the learning process. Training of the adaptive controller may comprise partial set training. The user may train the adaptive controller to operate a first actuator subset. Subsequent to learning to operate the first subset, the adaptive controller may be trained to operate another subset of degrees of freedom based on user input via the control apparatus.
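The partial set training this abstract describes, learning one actuator subset from user commands and then a second subset while keeping earlier-learned mappings, can be sketched as follows. The per-actuator "gain" model is a hypothetical stand-in for the neuron network:

```python
# Illustrative sketch only: supervised training restricted to one actuator
# subset at a time; previously trained actuators are left untouched.

class AdaptiveController:
    def __init__(self, num_actuators: int):
        self.gains = [0.0] * num_actuators  # learned command gain per actuator

    def train_subset(self, subset, user_pairs, lr=0.5, epochs=20):
        """Supervised learning on (user_input, target) pairs, only for `subset`."""
        for _ in range(epochs):
            for user_input, target in user_pairs:
                for i in subset:
                    error = target - self.gains[i] * user_input
                    self.gains[i] += lr * error * user_input

    def command(self, user_input: float) -> list:
        """Generate an actuator control command from user input."""
        return [g * user_input for g in self.gains]
```

Training the second subset after the first leaves the first subset's learned gains unchanged, which is the core of the partial-set idea.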