Patent classifications
B25J9/1669
SYSTEM AND METHOD FOR CONTROLLING THE ROBOT, ELECTRONIC DEVICE AND COMPUTER READABLE MEDIUM
Systems, devices, and methods for controlling a robot. Some methods include, in response to determining that an object has entered a reachable area of the robot, triggering a first sensor to sense a movement of the object; determining first position information of the object based on first data received from the first sensor; determining second position information of the object based on second data received from a second sensor; and generating a first prediction of a target position at which the object is to be operated on by the robot. In this way, the robot can complete an operation on the object carried by an automated guided vehicle (AGV) within the limited operation time during which the AGV passes through the reachable area of the robot. Meanwhile, by collecting sensing data from different sensor groups, the target position at which the object is handled by the robot may be predicted more accurately.
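The fusion-and-prediction step described in this abstract could be sketched roughly as follows. This is an illustrative approximation, not the patented method; the function name, the weighted-average fusion, and the constant-velocity extrapolation are all assumptions.

```python
import numpy as np

def predict_target_position(pos1, pos2, velocity, horizon, w1=0.5):
    """Fuse two sensor-derived position estimates with a weighted average,
    then extrapolate the object's position `horizon` seconds ahead under
    a constant-velocity assumption."""
    fused = w1 * np.asarray(pos1, dtype=float) + (1.0 - w1) * np.asarray(pos2, dtype=float)
    return fused + np.asarray(velocity, dtype=float) * horizon
```

A Kalman filter would be the more typical choice for combining two noisy position sources, but the weighted average keeps the sketch minimal.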
REAL-TIME PREDICTOR OF HUMAN MOVEMENT IN SHARED WORKSPACES
Disclosed herein are systems, devices, and methods for real-time determinations of likelihoods for possible trajectories of a collaborator in a workspace with a robot. The system determines a current kinematic state of the collaborator and determines a goal of the collaborator based on occupancy information about objects in the workspace. The system also determines a possible trajectory for the collaborator based on the goal and the current kinematic state and determines a short-horizon trajectory for the collaborator based on previously observed kinematic states of the collaborator towards the goal. The system also determines a likelihood that the collaborator will follow the possible trajectory based on the short-horizon trajectory, the goal, and the current kinematic state. The system also generates a movement instruction to control movement of the robot based on the likelihood that the collaborator will follow the possible trajectory.
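One plausible way to realize the likelihood step above is to compare each candidate trajectory's initial segment against the observed short-horizon trajectory and normalize the resulting scores into a distribution. This is a hedged sketch; the exponential scoring, the `beta` temperature parameter, and the mean point-wise distance are assumptions, not details from the disclosure.

```python
import numpy as np

def trajectory_likelihoods(candidates, short_horizon, beta=1.0):
    """Assign each candidate trajectory a likelihood based on how closely
    its first K points match the observed short-horizon trajectory.
    candidates: list of (T, D) arrays; short_horizon: (K, D) array, K <= T."""
    sh = np.asarray(short_horizon, dtype=float)
    k = len(sh)
    # Mean point-wise distance between each candidate's first K points
    # and the short-horizon trajectory.
    dists = np.array([np.linalg.norm(np.asarray(c, dtype=float)[:k] - sh, axis=1).mean()
                      for c in candidates])
    scores = np.exp(-beta * dists)   # closer match -> higher score
    return scores / scores.sum()     # normalize to a probability distribution
```

The robot's movement instruction could then, for instance, slow the arm when the most likely trajectory crosses its planned path.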
MULTI-DIRECTIONAL THREE-DIMENSIONAL PRINTING WITH A DYNAMIC SUPPORTING BASE
A computer-implemented dynamic supporting base creation method that interacts with a three-dimensional (3D) printer that prints an object. The method includes providing physical support for the object, via a first robotic gripper, during 3D printing using a printing head of the 3D printer, and transferring the object to a second robotic gripper to provide physical support at a different location on the object.
Splitting transformers for robotics planning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for optimizing a plan for one or more robots using a process definition graph. One of the methods includes receiving a process definition graph for a robot, the process definition graph having a plurality of action nodes. One or more of the action nodes are motion nodes that represent a motion to be taken by the robot from a respective start location to an end location. It is determined that a motion node satisfies one or more splitting criteria, and in response to determining that the motion node satisfies the one or more splitting criteria, the process definition graph is modified. Modifying the process definition graph includes splitting the motion node into two or more separate motion nodes whose respective paths can be scheduled independently.
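The splitting operation described above can be illustrated with a minimal sketch. The dictionary-based graph, the waypoint argument, and the `_a`/`_b` naming are illustrative assumptions; a real process definition graph would also carry edges, constraints, and scheduling metadata.

```python
def split_motion_node(graph, node_id, waypoint):
    """Replace one motion node (start -> end) with two motion nodes
    (start -> waypoint, waypoint -> end) whose paths can be scheduled
    independently. `graph` maps node ids to dicts with 'start' and 'end'."""
    node = graph.pop(node_id)
    graph[node_id + "_a"] = {"start": node["start"], "end": waypoint}
    graph[node_id + "_b"] = {"start": waypoint, "end": node["end"]}
    return graph
```

Splitting at a waypoint lets a scheduler interleave other robots' actions between the two sub-motions, which is the stated benefit of independent scheduling.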
SYSTEMS AND METHODS FOR OBJECT DETECTION
A computing system including a processing circuit in communication with a camera having a field of view. The processing circuit is configured to perform operations related to detecting, identifying, and retrieving objects disposed amongst a plurality of objects. The processing circuit may be configured to perform operations related to object recognition template generation, feature generation, hypothesis generation, hypothesis refinement, and hypothesis validation.
Feature detection by deep learning and vector field estimation
A system and method for extracting features from a 2D image of an object using a deep learning neural network and a vector field estimation process. The method includes extracting a plurality of possible feature points, generating a mask image that defines the pixels in the 2D image where the object is located, and generating a vector field image for each extracted feature point that includes an arrow directed towards the extracted feature point. The method also includes generating a vector intersection image by identifying, for every combination of two pixels in the 2D image, the point where their arrows intersect. The method assigns a score to each intersection point depending on the distance from each of the two pixels to that intersection point, and generates a point voting image that identifies a feature location from a cluster of points.
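The core geometric step above, finding where the arrows cast by two pixels intersect, can be sketched as a 2D ray intersection. This is an illustrative reconstruction, not the patented algorithm; the function name and the parallel-direction tolerance are assumptions.

```python
import numpy as np

def ray_intersection(p1, d1, p2, d2, eps=1e-9):
    """Intersection of the 2D line through p1 along direction d1 with the
    line through p2 along d2. Returns None if the directions are
    (near) parallel. Each intersection point would then be scored and
    accumulated into a voting image."""
    # Solve p1 + t*d1 = p2 + s*d2 for (t, s).
    A = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]], dtype=float)
    b = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    if abs(np.linalg.det(A)) < eps:
        return None
    t, _ = np.linalg.solve(A, b)
    return np.asarray(p1, dtype=float) + t * np.asarray(d1, dtype=float)
```

Repeating this over pixel pairs and clustering the weighted intersections is the essence of vector-field (pixel-voting) keypoint localization.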
Generation method for training dataset, model generation method, training data generation apparatus, inference apparatus, robotic controller, model training method and robot
One aspect of the present disclosure relates to a generation method for a training dataset, comprising: capturing, by one or more processors, a target object to which a marker unit recognizable under a first illumination condition is provided; and acquiring, by the one or more processors, a first image where the marker unit is recognizable and a second image obtained by capturing the target object under a second illumination condition.
Assembly planning device, assembly planning method, and non-transitory computer-readable storage medium
An assembly planning device: holds a partial-order graph showing assembly operations for parts and an order constraint based on a partial-order relationship among the assembly operations, an operation sequence template showing an operation sequence and the required time of that sequence, and part information showing which parts can be assembled by which robots; refers to the part information to generate allocation plans in which the robots are allocated to the assembly operations shown by the partial-order graph; for each of the allocation plans, refers to the operation sequence template to allocate an operation sequence to each assembly operation shown by the allocation plan; calculates movement times of the robots in the allocated operation sequences; calculates an operation time for each allocation plan based on the movement times and the required time shown by the operation sequence template; and selects an allocation plan based on the operation times.
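The plan-selection step at the end of this abstract can be sketched as a minimum over candidate plans of summed movement and required times. This is an assumed simplification: the callable interfaces and the flat list-of-pairs plan representation are illustrative, and the real device also enforces the partial-order constraints, which the sketch omits.

```python
def select_allocation_plan(plans, movement_time, required_time):
    """Pick the allocation plan with the smallest total operation time.
    plans: list of plans, each a list of (robot, operation) pairs.
    movement_time(robot, operation) and required_time(operation) are
    callables returning times in seconds."""
    def plan_time(plan):
        return sum(movement_time(r, op) + required_time(op) for r, op in plan)
    return min(plans, key=plan_time)
```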
Systems and methods for automated operation and handling of autonomous trucks and trailers hauled thereby
A system and method for operating an autonomous vehicle (AV) yard truck is provided. A processor facilitates autonomous movement of the AV yard truck and its connection to and disconnection from trailers. A plurality of sensors interconnected with the processor sense terrain and objects and assist in automatically connecting and disconnecting trailers. A server, interconnected wirelessly with the processor, tracks movement of the truck around the yard and determines locations for trailer connection and disconnection. A door station unlatches and opens the rear doors of a trailer when the truck is adjacent thereto, securing them in an opened position via clamps or similar mechanisms. The system computes the height of the trailer, determines whether the trailer's landing gear is on the ground, interoperates with the fifth wheel to change height, determines whether docking is safe, allows a user to take manual control, and computes optimum charge time(s). Reversing sensors and safety features, automated chocking, and intermodal container organization are also provided.
Learning device, learning method, learning model, detection device and grasping system
An estimation device includes a memory and at least one processor. The at least one processor is configured to acquire information regarding a target object. The at least one processor is configured to estimate information regarding a location and a posture of a gripper relating to where the gripper is able to grasp the target object. The estimation is based on an output of a neural model having as an input the information regarding the target object. The estimated information regarding the posture includes information capable of expressing a rotation angle around a plurality of axes.
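One common way to "express a rotation angle around a plurality of axes", as the estimated gripper posture above requires, is to compose per-axis rotations into a single rotation matrix. This is only an illustrative convention (x-y-z Euler angles); the disclosure does not specify the representation, and quaternions or 6D rotation encodings are equally valid choices.

```python
import numpy as np

def euler_to_matrix(roll, pitch, yaw):
    """Compose a 3x3 rotation matrix from rotation angles (radians)
    around the x, y, and z axes, applied in that order."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```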