Patent classifications
G05B2219/40532
HIGH-LEVEL SENSOR FUSION AND MULTI-CRITERIA DECISION MAKING FOR AUTONOMOUS BIN PICKING
In described embodiments of a method for executing autonomous bin picking, a physical environment comprising a bin containing a plurality of objects is perceived by one or more sensors. Multiple artificial intelligence (AI) modules process the sensor data to compute grasping alternatives and, in some embodiments, to detect objects of interest. A high-level sensor fusion (HLSF) module consolidates the outputs of the AI modules into grasping alternatives and their attributes. A multi-criteria decision making (MCDM) module ranks the grasping alternatives and selects the one that maximizes the application utility while satisfying specified constraints.
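As an illustration of the ranking step, the sketch below scores grasp candidates with a weighted-sum utility and filters them by a hard collision constraint; the attribute names, weights, and constraint threshold are assumptions for illustration, not details taken from the patent.

```python
# Minimal sketch (not the patented implementation) of ranking fused grasp
# candidates with a weighted-sum multi-criteria score under a hard constraint.
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    pose: tuple             # (x, y, z, rx, ry, rz) in the bin frame
    quality: float          # fused grasp-quality estimate, 0..1
    reachability: float     # kinematic reachability score, 0..1
    collision_risk: float   # estimated collision probability, 0..1

def rank_grasps(candidates, weights=(0.5, 0.3, 0.2), max_collision_risk=0.1):
    """Return candidates that satisfy the collision constraint,
    sorted by a weighted-sum utility (highest first)."""
    feasible = [c for c in candidates if c.collision_risk <= max_collision_risk]
    w_q, w_r, w_c = weights
    def utility(c):
        return w_q * c.quality + w_r * c.reachability - w_c * c.collision_risk
    return sorted(feasible, key=utility, reverse=True)

# Example: pick the best grasp from three fused alternatives.
grasps = [
    GraspCandidate((0.1, 0.2, 0.05, 0, 0, 0), quality=0.9, reachability=0.6, collision_risk=0.05),
    GraspCandidate((0.0, 0.1, 0.04, 0, 0, 0), quality=0.7, reachability=0.9, collision_risk=0.02),
    GraspCandidate((0.2, 0.0, 0.06, 0, 0, 0), quality=0.8, reachability=0.8, collision_risk=0.20),
]
best = rank_grasps(grasps)[0]
```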
METHOD OF ACQUIRING SENSOR DATA ON A CONSTRUCTION SITE, CONSTRUCTION ROBOT SYSTEM, COMPUTER PROGRAM PRODUCT, AND TRAINING METHOD
A method of acquiring sensor data on a construction site by at least one sensor of a construction robot system comprising at least one construction robot is provided, wherein the sensor is controlled by a trainable agent, thereby improving the quality of the acquired sensor data. A construction robot system, a computer program product, and a training method are also provided.
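To make the trainable-agent idea concrete, here is a minimal sketch in which a bandit-style agent chooses among a few sensor configurations and is rewarded by a data-quality score; the configuration names and the quality metric are placeholders, not the claimed training method.

```python
# Minimal sketch, assuming a bandit-style trainable agent that picks a sensor
# configuration and is rewarded by a data-quality score; names are illustrative.
import random

class SensorAgent:
    def __init__(self, configs, epsilon=0.1):
        self.configs = configs
        self.epsilon = epsilon
        self.value = {c: 0.0 for c in configs}   # running mean reward per config
        self.count = {c: 0 for c in configs}

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.configs)                    # explore
        return max(self.configs, key=lambda c: self.value[c])     # exploit

    def update(self, config, reward):
        self.count[config] += 1
        n = self.count[config]
        self.value[config] += (reward - self.value[config]) / n   # incremental mean

def data_quality(scan):
    # Placeholder metric: e.g., fraction of valid depth points in the scan.
    return sum(scan) / len(scan)

agent = SensorAgent(configs=["low_res_fast", "high_res_slow", "hdr"])
for _ in range(100):
    cfg = agent.select()
    scan = [random.random() for _ in range(64)]   # stand-in for a real acquisition
    agent.update(cfg, data_quality(scan))
```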
GRIPPING SYSTEM WITH MACHINE LEARNING
A gripping system includes a hand that grips a workpiece, a robot that supports the hand and changes at least one of a position and a posture of the hand, and an image sensor that acquires image information from a viewpoint interlocked with at least one of the position and the posture of the hand. Additionally, the gripping system includes a construction module that constructs a model by machine learning based on collection data. The model corresponds to at least a part of a process of specifying an operation command of the robot based on the image information acquired by the image sensor and hand position information representing at least one of the position and the posture of the hand. An operation module derives the operation command of the robot based on the image information, the hand position information, and the model, and a robot control module operates the robot based on the operation command derived by the operation module.
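The data flow described above can be sketched as follows, assuming a placeholder learned model that maps image features and hand pose to an operation command; the module boundaries mirror the abstract, but the model and interfaces are illustrative only.

```python
# Minimal sketch of the data flow: learned model -> operation module -> robot control.
import numpy as np

class GraspModel:
    """Stand-in for a model constructed by machine learning from
    collected (image, hand pose, command) data."""
    def __init__(self, n_features, n_pose, n_command, rng=np.random.default_rng(0)):
        self.W = rng.normal(scale=0.01, size=(n_command, n_features + n_pose))

    def predict(self, image_features, hand_pose):
        x = np.concatenate([image_features, hand_pose])
        return self.W @ x                      # operation command, e.g. a pose delta

def operate(model, image_features, hand_pose):
    # "Operation module": derive the command from sensing and the model.
    return model.predict(image_features, hand_pose)

def robot_control(command):
    # "Robot control module": send the command to the robot (stubbed).
    print("commanded pose delta:", np.round(command, 3))

model = GraspModel(n_features=8, n_pose=6, n_command=6)
robot_control(operate(model, np.ones(8), np.zeros(6)))
```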
Robotic laundry sorting devices, systems, and methods of use
Devices, systems, and methods for autonomously separating and sorting a plurality of individual articles from a pile of laundry articles into two or more sorted loads for washing are described. For example, an autonomous sorting and separating system includes a stationary surface configured to receive the pile of laundry articles thereon at a first location. A plurality of actuatable grippers are disposed at spaced-apart positions adjacent the stationary surface and comprise a first actuatable gripper configured to grasp, hoist, and deposit at least one of the plurality of individual articles at a second location within reach of a second actuatable gripper. A terminal gripper, comprising at least one of the second actuatable gripper and another actuatable gripper, is configured to release an individual article into one of the two or more sorted loads. At least one controller is in operable communication with the grippers.
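A minimal sketch of the implied hand-off sequence is shown below, assuming hypothetical gripper interfaces: the first gripper grasps, hoists, and deposits an article within reach of the terminal gripper, which then releases it into a sorted load.

```python
# Minimal sketch of the controller-coordinated hand-off between grippers;
# the Gripper interface and the sorting rule are placeholders.
class Gripper:
    def __init__(self, name):
        self.name = name
    def grasp(self, article):  print(f"{self.name}: grasp {article}")
    def hoist(self):           print(f"{self.name}: hoist")
    def deposit(self, where):  print(f"{self.name}: deposit at {where}")
    def release(self, load):   print(f"{self.name}: release into {load}")

def separate_and_sort(articles, loads, first, terminal):
    for i, article in enumerate(articles):
        first.grasp(article)
        first.hoist()
        first.deposit("hand-off location")          # the "second location"
        terminal.grasp(article)
        terminal.release(loads[i % len(loads)])     # placeholder sorting rule

separate_and_sort(["towel", "shirt"], ["whites", "colors"],
                  Gripper("gripper_1"), Gripper("terminal_gripper"))
```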
Domain Adaptation Using Simulation to Simulation Transfer
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a generator neural network to adapt input images.
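As a generic illustration (assuming PyTorch), the sketch below trains a small generator adversarially to map images from one simulated domain toward another; it is a standard image-to-image setup on random stand-in batches, not the specific method claimed.

```python
# Minimal sketch of adversarially training a generator to adapt input images
# between two simulated rendering domains; shapes and networks are toy-sized.
import torch
import torch.nn as nn

generator = nn.Sequential(          # adapts a 3x64x64 input image
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
discriminator = nn.Sequential(      # scores whether an image looks like the target domain
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(10):                                # toy loop on random stand-in batches
    source = torch.rand(4, 3, 64, 64)                 # images from simulator A
    target = torch.rand(4, 3, 64, 64)                 # images from simulator B
    fake = generator(source)

    d_loss = bce(discriminator(target), torch.ones(4, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(4, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    g_loss = bce(discriminator(fake), torch.ones(4, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```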
ROBOT FOR DETECTING AND PICKING UP AT LEAST ONE PREDETERMINED OBJECT
A robot is configured to recognize and pick up at least one predetermined object, the robot being arranged in such a manner that the predetermined object is recognized and picked up in a work space below the robot. The robot may have an end effector and an adjusting unit for picking up the predetermined object. The end effector and the adjusting unit are disposed in the work space below the robot.
ROBOTIC LAUNDRY SORTING DEVICES, SYSTEMS, AND METHODS OF USE
Systems for autonomously batching a plurality of separated laundry articles into sorted loads for washing and drying are described. For example, each one of a plurality of collection bins is configured to receive a sorted load of separated articles that share at least one of one or more washing and drying characteristics. A plurality of conveyors are configured to receive the bins thereon and position one bin in a loading position adjacent to an exit orifice of a sorting surface. At least one sensor, disposed on, adjacent to, or within the surface, is configured to detect the washing and drying characteristics. A controller in operable communication with a drive of the plurality of conveyors and the at least one sensor is configured to instruct the conveyors to move the bins so as to batch each separated laundry article into a bin matching its washing and drying characteristics.
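The batching logic can be sketched roughly as below, assuming hypothetical sensor and conveyor interfaces: each article's sensed characteristics are matched against the bins' criteria and the matching bin is moved into the loading position.

```python
# Minimal sketch of routing each separated article to the bin whose
# washing/drying characteristics match; all interfaces are assumptions.
def characteristics_of(article):
    # Stand-in for the sensor(s) on/adjacent to/within the sorting surface.
    return {"color": article["color"], "fabric": article["fabric"]}

def matching_bin(bins, chars):
    for b in bins:
        if all(b["criteria"].get(k) == v for k, v in chars.items()):
            return b
    return None

def batch(articles, bins, conveyor):
    for article in articles:
        bin_ = matching_bin(bins, characteristics_of(article))
        if bin_ is None:
            continue                               # no matching load; leave for later handling
        conveyor.move_bin_to_loading_position(bin_["id"])
        bin_["contents"].append(article["name"])

class Conveyor:                                     # hypothetical conveyor drive interface
    def move_bin_to_loading_position(self, bin_id):
        print(f"positioning bin {bin_id} under the exit orifice")

bins = [{"id": 1, "criteria": {"color": "white", "fabric": "cotton"}, "contents": []},
        {"id": 2, "criteria": {"color": "dark", "fabric": "cotton"}, "contents": []}]
articles = [{"name": "sheet", "color": "white", "fabric": "cotton"},
            {"name": "jeans", "color": "dark", "fabric": "cotton"}]
batch(articles, bins, Conveyor())
```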
CONTROL DEVICE, ROBOT, AND ROBOT SYSTEM
A control device includes a processor that is configured to execute computer-executable instructions so as to control a robot, wherein the processor is configured to calculate an optical parameter related to an optical system imaging a target object, by using machine learning, detect the target object on the basis of an imaging result in the optical system by using the calculated optical parameter, and control the robot on the basis of a detection result of the target object.
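As a rough illustration, the sketch below stands in a trivial learned regressor for the optical parameter (an exposure time) and runs a simple detection on the resulting image; all function names and the model itself are assumptions, not the patented method.

```python
# Minimal sketch: learned optical parameter -> capture -> detection.
import numpy as np

def predict_exposure(pre_capture, w=0.002, b=0.01):
    """Stand-in for a machine-learned model: maps mean image brightness
    to an exposure time in seconds."""
    brightness = pre_capture.mean()
    return float(np.clip(b + w * (128.0 - brightness), 0.001, 0.1))

def capture(exposure_s):
    # Placeholder for the camera; returns a synthetic image.
    return np.clip(np.random.rand(64, 64) * 255 * exposure_s / 0.05, 0, 255)

def detect_target(image, threshold=200):
    ys, xs = np.nonzero(image > threshold)
    return None if len(xs) == 0 else (xs.mean(), ys.mean())

pre = capture(0.02)
exposure = predict_exposure(pre)
result = detect_target(capture(exposure))
# A robot controller would then act on `result` (e.g., move toward the target).
```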
CONTROL DEVICE, ROBOT, AND ROBOT SYSTEM
A control device includes a processor that is configured to execute computer-executable instructions so as to control a robot, wherein the processor is configured to calculate an operation parameter related to an operation of a robot by using machine learning, and control the robot on the basis of the calculated operation parameter.
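A comparable sketch for an operation parameter: a placeholder learned model of task success versus approach speed is used to choose the speed the robot is commanded with; the model and the trade-off weight are illustrative assumptions.

```python
# Minimal sketch of selecting an operation parameter (approach speed)
# from a machine-learned success estimate; values are placeholders.
def predicted_success(speed, w0=0.98, w1=1.2):
    # Stand-in for a learned model of task success vs. speed.
    return max(0.0, w0 - w1 * max(0.0, speed - 0.2) ** 2)

def choose_speed(candidates, time_weight=0.05):
    # Trade predicted success against cycle time (higher speed = shorter cycle).
    return max(candidates, key=lambda s: predicted_success(s) + time_weight * s)

speed = choose_speed([0.1, 0.2, 0.3, 0.4, 0.5])
print(f"commanding approach speed {speed:.1f} m/s")   # robot control step (stubbed)
```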
CONTROL DEVICE, ROBOT, AND ROBOT SYSTEM
A control device includes a processor that is configured to execute computer-executable instructions so as to control a robot, wherein the processor is configured to calculate an image processing parameter related to image processing on an image of a target object captured by a camera, by using machine learning, detect the target object on the basis of an image on which the image processing is performed by using the calculated image processing parameter, and control the robot on the basis of a detection result of the target object.
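Similarly, an image processing parameter can be illustrated with the sketch below, where a placeholder learned model proposes a binarization threshold before detection; nothing here is taken from the patent beyond the overall parameter-then-detect structure.

```python
# Minimal sketch: learned image-processing parameter (threshold) -> detection.
import numpy as np

def predict_threshold(image):
    # Stand-in for a learned model: offset from the mean intensity.
    return float(image.mean() + 0.5 * image.std())

def detect_target(image, threshold):
    mask = image > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean())            # target centroid in pixels

image = np.random.rand(64, 64) * 255         # stand-in for a camera frame
centroid = detect_target(image, predict_threshold(image))
# The robot would then be controlled on the basis of `centroid`.
```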