G05B2219/40543

METHODS AND SYSTEMS FOR AUTOMATICALLY ANNOTATING ITEMS BY ROBOTS
20210023716 · 2021-01-28

A robot automatically annotates items by training a semantic segmentation module. The robot includes one or more imaging devices, and a controller comprising machine readable instructions. The machine readable instructions, when executed by one or more processors, cause the controller to capture an image with the one or more imaging devices, identify a target area in the image in response to one or more points on the image designated by a user, obtain depth information for the target area, calculate a center of an item corresponding to the target area based on the depth information, rotate the imaging device based on the center, and capture an image of the item at a different viewing angle in response to rotating the view of the imaging device.
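The center-calculation and re-aiming steps above can be sketched as follows. This is a minimal illustration, assuming a pinhole camera with known intrinsics and a depth map aligned to the image; all function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def target_center_3d(depth, mask, fx, fy, cx, cy):
    """Back-project the masked depth pixels to 3D and average them.

    depth: HxW depth map in meters; mask: HxW boolean target area
    (derived from the user-designated points); fx, fy, cx, cy: pinhole
    intrinsics assumed known from calibration.
    """
    vs, us = np.nonzero(mask)
    z = depth[vs, us]
    valid = z > 0                       # ignore holes in the depth map
    z = z[valid]
    x = (us[valid] - cx) * z / fx
    y = (vs[valid] - cy) * z / fy
    return np.array([x.mean(), y.mean(), z.mean()])

def pan_tilt_to_center(center):
    """Pan/tilt angles (radians) that put the 3D center on the optical axis,
    so the imaging device can be rotated before the next capture."""
    x, y, z = center
    pan = np.arctan2(x, z)
    tilt = np.arctan2(y, np.hypot(x, z))
    return pan, tilt
```

With the item centered this way, each subsequent capture after a rotation views the same 3D point from a new angle, which is what makes the multi-view annotation possible.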

SILVERWARE PROCESSING SYSTEMS AND METHODS

An imaging system captures images of a work surface having articles of cutlery distributed thereon. An end effector, such as a magnetic end effector, may be used to grasp articles of cutlery and place them in a designated location according to the type of each article as determined using an image of the work surface and machine vision. The type of an article of cutlery may be classified from the image, for example by category (knife, fork, spoon), size, or brand. Multiple articles of cutlery occluding one another may be dispersed by stirring them, or by grasping several items and releasing them onto a separate work surface. Whether the end effector is grasping multiple articles of cutlery may be determined by capturing an image of the end effector. Machine vision may also determine whether an article of cutlery is contaminated or damaged.
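The pick-or-stir decision logic can be sketched as below: pick any unoccluded article into its type-specific bin, and fall back to a stirring action when every detection overlaps another. The `Detection` record and bin mapping are illustrative stand-ins for the machine-vision output described in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str    # "knife" | "fork" | "spoon", from the vision classifier
    box: tuple   # (x0, y0, x1, y1) axis-aligned box on the work surface

def overlaps(a, b):
    """Axis-aligned box intersection test (a proxy for occlusion)."""
    ax0, ay0, ax1, ay1 = a.box
    bx0, by0, bx1, by1 = b.box
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def next_action(detections, bins):
    """Pick an unoccluded article, else stir to disperse a cluster."""
    occluded = set()
    for i, a in enumerate(detections):
        for b in detections[i + 1:]:
            if overlaps(a, b):
                occluded.add(id(a))
                occluded.add(id(b))
    for d in detections:
        if id(d) not in occluded:
            return ("pick", d, bins[d.kind])
    return ("stir", None, None)
```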

System and method for piece picking or put-away with a mobile manipulation robot

A method and system for piece-picking or piece put-away within a logistics facility. The system includes a central server and at least one mobile manipulation robot. The central server is configured to communicate with the robots to send and receive piece-picking data which includes a unique identification for each piece to be picked, a location within the logistics facility of the pieces to be picked, and a route for the robot to take within the logistics facility. The robots can then autonomously navigate and position themselves within the logistics facility by recognition of landmarks by at least one of a plurality of sensors. The sensors also provide signals related to detection, identification, and location of a piece to be picked or put-away, and processors on the robots analyze the sensor information to generate movements of a unique articulated arm and end effector on the robot to pick or put-away the piece.
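The piece-picking data exchanged between the central server and a robot can be modeled as a small record carrying the three elements the abstract names: a unique piece identifier, a facility location, and a route. Field names and the JSON wire format are illustrative assumptions.

```python
import json
from dataclasses import dataclass, field

@dataclass
class PickTask:
    """One piece-picking assignment from the central server to a robot.

    The patent requires a unique piece ID, a location within the
    logistics facility, and a route; everything else here is assumed.
    """
    piece_id: str
    location: tuple                               # e.g. (aisle, shelf, bin)
    route: list = field(default_factory=list)     # ordered landmark waypoints

def serialize(task):
    """Encode a task for transmission to the robot (format assumed)."""
    return json.dumps({"piece_id": task.piece_id,
                       "location": list(task.location),
                       "route": task.route})
```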

APPARATUS AND METHOD FOR IDENTIFYING OBJECT

According to various embodiments of the present invention, an electronic device comprises: a memory including instructions and a training database containing data on at least one object acquired on the basis of an artificial intelligence algorithm; at least one sensor; and a processor connected to the at least one sensor and the memory. The processor may be configured to execute the instructions to acquire data on a designated area including the at least one object using the at least one sensor, identify location information and positioning information on the at least one object on the basis of the training database, and transmit a control signal for picking the at least one object to a picking tool associated with the electronic device on the basis of the identified location and positioning information.
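The identify-then-pick flow above can be sketched with a toy nearest-neighbour lookup against the training database. The abstract only says the location and positioning come from an AI-trained database, not how; the feature matching and the pose/command formats below are assumptions for illustration.

```python
import numpy as np

def identify_pose(sensor_feat, train_feats, train_poses):
    """Toy stand-in for the identification step: return the pose of the
    training entry whose feature vector is closest to the sensed one."""
    dists = np.linalg.norm(train_feats - sensor_feat, axis=1)
    return train_poses[int(np.argmin(dists))]

def pick_signal(pose):
    """Control signal for the picking tool (message format assumed)."""
    x, y, theta = pose
    return {"cmd": "pick", "x": x, "y": y, "theta": theta}
```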

A METHOD FOR ASSEMBLING A COOLING APPARATUS, AN ASSEMBLING LINE IMPLEMENTING THE SAME, AND A COMPARTMENT OF SAID COOLING APPARATUS
20200371505 · 2020-11-26

A method for assembling a cooling apparatus having a cabinet which houses an inner casing defining at least one compartment for the storage of products to be cooled, and one or more objects configured to be connected to the inner casing. The method includes: providing the inner casing; automatically and unambiguously identifying the model of the inner casing among various known inner casing models using a detecting device; and connecting the one or more objects to the inner casing based on the model identified in the identifying step.
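The model-dependent connecting step amounts to dispatching on the detected model. A minimal sketch, in which model names and part lists are invented purely for illustration (the patent only requires that the connecting step depend on the identified model):

```python
# Assembly plan keyed by the identified inner-casing model.
# Model names and part lists here are illustrative, not from the patent.
ASSEMBLY_PLAN = {
    "model_A": ["evaporator", "shelf_rails", "light_unit"],
    "model_B": ["evaporator", "drawer_rails"],
}

def connect_objects(detected_model):
    """Return the objects to connect for the model found by the detector."""
    try:
        return ASSEMBLY_PLAN[detected_model]
    except KeyError:
        raise ValueError(f"unknown inner casing model: {detected_model}")
```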

WORKPIECE IDENTIFICATION METHOD
20200368923 · 2020-11-26

Whether or not workpieces are present in a workpiece storage area is determined based on an image acquired by image capturing. When workpieces are determined to be present, whether or not a crossing part is present in the workpiece storage area is determined based on the image, the crossing part being a part where soft body portions of a plurality of workpieces cross each other in an overlapping manner. When a crossing part is determined to be present, the uppermost soft body portion among the soft body portions crossing each other is determined based on the image. The workpiece including that uppermost soft body portion is identified as the uppermost workpiece.
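The crossing-part test and the uppermost determination can be sketched with per-workpiece masks. This sketch assumes a depth map is available and decides "uppermost" by which mask is closer to the camera near the overlap; the patent determines this from the image and does not necessarily use depth, so treat this as one possible realization.

```python
import numpy as np

def uppermost_at_crossing(masks, depth):
    """Return the index of the workpiece on top at the first crossing part,
    or None if no soft-body portions overlap.

    masks: list of HxW boolean masks, one per workpiece soft-body portion.
    depth: HxW distance-from-camera map (smaller = closer = on top).
    """
    for i in range(len(masks)):
        for j in range(i + 1, len(masks)):
            cross = masks[i] & masks[j]
            if cross.any():
                # Compare each workpiece's depth just outside the overlap.
                di = depth[masks[i] & ~cross].mean()
                dj = depth[masks[j] & ~cross].mean()
                return i if di < dj else j
    return None   # no crossing part present
```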

Method Of Controlling Robot
20200368911 · 2020-11-26

A method of controlling a robot that performs work using an end effector on an object transported by a handler includes: calculating a target position of the end effector based on a position of the object; calculating a tracking correction amount for correcting the target position in correspondence with a transport amount of the object; controlling the end effector to follow the object based on the target position and the tracking correction amount; acquiring, using a force sensor, an acting force applied to the end effector by the object; calculating a force control correction amount for correcting the target position so as to set the acting force to a target force; and driving the robot based on the force control correction amount so that the acting force matches the target force.
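The commanded position is the target position plus the two corrections. As a minimal sketch, the force control correction below is a simple proportional law on the force error; the patent does not fix a particular control law, so the gain `kp` and the vector arithmetic are illustrative assumptions.

```python
import numpy as np

def command_position(target, tracking_corr, force_meas, force_target,
                     kp=0.002):
    """Combine the tracking and force-control corrections of the method.

    target, tracking_corr: 3-vectors (meters).
    force_meas, force_target: 3-vectors (newtons) from the force sensor.
    kp: illustrative proportional gain (m/N) displacing the end effector
    along the force error until the acting force matches the target.
    """
    force_corr = kp * (np.asarray(force_target) - np.asarray(force_meas))
    return (np.asarray(target, float)
            + np.asarray(tracking_corr, float)
            + force_corr)
```

At steady state the force error vanishes and the command reduces to the tracked target position, which matches the follow-then-press behavior the abstract describes.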

Position and orientation estimation apparatus, position and orientation estimation method, and program
10810761 · 2020-10-20

A three-dimensional detailed position/orientation estimation apparatus includes a first position/orientation estimation unit and a second position/orientation estimation unit that are configured to estimate three-dimensional position and orientation. The first position/orientation estimation unit optimizes six parameters (translations x, y, and z, and rotations about the x-, y-, and z-axes) using 3D data, and the second position/orientation estimation unit optimizes only the three parameters (translations x and y, and rotation about the z-axis) that can be estimated with high accuracy using a 2D image, based on the result of the three-dimensional position/orientation estimation performed by the first position/orientation estimation unit using the 3D data.
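The second stage's restricted update can be sketched as an in-plane correction applied to the first stage's full 6-parameter estimate: only x, y, and the z-axis rotation change, while z, and the x- and y-axis rotations, are kept from the 3D-data stage. How dx, dy, dtheta are obtained from the 2D image is not shown here.

```python
import numpy as np

def apply_inplane_refinement(pose, dx, dy, dtheta):
    """Second-stage update adjusting only x, y translation and z rotation.

    pose: 4x4 homogeneous transform from the first (six-parameter) stage.
    dx, dy, dtheta: in-plane corrections fitted from the 2D image.
    """
    c, s = np.cos(dtheta), np.sin(dtheta)
    delta = np.array([[c, -s, 0.0, dx],
                      [s,  c, 0.0, dy],
                      [0., 0., 1.0, 0.],
                      [0., 0., 0.0, 1.]])
    return delta @ pose     # left-multiply: correction in the camera frame
```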

Automated Manipulation Of Transparent Vessels
20200311956 · 2020-10-01

An actuator and end effector are controlled according to images from cameras having a surface in their field of view. Vessels (cups, bowls, etc.) and other objects are identified in the images and their configuration is assigned to a finite set of categories by a classifier that does not output a 3D bounding box or determine a 6D pose. For objects assigned to a first subset of categories, grasping parameters for controlling the actuator and end effector are determined using only 2D bounding boxes, such as oriented 2D bounding boxes. For objects not assigned to the first subset, a righting operation may be performed using only 2D bounding boxes. Objects that are still not in the first set may then be grasped by estimating a 3D bounding box and 6D pose.
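The escalation strategy above (cheap 2D-box grasp first, then a righting operation, then full 3D estimation as a last resort) is essentially a dispatch on the classifier's category. Category labels and the object record below are assumptions for illustration.

```python
def plan_manipulation(obj):
    """Decision flow of the abstract: objects in the first subset of
    categories are grasped from oriented 2D boxes; others are first
    righted using 2D boxes; anything still outside the first subset
    falls back to 3D bounding box and 6D pose estimation.

    obj: dict with the classifier's category and state flags (assumed).
    """
    FIRST_SUBSET = {"upright_cup", "upright_bowl", "plate"}  # assumed labels
    if obj["category"] in FIRST_SUBSET:
        return ("grasp_2d", obj["oriented_box"])
    if not obj.get("righted", False):
        return ("right_2d", obj["oriented_box"])  # try to stand it upright
    return ("grasp_6d", "estimate_3d_box_and_pose")
```

The point of the ordering is cost: 2D boxes avoid the expensive 3D bounding box and 6D pose estimation for the common cases.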

Eye in-hand robot
10766145 · 2020-09-08

A robot includes a gripping member configured to move and pick up an object, a camera affixed to the gripping member such that movement of the gripping member causes movement of the camera, the camera being configured to measure and store data related to the intensity and direction of light rays within the environment, an image processing module configured to process the data to generate a probabilistic model defining a location of the object within the environment, and an operation module configured to move the gripping member to the location and pick up the object.
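The abstract does not specify the form of the probabilistic model; one simple reading is a grid of object-presence probabilities that the operation module reduces to a single pick location. The grid representation and cell size below are assumptions.

```python
import numpy as np

def most_likely_location(prob_grid, cell_size):
    """Reduce a probabilistic location model to one metric (x, y) target.

    prob_grid: 2D array of P(object at cell), as the image processing
    module might produce; cell_size: meters per grid cell (assumed).
    """
    row, col = np.unravel_index(np.argmax(prob_grid), prob_grid.shape)
    return (col * cell_size, row * cell_size)   # (x, y) of the best cell
```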