Patent classifications
B25J9/1669
System and method for positioning one or more robotic apparatuses
An approach to positioning one or more robotic arms in an assembly system is described herein. For example, an apparatus may include a first robotic arm having a distal end and a proximal end. The distal end may be configured for movement, and the proximal end may secure the first robotic arm. The apparatus may further include a camera connected with the distal end of the first robotic arm. The camera may be configured to capture image data of a marker connected with a second robotic arm and provide the image data to a computer. The computer may generate a set of instructions for the first robotic arm based on the image data of the marker and may then cause the first robotic arm to move according to the generated set of instructions.
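The marker-based positioning described above can be illustrated with a minimal proportional-correction sketch, assuming the computer issues incremental move commands that drive the observed marker toward a target image position. All names, the gain value, and the pixel-space coordinates are illustrative assumptions, not details from the patent:

```python
import numpy as np

def marker_correction(observed_px, target_px, gain=0.5):
    """Proportional correction toward a target marker position in the image.

    observed_px / target_px are (x, y) pixel coordinates of the marker
    (illustrative; the patent does not specify a control law).
    """
    error = np.asarray(target_px, dtype=float) - np.asarray(observed_px, dtype=float)
    return gain * error  # per-step displacement command, in pixel units

def generate_instructions(observed_px, target_px, steps=3):
    """Produce a short list of incremental move commands for the arm."""
    pos = np.asarray(observed_px, dtype=float)
    commands = []
    for _ in range(steps):
        delta = marker_correction(pos, target_px)
        commands.append(delta)
        pos = pos + delta  # assume the arm achieves each commanded move
    return commands, pos
```

Each step halves the remaining image-space error, so a few iterations bring the marker close to the target position.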
Vacuum-based end effector for engaging parcels
A vacuum-based end effector for engaging parcels includes a base plate, one or more vacuum cups of a first type, and one or more vacuum cups of a second type. Each vacuum cup of the vacuum-based end effector is configured to be placed in fluid communication with a vacuum source to provide the vacuum cup with a suction force which can be used to engage and grasp parcels. Each vacuum cup includes a bellows defining a pathway for a flow of air and a lip connected to the bellows. Each lip of the one or more vacuum cups of the first type comprises a foam lip, and each lip of the one or more vacuum cups of the second type comprises an elastomeric lip. The vacuum-based end effector can be combined with a robot to provide an improved system for engaging parcels.
HANDLING SYSTEM, TRANSPORT SYSTEM, CONTROL DEVICE, STORAGE MEDIUM, AND HANDLING METHOD
According to an embodiment, there is provided a handling system capable of handling a plurality of objects, the handling system including a movable arm, a holder, a sensor, and a controller. The holder is attached to the movable arm and is capable of holding an object. The sensor is capable of detecting the object. The controller controls the movable arm and the holder. On the basis of information acquired from the sensor, the controller determines whether or not to change the arrangement of an object before the object is held. In a case where it is determined to change the arrangement, the controller evaluates the effectiveness of an arrangement change operation for each object and decides on an arrangement change operation on the basis of the evaluation results.
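The controller's decision flow might be sketched as follows, assuming per-object graspability scores and per-object rearrangement-effectiveness scores are available as inputs. Both the score dictionaries and the function names are illustrative assumptions, not the patent's method:

```python
def should_rearrange(grasp_scores, threshold=0.5):
    """Decide to rearrange when no object is directly graspable.

    grasp_scores maps object id -> graspability in [0, 1] (illustrative).
    """
    return max(grasp_scores.values()) < threshold

def pick_rearrangement(effectiveness):
    """Choose the object whose rearrangement is evaluated as most effective.

    effectiveness maps object id -> evaluated benefit of moving that object.
    """
    return max(effectiveness, key=effectiveness.get)
```

With this split, the arrangement-change decision and the choice of which object to move are two separate, independently testable steps, mirroring the two stages the abstract describes.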
GRASP LEARNING USING MODULARIZED NEURAL NETWORKS
A method for modularizing high-dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes the grasp positional dimensions and a second network encodes the rotational dimensions. The first network is trained to predict a position at which the grasp quality is maximized over all values of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position from the first network. Thus, the two networks collectively identify an optimal grasp while each network's search space is reduced. Many grasp positions and rotations can be evaluated at a search cost equal to the sum of the evaluated positions and rotations, rather than their product. The dimensions may be separated in any suitable fashion, including across three neural networks in some applications.
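The sum-versus-product search saving can be shown with a toy sketch in which the learned networks are stood in by a separable quality model, quality(p, r) = pos_score(p) + rot_score(r); under that assumption the rotation search at the best position finds the global optimum. The score functions and names are illustrative, not the patent's networks:

```python
def modular_grasp_search(positions, rotations, pos_score, rot_score):
    """Two-stage search: pick the position first, then the rotation.

    pos_score stands in for the first (position) network and rot_score
    for the second (rotation) network. Cost is len(positions) +
    len(rotations) evaluations instead of their product.
    """
    best_p = max(positions, key=pos_score)   # stage 1: positions only
    best_r = max(rotations, key=rot_score)   # stage 2: rotations at best_p
    return best_p, best_r

def brute_force_search(positions, rotations, pos_score, rot_score):
    """Exhaustive baseline over the full grid: P * R evaluations."""
    return max(((p, r) for p in positions for r in rotations),
               key=lambda pr: pos_score(pr[0]) + rot_score(pr[1]))
```

For, say, 1000 positions and 1000 rotations, the modularized search makes 2000 evaluations where the exhaustive grid makes 1,000,000.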
Handling device, control device, and holding method
A handling device according to an embodiment has an arm, a holder, a detector, a storage, and a controller. The arm includes at least one joint. The holder is attached to the arm and is configured to hold an object. The storage stores a function map including at least one of information about holdable positions of the holder and information about possible postures of the holder. The detector is configured to detect information about the object. The controller is configured to generate holdable candidate points on the basis of the information detected by the detector, to search the function map for a position in the environment in which the object is present, the position being associated with the generated holdable candidate points, and to determine a holding posture of the holder on the basis of the found position. The function map associates a manipulability with each position in the environment in which the object is present. The manipulability is a parameter calculated from at least one joint angle of the holder.
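A function-map lookup of this kind might be sketched as a dictionary keyed by discretized positions, with a precomputed manipulability as the value; the controller then prefers the candidate point with the highest stored manipulability. The data layout and names are illustrative assumptions:

```python
def choose_holding_position(candidates, function_map):
    """Pick the candidate point with the highest stored manipulability.

    function_map maps a discretized position to a manipulability value
    precomputed from the holder's joint angles (illustrative layout);
    candidates not present in the map score 0.0.
    """
    scored = [(function_map.get(c, 0.0), c) for c in candidates]
    best_score, best_candidate = max(scored)
    return best_candidate, best_score
```

Because the manipulability values are precomputed offline, the online step reduces to a handful of dictionary lookups per candidate point.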
Autonomous control of analyzing business-related data by a robot system
An objective of the present invention is to improve the efficiency of a business conducted by a mobile device. A business analysis server analyzes a business in a mobile device system that conducts the business by controlling a mobile device on the basis of a scenario. The server includes a control unit and a storage unit; the storage unit includes a business index value database that manages a business index value indicating an effect of the business, and a business index value history database that manages changes in the business index value as time-series data. When a business index value is designated, the control unit receives scenario execution information, analyzes the correlation between the business index value and each scenario by referring to the business index value history database and the scenario execution information, extracts a target scenario whose correlation value with the business index value satisfies predetermined conditions, and generates a correction scenario.
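The correlation-based extraction step might be sketched as follows, assuming the index history and each scenario's execution history are aligned time series and the "predetermined condition" is a simple absolute-correlation threshold. The data shapes, threshold, and names are illustrative assumptions:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def target_scenarios(index_history, scenario_histories, min_corr=0.8):
    """Return scenarios whose execution history correlates strongly
    with the designated business index (illustrative condition)."""
    return [name for name, hist in scenario_histories.items()
            if abs(pearson(index_history, hist)) >= min_corr]
```

Scenarios passing the threshold would then be candidates for generating a correction scenario.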
TRANSPARENT OBJECT BIN PICKING
A system and method for identifying an object, such as a transparent object, to be picked up by a robot from a bin of objects. The method includes obtaining a 2D red-green-blue (RGB) color image and a 2D depth map image of the objects using a 3D camera, where each pixel in the depth map image is assigned a value identifying the distance from the camera to the objects. The method generates a segmentation image of the objects using a deep-learning Mask R-CNN (convolutional neural network) that performs an image segmentation process, extracting features from the RGB image and assigning labels to the pixels so that the pixels belonging to each object in the segmentation image share the same label. The method then identifies a location for picking up the object using the segmentation image and the depth map image.
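Combining a label image with an aligned depth map to choose a pick point can be sketched as follows, here using the simple heuristic of picking the instance's pixel closest to the camera. This is an illustration of the general idea, not the patented method, and all names are assumptions:

```python
import numpy as np

def pick_point(seg, depth, label):
    """Choose a pick point for one segmented instance.

    seg is a per-pixel label image, depth the aligned depth map
    (smaller value = closer to camera). Returns the (row, col) of the
    labelled pixel nearest the camera and its depth.
    """
    mask = seg == label
    d = np.where(mask, depth, np.inf)  # ignore pixels of other labels
    idx = np.unravel_index(np.argmin(d), d.shape)
    return idx, float(d[idx])
```

In practice a real picker would also check that the chosen point sits away from the instance boundary, but the mask-plus-depth lookup above is the core of the final step the abstract describes.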
System and method for robotic bin picking
A method and computing system for identifying one or more candidate objects for selection by a robot. A path to the one or more candidate objects may be determined based upon, at least in part, a robotic environment and at least one robotic constraint. The feasibility of grasping a first candidate object of the one or more candidate objects may be validated. If the feasibility is validated, the robot may be controlled to physically select the first candidate object. If the feasibility is not validated, at least one of a different grasping point of the first candidate object, a second path, or a second candidate object may be selected.
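The validate-then-fall-back flow can be sketched as a loop over candidates with pluggable planning and validation callbacks; a failed plan or failed grasp validation simply moves on to the next candidate. The callback signatures and names are illustrative assumptions:

```python
def select_object(candidates, plan_path, validate_grasp):
    """Walk candidates in order: plan a path, validate grasp feasibility,
    and on any failure fall through to the next candidate.

    plan_path(obj) returns a path or None; validate_grasp(obj, path)
    returns True when the grasp is feasible (illustrative callbacks).
    """
    for obj in candidates:
        path = plan_path(obj)
        if path is None:
            continue  # no feasible path: try the next candidate
        if validate_grasp(obj, path):
            return obj, path  # feasible: the robot would pick this one
    return None, None  # nothing feasible in this pass
```

A fuller version would also retry the same object with a different grasping point before moving on, matching the abstract's fallback options.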
System and method for in-motion railcar loading
A system and method for loading railcars of a train wherein the railcars are provided with upper lids having latches for securing the lids in a closed position. The system comprises at least one sensing system for determining the position of the latches and the lids in order for one or more robot arms to perform operations such as unlatching, latching, lid opening and lid closing. At least one velocity sensor measures individual railcar velocity rather than overall train speed to enable engagement of the one or more robotic arms, as adjacent railcars may move at differing velocities due to slack in the connections between them.
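The point about per-railcar velocity can be illustrated with a small timing sketch: each car's arrival at the robot's work station is predicted from its own measured velocity, so two adjacent cars with slack between them get different engagement times. Units, field names, and functions are illustrative assumptions:

```python
def seconds_to_station(car_position_m, station_position_m, car_velocity_mps):
    """Time until a specific railcar reaches the work station, computed
    from that car's own measured velocity (illustrative units: metres,
    metres per second)."""
    if car_velocity_mps <= 0:
        raise ValueError("car must be moving toward the station")
    return (station_position_m - car_position_m) / car_velocity_mps

def engagement_order(cars, station_position_m):
    """Sort cars by predicted arrival; adjacent cars may differ in
    velocity because of slack in the connections between them."""
    return sorted(cars, key=lambda c: seconds_to_station(
        c["pos"], station_position_m, c["vel"]))
```

Using the train's average speed instead would mistime the robot whenever slack runs in or out between cars.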
Inspection robot and methods thereof for responding to inspection data in real time
An inspection robot, together with methods and a controller thereof, is disclosed. An inspection robot may include an inspection chassis including a plurality of inspection sensors and coupled to at least one drive module to drive the robot over an inspection surface. The inspection robot may also include a controller including an inspection data circuit to interpret inspection base data, an inspection processing circuit to determine refined inspection data, and an inspection configuration circuit to determine an inspection response value in response to the refined inspection data. The controller may further include an inspection response circuit to, in response to the inspection response value, provide an inspection command value while the inspection robot is interrogating the inspection surface.
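The controller's data flow (interpret base data, refine it, derive a response value, emit a command during inspection) might be sketched as a simple pipeline; the refinement callback, threshold, and command strings are illustrative assumptions, not the patent's circuits:

```python
def inspection_pipeline(base_data, refine, threshold=0.5):
    """Sketch of the controller's flow: interpret raw sensor readings,
    refine each one, reduce to a response value, and issue a command
    while the robot keeps interrogating the surface."""
    refined = [refine(x) for x in base_data]     # inspection processing
    response_value = max(refined)                # inspection configuration
    command = "rescan" if response_value > threshold else "continue"
    return refined, response_value, command      # inspection response
```

The key property the abstract emphasizes is that the command is produced in real time, while inspection continues, rather than after a post-hoc analysis pass.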