Patent classifications
B25J9/1612
METHOD FOR CONTROLLING A ROBOTIC DEVICE
A method for controlling a robotic device. The method includes: obtaining an image; processing the image using a convolutional neural network, which generates an image in a feature space from the image; feeding the image in the feature space to a neural actor network, which generates an action parameter image; feeding the image in the feature space and the action parameter image to a neural critic network, which generates an assessment image that defines, for each pixel, an assessment of the action defined by the set of action parameter values for that pixel; selecting, from the multiple sets of action parameter values of the action parameter image, the set of action parameter values having the highest assessment; and controlling the robotic device to carry out an action according to the selected set of action parameter values.
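The per-pixel actor-critic selection described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the three networks are replaced by random stand-in functions, and all shapes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 8, 8          # spatial resolution of the feature image (assumed)
F, A = 16, 3         # feature channels, action-parameter channels (assumed)

def conv_net(image):
    """Stand-in for the convolutional network: maps an input image
    to an image in feature space."""
    return rng.standard_normal((H, W, F))

def actor(features):
    """Stand-in actor network: one set of action-parameter values
    per pixel."""
    return rng.standard_normal((H, W, A))

def critic(features, action_params):
    """Stand-in critic network: one scalar assessment per pixel,
    rating the action defined by that pixel's parameter set."""
    return rng.standard_normal((H, W))

image = rng.standard_normal((H, W, 3))
feats = conv_net(image)
params = actor(feats)              # action parameter image
scores = critic(feats, params)     # assessment image

# Select the pixel whose action-parameter set has the highest assessment.
y, x = np.unravel_index(np.argmax(scores), scores.shape)
best_params = params[y, x]         # parameter set used to control the robot
```

The selection step is the key point: the argmax is taken over the assessment image, and the action parameters are read out at the winning pixel.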
ROBOTIC SYSTEM FOR OBJECT SIZE DETECTION
A computing system including a processing circuit in communication with a camera having a field of view. The processing circuit obtains image information based on objects in the field of view and defines a minimum viable region for a target open corner. Potential minimum viable regions are defined by identifying candidate edges of an object and determining potential intersection points based on the candidate edges. The minimum viable region may then be identified and validated from among the potential minimum viable regions.
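The edge-intersection step can be illustrated with a minimal sketch. The point/direction edge representation, the `intersect` helper, and the axis-aligned area criterion are illustrative assumptions, not the system's actual geometry:

```python
import itertools

def intersect(e1, e2):
    """Intersection point of two lines, each given as (point, direction)."""
    (x1, y1), (dx1, dy1) = e1
    (x2, y2), (dx2, dy2) = e2
    det = dx1 * dy2 - dy1 * dx2
    if abs(det) < 1e-9:            # parallel candidate edges: no intersection
        return None
    t = ((x2 - x1) * dy2 - (y2 - y1) * dx2) / det
    return (x1 + t * dx1, y1 + t * dy1)

def minimum_viable_region(corner, edges):
    """Smallest candidate region spanned by the target open corner and an
    intersection point of two candidate edges (illustrative criterion)."""
    cx, cy = corner
    best, best_area = None, float("inf")
    for e1, e2 in itertools.combinations(edges, 2):
        p = intersect(e1, e2)
        if p is None:
            continue
        area = abs((p[0] - cx) * (p[1] - cy))
        if 0 < area < best_area:
            best, best_area = (corner, p), area
    return best, best_area
```

For a corner at the origin with two vertical candidate edges at x = 3 and x = 5 and one horizontal edge at y = 2, the smallest region is the 3 × 2 rectangle spanned by the intersection at (3, 2).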
ROBOT HAND
A robot hand includes a first proximal end finger having a first protrusion at a distal end of the first proximal end finger, a first distal end finger that is connected to the first proximal end finger in a relatively rotatable manner and has a first cutout that allows the first protrusion to pass, a second proximal end finger having a second protrusion at a distal end of the second proximal end finger, a second distal end finger that is connected to the second proximal end finger in a relatively rotatable manner and has a second cutout that allows the second protrusion to pass, an opening and closing drive unit that relatively moves the second proximal end finger with respect to the first proximal end finger, a first rotation drive unit that relatively rotates the first distal end finger with respect to the first proximal end finger, a second rotation drive unit that relatively rotates the second distal end finger with respect to the second proximal end finger, and a controller that actuates the opening and closing drive unit, the first rotation drive unit, and the second rotation drive unit.
Direct Drive End-Effectors with Parallel Kinematics
A gripper includes at least one movable finger. Each movable finger includes a first motor, a second motor, a first motor link having a first end coupled to a rotor of the first motor, a second motor link having a first end coupled to a rotor of the second motor, a finger link having a first end in pivotal connection with a second end of the second motor link and a gripper pad, and a connecting link having a first end in pivotal connection with a second end of the first motor link and a second end in pivotal connection with the finger link. The gripper further includes at least one controller programmed or configured to actuate the first motor and the second motor of each of the at least one movable finger.
Grip manipulator and method for controlling the same
The present disclosure provides a grip manipulator used in a robot to eliminate the inefficient and power-consuming operation caused by the use of multiple manipulators when gripping an object or handling the gripped object. The grip manipulator includes a manipulator body, a support rod of a first group and a support rod of a second group provided to protrude from the manipulator body in a direction, a longitudinal drive unit configured to discharge or introduce the support rod of the first group or the support rod of the second group in the protruding direction, and a transverse drive unit configured to spread or gather the support rod of the first group or the support rod of the second group in a direction perpendicular to the protruding direction.
Characteristic estimation system, characteristic estimation method, and information storage medium
A characteristic estimation system comprising circuitry configured to: cause a robot hand configured to grip an object to operate based on operation information defining an operation of the robot hand; acquire a physical quantity at a time when the robot hand grips the object; and estimate a characteristic of the object based on the physical quantity.
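As an illustrative example of estimating a characteristic from an acquired physical quantity, the sketch below fits object stiffness to synthetic grip force and finger displacement data. The stiffness model and all numbers are assumptions, not the system's method:

```python
import numpy as np

def estimate_stiffness(displacements_mm, forces_n):
    """Estimate object stiffness (N/mm) as the least-squares slope of
    grip force over finger displacement acquired during the grip."""
    slope, _intercept = np.polyfit(displacements_mm, forces_n, 1)
    return slope

# Physical quantities acquired while the hand grips the object (synthetic).
disp = np.array([0.0, 0.5, 1.0, 1.5, 2.0])     # finger displacement, mm
force = 3.0 * disp + 0.1                        # a ~3 N/mm object, small preload
k = estimate_stiffness(disp, force)
```

A stiff object yields a steep slope, a compliant one a shallow slope, so the fitted value can serve as the estimated characteristic.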
METHOD FOR DETECTING SLIPPAGE WHEN GRIPPING AN OBJECT WITH A GRIPPER
The present invention relates to a method for detecting slippage when gripping an object with a gripper, in which use is made of at least one acceleration sensor (5) on a gripping surface (3) of the gripper (1), a sensor signal from the acceleration sensor (5) is captured during the gripping process and filtered in order to obtain a filtered sensor signal in a first frequency range, and the filtered sensor signal or a value derived therefrom is compared with a threshold value (SW₁, SW₂) which, when exceeded, constitutes an indication of slippage of the object. In the method, the acceleration sensor (5) is integrated into a rigid main body (2) and the main body (2) is integrated into the gripper (1) using an elastic material (4) so as to be capable of oscillation on the gripping surface (3) such that, in the event of slippage of the object, said main body can be set in oscillation on the gripper (1) only in a plane parallel to the gripping surface (3). The first frequency range is selected such that the oscillation frequencies of the main body (2) that occur in the event of slippage are within the first frequency range. The proposed method allows simple and reliable detection of slippage.
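The filter-and-threshold step might look like the following sketch. The sample rate, frequency band, and threshold value are illustrative assumptions, and the FFT band-pass stands in for whatever filter the method actually uses:

```python
import numpy as np

FS = 2000.0               # sample rate in Hz (assumed)
BAND = (200.0, 600.0)     # first frequency range: slip oscillation band (assumed)
SW1 = 0.5                 # threshold value (assumed)

def detect_slippage(signal, fs=FS, band=BAND, threshold=SW1):
    """Band-pass filter the acceleration signal via FFT and compare the
    peak filtered amplitude against the threshold."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < band[0]) | (freqs > band[1])] = 0.0
    filtered = np.fft.irfft(spectrum, n=len(signal))
    return np.max(np.abs(filtered)) > threshold

t = np.arange(0, 0.1, 1.0 / FS)
still = 0.05 * np.sin(2 * np.pi * 30 * t)             # low-frequency arm motion only
slipping = still + 1.0 * np.sin(2 * np.pi * 400 * t)  # oscillation excited by slip
```

Because the main body can oscillate only in the plane of the gripping surface, ordinary arm motion stays outside the pass band while slip-induced oscillation falls inside it, which is what makes a single threshold comparison sufficient.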
AUTOMATIC APPLICATION DEVICE AND AUTOMATIC APPLICATION METHOD
An automatic application device includes: a robot arm; an application hand configured to apply, to a workpiece, a paint that is a liquid; a force sensor configured to detect a force and a moment acting on the application hand; and a control section configured to control the robot arm in accordance with a parameter calculated from an output signal from the force sensor.
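The control-from-force-signal loop can be sketched as a simple proportional contact-force controller; the target force, gain, workpiece stiffness, and cycle time are all illustrative assumptions, not the device's control law:

```python
def contact_force_controller(measured_force_n, target_force_n=5.0, gain=0.002):
    """Map the force sensor's output to a parameter for the robot arm:
    a normal-direction feed rate (m/s) that pushes in when contact is
    light and backs off when it is heavy."""
    error = target_force_n - measured_force_n
    return gain * error

# One simulated application pass: a stiff workpiece pushes back
# proportionally to penetration depth.
STIFFNESS = 1000.0            # workpiece stiffness in N/m (assumed)
depth = 0.0
for _ in range(200):          # 200 control cycles of 10 ms each
    force = STIFFNESS * depth
    depth += contact_force_controller(force) * 0.01
```

With these values the loop settles near the target force, i.e. the steady-state depth approaches target_force / stiffness = 5 mm.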
Robotic Grasping Via RF-Visual Sensing And Learning
Described is the design, implementation, and evaluation of a robotic system configured to search for and retrieve RFID-tagged items in line-of-sight, non-line-of-sight, and fully-occluded settings. The robotic system comprises a robotic arm having a camera and antenna strapped around a portion thereof (e.g., a gripper) and a controller configured to receive information from the camera and radio frequency (RF) information via the antenna and configured to use the information provided thereto to implement a method that geometrically fuses at least RF and visual information. This technique reduces uncertainty about the location of a target object even when the object is fully occluded. Also described is a reinforcement-learning network that uses fused RF-visual information to efficiently localize, maneuver toward, and grasp a target object. The systems and techniques described herein find use in many applications including robotic retrieval tasks in complex environments such as warehouses, manufacturing plants, and smart homes.
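One way to picture the geometric fusion of RF and visual information is to intersect the camera's bearing ray toward a candidate object with the sphere of RF-measured range around the antenna. This ray-sphere sketch is an illustrative assumption, not the system's actual fusion algorithm:

```python
import numpy as np

def fuse_rf_visual(ray_origin, ray_dir, antenna_pos, rf_range):
    """Intersect the camera ray (visual bearing) with the sphere of radius
    `rf_range` around the antenna (RF range measurement). Returns the
    nearest intersection along the ray, or None if they do not meet."""
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)
    o = np.asarray(ray_origin, float) - np.asarray(antenna_pos, float)
    b = np.dot(o, d)
    c = np.dot(o, o) - rf_range ** 2
    disc = b * b - c
    if disc < 0:
        return None                      # ray misses the range sphere
    t = -b - np.sqrt(disc)               # prefer the nearer positive root
    if t < 0:
        t = -b + np.sqrt(disc)
    if t < 0:
        return None
    return np.asarray(ray_origin, float) + t * d

# Camera and antenna co-located at the origin, object 2 m away along x.
target = fuse_rf_visual([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 2.0)
```

The point is that either measurement alone leaves a continuum of candidate locations (a ray, a sphere); intersecting them collapses the uncertainty to at most two points, even when the object itself is occluded from view.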
RAPID CHANGE MECHANISM FOR COMPLEX END EFFECTORS
The technology identifies that an end effector is provisioned to a robot. The technology accesses identification data of the end effector. The identification data is specific to the end effector. The identification data includes one or more of at least one setting associated with the end effector or at least one parameter associated with the end effector. The technology controls the end effector based on the identification data and adjusts one or more runtime parameters of the robot based on the identification data.
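The identification-data-driven adjustment of runtime parameters can be sketched as below; the record fields, names, and parameter mapping are all hypothetical:

```python
# Hypothetical identification-data record read from the end effector.
EFFECTOR_ID_DATA = {
    "effector_id": "gripper-07",
    "settings": {"grip_force_n": 40.0, "jaw_open_mm": 85.0},
    "parameters": {"payload_kg": 1.5, "tcp_offset_mm": 120.0},
}

def apply_identification_data(robot_params, id_data):
    """Adjust the robot's runtime parameters from the end effector's
    identification data (illustrative mapping)."""
    updated = dict(robot_params)
    # Scale speed down when the attached effector's rated payload
    # exceeds what the robot is rated for.
    updated["max_speed_scale"] = min(
        1.0, robot_params["rated_payload_kg"] / id_data["parameters"]["payload_kg"]
    )
    # Extend the tool-center-point by the effector's own offset.
    updated["tcp_offset_mm"] = id_data["parameters"]["tcp_offset_mm"]
    updated["grip_force_n"] = id_data["settings"]["grip_force_n"]
    return updated

robot = {"rated_payload_kg": 1.0}
runtime = apply_identification_data(robot, EFFECTOR_ID_DATA)
```

Because the record travels with the end effector, swapping effectors only requires re-reading the identification data, which is what makes the change mechanism rapid.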