Patent classifications
G05B2219/39528
Robot grip detection using non-contact sensors
A method is provided that includes controlling a robotic gripping device to cause a plurality of digits of the robotic gripping device to move towards each other in an attempt to grasp an object. The method also includes receiving, from at least one non-contact sensor on the robotic gripping device, first sensor data indicative of a region between the plurality of digits of the robotic gripping device. The method further includes receiving, from the at least one non-contact sensor on the robotic gripping device, second sensor data indicative of the region between the plurality of digits of the robotic gripping device, where the second sensor data is based on a different sensing modality than the first sensor data. The method additionally includes determining, using an object-in-hand classifier that takes as input the first sensor data and the second sensor data, a result of the attempt to grasp the object.
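The abstract above describes an object-in-hand classifier that fuses two sensing modalities. A minimal rule-based sketch of that fusion step is below; the learned classifier in the patent is not specified, so the modality names (time-of-flight distance and infrared reflectance) and thresholds are illustrative assumptions only.

```python
# Minimal sketch of an object-in-hand classifier fusing two non-contact
# sensing modalities. All names and thresholds are illustrative
# assumptions, not taken from the patent.

def classify_grasp(tof_distances, ir_intensities,
                   distance_thresh=0.02, intensity_thresh=0.6):
    """Return True if an object is likely held between the digits.

    tof_distances: time-of-flight readings (m) of the region between digits
    ir_intensities: infrared reflectance readings (0..1) of the same region
    """
    # Modality 1: is something close between the digits?
    near = min(tof_distances) < distance_thresh
    # Modality 2: is reflectance consistent with a grasped object?
    reflective = max(ir_intensities) > intensity_thresh
    # Fuse both modalities: require agreement for a positive result.
    return near and reflective

print(classify_grasp([0.015, 0.018], [0.8, 0.7]))  # True (grasp succeeded)
print(classify_grasp([0.10, 0.12], [0.1, 0.2]))    # False (empty gripper)
```

A real implementation would replace the hand-set thresholds with a trained classifier taking both raw sensor streams as input, as the abstract describes.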
Systems and methods for robotic control under contact
A system comprises a database; at least one hardware processor coupled with the database; and one or more software modules that, when executed by the at least one hardware processor, receive at least one of sensory data from a robot and images from a camera; identify and build models of objects in an environment, wherein each model encompasses immutable properties of the identified objects, including mass and geometry, and wherein the geometry is assumed not to change; estimate the state, including position, orientation, and velocity, of the identified objects; determine, based on the state and model, potential configurations, or pre-grasp poses, for grasping the identified objects and return multiple grasping configurations per identified object; determine an object to be picked based on a quality metric; translate the pre-grasp poses into behaviors that define motor forces and torques; and communicate the motor forces and torques to the robot.
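One step in the pipeline above is choosing the object to pick from multiple grasping configurations using a quality metric. A hedged sketch of that selection step follows; the candidate fields and the metric itself are illustrative assumptions.

```python
# Sketch of the grasp-selection step: given multiple candidate pre-grasp
# poses per object, pick the candidate with the best quality metric.
# Field names and the metric values are illustrative assumptions.

def select_grasp(candidates):
    """candidates: list of dicts with 'object', 'pose', 'quality' keys."""
    return max(candidates, key=lambda c: c["quality"])

candidates = [
    {"object": "mug",  "pose": (0.3, 0.1, 0.2), "quality": 0.72},
    {"object": "mug",  "pose": (0.3, 0.1, 0.3), "quality": 0.55},
    {"object": "bowl", "pose": (0.5, 0.0, 0.1), "quality": 0.91},
]
best = select_grasp(candidates)
print(best["object"])  # bowl
```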
CONTROL DEVICE, CONTROL METHOD, AND CONTROL PROGRAM
Provided are a control device, a control method, and a control program capable of realizing natural interaction between a human and a robot, like that between humans. A control device according to an embodiment includes a slip detection unit (15, 104) and a control unit (10). The slip detection unit detects slip of a target object gripped by a grip portion. The control unit controls the gripping force with which the grip portion grips the target object based on the slip detected by the slip detection unit. The control unit also estimates an external force applied to the gripped target object based on the detected slip, and controls the gripping force based on the estimated external force.
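The control loop described above (detect slip, estimate the external force from it, adjust gripping force) can be sketched as follows. The slip-to-force model and all gains are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch of the described control loop: detect slip, estimate
# the external force from it, and raise the gripping force accordingly.
# The linear slip-to-force model and all gains are assumptions.

def estimate_external_force(slip_velocity, k_slip=5.0):
    # Assume the external force grows with the observed slip velocity.
    return k_slip * abs(slip_velocity)

def update_grip_force(current_force, slip_velocity,
                      gain=2.0, f_min=1.0, f_max=40.0):
    f_ext = estimate_external_force(slip_velocity)
    # Increase grip force in proportion to the estimated external force,
    # clamped to the gripper's safe operating range.
    new_force = current_force + gain * f_ext
    return max(f_min, min(f_max, new_force))

force = 5.0
for slip in (0.0, 0.01, 0.03, 0.0):   # simulated slip readings (m/s)
    force = update_grip_force(force, slip)
print(round(force, 2))  # 5.4
```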
A ROBOTIC SYSTEM FOR PICKING AND PLACING OBJECTS FROM AND INTO A CONSTRAINED SPACE
A system comprising: a database configured to store a multi-body model of a robot, the robot comprising a plurality of manipulators, and a plurality of joints and plurality of actuators and actuator motors configured to move the joints, and wherein the multi-body model includes a kinematic and geometric model of each manipulator, a catalog of models for objects to be manipulated, the models comprising a current configuration and a target configuration, and a functional mapping of sensory data to configurations of the robot and the manipulators needed to manipulate the objects; at least one hardware processor coupled with the database; and one or more software modules that, when executed by the at least one hardware processor, receive sensory data from within a constrained space, identify objects in the constrained space based on the received sensory data and the catalog of models, determine a target pose for the joints and the manipulators based on the sensory data and the current and target configurations associated with the identified object, and compute joint space positions necessary to realize the target pose.
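The final step above, computing joint-space positions that realize a target pose, can be illustrated with a toy planar two-link arm. The link lengths and the analytic inverse-kinematics solution are illustrative assumptions, not taken from the patent.

```python
import math

# Toy sketch of computing joint-space positions that realize a target
# pose, here for a planar 2-link arm. Link lengths and the analytic IK
# are illustrative assumptions.

def two_link_ik(x, y, l1=0.3, l2=0.2):
    """Return (shoulder, elbow) joint angles reaching point (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle; clamp for numerical safety.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

q1, q2 = two_link_ik(0.4, 0.1)
# Forward-kinematics check: the target pose is realized by these joints.
x = 0.3 * math.cos(q1) + 0.2 * math.cos(q1 + q2)
y = 0.3 * math.sin(q1) + 0.2 * math.sin(q1 + q2)
print(round(x, 3), round(y, 3))  # 0.4 0.1
```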
SYSTEMS AND METHODS FOR ROBOTIC CONTROL UNDER CONTACT
A system comprises a database; at least one hardware processor coupled with the database; and one or more software modules that, when executed by the at least one hardware processor, receive at least one of sensory data from a robot and images from a camera; identify and build models of objects in an environment, wherein each model encompasses immutable properties of the identified objects, including mass and geometry, and wherein the geometry is assumed not to change; estimate the state, including position, orientation, and velocity, of the identified objects; determine, based on the state and model, potential configurations, or pre-grasp poses, for grasping the identified objects and return multiple grasping configurations per identified object; determine an object to be picked based on a quality metric; translate the pre-grasp poses into behaviors that define motor forces and torques; and communicate the motor forces and torques to the robot in order to allow the robot to perform a complex behavior generated from the behaviors.
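A distinct step in this abstract is translating pre-grasp poses into behaviors that define motor forces and torques. A simple per-joint PD law is one conventional way to do this; the gains and joint values below are illustrative assumptions, not the patent's method.

```python
# Sketch of translating a target joint configuration into motor torques
# via a per-joint PD law. Gains and joint values are illustrative
# assumptions, not taken from the patent.

def pd_torques(q, q_target, dq, kp=50.0, kd=5.0):
    """Torque per joint driving current positions q toward q_target."""
    return [kp * (qt - qi) - kd * dqi
            for qi, qt, dqi in zip(q, q_target, dq)]

taus = pd_torques(q=[0.0, 0.5], q_target=[0.25, 0.5], dq=[0.0, 0.1])
print(taus)  # [12.5, -0.5]
```

The resulting torques would then be communicated to the robot's actuators, as the abstract describes.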
Robotic gripper with integrated tactile sensor arrays
A robotic gripper (end effector) for an arm-type robotic system includes a hierarchical sensor architecture that utilizes a central data processing circuit to generate rich sensory tactile data in response to pressure, temperature, vibration and/or proximity sensor data generated by finger-mounted sensor groups in response to interactions between the robotic gripper and a target object during robotic system operations. The rich sensory tactile data is used to generate feedback signals that directly control finger actuators and/or tactile information that is supplied to the robotic system's control circuit. Sensor data processing circuits are configured to receive single-sensor data signals in parallel from the sensor groups, and to transmit corresponding finger-level sensor data signals on a serial bus/signal line to the central data processing circuit. Each sensor group and an associated sensor data processing circuit are disposed on a PCB structure and mounted on a contact portion of an associated gripper finger.
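The hierarchy described above (per-finger circuits emit finger-level frames on a serial bus; a central circuit decodes them) can be sketched in software. The frame layout and field names are illustrative assumptions, not the patent's wire format.

```python
import struct

# Illustrative sketch of the described hierarchy: each finger's sensor
# data processing circuit packs its sensor-group readings into one
# finger-level frame; the central circuit collects and decodes frames.
# The frame layout and field names are assumptions.

def pack_finger_frame(finger_id, pressure, temperature, proximity):
    """Serialize one finger's readings into a frame for the serial bus."""
    return struct.pack("<Bfff", finger_id, pressure, temperature, proximity)

def central_collect(frames):
    """Central data processing circuit: decode frames into tactile data."""
    readings = {}
    for frame in frames:
        fid, p, t, d = struct.unpack("<Bfff", frame)
        readings[fid] = {"pressure": p, "temperature": t, "proximity": d}
    return readings

frames = [pack_finger_frame(0, 1.2, 24.5, 0.03),
          pack_finger_frame(1, 0.9, 24.7, 0.05)]
data = central_collect(frames)
print(sorted(data))  # [0, 1]
```

Serializing per-finger readings into fixed-size frames is what lets many parallel single-sensor signals share one serial bus to the central circuit.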
Flex-rigid sensor array structure for robotic systems
A flex-rigid sensor apparatus for providing sensor data from sensors disposed on an end-effector/gripper to the control circuit of an arm-type robotic system. The apparatus includes piezo-type pressure sensors sandwiched between lower and upper PCB stack-up structures respectively fabricated using rigid PCB (e.g., FR-4) and flexible PCB (e.g., polyimide) manufacturing processes. Additional (e.g., temperature and proximity) sensors are mounted on the upper/flexible stack-up structure. A spacer structure is disposed between the two stack-up structures and includes an insulating material layer defining openings that accommodate the pressure sensors. Copper film layers are configured to provide Faraday cages around each pressure sensor. The pressure sensors, additional sensors and Faraday cages are connected to sensor data processing and control circuitry (e.g., analog-to-digital converter circuits) by way of signal traces formed in the lower and upper stack-up structures and in the spacer structure. An encapsulation layer is formed on the upper PCB stack-up structure.
ROBOTIC MANIPULATORS
A robot comprising: a chopstick configured for at least four degrees of freedom of movement, having a stiff body with shape and proportions approximating those of a pool cue; an electromagnetic actuator, comprising a motor, for each degree of freedom of movement, coupled with the stiff body, wherein the functional mapping from each actuator's motor current to torque output along an axis of motion is stored and used in concert with a calibrated model of the robot for effective impedance control; and a 6-axis force/torque sensor mounted inline between the actuators and each chopstick.
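Using a stored current-to-torque mapping for impedance control, as claimed above, can be sketched for one axis: compute a spring-damper torque, then invert the mapping to get the motor current. The linear torque constant and impedance gains are illustrative assumptions.

```python
# Sketch of impedance control along one axis using a stored (here,
# linear) current-to-torque mapping. The torque constant k_t and the
# impedance gains are illustrative assumptions.

def impedance_current(x, x_des, v, k_t=0.08, stiffness=200.0, damping=8.0):
    """Motor current realizing a spring-damper torque about one axis.

    k_t: calibrated torque constant (N*m per A) for this actuator.
    """
    tau = stiffness * (x_des - x) - damping * v   # desired torque
    return tau / k_t                              # invert tau = k_t * i

i = impedance_current(x=0.10, x_des=0.12, v=0.0)
print(round(i, 2))  # 50.0
```

A calibrated, possibly nonlinear mapping per actuator would replace the single constant k_t in practice.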
SYSTEMS AND METHODS FOR VISUO-TACTILE OBJECT POSE ESTIMATION
Systems and methods for visuo-tactile object pose estimation are provided. In one embodiment, a computer-implemented method includes receiving image data, depth data, and tactile data about an object in an environment. The computer-implemented method also includes generating a visual estimate of the object that includes an object point cloud. The computer-implemented method further includes generating a tactile estimate of the object that includes a surface point cloud based on the tactile data. The computer-implemented method yet further includes estimating a pose of the object based on the visual estimate and the tactile estimate by fusing the object point cloud and the surface point cloud in a 3D space. The pose is a six-dimensional pose.
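The fusion step above can be sketched minimally: merge the visual and tactile point clouds in a common 3D frame, then estimate the translational part of the pose from the fused centroid. This is an illustrative sketch only; recovering the full six-dimensional pose would additionally require orientation estimation (e.g., via registration such as ICP).

```python
# Minimal sketch of visuo-tactile fusion: merge the visual (object)
# point cloud and the tactile (surface) point cloud in one 3D frame,
# then estimate the translation from the fused cloud's centroid.
# A full implementation would also recover orientation.

def fuse_clouds(object_cloud, surface_cloud):
    """Concatenate two clouds already expressed in a common frame."""
    return object_cloud + surface_cloud

def centroid(cloud):
    n = len(cloud)
    return tuple(sum(p[i] for p in cloud) / n for i in range(3))

visual = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)]      # from image + depth data
tactile = [(0.25, 0.25, 0.0), (0.25, -0.25, 0.0)]  # from tactile contacts
fused = fuse_clouds(visual, tactile)
print(centroid(fused))  # (0.25, 0.0, 0.0)
```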
Robot hand, robot apparatus, and control method for robot hand
Force sensors capable of measuring only forces in the x-, y-, and z-axis directions are installed in the respective fingertips, and the forces and moments acting on the robot hand are calculated based on positional information about each fingertip. This structure eliminates the need for large force sensors, thereby enabling downsizing of each fingertip, and enables detection of the loads and moments acting on the robot hand.
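The calculation described above follows from basic statics: each fingertip sensor gives only a force vector, but combining it with the fingertip's position recovers the net force and moment on the hand (moment = sum of r_i x f_i). The positions and forces below are illustrative values.

```python
# Sketch of recovering the net force and moment on the hand from
# fingertip-only force readings plus fingertip positions.
# Positions and forces are illustrative values.

def cross(r, f):
    """Cross product r x f for 3-vectors."""
    return (r[1] * f[2] - r[2] * f[1],
            r[2] * f[0] - r[0] * f[2],
            r[0] * f[1] - r[1] * f[0])

def net_wrench(positions, forces):
    """Sum forces and moments (r_i x f_i) over all fingertips."""
    force = [0.0, 0.0, 0.0]
    moment = [0.0, 0.0, 0.0]
    for r, f in zip(positions, forces):
        m = cross(r, f)
        for i in range(3):
            force[i] += f[i]
            moment[i] += m[i]
    return tuple(force), tuple(moment)

positions = [(0.25, 0.0, 0.0), (-0.25, 0.0, 0.0)]  # fingertip positions
forces = [(0.0, 0.0, 3.0), (0.0, 0.0, 1.0)]        # measured forces (N)
f, m = net_wrench(positions, forces)
print(f, m)  # (0.0, 0.0, 4.0) (0.0, -0.5, 0.0)
```

Because the moments are computed rather than measured, no large 6-axis sensor is needed in the fingertips, which is the downsizing benefit the abstract claims.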