Patent classifications
G05B2219/40625
Incorporating Vision System and In-Hand Object Location System for Object Manipulation and Training
A system and method of object manipulation and training including providing at least one robotic hand including a plurality of grippers connected to a body and providing a plurality of cameras disposed in a periphery surface of the grippers. The method also includes providing a plurality of tactile sensors disposed in the periphery surface of the grippers and actuating the grippers to grasp an object. The method further includes detecting a position of the object with respect to the robotic hand via a first image feed from the tactile sensors and detecting a position of the object with respect to the robotic hand via a second image feed from the cameras. The method also includes generating instructions to grip and manipulate an orientation of the object based on the first and the second image feeds for a visualization of the object relative to the robotic hand.
Method of Automated Calibration for In-Hand Object Location System
A method of automated in-hand calibration including providing at least one robotic hand including a plurality of grippers connected to a body and providing at least one camera disposed on a periphery surface of the plurality of grippers. The method also includes providing at least one tactile sensor disposed in the periphery surface of the plurality of grippers and actuating the plurality of grippers to grasp an object. The method further includes locating a position of the object with respect to the at least one robotic hand and calibrating a distance parameter via the at least one camera. The method also includes calibrating the at least one tactile sensor with the at least one camera and generating instructions to grip and manipulate an orientation of the object via an image feed from the at least one camera for a visualization of the object.
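The camera-to-tactile calibration step described above can be sketched as a simple paired-observation fit. This is an illustrative assumption, not the patented method: the function name, the one-dimensional data layout, and the least-squares scale estimate are all hypothetical stand-ins for whatever calibration the actual system performs.

```python
# Hypothetical sketch: estimate a scale factor mapping tactile-sensor
# contact coordinates into the camera frame, from paired observations
# of the same grasped object seen by both sensors.

def calibrate_scale(tactile_pts, camera_pts):
    """Least-squares scale s minimizing sum over pairs of (s*t - c)**2."""
    num = sum(t * c for t, c in zip(tactile_pts, camera_pts))
    den = sum(t * t for t in tactile_pts)
    return num / den

# Example: camera reports positions at twice the tactile sensor's units.
scale = calibrate_scale([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

Once such a scale (or, in practice, a full transform) is known, tactile contact locations can be expressed in the camera frame, which is what allows the two image feeds to be fused into one visualization of the object.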
TACTILE PERCEPTION APPARATUS FOR ROBOTIC SYSTEMS
A human-like tactile perception apparatus for providing enhanced tactile information (feedback data) from an end-effector/gripper to the control circuit of an arm-type robotic system. The apparatus's base structure is attached to the gripper's finger and includes a flat/planar support plate that presses a pressure sensor array against a target object during operable interactions. The pressure sensor array generates pressure sensor data that indicates portions of the array contacted by surface features of the target object. A sensor data processing circuit generates tactile information in response to the pressure sensor data, and then transmits the tactile information to the robotic system's control circuit. An optional mezzanine connector extends through an opening in the support plate to pass pressure sensor data to the processing circuit. An encapsulating layer covers the pressure sensor array and transmits pressure waves generated by slipping objects to enhance the tactile information.
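The abstract notes that the encapsulating layer transmits pressure waves generated by slipping objects. One simple way to exploit such waves, sketched below as an assumption rather than the patented processing circuit, is to flag slip when the short-term variance of the pressure signal spikes; the window size and threshold are hypothetical.

```python
from statistics import pvariance

# Illustrative slip detector: slipping objects generate pressure waves,
# so a spike in short-term signal variance can flag slip.

def detect_slip(samples, window=4, threshold=0.5):
    """Return True if any length-`window` run of samples has high variance."""
    for i in range(len(samples) - window + 1):
        if pvariance(samples[i:i + window]) > threshold:
            return True
    return False
```

A steady grip produces a nearly constant pressure trace (no slip flagged), while the oscillation from a slipping surface drives the windowed variance above the threshold.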
FLEX-RIGID SENSOR ARRAY STRUCTURE FOR ROBOTIC SYSTEMS
A flex-rigid sensor apparatus for providing sensor data from sensors disposed on an end-effector/gripper to the control circuit of an arm-type robotic system. The apparatus includes piezo-type pressure sensors sandwiched between lower and upper PCB stack-up structures respectively fabricated using rigid PCB (e.g., FR-4) and flexible PCB (e.g., polyimide) manufacturing processes. Additional (e.g., temperature and proximity) sensors are mounted on the upper/flexible stack-up structure. A spacer structure is disposed between the two stack-up structures and includes an insulating material layer defining openings that accommodate the pressure sensors. Copper film layers are configured to provide Faraday cages around each pressure sensor. The pressure sensors, additional sensors and Faraday cages are connected to sensor data processing and control circuitry (e.g., analog-to-digital converter circuits) by way of signal traces formed in the lower and upper stack-up structures and in the spacer structure. An encapsulation layer is formed on the upper PCB stack-up structure.
FORCE ESTIMATION USING DEEP LEARNING
A computer system generates a tactile force model for a tactile force sensor by performing a number of calibration tasks. In various embodiments, the calibration tasks include pressing the tactile force sensor while the tactile force sensor is attached to a pressure gauge, interacting with a ball, and pushing an object along a planar surface. Data collected from these calibration tasks is used to train a neural network. The resulting tactile force model allows the computer system to convert signals received from the tactile force sensor into a force magnitude and direction with greater accuracy than conventional methods. In an embodiment, force on the tactile force sensor is inferred by interacting with an object, determining the motion of the object, and estimating the forces on the object based on a physical model of the object.
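The calibration idea, collecting (raw reading, reference force) pairs from a pressure-gauge task and fitting a model from one to the other, can be shown with a minimal stand-in. The patent trains a neural network; the sketch below substitutes a one-dimensional linear fit by least squares purely to illustrate the data flow, and the synthetic data and function names are assumptions.

```python
# Minimal stand-in for the calibration fit: map raw tactile readings to
# reference forces measured by a pressure gauge. (The patent uses a
# neural network; this linear model only illustrates the pipeline.)

def fit_linear(raw, force):
    """Ordinary least squares for force ~= slope * raw + intercept."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(force) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(raw, force))
    var = sum((x - mx) ** 2 for x in raw)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

# Synthetic gauge data following force = 0.5 * raw - 1.0
raw = [2.0, 4.0, 6.0, 8.0]
force = [0.0, 1.0, 2.0, 3.0]
slope, intercept = fit_linear(raw, force)
```

Replacing the linear model with a trained network is what lets the real system capture the nonlinear, direction-dependent response that makes it more accurate than conventional lookup-style calibration.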
Robotic Touch Perception
An apparatus such as a robot capable of performing goal oriented tasks may include one or more touch sensors to receive touch perception feedback on the location of objects and structures within an environment. A fusion engine may be configured to combine touch perception data with other types of sensor data such as data received from an image or distance sensor. The apparatus may combine distance sensor data with touch sensor data using inference models such as Bayesian inference. The touch sensor may be mounted onto an adjustable arm of a robot. The apparatus may use the data it has received from both a touch sensor and distance sensor to build a map of its environment and perform goal oriented tasks such as cleaning or moving objects.
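The Bayesian-inference fusion mentioned above can be sketched as sequential naive-Bayes updates on the belief that a map cell is occupied, first from the distance sensor and then from the touch sensor. The sensor reliabilities below are invented example values, and the function names are assumptions; the sketch only shows the inference pattern, not the patented fusion engine.

```python
# Illustrative Bayesian fusion of a (noisier) distance sensor and a
# (more reliable) touch sensor into one occupancy belief.

def bayes_update(prior, p_hit_given_occ, p_hit_given_free, hit):
    """Update P(occupied) after one sensor observation."""
    if hit:
        num = p_hit_given_occ * prior
        den = num + p_hit_given_free * (1.0 - prior)
    else:
        num = (1.0 - p_hit_given_occ) * prior
        den = num + (1.0 - p_hit_given_free) * (1.0 - prior)
    return num / den

def fuse(prior, distance_hit, touch_hit):
    # Assumed reliabilities: distance sensor 80%/20%, touch 95%/5%.
    belief = bayes_update(prior, 0.80, 0.20, distance_hit)
    belief = bayes_update(belief, 0.95, 0.05, touch_hit)
    return belief
```

Starting from an uninformed prior of 0.5, agreement between both sensors drives the belief strongly toward occupied or free, which is the property a fusion engine needs to build a usable map from two imperfect modalities.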
End effector device
The end effector device includes an end effector having a palm and a plurality of fingers, a drive device, a position-shift direction determination unit, and a position-shift correction unit. Each finger includes a tactile sensor unit capable of detecting external forces in at least three axial directions. When at least one of the external forces detected by the tactile sensor unit is at or above a specified value, the position-shift direction determination unit determines, based on the detection result from the tactile sensor unit, in which direction the grasped object is shifted with respect to a fitting recess. The position-shift correction unit then moves the palm in the direction opposite to the shift direction determined by the position-shift direction determination unit.
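The determination-and-correction rule can be sketched in a few lines. The threshold value and the assumption that the measured reaction force points along the shift direction (so the correction is its negation) are illustrative, not taken from the patent.

```python
# Hypothetical sketch of the shift-correction rule: if any force axis
# exceeds a specified value, move the palm opposite the detected shift.

THRESHOLD = 2.0  # assumed "specified value", in newtons

def correction_vector(force_xyz):
    """Return the palm move direction (opposite the shift), or None."""
    if max(abs(f) for f in force_xyz) < THRESHOLD:
        return None  # forces below the specified value: no correction
    # Assume the reaction force points along the shift direction.
    return tuple(-f for f in force_xyz)
```

A force of (3, 0, 0) N thus yields a palm move along (-3, 0, 0), while small contact forces leave the grasp untouched.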
ENHANCED DEPTH ESTIMATION USING DEEP LEARNING
Retrographic sensors described herein may provide smaller sensors capable of high-resolution three-dimensional reconstruction of an object in contact with the sensor. Such sensors may be used by robots for work in narrow environments, fine manipulation tasks, and other applications. To reduce sensor size, a reduced number of light sources may be provided in some embodiments; for example, three light sources, two light sources, or a single light source may be used. When fewer light sources are provided, full color-gradient information may not be available. Instead, the missing gradients in one direction, or other information about the three-dimensional object in contact with the sensor, may be inferred from the gradients in a different direction that the real data does provide. This may be done using a trained statistical model, such as a neural network, in some embodiments.
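The final step of such a reconstruction, turning surface gradients into depth, can be shown in isolation. The sketch below assumes the x-direction gradients along one image row are known (whether measured or predicted by the trained model) and simply integrates them; the function name and uniform pixel spacing are illustrative assumptions.

```python
# Sketch of gradient-to-depth integration along one image row.
# In the full sensor, missing gradients would first be filled in by a
# trained model; here we assume gx is already available.

def integrate_row(gx, dx=1.0, z0=0.0):
    """Cumulatively integrate x-gradients into a depth profile."""
    depth = [z0]
    for g in gx:
        depth.append(depth[-1] + g * dx)
    return depth

# A bump: rise for two pixels, then drop back to the baseline.
profile = integrate_row([1.0, 1.0, -2.0])
```

Real systems integrate both gradient directions over a 2-D grid (e.g. via a Poisson solve), which is why a missing gradient direction must be estimated before reconstruction can proceed.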
Tactile, interactive neuromorphic robots
In one embodiment, a neuromorphic robot includes a curved outer housing that forms a continuous curved outer surface; a plurality of trackball touch sensors arranged in an array on, and extending across, the continuous curved outer surface, each trackball sensor configured to detect the direction and velocity of a sweeping stroke by a user; and a plurality of lights, one collocated with each trackball touch sensor and configured to illuminate when its collocated trackball touch sensor is stroked by the user. The robot is configured to interpret the sweeping stroke sensed with the plurality of trackball touch sensors and to provide immediate visual feedback to the user at the locations of the touched trackball touch sensors.