Patent classifications
G05B2219/40575
Event-driven visual-tactile sensing and learning for robots
A classifying sensing system, a classifying method performed using a sensing system, a tactile sensor, and a method of fabricating a tactile sensor. The classifying sensing system comprises a first spiking neural network (SNN) encoder configured for encoding an event-based output of a vision sensor into individual vision modality spiking representations with a first output size; a second SNN encoder configured for encoding an event-based output of a tactile sensor into individual tactile modality spiking representations with a second output size; a combination layer configured for merging the vision modality spiking representations and the tactile modality spiking representations; and a task SNN configured to receive the merged representations and output vision-tactile modality spiking representations with a third output size for classification.
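The abstract's data flow (two per-modality SNN encoders, a combination layer, and a task SNN) can be illustrated with a toy, dependency-free sketch. Everything here is an assumption for illustration only: the leaky integrate-and-fire encoder, the concatenation-based combination layer, the random untrained weights, and all sizes (`T`, `VIS_SIZE`, `TAC_SIZE`, `OUT_SIZE`) are invented stand-ins, not the patent's actual implementation.

```python
import random

random.seed(0)

T = 5          # number of time steps in the event streams
VIS_SIZE = 4   # "first output size" (vision spiking representation)
TAC_SIZE = 3   # "second output size" (tactile spiking representation)
OUT_SIZE = 2   # "third output size" (number of classes)

def snn_encoder(events, size):
    """Toy SNN encoder: leaky integrate-and-fire neurons turning raw
    event intensities into a binary spike train of shape [T][size]."""
    potentials = [0.0] * size
    spikes = []
    for t in range(T):
        frame = []
        for i in range(size):
            potentials[i] = 0.8 * potentials[i] + events[t][i]  # leaky integration
            if potentials[i] >= 1.0:                            # threshold crossing
                frame.append(1)
                potentials[i] = 0.0                             # reset after spike
            else:
                frame.append(0)
        spikes.append(frame)
    return spikes

def combine(vis, tac):
    """Combination layer: merge the two modality spike trains per time step."""
    return [v + k for v, k in zip(vis, tac)]

def task_snn(merged):
    """Toy task SNN: weighted spike counts per class; argmax = predicted class."""
    weights = [[random.uniform(-1, 1) for _ in range(VIS_SIZE + TAC_SIZE)]
               for _ in range(OUT_SIZE)]
    scores = [sum(w[i] * frame[i] for frame in merged for i in range(len(frame)))
              for w in weights]
    return scores.index(max(scores))

# Synthetic event-based sensor outputs (random intensities per neuron per step).
vision_events = [[random.random() for _ in range(VIS_SIZE)] for _ in range(T)]
tactile_events = [[random.random() for _ in range(TAC_SIZE)] for _ in range(T)]

merged = combine(snn_encoder(vision_events, VIS_SIZE),
                 snn_encoder(tactile_events, TAC_SIZE))
print("predicted class:", task_snn(merged))
```

The sketch only demonstrates the shape of the pipeline: each modality is encoded independently into spikes of its own output size, the combination layer merges them step by step, and the task SNN consumes the merged train to produce a classification.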
System and method for object shape identification and pose measurement using multi-fingered hand
A system and method for identifying a shape and pose of an object is provided. The system includes a controller and a grasping device. The grasping device includes a plurality of fingers that are moveable relative to a base. The fingers include one or more tactile sensors attached thereto. The tactile sensors are configured to collect data, such as a 2D image or 3D point cloud, based on points of contact with the object when the object is grasped. The grasping device is configured to roll the object within the fingers and collect additional data. The controller may combine the data and determine the shape and pose of the object based on the combined data.
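The combine-then-estimate step described above can be sketched in plain Python: contact points from two grasps (before and after rolling the object within the fingers) are pooled into one cloud, and a simple pose estimate is read off as the centroid plus the dominant principal axis. The point values, the PCA-style pose estimate, and the power-iteration routine are all illustrative assumptions, not the patent's method.

```python
# Two toy "grasps": 3D contact points from the tactile sensors, before
# and after rolling the object within the fingers (values are invented).
grasp_1 = [(0.0, 0.0, 0.0), (1.0, 0.1, 0.0), (2.0, -0.1, 0.1)]
grasp_2 = [(0.5, 0.0, 0.1), (1.5, 0.1, -0.1), (2.5, 0.0, 0.0)]

points = grasp_1 + grasp_2  # combine the data across grasps into one cloud

# Pose estimate: the centroid gives a position, the dominant principal
# axis gives an orientation for the (roughly elongated) object.
n = len(points)
cx, cy, cz = (sum(p[i] for p in points) / n for i in range(3))
centered = [(x - cx, y - cy, z - cz) for x, y, z in points]

# 3x3 covariance matrix of the centered contact points
cov = [[sum(p[i] * p[j] for p in centered) / n for j in range(3)]
       for i in range(3)]

# Power iteration to find the dominant eigenvector (the principal axis)
axis = [1.0, 0.0, 0.0]
for _ in range(50):
    axis = [sum(cov[i][j] * axis[j] for j in range(3)) for i in range(3)]
    norm = sum(a * a for a in axis) ** 0.5
    axis = [a / norm for a in axis]

print("centroid:", (cx, cy, cz))
print("principal axis:", axis)
```

Because the synthetic points spread mainly along x, the recovered axis is close to the x direction; rolling the object contributes extra contact points that make both the centroid and the axis estimate better constrained.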
System and Method for Controlling Robotic Manipulator with Self-Attention Having Hierarchically Conditioned Output
A method for controlling a robotic manipulator according to a task comprises accepting a feedback signal including a sequence of multi-modal observations of a state of execution of the task. The multi-modal observations are processed with a neural network having a self-attention module with a hierarchically conditioned output to produce a skill of the robotic manipulator and an action conditioned on the skill. The neural network is trained in a supervised manner with demonstration data to produce a sequence of skills and a corresponding sequence of actions for the actuators of the robotic manipulator to perform the task. The method further comprises determining one or more control commands for the actuators based on the produced action and submitting the control commands to the actuators, causing a change of the state of execution of the task.
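The hierarchy described above (self-attention over the observation sequence, a skill head, and an action head conditioned on the chosen skill) can be sketched as a minimal, dependency-free forward pass. The single-head attention, the random untrained weights, and the argmax skill/action selection are illustrative assumptions standing in for the supervised-trained network; none of it is the patent's actual architecture.

```python
import math
import random

random.seed(1)

D = 4          # feature size per observation (assumed)
N_SKILLS = 3   # number of skills (assumed)
N_ACTIONS = 2  # number of low-level actions (assumed)

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def linear(x, w):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in w]

def rand_mat(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

# Random weights standing in for the network trained on demonstration data.
Wq, Wk, Wv = rand_mat(D, D), rand_mat(D, D), rand_mat(D, D)
W_skill = rand_mat(N_SKILLS, D)
W_action = rand_mat(N_ACTIONS, D + N_SKILLS)  # action head sees the skill too

def self_attention(obs):
    """Single-head self-attention over the multi-modal observation sequence."""
    Q = [linear(o, Wq) for o in obs]
    K = [linear(o, Wk) for o in obs]
    V = [linear(o, Wv) for o in obs]
    out = []
    for q in Q:
        scores = softmax([sum(q[d] * k[d] for d in range(D)) / math.sqrt(D)
                          for k in K])
        out.append([sum(scores[i] * V[i][d] for i in range(len(obs)))
                    for d in range(D)])
    return out

def policy(obs):
    """Hierarchically conditioned output: skill first, then an action
    conditioned on that skill."""
    feat = self_attention(obs)[-1]           # feature at the latest time step
    skill = softmax(linear(feat, W_skill))   # high-level skill distribution
    skill_id = skill.index(max(skill))
    one_hot = [1.0 if i == skill_id else 0.0 for i in range(N_SKILLS)]
    action = softmax(linear(feat + one_hot, W_action))  # conditioned on skill
    return skill_id, action.index(max(action))

# A feedback signal: multi-modal observations flattened to D-vectors per step.
observations = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(6)]
print("skill, action:", policy(observations))
```

The returned action index would then be mapped to control commands for the actuators; the key structural point is that the action head receives the selected skill as an input, which is what makes the output hierarchically conditioned.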