G05B2219/40532

CONTROL DEVICE, ROBOT, AND ROBOT SYSTEM
20180225113 · 2018-08-09 ·

A control device includes a processor that is configured to execute computer-executable instructions so as to control a robot, wherein the processor is configured to calculate a force control parameter related to force control of the robot by using machine learning, and control the robot on the basis of the calculated force control parameter.
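The abstract above describes learning a force control parameter rather than hand-tuning it. A minimal sketch of that idea, assuming a spring-like contact model and a gradient-style gain update (the class and parameter names here are illustrative, not the patent's implementation):

```python
# Hypothetical sketch: tuning an impedance-control stiffness gain from
# force-tracking error, in the spirit of learning a force control parameter.
# ImpedanceTuner, the gains, and the contact model are illustrative assumptions.

class ImpedanceTuner:
    """Adjusts a scalar stiffness gain so the measured force tracks a target."""

    def __init__(self, stiffness=100.0, learning_rate=50.0):
        self.stiffness = stiffness
        self.learning_rate = learning_rate

    def update(self, target_force, measured_force, displacement):
        # Force error drives a gradient-style correction of the gain;
        # d(force)/d(stiffness) ~ displacement for a spring-like contact.
        error = target_force - measured_force
        self.stiffness += self.learning_rate * error * displacement
        return self.stiffness

tuner = ImpedanceTuner()
# Simulated contact: measured force = stiffness * displacement.
for _ in range(200):
    measured = tuner.stiffness * 0.02
    tuner.update(target_force=5.0, measured_force=measured, displacement=0.02)
```

With these toy numbers the gain converges toward the value (250) at which a 0.02 m deflection yields the 5 N target force.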

AUTOMATED BIN-PICKING BASED ON DEEP LEARNING
20240408766 · 2024-12-12 ·

Methods and systems for determining a grasp proposal for object picking by a robot gripper are described, wherein the method may comprise capturing an image comprising an object to be grasped by the robot gripper; providing the image to a deep neural network system that is trained to generate an object segmentation map for identifying pixels in the image that are associated with the object and to generate a plurality of object property maps, each object property map linking pixels of the object to information about a predetermined object property; and determining a grasp proposal for a controller of the robot based on the one or more generated object property maps.
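The final selection step described above can be sketched as follows: given a binary object segmentation map and one per-pixel property map (here a "graspability" score, standing in for the network's outputs), pick the best-scoring pixel on the object. The map names and scoring rule are assumptions for illustration:

```python
import numpy as np

def propose_grasp(segmentation, graspability):
    """Return the (row, col) of the highest-scoring pixel on the object."""
    # Pixels outside the segmented object are excluded from the search.
    scores = np.where(segmentation > 0, graspability, -np.inf)
    row, col = np.unravel_index(np.argmax(scores), scores.shape)
    return int(row), int(col)

seg = np.zeros((4, 4))
seg[1:3, 1:3] = 1                  # object occupies a 2x2 patch
grasp = np.zeros((4, 4))
grasp[2, 2] = 0.9                  # best graspability inside the object
grasp[0, 0] = 1.0                  # higher score, but off-object: ignored
print(propose_grasp(seg, grasp))   # -> (2, 2)
```

Masking with `-inf` before the argmax guarantees the proposal always lands on a segmented object pixel.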

METHOD FOR UPDATING A SCENE REPRESENTATION MODEL

A computer implemented method for updating a scene representation model is disclosed. The method comprises obtaining a scene representation model representing a scene having one or more objects, the scene representation model being configured to predict a value of a physical property of one or more of the objects; obtaining a value of the physical property of at least one of the objects, the obtained value being derived from a physical contact of a robot with the at least one object; and updating the scene representation model based on the obtained value. An apparatus is also disclosed.
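A toy version of the update loop above, assuming the scene model stores one predicted physical property (mass) per object and blends in a contact-derived measurement with an exponential moving average (the blending rule and names are illustrative assumptions, not the claimed method):

```python
# Hypothetical sketch: a scene model predicts a physical property and is
# updated from a value measured through robot contact.

class SceneModel:
    def __init__(self):
        self.mass = {}  # object id -> predicted mass (kg)

    def predict(self, obj_id):
        # Prior guess for objects the model has not yet been updated on.
        return self.mass.get(obj_id, 1.0)

    def update_from_contact(self, obj_id, measured, weight=0.5):
        # Blend the prior prediction with the contact-derived value.
        blended = (1 - weight) * self.predict(obj_id) + weight * measured
        self.mass[obj_id] = blended
        return blended

model = SceneModel()
model.update_from_contact("mug", measured=0.30)  # contact says 0.30 kg
```

Repeated contacts pull the prediction progressively closer to the measured value, which is the essential behaviour the abstract describes.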

Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation

A controller is provided for performing dynamic grasping of a target object using visual sensory inputs. The controller includes a robotic interface connected to a robotic arm including links connected by joints having actuators and encoders, a gripper of the end-effector of the robotic arm configured to grasp the target object in response to robot control signals, and a vision sensor configured to continuously provide visual observations for tracking poses of the target object in a workspace and to compute grasp poses, wherein the vision sensor is mounted on a distal end of the robotic arm adjacent to the gripper. The controller trains the Eye-on-Hand reinforcement learner policy, tracks the poses of the target object, and generates robot control signals to follow the target object while keeping it in the field of view of the vision sensor, and to grasp the target object in the workspace.
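The following behaviour of the target following can be sketched in 2-D with a simple proportional controller: each step commands a velocity toward the target so the wrist-mounted camera keeps it centred. The gains and the simplistic kinematics are illustrative assumptions; the patent trains this behaviour with a reinforcement learner rather than a fixed gain:

```python
# Hypothetical sketch: proportional target-following as a stand-in for the
# learned Eye-on-Hand tracking policy.

def track_step(ee_pos, target_pos, gain=0.3):
    """One control step: velocity command proportional to position error."""
    vx = gain * (target_pos[0] - ee_pos[0])
    vy = gain * (target_pos[1] - ee_pos[1])
    return ee_pos[0] + vx, ee_pos[1] + vy

ee = (0.0, 0.0)          # end-effector (and camera) position
target = (1.0, 0.5)      # tracked object pose in the workspace plane
for _ in range(30):
    ee = track_step(ee, target)
```

After enough steps the end-effector converges on the target position, at which point a grasp could be triggered.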

Robot controlling system

An application processor processes an application. A sensor processor acquires image data from an image sensor and analyzes the image data. A motion controlling processor controls motion of a movable part of a robot. The motion controlling processor provides posture information for specifying an orientation of the image sensor to the sensor processor, not via the application processor. The posture information includes information for specifying a position of the image sensor.
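The routing described above can be sketched as message passing: the motion controlling processor hands posture information (including the image sensor's position) to the sensor processor over a direct channel, with no hop through the application processor. Class and field names are illustrative assumptions:

```python
from queue import Queue

class MotionController:
    def __init__(self, direct_link):
        self.direct_link = direct_link  # channel straight to the sensor processor

    def publish_posture(self, position, orientation):
        # Posture information bypasses the application processor entirely.
        self.direct_link.put({"position": position, "orientation": orientation})

class SensorProcessor:
    def __init__(self, direct_link):
        self.direct_link = direct_link

    def analyze_frame(self):
        # The latest posture arrives without an application-processor hop,
        # so image analysis can use an up-to-date sensor pose.
        posture = self.direct_link.get()
        return f"frame analyzed at sensor position {posture['position']}"

link = Queue()
MotionController(link).publish_posture(position=(0.1, 0.2, 0.9),
                                       orientation=(0, 0, 0, 1))
msg = SensorProcessor(link).analyze_frame()
```

The point of the direct link is latency: the sensor processor reads the pose the instant it needs it, rather than waiting on the application layer.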

Event-driven visual-tactile sensing and learning for robots

Disclosed are a classifying sensing system, a classifying method performed using a sensing system, a tactile sensor, and a method of fabricating a tactile sensor. The classifying sensing system comprises a first spiking neural network, SNN, encoder configured for encoding an event-based output of a vision sensor into individual vision modality spiking representations with a first output size; a second SNN encoder configured for encoding an event-based output of a tactile sensor into individual tactile modality spiking representations with a second output size; a combination layer configured for merging the vision modality spiking representations and the tactile modality spiking representations; and a task SNN configured to receive the merged vision modality spiking representations and tactile modality spiking representations and output vision-tactile modality spiking representations with a third output size for classification.
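A non-spiking NumPy analogue of that fusion topology: two modality encoders produce fixed-size representations, a combination layer concatenates them, and a task head maps the merged vector to class scores. The sizes and random weights are arbitrary stand-ins for the trained SNNs:

```python
import numpy as np

rng = np.random.default_rng(0)
W_vision = rng.standard_normal((8, 16))   # vision encoder: 16 -> 8 (first output size)
W_tactile = rng.standard_normal((4, 10))  # tactile encoder: 10 -> 4 (second output size)
W_task = rng.standard_normal((3, 12))     # task head: 8 + 4 -> 3 classes (third output size)

def classify(vision_events, tactile_events):
    v = np.tanh(W_vision @ vision_events)    # vision modality representation
    t = np.tanh(W_tactile @ tactile_events)  # tactile modality representation
    merged = np.concatenate([v, t])          # combination layer
    return int(np.argmax(W_task @ merged))   # predicted class index

label = classify(rng.standard_normal(16), rng.standard_normal(10))
```

The structural point carries over to the spiking case: each modality is encoded to its own fixed output size before fusion, so the task network's input width is simply the sum of the two.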

Methods, apparatuses, and systems for automatically performing sorting operations

Apparatuses, methods, and computer program products for automatically performing sorting operations are disclosed herein. An example apparatus may comprise: an array of gripping elements, and at least one processing component configured to: obtain image data corresponding with the plurality of items; identify, from the image data, one or more characteristics of the plurality of items; determine, based at least in part on the one or more characteristics, an ordered sequence corresponding with the plurality of items; and generate a control indication to cause at least one of the gripping elements to perform the sorting operations based at least in part on the ordered sequence.
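The sequencing step above can be sketched as follows, assuming one extracted characteristic per item (a size value), a size-based ordering rule, and a round-robin assignment over four gripping elements; all of these specifics are illustrative assumptions:

```python
# Hypothetical sketch: characteristics identified from image data determine an
# ordered sequence, which is turned into control indications for the grippers.

def plan_sorting(items, num_grippers=4):
    """items: list of dicts with 'id' and 'size'. Returns control indications."""
    ordered = sorted(items, key=lambda item: item["size"])  # ordered sequence
    return [{"gripper": i % num_grippers, "pick": item["id"]}
            for i, item in enumerate(ordered)]

plan = plan_sorting([{"id": "box-a", "size": 3.0},
                     {"id": "box-b", "size": 1.0},
                     {"id": "box-c", "size": 2.0}])
```

Here the smallest item is picked first and work is spread across the gripper array; a real system would order on whatever characteristics the image analysis yields.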

Intelligent visual humanoid robot and computer vision system programmed to perform visual artificial intelligence processes
09573277 · 2017-02-21 ·

The disclosed visual RRC-humanoid robot is a computer-based system that has been programmed to reach human-like levels of visualization Artificial Intelligence (AI). Behavioral-programming techniques are used to reach human-like levels of identification AI, recognition AI, visualization AI, and comprehension AI. The system is programmed to identify, recognize, visualize and comprehend the full array of sizes, distances, shapes, and colors of objects recorded in the FOV of the system. The following innovative features have been incorporated into the system: (i) incorporation of the RRC, (ii) incorporation of the Relational Correlation Sequencer (RCS), a proprietary RRC module, (iii) a paradigm shift in the analytical-programming methodology employed in computer vision systems, (iv) incorporation of a central hub of intelligence, (v) design of a self-knowledge capability and internalization of all data, and (vi) design of an interface circuit compatible with human-like levels of visualization-AI.

INTELLIGENT VISUAL HUMANOID ROBOT AND COMPUTER VISION SYSTEM PROGRAMMED TO PERFORM VISUAL ARTIFICIAL INTELLIGENCE PROCESSES
20170008174 · 2017-01-12 ·

The disclosed visual RRC-humanoid robot is a computer-based system that has been programmed to reach human-like levels of visualization Artificial Intelligence (AI). Behavioral-programming techniques are used to reach human-like levels of identification AI, recognition AI, visualization AI, and comprehension AI. The system is programmed to identify, recognize, visualize and comprehend the full array of sizes, distances, shapes, and colors of objects recorded in the FOV of the system. The following innovative features have been incorporated into the system: (i) incorporation of the RRC, (ii) incorporation of the Relational Correlation Sequencer (RCS), a proprietary RRC module, (iii) a paradigm shift in the analytical-programming methodology employed in computer vision systems, (iv) incorporation of a central hub of intelligence, (v) design of a self-knowledge capability and internalization of all data, and (vi) design of an interface circuit compatible with human-like levels of visualization-AI.

Method for controlling the operation of an industrial robot
12365551 · 2025-07-22 ·

A method for controlling the operation of an industrial robot configured, in particular, to carry out pick-and-place or singulation tasks.