G05B2219/39064

REACHABLE MANIFOLD AND INVERSE MAPPING TRAINING FOR ROBOTS

A system includes: a first module configured to, based on a set of target robot joint angles, generate a first estimated end effector pose and a first estimated latent variable that is a first intermediate variable between the set of target robot joint angles and the first estimated end effector pose; a second module configured to determine a set of estimated robot joint angles based on the first estimated latent variable and a target end effector pose; a third module configured to determine joint probabilities for the robot based on the first estimated latent variable and the target end effector pose; and a fourth module configured to, based on the set of estimated robot joint angles, determine a second estimated end effector pose and a second estimated latent variable that is a second intermediate variable between the set of estimated robot joint angles and the second estimated end effector pose.
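The four-module cycle above (joints → pose + latent → estimated joints → re-estimated pose) can be illustrated concretely. The sketch below is not the patented learned system; it substitutes a planar 2-link arm whose analytic elbow-sign plays the role of the latent variable, i.e. the information lost when two joint angles collapse to one 2-D end effector position. Link lengths and all function names are illustrative assumptions.

```python
import numpy as np

L1, L2 = 1.0, 0.8  # assumed link lengths of a planar 2-link arm

def forward_module(q):
    """First module (sketch): joint angles -> end effector pose plus a
    latent variable. The latent is the elbow sign, the information lost
    when mapping two joint angles to one 2-D position."""
    q1, q2 = q
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    z = 1.0 if q2 >= 0 else -1.0  # latent: elbow-up vs elbow-down
    return np.array([x, y]), z

def inverse_module(pose, z):
    """Second module (sketch): target pose plus latent -> joint angles.
    The latent selects one of the two analytic IK branches."""
    x, y = pose
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    q2 = z * np.arccos(np.clip(c2, -1.0, 1.0))
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return np.array([q1, q2])

# Cycle: target joints -> (pose, latent) -> estimated joints -> pose again.
q_target = np.array([0.7, 0.9])
pose1, z1 = forward_module(q_target)   # first module
q_est = inverse_module(pose1, z1)      # second module
pose2, z2 = forward_module(q_est)      # fourth module re-checks consistency
print(np.allclose(q_target, q_est), np.allclose(pose1, pose2))
```

Because the latent fixes the IK branch, the second module recovers the original joint angles and the fourth module reproduces the same pose, mirroring the consistency the claimed system enforces during training.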

OPERATION RANGE SETTING DEVICE, OPERATION RANGE SETTING METHOD, AND STORAGE MEDIUM
20230271317 · 2023-08-31

An operation range setting device 1X includes a first recognition means 15Xa, a second recognition means 15Xb, and an operation range setting means 17X. The first recognition means 15Xa is configured to recognize positions of plural reference objects. The second recognition means 15Xb is configured to recognize combinations of reference objects, each combination being a pair selected from the plural reference objects. The operation range setting means 17X is configured to set an operation range of a robot based on line segments, each line segment connecting the pair of reference objects for one of the combinations.
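A minimal sketch of the segment-based range setting follows. It assumes the recognized reference objects are 2-D marker positions and that the selected pairs trace a closed boundary around the allowed range; the names, coordinates, and the ray-casting containment test are illustrative, not taken from the patent.

```python
# Assumed inputs: recognized 2-D positions of plural reference objects
# (e.g. markers on the floor) and the pairs selected as combinations.
reference_positions = {
    "A": (0.0, 0.0), "B": (2.0, 0.0), "C": (2.0, 1.5), "D": (0.0, 1.5),
}
selected_pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]

# Line segments, each connecting a pair of reference objects.
segments = [(reference_positions[p], reference_positions[q])
            for p, q in selected_pairs]

def inside_operation_range(point, segs):
    """Ray-casting point-in-polygon test, assuming the segments form a
    closed boundary around the allowed operation range."""
    x, y = point
    inside = False
    for (x1, y1), (x2, y2) in segs:
        if (y1 > y) != (y2 > y):  # segment crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

print(inside_operation_range((1.0, 0.5), segments))  # inside the boundary
print(inside_operation_range((3.0, 0.5), segments))  # outside
```

A robot controller could call such a test on each planned waypoint and reject motions that leave the range the segments enclose.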

TRANSFORMATION OF JOINT SPACE COORDINATES USING MACHINE LEARNING

Apparatuses, systems, and techniques to map coordinates in task space to a set of joint angles of an articulated robot. In at least one embodiment, a neural network is trained to map task-space coordinates to joint space coordinates of a robot by simulating a plurality of robots at various joint angles, and determining the position of their respective manipulators in task space.
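The training recipe described (simulate robots at sampled joint angles, record manipulator positions, fit a network from task space back to joint space) can be sketched end to end. This is a toy stand-in under stated assumptions: a simulated 2-link planar robot, q2 restricted to one elbow branch so the inverse map is single-valued, and a hand-rolled two-layer NumPy network rather than any particular framework.

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.8  # assumed link lengths of the simulated robot

def fk(q):
    """Simulate the robot: joint angles -> manipulator position in task space."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# Training data: sample many joint configurations, record task-space positions.
# q2 is kept in (0.2, pi - 0.2) so the inverse map is single-valued.
q_train = np.stack([rng.uniform(-np.pi, np.pi, 5000),
                    rng.uniform(0.2, np.pi - 0.2, 5000)], axis=1)
x_train = fk(q_train)

# Tiny two-layer network mapping task-space coordinates -> joint angles.
W1 = rng.normal(0, 0.5, (2, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 2)); b2 = np.zeros(2)

def predict(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

losses, lr = [], 0.05
for step in range(2000):
    pred, h = predict(x_train)
    err = pred - q_train
    losses.append(float((err ** 2).mean()))
    g2 = 2 * err / len(err)                 # backprop through output layer
    dW2 = h.T @ g2; db2 = g2.sum(0)
    gh = (g2 @ W2.T) * (1 - h ** 2)         # backprop through tanh layer
    dW1 = x_train.T @ gh; db1 = gh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"training loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The key property being exercised is that simulation supplies unlimited labeled (pose, joints) pairs for free, so no real-robot data collection is needed to fit the inverse map.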

CONTROL SYSTEM AND CONTROL METHOD OF MANIPULATOR

A control system for a manipulator includes a position indicator provided on a flange for mounting a tool of the manipulator, a position detector provided near the manipulator and configured to detect position information of the position indicator in real time, a computer calculating position data of the position indicator in real time according to the position information, a cloud server calculating a working parameter of a joint of the manipulator in real time by an artificial intelligence neural network according to the position data, and a controller controlling the joint in real time based on the working parameter. The artificial intelligence neural network is a self-learning neural network that calculates and automatically adjusts weights among a plurality of neurons based on the position data.
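The self-learning aspect, adjusting weights as position data streams in, can be illustrated with the simplest possible stand-in: an LMS-style online update of a linear "network". The linear true map, learning rate, and all variable names are assumptions for the sketch; the actual patent describes a multi-neuron network on a cloud server.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed setup: the detector streams 3-D positions of the indicator on
# the flange, and the server must output a joint working parameter. The
# "true" relation is an unknown linear map; an LMS-style online update
# stands in for the self-learning network adjusting its weights.
true_map = np.array([0.4, -0.2, 0.7])
weights = np.zeros(3)
lr = 0.1

errors = []
for _ in range(500):
    position = rng.uniform(-1, 1, 3)    # real-time position data
    target = true_map @ position        # ground-truth joint parameter
    predicted = weights @ position      # server's current estimate
    error = target - predicted
    weights += lr * error * position    # automatic weight adjustment
    errors.append(abs(error))

print(f"mean |error|: {np.mean(errors[:50]):.3f} (first 50) -> "
      f"{np.mean(errors[-50:]):.4f} (last 50)")
```

Each position sample both produces a control output and refines the weights, which is the "calculates and automatically adjusts" behavior the abstract attributes to the network.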

Image processing system, image processing device, method of reconfiguring circuit in FPGA, and program for reconfiguring circuit in FPGA
10474124 · 2019-11-12

An image processing system which can execute various image processing operations during an operation process of a robot is provided. The image processing system includes: a robot for performing a predetermined operation on a workpiece; a photographing unit for photographing the workpiece; an acquisition unit for acquiring a position of the robot; a field programmable gate array (FPGA) whose internal circuit configuration can be reconfigured; a storage unit for storing area information in which circuit information, defining the circuit configuration of the FPGA so as to implement predetermined image processing on an image obtained from the photographing unit, is defined for each operation area of the robot; and a reconfiguration unit for reconfiguring the circuit configuration of the FPGA with the circuit information associated with an operation area when a position of the robot, sequentially acquired by the acquisition unit, belongs to that operation area as defined in the area information.
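The reconfiguration logic reduces to: map the robot's current position to an operation area, and reload the FPGA only when the area changes. The sketch below assumes rectangular areas and treats circuit information as opaque strings (real bitstreams on actual hardware); the area names, bounds, and class names are illustrative.

```python
# Assumed area information: rectangular operation areas, each associated
# with opaque "circuit information" (a real bitstream on actual hardware).
AREA_INFO = {
    "pick_area":    {"bounds": (0.0, 0.0, 1.0, 1.0), "circuit": "blob_detection_circuit"},
    "inspect_area": {"bounds": (1.0, 0.0, 2.0, 1.0), "circuit": "defect_inspection_circuit"},
    "place_area":   {"bounds": (2.0, 0.0, 3.0, 1.0), "circuit": "alignment_check_circuit"},
}

class FakeFpga:
    """Stands in for the FPGA: 'reconfiguring' just records which
    circuit information is currently loaded."""
    def __init__(self):
        self.loaded = None
    def reconfigure(self, circuit):
        self.loaded = circuit

def area_of(position):
    """Map a sequentially acquired robot position to an operation area."""
    x, y = position
    for name, info in AREA_INFO.items():
        x0, y0, x1, y1 = info["bounds"]
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def on_position_update(fpga, position, current_area):
    """Reconfiguration unit: reload the FPGA only when the robot's
    position enters a different operation area."""
    area = area_of(position)
    if area is not None and area != current_area:
        fpga.reconfigure(AREA_INFO[area]["circuit"])
        return area
    return current_area

fpga = FakeFpga()
area = None
for pos in [(0.5, 0.5), (0.6, 0.5), (1.5, 0.5), (2.5, 0.5)]:
    area = on_position_update(fpga, pos, area)
    print(pos, "->", fpga.loaded)
```

Note the second position stays in the same area and triggers no reconfiguration, which is the point of keying circuit swaps to area transitions rather than to every position sample.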

IMAGE PROCESSING SYSTEM, IMAGE PROCESSING DEVICE, METHOD OF RECONFIGURING CIRCUIT IN FPGA, AND PROGRAM FOR RECONFIGURING CIRCUIT IN FPGA
20180224825 · 2018-08-09

An image processing system which can execute various image processing operations during an operation process of a robot is provided. The image processing system includes: a robot for performing a predetermined operation on a workpiece; a photographing unit for photographing the workpiece; an acquisition unit for acquiring a position of the robot; a field programmable gate array (FPGA) whose internal circuit configuration can be reconfigured; a storage unit for storing area information in which circuit information, defining the circuit configuration of the FPGA so as to implement predetermined image processing on an image obtained from the photographing unit, is defined for each operation area of the robot; and a reconfiguration unit for reconfiguring the circuit configuration of the FPGA with the circuit information associated with an operation area when a position of the robot, sequentially acquired by the acquisition unit, belongs to that operation area as defined in the area information.

Reachable manifold and inverse mapping training for robots
12151374 · 2024-11-26

A system includes: a first module configured to, based on a set of target robot joint angles, generate a first estimated end effector pose and a first estimated latent variable that is a first intermediate variable between the set of target robot joint angles and the first estimated end effector pose; a second module configured to determine a set of estimated robot joint angles based on the first estimated latent variable and a target end effector pose; a third module configured to determine joint probabilities for the robot based on the first estimated latent variable and the target end effector pose; and a fourth module configured to, based on the set of estimated robot joint angles, determine a second estimated end effector pose and a second estimated latent variable that is a second intermediate variable between the set of estimated robot joint angles and the second estimated end effector pose.