Brain-computer interface based robotic arm self-assisting system and method

Disclosed are a brain-computer interface based robotic arm self-assisting system and method. The system comprises a sensing layer, a decision-making layer, and an execution layer. The sensing layer comprises an electroencephalogram (EEG) acquisition and detection module and a visual identification and positioning module, and is used to analyze and identify the user's intent and, based on that intent, to identify and locate the positions of the corresponding cup and the user's mouth. The execution layer comprises a robotic arm control module that performs trajectory planning and control for a robotic arm based on an execution instruction received from the decision-making module. The decision-making layer comprises the decision-making module, which is connected to the EEG acquisition and detection module, the visual identification and positioning module, and the robotic arm control module in order to acquire and transmit EEG-signal, located-position, and robotic-arm-status data and to send the execution instruction for the robotic arm. By combining visual identification and positioning technology, a brain-computer interface, and a robotic arm, the system enables paralyzed patients to drink water by themselves, improving their quality of life.
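The three-layer flow described above can be sketched as follows. All class and method names, the placeholder intent label, and the coordinates are hypothetical illustrations, not the patent's actual interfaces.

```python
# Minimal sketch of the sensing / decision-making / execution layering.

class SensingLayer:
    """EEG intent detection plus visual localization of cup and mouth."""
    def read_intent(self):
        # A real system would classify an EEG epoch here (e.g. SSVEP/P300).
        return "drink"

    def locate_targets(self):
        # The visual module returns 3-D positions (metres) of cup and mouth.
        return {"cup": (0.40, 0.10, 0.05), "mouth": (0.30, 0.00, 0.45)}

class ExecutionLayer:
    """Robotic-arm control: plan and execute a trajectory."""
    def execute(self, waypoints):
        for p in waypoints:
            pass                 # each waypoint would go to the arm controller
        return "done"

class DecisionLayer:
    """Connects sensing and execution; issues the execution instruction."""
    def __init__(self, sensing, execution):
        self.sensing, self.execution = sensing, execution

    def step(self):
        if self.sensing.read_intent() == "drink":
            t = self.sensing.locate_targets()
            # Bring the cup to the mouth, then return it.
            return self.execution.execute([t["cup"], t["mouth"], t["cup"]])
        return "idle"

status = DecisionLayer(SensingLayer(), ExecutionLayer()).step()
```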

Robot Image Display Method, Recording Medium, And Robot Image Display System
20220331972 · 2022-10-20

A robot image display method includes (a) a step of recognizing the position and the posture of a base of a robot from a base section image of a base section for teaching, (b) a step of recognizing the position and the posture of a finger section of the robot from a finger section image of a finger section for teaching, (c) a step of calculating angles of one or more joints of the robot from the position and the posture of the base and the position and the posture of the finger section, and (d) a step of displaying, in a virtual space, a three-dimensional image of the robot in a state in which the joints are at the angles calculated in the step (c).
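Step (c), computing joint angles from the recognized base and finger-section poses, reduces to inverse kinematics. Below is a sketch for the simplest case, a planar two-link arm; the link lengths and the 2-D simplification are assumptions for illustration, not the patent's kinematic model.

```python
import math

def two_link_ik(base, finger, l1=0.30, l2=0.25):
    """Planar 2-link inverse kinematics: joint angles (radians) from the
    base position and the finger-section position. Link lengths l1, l2
    are illustrative."""
    dx, dy = finger[0] - base[0], finger[1] - base[1]
    r2 = dx * dx + dy * dy
    # Elbow angle from the law of cosines (clamped for numerical safety).
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))
    q2 = math.acos(c2)
    # Shoulder angle: direction to target minus the inner-triangle angle.
    q1 = math.atan2(dy, dx) - math.atan2(l2 * math.sin(q2),
                                         l1 + l2 * math.cos(q2))
    return q1, q2
```

With the angles in hand, step (d) would pose the 3-D robot model at (q1, q2) in the virtual space.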

WEARABLE ROBOT DATA COLLECTION SYSTEM WITH HUMAN-MACHINE OPERATION INTERFACE

A data collection system that performs data collection of human-driven robot actions for robot learning. The data collection system includes: i) a wearable computation subsystem that is worn by a human data collector and that controls the data collection process and ii) a human-machine operation interface subsystem that allows the human data collector to use the human-machine operation interface to operate an attached robotic gripper to perform one or more actions. A user interface subsystem receives instructions from the wearable computation subsystem that direct the human data collector to perform the one or more actions using the human-machine operation interface subsystem. A visual sensing subsystem includes one or more cameras that collect raw visual data related to the pose and movement of the robotic gripper while performing the one or more actions. A data collection subsystem receives collected data related to the one or more actions.
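One plausible shape for the data the subsystems exchange is an episode record: the wearable computation subsystem names an action, the operator performs it through the gripper interface, and the visual sensing subsystem logs pose frames. The data layout below is a hypothetical sketch, not the patent's format.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One human-driven demonstration: the instructed action plus the raw
    visual/pose frames captured while it was performed."""
    action: str
    frames: list = field(default_factory=list)

def collect_episode(action, n_frames=3):
    ep = Episode(action)
    for t in range(n_frames):
        # Each frame would hold camera images plus the gripper pose;
        # stubbed here with a placeholder pose.
        ep.frames.append({"t": t, "gripper_pose": (0.0, 0.0, 0.1 * t)})
    return ep

ep = collect_episode("pick_cube")
```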

Operation guiding system for operation of a movable device

System includes: an operation terminal that (i) receives an operation instruction given by the operator to a movable section of a machine and (ii) senses an operation standby state that allows the operation instruction to be received; an image sensor estimating section configured to estimate a positional relationship between the operator and the machine; a model generating section configured to, in response to the operation terminal sensing the operation standby state, generate an operating-direction indicating image that indicates an operating direction of the movable section, oriented in accordance with the positional relationship so as to match the direction in which the operator views the machine; and a combining section configured to generate a combined image by combining the operating-direction indicating image with a captured image of the movable section.
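The orientation step, turning a world-frame operating direction into one that matches the operator's view, amounts to a rotation into the operator's viewing frame. A minimal 2-D sketch, assuming the positional relationship is summarized by the operator's yaw angle (an assumption for illustration):

```python
import math

def arrow_in_view(direction_world, operator_yaw):
    """Rotate a movable-section operating direction (2-D, world frame)
    by the negative of the operator's yaw, so the overlaid arrow points
    the way the operator actually sees the motion. Angles in radians."""
    c, s = math.cos(-operator_yaw), math.sin(-operator_yaw)
    x, y = direction_world
    return (c * x - s * y, s * x + c * y)
```

The combining section would then draw the returned arrow over the captured image of the movable section.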

Augmented reality visualization for robotic picking system

An augmented reality (AR) system for production-tuning of parameters for a visual tracking robotic picking system. The robotic picking system includes one or more robots configured to pick randomly-placed and randomly-oriented parts off a conveyor belt and place the parts in an available position, either on a second moving conveyor belt or on a stationary device such as a pallet. A visual tracking system identifies position and orientation of the parts on the feed conveyor. The AR system allows picking system tuning parameters including upstream, discard and downstream boundary locations to be visualized and controlled, real-time robot pick/place operations to be viewed with virtual boundaries, and system performance parameters such as part throughput rate and part allocation by robot to be viewed. The AR system also allows virtual parts to be used in simulations, either instead of or in addition to real parts.
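The upstream, downstream, and discard boundaries partition the feed conveyor into zones that determine what the robot may do with each tracked part. A sketch of that classification, assuming positions increase in the direction of conveyor travel and the boundary ordering upstream < downstream < discard (assumptions for illustration):

```python
def pick_window_state(part_x, upstream, downstream, discard):
    """Classify a tracked part by its conveyor position against the
    tuning boundaries the AR system visualizes."""
    if part_x < upstream:
        return "approaching"    # not yet inside the robot's window
    if part_x < downstream:
        return "pickable"       # inside the pick window
    if part_x < discard:
        return "late"           # past this robot; a downstream robot may pick
    return "discarded"          # past the discard boundary
```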

OPERATION SYSTEM FOR INDUSTRIAL MACHINERY
20230062991 · 2023-03-02

An operation system for industrial machinery comprises: an information acquisition unit that acquires machine identification information corresponding to an industrial machine; a machine identification unit that identifies the industrial machine on the basis of the acquired machine identification information; a model projection unit that projects a model corresponding to the identified industrial machine into a virtual space; a distance/direction calculation unit that calculates the distance and direction of a user observing the model with respect to the projected model; a gesture observation unit that observes the user's gesture as an instruction from the user to the identified industrial machine; an instruction determination unit that determines whether or not the user may give an instruction; and an operation instruction unit that operates the identified industrial machine on the basis of the observed gesture when the determination result is positive.
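The determination-then-operation step can be sketched as a simple gate: the gesture is translated into a command only when the user is authorized for the identified machine and close enough to its projected model. The authorization set, distance threshold, and gesture-to-command table are all illustrative assumptions.

```python
def handle_gesture(machine_id, gesture, distance_m, authorized_ids,
                   max_distance_m=3.0):
    """Run the observed gesture only if the instruction determination is
    positive for this user and machine; otherwise ignore it."""
    if machine_id not in authorized_ids or distance_m > max_distance_m:
        return None                      # determination negative
    commands = {"swipe_up": "start", "swipe_down": "stop", "fist": "hold"}
    return commands.get(gesture)
```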

Registration System and Method for robot-oriented augmented reality teaching system
20220324117 · 2022-10-13

A registration system for a robot-oriented augmented reality teaching system, comprising: a physical robot unit, a registration unit, a virtual robot generation unit and a computer; the physical robot unit comprising a physical robot, a physical robot controller and a robot point-to-point intermittent movement control program; the physical robot provided thereon with a physical robot base coordinate system; the physical robot controller connected with the physical robot and the computer respectively; the robot point-to-point intermittent movement control program installed in the computer; the registration unit comprising a registration marker, a camera and a conversion calculation unit; the registration marker arranged on the physical robot body; the camera fixed in the physical environment, separate from the physical robot; the camera connected with the computer, and the conversion calculation unit arranged in the computer; the virtual robot generation unit arranged in the computer and used for generating a virtual robot model.
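The conversion calculation at the heart of such a registration is a coordinate-frame chain: the camera observes the marker, the marker's pose in the robot base frame is known, and the unknown camera-to-base transform follows by composition. A planar (SE(2)) sketch under that assumption; the patent's actual computation is three-dimensional and not reproduced here.

```python
import math

def compose(a, b):
    """Compose two planar rigid transforms (theta, x, y): result = a ∘ b."""
    ta, xa, ya = a
    tb, xb, yb = b
    return (ta + tb,
            xa + math.cos(ta) * xb - math.sin(ta) * yb,
            ya + math.sin(ta) * xb + math.cos(ta) * yb)

def invert(t):
    """Inverse of a planar rigid transform."""
    th, x, y = t
    return (-th,
            -(math.cos(-th) * x - math.sin(-th) * y),
            -(math.sin(-th) * x + math.cos(-th) * y))

def register(cam_T_marker, base_T_marker):
    """cam_T_base = cam_T_marker ∘ inverse(base_T_marker): the transform
    that places the virtual robot model over the physical robot."""
    return compose(cam_T_marker, invert(base_T_marker))
```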

AUGMENTED REALITY ROBOTIC SYSTEM VISUALIZATION

A technique for displaying a representative path associated with a robotic device. The technique includes detecting at least one reference point within a first image of a workspace, generating the representative path based on path instructions associated with the robotic device and the at least one reference point, and displaying the representative path within the workspace.
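One simple reading of the generation step: the path instructions are offsets that get anchored to the detected reference point, yielding workspace coordinates that the AR overlay can draw. The offset-based data layout is an assumption for illustration.

```python
def representative_path(reference_point, path_instructions):
    """Turn path instructions (offsets relative to the detected
    reference point, in instruction order) into displayable workspace
    coordinates for the overlay."""
    rx, ry = reference_point
    return [(rx + dx, ry + dy) for dx, dy in path_instructions]

path = representative_path((100, 50), [(0, 0), (10, 0), (10, 20)])
# 'path' can then be rendered as a polyline in the workspace image.
```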

Off-line programming apparatus, robot controller, and augmented reality system
11673273 · 2023-06-13

An off-line programming apparatus includes a model creation unit that creates three-dimensional models of a robot and a load, a storage unit that stores a dynamic parameter of the load, a graphic creation unit that creates a three-dimensional graphic representing the dynamic parameter based on the dynamic parameter, and a display unit that displays the three-dimensional models of the robot and the load and the three-dimensional graphic. The dynamic parameter includes inertia around three axes that are orthogonal to one another at a centroid of the load. The three-dimensional graphic is a solid defined by dimensions in three directions orthogonal to one another. The graphic creation unit sets a ratio of the dimensions in the three directions of the three-dimensional graphic to a ratio corresponding to a ratio of the inertia around the three axes.
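The graphic-creation rule, box dimensions whose ratio tracks the ratio of the load's inertia about the three centroid axes, can be sketched as below. The normalization to a chosen display volume is an assumption for illustration, not the patent's exact scaling.

```python
def inertia_box_dims(ixx, iyy, izz, volume=1.0):
    """Return box dimensions (dx, dy, dz) proportional to the load's
    inertia about the three orthogonal centroid axes, scaled so the box
    has the requested display volume."""
    s = (volume / (ixx * iyy * izz)) ** (1.0 / 3.0)
    return (s * ixx, s * iyy, s * izz)
```

The display unit would then render this solid alongside the robot and load models, so an asymmetric inertia is visible at a glance.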

Robot teaching system based on image segmentation and surface electromyography and robot teaching method thereof

The present invention relates to a robot teaching system based on image segmentation and surface electromyography, and to a robot teaching method thereof, comprising an RGB-D camera, a surface electromyography sensor, a robot, and a computer. The RGB-D camera collects video information of the robot teaching scene and sends it to the computer; the surface electromyography sensor acquires surface electromyography signals and inertial acceleration signals of the robot teacher and sends them to the computer; the computer recognizes an articulated arm and a human joint, detects the contact position between the articulated arm and the human joint, calculates the strength and direction of the force applied at the contact position after the human joint touches the articulated arm, and sends a signal controlling the contacted articulated arm to move in accordance with that force's strength and direction, whereby the robot teaching is completed.
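The mapping from muscle activation to arm motion can be sketched as a direction-plus-magnitude command: direction from the detected contact, magnitude from the surface-EMG amplitude. The linear gain, resting baseline, and RMS feature are illustrative assumptions, not the patent's force-estimation model.

```python
def teaching_command(semg_rms, contact_dir, gain=0.05, rest=0.02):
    """Map the surface-EMG amplitude (RMS) at the moment of contact to a
    velocity command for the touched articulated arm: unit direction from
    the contact geometry, speed proportional to activation above rest."""
    activation = max(0.0, semg_rms - rest)     # remove resting baseline
    speed = gain * activation                  # simple linear force model
    norm = (contact_dir[0] ** 2 + contact_dir[1] ** 2
            + contact_dir[2] ** 2) ** 0.5
    if norm == 0.0 or speed == 0.0:
        return (0.0, 0.0, 0.0)                 # no contact or no effort
    return tuple(speed * c / norm for c in contact_dir)
```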