Patent classifications
B25J9/163
ROBOT PROGRAMMING DEVICE AND ROBOT PROGRAMMING METHOD
A robot programming device is provided with: a robot model movement unit that moves a prescribed movable part of a robot model from a first position to a second position in accordance with instruction content; an arm inversion detection unit that detects whether or not a prescribed state has occurred in which, when the prescribed movable part of the robot model is moved to the second position, any axis constituting the robot model is rotated by 180° ± a first prescribed value from a reference rotation angle; and an arm inversion correction unit that, when occurrence of the prescribed state has been detected for any axis constituting the robot model, corrects the posture of the robot model when the prescribed movable part is in the second position so that said axis is no longer in the prescribed state.
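The detection and correction steps in the abstract above can be sketched briefly. The tolerance value, function names, and the 180° unwinding strategy below are illustrative assumptions, not the patent's actual implementation:

```python
INVERSION_TOLERANCE_DEG = 10.0  # hypothetical "first prescribed value"

def detect_arm_inversion(joint_angles_deg, reference_angles_deg,
                         tolerance_deg=INVERSION_TOLERANCE_DEG):
    """Return indices of axes rotated 180 deg (+/- tolerance) from reference."""
    inverted = []
    for i, (angle, ref) in enumerate(zip(joint_angles_deg, reference_angles_deg)):
        delta = abs(angle - ref) % 360.0
        delta = min(delta, 360.0 - delta)  # fold angular distance into [0, 180]
        if abs(delta - 180.0) <= tolerance_deg:
            inverted.append(i)
    return inverted

def correct_arm_inversion(joint_angles_deg, reference_angles_deg, inverted_axes):
    """Rotate each flagged axis back by 180 deg so it leaves the inverted state."""
    corrected = list(joint_angles_deg)
    for i in inverted_axes:
        corrected[i] = (corrected[i] - 180.0) % 360.0
    return corrected
```

For example, moving axis 0 to 175° from a 0° reference falls within 180° ± 10° and would be flagged and corrected.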
LOOP CLOSURE DETECTION METHOD AND SYSTEM, MULTI-SENSOR FUSION SLAM SYSTEM, ROBOT, AND MEDIUM
The present invention provides a loop closure detection method and system, a multi-sensor fusion SLAM system, a robot, and a medium. Said system runs on a mobile robot and comprises a similarity detection unit, a visual pose solving unit, and a laser pose solving unit. The loop closure detection system, multi-sensor fusion SLAM system, and robot provided in the present invention can significantly improve the speed and accuracy of loop closure detection in cases such as a change in the robot's viewing angle, a change in environmental brightness, and weak texture.
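The division of labor among the three units can be illustrated with a minimal skeleton. The descriptor-based similarity measure is a stand-in for the similarity detection unit, and the visual and laser pose solvers are represented only as a verification step in comments; none of this is the patented algorithm:

```python
class LoopClosureDetector:
    """Illustrative skeleton: a similarity unit proposes loop candidates;
    visual and laser pose solving units would then verify and refine them."""

    def __init__(self, similarity_threshold=0.8):
        self.similarity_threshold = similarity_threshold
        self.keyframes = []  # history of (frame_id, global descriptor, scan)

    def _similarity(self, desc_a, desc_b):
        # Stand-in similarity: cosine similarity of global descriptors.
        dot = sum(a * b for a, b in zip(desc_a, desc_b))
        norm = (sum(a * a for a in desc_a) ** 0.5) * (sum(b * b for b in desc_b) ** 0.5)
        return dot / norm if norm else 0.0

    def detect(self, frame_id, descriptor, scan):
        """Return earlier keyframes similar enough to be loop candidates.
        A full system would pass candidates to the visual pose solving unit
        and laser pose solving unit for geometric confirmation."""
        candidates = [fid for fid, desc, _ in self.keyframes
                      if self._similarity(descriptor, desc) >= self.similarity_threshold]
        self.keyframes.append((frame_id, descriptor, scan))
        return candidates
```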
REAL-TIME PREDICTOR OF HUMAN MOVEMENT IN SHARED WORKSPACES
Disclosed herein are systems, devices, and methods for real-time determinations of likelihoods for possible trajectories of a collaborator in a workspace with a robot. The system determines a current kinematic state of the collaborator and determines a goal of the collaborator based on occupancy information about objects in the workspace. The system also determines a possible trajectory for the collaborator based on the goal and the current kinematic state and determines a short-horizon trajectory for the collaborator based on previously observed kinematic states of the collaborator towards the goal. The system also determines a likelihood that the collaborator will follow the possible trajectory based on the short-horizon trajectory, the goal, and the current kinematic state. The system also generates a movement instruction to control movement of the robot based on the likelihood that the collaborator will follow the possible trajectory.
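The likelihood computation described above can be sketched in a simplified 2-D form. Scoring the possible trajectory by the cosine between the observed short-horizon motion and the goal direction is an assumed stand-in for the patented estimator, and the threshold value is hypothetical:

```python
def direction(p_from, p_to):
    """Unit vector from one 2-D point toward another."""
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    norm = (dx * dx + dy * dy) ** 0.5
    return (dx / norm, dy / norm) if norm else (0.0, 0.0)

def trajectory_likelihood(current_pos, goal, short_horizon_end):
    """Likelihood that the collaborator follows the goal-directed trajectory:
    cosine between observed short-horizon motion and the goal direction,
    rescaled to [0, 1]."""
    goal_dir = direction(current_pos, goal)
    observed_dir = direction(current_pos, short_horizon_end)
    cos = goal_dir[0] * observed_dir[0] + goal_dir[1] * observed_dir[1]
    return (cos + 1.0) / 2.0

def movement_instruction(likelihood, slow_threshold=0.7):
    """Generate a conservative robot instruction when the human is likely
    committed to the predicted trajectory."""
    return "slow_down" if likelihood >= slow_threshold else "proceed"
```

A collaborator moving straight toward the goal scores 1.0 and triggers the conservative instruction; motion perpendicular to the goal direction scores 0.5.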
TEMPLATE ROBOTIC CONTROL PLANS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using template robotic control plans. One of the methods comprises obtaining a template robotic control plan that is configurable for a plurality of different robotics applications, wherein the template robotic control plan comprises data defining (i) an adaptation procedure and (ii) a set of one or more open parameters; obtaining a user input defining a respective value or range of values for each open parameter in the set of open parameters, wherein the user input characterizes a specific robotics application for which the template robotic control plan can be configured; and executing, using the obtained values for the set of open parameters, the adaptation procedure to generate a specific robotic control plan from the template robotic control plan.
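The template-plus-open-parameters structure can be sketched as follows. The data model, parameter names, and pick-and-place example are illustrative assumptions, not the patent's schema:

```python
from dataclasses import dataclass

@dataclass
class TemplatePlan:
    """A template robotic control plan: a set of open parameters with
    allowed ranges, plus an adaptation procedure that turns user-supplied
    values into a specific plan."""
    open_parameters: dict  # parameter name -> allowed (lo, hi) range
    adaptation: callable   # values dict -> specific robotic control plan

    def instantiate(self, user_values):
        """Validate user input against each open parameter's range, then
        execute the adaptation procedure to produce a specific plan."""
        for name, value in user_values.items():
            lo, hi = self.open_parameters[name]
            if not lo <= value <= hi:
                raise ValueError(f"{name}={value} outside allowed range [{lo}, {hi}]")
        return self.adaptation(user_values)

# Hypothetical template: a pick-and-place plan with open force and speed.
pick_place_template = TemplatePlan(
    open_parameters={"gripper_force_n": (1.0, 40.0), "speed_mps": (0.05, 1.0)},
    adaptation=lambda v: {"steps": ["approach", "grip", "lift", "place"], **v},
)
```

One template can then be configured for many applications by supplying different in-range values.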
CONTROLLING MECHANICAL SYSTEMS BASED ON NATURAL LANGUAGE INPUT
A method is provided. The method includes obtaining an enhanced state graph. The enhanced state graph represents a set of objects within an environment and a set of positions of the set of objects. The enhanced state graph includes a set of object nodes, a set of property nodes and a set of goal nodes to represent a set of objectives. The method also includes generating a set of instructions for a set of mechanical systems based on the enhanced state graph. The set of mechanical systems is configured to interact with one or more of the set of objects within the environment. The method further includes operating the set of mechanical systems to achieve the set of objectives based on the set of instructions.
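A minimal sketch of the enhanced state graph and instruction generation follows. The node layout and the move-style instruction format are assumptions for illustration, not the patent's representation:

```python
class EnhancedStateGraph:
    """Object nodes hold positions, property nodes hold attributes, and
    goal nodes record target states; instruction generation walks the
    goal nodes and emits one instruction per unmet objective."""

    def __init__(self):
        self.objects = {}     # object name -> current position
        self.properties = {}  # object name -> {property: value}
        self.goals = []       # (object name, target position)

    def add_object(self, name, position, **props):
        self.objects[name] = position
        self.properties[name] = props

    def add_goal(self, name, target_position):
        self.goals.append((name, target_position))

    def generate_instructions(self):
        """One move instruction per goal whose object is not yet in place."""
        return [{"action": "move", "object": obj,
                 "from": self.objects[obj], "to": target}
                for obj, target in self.goals
                if self.objects[obj] != target]
```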
ROBOTIC PROCESS AUTOMATION SYSTEM FOR MANAGING HUMAN AND ROBOTIC TASKS
Improved techniques for combining human tasks and robotic tasks in an organized manner to define an automation workflow process. A workflow process platform can assist a developer in creating an automation workflow process and/or manage performance of an automation workflow process. The improved techniques enable a Robotic Process Automation (RPA) system to programmatically combine various robotic tasks with human actions, so that human tasks and automated tasks are carried out in an interrelated manner.
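The interleaving of robotic and human tasks in one workflow can be sketched as a simple dispatcher. The task schema and queuing behavior are assumptions, not the platform's actual API:

```python
def run_workflow(tasks):
    """Walk an ordered workflow, executing robotic tasks immediately via
    their bot callables and queuing human tasks for a person to complete.
    Each task is a dict with a "kind" ("robotic" or "human"), a "name",
    and, for robotic tasks, a "run" callable."""
    completed, pending_human = [], []
    for task in tasks:
        if task["kind"] == "robotic":
            task["run"]()                        # a software bot executes the step
            completed.append(task["name"])
        else:
            pending_human.append(task["name"])   # awaits a human action
    return completed, pending_human
```

A real platform would additionally track dependencies between the two kinds of tasks, so a robotic step can wait on a human approval and vice versa.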
MACHINE-LEARNABLE ROBOTIC CONTROL PLANS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using learnable robotic control plans. One of the methods comprises obtaining a learnable robotic control plan comprising data defining a state machine that includes a plurality of states and a plurality of transitions between states, wherein: one or more states are learnable states, and each learnable state comprises data defining (i) one or more learnable parameters of the learnable state and (ii) a machine learning procedure for automatically learning a respective value for each learnable parameter of the learnable state; and processing the learnable robotic control plan to generate a specific robotic control plan, comprising: obtaining data characterizing a robotic execution environment; and for each learnable state, executing, using the obtained data, the respective machine learning procedures defined by the learnable state to generate a respective value for each learnable parameter of the learnable state.
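The learnable-state mechanism can be sketched as below. The class names, the gravity-based grip-force procedure in the usage example, and the output plan format are illustrative assumptions:

```python
class LearnableState:
    """A state-machine state with learnable parameters and a machine
    learning procedure that maps environment data to concrete values."""

    def __init__(self, name, learnable_params, learn):
        self.name = name
        self.learnable_params = learnable_params  # names of learnable parameters
        self.learn = learn                        # env data -> {param: value}

def specialize_plan(states, transitions, env_data):
    """Run each learnable state's procedure against data characterizing the
    robotic execution environment, producing a specific control plan."""
    learned = {}
    for state in states:
        if isinstance(state, LearnableState):
            values = state.learn(env_data)
            missing = set(state.learnable_params) - set(values)
            if missing:
                raise ValueError(f"procedure for {state.name} left {missing} unlearned")
            learned[state.name] = values
    return {"transitions": transitions, "parameters": learned}
```

For example, a hypothetical "grasp" state might learn its grip force from the measured mass of the object in the environment.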
Automatic robot perception programming by imitation learning
Apparatus, systems, methods, and articles of manufacture for automatic robot perception programming by imitation learning are disclosed. An example apparatus includes a percept mapper to identify a first percept and a second percept from data gathered from a demonstration of a task and an entropy encoder to calculate a first saliency of the first percept and a second saliency of the second percept. The example apparatus also includes a trajectory mapper to map a trajectory based on the first percept and the second percept, the first percept skewed based on the first saliency, the second percept skewed based on the second saliency. In addition, the example apparatus includes a probabilistic encoder to determine a plurality of variations of the trajectory and create a collection of trajectories including the trajectory and the variations of the trajectory. The example apparatus also includes an assemble network to imitate an action based on a first simulated signal from a first neural network of a first modality and a second simulated signal from a second neural network of a second modality, the action representative of a perceptual skill.
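The entropy-based saliency weighting can be illustrated in miniature. Using normalized Shannon entropy of each percept's feature distribution as its saliency is an assumed stand-in for the patent's entropy encoder:

```python
import math

def shannon_entropy(probs):
    """Entropy of a percept's feature distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

def percept_saliency_weights(percept_dists):
    """Weight each percept by its entropy-derived saliency; percepts with
    more informative (higher-entropy) distributions skew the mapped
    trajectory more strongly."""
    saliencies = [shannon_entropy(d) for d in percept_dists]
    total = sum(saliencies)
    return [s / total for s in saliencies] if total else saliencies
```

A uniform two-outcome percept (1 bit of entropy) dominates a constant percept (0 bits), so the trajectory mapper would lean entirely on the former.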
Robot and method for recognizing wake-up word thereof
Provided is a robot including a microphone configured to acquire a sound signal corresponding to a sound generated near the robot, a camera, an output interface including at least one of a display configured to output a wake-up screen or a speaker configured to output a wake-up sound when the robot wakes up, and a processor configured to recognize whether the acquired sound includes a voice of a person, activate the camera when the sound includes a voice of a person, recognize whether a person is present in an image acquired by the activated camera, set a wake-up word recognition sensitivity based on the recognition result as to whether a person is present, and recognize whether a wake-up word is included in voice data of a user acquired through the microphone based on the set wake-up word recognition sensitivity.
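The sensitivity-adjustment logic can be sketched as follows. The threshold values and function names are hypothetical; the patent specifies only that sensitivity depends on whether a person is visually detected:

```python
def wake_word_threshold(person_visible, base_threshold=0.6, relaxed_threshold=0.4):
    """Lower the recognition threshold (i.e., raise the sensitivity) when
    the camera confirms a person is present."""
    return relaxed_threshold if person_visible else base_threshold

def should_wake(voice_detected, person_visible, wake_word_score):
    """Wake only if speech was heard and the recognizer's wake-word
    confidence clears the sensitivity-adjusted threshold."""
    if not voice_detected:
        return False  # no voice in the sound signal, so the camera stays off
    return wake_word_score >= wake_word_threshold(person_visible)
```

A borderline recognition score of 0.5 thus wakes the robot only when a person is in view, which is the behavior the abstract describes.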
Robot teaching device
A robot teaching device includes: a display device; an operation key formed of a hard key or a soft key and including an input changeover switch; a microphone; a voice recognition section; a correspondence storage section storing each of a plurality of types of commands and a recognition target word in association with each other; a recognition target word determination section configured to determine whether a phrase represented by character information includes the recognition target word; and a command execution signal output section configured to switch, in response to the input changeover switch being operated, between a first operation in which a signal for executing the command corresponding to an operation to the operation key is outputted and a second operation in which a signal for executing the command associated with the recognition target word represented by the character information is outputted.
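The changeover between key-driven and voice-driven command output can be sketched as below. The class name, command table, and substring-based recognition-target-word match are illustrative assumptions, not the device's actual design:

```python
class TeachingPendant:
    """Sketch of the input changeover: in the first operation, operation
    keys map to commands; in the second, recognition target words found
    in recognized speech map to commands."""

    def __init__(self, key_commands, recognition_words):
        self.key_commands = key_commands            # key -> command
        self.recognition_words = recognition_words  # target word -> command
        self.voice_mode = False

    def toggle_input(self):
        """The input changeover switch flips between the two operations."""
        self.voice_mode = not self.voice_mode

    def command_for_key(self, key):
        """First operation: a key press yields its command signal."""
        return None if self.voice_mode else self.key_commands.get(key)

    def command_for_phrase(self, phrase):
        """Second operation: output the command whose recognition target
        word appears in the recognized phrase."""
        if not self.voice_mode:
            return None
        for word, command in self.recognition_words.items():
            if word in phrase:
                return command
        return None
```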