Patent classifications
B25J13/003
HUMANOID ROBOT FOR PERFORMING MANEUVERS LIKE HUMANS
A modular robotic vehicle (MRV) having a modular chassis configured for two-wheel, four-wheel, six-wheel, or eight-wheel steering, controlled by either a semiautonomous system or an autonomous driving system. Either system is associated with operating modes that may include a two-wheel steering mode, an all-wheel steering mode, a traverse steering mode, a park mode, or an omni-directional mode used for steering sideways, driving diagonally, or moving crab-like. During semiautonomous control, a driver of the modular robotic vehicle may use smart I/O devices, including a smartphone, a tablet-like device, or a control panel, to select a preferred driving mode. The driver may communicate navigation instructions via the smart I/O devices to control the steering, speed, and placement of the MRV with respect to the operating mode. GPS and a wireless network provide navigation instructions during autonomous operation involving driving, parking, docking, or connecting to another MRV.
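The operating modes above can be illustrated as mode-dependent per-wheel steering angles. The following is a minimal sketch for a four-wheel variant; the function name, mode labels, and angle conventions are illustrative assumptions, not from the patent.

```python
def wheel_angles(mode: str, steer_deg: float) -> dict:
    """Return per-wheel steering angles (degrees) for a given operating mode."""
    if mode == "two_wheel":      # front wheels steer, rear wheels stay fixed
        return {"fl": steer_deg, "fr": steer_deg, "rl": 0.0, "rr": 0.0}
    if mode == "all_wheel":      # rear wheels counter-steer to tighten the turn
        return {"fl": steer_deg, "fr": steer_deg,
                "rl": -steer_deg, "rr": -steer_deg}
    if mode == "omni_crab":      # all wheels parallel: sideways or diagonal motion
        return {"fl": steer_deg, "fr": steer_deg,
                "rl": steer_deg, "rr": steer_deg}
    if mode == "park":           # wheels toed in to immobilize the vehicle
        return {"fl": 45.0, "fr": -45.0, "rl": -45.0, "rr": 45.0}
    raise ValueError(f"unknown mode: {mode}")
```

A crab-like diagonal drive, for instance, is just `wheel_angles("omni_crab", 45.0)`: every wheel points the same way, so the chassis translates without rotating.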
Manipulator system
A manipulator system configured to perform work on a workpiece being moved by a moving device includes: a robotic arm having one or more joints, to which a tool configured to perform the work on the workpiece is attached; an operating device configured to operate the robotic arm; a first imaging means configured to image the workpiece while following the movement of the workpiece; a second imaging means fixedly provided in a work area to image the state of the work on the workpiece; a displaying means configured to display the image captured by the first imaging means and the image captured by the second imaging means; and a control device configured to control the operation of the robotic arm based on operating instructions from the operating device, while detecting the moving amount of the workpiece being moved by the moving device and carrying out tracking control of the robotic arm according to that moving amount.
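The tracking control described above amounts to offsetting the operator's command by the workpiece's accumulated moving amount. A minimal sketch, assuming a constant-speed conveyor along one axis and a class name of my own invention:

```python
from dataclasses import dataclass

@dataclass
class TrackingController:
    """Offsets operator commands by the workpiece's measured moving amount."""
    conveyor_speed: float   # m/s along x, e.g. from a conveyor encoder
    elapsed: float = 0.0    # accumulated tracking time

    def moving_amount(self, dt: float) -> float:
        """Integrate the workpiece displacement over one control tick."""
        self.elapsed += dt
        return self.conveyor_speed * self.elapsed

    def arm_target(self, cmd_x: float, dt: float) -> float:
        """Operator command (workpiece frame) + accumulated conveyor offset."""
        return cmd_x + self.moving_amount(dt)
```

With this shape, the operator keeps commanding positions relative to the workpiece while the controller silently follows the conveyor.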
METHODS AND SYSTEMS FOR ENABLING HUMAN ROBOT INTERACTION BY SHARING COGNITION
The disclosure generally relates to methods and systems for enabling human-robot interaction by sharing cognition, including gesture and audio. Conventional techniques that use gestures and speech require an extra hardware setup and are limited to navigation in structured outdoor driving environments. The present disclosure provides methods and systems that solve the technical problem of enabling human-robot interaction with a two-step approach that transfers the cognitive load from the human to the robot. In the first step, an accurate shared perspective associated with the task is determined by computing relative frame transformations based on an understanding of the subject's navigational gestures, and the shared perspective is then transformed into the field of view of the robot. In the second step, the transformed shared perspective is given to a language-grounding technique to accurately determine the final goal associated with the task.
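The first step's relative frame transformation can be sketched in 2D: a gesture-indicated goal expressed in the human's frame is mapped through the world frame into the robot's frame. Poses and the function name are illustrative assumptions.

```python
import math

def to_robot_frame(goal_h, human_pose, robot_pose):
    """Transform a goal from the human's frame into the robot's frame.

    goal_h: (x, y) in the human's frame; poses: (x, y, theta) in the world frame.
    """
    hx, hy, hth = human_pose
    # human frame -> world frame
    wx = hx + goal_h[0] * math.cos(hth) - goal_h[1] * math.sin(hth)
    wy = hy + goal_h[0] * math.sin(hth) + goal_h[1] * math.cos(hth)
    # world frame -> robot frame
    rx, ry, rth = robot_pose
    dx, dy = wx - rx, wy - ry
    return (dx * math.cos(-rth) - dy * math.sin(-rth),
            dx * math.sin(-rth) + dy * math.cos(-rth))
```

The returned coordinates are what a second-stage language-grounding model would consume, now expressed in the robot's own field of view.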
METHOD FOR DYNAMIC MULTI-DIMENSIONAL SPATIO-TEMPORAL HUMAN MACHINE INTERACTION AND FEEDBACK
A system for safe interaction between a human and an industrial machine includes a cyber-mechanical system. The cyber-mechanical system includes at least one industrial machine and a cyber-mechanical control system for processing inputs and producing control outputs for the at least one industrial machine. The system further includes a task planner configured to translate high-level goals into scheduled tasks of the industrial machine, and an interaction reasoner that identifies at least one interaction between the industrial machine and a human working in cooperation with it. Output of the interaction reasoner is provided to an image generator that produces an interaction image representing information relating to one or more of the scheduled tasks of the industrial machine. An image projector associated with the industrial machine conveys information about the scheduled tasks of the associated industrial machine to the human.
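The planner-reasoner-generator chain above can be sketched as two small functions: the reasoner flags scheduled tasks whose workspace overlaps the human's position, and the generator turns each flag into a projectable message. Task tuples, the safety distance, and the message text are all assumptions for illustration.

```python
def reason_interactions(tasks, human_pos, safe_dist=1.0):
    """tasks: list of (name, (x, y)) workspaces; return names conflicting with the human."""
    hx, hy = human_pos
    return [name for name, (x, y) in tasks
            if (x - hx) ** 2 + (y - hy) ** 2 < safe_dist ** 2]

def generate_image(conflicts):
    """Stand-in for the image generator: one projector message per conflict."""
    return [f"CAUTION: machine will work at '{c}' zone" for c in conflicts]
```

In a real system the generator would render spatial imagery for the projector; strings stand in for that here.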
CONTROL DEVICE, TASK SYSTEM, CONTROL METHOD AND CONTROL PROGRAM
A control device according to an aspect of the present disclosure is a control device for a robot that operates in a facility used by a user. It includes a detection information acquisition unit that acquires detection information of a user who is present in a preset area of the facility, and a control unit that, based on the detection information of the user, controls the robot such that it operates at a speed equal to or lower than a set maximum operation speed.
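The control unit's behavior reduces to a speed clamp that is active while a user is detected in the preset area. A minimal sketch, with the default maximum speed chosen arbitrarily:

```python
def commanded_speed(requested: float, user_detected: bool,
                    max_speed: float = 0.3) -> float:
    """Clamp the robot's speed to the set maximum while a user is detected."""
    return min(requested, max_speed) if user_detected else requested
```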
Social robot with environmental control feature
A method and apparatus for controlling a social robot includes operating an electronic output device based on social interactions between the social robot and a user. The social robot utilizes an algorithm or other logical solution process to infer a user mental state, for example a mood or desire, based on observation of the social interaction. Based on the inferred mental state, the social robot causes an action of the electronic output device to be selected. Actions may include, for example, playing a selected video clip, brewing a cup of coffee, or adjusting window blinds.
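The inference-to-action step can be sketched as a lookup from an inferred mental state to a device action. The state labels and device names below are illustrative, drawn loosely from the examples in the abstract; they are not an actual product API.

```python
# Hypothetical mapping from inferred user state to (device, action) pairs.
ACTIONS = {
    "bored": ("media_player", "play_selected_video_clip"),
    "sleepy": ("coffee_maker", "brew_cup"),
    "annoyed_by_glare": ("window_blinds", "adjust"),
}

def select_action(inferred_state: str):
    """Return the (device, action) for the inferred state, or a no-op default."""
    return ACTIONS.get(inferred_state, ("none", "idle"))
```

In the full system the inferred state would come from an inference algorithm observing the social interaction; here it is simply an input string.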
METHOD AND APPARATUS FOR DETECTING GROUND ATTRIBUTE OF LEGGED ROBOT
A method for detecting a ground attribute of a legged robot includes obtaining audio of a collision between a foot of the legged robot and the ground, and detecting a workable-level attribute of the ground in the working environment of the legged robot according to the collision audio. The sound of the collision between the foot of the robot and the ground is collected, and the workable-level attribute of the ground in the working environment of the legged robot is detected based on the sound, so that the attribute can be effectively used to control the legs of the legged robot. On the one hand, the motion noise of the legged robot can be reduced; on the other hand, its power consumption can be reduced, thereby increasing its range of motion.
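A toy sketch of the detection step: classify the ground from a single RMS-energy feature of the impact audio, on the assumption that hard floors ring loudly and soft ground is quiet. The thresholds and labels are invented for illustration; a real detector would use a learned classifier over richer audio features.

```python
import math

def rms(samples):
    """Root-mean-square energy of an audio buffer."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def ground_attribute(samples):
    """Map impact-audio energy to an illustrative workable-level attribute."""
    e = rms(samples)
    if e > 0.5:
        return "hard"
    if e > 0.1:
        return "medium"
    return "soft"
```

The resulting attribute could then parameterize leg control, e.g. softer touchdowns on hard ground to cut noise and power consumption.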
Article carrying robot
Included are a bottom portion having a traveling portion; a body portion having a first pillar portion and a second pillar portion extending in a vertical direction from one end and the other end, respectively, in a horizontal direction of the bottom portion; a top portion having one end connected to the end of the first pillar portion opposite the bottom portion and the other end connected to the end of the second pillar portion opposite the bottom portion; an article storage portion that forms an opening with the first pillar portion, the second pillar portion, the top portion, and the bottom portion such that the opening penetrates the body portion; and fixing portions, provided in the first and second pillar portions so as to sandwich the opening and pair with each other, for fixing an article storage auxiliary instrument.
INTERACTIVE COST CORRECTIONS WITH NATURAL LANGUAGE FEEDBACK
Approaches presented herein provide a framework for integrating human feedback, provided in natural language, to update a robot's planning cost or value. The natural language feedback may be modeled as a cost or value associated with completing a task assigned to the robot. This cost or value may then be added to an initial task cost or value to update one or more actions to be performed by the robot. The framework can be applied to both real-world and simulated environments, where the robot may receive instructions, in natural language, that provide a goal, modify an existing goal, or place constraints on actions taken to achieve an existing goal.
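The core update can be sketched as follows, under the assumption that the language feedback has already been mapped to a per-action penalty; here that mapping is a stubbed keyword lookup rather than the learned model a real framework would use. All names are illustrative.

```python
def feedback_cost(action: str, feedback: str) -> float:
    """Toy language-to-cost model: penalize actions the feedback warns against."""
    return 10.0 if any(w in feedback and w in action
                       for w in ("left", "fast")) else 0.0

def replan(actions, base_cost, feedback):
    """Add the feedback cost to each action's initial cost; pick the cheapest."""
    total = {a: base_cost[a] + feedback_cost(a, feedback) for a in actions}
    return min(total, key=total.get)
```

For example, with feedback like "avoid the left side", the penalty shifts the minimum-cost plan away from left-going actions without discarding the original cost structure.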
Robot
Disclosed is a robot including a microphone configured to acquire a voice, a camera configured to acquire a first image including a gesture, and a controller configured to recognize the acquired voice, recognize a pointed position corresponding to the gesture included in the first image, control the camera to acquire a second image including the recognized pointed position, identify a pointed target included in the second image, and perform a control operation on the basis of the identified pointed target and a command included in the recognized voice.
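The pointed-target identification step can be sketched by resolving the gesture's pointing direction against a map of known object positions, then pairing the identified target with the recognized voice command. The object map, angular tolerance, and function names are assumptions for illustration.

```python
import math

def pointed_target(origin, direction_deg, objects, tol_deg=15.0):
    """Return the object whose bearing from `origin` best matches the gesture."""
    best, best_err = None, tol_deg
    for name, (x, y) in objects.items():
        bearing = math.degrees(math.atan2(y - origin[1], x - origin[0]))
        err = abs((bearing - direction_deg + 180) % 360 - 180)  # wrap to [0, 180]
        if err < best_err:
            best, best_err = name, err
    return best

def control_operation(command: str, target):
    """Combine the voice command with the identified pointed target."""
    return f"{command} -> {target}" if target else "ask user to point again"
```

If no object lies within the tolerance cone, the sketch falls back to asking the user to repeat the gesture, which is one plausible recovery behavior.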