B25J11/0005

TELEPRESENCE ROBOTS HAVING COGNITIVE NAVIGATION CAPABILITY

The embodiments of the present disclosure address the unresolved problem of cognitive navigation strategies for a telepresence robotic system. This includes giving instructions remotely over a network to go to a point in an indoor space, to go to an area, or to go to an object. In addition, human-robot interaction for giving and understanding instructions is not integrated in a common telepresence framework. The embodiments herein provide a telepresence robotic system empowered with smart navigation based on in situ intelligent visual semantic mapping of the live scene captured by a robot. They further present an edge-centric software architecture of a teledrive comprising a speech-recognition-based HRI, a navigation module, and a real-time WebRTC-based communication framework that holds the entire telepresence robotic system together. Additionally, the disclosure provides robot-independent API calls via the ROS device driver, making the offering hardware independent and capable of running on any robot.
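The three navigation intents described in the abstract (go to a point, go to an area, go to an object) can be sketched as a small command dispatcher. The class and function names below are illustrative assumptions, not the API of the disclosed system:

```python
from dataclasses import dataclass

@dataclass
class NavCommand:
    """Parsed navigation intent from a remote instruction (hypothetical type)."""
    kind: str       # "point" | "area" | "object"
    target: object  # (x, y) for a point, or a semantic label for area/object

def parse_instruction(text):
    """Map a remote spoken/typed instruction to a NavCommand.

    The keyword rules here are toy assumptions; the disclosed system uses a
    speech-recognition-based HRI and a visual semantic map instead.
    """
    words = text.lower().split()
    if "point" in words:
        # Coordinates would come from the in situ semantic map in practice.
        return NavCommand("point", (0.0, 0.0))
    if "area" in words:
        return NavCommand("area", words[-1])    # e.g. "go to area kitchen"
    return NavCommand("object", words[-1])      # e.g. "go to the chair"
```

The dispatcher isolates intent parsing from motion control, which is what allows the same commands to be forwarded as robot-independent calls through a driver layer such as ROS.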

Robot and controlling method thereof
11548144 · 2023-01-10

Disclosed herein is a robot including an output interface having at least one of a display or a speaker; a camera; and a processor that controls the output interface to output content, acquires an image including a user through the camera while the content is output, detects an over-immersion state of the user based on the acquired image, and controls an operation of releasing the over-immersion when that state is detected.

Apparatus and method for generating robot interaction behavior

Disclosed herein are an apparatus and method for generating robot interaction behavior. The method includes generating a co-speech gesture of the robot corresponding to an utterance input of a user; generating a nonverbal behavior of the robot, that is, a sequence of next joint positions estimated from the joint positions of the user and the current joint positions of the robot based on a pre-trained neural network model for robot pose estimation; and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.

Rehabilitation Robot Control Apparatus and Method Thereof

An embodiment rehabilitation robot control apparatus includes a brainwave signal measuring device configured to measure a brainwave signal of a user, a preprocessing device configured to preprocess the measured brainwave signal, a classification device configured to classify a motor intention of the user based on the brainwave signal preprocessed by the preprocessing device, and a controller configured to reflect the motor intention of the user in real time to control an operation or a stop of a rehabilitation robot.
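The measure-preprocess-classify-control pipeline in the abstract above can be sketched in a few lines. The thresholds and the stand-in "classifier" below are toy assumptions; a real system would band-pass filter EEG samples and run a trained motor-imagery classifier:

```python
def preprocess(signal, lo=8.0, hi=30.0):
    """Stand-in for preprocessing: clamp samples to a band.

    A real preprocessing device would band-pass filter (e.g. to the mu/beta
    band) and remove artifacts; clamping just keeps the sketch runnable.
    """
    return [max(min(s, hi), lo) for s in signal]

def classify_intention(features):
    """Toy classifier: high mean amplitude -> 'move', otherwise 'stop'."""
    mean = sum(features) / len(features)
    return "move" if mean > 15.0 else "stop"

def control_robot(signal):
    """Full pipeline from the abstract: preprocess -> classify -> actuate."""
    intention = classify_intention(preprocess(signal))
    return {"move": "operate", "stop": "halt"}[intention]
```

The point of the decomposition is that the controller only consumes the classified motor intention, so it can react in real time regardless of which preprocessing or classification method sits upstream.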

AUTONOMOUS MOBILE BODY, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING APPARATUS
20220413795 · 2022-12-29

The present technology relates to an autonomous mobile body, an information processing method, a program, and an information processing apparatus capable of improving user experience by an output sound of the autonomous mobile body.

The autonomous mobile body includes: a recognition unit that recognizes motions of its own device; and a sound control unit that controls an output sound emitted by the device. The sound control unit controls output of a plurality of operation sounds corresponding to the plurality of motions of the device, and changes the operation sound in a case where the plurality of motions has been recognized. The present technology can be applied to, for example, a robot.

ROBOT AND METHOD FOR OPERATING THE SAME

A robot includes at least one motor driving the robot to perform a predetermined motion; a memory storing a motion map database and a program comprising one or more instructions; and at least one processor electrically connected to the at least one motor and the memory. The at least one processor is configured to: obtain an input motion identifier based on a user input; identify a motion state indicating whether the robot is performing a motion; based on the motion state being in an active state, store the input motion identifier in the memory; and based on the motion state being in an idle state, determine an active motion identifier from at least one motion identifier stored in the memory based on a predetermined criterion, and control the at least one motor to drive a motion corresponding to the active motion identifier based on the motion map database.
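The described behavior, queue motion identifiers while a motion is running and dispatch one when the robot goes idle, can be sketched as a small state machine. FIFO order is an assumption here; the abstract leaves the "predetermined criterion" open:

```python
from collections import deque

class MotionController:
    """Sketch of the motion-state logic (names are illustrative)."""

    def __init__(self, motion_map):
        self.motion_map = motion_map  # motion identifier -> stored motion data
        self.pending = deque()        # identifiers stored while active
        self.active = False

    def on_input(self, motion_id):
        """Handle a user input; returns the motion to drive, or None."""
        if self.active:
            self.pending.append(motion_id)  # active state: just store it
            return None
        self.active = True                  # idle state: drive immediately
        return self.motion_map[motion_id]

    def on_motion_done(self):
        """Current motion finished; dispatch the next stored one, if any."""
        if self.pending:
            return self.motion_map[self.pending.popleft()]
        self.active = False
        return None
```

Separating input handling from dispatch means user inputs are never lost while a motion plays out, which is the problem the active/idle distinction in the claim addresses.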

Riding system of robot and method thereof

A riding system of a robot supports a PUI through user authentication to provide convenience to users, and includes: a server for exchanging authentication information with the robot; a mobile terminal including an application interlocking with the server, for arranging use information of the user and calling the robot through the application; and a robot storing the authentication information, configured to authenticate the user through the authentication information when there is a call from the mobile terminal, to deform according to a body size of the user included in the use information, to allow the user to ride thereon, and to move the user to the destination. A control method thereof is also disclosed.

Privacy protection in mobile robot

A mobile robot is configured for operation in a commercial or industrial setting, such as an office building or retail store. The mobile robot may include cameras for capturing images and videos and include microphones for capturing audio of its surroundings. To improve privacy by preventing confidential information from being transmitted, the mobile robot may detect text in images and modify the images to make the text illegible before transmitting the images. The mobile robot may also detect human voice in audio and modify audio to make the human voice unintelligible before transmitting the audio.
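The image-redaction step above (detect text, then make it illegible before transmission) can be sketched on a toy image representation. The detector below is a hypothetical stand-in; an actual robot would run OCR or a scene-text detector over camera frames:

```python
def detect_text_regions(image):
    """Hypothetical detector returning boxes (row0, col0, row1, col1).

    Here any cell equal to "T" marks text; a real system would use a
    scene-text detection model on the captured image.
    """
    return [(r, c, r + 1, c + 1)
            for r, row in enumerate(image)
            for c, cell in enumerate(row) if cell == "T"]

def redact_image(image):
    """Blank out detected text regions so the text is illegible."""
    out = [row[:] for row in image]  # copy; never mutate the captured frame
    for r0, c0, r1, c1 in detect_text_regions(image):
        for r in range(r0, r1):
            for c in range(c0, c1):
                out[r][c] = "#"
    return out
```

The same detect-then-degrade pattern applies to the audio path: detect segments containing human voice, then distort only those segments before the stream leaves the robot.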

Robot and operation method therefor
11529739 · 2022-12-20

The present invention provides a guidance service using a robot. For example, the robot may provide the guidance service in an airport. The robot may receive a destination, acquire a movement path from the current position to the destination, and transmit the movement path to a mobile terminal. The mobile terminal may receive the movement path from the robot and display both a guidance path representing the movement path and a user path representing the position movement of the mobile terminal, overlapping the guidance path.

Gaming service automation system with graphical user interface

A robot management system (RMS) includes a plurality of service robots deployed within an operations venue that includes a plurality of gaming devices, an operator terminal presenting a graphical user interface (GUI) to an operator, and a robot management system server (RMS server) configured in networked communication with the plurality of service robots. The RMS server is configured to: identify location data for the service robots; create an interactive overlay map of the operations venue that includes a static map of the operations venue, overlay data showing the location data of the plurality of service robots over the static map, and an interactive icon for each service robot of the plurality of service robots; display, via the GUI, the overlay map; receive a first input indicating a selection of a first interactive icon associated with a first service robot; and display current status information associated with the first service robot.
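The RMS server's role, collect robot locations and serve an overlay-map payload plus per-robot status on selection, can be sketched as a small class. All names and the payload shape are illustrative assumptions, not the patented interface:

```python
class RMSServer:
    """Sketch of the RMS server: track robots, build the GUI overlay map."""

    def __init__(self, static_map):
        self.static_map = static_map  # e.g. a floor-plan asset identifier
        self.robots = {}              # robot_id -> {"pos": (x, y), "status": str}

    def report(self, robot_id, pos, status):
        """Ingest a location/status update from a deployed service robot."""
        self.robots[robot_id] = {"pos": pos, "status": status}

    def overlay_map(self):
        """Build the payload the GUI renders: static map plus robot icons."""
        return {
            "static_map": self.static_map,
            "icons": [{"id": rid, "pos": info["pos"]}
                      for rid, info in sorted(self.robots.items())],
        }

    def select(self, robot_id):
        """Operator clicks a robot's interactive icon: return its status."""
        return self.robots[robot_id]["status"]
```

Keeping the static map separate from the live overlay data is what makes the map "interactive": only the icon layer has to refresh as robots move through the venue.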