Patent classifications
B25J11/0005
AUTONOMOUSLY NAVIGATING ROBOT CAPABLE OF CONVERSING AND SCANNING BODY TEMPERATURE TO HELP SCREEN FOR COVID-19 AND OPERATION SYSTEM THEREOF
This application relates to an autonomously navigating robot. In one aspect, the robot includes an end effector configured to measure a person's body temperature and, when the body temperature exceeds a standard fever temperature, activate a chatbot to check symptoms of Covid-19. The robot may also include a manipulator configured to align the end effector with the person's forehead. The robot may further include a mobile robot configured to detect the person and move the end effector and the manipulator to a position where the person is located by performing autonomous navigation.
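The screening flow described in this abstract can be sketched as follows. The function names and the 37.5 °C threshold are illustrative assumptions, not values disclosed in the patent.

```python
FEVER_THRESHOLD_C = 37.5  # assumed "standard fever temperature"

def screen_person(measure_temp, start_symptom_chatbot):
    """Measure body temperature; if it exceeds the threshold,
    activate a chatbot to check Covid-19 symptoms."""
    temp = measure_temp()
    if temp > FEVER_THRESHOLD_C:
        # Hand off to the symptom-checking chatbot described in the abstract.
        return start_symptom_chatbot(temp)
    return {"temperature": temp, "fever": False}
```

In the claimed system the temperature reading would come from the end effector after the manipulator aligns it with the person's forehead; here both are stubbed as callables.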
Visual annotations in robot control interfaces
Methods, apparatus, systems, and computer-readable media are provided for visually annotating rendered multi-dimensional representations of robot environments. In various implementations, an entity may be identified that is present with a telepresence robot in an environment. A measure of potential interest of a user in the entity may be calculated based on a record of one or more interactions between the user and one or more computing devices. In some implementations, the one or more interactions may be for purposes other than directly operating the telepresence robot. In various implementations, a multi-dimensional representation of the environment may be rendered as part of a graphical user interface operable by the user to control the telepresence robot. In various implementations, a visual annotation may be selectively rendered within the multi-dimensional representation of the environment in association with the entity based on the measure of potential interest.
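The selective-annotation step can be illustrated with a toy interest measure. The scoring heuristic (counting prior interactions that mention the entity) and the threshold are assumptions; the patent leaves the measure unspecified.

```python
def interest_measure(entity, interactions):
    """Score potential interest from the user's interaction record.
    Note: the interactions need not involve operating the robot."""
    return sum(1.0 for i in interactions if entity in i["text"])

def annotations_to_render(entities, interactions, threshold=2.0):
    """Select entities whose interest measure warrants a visual
    annotation in the rendered representation of the environment."""
    return [e for e in entities if interest_measure(e, interactions) >= threshold]
```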
Dynamic learning method and system for robot, robot and cloud server
A dynamic learning method for a robot includes a training and learning mode. The training and learning mode includes the following steps: dynamically annotating a belonging and use relationship between an object and a person in a three-dimensional environment to generate an annotation library; acquiring a rule library, and establishing a new rule and a new annotation by means of an interactive demonstration behavior based on the rule library and the annotation library; and updating the new rule to the rule library and updating the new annotation to the annotation library when it is determined that the established new rule is not in conflict with rules in the rule library and the new annotation is not in conflict with annotations in the annotation library.
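The conflict-gated update can be sketched as below, under a simplifying assumption: an entry "conflicts" when the library already maps the same key to a different value (the patent does not specify its conflict test). The same gate would apply to both the rule library and the annotation library.

```python
def try_update(library, new_entries):
    """Add new_entries to library only if none of them conflicts
    with an existing entry; otherwise reject the whole update."""
    for key, value in new_entries.items():
        if key in library and library[key] != value:
            return False  # conflict detected: leave library unchanged
    library.update(new_entries)
    return True
```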
Robot and method for recognizing wake-up word thereof
Provided is a robot including a microphone configured to acquire a sound signal corresponding to a sound generated near the robot, a camera, an output interface including at least one of a display configured to output a wake-up screen or a speaker configured to output a wake-up sound when the robot wakes up, and a processor configured to recognize whether the acquired sound includes a voice of a person, activate the camera when the sound includes a voice of a person, recognize whether a person is present in an image acquired by the activated camera, set a wake-up word recognition sensitivity based on a recognition result as to whether a person is present, and recognize whether a wake-up word is included in voice data of a user acquired through the microphone based on the set wake-up word recognition sensitivity.
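The sensitivity-setting logic reads as a simple decision chain: a detected voice triggers the camera, and the person-detection result picks the recognition sensitivity. A minimal sketch, with assumed threshold values:

```python
def wake_word_threshold(sound_has_voice, person_in_image):
    """Return a wake-word score threshold: lower (more sensitive)
    when a person is both heard and seen."""
    if not sound_has_voice:
        return None  # no voice: keep the wake-word recognizer idle
    return 0.4 if person_in_image else 0.7

def should_wake(score, sound_has_voice, person_in_image):
    """Decide whether a wake-word score clears the current threshold."""
    threshold = wake_word_threshold(sound_has_voice, person_in_image)
    return threshold is not None and score >= threshold
```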
AUTONOMOUS MOBILE BODY, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING DEVICE
The present technology relates to an autonomous mobile body, an information processing method, a program, and an information processing device, by which a user experience based on an output sound of the autonomous mobile body can be improved. The autonomous mobile body includes a recognition section that recognizes a paired device that is paired with the autonomous mobile body, and a sound control section that changes a control method for an output sound to be outputted from the autonomous mobile body, on the basis of a recognition result of the paired device, and controls the output sound in accordance with the changed control method. The present technology is applicable to a robot, for example.
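The control-method switch on recognizing a paired device might look like the sketch below. The specific control methods (routing the sound to the paired device and halving the internal volume) are illustrative assumptions.

```python
def control_output_sound(paired_device, volume):
    """Change the output-sound control method based on whether a
    paired device was recognized."""
    if paired_device is not None:
        # Assumed behavior: route to the paired device at reduced volume.
        return {"target": paired_device, "volume": volume * 0.5}
    return {"target": "internal_speaker", "volume": volume}
```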
ROBOT SERVICE METHOD AND ROBOT APPARATUS USING SOCIAL NETWORK SERVICE
The present invention relates to a robot service system and a robot apparatus using a social network service, and comprises: (a) a step in which a terminal device is connected to a robot apparatus by executing a social network service program, and displays a service screen on which an image captured by the robot apparatus is displayed; (b) a step in which the terminal device transmits a robot control command inputted to the service screen to the robot apparatus; (c) a step in which the robot apparatus performs an operation according to the robot control command and transmits operation performance data to the terminal device; and (d) a step in which the terminal device displays the operation performance data transmitted from the robot apparatus on the service screen.
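Steps (b) through (d) form a command/response cycle on the terminal side, which can be sketched with the social-network transport abstracted as send/receive callables (the transport details are not part of the abstract):

```python
def control_cycle(send_to_robot, receive_from_robot, command):
    """Terminal side of one control exchange over the SNS channel."""
    send_to_robot(command)              # step (b): transmit control command
    performance = receive_from_robot()  # step (c): robot's performance data
    return {"screen": performance}      # step (d): display on service screen
```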
ROBOT CONTROL DEVICE, ROBOT, ROBOT CONTROL METHOD, AND PROGRAM RECORDING MEDIUM
Disclosed is a robot control device and the like that improve the accuracy with which a robot starts listening to speech, without requiring the user to perform an operation. This robot control device is provided with: an action executing means which, upon detection of a person, determines an action to be executed with respect to said person, and performs control in such a way that a robot executes the action; an assessing means which, upon detection of a reaction from the person in response to the action determined by the action executing means, assesses the possibility that the person will talk to the robot, on the basis of the reaction; and an operation control means which controls an operating mode of the robot main body on the basis of the result of the assessment performed by the assessing means.
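The assessment step can be illustrated as a reaction-to-mode mapping. The reaction categories, scores, and threshold below are assumptions for the sake of the sketch:

```python
# Assumed likelihood that each observed reaction precedes speech.
REACTION_SCORES = {"approaches": 0.9, "faces_robot": 0.6, "ignores": 0.1}

def operating_mode(reaction, listen_threshold=0.5):
    """Assess how likely the person is to talk to the robot and
    choose the operating mode of the robot main body accordingly."""
    likelihood = REACTION_SCORES.get(reaction, 0.0)
    return "listening" if likelihood >= listen_threshold else "standby"
```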
Systems and methods to adapt and optimize human-machine interaction using multimodal user-feedback
Systems and methods for human-machine interaction. An adaptive behavioral control system of a human-machine interaction system controls an interaction sub-system to perform a plurality of actions for a first action type in accordance with a computer-behavioral policy, each action being a different alternative action for the action type. The adaptive behavioral control system detects a human reaction of an interaction participant to the performance of each action of the first action type from data received from a human reaction detection sub-system. The adaptive behavioral control system stores information indicating each detected human reaction in association with information identifying the associated action. In a case where stored information indicating detected human reactions for the first action type satisfy an update condition, the adaptive behavioral control system updates the computer-behavioral policy for the first action type.
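The feedback loop above can be sketched as follows: log the human reaction to each alternative action of one action type, and once enough reactions are stored (a sample-count threshold is assumed here as the "update condition"), update the policy by re-ranking the alternatives by mean reaction score.

```python
from collections import defaultdict

class AdaptivePolicy:
    def __init__(self, actions, min_samples=5):
        self.actions = list(actions)        # alternatives for one action type
        self.min_samples = min_samples      # assumed update condition
        self.reactions = defaultdict(list)  # action -> observed reaction scores

    def record(self, action, score):
        """Store a detected human reaction for an action; maybe update."""
        self.reactions[action].append(score)
        if sum(len(v) for v in self.reactions.values()) >= self.min_samples:
            self.update()

    def update(self):
        """Re-rank alternatives by mean observed reaction, best first."""
        self.actions.sort(
            key=lambda a: -sum(self.reactions[a]) / max(len(self.reactions[a]), 1)
        )

    def best_action(self):
        return self.actions[0]
```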
METHOD FOR GENERATING A HANDWRITING VECTOR
One variation of a method includes: accessing a handwriting sample comprising a set of user glyphs handwritten by a user; for each character in a set of characters, identifying a subset of user glyphs corresponding to the character in the handwriting sample, characterizing variability of a set of spatial features across the subset of user glyphs, and storing variability of the set of spatial features across the subset of user glyphs in a character container corresponding to the character; and compiling the set of character containers into a handwriting model for the user. The method further includes: accessing a text string comprising a combination of characters in the set of characters; for each instance of each character in the text string, inserting a set of variability parameters into the handwriting model to generate a synthetic glyph representing the character; and assembling the set of synthetic glyphs into a print file.
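The two phases (characterize per-character variability, then sample within it to synthesize glyphs) can be reduced to a toy sketch. Here feature extraction is simplified to precomputed feature vectors, and Gaussian sampling of the variability parameters is an assumption:

```python
import random
from statistics import mean, pstdev

def build_handwriting_model(samples):
    """samples: {char: [feature_vector, ...]} -> per-character
    (mean, stdev) for each spatial feature across the user's glyphs."""
    model = {}
    for char, vectors in samples.items():
        features = list(zip(*vectors))  # group values per feature
        model[char] = [(mean(f), pstdev(f)) for f in features]
    return model

def synthesize(model, text, seed=0):
    """Sample one synthetic feature vector per character occurrence,
    varying within the learned per-feature spread."""
    rng = random.Random(seed)
    return [[rng.gauss(m, s) for m, s in model[c]] for c in text]
```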
Artificial intelligence server and method for providing information to user
In an artificial intelligence server for providing information to a user, the artificial intelligence server includes a communication unit configured to communicate with a plurality of artificial intelligence apparatuses deployed in a service area and a processor configured to receive at least one of speech data of the user or terminal usage information of the user from at least one of the plurality of artificial intelligence apparatuses, generate intention information of the user based on at least one of the received speech data or the received terminal usage information, generate status information of the user using the plurality of artificial intelligence apparatuses, determine an information providing device among the plurality of artificial intelligence apparatuses based on the generated status information of the user, generate output information to be outputted from the determined information providing device, and transmit a control signal for outputting the generated output information to the determined information providing device.