Patent classifications
B25J11/0005
System for user interactions with an autonomous mobile device
A user interacts with an autonomous mobile device (AMD) using a voice user interface. The voice user interface allows a user to instruct the AMD to move, stop, go to a specified location, and so forth. The commands may include, but are not limited to: stop, stop moving, move, turn, go to, stay here, go away, and so forth. In one implementation, if the AMD is instructed by the user to go away, the AMD may move out of sight of the user from a first region to another region, such as another room. The AMD will avoid traversing the first region until a timer expires or a command to enter the first region is given.
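The "go away" behavior described above can be sketched as a small region-avoidance policy with a timer and an override command. This is an illustrative sketch only; the class, method, and parameter names are assumptions, not the patent's implementation.

```python
import time

class GoAwayPolicy:
    """Sketch of the 'go away' behavior: after the command, the AMD avoids
    the user's region until a timer expires or entry is explicitly allowed."""

    def __init__(self, avoid_seconds=300.0, clock=time.monotonic):
        self.avoid_seconds = avoid_seconds
        self.clock = clock
        self.avoided = {}  # region id -> time at which avoidance expires

    def handle_go_away(self, current_region):
        # On "go away", mark the user's region off-limits until the timer expires.
        self.avoided[current_region] = self.clock() + self.avoid_seconds

    def allow_entry(self, region):
        # An explicit command to enter the region overrides the timer.
        self.avoided.pop(region, None)

    def may_traverse(self, region):
        expires = self.avoided.get(region)
        if expires is None:
            return True
        if self.clock() >= expires:
            del self.avoided[region]  # timer expired; region is traversable again
            return True
        return False
```

A path planner could consult `may_traverse` when expanding candidate regions, so avoided regions are simply pruned from the search until the timer lapses.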
Systems and methods to adapt and optimize human-machine interaction using multimodal user-feedback
Systems and methods for human-machine interaction. An adaptive behavioral control system of a human-machine interaction system controls an interaction sub-system to perform a plurality of actions for a first action type in accordance with a computer-behavioral policy, each action being a different alternative action for the action type. The adaptive behavioral control system detects a human reaction of an interaction participant to the performance of each action of the first action type from data received from a human reaction detection sub-system. The adaptive behavioral control system stores information indicating each detected human reaction in association with information identifying the associated action. In a case where the stored information indicating detected human reactions for the first action type satisfies an update condition, the adaptive behavioral control system updates the computer-behavioral policy for the first action type.
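The record-then-update loop above can be illustrated with a minimal sketch. The reaction score, the "enough trials per alternative" update condition, and the best-average selection rule are assumptions chosen for clarity; the abstract does not specify them.

```python
from collections import defaultdict

class AdaptivePolicy:
    """Illustrative sketch: try alternative actions for one action type,
    record detected human reactions, and update the policy when an
    (assumed) update condition is met."""

    def __init__(self, actions, min_trials=3):
        self.actions = list(actions)        # alternative actions for one action type
        self.min_trials = min_trials        # assumed condition: enough trials per action
        self.reactions = defaultdict(list)  # action -> recorded reaction scores
        self.preferred = self.actions[0]

    def record_reaction(self, action, score):
        # Store the detected reaction in association with the action that produced it.
        self.reactions[action].append(score)
        if self._update_condition():
            self._update_policy()

    def _update_condition(self):
        return all(len(self.reactions[a]) >= self.min_trials for a in self.actions)

    def _update_policy(self):
        # Prefer the alternative with the best average recorded reaction.
        self.preferred = max(
            self.actions,
            key=lambda a: sum(self.reactions[a]) / len(self.reactions[a]),
        )
```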
Robot control method and companion robot
The present invention provides a robot control method. The method includes: collecting interaction information of a companion target and obtaining digital person information of a companion person (101), where the interaction information includes information about a sound or an action directed by the companion target toward the robot, and the digital person information includes a set of digitized information about the companion person; determining, by using the interaction information and the digital person information, a manner of interacting with the companion target (103); generating, based on the digital person information of the companion person and by using a machine learning algorithm, interaction content corresponding to the interaction manner (105); and generating a response action toward the companion target based on the interaction manner and the interaction content (107).
Artificial intelligence (AI) robot and control method thereof
Disclosed is a method of controlling a robot, comprising: switching to a surrounding environment concentration mode in response to a sound from the surrounding environment while in a display off mode; searching for a user in the surrounding environment concentration mode and switching to a user concentration mode when the user is found; switching from the user concentration mode to a user conversation mode in response to a sound received from the user; and, when the user is not found in the surrounding environment concentration mode, passing through a play alone mode and then re-entering the display off mode. By defining these various modes, the robot can operate in a mode suited to changes in its surrounding environment.
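The mode changes described above form a small state machine. A minimal sketch follows; the transitions mirror the abstract, but the event names are assumptions introduced for illustration.

```python
# Robot modes named in the abstract.
DISPLAY_OFF = "display_off"
ENV_CONCENTRATION = "surrounding_environment_concentration"
USER_CONCENTRATION = "user_concentration"
USER_CONVERSATION = "user_conversation"
PLAY_ALONE = "play_alone"

def next_mode(mode, event):
    """Return the next mode for a (mode, event) pair; unknown pairs keep
    the current mode. Event names are hypothetical."""
    transitions = {
        (DISPLAY_OFF, "ambient_sound"): ENV_CONCENTRATION,
        (ENV_CONCENTRATION, "user_found"): USER_CONCENTRATION,
        (ENV_CONCENTRATION, "user_not_found"): PLAY_ALONE,
        (USER_CONCENTRATION, "user_sound"): USER_CONVERSATION,
        (PLAY_ALONE, "done_playing"): DISPLAY_OFF,
    }
    return transitions.get((mode, event), mode)
```

Keeping the transitions in one table makes the abstract's two paths explicit: sound wakes the robot, and an unsuccessful user search routes it through the play alone mode back to display off.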
Interactive autonomous robot configured for deployment within a social environment
An interactive autonomous robot is configured for deployment within a social environment. The disclosed robot includes a show subsystem configured to select between different in-character behaviors depending on robot status, thereby allowing the robot to appear in-character despite technical failures. The disclosed robot further includes a safety subsystem configured to intervene with in-character behavior when necessary to enforce safety protocols. The disclosed robot is also configured with a social subsystem that interprets social behaviors of humans and then initiates specific behavior sequences in response.
System and method for selective animatronic peripheral response for human machine dialogue
The present teaching relates to a method, system, medium, and implementation for activating an animatronic device. Information is obtained about a user for whom the animatronic device is to be configured to carry out a dialogue. The animatronic device includes a head portion and a body portion, and the head portion is configured based on one of a plurality of selectable head portions. One or more preferences of the user are identified from the obtained information and used to select a first head portion from the plurality of selectable head portions. The head portion of the animatronic device is then configured based on the first selected head portion for carrying out the dialogue.
Robot and method of controlling same
Disclosed herein is a robot including an output interface having at least one of a display or a speaker, and a processor configured to acquire output data at a predetermined playback time point of content output via the robot or an external device, recognize a first emotion corresponding to the acquired output data, and control the output interface to output an expression based on the recognized first emotion.
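The recognize-then-express flow above can be sketched as a lookup from a recognized emotion to display and speech settings. The emotion labels, the placeholder recognizer, and the expression table are all assumptions; the abstract does not describe the recognition model.

```python
# Hypothetical mapping from a recognized emotion to output-interface settings.
EXPRESSIONS = {
    "joy": {"display": "smile", "speech": "cheerful"},
    "sadness": {"display": "frown", "speech": "soft"},
    "neutral": {"display": "idle", "speech": "plain"},
}

def recognize_emotion(output_data):
    # Placeholder recognizer: a real system would classify the content's
    # audio/video features at the playback time point.
    return output_data.get("emotion_hint", "neutral")

def express(output_data):
    """Recognize a first emotion from the acquired output data and return
    the expression the output interface should produce."""
    emotion = recognize_emotion(output_data)
    return EXPRESSIONS.get(emotion, EXPRESSIONS["neutral"])
```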
Method for generating a handwriting vector
One variation of a method includes: accessing a handwriting sample comprising a set of user glyphs handwritten by a user; for each character in a set of characters, identifying a subset of user glyphs corresponding to the character in the handwriting sample, characterizing variability of a set of spatial features across the subset of user glyphs, and storing variability of the set of spatial features across the subset of user glyphs in a character container corresponding to the character; and compiling the set of character containers into a handwriting model for the user. The method further includes: accessing a text string comprising a combination of characters in the set of characters; for each instance of each character in the text string, inserting a set of variability parameters into the handwriting model to generate a synthetic glyph representing the character; and assembling the set of synthetic glyphs into a print file.
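The two phases above (characterizing per-character variability, then sampling synthetic glyphs) can be sketched as follows. Representing each spatial feature by a mean and standard deviation, and sampling from a Gaussian, are modeling assumptions for illustration; the patent's feature set and sampling scheme may differ.

```python
import random
import statistics

def build_character_container(glyph_features):
    """Characterize variability of spatial features across a user's glyphs
    for one character. glyph_features: list of feature vectors, one per
    handwritten glyph sample."""
    per_feature = list(zip(*glyph_features))
    return [(statistics.mean(f), statistics.pstdev(f)) for f in per_feature]

def synthesize_glyph(container, rng):
    # Sample each spatial feature around the observed mean, with the
    # observed variability (assumed Gaussian).
    return [rng.gauss(mean, stdev) for mean, stdev in container]

def render_text(text, handwriting_model, rng):
    """One freshly sampled glyph per character instance, so repeated
    characters vary the way real handwriting does."""
    return [synthesize_glyph(handwriting_model[ch], rng) for ch in text]
```

Because each instance is sampled independently, two occurrences of the same character in the text string yield slightly different synthetic glyphs, which is the point of storing variability rather than a single template.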
Autonomously acting robot that stares at companion
A robot includes an operation control unit that selects a motion of the robot, a drive mechanism that executes the motion selected by the operation control unit, an eye control unit that causes an eye image to be displayed on a monitor installed in the robot, and a recognizing unit that detects a user. The eye control unit changes a pupil region included in the eye image in accordance with the position of the user relative to the robot. In one configuration, the eye control unit changes the pupil region when a sight line direction of the user is detected, or when the user is within a predetermined range.
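The pupil-region change driven by the user's relative position can be sketched as a simple gaze offset. The function, parameters, and the sine-based mapping are illustrative assumptions, not the patent's method.

```python
import math

def pupil_offset(user_angle_deg, user_distance_m,
                 max_offset_px=12.0, attention_range_m=3.0):
    """Sketch: shift the displayed pupil region toward a detected user.

    user_angle_deg: bearing of the user relative to straight ahead (0 = centered).
    Returns an (x, y) pixel offset for the pupil region.
    """
    if user_distance_m > attention_range_m:
        # User is outside the predetermined range: keep the eyes centered.
        return (0.0, 0.0)
    # Shift horizontally toward the user, saturating at max_offset_px at 90 degrees.
    clamped = max(-90.0, min(90.0, user_angle_deg))
    x = max_offset_px * math.sin(math.radians(clamped))
    return (x, 0.0)
```

Feeding this offset into the eye-image renderer each frame makes the pupil appear to track the user as either party moves.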
Touch input processing method and electronic device supporting the same
An electronic device includes: a housing; a sensor module disposed on an inner face of the housing and including a plurality of sensing units; and a processor positioned within the housing and electrically connected to the sensor module. Each of the plurality of sensing units is electrically connected to an adjacent sensing unit among the plurality of sensing units, and includes a central portion and a plurality of peripheral portions connected to a partial area of the central portion and arranged around the central portion; each of the central portion and the plurality of peripheral portions includes a touch sensor. Various other embodiments derived from this document are also possible.