Patent classifications
B25J11/001
APPARATUS AND METHOD FOR GENERATING ROBOT INTERACTION BEHAVIOR
Disclosed herein are an apparatus and method for generating robot interaction behavior. The method for generating robot interaction behavior includes generating a co-speech gesture of a robot corresponding to an utterance input of a user; generating a nonverbal behavior of the robot, which is a sequence of next joint positions of the robot estimated from joint positions of the user and current joint positions of the robot based on a pre-trained neural network model for robot pose estimation; and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.
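The estimation and combination steps above can be sketched in Python. The single tanh layer standing in for the pre-trained network, the blending rule, and all function names are illustrative assumptions, not the patent's actual model.

```python
import numpy as np

def estimate_next_joints(user_joints, robot_joints, weights, bias):
    """Stand-in for the pre-trained pose-estimation network: map the user's
    joint positions and the robot's current joint positions to a sequence
    element of the robot's next joint positions."""
    features = np.concatenate([user_joints, robot_joints])
    return np.tanh(weights @ features + bias)

def final_behavior(co_speech_gesture, nonverbal_behavior, blend=0.5):
    """One plausible rule for producing the final behavior from at least
    one of the two behavior sources (here, a convex blend)."""
    return blend * co_speech_gesture + (1.0 - blend) * nonverbal_behavior
```

With `blend=1.0` the final behavior reduces to the co-speech gesture alone, matching the "at least one of" wording.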
System and method for controlling intelligent animated characters
A system and method for controlling animated characters. The system involves an animated character mapping perception into a number of domains, including a world domain, a linguistic domain, and a social domain. The system computationally perceives items that should be abstracted from the character's environment for processing. The animated character is able to utilize different levels of information gathering or learning, different levels of decision making, and different levels of dynamic responses to provide life-like interactions.
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM
Provided is an information processing apparatus and an information processing method, and an information processing system, each of which provides a service related to a broadcast-type moving picture content.
An information processing apparatus includes: a receiving unit that receives, from a first device, a notice of data including a location or acquisition method of information suggesting an action of a virtual character, the action corresponding to an event that occurs in a broadcast-type moving picture content; an acquisition unit that acquires the information on the basis of the data issued as a notice from the first device; a display unit capable of two-dimensional or three-dimensional display; and a control unit that controls driving of the virtual character, which is to be displayed using the display unit, on the basis of the information acquired by the acquisition unit.
Data processing method for care-giving robot and apparatus
A data processing method and apparatus for a care-giving robot. The method comprises receiving data from a target object, the data comprising a capability parameter of the target object; generating a growing model capability parameter matrix of the target object that includes the capability parameter, a capability parameter adjustment value, and a comprehensive capability parameter calculated based on the capability parameter; adjusting the capability parameter adjustment value in the growing model capability parameter matrix to determine an adjusted capability parameter adjustment value; determining whether the adjusted capability parameter adjustment value exceeds a preset threshold; and sending the adjusted capability parameter adjustment value to a machine learning engine when the adjusted capability parameter adjustment value is within a range of the preset threshold.
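The threshold-gating step at the end of that method can be sketched as follows; the function name, the dictionary return shape, and the symmetric-bound interpretation of "within a range of the preset threshold" are all assumptions for illustration.

```python
def forward_if_within_threshold(adjusted_values, threshold):
    """Send the adjusted capability parameter adjustment values onward to
    the machine-learning engine only when every value stays within the
    preset threshold; otherwise withhold them."""
    if all(abs(v) <= threshold for v in adjusted_values):
        return {"send_to_engine": True, "values": adjusted_values}
    return {"send_to_engine": False, "values": None}
```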
Virtual-life-based human-machine interaction methods, apparatuses, and electronic devices
Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage media, for virtual-life-based human-machine interaction. One of the methods includes obtaining cognitive data for a target user in response to a human-machine interaction from the target user by performing perception analysis on the target user. Target response content is identified based on the cognitive data and sent to the target user. A virtual interactive animation that comprises a virtual life image is dynamically generated based on the cognitive data and the target response content, where the virtual life image has an animation effect that matches the human-machine interaction performed by the target user.
ARTIFICIAL INTELLIGENCE (AI) ROBOT AND CONTROL METHOD THEREOF
Disclosed is a method of controlling a robot, comprising: switching to a surrounding environment concentration mode according to a sound of the surrounding environment in a display off mode; searching for a user in the surrounding environment concentration mode and switching to a user concentration mode when the user is found; switching from the user concentration mode to a user conversation mode according to a sound received from the user; and entering the display off mode again, after passing through a play alone mode, when the user is not found in the surrounding environment concentration mode. Accordingly, by setting various modes, the robot can operate in an optimal mode according to changes in the surrounding environment.
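The mode sequence described above forms a small state machine, which can be sketched as a transition function; the mode names and exact triggers here are a plausible reading of the abstract, not the patent's literal specification.

```python
def next_mode(mode, env_sound=False, user_found=False, user_sound=False):
    """Return the robot's next mode given its current mode and events."""
    if mode == "display_off" and env_sound:
        return "environment_concentration"
    if mode == "environment_concentration":
        # Found user: concentrate on them; otherwise play alone first.
        return "user_concentration" if user_found else "play_alone"
    if mode == "user_concentration" and user_sound:
        return "user_conversation"
    if mode == "play_alone":
        return "display_off"  # re-enter display off after playing alone
    return mode
```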
Goal-based robot animation
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing goal-based robot animation. One system includes a robot configured to receive a goal that specifies a goal state to be attained by the robot or one of its components. A collection of animation tracks is searched to identify one or more animation tracks that, when executed by the robot, cause the robot to perform one or more physical movements to satisfy the goal state. The identified animation tracks are executed to perform the physical movements that satisfy the received goal state.
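The track-search step can be sketched as a filter over a track collection; the `AnimationTrack` structure and the end-state matching criterion are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AnimationTrack:
    name: str
    end_state: dict  # component name -> state reached when the track finishes

def tracks_satisfying(goal_state, tracks):
    """Return the tracks whose end state meets every component of the goal,
    i.e. tracks whose execution would attain the goal state."""
    return [t for t in tracks
            if all(t.end_state.get(k) == v for k, v in goal_state.items())]
```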
Autonomously acting robot that wears clothes
An aspect of the invention provides technology for changing behavioral characteristics when clothing is put on a robot. An autonomously acting robot includes an operation control unit that controls an operation of the robot, a drive mechanism that executes an operation specified by the operation control unit, and an equipment detecting unit that detects clothing worn by the robot. The robot refers to action restriction information correlated in advance to the clothing, and regulates an output of the drive mechanism.
Autonomously acting robot exhibiting shyness
Empathy toward a robot is increased by the robot emulating human-like or animal-like behavior. A robot includes a movement determining unit that determines a direction of movement, a drive mechanism that executes a specified movement, and a familiarity managing unit that updates familiarity with respect to a moving object. The robot moves away from a user with low familiarity, and approaches a user with high familiarity. Familiarity changes in accordance with a depth of involvement between a user and the robot.
NONVERBAL INFORMATION GENERATION APPARATUS, NONVERBAL INFORMATION GENERATION MODEL LEARNING APPARATUS, METHODS, AND PROGRAMS
A nonverbal information generation apparatus includes a nonverbal information generation unit that generates nonverbal information corresponding to feature quantities of voice or text, on the basis of the feature quantities and a learned nonverbal information generation model. The nonverbal information is information for controlling an expression unit that expresses behavior so that at least one of the number of times the behavior is performed and the magnitude of the behavior corresponds to the feature quantities.
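The mapping from feature quantities to the two behavior controls (repetition count and magnitude) can be sketched as below; the clamping and scaling rule is a hypothetical stand-in for the learned model.

```python
def nonverbal_controls(feature_quantity, max_repeats=3):
    """Map a scalar feature quantity derived from voice or text to two
    behavior controls: how many times the behavior is performed and
    how large it is (hypothetical rule, not the learned model)."""
    magnitude = max(0.0, min(1.0, feature_quantity))  # clamp to [0, 1]
    repeats = 1 + int(magnitude * (max_repeats - 1))  # perform at least once
    return repeats, magnitude
```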