Patent classifications
B25J11/0015
ROBOT REACTING ON BASIS OF USER BEHAVIOR AND CONTROL METHOD THEREFOR
A robot that outputs various reactions according to user behaviors is disclosed. A control method for a robot using an artificial intelligence model, according to the present disclosure, comprises the steps of: acquiring data related to at least one user; inputting the data related to the at least one user into the artificial intelligence model as learning data so as to learn a user state for each of the at least one user; determining representative reactions corresponding to the user states learned on the basis of the data related to the at least one user; and, when input data related to a first user among the at least one user is acquired, inputting the input data into the artificial intelligence model so as to determine a user state of the first user and controlling the robot on the basis of a representative reaction corresponding to the determined user state.
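The claimed flow can be sketched as follows. This is a minimal illustration only; the class and the reaction table (`UserStateModel`, `REACTIONS`) are invented stand-ins, not the patent's actual model.

```python
# Hypothetical sketch: learn per-user states from labeled observations,
# map each state to a representative reaction, then react to new input.

REACTIONS = {
    "happy": "wave",
    "tired": "dim_lights",
    "unknown": "idle",
}

class UserStateModel:
    """Stand-in for the trained AI model: counts labeled observations
    per user and predicts the most frequent state."""
    def __init__(self):
        self.history = {}  # user_id -> {state: count}

    def learn(self, user_id, state):
        counts = self.history.setdefault(user_id, {})
        counts[state] = counts.get(state, 0) + 1

    def predict(self, user_id):
        counts = self.history.get(user_id)
        if not counts:
            return "unknown"
        return max(counts, key=counts.get)

def control_robot(model, user_id):
    """Determine the user state and return the representative reaction."""
    state = model.predict(user_id)
    return REACTIONS.get(state, REACTIONS["unknown"])
```

A user with mostly "happy" observations would thus trigger the "wave" reaction, while an unseen user falls back to the idle reaction.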
Method and server for controlling interaction robot
A control method of an interaction robot according to an embodiment of the present invention comprises the steps of: receiving, by the interaction robot, a user input; determining, by the interaction robot, a robot response corresponding to the received user input; and outputting, by the interaction robot, the determined robot response, wherein the step of outputting the determined robot response includes the steps of: outputting, through a light emitting unit, a color matching the received user input or the determined robot response; and outputting, through at least one of a first driving unit and a second driving unit, a motion matching the received user input or the determined robot response.
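A toy version of this response pipeline might look like the following; the lookup table, color names, and driving-unit labels are all illustrative assumptions.

```python
# Illustrative sketch of the claimed flow: receive input, pick a response,
# then emit a matching LED color and a matching motion through one or both
# driving units.

RESPONSE_TABLE = {
    # user input -> (robot response, LED color, driving units to actuate)
    "hello": ("greeting", "green", ["first"]),
    "bad news": ("consoling", "blue", ["first", "second"]),
}

def respond(user_input):
    response, color, units = RESPONSE_TABLE.get(
        user_input, ("neutral", "white", ["first"]))
    outputs = {"response": response, "led": color}
    for unit in units:  # motion routed to one or more driving units
        outputs[f"{unit}_drive"] = f"motion_for_{response}"
    return outputs
```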
Modular robotic vehicle
A modular robotic vehicle (MRV) has a modular chassis configured for two-wheel, four-wheel, six-wheel, or eight-wheel steering controlled by a semiautonomous system or an autonomous driving system. Either system is associated with operating modes that may include a two-wheel steering mode, an all-wheel steering mode, a traverse steering mode, a park mode, or an omni-directional mode used for steering sideways, driving diagonally, or moving crab-like. During semiautonomous control, a driver of the modular robotic vehicle may use smart I/O devices, including a smartphone, a tablet-like device, or a control panel, to select a preferred driving mode. The driver may communicate navigation instructions via the smart I/O devices to control steering, speed, and placement of the MRV with respect to the operating mode. GPS and a wireless network provide navigation instructions during autonomous operation involving driving, parking, docking, or connecting to another MRV.
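One way to picture the operating modes is as a mapping from mode name to per-wheel steering angles. The sketch below assumes a four-wheel chassis; the mode names follow the abstract, but the angle conventions are invented for illustration.

```python
# Minimal sketch: map an operating mode and a commanded steering angle
# (degrees) onto per-wheel angles for a four-wheel chassis.
# Wheel order: [front-left, front-right, rear-left, rear-right].

def wheel_angles(mode, command_angle=0.0):
    front, rear = [0, 1], [2, 3]
    angles = [0.0] * 4
    if mode == "two_wheel":          # only the front pair steers
        for i in front:
            angles[i] = command_angle
    elif mode == "all_wheel":        # rear pair counter-steers
        for i in front:
            angles[i] = command_angle
        for i in rear:
            angles[i] = -command_angle
    elif mode == "traverse":         # all wheels at 90 deg: drive sideways
        angles = [90.0] * 4
    elif mode == "omni":             # all wheels aligned: diagonal/crab moves
        angles = [command_angle] * 4
    elif mode == "park":
        angles = [0.0] * 4
    else:
        raise ValueError(f"unknown mode: {mode}")
    return angles
```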
EMOTIONAL INTELLIGENT ROBOTIC PILOT
A system for providing pilot or controller support in a reduced crew transport environment may include a set of sensors configured to collect a set of measurements of a set of physiological signals of a user. The system may further include an emotional analyzer model usable to generate at least one emotional parameter value corresponding to the set of measurements. The system may also include a natural language processing model usable to generate a natural language statement corresponding to the at least one emotional parameter value for output to the pilot.
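The two-stage pipeline (emotional analyzer, then natural language model) could be mocked up as below. The stress formula, signal ranges, and thresholds are entirely invented; a real emotional analyzer would be a learned model over many physiological signals.

```python
# Hypothetical pipeline sketch: sensor measurements -> emotional parameter
# -> natural language statement for the pilot or controller.

def stress_level(heart_rate, skin_conductance):
    """Toy 'emotional analyzer': combine two physiological signals
    into a single parameter in [0, 1]."""
    hr = min(max((heart_rate - 60) / 60, 0.0), 1.0)   # assume 60-120 bpm span
    sc = min(max(skin_conductance / 20.0, 0.0), 1.0)  # assume 0-20 microsiemens
    return 0.5 * hr + 0.5 * sc

def statement(level):
    """Toy 'natural language processing model': verbalize the parameter."""
    if level > 0.7:
        return "Pilot stress is high; suggest handing off radio tasks."
    if level > 0.4:
        return "Pilot workload appears elevated."
    return "Pilot state nominal."
```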
EXPRESSION-VARIABLE ROBOT
To provide an expression-variable robot 100 capable of stably showing a variety of expressions with a simple configuration, the expression-variable robot 100 comprises a simulated face 10 and a flexible eyebrow body 40 disposed on the surface of the simulated face 10, and further includes a first support part 50 that supports an outer end 41 of the eyebrow body 40, a second support part 60 that supports an inner end 42 of the eyebrow body 40, and drive parts 81, 82 that rotate the outer end 41.
Robot attention detection
A robot that uses sensor inputs for attention activation, and corresponding methods, systems, and computer programs encoded on computer storage media. The robot can be configured to compute a plurality of attention signals from sensor inputs and provide the plurality of attention signals as input to an attention level classifier to generate an attention level. If a user is paying attention to the robot according to the generated attention level, the robot selects a behavior to execute based on the current attention level, wherein a behavior comprises one or more coordinated actions to be performed by the robot.
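The attention pipeline can be sketched as a weighted combination of signals gating behavior selection; the chosen signals, weights, and behavior names are assumptions for illustration, not the patented classifier.

```python
# Illustrative sketch: several sensor-derived attention signals are combined
# into an attention level, which both gates and selects a behavior.

def attention_level(gaze_on_robot, face_distance_m, spoke_recently):
    """Toy attention-level classifier over three attention signals."""
    score = 0.0
    score += 0.5 if gaze_on_robot else 0.0
    score += 0.3 if face_distance_m < 1.5 else 0.0
    score += 0.2 if spoke_recently else 0.0
    return score  # 0.0 (ignoring) .. 1.0 (fully attentive)

def select_behavior(level, threshold=0.5):
    """Pick a coordinated-action behavior only if the user is attentive;
    the behavior chosen depends on the current attention level."""
    if level < threshold:
        return None
    return "greet" if level < 0.8 else "engage_dialogue"
```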
System and method for dynamic robot profile configurations based on user interactions
The present teaching relates to a method, system, medium, and implementations for configuring an animatronic device. Information is obtained about a user for whom an animatronic device is to be configured to carry out a dialogue. One or more preferences of the user are identified from the obtained information and are used to select, from a plurality of selectable profiles, a first selected profile, which specifies parameters used to control the manner in which the animatronic device is to communicate with the user. The animatronic device is then configured based on the first selected profile to carry out the dialogue in the manner specified.
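Profile selection of this kind might be sketched as a best-overlap match between user preferences and profile tags; the profile fields and matching rule here are invented for illustration.

```python
# Minimal sketch: select a dialogue profile from user preferences and
# configure the device with the parameters that profile specifies.

PROFILES = [
    {"name": "playful", "tags": {"kids", "games"}, "speech_rate": 1.2},
    {"name": "formal",  "tags": {"news", "work"},  "speech_rate": 0.9},
]

def select_profile(preferences):
    """Return the profile whose tags overlap the user's preferences most."""
    return max(PROFILES, key=lambda p: len(p["tags"] & preferences))

def configure(device, preferences):
    profile = select_profile(preferences)
    device["speech_rate"] = profile["speech_rate"]  # communication manner
    device["profile"] = profile["name"]
    return device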
EYE CONTACT SENSING AND CONTROL FOR ROBOTIC CHARACTERS
A system for sensing and controlling eye contact for a robot. The system includes a robotic figure with a movable eye and a light source positioned in the robotic figure to output light through a light outlet of the eye. A light sensor senses light striking surfaces in the physical space in which the robotic figure is positioned, including the output light from the light source. An image processor processes the output of the light sensor to identify the location of a target formed by the output light striking surfaces in the physical space and to identify the location of a face of a human observer. Further, the system includes a robot controller that generates eye movement control signals, based on the location of the target and the location of the face, to position the eye to provide eye contact with the human observer.
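The control idea is that the sensed light target shows where the eye is actually pointing, so the controller steers until the target coincides with the detected face. A simple proportional correction in image coordinates might look like this; the gain and pixel conventions are assumptions.

```python
# Sketch of closed-loop eye-contact control: convert the pixel error between
# the projected light target and the detected face into pan/tilt adjustments.

def eye_correction(target_px, face_px, gain=0.01):
    """Proportional control: (pan, tilt) adjustment in radians from the
    pixel offset between where the eye points and where the face is."""
    dx = face_px[0] - target_px[0]
    dy = face_px[1] - target_px[1]
    return (gain * dx, gain * dy)
```

When the target lands on the face the error is zero and the eye holds its pose, maintaining eye contact.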
ROBOT AND METHOD FOR CONTROLLING THE SAME
A robot according to the present disclosure comprises: a microphone; a camera disposed to face a predetermined direction; and a processor configured to: deactivate driving of the camera and activate driving of the microphone when a driving mode of the robot is set to a user monitoring mode; acquire a sound signal through the microphone; activate the driving of the camera based on an event estimated from the acquired sound signal; confirm the event from an image acquired through the camera; and control at least one component included in the robot to perform an operation based on the confirmed event.
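The monitoring-mode flow (camera off, microphone on, sound-estimated event wakes the camera for confirmation) can be sketched as below; the event table and the image-confirmation stand-in are illustrative assumptions.

```python
# Hypothetical sketch of the monitoring-mode flow described above.

SOUND_EVENTS = {"glass_break": "intrusion", "crying": "child_distress"}

class Robot:
    def __init__(self):
        self.camera_on = False
        self.mic_on = False

    def set_monitoring_mode(self):
        self.camera_on = False   # deactivate camera driving
        self.mic_on = True       # keep listening through the microphone

    def on_sound(self, sound_label):
        event = SOUND_EVENTS.get(sound_label)
        if event is None:
            return None          # no event estimated; camera stays as-is
        self.camera_on = True    # wake the camera to confirm the event
        return self.confirm(event)

    def confirm(self, event):
        # Stand-in for image-based confirmation of the estimated event.
        return f"confirmed:{event}" if self.camera_on else None
```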
VIRTUAL ROBOT IMAGE PRESENTATION METHOD AND APPARATUS
A virtual robot image presentation method and apparatus are provided to improve virtual robot utilization and user experience. In this method, an electronic device generates and presents a first virtual robot image. The first virtual robot image is determined by the electronic device based on scene information. The scene information includes at least one of first information and second information, where the first information represents a current time attribute and the second information represents the type of an application currently running in the electronic device. With this method, virtual robot images in a human-machine interaction process can be richer and more vivid, improving the user experience and thereby the user's utilization of the virtual robot.
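The selection logic over the two pieces of scene information might be sketched like this; the image names and precedence rules are invented for illustration.

```python
# Illustrative sketch: choose a virtual robot image from scene information,
# i.e. the current time attribute and the type of the running application.

def robot_image(hour, app_type):
    """First information: hour of day (0-23); second information: the
    category of the currently running application."""
    if app_type == "music":
        return "dancing_robot"
    if app_type == "reading":
        return "quiet_robot"
    if hour >= 22 or hour < 6:
        return "sleepy_robot"      # night-time default
    return "default_robot"
```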