B25J11/0015

SOCIALLY ASSISTIVE ROBOT

A companion robot is disclosed. In some embodiments, the companion robot may include a head having a facemask and a projector configured to project facial images onto the facemask; a facial camera; a microphone configured to receive audio signals from the environment; a speaker configured to output audio signals; and a processor electrically coupled with the projector, the facial camera, the microphone, and the speaker. In some embodiments, the processor may be configured to receive facial images from the facial camera; receive speech input from the microphone; determine an audio output based on the facial images and/or the speech input; determine a facial projection output based on the facial images and/or the speech input; output the audio output via the speaker; and project the facial projection output on the facemask via the projector.
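The processor pipeline described above (perceive face and speech, derive an audio reply and a projected expression) can be sketched as follows. The keyword-based classifiers here are illustrative placeholders; the disclosure does not specify how the outputs are determined, and any vision or speech model could stand in.

```python
from dataclasses import dataclass

@dataclass
class RobotResponse:
    audio: str               # text to synthesize on the speaker
    facial_projection: str   # expression image to project on the facemask

def determine_response(facial_cues, speech_input):
    """Map perceived user state to an audio output and a facial projection.

    `facial_cues` is a list of labels a hypothetical vision model extracted
    from the facial camera; `speech_input` is transcribed microphone audio.
    """
    mood = "happy" if "smile" in facial_cues else "neutral"
    if "hello" in speech_input.lower():
        audio = "Hello there!"
    else:
        audio = "I'm listening."
    projection = "smiling_face" if mood == "happy" else "neutral_face"
    return RobotResponse(audio=audio, facial_projection=projection)
```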

Robot having a head unit and a display unit

A robot according to the present disclosure may include: a casing that has an internal space; a head unit that protrudes upward from the casing and has a first display; a display unit that is disposed ahead of the casing and has a second display; an ascending and descending motor that is disposed in the casing; an ascending and descending plate that ascends and descends between a first position and a second position higher than the first position by power of the ascending and descending motor; a contact bar that has an upper end connected to the head unit and a lower end being in contact with the ascending and descending plate; a fixing plate that is positioned between the ascending and descending plate and the head unit and has an opening through which the contact bar passes; and a link that connects the ascending and descending plate and the fixing plate to the display unit.

SYSTEMS AND METHODS TO MANAGE CONVERSATION INTERACTIONS BETWEEN A USER AND A ROBOT COMPUTING DEVICE OR CONVERSATION AGENT

Exemplary implementations may: receive one or more inputs including parameters or measurements regarding a physical environment from the one or more input modalities; identify a user based on analyzing the received inputs from the one or more input modalities; determine if the user shows signs of engagement or interest in establishing a communication interaction by analyzing the user's physical actions, visual actions, and/or audio actions, which are determined based at least in part on the one or more inputs received from the one or more input modalities; and determine whether the user is interested in an extended communication interaction with the robot computing device by creating visual actions of the robot computing device utilizing the display device or by generating one or more audio files to be reproduced by one or more speakers.
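One way to combine the physical, visual, and audio cues above into an engagement decision is a weighted score against a threshold. The weights and threshold below are purely illustrative assumptions, not values from the disclosure.

```python
def engagement_score(gaze_on_robot, facing_robot, spoke_recently,
                     w_gaze=0.5, w_face=0.3, w_speech=0.2):
    """Weighted multimodal engagement estimate.

    Each cue is a value in [0, 1] derived from one input modality
    (vision for gaze and body orientation, audio for recent speech).
    """
    return (w_gaze * gaze_on_robot
            + w_face * facing_robot
            + w_speech * spoke_recently)

def is_engaged(score, threshold=0.6):
    """Decide whether to initiate an extended communication interaction."""
    return score >= threshold
```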

CONTROL DEVICE, CONTROL METHOD, AND CONTROL SYSTEM

A control device that controls a robot capable of self-propelling in a facility is provided. The control device comprises: a visitor identifying unit configured to identify a location of a visitor in the facility; a robot identifying unit configured to identify a position of the robot; an instruction unit configured to instruct the robot to capture an image of the visitor in a case where the robot is located in a predetermined range near the visitor; an estimation unit configured to estimate an emotion of the visitor, based on the image that has been captured by the robot; and a control unit configured to control whether the robot stays within the predetermined range or moves out of the predetermined range in accordance with the emotion of the visitor.
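The stay-or-leave control described in this abstract reduces to a mapping from the estimated emotion to a movement decision. The emotion categories and the default behavior below are assumptions for illustration only.

```python
def control_action(emotion):
    """Decide whether the robot stays within the predetermined range
    near the visitor or moves out of it, given an estimated emotion."""
    stay_emotions = {"happy", "curious", "neutral"}
    leave_emotions = {"annoyed", "fearful", "angry"}
    if emotion in stay_emotions:
        return "stay"
    if emotion in leave_emotions:
        return "move_out"
    return "stay"  # assumed default: remain in range when uncertain
```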

Wide-Field-of-View Anti-Shake High-Dynamic Bionic Eye

The present application discloses a wide-field-of-view anti-shake high-dynamic bionic eye. A trajectory tracking method based on a bionic eye robot includes: establishing a linear model according to a bionic eye robot; establishing a full state feedback control system on the basis of the linear model; in the full state feedback control system, acquiring an angle and an angular acceleration required for a joint in a target tracking process of the bionic eye on the basis of a preset trajectory expectation value and a preset joint angle expectation value; the method further includes: adopting a linear quadratic regulator (LQR) to calculate a parameter K in the full state feedback control system, and minimizing energy consumption by establishing an energy function, so as to optimize the coordinated head-eye motion control of the linear bionic eye. The present application achieves the optimal control of the target tracking.
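For a feel of the LQR gain computation mentioned above, here is a minimal discrete-time version for a scalar joint model x[k+1] = a·x[k] + b·u[k] with cost Σ(q·x² + r·u²), iterating the Riccati recursion to a fixed point. This is a generic textbook sketch, not the specific multi-joint model of the bionic eye.

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Iterate the discrete algebraic Riccati equation for a scalar
    system and return the state-feedback gain k, so that u = -k * x."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)   # optimal gain at this p
        p = q + a * p * a - a * p * b * k   # Riccati update
    return k
```

With a = b = q = r = 1 the steady-state Riccati solution is the golden ratio, so the gain converges to 1/φ ≈ 0.618; the closed loop a − b·k ≈ 0.382 is stable.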

Device for recognizing voice content, server connected thereto, and method for recognizing voice content
11450326 · 2022-09-20

An artificial intelligence (AI) device, such as a robot, comprises: an output interface to output content in response to a request of a user; a camera to acquire an image of the user; a microphone to acquire a voice signal including a voice content uttered by the user; a processor to determine a characteristic of the user based on the content, the image, and/or the voice signal, and recognize the voice content through a voice recognition mode corresponding to the determined characteristic. The AI device may include a communication interface to forward the voice signal to a remote computer that identifies the characteristic and recognizes the voice content based on the characteristic. According to an embodiment, when an irregular voice is recognized from the acquired voice signal, the processor may recognize a regular voice corresponding to the irregular voice using an artificial intelligence-based learning model.
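The mode selection described above (characteristic → voice recognition mode) can be sketched as a simple dispatch. The characteristic categories and mode names here are hypothetical; the patent does not enumerate them.

```python
def select_voice_mode(age_group, content_type):
    """Pick a voice recognition mode matching the user characteristic
    inferred from the content, image, and/or voice signal."""
    if age_group == "child":
        return "child_speech_model"      # tuned for irregular child speech
    if content_type == "music":
        return "noisy_environment_model" # content playing in background
    return "general_model"
```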

Moving body
11419193 · 2022-08-16

A moving body configured to move autonomously includes: a light emitting device configured to emit light in radial directions with respect to a vertical axis of the moving body; an acquisition unit configured to acquire a moving direction of the moving body; and a light-emitting control unit configured to make the light emitting device emit light using luminous colors previously associated with each direction of the radial directions with reference to the moving direction acquired by the acquisition unit, in which the light emitting device is installed so as to be able to emit light with at least one of the luminous colors within an arbitrary range of 180° in the radial directions.
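The color scheme described above associates luminous colors with radial directions measured relative to the moving direction. A minimal sketch, assuming an illustrative forward/side/rear palette and 90° sectors (neither is specified in the abstract):

```python
def color_for_direction(emit_angle_deg, moving_dir_deg):
    """Return the luminous color for an emitter pointing at
    emit_angle_deg, referenced to the acquired moving direction."""
    rel = (emit_angle_deg - moving_dir_deg) % 360
    if rel < 45 or rel >= 315:
        return "white"  # forward sector: direction of travel
    if rel < 135:
        return "amber"  # right side
    if rel < 225:
        return "red"    # rear
    return "amber"      # left side
```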

Autonomously acting robot that understands physical contact
11285614 · 2022-03-29

A planar touch sensor (an electrostatic capacitance sensor) for detecting a contact by a user on a body surface is installed on a robot. Pleasantness and unpleasantness are determined in accordance with a place of contact and a strength of contact on the touch sensor. Behavioral characteristics of the robot change in accordance with a determination result. Familiarity with respect to the user who touches changes in accordance with the pleasantness or unpleasantness. The robot has a curved form and has a soft body.
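The pleasantness determination and familiarity update described above can be sketched as two small functions. The body zones, strength threshold, and update step are illustrative assumptions.

```python
def touch_valence(location, strength):
    """Classify a touch as pleasant or unpleasant from the place of
    contact and the contact strength (strength normalized to [0, 1])."""
    pleasant_zones = {"head", "back"}
    if strength > 0.8:
        return "unpleasant"  # too strong anywhere on the body
    return "pleasant" if location in pleasant_zones else "neutral"

def update_familiarity(familiarity, valence, step=0.05):
    """Raise or lower familiarity toward the touching user, clamped
    to [0, 1], in accordance with the pleasantness determination."""
    delta = {"pleasant": step, "unpleasant": -step, "neutral": 0.0}[valence]
    return min(1.0, max(0.0, familiarity + delta))
```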

Interaction system, apparatus, and non-transitory computer readable storage medium
11276420 · 2022-03-15

An interaction system that provides an interaction interface comprising: a computer and a device that obtains information, wherein the computer stores information for managing data in which a type of a second feature value is associated with a listening pattern that defines a tendency of a response action performed by the interaction interface on a user; calculates a first feature value that is an index for evaluating a change in state during speech by the user on the basis of information obtained by the device; calculates second feature values on the basis of the first feature value; selects a target second feature value from among the second feature values; selects the listening pattern corresponding to the target second feature value; and generates output information for controlling the interaction interface on the basis of the selected listening pattern.
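The chain described above (first feature value → second feature values → target selection → listening pattern) can be sketched as follows. The specific feature names, aggregates, and pattern mapping are invented for illustration; the abstract leaves them open.

```python
def select_listening_pattern(first_feature_values):
    """From per-frame first feature values (e.g. a speech-state index),
    compute second feature values, pick the dominant one, and return
    the listening pattern associated with its type."""
    # second feature values: simple aggregates of the first feature value
    second = {
        "speech_rate_change": sum(first_feature_values) / len(first_feature_values),
        "pause_increase": max(first_feature_values) - min(first_feature_values),
    }
    target = max(second, key=second.get)  # dominant second feature value
    patterns = {
        "speech_rate_change": "backchannel_often",  # nod/acknowledge more
        "pause_increase": "wait_silently",          # give the user space
    }
    return patterns[target]
```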

Robot, robot control method, and storage medium

In a robot, an actuator causes the robot to operate. A processor is configured to acquire, when a holding portion is held by a predetermined target, physical information on a physical function of the predetermined target, and cause, by controlling the actuator depending on the acquired physical information, the robot to perform at least one of an examination operation for examining the physical function of the predetermined target and a training support operation for training the physical function of the predetermined target.
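The choice between the examination operation and the training support operation could be driven by comparing the acquired physical information against a stored baseline, as in this sketch. The grip-strength measure and the 80% threshold are assumptions for illustration.

```python
def choose_operation(grip_strength, baseline):
    """Select the robot's operation from physical information acquired
    while the holding portion is held by the target."""
    if baseline is None:
        return "examination"       # no record yet: measure the function first
    if grip_strength < 0.8 * baseline:
        return "training_support"  # function weakened: support training
    return "examination"           # periodic re-examination
```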