Patent classifications
B25J11/0015
Socially assistive robot
A companion robot is disclosed. In some embodiments, the companion robot may include a head having a facemask and a projector configured to project facial images onto the facemask; a facial camera; a microphone configured to receive audio signals from the environment; a speaker configured to output audio signals; and a processor electrically coupled with the projector, the facial camera, the microphone, and the speaker. In some embodiments, the processor may be configured to receive facial images from the facial camera; receive speech input from the microphone; determine an audio output based on the facial images and/or the speech input; determine a facial projection output based on the facial images and/or the speech input; output the audio output via the speaker; and project the facial projection output on the facemask via the projector.
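The sense-decide-act loop described above can be sketched as follows. This is an illustrative sketch only: the `Percept` structure, the `decide` policy, and the list-based stand-ins for the speaker and projector are assumptions, not the patent's actual interfaces.

```python
# Hypothetical sketch of the abstract's processing loop: camera and
# microphone inputs drive both an audio output and a facial projection.
from dataclasses import dataclass

@dataclass
class Percept:
    face_image: bytes    # frame from the facial camera (placeholder)
    speech: str          # transcribed input from the microphone

def decide(percept: Percept) -> tuple[str, str]:
    """Map sensed inputs to (audio_output, facial_projection) outputs."""
    if "hello" in percept.speech.lower():
        return ("Hello there!", "smile")
    return ("I'm listening.", "neutral")

def step(percept: Percept, speaker: list, projector: list) -> None:
    # speaker/projector lists stand in for speaker.play() / projector.project()
    audio, face = decide(percept)
    speaker.append(audio)
    projector.append(face)
```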
INFINITE ROBOT PERSONALITIES
Aspects of the present disclosure generally relate to providing a large variety of robot personalities. In certain aspects, a robot personality may be represented as a personality location in a personality space, which may be a continuous unidimensional or multidimensional space. The dimensions of the personality space may be based on one or more factors. Based on the personality location, an affective state may be maintained for the robot, which may be represented as an affect location in an affect space. The affect location may be updated based on one or more inputs. Accordingly, robot expressions may be influenced by the affect location, which in turn is shaped by the robot's personality location in the personality space.
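One way to read the abstract is that the personality location modulates how strongly inputs move the affect location. A minimal sketch under that assumption (the dimension names, update rule, and reactivity gain are all hypothetical, not from the disclosure):

```python
# Assumed model: personality and affect are points in the same N-dimensional
# space; each input (stimulus) pulls the affect location toward it, scaled
# per dimension by the personality location.
def update_affect(affect, personality, stimulus, reactivity=0.5):
    """Move each affect dimension toward the stimulus; a larger personality
    coordinate means a stronger reaction on that dimension."""
    return [a + reactivity * p * (s - a)
            for a, p, s in zip(affect, personality, stimulus)]

neutral = [0.0, 0.0]     # affect location, e.g. (valence, arousal) - hypothetical axes
calm    = [0.2, 0.2]     # personality location with low reactivity
lively  = [1.0, 1.0]     # personality location with high reactivity
```

Because the space is continuous, every distinct personality location yields a distinct affect trajectory for the same inputs, which is one reading of "infinite robot personalities."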
Biomimetic humanoid robotic model, control system, and simulation process
A biomimetics-based robot and simulation process is disclosed. The robot may include filament-driven and fluid-pumped elastomer-based artificial muscles coordinated for slow-twitch/fast-twitch contraction and movement of the robot by one or more microcontrollers. A process may provide physics-based simulation for movement of a robot in a virtual setting. Successfully tested movement data may be stored and embedded into a robot at build and/or before a new movement is programmed into the robot. Some embodiments include an artificial skin system supporting the artificial muscles.
EXPRESSION FEEDBACK METHOD AND SMART ROBOT
An expression feedback method and a smart robot, belonging to the field of smart devices. The method comprises: step S1, the smart robot collecting image information using an image collection apparatus; step S2, the smart robot detecting whether human face information representing a human face exists in the image information; if so, acquiring position information and size information associated with the human face information and proceeding to step S3; if not, returning to step S1; step S3, predicting and outputting a plurality of pieces of feature point information in the human face information according to the position information and the size information; and step S4, using a first identification model formed through pre-training to determine, according to the feature point information, whether the human face information represents a smiling face; if so, the smart robot outputting preset expression feedback information and then exiting; if not, exiting directly. The beneficial effect of the method is that it enriches the information interaction content between the smart robot and a user, thereby improving the user's experience.
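Steps S1 through S4 form a straightforward control flow, sketched below. The `detect_face`, `predict_landmarks`, and `is_smile` callables stand in for the patent's image collection apparatus, feature-point prediction, and pre-trained first identification model; none of these names come from the disclosure.

```python
# Sketch of steps S1-S4: collect frames, detect a face, predict feature
# points, classify smile/non-smile, and emit preset feedback on a smile.
def expression_feedback(frames, detect_face, predict_landmarks, is_smile):
    feedback = []
    for frame in frames:                          # S1: collect image information
        face = detect_face(frame)                 # S2: detect face information
        if face is None:
            continue                              # no face: return to S1 (next frame)
        position, size = face
        landmarks = predict_landmarks(frame, position, size)   # S3: feature points
        if is_smile(landmarks):                   # S4: first identification model
            feedback.append("preset_expression_feedback")
    return feedback
```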
AUTONOMOUS COMPANION MOBILE ROBOT AND SYSTEM
An autonomous companion mobile robot and system may complement a user's own intelligence with machine-learned intelligence to make the user's life more fulfilling. The robot and system include a mobile robotic device and a mobile robotic docking station. Either or both may operate independently, or together as a system. The mobile robotic device may have the external form of a three-dimensional shape, a humanoid, a present or historical person, a fictional character, or an animal. The mobile robotic device and/or the mobile robotic docking station may each include a fog Internet of Things (IoT) gateway processor and a plurality of sensors and input/output devices. The autonomous companion mobile robot and system may collect data from and observe its users, and may offer suggestions, perform tasks, and present information to them.
Integrated System Design For A Mobile Manipulation Robot With Socially Expressive Abilities
Various embodiments of the present technology generally relate to robotics. More specifically, some embodiments of the present technology relate to an integrated system design for a mobile manipulation robot with socially expressive abilities. Some embodiments provide for a robot comprising a socially expressive head unit. The head can have at least two degrees of freedom created by a motor with a planetary gear box and a servo. The motor can be connected to a shell via a support that allows the shell to tilt up and down upon activation of the motor. The shell can include a camera housing configured to receive a camera, which can be attached to the support. The motor can be mounted on a rotatable shaft controlled by a servo, causing the head unit to pan.
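The two degrees of freedom described above (servo pan, geared-motor tilt) can be sketched as a simple setpoint command. The joint limits and command format here are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical pan/tilt command for the head unit: the servo pans about
# a vertical axis; the motor with planetary gear box tilts the shell.
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def head_command(pan_deg, tilt_deg):
    """Return clamped (pan, tilt) setpoints for the servo and geared motor."""
    return (clamp(pan_deg, -90.0, 90.0),    # servo: pan (assumed limits)
            clamp(tilt_deg, -30.0, 45.0))   # geared motor: tilt (assumed limits)
```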
Robot, method for operating the same, and server connected thereto
A method of operating a robot includes detecting movement of a video call counterpart from image data received from the video call counterpart's robot; canceling movement of the user from the detected movement of the video call counterpart; and determining a motion corresponding to the canceled movement of the video call counterpart.
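Under a simple vector reading of the claim, the counterpart's apparent movement in the received image data also contains the local user's own movement (camera ego-motion), so subtracting the user's movement leaves the motion the robot should reproduce. The 2-D displacement model below is an assumption for illustration.

```python
# Sketch of the cancellation step: subtract the user's movement vector
# from the detected movement of the video call counterpart, then turn
# the remainder into a robot motion command.
def cancel_user_movement(counterpart_movement, user_movement):
    return [c - u for c, u in zip(counterpart_movement, user_movement)]

def motion_for_robot(counterpart_movement, user_movement):
    canceled = cancel_user_movement(counterpart_movement, user_movement)
    return {"dx": canceled[0], "dy": canceled[1]}   # hypothetical command format
```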
Eye contact sensing and control for robotic characters
A system for sensing and controlling eye contact for a robot. The system includes a robotic figure with a movable eye and a light source positioned in the robotic figure to output light through a light outlet of the eye. A light sensor senses light striking surfaces in the physical space in which the robotic figure is positioned, including the output light from the light source. An image processor processes the output of the light sensor to identify the location of a target formed by the output light striking surfaces in the physical space and the location of a human face. A robot controller then generates eye movement control signals based on the location of the target and the location of the face, positioning the eye to provide eye contact with the human observer.
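The control idea can be sketched as an error-driven correction: the sensed light target indicates where the eye is actually aimed, and the difference between that target and the detected face drives the eye toward eye contact. The proportional gain and the 2-D image coordinates are assumptions for illustration, not from the disclosure.

```python
# Sketch: proportional correction moving the eye's aim point (where the
# projected light target lands) toward the detected face location.
def eye_correction(target_xy, face_xy, gain=0.5):
    """Return an incremental (pan, tilt) command reducing the aim error."""
    return tuple(gain * (f - t) for t, f in zip(target_xy, face_xy))
```

When the target and face coincide, the correction is zero and the eye holds its gaze on the observer.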
ROBOT-AIDED SYSTEM AND METHOD FOR DIAGNOSIS OF AUTISM SPECTRUM DISORDER
The disclosed system uses facial expressions and upper body movement patterns to detect autism spectrum disorder. Emotionally expressive robots participate in sensory experiences by reacting to stimuli designed to resemble typical everyday experiences, such as uncontrolled sounds and light or tactile contact with different textures. The robot-child interactions elicit social engagement from the children, which is captured by a camera. A convolutional neural network, which has been trained to evaluate multimodal behavioral data collected during those robot-child interactions, identifies children that are at risk for autism spectrum disorder. Because the robot-assisted framework effectively engages the participants and models behaviors in ways that are easily interpreted by the participants, the disclosed system may also be used to teach children with autism spectrum disorder to communicate their feelings about discomforting sensory stimulation (as modeled by the robots) instead of allowing uncomfortable experiences to escalate into extreme negative reactions (e.g., tantrums or meltdowns).
SYSTEMS AND METHODS TO CONTROL AN ENTERTAINMENT FIGURE
An animated figure system includes an animated figure comprising a flexible skin layer, an actuating system coupled to a connection location of the flexible skin layer, and an automation controller. The automation controller is configured to access a digital model of the animated figure, in which the digital model comprises a vertex associated with the connection location, determine a first positioning of the vertex within the digital model, and control the actuating system to set a second positioning of the connection location based on the first positioning of the vertex.
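The controller's model-to-actuator mapping can be sketched as below. The dict-based digital model and the identity mapping from the vertex's first positioning to the actuator's second positioning are illustrative assumptions; the disclosure does not specify this data layout.

```python
# Hypothetical sketch: look up the digital-model vertex associated with a
# skin connection location, read its first positioning, and command the
# actuating system so the connection's second positioning matches it.
def actuate_connection(model, connection_id, actuators):
    vertex = model[connection_id]        # vertex associated with the connection location
    first = vertex["position"]           # first positioning within the digital model
    actuators[connection_id] = first     # set second positioning based on the first
    return first
```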