Patent classifications
B25J13/003
AUTONOMOUSLY MOTILE DEVICE WITH SPEECH COMMANDS
An autonomously motile device may be controlled by speech received by a user device. A first speech-processing system associated with the user device may determine that audio data includes a representation of a command; a second speech-processing system associated with the autonomously motile device may determine that the command should be executed by the autonomously motile device. A network connection is established between the user device and the autonomously motile device, and a device manager authorizes execution of the command.
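The abstract describes a two-stage decision (does the audio contain a command? should the motile device execute it?) followed by an authorization check by a device manager. The following is a minimal illustrative sketch of that control flow only, not the patented implementation; all names (`DeviceManager`, `route_command`, the keyword-matching stage) are hypothetical.

```python
class DeviceManager:
    """Hypothetical device manager: authorizes execution of a command
    on a target device on behalf of a user device."""
    def __init__(self, authorized_pairs):
        self.authorized_pairs = set(authorized_pairs)

    def authorize(self, user_device, target_device):
        return (user_device, target_device) in self.authorized_pairs


def route_command(utterance, command_vocab, motile_commands,
                  user_device, motile_device, manager):
    """Stage 1: determine whether the utterance contains a known command
    (stand-in for the first speech-processing system).
    Stage 2: decide whether the autonomously motile device should
    execute it (stand-in for the second system), then check that the
    device manager authorizes execution."""
    words = utterance.lower().split()
    command = next((w for w in words if w in command_vocab), None)
    if command is None:
        return None  # no command recognized in the audio
    target = motile_device if command in motile_commands else user_device
    if target is motile_device and not manager.authorize(user_device, motile_device):
        return None  # execution on the motile device not authorized
    return (command, target)
```

In this sketch, a real system would replace keyword matching with full speech recognition and natural-language understanding; only the routing and authorization structure mirrors the abstract.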
Multimodal sentiment detection
Described herein is a system for improving sentiment detection and/or recognition using multiple inputs. For example, an autonomously motile device is configured to generate audio data and/or image data and perform sentiment detection processing. The device may process the audio data and the image data using a multimodal temporal attention model to generate sentiment data that estimates a sentiment score and/or a sentiment category. In some examples, the device may also process language data (e.g., lexical information) using the multimodal temporal attention model. The device can adjust its operations based on the sentiment data. For example, the device may improve an interaction with the user by estimating the user's current emotional state, or can change a position of the device and/or sensor(s) of the device relative to the user to improve an accuracy of the sentiment data.
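The multimodal temporal attention model itself is not disclosed in the abstract. As a rough, hypothetical illustration of the idea (weighting per-timestep audio and image evidence by attention, then mapping the fused value to a sentiment score and category), a toy fusion might look like the following; the weights, thresholds, and function names are all assumptions for illustration only.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def temporal_attention_fusion(audio_feats, image_feats, w_audio=0.5, w_image=0.5):
    """Fuse per-timestep audio and image sentiment cues (each a scalar in
    roughly [-1, 1]) with a simple temporal attention: timesteps carrying
    stronger combined evidence receive higher weight. Returns a
    (score, category) pair. Purely illustrative, not the patented model."""
    combined = [w_audio * a + w_image * v for a, v in zip(audio_feats, image_feats)]
    # Attention weights: emphasize timesteps with large-magnitude evidence.
    attn = softmax([abs(c) for c in combined])
    score = sum(w * c for w, c in zip(attn, combined))
    category = ("positive" if score > 0.2
                else "negative" if score < -0.2
                else "neutral")
    return score, category
```

A learned model would replace both the fixed modality weights and the magnitude-based attention with trained parameters, and could take lexical features as a third input stream in the same way.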
MOVING ROBOT AND METHOD OF CONTROLLING THE SAME
According to a moving robot and a method of controlling the same of the present disclosure, the moving robot detects a sound generated in its area, moves to the point where the sound was generated according to the type of sound and its operation mode, analyzes an image of that point, and determines the indoor situation in order to perform a corresponding operation. By detecting the sound, the moving robot can determine that an accident has occurred at the location where the sound was generated and can automatically perform a specified operation corresponding to the accident even without a control command from a user, making it possible to respond rapidly to the accident. The moving robot can also classify the source of the sound as a person, a companion animal, or another object, and can perform different operations according to that classification.
CONTROL DEVICE, MOBILE BODY, AND CONTROL METHOD
A control device (10) includes: an acquisition unit (12) that acquires outside-world information (11B) around a mobile body (100); and a control unit (13) that, on the basis of the outside-world information (11B), switches the characteristics of a contact portion (130), the portion where a leg of the mobile body (100) comes into contact with the external environment, such that the contact sound between the contact portion (130) and the external environment changes.
METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM FOR CONTROLLING A ROBOT
A method for controlling a robot is provided. The method includes the steps of: acquiring information on a sound associated with a robot call in a serving place; determining a call target robot associated with the sound, among a plurality of robots in the serving place, on the basis of the acquired information; and providing feedback associated with the sound by the call target robot.
Robot-connected IoT-based sleep-caring system
A robot-connected IoT-based sleep-caring system includes a sleep-caring robot and an IoT system. The sleep-caring robot includes environment monitoring, physiology monitoring, sleep monitoring, sound, lighting and electricity control, a smart storage compartment, central data processing, and machine arms. The IoT system senses and executes instructions from the sleep-caring robot, thereby catering to bedroom activities of the user.
ROBOT, ROBOT CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
The robot includes a storage unit and a control unit. The control unit acquires feature amounts of an outside stimulus acting on the robot from outside, stores the acquired feature amounts in the storage unit as a history, compares feature amounts acquired at a certain timing with the feature amounts stored in the storage unit to calculate a first similarity degree, and controls operations based on the calculated first similarity degree.
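The abstract does not specify how the similarity degree is computed. As one plausible sketch, assuming feature amounts are numeric vectors and similarity is cosine similarity against the best-matching stored stimulus (both assumptions, not disclosed by the abstract), the history-and-compare step could look like this:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)

class StimulusHistory:
    """Hypothetical storage unit for outside-stimulus feature amounts."""
    def __init__(self):
        self.history = []

    def record(self, feats):
        self.history.append(list(feats))

    def first_similarity(self, feats):
        """Similarity degree between a newly acquired stimulus and the
        closest stimulus in the stored history (0.0 if history is empty)."""
        if not self.history:
            return 0.0
        return max(cosine_similarity(feats, h) for h in self.history)
```

A control unit could then branch on this degree, for example reacting calmly to familiar stimuli (high similarity) and attentively to novel ones (low similarity).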
Robot, robot control method, and recording medium
A robot is equipped with a processor. The processor detects the external appearance or audio of a living being and, by controlling the robot, causes the robot to execute an operation in accordance with the detected external appearance or audio and liking data indicating the robot's preferences regarding external appearance or audio.
INFORMATION PROCESSING DEVICE AND ACTION MODE SETTING METHOD
A feature acquiring section 100 obtains feature data of a target person. A matching degree deriving section 110 derives a matching degree between the feature data and feature data of a registered user stored in a feature amount database 120. An identifying section 130 determines that the target person is the registered user in a case where the matching degree is greater than or equal to a first threshold, and determines that the target person is not the registered user in a case where the matching degree is less than a second threshold smaller than the first threshold. An action management section 140 sets an action mode of an acting subject according to the matching degree.
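The dual-threshold decision described above admits a third outcome when the matching degree falls between the two thresholds. A minimal sketch of that three-way logic (threshold values and names are illustrative, not from the abstract):

```python
def identify(matching_degree, first_threshold=0.8, second_threshold=0.5):
    """Three-way identification from a dual-threshold scheme:
    - at or above the first threshold: target person is the registered user;
    - below the second (smaller) threshold: target person is not the user;
    - in between: identity is indeterminate, so an action management
      section may choose a cautious intermediate action mode."""
    assert second_threshold < first_threshold
    if matching_degree >= first_threshold:
        return "registered"
    if matching_degree < second_threshold:
        return "unregistered"
    return "indeterminate"
```

The point of the gap between thresholds is to avoid hard accept/reject decisions on borderline matches; the action mode can then be graded by matching degree rather than binary.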
SEMANTIC REARRANGEMENT OF UNKNOWN OBJECTS FROM NATURAL LANGUAGE COMMANDS
A robotic system is provided for performing rearrangement tasks guided by a natural language instruction. The system can include a number of neural networks used to determine a selected rearrangement of the objects in accordance with the natural language instruction. A target object predictor network processes a point cloud of the scene and the natural language instruction to identify a set of query objects that are to be rearranged. A language-conditioned prior network processes the point cloud, the natural language instruction, and the set of query objects to sample from a distribution of rearrangements, generating a number of sets of pose offsets for the query objects. A discriminator network then processes the samples to generate scores for them. The samples may be refined until the discriminator network assigns at least one sample a score above a threshold value.
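The sample-score-refine loop at the end of the abstract is a generic pattern: draw candidates from a proposal distribution, score each with a discriminator, and stop once a candidate clears a threshold. A small sketch of that control loop, with the prior and discriminator abstracted into callables (the real networks, pose-offset representation, and refinement strategy are not disclosed by the abstract and are not modeled here):

```python
import random

def refine_until_accepted(propose, score, threshold, max_iters=1000, seed=0):
    """Repeatedly draw a candidate from `propose` (stand-in for the
    language-conditioned prior), score it with `score` (stand-in for the
    discriminator), and stop once the best score exceeds `threshold`.
    Returns the best (candidate, score) pair found."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(max_iters):
        candidate = propose(rng)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
        if best_score > threshold:
            break
    return best, best_score
```

Usage with a toy one-dimensional "pose offset" whose ideal value is 0.5 and a score that penalizes distance from it:

```python
best, best_score = refine_until_accepted(
    propose=lambda rng: rng.uniform(0.0, 1.0),
    score=lambda x: -abs(x - 0.5),
    threshold=-0.05)
```

In the patented system the proposal would be a learned distribution over sets of pose offsets and refinement could perturb promising samples rather than redrawing independently; only the accept-when-above-threshold structure is shown here.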