Patent classifications
G05B2219/40411
Monitoring of surface touch points for precision cleaning
A system includes a robotic device, a sensor disposed on the robotic device, and circuitry configured to perform operations. The operations include determining a map that represents stationary features of an environment and receiving, from the sensor, sensor data representing the environment. The operations also include determining, based on the sensor data, a representation of an actor within the environment, where the representation includes keypoints representing corresponding body locations of the actor. The operations also include determining that a portion of a particular stationary feature is positioned within a threshold distance of a particular keypoint and, based thereon, updating the map to indicate that the portion is to be cleaned. The operations further include, based on the map as updated, causing the robotic device to clean the portion of the particular stationary feature.
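A minimal sketch of the threshold-distance check described above, assuming points sampled from stationary features and actor keypoints are available as arrays; the names and the 0.10 m threshold are illustrative assumptions, not taken from the patent.

```python
import numpy as np

THRESHOLD = 0.10  # metres; assumed contact-proximity threshold (hypothetical value)

def mark_touched_portions(feature_points: np.ndarray,
                          keypoints: np.ndarray,
                          clean_flags: np.ndarray) -> np.ndarray:
    """Flag stationary-feature points lying within THRESHOLD of any actor keypoint.

    feature_points: (N, 3) points sampled from stationary features in the map.
    keypoints:      (K, 3) body-location keypoints of the detected actor.
    clean_flags:    (N,) boolean array; True means "portion is to be cleaned".
    """
    # Pairwise distances between every feature point and every keypoint.
    dists = np.linalg.norm(feature_points[:, None, :] - keypoints[None, :, :], axis=-1)
    touched = dists.min(axis=1) <= THRESHOLD
    # Once flagged, a portion stays marked until the robot cleans it.
    return clean_flags | touched
```

The updated flags would then drive the cleaning step, with the robot visiting each flagged portion of the map.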
Apparatus and methods for remotely controlling robotic devices
Computerized appliances may be operated remotely by users. A learning controller apparatus may be operated to determine an association between a user indication and an action by the appliance. The user indications, e.g., gestures, posture changes, or audio signals, may trigger an event associated with the controller. The event may be linked to a plurality of instructions configured to communicate a command to the appliance. The learning apparatus may receive sensory input conveying information about the robot's state and environment (context). The sensory input may be used to determine the user indications. During operation, upon determining the indication from the sensory input, the controller may cause execution of the respective instructions in order to trigger an action by the appliance. A device animation methodology may enable users to operate computerized appliances using gestures, voice commands, posture changes, and/or other customized control elements.
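A rough sketch of the indication-to-command association, assuming a simple dictionary-based mapping; the class and method names are hypothetical and the indication classifier is a placeholder.

```python
class LearningController:
    def __init__(self):
        self.associations = {}   # indication label -> appliance commands

    def learn(self, indication: str, commands: list) -> None:
        # Learning phase: link a user indication (gesture, posture change,
        # audio cue) to the instructions that command the appliance.
        self.associations[indication] = commands

    def classify_indication(self, sensory_input: dict) -> str:
        # Placeholder: a real controller would infer the indication from the
        # robot's state and environment (context) carried by the sensory input.
        return sensory_input.get("indication", "unknown")

    def handle(self, sensory_input: dict) -> list:
        # Operation phase: determine the indication, then return the linked
        # instructions so the corresponding appliance action is triggered.
        return self.associations.get(self.classify_indication(sensory_input), [])

controller = LearningController()
controller.learn("wave", ["appliance:power_on"])
print(controller.handle({"indication": "wave"}))  # ['appliance:power_on']
```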
ROBOT AND METHOD FOR CONTROLLING SAME
A robot and a method for controlling a robot are provided. The method includes: acquiring an image of a user; acquiring, by analyzing the image, first information regarding a position of the user and a gaze direction of the user; acquiring, based on an image capturing position associated with the image and an image capturing direction associated with the image, matching information for matching the first information with a map corresponding to an environment in which the robot operates; acquiring, based on the matching information and the first information, second information regarding the position of the user on the map and the gaze direction of the user on the map; and identifying an object corresponding to the gaze direction of the user on the map by inputting the second information into an artificial intelligence model trained to identify an object on the map.
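A minimal sketch of the matching step, assuming it amounts to transforming the user's position and gaze from the camera frame (defined by the capturing position and direction) into the map frame; the function and variable names are assumptions.

```python
import numpy as np

def to_map_frame(user_pos_cam: np.ndarray,
                 gaze_dir_cam: np.ndarray,
                 cam_pos_map: np.ndarray,
                 cam_rot_map: np.ndarray):
    """Transform first information (camera frame) into second information (map frame).

    cam_rot_map: (3, 3) rotation of the camera frame expressed in the map frame,
    derived from the image capturing direction; cam_pos_map is the capturing position.
    """
    user_pos_map = cam_rot_map @ user_pos_cam + cam_pos_map
    gaze_dir_map = cam_rot_map @ gaze_dir_cam          # directions are only rotated
    gaze_dir_map = gaze_dir_map / np.linalg.norm(gaze_dir_map)
    return user_pos_map, gaze_dir_map
```

The resulting map-frame position and gaze direction would then be fed to the trained model that identifies the gazed-at object on the map.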
Robotically controlled display
A robotic mount is configured to move an entertainment element such as a video display, a video projector, a video projector screen or a camera. The robotic mount is moveable in multiple degrees of freedom, whereby the associated entertainment element is moveable in three-dimensional space. In one embodiment, a system of entertainment elements is made to move and operate in synchronicity, such as moving a single camera via multiple robotic mounts to one or more positions or along one or more paths.
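A loose sketch of the synchronized-motion idea, assuming each mount follows its own waypoint path while all mounts are driven by a single shared time parameter; the functions and path representation are hypothetical.

```python
def interpolate(path, t):
    """Linear interpolation along a list of 3-D waypoints, t in [0, 1]."""
    if t >= 1.0:
        return path[-1]
    scaled = t * (len(path) - 1)
    i = int(scaled)
    frac = scaled - i
    a, b = path[i], path[i + 1]
    return tuple(a[k] + frac * (b[k] - a[k]) for k in range(3))

def synchronised_targets(paths, t):
    # One target position per robotic mount, all sampled at the same time t,
    # so the mounted entertainment elements move in synchronicity.
    return [interpolate(p, t) for p in paths]

paths = [[(0, 0, 0), (1, 0, 0)], [(0, 1, 0), (0, 1, 1)]]
print(synchronised_targets(paths, 0.5))  # [(0.5, 0.0, 0.0), (0.0, 1.0, 0.5)]
```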
Food-safe, washable, thermally-conductive robot cover
A cover for an automated robot includes elastic sheets that are adhered to each other in a geometry. The geometry is configured to allow the elastic sheets to expand and contract while the automated robot moves within its range of motion. The elastic sheets are attached to the automated robot by elasticity of the elastic sheets. A first group of the elastic sheets forms an elastic collar configured to grip the automated robot at a distal end and a proximal end of the cover in a non-breakable manner such that during operation of the robot, the elastic sheets hold their elasticity and integrity without breaking.
Upper limb motion support apparatus and upper limb motion support system
An upper limb motion support apparatus and an upper limb motion support system are proposed that can significantly enhance an operator's work efficiency and reduce their workload. A controller, which causes an articulated arm and an end effector to perform three-dimensional motion according to the operator's intention based on a biological signal acquired by a biological signal detection unit, causes the articulated arm and the end effector to perform cooperative motion in conjunction with the operator's upper limb motion by referring to content recognized by an upper limb motion recognition unit.
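A very loose sketch of the cooperative-motion idea, assuming the controller blends a velocity decoded from the biological signal with the recognized upper-limb velocity into one end-effector command; the blending scheme, names, and gain are assumptions, not the patent's method.

```python
import numpy as np

def cooperative_command(bio_intent_vel: np.ndarray,
                        limb_vel: np.ndarray,
                        assist_gain: float = 0.7) -> np.ndarray:
    """Blend decoded operator intent with recognised limb motion into one command."""
    # Higher assist_gain means the arm follows the decoded intent more strongly.
    return assist_gain * bio_intent_vel + (1.0 - assist_gain) * limb_vel

print(cooperative_command(np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.2, 0.0])))
```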
Display control device, display control method, computer program product, and communication system
A control system, method and computer program product cooperate to assist in controlling an autonomous robot. An interface receives recognition information from the autonomous robot, the recognition information including candidate target objects with which the autonomous robot may interact. A display control unit causes a display image of the candidate target objects to be displayed on a display, wherein at least two of the candidate target objects are displayed with an associated indication of a target object score.
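A small sketch of the display-control step, assuming candidates arrive with recognition scores and the display shows each candidate alongside its score; the data structure and formatting are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float  # target-object score reported with the recognition information

def build_display_items(candidates: list, max_items: int = 5) -> list:
    # Sort by score and label each displayed candidate with its score so the
    # operator can compare at least two candidates at a glance.
    top = sorted(candidates, key=lambda c: c.score, reverse=True)[:max_items]
    return [f"{c.name} ({c.score:.2f})" for c in top]

print(build_display_items([Candidate("mug", 0.91), Candidate("bottle", 0.64)]))
```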
CONTROL SYSTEM AND CONTROL METHOD
The present invention includes a collection device, a communication module, a first processing unit, a second processing unit, and an execution unit. The benefit of the invention over the prior art is that, through cooperation between the first processing unit and the second processing unit, complicated operations that the second processing unit cannot accomplish are transferred to the first processing unit, which then accomplishes them, so that processing speed and accuracy are increased. Meanwhile, simple logic operations that would otherwise occupy the first processing unit are transferred to the second processing unit, which processes them and forms a control signal to control the execution unit, so that delayed processing of the execution unit is avoided.
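A sketch of this division of labour, assuming jobs are routed by complexity into two queues, one per processing unit; the queue-based structure and the placeholder logic are assumptions for illustration only.

```python
import queue

heavy_jobs = queue.Queue()   # consumed by the first (more capable) processing unit
light_jobs = queue.Queue()   # consumed by the second processing unit

def dispatch(job, is_complex: bool) -> None:
    # Complicated operations go to the first unit; simple logic goes to the second.
    (heavy_jobs if is_complex else light_jobs).put(job)

def second_unit_step():
    # The second unit handles only simple logic and emits a control signal
    # immediately, so the execution unit is not held up by heavy processing.
    job = light_jobs.get()
    control_signal = bool(job)   # placeholder for the simple logic operation
    return control_signal

dispatch({"sensor_reading": 1}, is_complex=False)
print(second_unit_step())  # True -> control signal sent to the execution unit
```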
APPARATUS AND METHOD FOR GENERATING ROBOT INTERACTION BEHAVIOR
Disclosed herein are an apparatus and method for generating robot interaction behavior. The method for generating robot interaction behavior includes generating a co-speech gesture of a robot corresponding to utterance input of a user; generating a nonverbal behavior of the robot, that is, a sequence of the robot's next joint positions estimated from the user's joint positions and the robot's current joint positions based on a pre-trained neural network model for robot pose estimation; and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.
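An illustrative sketch only: a pre-trained model is assumed to map the concatenated user and robot joint positions to the robot's next joint positions, and a simple weighted blend stands in for combining the co-speech gesture with the nonverbal behavior; the function names, input layout, and blending are assumptions.

```python
import numpy as np

def next_robot_pose(model, user_joints: np.ndarray, robot_joints: np.ndarray) -> np.ndarray:
    # Assumed input layout: user and robot poses concatenated for one time step.
    features = np.concatenate([user_joints.ravel(), robot_joints.ravel()])[None, :]
    return model.predict(features)[0]          # estimated next joint positions

def final_behavior(gesture_pose: np.ndarray, nonverbal_pose: np.ndarray,
                   w: float = 0.5) -> np.ndarray:
    # One simple, hypothetical way to fuse the co-speech gesture with the
    # nonverbal behavior when both are available.
    return w * gesture_pose + (1.0 - w) * nonverbal_pose
```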
OPERATIONAL PARAMETERS
A customer service robot may be limited to a maximum physical ability, such as speed of travel, speed of a robotic arm, etc. However, certain customers may be uncomfortable with a robot operating at maximum capacity. Accordingly, a customer may have an attribute associated with a performance-limiting criterion. The criterion then limits the robot to operations within the operational parameters associated with it. As a benefit, the robot may be transformed to provide a better customer service experience by working quickly to address a customer service task, but within the confines of what a particular customer, or customer type, considers comfortable or acceptable.
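A hypothetical sketch of applying such limits, assuming the customer attribute selects a named profile whose limits clamp the requested operational parameters; the profile names and numeric values are invented for illustration.

```python
LIMIT_PROFILES = {                 # example values only, not from the patent
    "default":  {"travel_speed": 1.5, "arm_speed": 1.0},
    "cautious": {"travel_speed": 0.6, "arm_speed": 0.4},
}

def apply_limits(requested: dict, customer_profile: str) -> dict:
    limits = LIMIT_PROFILES.get(customer_profile, LIMIT_PROFILES["default"])
    # Never exceed the limit for this customer, even if the robot could go faster.
    return {name: min(value, limits.get(name, value)) for name, value in requested.items()}

print(apply_limits({"travel_speed": 2.0, "arm_speed": 1.2}, "cautious"))
# {'travel_speed': 0.6, 'arm_speed': 0.4}
```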