Method of localization using multi sensor and robot implementing same
Disclosed herein are a method of localization using multiple sensors and a robot implementing the same. The method includes: sensing the distance between an object placed outside the robot and the robot, and generating a first LiDAR frame with a LiDAR sensor of the robot, while a moving unit moves the robot; capturing an image of an object placed outside the robot and generating a first visual frame with a camera sensor of the robot; and, by a controller, comparing a LiDAR frame stored in a map storage of the robot with the first LiDAR frame, comparing a visual frame registered in a frame node of a pose graph with the first visual frame, determining the accuracy of the first LiDAR frame's comparison results, and calculating a current position of the robot.
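To make the fusion step concrete, here is a minimal Python sketch of one plausible fallback scheme: score the LiDAR-frame comparison and, when its accuracy is low, lean on the visual-frame match registered at the pose-graph node. All names (`lidar_match_accuracy`, `localize`, the `threshold` value) are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def lidar_match_accuracy(stored_scan, current_scan):
    """Toy accuracy score: inverse of the mean nearest-point distance
    between two 2-D scans given as (N, 2) arrays."""
    dists = np.linalg.norm(stored_scan[:, None, :] - current_scan[None, :, :], axis=2)
    return 1.0 / (1.0 + dists.min(axis=1).mean())

def localize(stored_scan, current_scan, node_pose, visual_offset, threshold=0.8):
    """Return an estimated pose: trust the LiDAR comparison when it is
    accurate enough, otherwise fall back to the visual-frame match
    registered in a frame node of the pose graph."""
    if lidar_match_accuracy(stored_scan, current_scan) >= threshold:
        return node_pose                      # LiDAR-based estimate
    return node_pose + visual_offset          # visual-frame fallback
```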
Determining how to assemble a meal
In an embodiment, a method includes determining a given material to manipulate to achieve a goal state. The goal state can be one or more deformable or granular materials in a particular arrangement. The method further includes, for the given material, determining a respective outcome for each of a plurality of candidate actions to manipulate the given material. The determining can be performed with a physics-based model, in one embodiment. The method can further include determining a given action of the candidate actions, where the outcome of the given action reaching the goal state is within at least one tolerance. The method further includes, based on a selected action of the given actions, generating a first motion plan for the selected action.
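A short Python sketch of the selection loop described above, assuming scalar states and a `simulate` callable standing in for the physics-based model; all identifiers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    params: dict

def select_action(goal_state, candidates, simulate, tolerance=0.05):
    """Predict an outcome for each candidate action with the stand-in
    physics model and return the first action whose outcome lands
    within the tolerance of the goal state (None if none qualifies)."""
    for action in candidates:
        outcome = simulate(action)            # physics-based prediction
        if abs(outcome - goal_state) <= tolerance:
            return action                     # hand this to motion planning
    return None
```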
CONTROL DEVICE, CONTROL METHOD, COMPUTER PROGRAM PRODUCT, AND ROBOT CONTROL SYSTEM
A control system, method and computer program product cooperate to assist control for an autonomous robot. The system includes a communications interface that exchanges information with the autonomous robot. A user interface displays a scene of a location in which the autonomous robot is positioned, and also receives an indication of a user selection of a user-selected area within the scene. The communications interface transmits an indication of said user-selected area to the autonomous robot for further processing of the area by said autonomous robot.
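As a rough illustration of the handoff between the user interface and the communications interface, the sketch below serializes a rectangular user selection and sends it to the robot over an already-connected socket; the message schema is invented for the example.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SelectedArea:
    x: int        # pixel coordinates of the selection within the scene
    y: int
    width: int
    height: int

def transmit_selection(sock, area: SelectedArea) -> None:
    """Encode the user-selected area and send it to the autonomous
    robot for further processing of that region of the scene."""
    payload = json.dumps({"type": "user_selected_area", "area": asdict(area)})
    sock.sendall(payload.encode("utf-8"))
```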
ROBOT, METHOD OF CAPTURE IMAGE, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
A robot and a method of capturing an image applied to the robot, an electronic device for implementing the method of capturing the image, and a computer-readable storage medium are provided. The robot includes: a robot body; a workbench; a telescopic structure having one end pivotally connected to the robot body and the other end connected to the workbench; a driving mechanism arranged on the robot body and configured to drive the telescopic structure to extend, retract and/or move relative to the robot body; and an image capture device arranged on the workbench. The telescopic structure is configured to allow the image capture device to capture an image of a target object from different angles with the extension, retraction and/or movement of the telescopic structure.
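A small Python sketch of the capture loop this enables, stepping the telescopic structure through a few extension setpoints and imaging at each; `driver` and `camera` are hypothetical wrappers for the driving mechanism and the image capture device.

```python
import time

def capture_from_angles(driver, camera, extensions_mm=(0, 50, 100, 150)):
    """Drive the telescopic structure to each extension setpoint and
    capture one image of the target object there, yielding
    (setpoint, image) pairs so the object is seen from different angles."""
    for setpoint in extensions_mm:
        driver.move_to(setpoint)      # extend/retract the telescopic structure
        time.sleep(0.5)               # let the workbench settle before imaging
        yield setpoint, camera.capture()
```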
Machine learning method and mobile robot
A machine learning method includes: a first learning step, performed in a phase before a neural network is installed in a mobile robot, in which a stationary first obstacle is placed in a set space and, using simulation, the first obstacle is placed at different positions so that the neural network repeatedly learns a path from a starting point to a destination that avoids the first obstacle; and a second learning step, performed in a phase after the neural network is installed in the mobile robot, in which, when the mobile robot recognizes a second obstacle that moves around the mobile robot in the space where the mobile robot operates, the neural network repeatedly learns a path to the destination that avoids the second obstacle, every time the mobile robot recognizes the second obstacle.
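The two phases might look like the following Python sketch, with `simulator`, `net`, and the robot-side methods all standing in for unspecified components; this illustrates the training schedule, not the patent's algorithm.

```python
import random

def first_learning_phase(net, simulator, episodes=1000):
    """Pre-installation: re-place the stationary obstacle each episode
    in the set space and train the network on a start-to-destination
    path that avoids it."""
    for _ in range(episodes):
        obstacle = (random.uniform(0, 10), random.uniform(0, 10))
        path = simulator.rollout(net, obstacle)
        net.update(path, simulator.reward(path))

def second_learning_phase(net, robot):
    """Post-installation: every time a moving obstacle is recognized,
    run one more learning update on the avoidance path just taken."""
    while robot.is_active():
        obstacle = robot.detect_moving_obstacle()
        if obstacle is not None:
            path = robot.plan_avoidance_path(obstacle)
            net.update(path, robot.evaluate(path))
```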
Robot for making coffee and method for controlling the same
A robot for making coffee and a method for controlling the same are provided to couple or decouple a portafilter to or from an espresso machine without damage to the espresso machine or the portafilter due to a collision between the espresso machine and the portafilter. The robot includes a robot arm to move with a predetermined degree of freedom, a gripper provided in the robot arm to grip a portafilter, a torque sensor provided in the robot arm to detect repulsive force (Fr) when the portafilter makes contact with a group head of an espresso machine, and a controller configured to set a virtual spring having a predetermined elastic modulus (C) based on the repulsive force (Fr) detected by the torque sensor, and to control driving torque (T) of the robot arm depending on the restoring force (Fe) of the virtual spring.
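In impedance-control terms, the abstract suggests a law along these lines; the sketch below (with an assumed proportional rule for deriving C from Fr) computes the restoring force Fe and maps it to joint torque through the transposed arm Jacobian.

```python
import numpy as np

def virtual_spring_torque(fr, displacement, jacobian_t, c_gain=0.8):
    """Set the virtual spring's elastic modulus C from the repulsive
    force Fr sensed at contact with the group head, compute the
    restoring force Fe = -C * x, and convert Fe to the driving
    torque T of the robot arm. The proportional rule for C is an
    assumption made for illustration."""
    C = c_gain * abs(fr)                  # elastic modulus set from Fr
    fe = -C * np.asarray(displacement)    # restoring force of the virtual spring
    return jacobian_t @ fe                # driving torque T = J^T * Fe
```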
Grasping of an object by a robot based on grasp strategy determined using machine learning model(s)
Grasping of an object, by an end effector of a robot, based on a grasp strategy that is selected using one or more machine learning models. The grasp strategy utilized for a given grasp is one of a plurality of candidate grasp strategies. Each candidate grasp strategy defines a different group of one or more values that influence performance of a grasp attempt in a manner that is unique relative to the other grasp strategies. For example, value(s) of a grasp strategy can define a grasp direction for grasping the object (e.g., “top”, “side”), a grasp type for grasping the object (e.g., “pinch”, “power”), grasp force applied in grasping the object, pre-grasp manipulations to be performed on the object, and/or post-grasp manipulations to be performed on the object.
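A compact Python sketch of strategy selection under these definitions: enumerate candidate strategies as groups of values and let a learned scoring model pick one. The candidate set and the `score_model` signature are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GraspStrategy:
    direction: str   # e.g. "top" or "side"
    grip: str        # e.g. "pinch" or "power"
    force: float     # grasp force to apply, in newtons

CANDIDATES = [
    GraspStrategy("top", "pinch", 5.0),
    GraspStrategy("top", "power", 15.0),
    GraspStrategy("side", "pinch", 5.0),
    GraspStrategy("side", "power", 15.0),
]

def select_strategy(object_features, score_model):
    """Score every candidate strategy with the learned model and
    return the one predicted most likely to succeed for this object."""
    return max(CANDIDATES, key=lambda s: score_model(object_features, s))
```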
Apparatus and method for generating robot interaction behavior
Disclosed herein are an apparatus and method for generating robot interaction behavior. The method for generating robot interaction behavior includes generating a co-speech gesture of a robot corresponding to utterance input of a user, generating a nonverbal behavior of the robot, that is, a sequence of next joint positions of the robot estimated from the joint positions of the user and the current joint positions of the robot based on a pre-trained neural network model for robot pose estimation, and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.
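Sketched in Python, the two generation paths and their merge could look like this; `model.predict` and the joint-name dictionaries are illustrative stand-ins for the pre-trained network and the robot's joint state.

```python
def next_robot_pose(model, user_joints, robot_joints):
    """Estimate the robot's next joint positions from the user's joint
    positions and the robot's current joint positions (one step of
    the assumed pose-estimation network)."""
    return model.predict([*user_joints, *robot_joints])

def final_behavior(co_speech, nonverbal):
    """Merge per-joint targets: joints animated by the co-speech
    gesture win, the nonverbal prediction fills in the rest. Both
    inputs map joint name -> target position."""
    merged = dict(nonverbal)
    merged.update(co_speech)
    return merged
```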
Robot and operation method therefor
The present invention provides a guidance service using a robot. For example, the robot may provide the guidance service in an airport. The robot may receive a destination, acquire a movement path from a current position to the destination, and transmit the movement path to a mobile terminal. The mobile terminal may receive the movement path from the robot and display a guidance path representing the movement path and, overlapping the guidance path, a user path representing the positional movement of the mobile terminal.
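The robot-to-terminal handoff might reduce to a message like the one below; the planner interface and JSON schema are invented for illustration.

```python
import json

def guidance_message(current, destination, planner):
    """Build the payload the robot transmits to the mobile terminal:
    the movement path from the current position to the destination."""
    path = planner.shortest_path(current, destination)  # e.g. A* over the venue map
    return json.dumps({"type": "guidance_path", "waypoints": path})
```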
Gaming service automation system with graphical user interface
A robot management system (RMS) includes a plurality of service robots deployed within an operations venue that includes a plurality of gaming devices, an operator terminal presenting a graphical user interface (GUI) to an operator, and a robot management system server (RMS server) configured in networked communication with the plurality of service robots. The RMS server is configured to: identify location data for the service robots; create an interactive overlay map of the operations venue that includes a static map of the operations venue, overlay data showing the location data of the plurality of service robots over the static map, and an interactive icon for each service robot of the plurality of service robots; display, via the GUI, the overlay map; receive a first input indicating a selection of a first interactive icon associated with a first service robot; and display current status information associated with the first service robot.
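A minimal Python sketch of the server-side state behind that GUI flow, with made-up types: the overlay keeps the static map plus per-robot locations, and an icon click resolves to the robot's current status string.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceRobot:
    robot_id: str
    location: tuple            # (x, y) on the static venue map
    status: str = "idle"

@dataclass
class OverlayMap:
    static_map: str                        # e.g. path to the venue floor plan
    robots: dict = field(default_factory=dict)

    def update_location(self, robot: ServiceRobot) -> None:
        """Refresh the overlay with a robot's latest reported location."""
        self.robots[robot.robot_id] = robot

    def on_icon_selected(self, robot_id: str) -> str:
        """Handle a click on a robot's interactive icon: return the
        current status information to display via the GUI."""
        robot = self.robots[robot_id]
        return f"{robot.robot_id} at {robot.location}: {robot.status}"
```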