Patent classifications
G05B2219/40494
METHOD AND DEVICE FOR A COMPUTERIZED MECHANICAL DEVICE
A method for training a computerized mechanical device, comprising: receiving data documenting actions of an actuator performing a task in a plurality of iterations; calculating, using the data, a neural network dataset used for performing the task; gathering, in a plurality of reward iterations, a plurality of scores given by an instructor to a plurality of states, each comprising at least one sensor value, while a robotic actuator performs the task according to the neural network; calculating, using the plurality of scores, a reward dataset used for computing a reward function; updating at least some of the neural network's plurality of parameters by receiving, in each of a plurality of policy iterations, a reward value computed by applying the reward function to another state comprising at least one sensor value, while the robotic actuator performs the task according to the neural network; and outputting the updated neural network.
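The claimed flow — instructor scores gathered over states, a reward function fit to those scores, then policy iterations driven by the learned reward — can be sketched as follows. This is a minimal illustration, not the patented implementation: the 1-D "task," the least-squares reward model, and the finite-difference policy update are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D task: the actuator should drive the sensor value toward 0.
# Reward iterations: an instructor scores states (each a single sensor value).
states = rng.uniform(-1.0, 1.0, size=(200, 1))
scores = -np.abs(states[:, 0]) + rng.normal(0, 0.05, 200)  # prefers |s| small

# Fit the reward dataset -> reward function (least squares on an |s| feature).
X = np.column_stack([np.abs(states[:, 0]), np.ones(len(states))])
w, *_ = np.linalg.lstsq(X, scores, rcond=None)

def reward(state):
    return w[0] * abs(state[0]) + w[1]

# Policy iterations: nudge a single policy parameter (a gain) so that acting
# according to the policy yields higher learned reward.
gain = 0.1
for _ in range(50):
    s = rng.uniform(-1.0, 1.0, size=(1,))
    eps = 0.01
    # finite-difference estimate of how the learned reward changes with the gain
    r_plus = reward(s - (gain + eps) * s)
    r_minus = reward(s - (gain - eps) * s)
    gain += 0.5 * (r_plus - r_minus) / (2 * eps)
    gain = float(np.clip(gain, 0.0, 1.0))

print(round(gain, 2))
```

Since the instructor prefers states near zero, the learned reward pushes the gain toward fully cancelling the sensor value.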
MOBILE ROBOT SYSTEM, MOBILE ROBOT, AND METHOD OF CONTROLLING THE MOBILE ROBOT SYSTEM
Disclosed herein are a mobile robot system including a server that creates and stores traveling information about a moving space and a mobile robot that travels in the moving space, wherein the mobile robot comprises a driving portion configured to move the mobile robot, a communication device configured to receive the traveling information from the server, and a controller configured to control the driving portion based on the traveling information received from the communication device, and wherein the server receives information about the moving space from at least one external robot and creates the traveling information based on that information.
Disclosed herein are a mobile robot system and a mobile robot capable of receiving, from an external server, information about the moving space collected by another mobile robot, and then performing deep learning based on that information so as to travel safely and flexibly in various environments.
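The server-side aggregation described above — collecting moving-space reports from external robots and deriving traveling information for a mobile robot — might look like the following sketch. The majority-vote occupancy map is an assumption for illustration; the patent does not specify how traveling information is derived.

```python
from collections import defaultdict

# Hypothetical server that fuses free/occupied cell reports from several
# external robots into the traveling information sent to a mobile robot.
class Server:
    def __init__(self):
        self.votes = defaultdict(int)  # cell -> occupied votes minus free votes

    def receive(self, robot_id, observations):
        # observations: {(x, y): True if occupied, False if free}
        for cell, occupied in observations.items():
            self.votes[cell] += 1 if occupied else -1

    def traveling_info(self):
        # cells reported free by a majority of robots are traversable
        return {cell: v < 0 for cell, v in self.votes.items()}

server = Server()
server.receive("robot_a", {(0, 0): False, (0, 1): True})
server.receive("robot_b", {(0, 0): False, (0, 1): False})
server.receive("robot_c", {(0, 1): True})

info = server.traveling_info()
print(info[(0, 0)], info[(0, 1)])  # (0,0) free by both reports; (0,1) majority occupied
```

The mobile robot's controller would then drive only through cells marked traversable in `info`.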
System and method for location determination and robot control
A control system and method locate a partially or fully occluded target object in an area of interest. The location of the occluded object may be determined using visual information from a vision sensor and RF-based location information. Determining the location of the target object in this manner may effectively allow the control system to see through obstructions that are occluding the object. Model-based and/or deep-learning techniques may then be employed to move a robot into range relative to the target object to perform a predetermined (e.g., grasping) operation. This operation may be performed while the object is still in the occluded state or after a decluttering operation has been performed to remove one or more obstructions occluding line-of-sight vision to the object.
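One simple way the vision and RF cues could be fused — assumed here for illustration, not taken from the patent — is to let the coarse RF estimate select among candidate vision detections, falling back to the RF estimate alone when every detection is too far away (i.e., the target is fully occluded).

```python
import math

# Hypothetical fusion step: the RF tag gives a coarse target position; the
# vision sensor yields candidate detections, some belonging to occluding
# objects. Picking the detection nearest the RF estimate lets the controller
# "see through" the clutter to the occluded target.
def locate_target(rf_estimate, detections, gate=0.5):
    best, best_d = None, float("inf")
    for det in detections:
        d = math.dist(rf_estimate, det)
        if d < best_d:
            best, best_d = det, d
    # if no detection falls within the gate, trust the RF estimate alone
    return best if best_d <= gate else rf_estimate

rf = (1.0, 2.0)
dets = [(0.2, 0.1), (1.1, 1.9), (3.0, 3.0)]  # middle detection is the target
print(locate_target(rf, dets))
```

The `gate` radius is a hypothetical tuning parameter trading off trust in the RF ranging against trust in the vision detections.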
Learning robotic tasks using one or more neural networks
Various embodiments enable a robot, or other autonomous or semi-autonomous device or system, to receive data involving the performance of a task in the physical world. The data can be provided as input to a perception network to infer a set of percepts about the task, which can correspond to relationships between objects observed during the performance. The percepts can be provided as input to a plan generation network, which can infer a set of actions as part of a plan. Each action can correspond to one of the observed relationships. The plan can be reviewed and any corrections made, either manually or through another demonstration of the task. Once the plan is verified as correct, the plan (and any related data) can be provided as input to an execution network that can infer instructions to cause the robot, and/or another robot, to perform the task.
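The three-stage pipeline described in the abstract — a perception network inferring percepts, a plan generation network mapping percepts to actions, and an execution network emitting robot instructions — can be sketched with each "network" stubbed as a simple rule. All function names and the relation format are assumptions for illustration only.

```python
# Hypothetical three-stage pipeline mirroring the abstract.
def perception_net(demo_frames):
    # infer relationships such as ("cup", "on", "table") from observed frames
    return [rel for frame in demo_frames for rel in frame["relations"]]

def plan_net(percepts):
    # one action per observed relationship
    return [("place", subj, obj) for subj, pred, obj in percepts if pred == "on"]

def execution_net(plan):
    # turn each planned action into low-level robot instructions
    return [f"move_to({obj}); grasp({subj}); release_on({obj})"
            for _, subj, obj in plan]

demo = [{"relations": [("cup", "on", "table"), ("lid", "on", "cup")]}]
plan = plan_net(perception_net(demo))
# a human could review and correct `plan` here before it is executed
print(execution_net(plan)[0])
```

The review step between `plan_net` and `execution_net` corresponds to the abstract's verification stage, where corrections are made manually or through another demonstration.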
Anatomical feature tracking
Anatomical feature tracking involves advancing a medical instrument to a treatment site of a patient, the medical instrument comprising an imaging device, generating a first image of at least a portion of the treatment site using the imaging device of the medical instrument when a distal end of the medical instrument is in a first position, generating a first silhouette of a first target anatomical feature represented in the first image, generating a second image of at least a portion of the treatment site using the imaging device of the medical instrument when the distal end of the medical instrument is in a second position, generating a second silhouette of a second target anatomical feature represented in the second image, and determining a target position at the treatment site based at least in part on the first silhouette and the second silhouette.
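The final claimed step — determining a target position from the two silhouettes — is sketched below using the midpoint of the silhouette centroids. This is a stand-in assumption; a real system would combine the two views with the instrument's known poses, which the abstract does not detail.

```python
# Hypothetical target-position estimate from two silhouettes of the target
# anatomical feature, each given as a list of (x, y) contour points.
def centroid(silhouette):
    xs = [p[0] for p in silhouette]
    ys = [p[1] for p in silhouette]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def target_position(sil1, sil2):
    c1, c2 = centroid(sil1), centroid(sil2)
    return ((c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2)

sil_a = [(0, 0), (2, 0), (2, 2), (0, 2)]   # first silhouette, centroid (1, 1)
sil_b = [(2, 2), (4, 2), (4, 4), (2, 4)]   # second silhouette, centroid (3, 3)
print(target_position(sil_a, sil_b))
```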