Method and device for a computerized mechanical device

A method for training a computerized mechanical device, comprising: receiving data documenting actions of an actuator performing a task in a plurality of iterations; calculating, using the data, a neural network dataset used for performing the task; gathering in a plurality of reward iterations a plurality of scores given by an instructor to a plurality of states, each comprising at least one sensor value, while a robotic actuator performs the task according to the neural network; calculating, using the plurality of scores, a reward dataset used for computing a reward function; updating at least some of the neural network's plurality of parameters by receiving in each of a plurality of policy iterations a reward value computed by applying the reward function to another state comprising at least one sensor value, while the robotic actuator performs the task according to the neural network; and outputting the updated neural network.
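The claimed stages — fitting a reward function from instructor-scored states, then updating the policy from rewards on newly visited states — can be sketched as follows. This is a minimal illustration, not the patent's method: the nearest-neighbour reward lookup, the scalar policy parameter, and all names are assumptions.

```python
# Hypothetical sketch of the claimed training stages; all names and
# the learning rules are illustrative, not from the patent.

def fit_reward(scored_states):
    """Build a reward function from instructor-scored sensor states
    (here: 1-nearest-neighbour lookup over scalar sensor values)."""
    def reward(state):
        nearest = min(scored_states, key=lambda s: abs(s[0] - state))
        return nearest[1]
    return reward

def update_policy(policy_param, states, reward_fn, lr=0.1):
    """Nudge a scalar policy parameter toward high-reward states."""
    for s in states:
        policy_param += lr * reward_fn(s) * (s - policy_param)
    return policy_param

# Reward iterations: instructor scores observed states (state, score).
scored = [(0.0, -1.0), (0.5, 0.2), (1.0, 1.0)]
reward = fit_reward(scored)

# Policy iterations: rewards are computed for newly visited states.
param = update_policy(0.0, [0.9, 1.1, 0.95], reward)
```

In the patent's terms, `scored` plays the role of the reward dataset and the loop in `update_policy` stands in for the policy iterations that update the network's parameters.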

LEARNING ROBOTIC TASKS USING ONE OR MORE NEURAL NETWORKS

Various embodiments enable a robot, or other autonomous or semi-autonomous device or system, to receive data involving the performance of a task in the physical world. The data can be provided as input to a perception network to infer a set of percepts about the task, which can correspond to relationships between objects observed during the performance. The percepts can be provided as input to a plan generation network, which can infer a set of actions as part of a plan. Each action can correspond to one of the observed relationships. The plan can be reviewed and any corrections made, either manually or through another demonstration of the task. Once the plan is verified as correct, the plan (and any related data) can be provided as input to an execution network that can infer instructions to cause the robot, and/or another robot, to perform the task.
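The perception → plan-generation → execution pipeline described above can be sketched with three stub "networks". All function names, the relation vocabulary, and the instruction format are illustrative assumptions, not the patent's API.

```python
# Illustrative three-stage pipeline: perception -> plan -> execution.
# Each stub stands in for a trained network from the abstract.

def perception_net(observations):
    """Infer percepts: object relations seen in the demonstration."""
    return [("cube", "on", "table"), ("cube", "in", "gripper")]

def plan_net(percepts):
    """Map each observed relation to one action in a plan."""
    verbs = {"on": "place", "in": "grasp"}
    return [(verbs[rel], obj) for obj, rel, tgt in percepts]

def execution_net(plan):
    """Turn the (verified) plan into low-level robot instructions."""
    return [f"{verb}({obj})" for verb, obj in plan]

percepts = perception_net(["frame0", "frame1"])
plan = plan_net(percepts)
# ... human review / correction of `plan` would happen here ...
instructions = execution_net(plan)
```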

SYSTEM AND METHOD FOR LOCATION DETERMINATION AND ROBOT CONTROL
20220168899 · 2022-06-02

A control system and method locates a partially or fully occluded target object in an area of interest. The location of the occluded object may be determined using visual information from a vision sensor and RF-based location information. Determining the location of the target object in this manner may effectively allow the control system to “see through” obstructions that are occluding the object. Model-based and/or deep-learning techniques may then be employed to move a robot into range relative to the target object to perform a predetermined (e.g., grasping) operation. This operation may be performed while the object is still in the occluded state or after a decluttering operation has been performed to remove one or more obstructions that are occluding line-of-sight vision to the object.
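One simple way to combine a vision estimate with an RF estimate, of the kind the abstract alludes to, is inverse-variance weighting: under occlusion the visual estimate becomes unreliable (large variance) and the fused location leans on the RF measurement. The fusion rule and all names here are assumptions for illustration.

```python
# Hedged sketch: fuse an occlusion-degraded visual position estimate
# with an RF-based estimate by inverse-variance weighting.

def fuse(visual_xy, visual_var, rf_xy, rf_var):
    """Inverse-variance weighted average of two 2-D position estimates."""
    wv, wr = 1.0 / visual_var, 1.0 / rf_var
    return tuple((wv * v + wr * r) / (wv + wr)
                 for v, r in zip(visual_xy, rf_xy))

# Heavily occluded: visual variance is large, so the fused location
# sits close to the RF measurement.
fused = fuse(visual_xy=(0.0, 0.0), visual_var=4.0,
             rf_xy=(1.0, 1.0), rf_var=0.25)
```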

ANATOMICAL FEATURE TRACKING
20220296312 · 2022-09-22

Anatomical feature tracking involves advancing a medical instrument to a treatment site of a patient, the medical instrument comprising an imaging device, generating a first image of at least a portion of the treatment site using the imaging device of the medical instrument when a distal end of the medical instrument is in a first position, generating a first silhouette of a first target anatomical feature represented in the first image, generating a second image of at least a portion of the treatment site using the imaging device of the medical instrument when the distal end of the medical instrument is in a second position, generating a second silhouette of a second target anatomical feature represented in the second image, and determining a target position at the treatment site based at least in part on the first silhouette and the second silhouette.
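A toy version of the final step — deriving a target position from two silhouettes captured at two instrument positions — might take the midpoint of the silhouette centroids. The publication does not specify the computation at this level, so this is purely an illustrative sketch.

```python
# Illustrative only: the real silhouette-based computation is not
# specified in the abstract; a centroid midpoint stands in for it.

def centroid(silhouette):
    """Mean position of a silhouette given as a list of (x, y) points."""
    xs = [p[0] for p in silhouette]
    ys = [p[1] for p in silhouette]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def target_position(silhouette_a, silhouette_b):
    """Combine two silhouettes (from two scope positions) into one
    target position: here, the midpoint of their centroids."""
    ca, cb = centroid(silhouette_a), centroid(silhouette_b)
    return ((ca[0] + cb[0]) / 2, (ca[1] + cb[1]) / 2)

target = target_position([(0, 0), (2, 0), (1, 2)],
                         [(3, 1), (5, 1), (4, 3)])
```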

Mobile robot system, mobile robot, and method of controlling the mobile robot system

Disclosed is a mobile robot system including a server that creates and stores traveling information about a moving space, and a mobile robot that travels in the moving space. The mobile robot comprises a driving portion configured to move the mobile robot, a communication device configured to receive the traveling information from the server, and a controller configured to control the driving portion based on the traveling information received from the communication device. The server receives information about the moving space from at least one external robot and creates the traveling information based on that information.
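The division of roles claimed above can be sketched as two small classes: a server that aggregates moving-space reports from external robots and serves traveling information, and a robot whose controller drives along it. Class and method names are illustrative assumptions.

```python
# Minimal sketch of the claimed roles; names are assumptions.

class Server:
    def __init__(self):
        self.space_info = []

    def receive_space_info(self, info):
        """Collect moving-space reports from external robots."""
        self.space_info.append(info)

    def traveling_info(self):
        """Create traveling information from the collected reports."""
        return {"waypoints": [r["position"] for r in self.space_info]}

class MobileRobot:
    def __init__(self):
        self.path = []

    def drive(self, traveling_info):
        """Controller: move the driving portion along the waypoints
        received via the communication device."""
        self.path = list(traveling_info["waypoints"])

server = Server()
server.receive_space_info({"position": (0, 0)})   # from external robot A
server.receive_space_info({"position": (1, 2)})   # from external robot B
robot = MobileRobot()
robot.drive(server.traveling_info())
```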

ANATOMICAL FEATURE IDENTIFICATION AND TARGETING
20210196398 · 2021-07-01

Methods of positioning a surgical instrument can involve advancing a first medical instrument to a treatment site of a patient, the first medical instrument comprising a camera, recording a target position associated with a target anatomical feature at the treatment site, generating a first image of the treatment site using the camera of the first medical instrument, identifying the target anatomical feature in the first image using a pretrained neural network, and adjusting the target position based at least in part on a position of the identified target anatomical feature in the first image.
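The adjustment step — nudging a recorded target position toward the feature position found by a pretrained network — can be sketched as a simple blend. The `identify` stub stands in for the neural network, and the blend factor `alpha` is an illustrative assumption, not the patent's rule.

```python
# Hedged sketch of the claimed adjustment; `identify` is a stand-in
# for the pretrained network, and alpha is an assumed blend factor.

def identify(image):
    """Stand-in for the pretrained network: return the detected
    feature position in the image (here, a stored annotation)."""
    return image["feature_xy"]

def adjust_target(recorded_xy, detected_xy, alpha=0.5):
    """Move the recorded target part-way toward the detected position."""
    return tuple(r + alpha * (d - r)
                 for r, d in zip(recorded_xy, detected_xy))

recorded = (10.0, 10.0)                             # initial target
detected = identify({"feature_xy": (12.0, 8.0)})    # network output
adjusted = adjust_target(recorded, detected)
```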

METHOD AND DEVICE FOR ASCERTAINING CONTROL PARAMETERS IN A COMPUTER-ASSISTED MANNER FOR A FAVORABLE ACTION OF A TECHNICAL SYSTEM
20210122038 · 2021-04-29

A method and a device for ascertaining control parameters in a computer-assisted manner for handling a technical system are provided. A starting state and the surroundings of the technical system are detected using at least one sensor, and a physical simulation model of the technical system is generated from them. On the basis of the starting state, different combinations of handling steps of the technical system are simulated with respect to a specified target state using the simulation model, wherein control parameters of the technical system for carrying out the handling steps are varied. The simulation data are used to train a machine learning routine by an evaluation of each handling step, and the trained machine learning routine is used to ascertain an optimized combination of handling steps. The control parameters of the optimized combination of handling steps are output to control the technical system.
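The simulate-and-select loop can be sketched as below. Note the simplifications: a one-dimensional toy simulation replaces the physical model, and a plain exhaustive search stands in for the trained machine learning routine; all names are illustrative.

```python
# Illustrative sketch: simulate combinations of handling steps with
# varied control parameters and pick the combination whose simulated
# end state is closest to the target state. An exhaustive search
# stands in for the trained machine learning routine.
import itertools

def simulate(start, steps):
    """Toy physics model: each handling step adds its parameterised
    displacement to a scalar system state."""
    state = start
    for delta in steps:
        state += delta
    return state

def best_combination(start, target, candidate_steps, length=2):
    """Evaluate every step combination; the score of a combination is
    the distance of its simulated end state from the target state."""
    combos = itertools.product(candidate_steps, repeat=length)
    return min(combos, key=lambda c: abs(simulate(start, c) - target))

best = best_combination(start=0.0, target=3.0,
                        candidate_steps=[0.5, 1.0, 2.0])
```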
