Y10S901/06

Autonomous correction of alignment error in a master-slave robotic system
11779419 · 2023-10-10

In some embodiments, correcting an alignment error between an end effector of a tool associated with a slave and a master actuator associated with a master in a robotic system involves receiving, at the master, master actuator orientation signals (R.sub.MCURR) representing the orientation of the master actuator relative to a master reference frame, and generating end effector orientation signals (R.sub.EENEW) representing the end effector orientation relative to a slave reference frame. Control signals are produced based on the end effector orientation signals, and an enablement signal selectively enables the control signals to be transmitted from the master to the slave. In response to a transition of the enablement signal from an inactive state to an active state, master-slave misalignment signals (R.sub.Δ) are computed as a difference between the master actuator orientation signals (R.sub.MCURR) and the end effector orientation signals (R.sub.EENEW), and the master-slave misalignment signals (R.sub.Δ) are adjusted to reduce the alignment difference.
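
A minimal sketch of the misalignment step described above, assuming orientations are represented as rotation matrices. For brevity this reduces the problem to planar 2×2 rotations, and the "adjusting" step is an assumed simple angle-scaling toward zero; both simplifications are illustrative only:

```python
import math

def rot2(theta):
    """2x2 rotation matrix (a planar stand-in for a full 3D orientation)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def angle_of(R):
    """Recover the rotation angle of a 2x2 rotation matrix."""
    return math.atan2(R[1][0], R[0][0])

def misalignment(R_mcurr, R_eenew):
    """R_delta satisfies R_mcurr @ R_delta == R_eenew, i.e. it is the
    'difference' between master and end-effector orientations."""
    return matmul(transpose(R_mcurr), R_eenew)

def reduce_misalignment(R_delta, fraction):
    """Shrink the misalignment by scaling its angle toward zero."""
    return rot2(angle_of(R_delta) * (1.0 - fraction))
```

On an inactive-to-active transition of the enablement signal, the master would compute `misalignment(R_mcurr, R_eenew)` once and then apply `reduce_misalignment` over subsequent control cycles so the correction is gradual rather than a jump.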

General purpose robotics operating system with unmanned and autonomous vehicle extensions
11782442 · 2023-10-10

The present disclosure provides a general purpose operating system (GPROS) that shows particular usefulness in the robotics and automation fields. The operating system provides individual services and the combination and interconnections of such services using built-in service extensions, built-in completely configurable generic services, and ways to plug in additional service extensions to yield a comprehensive and cohesive framework for developing, configuring, assembling, constructing, deploying, and managing robotics and/or automation applications. The disclosure includes GPROS extensions and features directed to use as an autonomous vehicle operating system. The vehicle controlled by appropriate versions of the GPROS can include unmanned ground vehicle (UGV) applications such as a driverless or self-driving car. The vehicle can likewise or instead include an unmanned aerial vehicle (UAV) such as a helicopter or drone. In some cases, the vehicle can include an unmanned underwater vehicle (UUV), such as a submarine or other submersible.
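
The service-extension pattern the abstract describes can be sketched as a small registry that wires configured services together. All names below are hypothetical stand-ins, not the actual GPROS API:

```python
class ServiceRegistry:
    """Holds built-in and plugged-in service factories by name
    (illustrative sketch only; not the real GPROS interface)."""

    def __init__(self):
        self._factories = {}

    def register(self, name, factory):
        """Plug in a service extension under a configurable name."""
        self._factories[name] = factory

    def assemble(self, config):
        """Instantiate services and wire their dependencies from a config
        mapping: {service_name: {"factory": ..., "depends_on": [...]}}."""
        instances = {}

        def build(name):
            if name not in instances:
                entry = config[name]
                deps = [build(d) for d in entry.get("depends_on", [])]
                instances[name] = self._factories[entry["factory"]](*deps)
            return instances[name]

        for name in config:
            build(name)
        return instances
```

The same registry could then be configured for a UGV, UAV, or UUV by swapping the factory entries in the configuration rather than changing application code.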

Multi-panel graphical user interface for a robotic surgical system

A method for a robotic surgical system includes displaying a graphical user interface on a display to a user, wherein the graphical user interface includes a plurality of reconfigurable display panels, receiving a user input at one or more user input devices, wherein the user input indicates a selection of at least one software application relating to the robotic surgical system, and rendering content from the at least one selected software application among the plurality of reconfigurable display panels.
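
The panel-assignment step can be sketched as follows, with hypothetical names; the abstract does not specify how rendered content maps onto panels, so the zip-based distribution here is an assumption:

```python
class MultiPanelDisplay:
    """Hypothetical sketch: a fixed set of reconfigurable panels, each of
    which can show content rendered by a selected application."""

    def __init__(self, n_panels):
        self.panels = [None] * n_panels   # None = empty panel

    def show(self, app_name, render, panel_indices):
        """Render the selected app and distribute its content frames among
        the chosen panels; panels not listed are left untouched."""
        frames = render()
        for idx, frame in zip(panel_indices, frames):
            self.panels[idx] = (app_name, frame)
```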

Object pickup strategies for a robotic device

Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
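
The final selection step can be sketched as scoring each candidate grasp by its planned motion path. The abstract only says the choice is "based on the determined motion path"; scoring by total path length is one assumed concrete criterion:

```python
import math

def path_length(path):
    """Total length of a piecewise-linear gripper path (list of waypoints)."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def select_grasp_point(candidates, plan_path, dropoff):
    """Pick the grasp point whose planned path to the drop-off location is
    shortest. plan_path(grasp, dropoff) returns a list of waypoints."""
    return min(candidates, key=lambda g: path_length(plan_path(g, dropoff)))
```

With a trivial straight-line planner, `select_grasp_point` favors the candidate nearest the drop-off; a real planner would also account for reachability and collision avoidance.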

Surgical tray efficiency system and related methods
11389260 · 2022-07-19

A surgical tray efficiency system is described, comprising a vertical rack assembly for holding and displaying a plurality of surgical instrument trays, a sterile barrier covering the vertical rack assembly and including tray location identifiers, and a standardization software platform including a customizable interactive planogram. The planogram software helps operating room staff arrange the instrument trays on the vertical rack assembly according to a predetermined customizable location ID, and create, load, and access information related to the surgical procedure, trays, and instruments before, during, and after the surgery.
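
The planogram's tray-to-location mapping can be sketched as a small data structure; the class and field names are hypothetical illustrations, not the platform's actual schema:

```python
class Planogram:
    """Hypothetical sketch of the customizable planogram: maps rack
    location IDs to instrument trays plus procedure-related notes."""

    def __init__(self):
        self.slots = {}

    def assign(self, location_id, tray, info=None):
        """Place (or re-place) a tray at a customizable location ID."""
        self.slots[location_id] = {"tray": tray, "info": info or {}}

    def locate(self, tray):
        """Return the location ID where a given tray belongs, if any."""
        for location_id, entry in self.slots.items():
            if entry["tray"] == tray:
                return location_id
        return None
```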

AUTONOMOUS CORRECTION OF ALIGNMENT ERROR IN A MASTER-SLAVE ROBOTIC SYSTEM
20220071722 · 2022-03-10

In some embodiments, correcting an alignment error between an end effector of a tool associated with a slave and a master actuator associated with a master in a robotic system involves receiving, at the master, master actuator orientation signals (R.sub.MCURR) representing the orientation of the master actuator relative to a master reference frame, and generating end effector orientation signals (R.sub.EENEW) representing the end effector orientation relative to a slave reference frame. Control signals are produced based on the end effector orientation signals, and an enablement signal selectively enables the control signals to be transmitted from the master to the slave. In response to a transition of the enablement signal from an inactive state to an active state, master-slave misalignment signals (R.sub.Δ) are computed as a difference between the master actuator orientation signals (R.sub.MCURR) and the end effector orientation signals (R.sub.EENEW), and the master-slave misalignment signals (R.sub.Δ) are adjusted to reduce the alignment difference.

Determining a Virtual Representation of an Environment By Projecting Texture Patterns
20210187736 · 2021-06-24

Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. And based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
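
For rectified pinhole cameras, the depth step in this pipeline reduces to the standard disparity relation Z = f · B / d; the projected texture patterns (separated by wavelength) exist to make the feature correspondences findable even on textureless surfaces. A minimal sketch under that rectified-camera assumption:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole-stereo relation Z = f * B / d for one feature
    matched between the two viewpoints."""
    return focal_px * baseline_m / disparity_px

def depth_map(matches, focal_px, baseline_m):
    """matches: list of (x_left_px, x_right_px) pixel coordinates of
    corresponding features; returns one depth (in meters) per match."""
    return [depth_from_disparity(focal_px, baseline_m, xl - xr)
            for xl, xr in matches]
```

The resulting per-feature depths are the "depth measurements indicative of distances to at least one object" that populate the virtual representation of the environment.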

System and method for instructing a device
11014243 · 2021-05-25

A system and method of instructing a device is disclosed. The system includes a signal source for providing at least one visual signal where the at least one visual signal is substantially indicative of at least one activity to be performed by the device. A visual signal capturing element captures the at least one visual signal and communicates the at least one visual signal to the device where the device interprets the at least one visual signal and performs the activity autonomously and without requiring any additional signals or other information from the signal source.

Determining a virtual representation of an environment by projecting texture patterns
10967506 · 2021-04-06

Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. And based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.