Patent classifications
B25J13/00
Visual annotations in robot control interfaces
Methods, apparatus, systems, and computer-readable media are provided for visually annotating rendered multi-dimensional representations of robot environments. In various implementations, an entity may be identified that is present with a telepresence robot in an environment. A measure of potential interest of a user in the entity may be calculated based on a record of one or more interactions between the user and one or more computing devices. In some implementations, the one or more interactions may be for purposes other than directly operating the telepresence robot. In various implementations, a multi-dimensional representation of the environment may be rendered as part of a graphical user interface operable by the user to control the telepresence robot. In various implementations, a visual annotation may be selectively rendered within the multi-dimensional representation of the environment in association with the entity based on the measure of potential interest.
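The abstract above describes scoring a user's potential interest in an entity from a record of interactions, then selectively rendering an annotation. A minimal sketch of that decision logic follows; all names (`Interaction`, `score_interest`, `RENDER_THRESHOLD`) and the weighting scheme are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: score potential interest in an entity from a record
# of past user interactions, and gate annotation rendering on that score.
from dataclasses import dataclass

@dataclass
class Interaction:
    entity: str    # entity the interaction referred to (assumed field)
    weight: float  # assumed relevance weight of this interaction type

def score_interest(entity: str, history: list[Interaction]) -> float:
    """Sum the weights of past interactions that mention the entity."""
    return sum(i.weight for i in history if i.entity == entity)

RENDER_THRESHOLD = 1.0  # assumed cutoff for showing the annotation

def should_annotate(entity: str, history: list[Interaction]) -> bool:
    """Selectively render: annotate only entities above the threshold."""
    return score_interest(entity, history) >= RENDER_THRESHOLD

history = [Interaction("whiteboard", 0.6), Interaction("whiteboard", 0.7),
           Interaction("printer", 0.3)]
print(should_annotate("whiteboard", history))  # True
print(should_annotate("printer", history))     # False
```

In practice the weights would come from the kinds of interactions the abstract mentions (ones not directly tied to operating the robot), but the patent does not specify a scoring function.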
Method of hub communication, processing, display, and cloud analytics
A method of displaying an operational parameter of a surgical system is disclosed. The method includes receiving, by a cloud computing system of the surgical system, first usage data from a first subset of surgical hubs of the surgical system; receiving, by the cloud computing system, second usage data from a second subset of surgical hubs of the surgical system; analyzing, by the cloud computing system, the first and the second usage data to correlate the first and the second usage data with surgical outcome data; determining, by the cloud computing system, based on the correlation, a recommended medical resource usage configuration; and displaying, on respective displays of the first and the second subsets of surgical hubs, indications of the recommended medical resource usage configuration.
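The analyze-then-recommend step can be sketched in a few lines. This toy version, with made-up data and a made-up `recommend_configuration` function, assumes the cloud system compares mean outcome scores across hub subsets and recommends the usage configuration of the better-performing subset; the patent does not specify the correlation method.

```python
# Illustrative sketch: correlate per-subset usage data with outcome scores
# and recommend the configuration of the best-performing subset of hubs.
from statistics import mean

def recommend_configuration(groups: dict) -> str:
    """Pick the hub subset whose usage is associated with the best mean outcome."""
    return max(groups, key=lambda g: mean(groups[g]["outcomes"]))

# Made-up usage and outcome data for two subsets of surgical hubs.
groups = {
    "subset_A": {"usage": {"firings": 4}, "outcomes": [0.82, 0.85, 0.80]},
    "subset_B": {"usage": {"firings": 6}, "outcomes": [0.91, 0.89, 0.93]},
}
best = recommend_configuration(groups)
print(best, groups[best]["usage"])  # subset_B {'firings': 6}
```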
Robot system and portable teaching device
A robot system includes a mobile robot configured to move, a portable teaching device including a display section configured to display information, the portable teaching device teaching the mobile robot, a first detecting section configured to detect a present position of the portable teaching device, a second detecting section configured to detect a present position of the mobile robot, and a display control section configured to cause, based on a detection result of the first detecting section and a detection result of the second detecting section, the display section to display the present position of the portable teaching device and the present position of the mobile robot.
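The display-control section described above takes two detection results and shows both present positions. A minimal sketch, assuming planar coordinates and a text rendering (the patent specifies neither):

```python
# Hedged sketch of the display-control logic: format the detected present
# positions of the teaching device and the mobile robot for the display
# section. Coordinate system and output format are assumptions.
def render_positions(pendant_pos: tuple, robot_pos: tuple) -> str:
    """Combine both detection results into one display string."""
    return (f"pendant at ({pendant_pos[0]:.1f}, {pendant_pos[1]:.1f}) | "
            f"robot at ({robot_pos[0]:.1f}, {robot_pos[1]:.1f})")

print(render_positions((1.0, 2.5), (4.0, 0.5)))
# pendant at (1.0, 2.5) | robot at (4.0, 0.5)
```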
Position accuracy robotic printing system
A system and method for improving the position accuracy of a mobile robot are disclosed. A retroreflective device is mounted to the mobile robot and used by an absolute positioning device, which tracks the position of the mobile robot with a laser beam. The mobile robot receives position measurements. Various optimizations may be performed to support operating the mobile robot over a 360 degree range of azimuthal headings.
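Two ingredients such a system plausibly needs are heading wraparound over the full azimuthal range and fusion of the absolute tracker fix with dead-reckoned position. The sketch below illustrates both under stated assumptions; the blend factor and function names are not from the patent.

```python
# Illustrative helpers: wrap headings into [0, 360) and blend an absolute
# laser-tracker position fix with a dead-reckoned estimate.
def wrap_heading(deg: float) -> float:
    """Normalize an azimuthal heading to [0, 360) degrees."""
    return deg % 360.0

def fuse(dead_reckoned: tuple, tracker_fix: tuple, alpha: float = 0.8) -> tuple:
    """Complementary blend: weight the absolute fix by alpha (assumed value)."""
    x = alpha * tracker_fix[0] + (1 - alpha) * dead_reckoned[0]
    y = alpha * tracker_fix[1] + (1 - alpha) * dead_reckoned[1]
    return (x, y)

print(wrap_heading(370.0))          # 10.0
print(fuse((1.0, 1.0), (1.2, 0.8)))
```

A real implementation would more likely use a Kalman filter with measurement covariances, but the constant-gain blend shows the shape of the correction.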
CONTROL DEVICE AND CONVEYING SYSTEM
The present invention appropriately evaluates communication quality and optimizes the allocation of conveyance instructions by a control device. The control device (10) is provided with: a communication quality determination unit (12) that determines the quality of wireless communications on the basis of the communication state of a telegram from a conveyance robot (20); and an instruction state determination unit (13) that determines, according to the quality of the wireless communications, an instruction state pertaining to whether to transmit an instruction for controlling the conveyance robot.
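The two units can be sketched as a quality estimate followed by a transmit decision. This minimal version assumes quality is the delivery ratio of recent telegrams and the threshold value is made up; the patent does not define either statistic.

```python
# Minimal sketch of the two determination units: estimate link quality from
# telegram delivery, then decide whether to transmit a control instruction.
def link_quality(delivered: int, sent: int) -> float:
    """Delivery ratio of recent telegrams as a crude quality measure (assumed)."""
    return delivered / sent if sent else 0.0

def may_transmit(delivered: int, sent: int, threshold: float = 0.9) -> bool:
    """Instruction-state decision: transmit only over a healthy link."""
    return link_quality(delivered, sent) >= threshold

print(may_transmit(95, 100))  # True
print(may_transmit(70, 100))  # False
```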
Autonomous navigation and collaboration of mobile robots in 5G/6G
5G and especially 6G will enable a multitude of applications for fixed-position and mobile wireless task-devices (“robots”). Most of these applications are based on the assumption that the robots know, or can determine, the locations and wireless identities of other robots in proximity, but this is an unsolved problem. Procedures are provided herein for an arbitrarily large number of fixed-position and mobile robots to autonomously identify each other, determine their locations with speed and precision substantially beyond that provided by navigation satellites, and then to collaborate in performing their tasks, while avoiding interference and collisions. Examples are provided in the fields of automated agriculture, remote oil-spill mitigation, autonomous fire-fighting, hospital management, construction site coordination, manufacturing (including fully autonomous manufacturing), major product warehousing, airport control, and emergency vehicle access.
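One ingredient the localization claim above relies on is solving a robot's position from measured ranges to neighbors at known positions. The sketch below shows noise-free 2-D trilateration by linearizing the range equations; the anchor layout and true position are made-up test data, and real procedures would handle noise and more anchors.

```python
# Illustrative 2-D trilateration: recover a position from exact ranges to
# three non-collinear anchors by subtracting circle equations.
import math

def trilaterate(a1, a2, a3, r1, r2, r3):
    """Position from ranges to three anchors (exact, noise-free case)."""
    (x1, y1), (x2, y2), (x3, y3) = a1, a2, a3
    # Subtracting the first circle equation from the others gives two
    # linear equations A*x + B*y = C and D*x + E*y = F.
    A = 2 * (x2 - x1); B = 2 * (y2 - y1)
    C = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D = 2 * (x3 - x1); E = 2 * (y3 - y1)
    F = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = A * E - B * D  # nonzero for non-collinear anchors
    return ((C * E - B * F) / det, (A * F - C * D) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
ranges = [math.dist(truth, a) for a in anchors]
print(trilaterate(*anchors, *ranges))  # ~(3.0, 4.0)
```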
Determining how to assemble a meal
In an embodiment, a method includes determining a given material to manipulate to achieve a goal state. The goal state can be one or more deformable or granular materials in a particular arrangement. The method further includes, for the given material, determining a respective outcome for each of a plurality of candidate actions to manipulate the given material. The determining can be performed with a physics-based model, in one embodiment. The method can further include determining a given action of the candidate actions, where the outcome of the given action is within at least one tolerance of the goal state. The method further includes, based on a selected action of the given actions, generating a first motion plan for the selected action.
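The selection step can be sketched with a stand-in for the physics-based model. Here `predict_outcome` is a made-up surrogate (amount scooped as a fraction of available material), and the tolerance value is an assumption; the patent leaves the model unspecified.

```python
# Sketch of candidate-action selection: predict each action's outcome with
# a (toy) model and return one whose outcome is within tolerance of the goal.
def predict_outcome(action: float, material_amount: float) -> float:
    """Toy surrogate for the physics model: amount moved by the action."""
    return material_amount * action  # action = fraction scooped (assumed)

def choose_action(candidates, material_amount, goal, tol=5.0):
    """Return the first candidate whose predicted outcome is within tol of goal."""
    for a in candidates:
        if abs(predict_outcome(a, material_amount) - goal) <= tol:
            return a
    return None  # no candidate reaches the goal state within tolerance

print(choose_action([0.1, 0.25, 0.5], material_amount=200.0, goal=50.0))
# 0.25 (predicts 50, exactly at the goal)
```

The selected action would then seed motion planning, as the abstract's final step describes.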
SYSTEM AND METHOD FOR SEMANTIC PROCESSING OF NATURAL LANGUAGE COMMANDS
A system, method and computer-readable storage devices are disclosed for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
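The tag-then-parse flow can be illustrated with a deterministic toy version. The lexicon, the flat command frame standing in for the semantic tree, and all names below are assumptions; the patented system uses statistical taggers and parsers with spatial re-ranking, which this sketch omits.

```python
# Toy Tag & Parse: assign a semantic tag to each word, then group the tag
# sequence into a shallow command frame (stand-in for a semantic tree).
LEXICON = {"move": "ACTION", "the": "DET", "red": "COLOR",
           "block": "OBJECT", "left": "DIRECTION"}  # assumed tag set

def tag(words):
    """Stage 1: semantic tag per word (statistical tagger in the patent)."""
    return [(w, LEXICON.get(w, "UNK")) for w in words]

def parse(tagged):
    """Stage 2: collapse the tag sequence into an action/object/direction frame."""
    frame = {}
    for word, t in tagged:
        if t == "ACTION":
            frame["action"] = word
        elif t in ("COLOR", "OBJECT"):
            frame.setdefault("object", []).append(word)
        elif t == "DIRECTION":
            frame["direction"] = word
    return frame

frame = parse(tag("move the red block left".split()))
print(frame)  # {'action': 'move', 'object': ['red', 'block'], 'direction': 'left'}
```

A final stage would serialize the frame into an RCL command string, a format the abstract names but does not define here.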
ROBOT SERVICE METHOD AND ROBOT APPARATUS USING SOCIAL NETWORK SERVICE
The present invention relates to a robot service system and a robot apparatus using a social network service, and comprises: (a) a step in which a terminal device is connected to a robot apparatus by executing a social network service program and displays a service screen showing an image captured by the robot apparatus; (b) a step in which the terminal device transmits a robot control command inputted on the service screen to the robot apparatus; (c) a step in which the robot apparatus performs an operation according to the robot control command and transmits operation performance data to the terminal device; and (d) a step in which the terminal device displays the operation performance data transmitted from the robot apparatus on the service screen.
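Steps (b) through (d) form a command/report round trip, sketched below with a direct function call standing in for the social-network channel. The message fields and values are assumptions; the patent does not define a message format.

```python
# Minimal sketch of steps (b)-(d): the terminal sends a control command,
# the robot performs it and returns performance data, and the terminal
# renders that data on the service screen.
def robot_handle(command: dict) -> dict:
    """Robot side: perform the operation and report performance data (assumed fields)."""
    return {"command": command["op"], "status": "done", "battery": 87}

def terminal_send(command: dict) -> str:
    """Terminal side: transmit, receive the report, format it for the screen."""
    report = robot_handle(command)  # stands in for the SNS channel
    return f"{report['command']}: {report['status']} (battery {report['battery']}%)"

print(terminal_send({"op": "turn_left"}))  # turn_left: done (battery 87%)
```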
SURGICAL MANIPULATOR AND METHOD OF OPERATING THE SAME USING VIRTUAL RIGID BODY MODELING
A surgical manipulator and method of operating the same. The surgical manipulator includes an arm with a plurality of links and joints, wherein an angle between adjacent links forms a joint angle. The arm includes a distal end configured to support a surgical instrument with an energy applicator. At least one controller is coupled to the arm and models the surgical instrument and the energy applicator as a virtual rigid body. The controller(s) determine a commanded pose for the surgical instrument and the energy applicator based on a summation of a plurality of forces and/or torques, wherein the plurality of forces and/or torques are selectively applied to the virtual rigid body to emulate orientation and movement of the surgical instrument and the energy applicator. The controller(s) determine commanded joint angles for the arm that place the surgical instrument and the energy applicator according to the commanded pose.
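The two stages the abstract describes, summing applied forces and torques on a virtual rigid body into a commanded pose, then solving for joint angles that realize it, can be reduced to a planar toy. Link lengths, the force-to-velocity gain, and the two-link arm are all assumptions; the actual manipulator has many links and a full 6-DOF pose.

```python
# Hedged planar sketch: (1) sum forces on a virtual rigid body to obtain a
# commanded position, (2) solve two-link inverse kinematics for the joint
# angles that place the tool tip there.
import math

def commanded_position(pos, forces, dt=0.1, gain=1.0):
    """Treat the net applied force as a velocity command on the virtual body."""
    fx = sum(f[0] for f in forces)
    fy = sum(f[1] for f in forces)
    return (pos[0] + gain * fx * dt, pos[1] + gain * fy * dt)

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Joint angles (elbow-down solution) placing the tool tip at (x, y)."""
    d2 = x * x + y * y
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    q2 = math.acos(max(-1.0, min(1.0, cos_q2)))  # clamp against rounding
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

target = commanded_position((1.0, 0.5), forces=[(2.0, 0.0), (0.0, 1.0)])
q1, q2 = two_link_ik(*target)
print(round(target[0], 2), round(target[1], 2))  # 1.2 0.6
```

Verifying the result by forward kinematics (summing the link vectors at `q1` and `q1 + q2`) recovers the commanded position, which is the property the controller relies on.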