Y10S901/47

Mounting a sensor module to an unmanned ground vehicle

An unmanned ground vehicle includes a main body, a drive system supported by the main body, a manipulator arm pivotally coupled to the main body, and a sensor module. The drive system includes right and left driven track assemblies mounted on right and left sides of the main body. The manipulator arm includes a first link coupled to the main body, an elbow coupled to the first link, and a second link coupled to the elbow. The elbow is configured to rotate independently of the first and second links. The sensor module is mounted on the elbow.

Method for automatically removing obstructions from robotic floor-cleaning devices
11478118 · 2022-10-25

Some embodiments include a robot, including: a plurality of sensors; at least one encoder; a processor; a tangible, non-transitory, machine-readable medium storing instructions that when executed by the processor effectuate operations including: measuring, with the at least one encoder, wheel rotation of at least one wheel; capturing, with an image sensor, images of an environment as the robot moves within the environment; identifying, with the processor, at least one characteristic of at least one object captured in the images of the environment; determining, with the processor, an object type of the at least one object based on characteristics of different types of objects stored in an object database; and instructing, with the processor, the robot to execute at least one action based on at least one of: the object type of the at least one object and the measured wheel rotation of the at least one wheel.
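The classify-then-act flow this abstract describes can be sketched in Python. The database contents, type names, action strings, and function names below are illustrative assumptions, not taken from the patent:

```python
# Maps object types to sets of identifying characteristics (illustrative).
OBJECT_DATABASE = {
    "cord": {"thin", "flexible", "elongated"},
    "sock": {"soft", "fabric", "small"},
    "furniture_leg": {"rigid", "vertical", "fixed"},
}

def classify(characteristics):
    """Return the object type whose stored characteristics overlap most."""
    best_type, best_score = "unknown", 0
    for obj_type, stored in OBJECT_DATABASE.items():
        score = len(stored & set(characteristics))
        if score > best_score:
            best_type, best_score = obj_type, score
    return best_type

def choose_action(obj_type, wheel_rotation_delta):
    """Pick an action from the object type and the encoder reading."""
    if obj_type in ("cord", "sock"):
        return "avoid"              # risk of entanglement in the brush
    if wheel_rotation_delta == 0:
        return "reverse_and_turn"   # wheels commanded but not turning: stuck
    return "continue"
```

For example, an object seen as thin and flexible classifies as a cord and is avoided, while zero measured wheel rotation near a rigid object triggers an escape maneuver.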

ROBOT NAVIGATION USING 2D AND 3D PATH PLANNING
20230123298 · 2023-04-20

Methods, systems, and apparatus, including computer-readable storage devices, for robot navigation using 2D and 3D path planning. In the disclosed method, a robot accesses map data indicating a two-dimensional layout of objects in a space and evaluates candidate paths for the robot to traverse. In response to determining that the candidate paths do not include a collision-free path across the space for a two-dimensional profile of the robot, the robot evaluates a three-dimensional shape of the robot with respect to a three-dimensional shape of an object in the space. Based on the evaluation of the three-dimensional shapes, the robot determines a collision-free path to traverse through the space.
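The 2D-first, 3D-fallback strategy can be sketched on a grid. The occupancy representation, the clearance model (an overhanging object the robot may fit under), and all names are assumptions for illustration:

```python
def path_clear_2d(path_cells, occupied_2d):
    """A candidate path is clear in 2D if it crosses no occupied cell."""
    return not any(cell in occupied_2d for cell in path_cells)

def plan(candidate_paths, occupied_2d, clearance_under, robot_height):
    """Try 2D planning first; fall back to a 3D shape check."""
    # First pass: pure 2D planning over the robot's footprint.
    for path in candidate_paths:
        if path_clear_2d(path, occupied_2d):
            return path
    # Fallback: re-evaluate blocked cells in 3D. A 2D-occupied cell is
    # still traversable if the robot fits under the object above it.
    for path in candidate_paths:
        if all(cell not in occupied_2d
               or robot_height < clearance_under.get(cell, 0.0)
               for cell in path):
            return path
    return None  # no collision-free path found
```

Under this sketch, a table blocking every 2D route stops a tall robot but not a short one that clears its underside, matching the abstract's fallback to 3D evaluation.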

Social behavior rules for a medical telepresence robot

Devices, systems, and methods for social behavior of a telepresence robot are disclosed herein. A telepresence robot may include a drive system, a control system, an object detection system, and a social behaviors component. The drive system is configured to move the telepresence robot. The control system is configured to control the drive system to drive the telepresence robot around a work area. The object detection system is configured to detect a human in proximity to the telepresence robot. The social behaviors component is configured to provide instructions to the control system to cause the telepresence robot to operate according to a first set of rules when a presence of one or more humans is not detected and operate according to a second set of rules when the presence of one or more humans is detected.
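The two-rule-set switching behavior reduces to a simple selection on the detection result; the rule contents below (speed limit, motion announcements) are hypothetical examples of what such rule sets might contain:

```python
# Illustrative rule sets; the patent does not specify their contents.
NO_HUMAN_RULES = {"max_speed": 1.5, "announce_motion": False}
HUMAN_PRESENT_RULES = {"max_speed": 0.5, "announce_motion": True}

def active_rules(human_detected: bool) -> dict:
    """Return the rule set the control system should follow."""
    return HUMAN_PRESENT_RULES if human_detected else NO_HUMAN_RULES
```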

Deformable sensors and methods for detecting pose and force against an object

Systems and methods for detecting pose and force against an object are provided. A method includes receiving a signal from a deformable sensor comprising data from a deformation region in a deformable membrane resulting from contact with the object, utilizing an internal sensor disposed within an enclosure and having a field of view directed through a medium and toward a bottom surface of the deformable membrane. The method also determines a pose of the object based on the deformation region of the deformable membrane, and determines an amount of force applied between the deformable membrane and the object based on the deformation region of the deformable membrane.
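One plausible reading of the force step, assuming the internal sensor returns a depth map of the membrane: deformation is the decrease from the rest depth, and force is approximated with a linear-spring model. The spring assumption and the stiffness value are simplifications not stated in the abstract:

```python
def deformation_region(rest_depth, measured_depth, threshold=0.001):
    """Per-point deformation (m) where it exceeds a noise threshold."""
    return [r - m for r, m in zip(rest_depth, measured_depth)
            if r - m > threshold]

def estimated_force(rest_depth, measured_depth, stiffness=200.0):
    """Approximate contact force (N) as stiffness times total deformation."""
    return stiffness * sum(deformation_region(rest_depth, measured_depth))
```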

User-assisted robotic control systems

Exemplary embodiments relate to user-assisted robotic control systems, user interfaces for remote control of robotic systems, vision systems in robotic control systems, and modular grippers for use by robotic systems. The systems, methods, apparatuses and computer-readable media instructions described interact with and control robotic systems, in particular pick and place systems using soft robotic actuators to grasp, move and release target objects.

DETERMINATION OF RELATIVE POSITION OF AN OBJECT
20230162385 · 2023-05-25

This application describes a method of determining a position and orientation of an object having a plurality of fiducials attached thereto. The method includes the steps of forming images of the plurality of fiducials on a sensitive surface of a sensor and determining a 2D location of each of the images of the plurality of fiducials on the sensitive surface. Information comprising 3D positions of the plurality of fiducials in a coordinate system of the object is then retrieved and each of the 2D locations of the plurality of images is associated with the 3D position of the same fiducial. A position and orientation of the object with respect to the sensitive surface of the sensor is then determined based in part on the 2D locations of the images and the 3D positions of the fiducials.
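The association step, pairing each 2D image location with a 3D fiducial, can be sketched with a pinhole projection under a candidate pose; solving for the pose itself would typically use a PnP-style algorithm, which is omitted here. The focal length, coordinate conventions, and all names are illustrative assumptions:

```python
import math

def project(point3d, focal=500.0):
    """Pinhole projection of a camera-frame 3D point onto the sensor plane."""
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

def associate(detections_2d, fiducials_3d, focal=500.0):
    """Pair each 2D detection with the nearest projected 3D fiducial ID."""
    pairs = {}
    for det in detections_2d:
        best_id, best_dist = None, float("inf")
        for fid, p3d in fiducials_3d.items():
            u, v = project(p3d, focal)
            dist = math.hypot(det[0] - u, det[1] - v)
            if dist < best_dist:
                best_id, best_dist = fid, dist
        pairs[det] = best_id
    return pairs
```

With these 2D–3D correspondences established, the pose of the object relative to the sensor can be refined by minimizing reprojection error.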

Systems and methods for establishing virtual constraint boundaries

Systems and methods are disclosed involving an instrument, an instrument tracking device for tracking movement of the instrument, a first boundary tracking device for tracking movement of a first virtual boundary associated with an anatomy of a patient to be treated by the instrument, and a second boundary tracking device for tracking movement of a second virtual boundary associated with a periphery of an opening in the patient to be avoided by the instrument. One or more controllers receive information associated with the tracking devices including positions of the instrument relative to the first and second virtual boundaries, detect movement of the first and second virtual boundaries relative to one another, and generate a response upon detecting movement of the first and second virtual boundaries relative to one another.
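The controller's detect-and-respond logic can be sketched as a threshold on the relative displacement of the two tracked boundaries; the tolerance value and response names are illustrative assumptions:

```python
def boundaries_moved(prev_offset, curr_offset, tolerance=0.002):
    """True if the boundaries' relative offset changed beyond tolerance (m)."""
    dx = curr_offset[0] - prev_offset[0]
    dy = curr_offset[1] - prev_offset[1]
    return (dx * dx + dy * dy) ** 0.5 > tolerance

def controller_response(prev_offset, curr_offset):
    """Generate a response when relative boundary motion is detected."""
    if boundaries_moved(prev_offset, curr_offset):
        return "halt_and_update_boundaries"
    return "continue_planned_path"
```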

Method for monitoring growth of plants and generating a plant grow schedule

One variation of a method for monitoring growth of plants within a facility includes: aggregating global ambient data recorded by a suite of fixed sensors, arranged proximal to a grow area within the facility, at a first frequency during a grow period; extracting interim outcomes of a set of plants, occupying a module in the grow area, from module-level images recorded by a mover at a second frequency less than the first frequency while interfacing with the module during the grow period; dispatching the mover to autonomously deliver the module to a transfer station; extracting interim outcomes of the set of plants from plant-level images recorded by the transfer station while sequentially transferring plants out of the module at the conclusion of the grow period; and deriving relationships between ambient conditions, interim outcomes, and final outcomes from a corpus of plant records associated with plants grown in the facility.
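As a toy illustration of the final step, deriving relationships between ambient conditions and final outcomes, one could correlate mean grow-period temperature with final yield across plant records; the record fields and the choice of Pearson correlation are assumptions for illustration:

```python
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def derive_temperature_yield_relation(plant_records):
    """plant_records: [{'temps': [...], 'final_yield': float}, ...]"""
    mean_temps = [mean(r["temps"]) for r in plant_records]
    yields = [r["final_yield"] for r in plant_records]
    return pearson(mean_temps, yields)
```

A coefficient near +1 or -1 would suggest a strong linear relationship between the aggregated ambient condition and the outcome; a production system would of course use richer models over the full corpus of plant records.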