G05B2219/40442

TERMINAL DEVICE
20240375268 · 2024-11-14

A terminal device includes a coordinate-system setting portion, a coordinate giving portion, a region specifying portion, and a point-group creating portion for avoidance. The coordinate-system setting portion sets a user coordinate system on the basis of a marker included in an image photographed by a photographing portion, the image including an industrial robot and the work space of the industrial robot. The coordinate giving portion gives coordinates of the user coordinate system to point-group data obtained by a distance measuring portion that measures distances to objects included in the image. The region specifying portion specifies a robot region on the user coordinate system corresponding to the industrial robot, on the basis of shape and size information corresponding to the type of the industrial robot and attitude information of the industrial robot. The point-group creating portion for avoidance creates point-group data for interference avoidance by removing the point-group data included in the robot region from the point-group data obtained by the distance measuring portion.
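The final removal step can be sketched in a few lines. Representing the robot region as an axis-aligned box is an illustrative simplification here; the patent derives the region from per-type shape, size, and attitude information.

```python
def remove_robot_points(points, robot_min, robot_max):
    """Drop points that fall inside the robot's region.

    points:    list of [x, y, z] measurements in the user coordinate system.
    robot_min: lower corner [x, y, z] of the robot region (illustrative box).
    robot_max: upper corner [x, y, z] of the robot region.
    Returns the interference-avoidance point cloud: points outside the region.
    """
    def inside(p):
        return all(lo <= c <= hi for c, lo, hi in zip(p, robot_min, robot_max))
    return [p for p in points if not inside(p)]

# Example: three measured points, two of which lie inside the robot box.
cloud = [[0.1, 0.1, 0.1], [2.0, 2.0, 2.0], [0.5, 0.5, 0.5]]
filtered = remove_robot_points(cloud, [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
print(filtered)  # [[2.0, 2.0, 2.0]]
```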

Determining a Virtual Representation of an Environment By Projecting Texture Patterns
20180093377 · 2018-04-05

Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. Based on the determined corresponding features, the computing device may then determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.

Determining a virtual representation of an environment by projecting texture patterns
09862093 · 2018-01-09

Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. Based on the determined corresponding features, the computing device may then determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
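Once corresponding features are found between the two viewpoints, each depth measurement follows from the standard rectified-stereo relation Z = f·B/d. The sketch below shows only that final triangulation step, with illustrative focal length and baseline values; the texture projection and feature matching themselves are not reproduced here.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Triangulate depth for one matched feature in two rectified views.

    The projected random texture gives even textureless surfaces matchable
    features; given a correspondence at pixel columns x_left and x_right,
    depth follows from Z = f * B / d (f in pixels, B in metres).
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("a valid match must have positive disparity")
    return focal_px * baseline_m / disparity

# A feature seen at column 320 in the left view and 300 in the right view,
# with an assumed 600 px focal length and 10 cm baseline:
z = depth_from_disparity(320.0, 300.0, focal_px=600.0, baseline_m=0.1)
print(z)  # 3.0 (metres)
```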

SYSTEMS AND METHODS FOR ROBOTIC BEHAVIOR AROUND MOVING BODIES

Systems and methods for detection of people are disclosed. In some exemplary implementations, a robot can have a plurality of sensor units. Each sensor unit can be configured to generate sensor data indicative of a portion of a moving body at a plurality of times. Based on at least the sensor data, the robot can determine that the moving body is a person by at least detecting the motion of the moving body and determining that the moving body has characteristics of a person. The robot can then perform an action based at least in part on the determination that the moving body is a person.
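The two-stage check the abstract describes can be sketched as a toy classifier: first confirm the body is actually moving, then test a person-like characteristic. The thresholds and the single height feature are illustrative assumptions, not the patented method.

```python
def is_person(track, min_speed=0.2, height_range=(1.2, 2.1)):
    """Classify a tracked moving body as a person.

    track: dict with 'speed' (m/s, from sensor data over multiple times)
           and 'height' (m); both fields are illustrative stand-ins for
           the characteristics a real sensor suite would extract.
    """
    moving = track["speed"] >= min_speed          # motion detected?
    person_like = height_range[0] <= track["height"] <= height_range[1]
    return moving and person_like

print(is_person({"speed": 0.8, "height": 1.7}))  # True
print(is_person({"speed": 0.0, "height": 1.7}))  # False (stationary)
```

A robot would then branch on this result, for example slowing down or yielding right of way only when the moving body is classified as a person.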

GENERATING A GRASP POSE FOR GRASPING OF AN OBJECT BY A GRASPING END EFFECTOR OF A ROBOT
20170326728 · 2017-11-16

Generating a grasp pose for grasping of an object by an end effector of a robot. An image that captures at least a portion of the object is provided to a user via a user interface output device of a computing device. The user may select one or more pixels in the image via a user interface input device of the computing device. The selected pixel(s) are utilized to select one or more particular 3D points that correspond to a surface of the object in the robot's environment. A grasp pose is determined based on the particular 3D points. For example, a local plane may be fit based on the particular 3D point(s) and a grasp pose determined based on a normal of the local plane. Control commands can be provided to cause the grasping end effector to be adjusted to the grasp pose, after which a grasp is attempted.
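The local-plane example in the abstract can be illustrated with the simplest case: a plane through three selected surface points, whose normal gives the approach axis. In practice more points and a least-squares fit would be used; all names and the centroid-based pose here are illustrative.

```python
def plane_normal(p0, p1, p2):
    """Unit normal of the local plane through three 3D surface points."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],      # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    mag = sum(c * c for c in n) ** 0.5
    return [c / mag for c in n]

def grasp_approach(p0, p1, p2):
    """Grasp position at the points' centroid, approach direction opposite
    the surface normal (so the end effector moves toward the surface)."""
    n = plane_normal(p0, p1, p2)
    centroid = [(p0[i] + p1[i] + p2[i]) / 3 for i in range(3)]
    return centroid, [-c for c in n]

c, approach = grasp_approach([0, 0, 0], [1, 0, 0], [0, 1, 0])
print(approach)  # approach along -z for this flat horizontal patch
```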

Methods and systems for recognizing machine-readable information on three-dimensional objects
09707682 · 2017-07-18

Methods and systems for recognizing machine-readable information on three-dimensional (3D) objects are described. A robotic manipulator may move at least one physical object through a designated area in space. As the at least one physical object is being moved through the designated area, one or more optical sensors may determine a location of a machine-readable code on the at least one physical object and, based on the determined location, scan the machine-readable code so as to determine information associated with the at least one physical object encoded in the machine-readable code. Based on the information associated with the at least one physical object, a computing device may then determine a respective location in a physical environment of the robotic manipulator at which to place the at least one physical object. The robotic manipulator may then be directed to place the at least one physical object at the respective location.
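The last step, mapping decoded information to a placement location, can be sketched as a table lookup. The payload format, the routing table, and the fallback station are all illustrative assumptions.

```python
def place_for(code_payload, routing):
    """Choose a drop location from a scanned machine-readable code.

    code_payload: decoded string, assumed here to lead with an item id.
    routing:      dict mapping item ids to locations in the environment.
    Unrecognized items go to an assumed manual-review station.
    """
    item_id = code_payload.split(":")[0]
    return routing.get(item_id, "manual_review_station")

routing = {"SKU123": "pallet_7", "SKU999": "conveyor_2"}
print(place_for("SKU123:lot42", routing))  # pallet_7
print(place_for("UNKNOWN:x", routing))     # manual_review_station
```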

Marsupial Robotic System

The embodiments relate to a distributed marsupial robotic system. The system includes a parent component having a sensor suite to obtain and process environment data via a parent pattern classification algorithm, and one or more child components each having a sensor suite to obtain and process environment data via a child pattern classification algorithm. Each sensor suite includes one or more sensor devices in communication with a processing unit and memory. Each child component is configured to dock to the parent component, and to separate from the parent component in response to a deployment signal. Each child component obtains environment data during separation. The parent component is configured to construct a map of the environment by receiving and integrating the data obtained by each child component.

Generating a grasp pose for grasping of an object by a grasping end effector of a robot
09687983 · 2017-06-27

Generating a grasp pose for grasping of an object by an end effector of a robot. An image that captures at least a portion of the object is provided to a user via a user interface output device of a computing device. The user may select one or more pixels in the image via a user interface input device of the computing device. The selected pixel(s) are utilized to select one or more particular 3D points that correspond to a surface of the object in the robot's environment. A grasp pose is determined based on the particular 3D points. For example, a local plane may be fit based on the particular 3D point(s) and a grasp pose determined based on a normal of the local plane. Control commands can be provided to cause the grasping end effector to be adjusted to the grasp pose, after which a grasp is attempted.

Detection and reconstruction of an environment to facilitate robotic interaction with the environment

Methods and systems for detecting and reconstructing environments to facilitate robotic interaction with such environments are described. An example method may involve determining a three-dimensional (3D) virtual environment representative of the physical environment of a robotic manipulator, including a plurality of 3D virtual objects corresponding to respective physical objects in the physical environment. The method may then involve determining two-dimensional (2D) images of the virtual environment, including 2D depth maps, and determining portions of the 2D images that correspond to a given one or more physical objects. Based on the portions and the 2D depth maps, the method may involve determining 3D models corresponding to the portions, selecting, based on the 3D models, a physical object from the given one or more physical objects, and providing an instruction to the robotic manipulator to move that object.
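Going from a 2D depth-map portion back to a 3D model starts with lifting the masked pixels into 3D points. A minimal pinhole-camera sketch follows; the intrinsics `fx, fy, cx, cy` and the tiny depth map are illustrative values, not data from the patent.

```python
def backproject(depth_map, fx, fy, cx, cy, mask):
    """Lift masked pixels of a 2D depth map into 3D points (pinhole model).

    depth_map: rows of depth values z (0 means no measurement).
    mask:      same shape; True where the pixel belongs to the object
               portion identified in the 2D image.
    """
    points = []
    for v, row in enumerate(depth_map):
        for u, z in enumerate(row):
            if mask[v][u] and z > 0:
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

depth = [[0.0, 2.0],
         [2.0, 0.0]]
mask = [[False, True],
        [True, False]]
pts = backproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5, mask=mask)
print(pts)  # [(1.0, -1.0, 2.0), (-1.0, 1.0, 2.0)]
```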

Continuous updating of plan for robotic object manipulation based on received sensor data

Example systems and methods allow for dynamic updating of a plan to move objects using a robotic device. One example method includes determining, by one or more processors, a virtual environment based on sensor data received from one or more sensors, the virtual environment representing a physical environment containing a plurality of physical objects. Based on the virtual environment, a plan is developed to cause a robotic manipulator to move one or more of the physical objects, and the robotic manipulator performs a first action according to the plan. Updated sensor data is then received from the one or more sensors, the virtual environment is modified based on the updated sensor data, and one or more modifications to the plan are determined based on the modified virtual environment. The robotic manipulator then performs a second action according to the modified plan.
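The sense, plan, act, re-sense loop described above can be sketched as a small driver. The `sensors`, `planner`, and `robot` callables are illustrative stand-ins for the real subsystems, not an API from the patent.

```python
def run_manipulation(sensors, planner, robot, steps=2):
    """Re-plan from fresh sensor data between every pair of actions.

    sensors():     returns current sensor data (the virtual environment).
    planner(env):  returns an ordered list of actions for that environment.
    robot(action): executes a single action on the physical environment.
    """
    env = sensors()                 # initial virtual environment
    plan = planner(env)
    executed = []
    for _ in range(steps):
        if not plan:
            break
        action = plan.pop(0)
        robot(action)               # perform the next action
        executed.append(action)
        env = sensors()             # updated sensor data after the action
        plan = planner(env)         # modify the plan for the new environment
    return executed

# Toy run: the "environment" is just a list of objects still to be moved.
world = ["box_a", "box_b"]
log = run_manipulation(
    sensors=lambda: list(world),
    planner=lambda env: [("move", obj) for obj in env],
    robot=lambda action: world.remove(action[1]),
)
print(log)  # [('move', 'box_a'), ('move', 'box_b')]
```

Because the plan is rebuilt after every action, objects that appear, vanish, or shift between actions are reflected in the next step automatically.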