
Virtual teach and repeat mobile manipulation system

A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
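The abstract's core step — mapping descriptors between a task image and a teaching image and deriving a relative transform — can be illustrated with a minimal numpy sketch. This is not the patent's implementation: it assumes 2-D keypoint correspondences are already matched, estimates a rigid transform via the standard Kabsch/Procrustes method, and re-parameterizes taught waypoints into the task frame. All function names are illustrative.

```python
import numpy as np

def estimate_relative_transform(teach_pts, task_pts):
    """Estimate the 2-D rigid transform (R, t) mapping teaching-image
    keypoints onto their matched task-image keypoints (Kabsch method)."""
    teach_pts = np.asarray(teach_pts, dtype=float)
    task_pts = np.asarray(task_pts, dtype=float)
    mu_a, mu_b = teach_pts.mean(axis=0), task_pts.mean(axis=0)
    H = (teach_pts - mu_a).T @ (task_pts - mu_b)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_b - R @ mu_a
    return R, t

def update_behavior_waypoints(waypoints, R, t):
    """Update the parameters of a taught behavior (here, its waypoints)
    by carrying them through the relative transform into the task frame."""
    return [tuple(R @ np.asarray(w, dtype=float) + t) for w in waypoints]
```

In a teach-and-repeat setting, the same matching-and-transform step would run each time the robot is repositioned, so the taught behavior generalizes to a shifted or rotated scene.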

Generating a model for an object encountered by a robot

Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object.
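Capturing an object "from multiple vantages" and fusing the captures into a model can be sketched very simply: if each view's camera pose is known, the per-view 3-D points can be transformed into a common frame and accumulated. This is an illustrative stand-in for the model generation the abstract describes, assuming known poses and pre-extracted points.

```python
import numpy as np

def accumulate_model(views):
    """Fuse per-view 3-D points into a single object model.

    Each view is (R, t, points): the camera pose (rotation R, translation t)
    and the points observed in that camera's frame."""
    cloud = []
    for R, t, pts in views:
        R = np.asarray(R, dtype=float)
        t = np.asarray(t, dtype=float)
        pts = np.asarray(pts, dtype=float)
        cloud.append(pts @ R.T + t)   # camera frame -> common world frame
    return np.vstack(cloud)
```

A real pipeline would also deduplicate overlapping points and refine the poses (e.g. with ICP), but the accumulation step above is the skeleton.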

System and method for augmenting a visual output from a robotic device
11694432 · 2023-07-04 ·

A method for visualizing data generated by a robotic device is presented. The method includes displaying an intended path of the robotic device in an environment. The method also includes displaying a first area in the environment identified as drivable for the robotic device. The method further includes receiving an input to identify a second area in the environment as drivable and transmitting the second area to the robotic device.

Method and system for performing image classification for object recognition

Systems and methods for classifying at least a portion of an image as being textured or textureless are presented. The system receives an image generated by an image capture device, wherein the image represents one or more objects in a field of view of the image capture device. The system generates one or more bitmaps based on at least one image portion of the image. The one or more bitmaps describe whether one or more features for feature detection are present in the at least one image portion, or describe whether one or more visual features for feature detection are present in the at least one image portion, or describe whether there is variation in intensity across the at least one image portion. The system determines whether to classify the at least one image portion as textured or textureless based on the one or more bitmaps.
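One of the bitmap variants the abstract names — a bitmap describing whether there is variation in intensity across an image portion — can be sketched with a per-cell standard-deviation test. The grid size, threshold, and decision rule below are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def variation_bitmap(patch, grid=4, std_thresh=5.0):
    """One bit per grid cell: 1 if the cell's intensity standard
    deviation exceeds std_thresh (the cell shows intensity variation)."""
    h, w = patch.shape
    bits = np.zeros((grid, grid), dtype=np.uint8)
    for i in range(grid):
        for j in range(grid):
            cell = patch[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            bits[i, j] = 1 if cell.std() > std_thresh else 0
    return bits

def classify_patch(patch, min_fraction=0.25):
    """Label an image portion 'textured' when enough cells vary."""
    bits = variation_bitmap(np.asarray(patch, dtype=float))
    return "textured" if bits.mean() >= min_fraction else "textureless"
```

A flat patch sets no bits and is classified textureless; a high-contrast patch sets most bits and is classified textured. Feature-presence bitmaps (the other variant the abstract names) would follow the same per-cell structure with a feature detector in place of the standard deviation.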

SYSTEMS AND METHODS FOR A VISION GUIDED END EFFECTOR

Systems and methods for picking an object from a plurality of objects are disclosed. An image of a scene containing the plurality of objects is obtained, and a segmentation map is generated for the objects in the scene. The shapes of the objects are determined based on the segmentation map. An end effector is adjusted in response to determining the shapes of the objects. Adjusting the end effector includes shaping the end effector according to at least one of the shapes of the objects. The plurality of objects is approached in response to the shaping of the end effector, and one of the plurality of objects is picked with the end effector.
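The segmentation-to-shape-to-adjustment chain can be sketched with a toy policy: derive a simple shape descriptor from one object's segmentation mask, then choose a gripper configuration from it. The descriptor, thresholds, and tool names are illustrative assumptions.

```python
import numpy as np

def shape_from_mask(mask, label):
    """Bounding-box extent and fill ratio for one segmented object."""
    ys, xs = np.nonzero(mask == label)
    h = int(ys.max() - ys.min() + 1)
    w = int(xs.max() - xs.min() + 1)
    fill = len(ys) / (h * w)   # how fully the object fills its box
    return {"height": h, "width": w, "fill": fill}

def configure_end_effector(shape):
    """Toy policy: open a parallel gripper just past the object's
    narrow side; treat a low fill ratio as an irregular shape better
    suited to a suction tool."""
    if shape["fill"] < 0.6:
        return {"tool": "suction", "aperture": None}
    return {"tool": "parallel",
            "aperture": min(shape["height"], shape["width"]) + 2}
```

The abstract's "shaping the end effector according to at least one of the shapes" corresponds to the second function: the gripper geometry is a function of the measured shape, computed before the approach.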

SYSTEMS, COMPUTER PROGRAM PRODUCTS, AND METHODS FOR BUILDING SIMULATED WORLDS
20220404835 · 2022-12-22 ·

Systems, computer program products, and methods for constructing models and simulations of real-world environments are described. A robot employs various sensors to collect data from its environment and provides this data to a tele-operation system. Any number of tele-artists may access the tele-operation system and use the robot sensor data to collaboratively construct a simulated scene representative of the robot's environment. The tele-artists may continue to update the simulation in real-time as the robot explores its environment and provides more sensor data. The robot may use the simulation in support of fundamental operations through its cognitive architecture, such as action planning and hypothesis generation.

An artificial intelligence controller of the robot may monitor the adaptations made to the simulation by the tele-artists in response to the sensor data in order to learn (e.g., via reinforcement learning) how to autonomously generate and update its own simulation based on its own sensor data.

PROCESSING APPARATUS, OPERATION METHOD OF PROCESSING APPARATUS, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM
20230090535 · 2023-03-23 ·

A processing apparatus includes an articulated robot having an arm distal-end portion to which a processing tool, configured with a cutting edge portion and a profiling portion, and a shape measurement unit are attached, and a processor. The processor, in a workpiece set state, measures a shape of the workpiece using the shape measurement unit to recognize a tilt of a profiled surface portion of the workpiece and a position of a process portion of the workpiece, generates processing-target-portion information based on the position of the process portion, generates processing-point information indicating a processing point, moves the arm distal-end portion to the processing point based on the processing-point information, and controls an orientation of the processing tool in accordance with the tilt of the profiled surface portion of the workpiece to perform the specified processing on the workpiece using the processing tool.

Robot motion planning accounting for object pose estimation accuracy

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for planning robotic movements to perform a given task while satisfying object pose estimation accuracy requirements. One of the methods includes generating a plurality of candidate measurement configurations for measuring an object to be manipulated by a robot; determining respective measurement accuracies for the plurality of candidate measurement configurations; determining a measurement accuracy landscape for the object including defining a high measurement accuracy region based on the respective measurement accuracies for the plurality of candidate measurement configurations; and generating a motion plan for manipulating the object in the robotic process that moves the robot, a sensor, or both, through the high measurement accuracy region when performing pose estimation for the object.
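The sequence the abstract describes — score candidate measurement configurations, define a high-accuracy region, then route the motion plan through it — can be sketched in a few functions. The inverse-distance accuracy model, the top-half quantile, and the single-waypoint "plan" are all stand-in assumptions for what would really be a sensor-noise model and a full motion planner.

```python
import math

def measurement_accuracy(config, obj_pos=(0.0, 0.0)):
    """Toy accuracy model: sensor poses closer to the object score
    higher (a stand-in for a real pose-estimation error model)."""
    return 1.0 / (1.0 + math.dist(config, obj_pos))

def high_accuracy_region(candidates):
    """Define the high-accuracy region as the best-scoring half of the
    candidate measurement configurations."""
    scored = sorted(candidates, key=measurement_accuracy, reverse=True)
    return scored[: max(1, len(scored) // 2)]

def plan_through_region(start, goal, region):
    """Insert the best viewpoint as an intermediate waypoint so the
    motion plan passes through the high-accuracy region."""
    best = max(region, key=measurement_accuracy)
    return [start, best, goal]
```

The key idea survives the simplification: accuracy is evaluated over the candidate configurations first, and the plan is constrained to pass through the region where pose estimation is expected to be most reliable.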

Teaching Method
20230078238 · 2023-03-16 ·

A teaching method for generating an operation program of a robot based on an operation in which an operator sequentially moves a plurality of objects to an arrangement region and arranges the objects so as to form a target arrangement pattern, the teaching method including an imaging step of imaging the object moved to the arrangement region in the operation, a recognizing step of recognizing a position of the object imaged in the imaging step, an estimating step of estimating candidate arrangement patterns based on the position of the object recognized in the recognizing step, and a display step of displaying the candidate arrangement patterns estimated in the estimating step.
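The estimating step — narrowing candidate arrangement patterns as object positions are recognized one by one — can be sketched as filtering pattern templates against the positions observed so far. This assumes, purely for illustration, that objects are placed in each pattern's slot order and that patterns are given as named lists of 2-D slot positions.

```python
def consistent_patterns(observed, patterns, tol=0.5):
    """Return the names of arrangement patterns whose first
    len(observed) slots match the observed positions within tol."""
    def close(a, b):
        return abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol

    candidates = []
    for name, slots in patterns.items():
        if len(observed) <= len(slots) and all(
                close(o, s) for o, s in zip(observed, slots)):
            candidates.append(name)
    return candidates
```

After the first placement several patterns may remain plausible; each further recognized position prunes the candidate list, which is what the display step would show the operator.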

3D PRINTED OBJECT CLEANING

In one example in accordance with the present disclosure, a system is described. The system includes a reader to extract cleaning instructions associated with a three-dimensional (3D) printed object. The cleaning instructions include a termination condition to indicate when object cleaning is complete. The system also includes a controller to instruct at least one cleaning device to clean the 3D printed object based on the cleaning instructions. A measurement system of the system determines when the termination condition is met.
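The controller's role — driving the cleaning device until the termination condition extracted from the instructions is met, as judged by the measurement system — has the shape of a simple feedback loop. The instruction fields, the residual-measure semantics, and the cycle cap below are illustrative assumptions.

```python
def clean_until_done(instructions, measure, actuate, max_cycles=100):
    """Run cleaning cycles until the measured value satisfies the
    termination condition carried in the cleaning instructions.

    `measure` plays the measurement system; `actuate` plays the
    cleaning device; both are injected callables in this sketch."""
    target = instructions["termination"]  # e.g. residual-powder fraction
    for cycle in range(1, max_cycles + 1):
        actuate(instructions["device"], instructions["intensity"])
        if measure() <= target:
            return cycle   # termination condition met
    raise RuntimeError("termination condition not reached")
```

Reading the termination condition out of the object-associated instructions, rather than hard-coding it in the controller, is what lets the same cleaning system serve 3D printed objects with different cleaning requirements.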