Patent classifications
G05B2219/40543
System and Method for Detecting Abnormal Particles
The present disclosure provides a system and method for detecting abnormal particles. The system includes a tray used to hold and display multiple particles, a manipulating device used to manipulate the tray and the particles thereon, an imaging element capable of capturing an image of the tray, and an image processor capable of analyzing the captured images to determine the quantity, shape, and size of the particles displayed on the tray. The image processor comprises an image data processor, a memory, and an output command processor.
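A minimal sketch of the counting-and-sizing step, assuming the captured image has already been thresholded to a binary grid (the connected-component approach, function names, and the size band are illustrative assumptions, not taken from the disclosure):

```python
from collections import deque

def find_particles(mask):
    """Label 4-connected foreground regions in a binary grid; return their pixel areas.

    Each connected region is treated as one particle, so the list length is
    the particle count and each entry is a proxy for particle size."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one particle with BFS and accumulate its area.
                area, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                areas.append(area)
    return areas

def flag_abnormal(areas, lo, hi):
    """Indices of particles whose area falls outside the accepted size band."""
    return [i for i, a in enumerate(areas) if not lo <= a <= hi]
```

In practice the image data processor would also extract shape descriptors; area alone is used here to keep the sketch small.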
PICKING SYSTEM, INFORMATION PROCESSING DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
According to an embodiment, a picking system includes a picking robot, a distance sensor, an analysis unit, a storage unit, and a robot controller. The picking robot includes a robot arm configured to grip and move an item according to a command including identification information of the item. The robot controller is configured to control a moving speed of the robot arm when the item passes through a measurement range. The speed is controlled in accordance with a determination, by a determination unit, of whether the identification information of the item gripped by the picking robot is included in a database stored in the storage unit.
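The speed-control rule in the last sentence can be sketched as follows; the function name, the specific speeds, and the policy of slowing only for unregistered items are hypothetical illustrations of one plausible embodiment:

```python
NORMAL_SPEED = 1.0   # hypothetical nominal arm speed (m/s)
MEASURE_SPEED = 0.2  # hypothetical reduced speed while measuring (m/s)

def arm_speed(item_id, database_ids, in_measurement_range):
    """Select the arm speed for the current motion segment.

    If the gripped item's identification is not in the database, the arm
    slows down while passing through the distance sensor's measurement
    range so the unregistered item can be measured in flight; otherwise
    it moves at the nominal speed."""
    if in_measurement_range and item_id not in database_ids:
        return MEASURE_SPEED
    return NORMAL_SPEED
```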
Automated manipulation of transparent vessels
An actuator and end effector are controlled according to images from cameras having a surface in their field of view. Vessels (cups, bowls, etc.) and other objects are identified in the images and their configuration is assigned to a finite set of categories by a classifier that does not output a 3D bounding box or determine a 6D pose. For objects assigned to a first subset of categories, grasping parameters for controlling the actuator and end effector are determined using only 2D bounding boxes, such as oriented 2D bounding boxes. For objects not assigned to the first subset, a righting operation may be performed using only 2D bounding boxes. Objects that are still not in the first subset may then be grasped by estimating a 3D bounding box and 6D pose.
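The three-tier fallback described above can be sketched as a simple strategy router; the category names, set contents, and strategy labels are illustrative assumptions, not from the patent:

```python
def choose_strategy(category, graspable_2d, rightable):
    """Route an object to a manipulation strategy by its configuration category.

    Upright-like categories are grasped from an oriented 2-D bounding box
    alone; categories known to be rightable get a righting operation first;
    everything else falls back to full 3-D box / 6-D pose estimation."""
    if category in graspable_2d:
        return "grasp_with_2d_box"
    if category in rightable:
        return "right_then_regrasp"
    return "estimate_3d_box_and_6d_pose"
```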
Determining a virtual representation of an environment by projecting texture patterns
Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. Based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
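Assuming a rectified pinhole stereo model, the final depth step reduces to triangulation from disparity; a toy nearest-neighbour matcher stands in for the correspondence search over the projected texture (both are simplifications of whatever the example method actually uses):

```python
def match_features(desc_left, desc_right):
    """Greedy nearest-neighbour matching of scalar feature descriptors
    between the two viewpoints; returns (left_index, right_index) pairs."""
    matches = []
    for i, dl in enumerate(desc_left):
        j = min(range(len(desc_right)), key=lambda k: abs(desc_right[k] - dl))
        matches.append((i, j))
    return matches

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth for one correspondence: Z = f * B / d,
    with focal length in pixels, baseline in metres, disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("correspondence must have positive disparity")
    return focal_px * baseline_m / disparity_px
```

Using two wavelengths lets each sensor isolate its own projected pattern, which is why the correspondence search can rely on dense, unambiguous texture.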
Positioning a robot sensor for object classification
In one embodiment, a method includes receiving, from a first sensor on a robot, first sensor data indicative of an environment of the robot. The method also includes identifying, based on the first sensor data, an object of an object type in the environment of the robot, where the object type is associated with a classifier that takes sensor data from a predetermined pose relative to the object as input. The method further includes causing the robot to position a second sensor on the robot at the predetermined pose relative to the object. The method additionally includes receiving, from the second sensor, second sensor data indicative of the object while the second sensor is positioned at the predetermined pose relative to the object. The method further includes determining, by inputting the second sensor data into the classifier, a property of the object.
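The repositioning step amounts to composing the object's estimated pose with the classifier's required sensor offset. A planar (x, y, theta) sketch, assuming 2-D poses for brevity (the method itself would use full 6-DOF poses; the function name is hypothetical):

```python
import math

def sensor_goal_pose(obj_pose, rel_pose):
    """Compute the world-frame pose at which to place the second sensor.

    obj_pose: object's pose in the world frame, (x, y, theta).
    rel_pose: the classifier's predetermined sensor pose expressed in the
              object's frame, (x, y, theta).
    Returns the composed world-frame pose for the sensor."""
    ox, oy, ot = obj_pose
    rx, ry, rt = rel_pose
    # Rotate the offset into the world frame, then translate by the object.
    wx = ox + rx * math.cos(ot) - ry * math.sin(ot)
    wy = oy + rx * math.sin(ot) + ry * math.cos(ot)
    return (wx, wy, ot + rt)
```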
METHOD FOR DETECTING AND RE-IDENTIFYING OBJECTS USING A NEURAL NETWORK
A method for detecting and re-identifying objects using a neural network. The method includes the steps of: extracting features from an image, the features comprising information about at least one object in the image; detecting the at least one object in the image using anchor-based object detection on the extracted features, in which classification data are determined by a classification for detecting the object with the aid of at least one anchor and regression data are determined by a regression; and re-identifying the at least one object by determining embedding data based on the extracted features, the embedding data representing an object description for the at least one object in the image.
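Once embedding data exist for a detection, re-identification typically reduces to a similarity search against previously seen objects. A minimal sketch, assuming cosine similarity and a fixed threshold (both are common choices, not specified by this method):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def reidentify(embedding, gallery, threshold=0.8):
    """Match a detection's embedding against stored object embeddings.

    Returns the gallery id with the highest cosine similarity above the
    threshold, or None when nothing clears it (i.e. a new identity)."""
    best_id, best_sim = None, threshold
    for gid, g in gallery.items():
        sim = cosine(embedding, g)
        if sim > best_sim:
            best_id, best_sim = gid, sim
    return best_id
```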
SYSTEM AND METHOD FOR AUGMENTING A VISUAL OUTPUT FROM A ROBOTIC DEVICE
A method for visualizing data generated by a robotic device is presented. The method includes displaying an intended path of the robotic device in an environment. The method also includes displaying a first area in the environment identified as drivable for the robotic device. The method further includes receiving an input to identify a second area in the environment as drivable and transmitting the second area to the robotic device.
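As a minimal sketch of the data exchanged in this flow, assuming drivable areas are encoded as sets of grid cells (the abstract does not specify an encoding, and the function name is hypothetical), the operator-identified area is merged with the robot's own before transmission:

```python
def merge_drivable(robot_area, operator_area):
    """Union of drivable grid cells: the first area the robot identified
    on its own, plus the second area an operator marked as drivable in
    the visualization. The result is what gets transmitted back to the
    robotic device."""
    return robot_area | operator_area
```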
VIRTUAL TEACH AND REPEAT MOBILE MANIPULATION SYSTEM
A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
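The "relative transform" between mapped descriptors can be estimated by a rigid least-squares alignment of the matched keypoints. A planar 2-D sketch (the actual system would solve this in SE(3); the closed-form 2-D Procrustes solution here is an illustrative stand-in):

```python
import math

def relative_transform(task_pts, teach_pts):
    """Estimate the rigid transform (theta, tx, ty) that maps
    teaching-image keypoints onto the matched task-image keypoints,
    in the least-squares sense."""
    n = len(task_pts)
    # Centroids of both point sets.
    cxa = sum(p[0] for p in task_pts) / n
    cya = sum(p[1] for p in task_pts) / n
    cxb = sum(p[0] for p in teach_pts) / n
    cyb = sum(p[1] for p in teach_pts) / n
    # Accumulate the cross-covariance terms of the centered points.
    sxx = sxy = 0.0
    for (xa, ya), (xb, yb) in zip(task_pts, teach_pts):
        ax, ay = xa - cxa, ya - cya
        bx, by = xb - cxb, yb - cyb
        sxx += bx * ax + by * ay
        sxy += bx * ay - by * ax
    theta = math.atan2(sxy, sxx)
    # Translation takes the rotated teaching centroid onto the task centroid.
    tx = cxa - (cxb * math.cos(theta) - cyb * math.sin(theta))
    ty = cya - (cxb * math.sin(theta) + cyb * math.cos(theta))
    return theta, tx, ty
```

The recovered transform is then what re-parameterizes the stored behaviors so they replay correctly in the (shifted) task environment.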
TRAINING METHODS FOR DEEP NETWORKS
A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of objects common to the pairs of 3D images. The method also includes using the reference image from training of the neural network to determine correlations that identify detected objects in future images.
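Training descriptors from image pairs of this kind is commonly driven by a pairwise loss that pulls embeddings of the same object together across the manipulated pair and pushes different objects apart. A contrastive-loss sketch (the loss choice and margin are assumptions; the abstract only specifies embedded descriptors of common objects):

```python
import math

def contrastive_loss(d1, d2, same_object, margin=1.0):
    """Pairwise descriptor loss over one correspondence.

    For descriptors of the same object in both images of a pair, the loss
    is the squared Euclidean distance (pulls them together); for different
    objects it penalizes only pairs closer than `margin` (pushes apart)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))
    if same_object:
        return dist ** 2
    return max(0.0, margin - dist) ** 2
```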
AUTONOMOUS TASK PERFORMANCE BASED ON VISUAL EMBEDDINGS
A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
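The keyframe-identification step can be sketched as retrieval over stored keyframes, assuming pixel matches are represented as sets of hashable descriptors and a minimum match count gates acceptance (both are illustrative assumptions):

```python
def best_keyframe(view_descriptors, keyframes, min_matches=3):
    """Pick the stored keyframe sharing the most matching descriptors
    with the current view; return None if no keyframe reaches the
    minimum match count, i.e. no task applies to this view."""
    best, best_count = None, min_matches - 1
    for name, descriptors in keyframes.items():
        count = len(view_descriptors & descriptors)
        if count > best_count:
            best, best_count = name, count
    return best
```

The returned keyframe id would then index the task the robotic device performs.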