Patent classifications
G05B2219/40564
Robot control device and robot system
A robot control device that controls a robot includes a processor which extracts a contour of a target based on an image of the target captured by an imaging device, generates a point sequence corresponding to the contour, and converts the coordinates of the point sequence into coordinates in a robot coordinate system. In a further aspect, the processor extracts the contour based on both the captured image and a predetermined instruction, then generates and converts the point sequence in the same way.
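The claimed pipeline, contour extraction, point-sequence generation, and conversion into robot coordinates, can be illustrated with a short sketch. This is a minimal illustration rather than the patented implementation; OpenCV, a single planar target, Otsu thresholding, and a pre-calibrated 3x3 image-to-robot homography are all assumptions not taken from the abstract.

```python
# Sketch: contour -> point sequence -> robot coordinates (assumptions noted above).
import cv2
import numpy as np

def contour_to_robot_points(image_bgr, homography):
    """Return an (N, 2) array of contour points in robot coordinates."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.empty((0, 2))
    # Take the largest contour as the target's outline.
    contour = max(contours, key=cv2.contourArea)
    pts = contour.astype(np.float32).reshape(-1, 1, 2)    # image-plane point sequence
    robot_pts = cv2.perspectiveTransform(pts, homography)  # image -> robot frame
    return robot_pts.reshape(-1, 2)
```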
METHOD AND SYSTEM FOR HANDLING DEFORMABLE OBJECTS
A control server controls a dual-arm robotic manipulator (DARM) for handling deformable objects in a stack. The control server receives a set of images of the stack captured by a set of image sensors, and determines a contour of the stack based on the set of images. Based on the contour and historical data associated with the deformable objects in the stack, the control server determines a sequence of actions to be performed by the DARM for handling a first deformable object in the stack, and controls the DARM to handle the first deformable object by communicating a set of commands corresponding to each action in the sequence of actions. The first deformable object is handled such that the original form factors of the first deformable object and the remaining stack are maintained.
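A hedged sketch of the planning-and-command step described above: the contour and historical data select an action sequence, and one set of commands is sent per action, in order. The `Action` class, the history lookup, and the command format are illustrative inventions, not the patent's interfaces.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    params: dict

def plan_actions(contour_area, history):
    # Historical data might record which grasp strategy preserved the form
    # factor for similarly sized stacks; fall back to a gentle two-arm pinch.
    strategy = history.get(round(contour_area, -2), "two_arm_pinch")
    return [Action("approach", {"strategy": strategy}),
            Action("grasp", {"force_limit_n": 5.0}),
            Action("lift", {"height_m": 0.1})]

def execute(darm_send, actions):
    for action in actions:                 # one set of commands per action
        darm_send({"cmd": action.name, **action.params})
```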
Training methods for deep networks
A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of objects common to the pairs of 3D images. The method also includes using the reference image from training of the deep neural network to determine correlations that identify detected objects in future images.
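The pair-generation step resembles self-supervised data augmentation. Below is a minimal sketch in which brightness and in-plane rotation stand in for the richer environment-parameter adjustments the abstract describes; the rendering pipeline and the descriptor network itself are assumed and not part of this sketch.

```python
import numpy as np

def make_training_pair(rendered_view, rng):
    """Return (anchor, manipulated) images from one rendered view of the 3D model.

    Pixels that originate from the same model point become positive descriptor
    correspondences during training (correspondence bookkeeping omitted here).
    """
    anchor = rendered_view.astype(np.float32)
    gain = rng.uniform(0.7, 1.3)              # simulated lighting change
    k = rng.integers(0, 4)                    # simulated viewpoint change (90-degree steps)
    manipulated = np.rot90(np.clip(anchor * gain, 0, 255), k)
    return anchor, manipulated
```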
Systems and methods of 3D scene segmentation and matching for robotic operations
A method and system, the method including: receiving image data representations of a set of images of a physical asset; receiving a data model of at least one asset, the data model of each asset including a semantic description of the respective modeled asset and at least one operation associated with it; determining a match between the received image data and the data model of one of the assets based on a correspondence between them; generating, for the data model determined to match the received image data, an operation plan based on the at least one operation included in the matched data model; and executing, in response to the generation of the operation plan, the generated operation plan by the physical asset.
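A rough sketch of the match-then-plan flow might look as follows. The descriptor representation, the dot-product similarity, and the acceptance threshold are all assumptions; the abstract only requires some correspondence measure between image data and data models.

```python
from dataclasses import dataclass, field

@dataclass
class DataModel:
    semantic_description: str
    descriptor: list                    # pre-computed appearance features (assumed)
    operations: list = field(default_factory=list)

def similarity(a, b):
    return sum(x * y for x, y in zip(a, b))   # toy correspondence score

def match_and_plan(image_descriptor, models, threshold=0.8):
    best = max(models, key=lambda m: similarity(image_descriptor, m.descriptor))
    if similarity(image_descriptor, best.descriptor) < threshold:
        return None                     # no model corresponds well enough
    # The operation plan is built from the operations of the matched model.
    return {"asset": best.semantic_description, "plan": list(best.operations)}
```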
DEVICE FOR ADJUSTING PARAMETER, ROBOT SYSTEM, METHOD, AND COMPUTER PROGRAM
Conventionally, an operator with expertise was needed to adjust a parameter for collating a workpiece feature of a workpiece imaged by a visual sensor with a workpiece model of the workpiece. A device comprises: an image generation unit that generates image data displaying a workpiece feature of a workpiece imaged by a visual sensor; a position detection unit that uses a parameter for collating a workpiece model with the workpiece feature to obtain, as a detected position, a position of the workpiece in the image data; a matching position acquisition unit that acquires, as a matching position, a position of the workpiece model in the image data when the workpiece model is arranged so as to match the workpiece feature in the image data; and a parameter adjustment unit that adjusts the parameter on the basis of data indicating the difference between the detected position and the matching position.
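The adjustment unit's behavior can be sketched as a simple search over the parameter, driven by the distance between the detected position and the operator-supplied matching position. The coordinate-descent update below is an illustrative stand-in; the patent does not specify the update rule.

```python
def adjust_parameter(detect_error, param, step=0.05, tolerance=1.0, max_iters=50):
    """detect_error(param) -> distance between detected and matching positions."""
    err = detect_error(param)
    for _ in range(max_iters):
        if err <= tolerance:
            break
        # Try a step in each direction and keep whichever reduces the error.
        candidates = [(detect_error(param + step), param + step),
                      (detect_error(param - step), param - step)]
        best_err, best_param = min(candidates)
        if best_err >= err:
            step *= 0.5                 # no improvement: refine the step size
        else:
            err, param = best_err, best_param
    return param
```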
Method and robotic system for manipulating instruments
An approach relates to manipulation of tools or instruments in the performance of a task by a robot. In accordance with this approach, sensor data is acquired and processed to identify a subset of instruments initially susceptible to manipulation. The instruments are then manipulated in the performance of the task based on the processed sensor data.
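A minimal sketch of the identification step, assuming hypothetical detection fields (`occlusion`, `label`) and a table of known grasp models; the abstract does not define what makes an instrument "initially susceptible to manipulation".

```python
def manipulable_subset(detections, grasp_models, max_occlusion=0.2):
    """Keep detected instruments the robot can plausibly manipulate first."""
    return [d for d in detections
            if d["occlusion"] <= max_occlusion     # mostly unoccluded, reachable
            and d["label"] in grasp_models]        # a grasp model exists for it
```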
High speed automated capture of 3D models of packaged items
Method and apparatus for generating a three-dimensional (3D) model of a physical object. An apparatus includes stereo near-infrared camera devices, near-infrared projectors, color camera devices, and control logic. The control logic detects that a physical object moving along a fixed path has reached a predefined location, projects a predefined pattern onto the physical object using the near-infrared projectors, and captures near-infrared digital images of the physical object while the predefined pattern is being projected onto it. The control logic determines a set of depth measurements for each of the stereo near-infrared camera devices and generates a 3D mesh by merging the depth measurements. Color digital images are captured using the color camera devices, and a texture is applied to the 3D mesh by mapping points from each of the color digital images onto the 3D mesh.
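The depth-recovery step is essentially active stereo: the projected NIR pattern gives the matcher texture to lock onto. A sketch under assumed calibration (focal length in pixels, baseline in meters) and OpenCV's semi-global matcher follows; the meshing and cross-rig frame alignment are left out.

```python
import cv2
import numpy as np

def stereo_depth(left_nir, right_nir, f_px, baseline_m):
    """Depth map from one rectified NIR stereo pair (depth = f * b / disparity)."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
    disparity = matcher.compute(left_nir, right_nir).astype(np.float32) / 16.0
    with np.errstate(divide="ignore"):
        depth = (f_px * baseline_m) / disparity
    depth[disparity <= 0] = 0.0            # mask invalid matches
    return depth

def merge_depths(depth_maps):
    # Pool the valid measurements from all stereo rigs; a real system would
    # transform each map into a common frame before fusing and meshing.
    return [d[d > 0] for d in depth_maps]
```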
METHOD AND SYSTEM FOR PERFORMING IMAGE CLASSIFICATION FOR OBJECT RECOGNITION
Systems and methods for classifying at least a portion of an image as textured or textureless are presented. The system receives an image generated by an image capture device, wherein the image represents one or more objects in a field of view of the image capture device. The system generates one or more bitmaps based on at least one image portion of the image. The one or more bitmaps describe whether features or visual features for feature detection are present in the at least one image portion, or whether there is variation in intensity across the at least one image portion. The system determines whether to classify the at least one image portion as textured or textureless based on the one or more bitmaps.
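One plausible reading of the bitmap test, sketched below: build a map of where corner features fire and check intensity variation, then classify from either signal. The detector choice and both thresholds are assumptions, not taken from the abstract.

```python
import cv2
import numpy as np

def classify_patch(gray_patch, feature_frac=0.001, std_thresh=8.0):
    """Classify an 8-bit grayscale image portion as textured or textureless."""
    corners = cv2.goodFeaturesToTrack(gray_patch, maxCorners=500,
                                      qualityLevel=0.01, minDistance=5)
    feature_bitmap = np.zeros(gray_patch.shape, dtype=bool)
    if corners is not None:
        for x, y in corners.reshape(-1, 2).astype(int):
            feature_bitmap[y, x] = True          # mark where features fire
    has_features = feature_bitmap.mean() >= feature_frac
    has_variation = gray_patch.std() >= std_thresh  # intensity-variation test
    return "textured" if (has_features or has_variation) else "textureless"
```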
METHOD AND COMPUTING SYSTEM FOR OBJECT RECOGNITION OR OBJECT REGISTRATION BASED ON IMAGE CLASSIFICATION
A computing system and method for object recognition is presented. The method includes the computing system obtaining an image representing one or more objects and generating a target image portion associated with one of the objects. The computing system determines whether to classify the target image portion as textured or textureless, and selects a template storage space from among a first and a second template storage space, wherein the first template storage space is cleared more often than the second. The first template storage space is selected in response to a textureless classification, and the second is selected in response to a textured classification. The computing system performs object recognition based on the target image portion and the selected template storage space.
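The dual-storage rule can be sketched as two caches with different clearing intervals; the intervals and the time-based purge policy below are assumptions, since the abstract only says the first space is cleared more often.

```python
import time

class TemplateStore:
    def __init__(self, clear_interval_s):
        self.templates, self.interval = {}, clear_interval_s
        self.last_clear = time.monotonic()

    def put(self, key, template):
        if time.monotonic() - self.last_clear > self.interval:
            self.templates.clear()              # periodic purge
            self.last_clear = time.monotonic()
        self.templates[key] = template

short_lived = TemplateStore(clear_interval_s=60)    # textureless templates
long_lived = TemplateStore(clear_interval_s=3600)   # textured templates

def store_for(classification):
    return short_lived if classification == "textureless" else long_lived
```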
Methods and Systems for Registering a Three-Dimensional Pose of an Object
In an example, a system for registering a three-dimensional (3D) pose of a workpiece relative to a robotic device is disclosed. The system comprises the robotic device, where the robotic device comprises one or more mounted lasers. The system also comprises one or more sensors configured to detect laser returns from laser rays projected from the one or more mounted lasers and reflected by the workpiece. The system also comprises a processor configured to receive a tessellation of the workpiece, wherein the tessellation comprises a 3D representation of the workpiece made up of cells; convert the laser returns into a 3D point cloud in a robot frame; filter, based on the 3D point cloud, visible cells of the tessellation to form a tessellation-included set; and solve for the 3D pose of the workpiece relative to the robotic device based on the tessellation-included set.
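The final solve, given correspondences between laser-return points and visible cell centers, is a classic rigid alignment; the Kabsch/SVD sketch below assumes known row-wise correspondence, whereas a real solver would iterate ICP-style over the tessellation-included set.

```python
import numpy as np

def solve_pose(cloud_robot, cell_centers):
    """Rigid (R, t) aligning cell centers to laser points; both inputs (N, 3)."""
    mu_p, mu_q = cloud_robot.mean(axis=0), cell_centers.mean(axis=0)
    H = (cell_centers - mu_q).T @ (cloud_robot - mu_p)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))                   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_p - R @ mu_q
    return R, t   # maps workpiece (cell) coordinates into the robot frame
```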