Patent classifications
G05B2219/40565
Systems, methods and associated components for robotic manipulation of physical objects
Systems, methods, and associated components for robotic manipulation of physical objects. The physical objects include three-dimensional gripping features configured to be detected by an optics system and gripped by an end-effector of a robotic arm with sufficient gripping force to move the physical objects against the force of gravity. Sets of the physical objects can have different sizes and shapes and, in some examples, include identically constructed three-dimensional gripping features.
System and method for autonomously defining regions of interest for a container
A system and method for autonomously defining regions of interest for a container are provided. The system comprises a platform for supporting the container, a detector for capturing feature data of the container while on the platform, and a computer system. The computer system is in communication with the detector and platform. The computer system is programmed to locate features of the container from the captured feature data, and define the regions of interest for the container based on the located features.
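The two programmed steps (locate features, then define regions of interest from them) can be sketched in a few lines. This is an illustrative sketch, not the patented implementation: the feature data is assumed to be a list of (x, y) corner points detected on the container, and the ROIs are assumed to be a simple grid over the container's bounding box.

```python
# Hypothetical sketch: derive rectangular regions of interest (ROIs) from
# located container features. Feature data here is a list of (x, y) corner
# points; the grid-split strategy is an assumption for illustration.

def bounding_box(points):
    """Axis-aligned bounding box of the located features."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def define_rois(corner_points, rows=2, cols=2):
    """Split the container's bounding box into a grid of ROIs."""
    x0, y0, x1, y1 = bounding_box(corner_points)
    w, h = (x1 - x0) / cols, (y1 - y0) / rows
    rois = []
    for r in range(rows):
        for c in range(cols):
            rois.append((x0 + c * w, y0 + r * h,
                         x0 + (c + 1) * w, y0 + (r + 1) * h))
    return rois

corners = [(10, 10), (110, 10), (10, 90), (110, 90)]
print(define_rois(corners))  # four equal cells covering the container
```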
Appearance inspection system, setting device, image processing device, inspection method, and program
To provide an appearance inspection system capable of reducing the labor a designer spends setting imaging conditions when a plurality of inspection target positions on a target are sequentially imaged. An appearance inspection system includes an imaging condition decision part and a route decision part. The imaging condition decision part decides a plurality of imaging condition candidates, including a relative position between a workpiece and an imaging device, for at least one inspection target position among a plurality of inspection target positions. The route decision part decides a change route of the imaging condition for sequentially imaging the plurality of inspection target positions by selecting one imaging condition from among the plurality of imaging condition candidates so that a pre-decided requirement is satisfied.
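The selection step (several candidate imaging positions per target, one of which must be chosen so the route satisfies a requirement) can be illustrated with a toy version. All names are assumptions, and a greedy shortest-travel rule stands in for the patent's unspecified "pre-decided requirement":

```python
# Hedged sketch: each inspection target has several candidate imaging
# positions; pick one per target greedily so total camera travel is short.
# The Euclidean travel metric is an illustrative assumption.
import math

def pick_route(start, candidates_per_target):
    """Greedily choose one candidate position per target, in order."""
    route, pos = [], start
    for candidates in candidates_per_target:
        best = min(candidates, key=lambda c: math.dist(pos, c))
        route.append(best)
        pos = best
    return route

targets = [
    [(0, 5), (4, 0)],   # candidate imaging positions for target 1
    [(5, 5), (9, 9)],   # candidate imaging positions for target 2
]
print(pick_route((0, 0), targets))  # [(4, 0), (5, 5)]
```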
Detection and tracking of item features
Technologies are generally described for detection and tracking of item features. In some examples, features of an object may initially be found through detection of one or more edges and one or more corners of the object from a first perspective. In addition, a determination may be made as to whether the detected edges and corners of the object are also detectable from one or more other perspectives. In response to an affirmative determination, the detected edges and corners of the object may be marked as features to be tracked, for example, in subsequent frames of a camera feed. The perspectives may correspond to distributed locations in a substantially umbrella-shaped formation centered over the object. In other examples, lighting conditions of an environment where the object is being tracked may be programmatically controlled.
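The cross-view filtering step described above reduces to a set intersection: a feature found in the first perspective is kept for tracking only if it is also detected from every other perspective. A minimal sketch, with named feature labels standing in for real edge/corner detector output:

```python
# Illustrative sketch (assumed names): keep only those features from the
# first view that every other perspective also detects.

def trackable_features(first_view, other_views):
    """Features from the first view that appear in all other views."""
    kept = set(first_view)
    for view in other_views:
        kept &= set(view)
    return kept

first = {"corner_A", "corner_B", "edge_1"}
others = [{"corner_A", "edge_1", "edge_2"}, {"corner_A", "edge_1"}]
print(sorted(trackable_features(first, others)))  # ['corner_A', 'edge_1']
```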
Perception module for a mobile manipulator robot
An imaging apparatus includes a structural support rigidly coupled to a surface of a mobile robot and a plurality of perception modules, each of which is arranged on the structural support, has a different field of view, and includes a two-dimensional (2D) camera configured to capture a color image of an environment, a depth sensor configured to capture depth information of one or more objects in the environment, and at least one light source configured to provide illumination to the environment. The imaging apparatus further includes control circuitry configured to control a timing of operation of the 2D camera, the depth sensor, and the at least one light source included in each of the plurality of perception modules, and at least one computer processor configured to process the color image and the depth information to identify at least one characteristic of one or more objects in the environment.
Appearance inspection system, image processing device, imaging device, and inspection method
An appearance inspection system includes a setting part, a movement mechanism, and a control part. The setting part sets a route passing through a plurality of imaging positions in order. The setting part sets the route so that a first time necessary for the movement mechanism to move an imaging device from a first imaging position to a second imaging position among the plurality of imaging positions is longer than a second time necessary for a process of changing a first imaging condition corresponding to the first imaging position to a second imaging condition corresponding to the second imaging position by the control part. The control part starts the process of changing the first imaging condition to the second imaging condition earlier by the second time or more than a scheduled time at which the imaging device arrives at the second imaging position.
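The timing rule in this abstract can be stated as simple arithmetic. The sketch below uses assumed variable names: the controller begins changing imaging conditions at least the change duration before the camera's scheduled arrival, which the route guarantees is feasible because travel takes longer than the change.

```python
# Hedged sketch of the abstract's timing rule (names are assumptions).

def change_start_time(arrival_time, change_time):
    """Latest moment to begin the condition change and still finish on arrival."""
    return arrival_time - change_time

def condition_ready_on_arrival(travel_time, change_time):
    """Route constraint: the move must take at least as long as the change."""
    return travel_time >= change_time

print(change_start_time(arrival_time=12.0, change_time=3.0))        # 9.0
print(condition_ready_on_arrival(travel_time=5.0, change_time=3.0))  # True
```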
Positioning a robot sensor for object classification
In one embodiment, a method includes receiving, from a first sensor on a robot, first sensor data indicative of an environment of the robot. The method also includes identifying, based on the first sensor data, an object of an object type in the environment of the robot, where the object type is associated with a classifier that takes sensor data from a predetermined pose relative to the object as input. The method further includes causing the robot to position a second sensor on the robot at the predetermined pose relative to the object. The method additionally includes receiving, from the second sensor, second sensor data indicative of the object while the second sensor is positioned at the predetermined pose relative to the object. The method further includes determining, by inputting the second sensor data into the classifier, a property of the object.
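The pipeline in this claim (detect a typed object, move a second sensor to the type's predetermined pose, then classify from that pose) can be sketched as follows. Every name here is an assumption for illustration, including the toy classifier and the capture callback standing in for the robot and second sensor:

```python
# Hypothetical sketch of the pose-conditioned classification pipeline.

# Per-type predetermined pose (dx, dy, dz offsets relative to the object).
PREDETERMINED_POSE = {"mug": (0.0, 0.0, 0.3)}

def classify_mug(sensor_data):
    """Toy classifier expecting readings from the predetermined pose."""
    return "full" if sum(sensor_data) > 1.0 else "empty"

CLASSIFIERS = {"mug": classify_mug}

def classify_object(object_type, capture_at_pose):
    """Move the second sensor to the type's pose, capture, and classify."""
    pose = PREDETERMINED_POSE[object_type]
    data = capture_at_pose(pose)          # second-sensor reading at the pose
    return CLASSIFIERS[object_type](data)

# Stand-in capture function simulating the repositioned second sensor.
print(classify_object("mug", lambda pose: [0.6, 0.7]))  # prints: full
```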
Robot assisted object learning vision system
According to an aspect of some embodiments of the present invention there is provided a method for generating a dataset mapping visual features of each of a plurality of objects, comprising, for each of a plurality of different objects: instructing a robotic system to move an arm holding the respective object to a plurality of positions, and, when the arm is in each of the plurality of positions: acquiring at least one image depicting the respective object in the position, receiving positional information of the arm in the respective position, analyzing the at least one image to identify at least one visual feature of the object in the respective position, and storing, in a mapping dataset, an association between the at least one visual feature and the positional information; and outputting the mapping dataset.
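The per-position loop above amounts to: capture, extract, and record against the arm's pose. A minimal sketch under assumed names, with stand-in callbacks replacing the camera and feature extractor:

```python
# Hedged sketch of the mapping-dataset step (all names are assumptions).

def build_mapping(positions, acquire_image, extract_features):
    """Associate visual features with arm positional information."""
    dataset = []
    for pose in positions:
        image = acquire_image(pose)          # camera frame at this arm pose
        features = extract_features(image)   # e.g. corner descriptors
        dataset.append({"pose": pose, "features": features})
    return dataset

# Stand-in sensors: the "image" is the pose itself, its feature a coarse sum.
poses = [(0, 0, 10), (0, 90, 10)]
mapping = build_mapping(poses, lambda p: p, lambda img: [sum(img)])
print(mapping)
```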