Patent classifications
B25J9/1697
Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system
A machine vision-based method and system to facilitate the unloading of a pile of cartons within a work cell are provided. The method includes the step of providing at least one 3-D or depth sensor having a field of view at the work cell. Each sensor has a set of radiation sensing elements which detect reflected, projected radiation to obtain 3-D sensor data. The 3-D sensor data includes a plurality of pixels. For each possible pixel location and each possible carton orientation, the method includes generating a hypothesis that a carton with a known structure appears at that pixel location with that carton orientation, to obtain a plurality of hypotheses. The method further includes ranking the plurality of hypotheses. The step of ranking includes calculating a surprisal for each of the hypotheses to obtain a plurality of surprisals, and the ranking is based on the surprisals of the hypotheses.
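The ranking step described above can be illustrated with a minimal sketch. Surprisal (self-information) of a hypothesis with probability p is -log2(p), so lower surprisal indicates a better-supported hypothesis. The hypothesis tuples and their probabilities below are hypothetical stand-ins for the scores a real carton matcher would produce; they are not from the patent.

```python
import math

def surprisal(p):
    """Surprisal (self-information) in bits: -log2(p)."""
    return -math.log2(p)

# Hypothetical hypotheses: ((pixel location), carton orientation in degrees,
# match probability). The probabilities are illustrative only.
hypotheses = [
    ((120, 45), 0, 0.50),
    ((120, 45), 90, 0.10),
    ((300, 80), 0, 0.25),
]

# Rank by surprisal: the lowest-surprisal hypothesis is the most plausible
# carton location and orientation.
ranked = sorted(hypotheses, key=lambda h: surprisal(h[2]))
best_location, best_orientation, best_p = ranked[0]
```

Ranking by surprisal is equivalent to ranking by probability, but surprisals add conveniently when independent evidence terms are combined.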
Autonomous navigation and collaboration of mobile robots in 5G/6G
5G and especially 6G will enable a multitude of applications for fixed-position and mobile wireless task-devices (“robots”). Most of these applications are based on the assumption that the robots know, or can determine, the locations and wireless identities of other robots in proximity, but this is an unsolved problem. Procedures are provided herein for an arbitrarily large number of fixed-position and mobile robots to autonomously identify each other, determine their locations with speed and precision substantially beyond that provided by navigation satellites, and then to collaborate in performing their tasks, while avoiding interference and collisions. Examples are provided in the fields of automated agriculture, remote oil-spill mitigation, autonomous fire-fighting, hospital management, construction site coordination, manufacturing (including fully autonomous manufacturing), major product warehousing, airport control, and emergency vehicle access.
GENERATION OF IMAGE FOR ROBOT OPERATION
A robot control system includes circuitry configured to: generate a command to a robot; receive a frame image in which a capture position changes according to a motion of the robot based on the command; extract a partial region from the frame image according to the command; superimpose a delay mark on the partial region to generate an operation image; and display the operation image on a display device, so as to represent a delay of the motion of the robot with respect to the command.
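The delay-mark idea can be sketched with simple geometry: offset a mark from the centre of the extracted region along the commanded direction, scaled by the measured latency, so the operator sees how far the robot's actual motion lags the command. The function below is a hypothetical illustration; the patent does not specify this geometry.

```python
def place_delay_mark(frame_w, frame_h, cmd_dx, cmd_dy, delay_s, speed_px_s):
    """Return (x, y) for a delay mark: offset from the frame centre along
    the commanded unit direction (cmd_dx, cmd_dy), scaled by the command
    latency and an assumed apparent image speed in pixels per second."""
    shift = delay_s * speed_px_s
    x = int(frame_w / 2 + cmd_dx * shift)
    y = int(frame_h / 2 + cmd_dy * shift)
    # Clamp so the mark stays inside the displayed region.
    x = max(0, min(x, frame_w - 1))
    y = max(0, min(y, frame_h - 1))
    return x, y

# A 0.2 s delay on a rightward command at 100 px/s shifts the mark 20 px.
mark = place_delay_mark(640, 480, 1, 0, delay_s=0.2, speed_px_s=100)
```

The superimposed mark then moves back toward the centre as the received frames catch up with the command, giving a direct visual cue of the motion delay.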
AUTONOMOUS MANIPULATION OF FLEXIBLE PRIMARY PACKAGING IN DIMENSIONALLY STABLE SECONDARY PACKAGING BY MEANS OF ROBOTS
System for automatically manipulating primary packaging in secondary packaging, comprising a robot having at least one robot arm with a clamping gripper installed at a tool centre point, wherein each tool centre point has a force-torque sensor, an image recording module for recording images of at least the upper segment of the primary packaging, comprising at least two stereo cameras for recording 3-D images, and one or more processors for providing a three-dimensional point cloud, controlling the image recording module and controlling the robot on the basis of the analysis of the three-dimensional point cloud and the measurements from the force-torque sensors.
ADJUSTMENT SUPPORT SYSTEM AND ADJUSTMENT SUPPORT METHOD
An adjustment support system comprises an arithmetic unit and a storage unit. The storage unit stores sensor information, including features of a sensor that captures an image of a target, and imaging target information, including dimensions, shape, and disposition of the imaging target. The arithmetic unit generates a plurality of candidates for an imaging position and posture of the sensor relative to the imaging target, and determines, for each of the candidates, whether or not positional deviation of the imaging target in a plurality of directions is detectable from a captured image obtained by the sensor, based on the sensor information and the imaging target information. The arithmetic unit then determines, from the plurality of candidates, the imaging position and posture from which the sensor actually captures an image of the target.
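A minimal sketch of the detectability test, under a pinhole-camera assumption (not stated in the abstract): a lateral deviation of the target projects to focal_px * deviation / distance pixels, and a candidate pose is kept only if that displacement exceeds the detection threshold. All numbers below are hypothetical.

```python
def deviation_detectable(focal_px, distance_m, deviation_m, min_px):
    """Pinhole approximation: a lateral target deviation of deviation_m
    viewed from distance_m projects to focal_px * deviation_m / distance_m
    pixels; it is detectable if that meets or exceeds min_px."""
    return focal_px * deviation_m / distance_m >= min_px

# Hypothetical candidate camera distances in metres. Keep the poses from
# which a 1 mm target deviation still moves the image by at least 2 px,
# assuming a 900 px focal length.
candidates_m = [0.3, 0.6, 1.2]
feasible = [d for d in candidates_m
            if deviation_detectable(900.0, d, 0.001, 2.0)]
```

A full system would repeat this check along several deviation directions and account for occlusion and depth-of-field before selecting the final pose.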
System and Method for Automated Movement of a Robotic Arm
A positioning system is provided for the placement and insertion of components into elements with increased accuracy and precision. The system may utilize one or more sensors to provide individual images or data for each individual insertion of a component into an element. The system may compare the individual images or data against known information to increase the accuracy and precision of each insertion.
METHOD AND SYSTEM FOR OBJECT IDENTIFICATION
A method for identifying objects by shape, in close proximity to other objects of different shapes, obtains point cloud information of multiple objects. The objects are arranged in at least two trays, and the trays are stacked. A depth image of the objects is obtained according to the point cloud information, and the depth image is separated and layered to obtain layer information for all the objects. An object identification system is also disclosed. Three-dimensional machine vision is utilized in identifying the objects, improving the accuracy of object identification and enabling the mechanical arm to accurately grasp the required object.
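The separating-and-layering step can be sketched as bucketing 3-D points by depth, assuming the stacked trays are separated by a roughly constant height along z. The layer height and coordinates below are hypothetical, not taken from the patent.

```python
def layer_points(points, layer_height):
    """Bucket 3-D points (x, y, z) into depth layers, assuming stacked
    trays separated by roughly layer_height along z. Returns a dict
    mapping layer index -> list of points in that layer."""
    layers = {}
    for x, y, z in points:
        idx = int(z // layer_height)
        layers.setdefault(idx, []).append((x, y, z))
    return layers

# Hypothetical point cloud (metres): two points near the bottom tray,
# one in the tray above.
cloud = [(0.10, 0.20, 0.02), (0.30, 0.10, 0.05), (0.20, 0.40, 0.13)]
layers = layer_points(cloud, layer_height=0.10)
```

Each layer can then be processed independently (e.g. 2-D shape matching within a tray), which is what makes identification robust when differently shaped objects sit close together in stacked trays.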
Determining how to assemble a meal
In an embodiment, a method includes determining a given material to manipulate to achieve a goal state. The goal state can be one or more deformable or granular materials in a particular arrangement. The method further includes, for the given material, determining a respective outcome for each of a plurality of candidate actions to manipulate the given material. In one embodiment, the determining is performed with a physics-based model. The method can further include determining a given action of the candidate actions where the outcome of the given action reaching the goal state is within at least one tolerance. The method further includes, based on a selected action of the given actions, generating a first motion plan for the selected action.
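The selection step above can be sketched as follows: score each candidate action with a predicted outcome and accept the first one that lands within tolerance of the goal. Here `predict_outcome` is a hypothetical stand-in for the physics-based model, and the scoop-size example is illustrative only.

```python
def choose_action(candidates, predict_outcome, goal, tolerance):
    """Return the first candidate action whose predicted outcome is
    within tolerance of the goal state, or None if no candidate
    qualifies. predict_outcome stands in for a physics-based model."""
    for action in candidates:
        if abs(predict_outcome(action) - goal) <= tolerance:
            return action
    return None

# Toy 1-D example: action = scoop size, predicted outcome = grams of a
# granular material transferred (linear model assumed for illustration).
best = choose_action([5, 10, 20], lambda a: a * 2.1,
                     goal=21.0, tolerance=1.0)
```

A real planner would evaluate outcomes over full material configurations rather than a scalar, but the accept-within-tolerance structure is the same; the chosen action then seeds the motion plan.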
Robotic sortation system
Methods, apparatus, computing devices, computing entities, and/or the like associated with a robotic sortation system are provided. An example method may include receiving imaging data associated with a first item at a first sorting location of a material handling system, determining a sorting destination for the first item based at least in part on the imaging data, determining a plurality of first-tier robotic devices and a plurality of second-tier robotic devices based at least in part on the first sorting location and the sorting destination, and generating a first sortation scheme for the first item.
Tactile information estimation apparatus, tactile information estimation method, and program
According to some embodiments, a tactile information estimation apparatus may include one or more memories and one or more processors. The one or more processors are configured to input at least first visual information of an object acquired by a visual sensor to a model. The model is generated based on visual information and tactile information linked to the visual information. The one or more processors are configured to extract, based on the model, a feature amount relating to tactile information of the object.