B25J9/1664

DEEP REINFORCEMENT LEARNING APPARATUS AND METHOD FOR PICK-AND-PLACE SYSTEM
20230040623 · 2023-02-09

Disclosed is a deep reinforcement learning apparatus and method for a pick-and-place system. According to the present disclosure, a simulation learning framework is configured to apply reinforcement learning to make pick-and-place decisions using a robot operating system (ROS) in a real-time environment, thereby generating stable path motion that meets various hardware and real-time constraints.
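The reinforcement-learning decision step described above can be sketched, under heavy simplification, as a tabular Q-learner that chooses which bin to pick from next. The state encoding (which bins still hold a part), the action set, and the reward are illustrative assumptions for this sketch, not the disclosure's actual formulation, and the ROS/real-time integration is omitted entirely:

```python
import random

# Minimal tabular Q-learning sketch of a pick-and-place decision policy.
# State: tuple of bin-occupancy flags; action: which bin to pick from;
# reward: +1 per successful place. All of these are illustrative assumptions.

def train_pick_policy(n_bins=3, episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state = tuple([1] * n_bins)        # every bin starts full
        while any(state):
            actions = [i for i, full in enumerate(state) if full]
            if rng.random() < eps:          # epsilon-greedy exploration
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda i: q.get((state, i), 0.0))
            nxt = tuple(0 if i == a else f for i, f in enumerate(state))
            reward = 1.0                    # assume the place succeeds
            best_next = max((q.get((nxt, i), 0.0) for i, f in enumerate(nxt) if f),
                            default=0.0)
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q
```

A real system of this kind would feed the learned policy's decisions through a motion planner that enforces the hardware and real-time constraints the abstract mentions.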

SYSTEMS AND METHODS FOR OBJECT DETECTION

A computing system includes a processing circuit in communication with a camera having a field of view. The processing circuit is configured to perform operations related to detecting, identifying, and retrieving objects disposed amongst a plurality of objects. The processing circuit may be configured to perform operations related to object recognition template generation, feature generation, hypothesis generation, hypothesis refinement, and hypothesis validation.

SYSTEM AND/OR METHOD FOR ROBOTIC FOODSTUFF ASSEMBLY

The foodstuff assembly system can include: a robot arm, a frame, a set of foodstuff bins, a sensor suite, a set of food utensils, and a computing system. The system can optionally include a container management system and a human-machine interface (HMI), and can additionally or alternatively include any other suitable set of components. The system functions to enable picking of foodstuff from a set of foodstuff bins and placement into a container (such as a bowl, tray, or other foodstuff receptacle). Additionally or alternatively, the system can function to facilitate transfer of bulk material (e.g., bulk foodstuff) into containers, such as containers moving along a conveyor line.

Generating a robot control policy from demonstrations collected via kinesthetic teaching of a robot
11554485 · 2023-01-17

Generating a robot control policy that regulates both motion control and interaction with an environment and/or includes a learned potential function and/or dissipative field. Some implementations relate to resampling temporally distributed data points to generate spatially distributed data points, and generating the control policy using the spatially distributed data points. Some implementations additionally or alternatively relate to automatically determining a potential gradient for data points, and generating the control policy using the automatically determined potential gradient. Some implementations additionally or alternatively relate to determining and assigning a prior weight to each of the data points of multiple groups, and generating the control policy using the weights. Some implementations additionally or alternatively relate to defining and using non-uniform smoothness parameters at each data point, defining and using d parameters for stiffness and/or damping at each data point, and/or obviating the need to utilize virtual data points in generating the control policy.
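The resampling idea mentioned above — converting demonstration points recorded at uniform time intervals into points spaced uniformly along the path — can be sketched as arc-length resampling. The 2-D points, fixed spacing, and linear interpolation are illustrative assumptions; the disclosure does not specify this particular scheme:

```python
import math

# Sketch of resampling temporally distributed demonstration points into
# spatially distributed ones: walk the polyline and emit a point every
# `step` units of arc length, interpolating linearly within segments.

def resample_spatially(points, step):
    """points: list of (x, y) recorded at uniform time steps.
    Returns points spaced `step` apart along the demonstrated path."""
    out = [points[0]]
    dist_accum = 0.0  # arc length accumulated since the last emitted point
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        while seg > 0 and dist_accum + seg >= step:
            t = (step - dist_accum) / seg          # fraction into this segment
            x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            seg -= (step - dist_accum)
            dist_accum = 0.0
            out.append((x0, y0))
        dist_accum += seg
    return out
```

This removes the bias kinesthetic teaching introduces when the demonstrator moves slowly (many clustered samples) through some regions and quickly through others.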

Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system
11557058 · 2023-01-17

A machine vision-based method and system to facilitate the unloading of a pile of cartons within a work cell are provided. The method includes the step of providing at least one 3-D or depth sensor having a field of view at the work cell. Each sensor has a set of radiation sensing elements which detect reflected, projected radiation to obtain 3-D sensor data comprising a plurality of pixels. For each possible pixel location and each possible carton orientation, the method includes generating a hypothesis that a carton with a known structure appears at that pixel location with that carton orientation, to obtain a plurality of hypotheses. The method further includes ranking the plurality of hypotheses. The step of ranking includes calculating a surprisal for each of the hypotheses to obtain a plurality of surprisals, and the ranking is based on the surprisals of the hypotheses.
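The surprisal-based ranking step can be sketched as follows. Surprisal is the standard information-theoretic quantity −log₂(p); a hypothesis whose observed evidence is less surprising is better supported. The match probabilities, and the decision to rank from lowest to highest surprisal, are illustrative assumptions for this sketch:

```python
import math

# Sketch of surprisal-based hypothesis ranking: each hypothesis ("a carton
# of known structure appears at pixel p with orientation o") carries a
# probability from some match score, and its surprisal is -log2(p).

def surprisal(p):
    """Information content of an event with probability p, in bits."""
    return -math.log2(p)

def rank_hypotheses(hypotheses):
    """hypotheses: list of (label, match_probability).
    Returns labels ordered from lowest to highest surprisal."""
    scored = [(surprisal(p), label) for label, p in hypotheses]
    return [label for _, label in sorted(scored)]
```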

Automatic guiding method for self-propelled apparatus

An automatic guiding method for a self-propelled apparatus (10) is provided. The self-propelled apparatus (10) turns and emits light when a signal light emitted by a charging dock (20) is sensed by a flank sensor (103), and changes its turn direction when a different signal light from the charging dock (20) is sensed by a forward sensor (102). Each time the charging dock (20) is triggered by the light emitted by the self-propelled apparatus (10), it switches to emit a signal light different from the one currently emitted. These actions are repeated, causing the self-propelled apparatus (10) to approach the light-emitting unit (202) until it reaches a charging position. The method can accurately guide the self-propelled apparatus (10) to the charging position with only two sensors arranged on the apparatus.
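The convergence behaviour this handshake produces — the apparatus repeatedly overshoots the dock's light axis, reverses its turn, and closes in — resembles a bracketing search. The toy model below is a loose analogy under strong simplifying assumptions (1-D motion, the step halving on each reversal standing in for the dock's light switching); it is not the disclosed geometry:

```python
# Toy 1-D analogy of the alternating-light docking loop: the apparatus
# moves toward the dock, and each time it crosses the light axis
# (analogous to the forward sensor seeing the other signal light) it
# reverses direction and refines its approach.

def dock(position, dock_pos=0.0, step=1.0, iters=20):
    direction = 1.0 if dock_pos > position else -1.0
    for _ in range(iters):
        position += direction * step
        if (position - dock_pos) * direction > 0:  # crossed the light axis
            direction = -direction                 # change turn direction
            step /= 2                              # finer approach
    return position
```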

HANDHELD DEVICE FOR TRAINING AT LEAST ONE MOVEMENT AND AT LEAST ONE ACTIVITY OF A MACHINE, SYSTEM AND METHOD

Disclosed herein is a handheld device for training at least one movement and at least one activity of a machine. The handheld device may include a handle, an input unit configured to input activation information for activating the training of the machine, an output unit configured to output the activation information for activating the training of the machine to a device external to the handheld device, and a coupling structure for releasably coupling an interchangeable attachment configured according to the at least one activity.

AUTONOMOUS MANIPULATION OF FLEXIBLE PRIMARY PACKAGING IN DIMENSIONALLY STABLE SECONDARY PACKAGING BY MEANS OF ROBOTS

A system for automatically manipulating primary packaging in secondary packaging comprises: a robot having at least one robot arm with a clamping gripper installed at a tool centre point, each tool centre point having a force-torque sensor; an image recording module, comprising at least two stereo cameras for recording 3-D images, for recording images of at least the upper segment of the primary packaging; and one or more processors for providing a three-dimensional point cloud, controlling the image recording module, and controlling the robot on the basis of the analysis of the three-dimensional point cloud and the measurements from the force-torque sensors.
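One plausible use of the point cloud and force-torque readings can be sketched as follows: pick a grasp target from the upper segment of the (flexible) packaging and stop closing the gripper once the measured force exceeds a limit. The top-fraction heuristic and the force threshold are illustrative assumptions, not the disclosed control law:

```python
# Sketch: grasp-point selection from a 3-D point cloud plus a
# force-limited grip check. Heuristics and thresholds are assumptions.

def grasp_target(point_cloud, top_fraction=0.1):
    """point_cloud: list of (x, y, z) points. Returns the centroid of the
    highest `top_fraction` of points (the packaging's upper segment)."""
    pts = sorted(point_cloud, key=lambda p: p[2], reverse=True)
    top = pts[:max(1, int(len(pts) * top_fraction))]
    n = len(top)
    return tuple(sum(c) / n for c in zip(*top))

def grip_ok(force_reading, limit=5.0):
    """Keep closing the clamping gripper only while the force-torque
    sensor reads below the limit (protects flexible packaging)."""
    return force_reading < limit
```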

ADJUSTMENT SUPPORT SYSTEM AND ADJUSTMENT SUPPORT METHOD
20230011093 · 2023-01-12

An adjustment support system comprises an arithmetic unit and a storage unit. The storage unit stores sensor information, including features of a sensor that captures an image of a target, and imaging target information, including the dimensions, shape, and disposition of the sensor's imaging target. The arithmetic unit generates a plurality of candidates for the imaging position and posture of the imaging target and, for each candidate, determines whether or not positional deviation of the imaging target in a plurality of directions is detectable from a captured image obtained by the sensor, based on the sensor information and the imaging target information. The arithmetic unit then determines, from the plurality of candidates, the imaging position and posture at which the sensor actually captures an image of the target.
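The candidate-selection logic can be sketched as a filter over candidate imaging poses: keep only those from which target deviation is detectable in every required direction, then pick one. The detectability test below is a stand-in assumption — it checks that the projected pixel shift per millimetre of deviation exceeds a resolution threshold — and the pose identifiers are hypothetical:

```python
# Sketch: evaluate candidate imaging poses for deviation detectability.
# `pixels_per_mm` (projected pixel shift per mm of target deviation, per
# pose and direction) stands in for the sensor/target model.

def select_imaging_pose(candidates, pixels_per_mm, directions=("x", "y"), min_px=2.0):
    """candidates: list of pose ids. pixels_per_mm: dict mapping
    (pose, direction) -> pixel shift per mm of deviation. Returns the
    first pose detectable in all directions, or None."""
    for pose in candidates:
        if all(pixels_per_mm[(pose, d)] >= min_px for d in directions):
            return pose
    return None
```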

System and Method for Automated Movement of a Robotic Arm

A positioning system is provided that increases the accuracy and precision of the placement and insertion of components into elements. The system may utilize one or more sensors to provide individual images or data for each individual insertion of components into elements. The system may compare the individual images or data against known information to provide increased accuracy and precision for the insertion.
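The comparison step can be sketched as a per-insertion visual-offset correction: measure the component's position from the sensor image, compare it against the known reference position, and command a corrective offset to the arm. The frame conventions, the proportional gain, and the 2-D restriction are illustrative assumptions:

```python
# Sketch: per-insertion correction from sensed vs. reference position.
# Returns the (dx, dy) offset to add to the commanded arm position.

def insertion_correction(measured_xy, reference_xy, gain=1.0):
    dx = gain * (reference_xy[0] - measured_xy[0])
    dy = gain * (reference_xy[1] - measured_xy[1])
    return dx, dy
```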