G05B2219/40425

Utilizing Prediction Models of an Environment
20230008007 · 2023-01-12

A method, system and product for utilizing prediction models of an environment. In one embodiment, using a model of an environment and based on a first scene of the environment, a second scene of the environment is predicted. An observed second scene is obtained and compared to the predicted second scene. Based on the comparison between the predicted second scene and the observed second scene, an action is performed.
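The predict-compare-act loop described in the abstract can be sketched as follows. This is an illustrative toy, not the patented implementation: a constant-velocity model stands in for the environment model, and the action labels ("alert"/"continue") are made-up placeholders.

```python
# Toy predict-compare-act loop; the environment model here is a trivial
# constant-velocity extrapolation, assumed for illustration only.

def predict_scene(first_scene, velocity):
    """Predict the second scene by propagating the first scene forward."""
    return [p + v for p, v in zip(first_scene, velocity)]

def scenes_differ(predicted, observed, tol=0.5):
    """True when any element of the observation deviates beyond tol."""
    return any(abs(p - o) > tol for p, o in zip(predicted, observed))

def act_on_difference(predicted, observed, tol=0.5):
    # The action names are placeholders, not taken from the patent.
    return "alert" if scenes_differ(predicted, observed, tol) else "continue"

first = [0.0, 1.0, 2.0]        # e.g. tracked positions in the first scene
velocity = [1.0, 1.0, 1.0]     # assumed per-step displacement
predicted = predict_scene(first, velocity)
observed = [1.0, 2.0, 5.0]     # third position moved unexpectedly
action = act_on_difference(predicted, observed)  # → "alert"
```

In a real system the "action" could be re-planning, raising an alarm, or refining the model; the abstract leaves it open.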

Machine vision-based method and system for measuring 3D pose of a part or subassembly of parts

A machine vision-based method and system for measuring 3D pose of a part or subassembly of parts having an unknown pose are disclosed. A number of different applications of the method and system are disclosed including applications which utilize a reprogrammable industrial automation machine such as a robot. The method includes providing a reference cloud of 3D voxels which represent a reference surface of a reference part or subassembly having a known reference pose. Using at least one 2D/3D hybrid sensor, a sample cloud of 3D voxels which represent a corresponding surface of a sample part or subassembly of the same type as the reference part or subassembly is acquired. The sample part or subassembly has an actual pose different from the reference pose. The voxels of the sample and reference clouds are processed utilizing a matching algorithm to determine the pose of the sample part or subassembly.
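The matching step above can be illustrated with a minimal sketch, assuming a translation-only pose offset between sample and reference clouds. Centroid alignment is the first step of ICP-style registration; full 6-DOF pose recovery would also estimate rotation (e.g. via SVD of the cross-covariance matrix), which the sketch omits.

```python
# Minimal registration sketch: recover the translation that maps a
# sample voxel cloud onto a reference cloud via centroid alignment.
# Rotation estimation is deliberately omitted.

def centroid(cloud):
    n = len(cloud)
    return tuple(sum(p[i] for p in cloud) / n for i in range(3))

def estimate_translation(reference, sample):
    """Translation that maps the sample cloud onto the reference cloud."""
    cr, cs = centroid(reference), centroid(sample)
    return tuple(cr[i] - cs[i] for i in range(3))

reference = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
offset = (2.0, -1.0, 0.5)                       # unknown actual pose shift
sample = [tuple(p[i] + offset[i] for i in range(3)) for p in reference]
t = estimate_translation(reference, sample)     # recovers (-2.0, 1.0, -0.5)
```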

System, method and product for utilizing prediction models of an environment

A first method comprising: predicting a scene of an environment using a model of the environment and based on a first scene of the environment obtained from sensors observing scenes of the environment; comparing the predicted scene with an observed scene from the sensors; and performing an action based on differences determined between the predicted scene and the observed scene. A second method comprising: applying a vibration stimulus to an object via a computer-controlled component; and obtaining a plurality of images depicting the object from a same viewpoint, captured during the application of the vibration stimulus. The second method further comprises comparing the plurality of images to detect changes occurring in response to the application of the vibration stimulus, which changes are attributed to a change of a location of a boundary of the object; and determining the boundary of the object based on the comparison.
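The second method's boundary detection can be sketched as image differencing across frames captured during vibration. This is a hedged toy: images are 1-D "rows" of pixel intensities rather than 2-D frames, and the boundary rule (extremes of the changed region) is an assumption, not the disclosed algorithm.

```python
# Toy vibration-based boundary detection: pixels whose intensity varies
# across the frame sequence are attributed to the moving object edge.

def changed_pixels(frames, tol=0):
    """Indices whose intensity varies across the frame sequence."""
    width = len(frames[0])
    return [i for i in range(width)
            if max(f[i] for f in frames) - min(f[i] for f in frames) > tol]

def boundary_from_changes(changes):
    """Take the extremes of the changed region as the object boundary."""
    return (min(changes), max(changes)) if changes else None

# Object occupies pixels 3-6; vibration shifts its edges by one pixel.
frames = [
    [0, 0, 0, 9, 9, 9, 9, 0, 0, 0],
    [0, 0, 9, 9, 9, 9, 0, 0, 0, 0],
    [0, 0, 0, 9, 9, 9, 9, 0, 0, 0],
]
boundary = boundary_from_changes(changed_pixels(frames))  # → (2, 6)
```

Interior pixels stay saturated throughout the vibration, so only the edges register as changes, which is why the method attributes the changes to the object boundary.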

DETECTION SYSTEM, TRANSPORT SYSTEM, DETECTION METHOD, AND DETECTION PROGRAM

An embodiment of the present disclosure relates to a detection system used in a case where a transport robot transports a package in a state where the package, in which an opening portion of a container is closed by a lid, is placed on a placing portion of the transport robot. The detection system is configured to detect that the lid has been opened.
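One simple way such a detector could work, sketched here under stated assumptions (the abstract does not specify the sensing principle): compare a distance reading over the container's opening against the reading recorded with the lid closed, and flag a sizeable deviation as "lid open". The baseline and threshold values are made up.

```python
# Hypothetical lid-open detector: deviation of a sensor reading from a
# lid-closed baseline signals that the lid has opened during transport.

CLOSED_BASELINE_MM = 120.0   # assumed sensor-to-lid distance, lid closed
OPEN_THRESHOLD_MM = 15.0     # assumed tolerance before flagging "open"

def lid_is_open(reading_mm,
                baseline_mm=CLOSED_BASELINE_MM,
                threshold_mm=OPEN_THRESHOLD_MM):
    """True when the reading deviates beyond threshold from the baseline."""
    return abs(reading_mm - baseline_mm) > threshold_mm

status_closed = lid_is_open(121.0)   # → False (within tolerance)
status_open = lid_is_open(160.0)     # → True  (lid displaced)
```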

Automatic Teaching System

Provided is an automatic teaching system that readily achieves automation, even when a small number of varied processing objects are to undergo polishing or coating. The automatic teaching system includes a three-dimensional shape measurement apparatus, a reference marker, an image analysis apparatus, and a robot control device. The three-dimensional shape measurement apparatus acquires shape data of a processing target region on a processing object relative to the reference marker. The image analysis apparatus divides the shape data of the processing target region into a plurality of continuous reference surfaces in accordance with a predetermined algorithm, automatically generates, for every reference surface, a program of an operation path along which a polishing apparatus or coating apparatus of the robot is to be operated, in accordance with a predetermined operation path generation rule, and transmits the program of the operation path to the robot control device.
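The per-surface path generation can be sketched as follows, under two assumptions not in the abstract: each reference surface is represented as a grid of points, and the operation-path rule is a simple boustrophedon (zigzag) raster. The actual segmentation algorithm and path-generation rule are the patent's "predetermined" ones and are not reproduced here.

```python
# Sketch of per-reference-surface path generation: one zigzag raster
# path per surface grid, collected into a teaching program.

def zigzag_path(grid):
    """Raster the grid row by row, reversing every other row."""
    path = []
    for r, row in enumerate(grid):
        path.extend(reversed(row) if r % 2 else row)
    return path

def teach_program(surfaces):
    """One operation path per reference surface, as the abstract describes."""
    return [zigzag_path(s) for s in surfaces]

surface = [[(0, 0), (1, 0)],
           [(0, 1), (1, 1)]]
program = teach_program([surface])
# program[0] == [(0, 0), (1, 0), (1, 1), (0, 1)]
```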

FRAMEWORK OF ROBOTIC ONLINE MOTION PLANNING
20220063099 · 2022-03-03

A robot motion planning technique using an external computer communicating with a robot controller. A camera or sensor system provides input scene information including start and goal points and obstacle data to the computer. The computer plans a robot tool motion based on the start and goal points and the obstacle environment, where the robot motion is planned using either a serial or parallel combination of sampling-based and optimization-based planning algorithms. In the serial combination, the sampling method first finds a feasible path, and the optimization method then improves the path quality. In the parallel combination, both sampling and optimization methods are used, and a path is selected based on computation time, path quality and other factors. The computer converts dense planned waypoints to sparse command points for transfer to the robot controller, and the controller computes robot kinematics and interpolation points and controls the movement of the robot.
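The serial combination and the waypoint-thinning step can be sketched as a pipeline. Everything here is a stand-in: a straight-line waypoint generator plays the role of the sampling-based planner, a neighbour-averaging pass plays the role of the optimization-based planner, and a fixed-stride downsampler converts dense waypoints to sparse command points.

```python
# Illustrative serial planning pipeline (assumed structure, not the
# disclosed algorithms), shown in 1-D for brevity.

def sample_feasible_path(start, goal, n=9):
    """Stand-in for a sampling-based planner: straight-line waypoints."""
    return [start + (goal - start) * i / (n - 1) for i in range(n)]

def optimize_path(path):
    """Stand-in smoothing pass for the optimization-based planner."""
    out = [path[0]]
    for i in range(1, len(path) - 1):
        out.append((path[i - 1] + path[i] + path[i + 1]) / 3)
    out.append(path[-1])
    return out

def to_command_points(dense, stride=4):
    """Thin dense planned waypoints into sparse controller commands."""
    sparse = dense[::stride]
    if sparse[-1] != dense[-1]:
        sparse.append(dense[-1])
    return sparse

dense = optimize_path(sample_feasible_path(0.0, 8.0))
commands = to_command_points(dense)  # → [0.0, 4.0, 8.0]
```

The division of labour matches the abstract: the planner side produces dense high-quality waypoints, while the controller receives only sparse command points and fills in kinematics and interpolation itself.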

Robotic system for grasping objects

A method is provided for grasping randomly sized and randomly located objects. The method may include assigning a score associated with the likelihood of successfully grasping an object. Other features of the method may include orientation of the end effector, a reachability check, and crash recovery.
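Score-based grasp selection with a reachability check can be sketched as below. The score weights, the candidate features, and the reach limit are all illustrative assumptions; the abstract only states that a score reflects the likelihood of a successful grasp.

```python
# Hypothetical grasp selection: rank candidates by an assumed score,
# discard those failing a reachability check, pick the best survivor.

REACH_LIMIT = 0.8   # assumed max distance (m) the end effector can reach

def reachable(candidate):
    return candidate["distance"] <= REACH_LIMIT

def grasp_score(candidate):
    """Higher is better: prefer flat, close, unoccluded objects."""
    return (candidate["flatness"] * 0.5
            + (1.0 - candidate["distance"]) * 0.3
            + (1.0 - candidate["occlusion"]) * 0.2)

def select_grasp(candidates):
    feasible = [c for c in candidates if reachable(c)]
    return max(feasible, key=grasp_score) if feasible else None

candidates = [
    {"id": "a", "flatness": 0.9, "distance": 0.4, "occlusion": 0.1},
    {"id": "b", "flatness": 0.95, "distance": 0.9, "occlusion": 0.0},  # too far
    {"id": "c", "flatness": 0.5, "distance": 0.3, "occlusion": 0.6},
]
best = select_grasp(candidates)   # → candidate "a"
```

Returning None when no candidate is feasible is one possible hook for the crash-recovery behaviour the abstract mentions, e.g. retreating and re-imaging the scene.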

Methods for dispensing a liquid or viscous material onto a substrate

Systems and methods for dispensing a liquid or viscous material onto a substrate are disclosed herein. One exemplary method of positioning an applicator of a dispensing system to apply a liquid or viscous material to an electronic substrate includes generating a two-dimensional image of the electronic substrate using a camera communicatively connected to the dispensing system. Based on the two-dimensional image of the electronic substrate, a first set of one or more sub-regions of the electronic substrate having one or more components that protrude above the surface of the electronic substrate is identified. The method further includes using height information relating to the one or more sub-regions having the one or more components to determine a control program for the dispensing system to position the applicator relative to the electronic substrate and dispense the liquid or viscous material onto the electronic substrate.
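The sub-region identification and control-program generation can be sketched with a 2-D height map standing in for the camera image plus height information. The clearance values and the move format are made-up illustrations; the abstract does not specify how the applicator avoids protruding components.

```python
# Hedged sketch: cells above the board surface are protruding
# sub-regions; the "control program" lifts the applicator over them
# and dispenses only on bare board. All heights in mm, values assumed.

SURFACE_HEIGHT = 0.0     # assumed board surface height
SAFE_CLEARANCE = 2.0     # assumed applicator clearance over components
DISPENSE_HEIGHT = 0.5    # assumed nozzle height while dispensing

def protruding_cells(height_map):
    """Sub-regions whose height exceeds the board surface."""
    return {(r, c)
            for r, row in enumerate(height_map)
            for c, h in enumerate(row) if h > SURFACE_HEIGHT}

def control_program(height_map):
    """One move per cell: dispense on bare board, lift over components."""
    raised = protruding_cells(height_map)
    prog = []
    for r, row in enumerate(height_map):
        for c, h in enumerate(row):
            z = h + SAFE_CLEARANCE if (r, c) in raised else DISPENSE_HEIGHT
            prog.append((r, c, z, (r, c) not in raised))  # flag: dispense?
    return prog

height_map = [[0.0, 1.5],
              [0.0, 0.0]]
program = control_program(height_map)
# cell (0, 1) is a 1.5 mm component: applicator lifts to 3.5 mm, no dispense
```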
