Patent classifications
G05B2219/37572
Information processing apparatus, control method, robot system, and storage medium
The information processing apparatus includes an estimation unit that estimates information indicating a holding success possibility from an image of a plurality of target objects, using a pre-trained model that estimates this information for one or more partial regions; a determination unit that determines, based on the information indicating the holding success possibility, a holding region among the partial regions for the robot to hold the target object; and a control unit that moves the robot based on the holding region and causes the robot to perform a holding operation on the target object. In a case where the holding operation performed on the target object by the robot fails, the determination unit determines, based on the information indicating the holding success possibility, a partial region among the partial regions that satisfies a predetermined condition as the next holding region.
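The fallback behavior described above can be sketched as a ranking over partial regions. This is a minimal illustration, not the patented method: the region ids, the probability threshold, and the "highest remaining probability" rule are all assumptions standing in for the unspecified predetermined condition.

```python
def next_holding_region(regions, failed_ids, threshold=0.5):
    """Return the best remaining (region_id, probability) pair, or None.

    regions    -- dict mapping region id to estimated success probability
    failed_ids -- set of region ids already attempted and failed
    threshold  -- hypothetical predetermined condition: minimum probability
    """
    candidates = [
        (rid, p) for rid, p in regions.items()
        if rid not in failed_ids and p >= threshold
    ]
    if not candidates:
        return None
    # Pick the candidate with the highest estimated success probability.
    return max(candidates, key=lambda c: c[1])
```

For example, after a failed grasp on region "a", `next_holding_region({"a": 0.9, "b": 0.7, "c": 0.3}, {"a"})` selects region "b"; region "c" is rejected because it falls below the threshold.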
DETERMINING A POSITION OF A BUILDING PLATFORM WITHIN A PROCESS CHAMBER OF AN ADDITIVE MANUFACTURING DEVICE
A method determines position data of a building platform at a support plate of an additive manufacturing device having scanner optics for scanning a laser. The support plate has holes that receive a holder, marks on the plate, and receptors for receiving laser target parts. A first position dataset is obtained with a position of a holder inserted in a hole with respect to the marks. After mounting the support plate and inserting the platform into the holder, a laser mark is marked on the laser target parts using the laser at laser mark positions in the scanner optics' coordinate system. A pre-manufacturing image of the support plate is acquired with the laser marks on the laser target parts. A second position dataset having positions of the marks with respect to the laser marks is obtained from the pre-manufacturing image. The position data is determined from the position datasets and the laser mark positions.
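The final determination step chains the two datasets: the holder's position relative to the marks, and the marks' positions relative to the laser marks, whose positions are known in the scanner coordinate system. A minimal sketch of that chaining, assuming pure translations (real calibration would also account for rotation and scale between frames):

```python
def platform_position(laser_mark_pos, mark_rel_laser_mark, holder_rel_mark):
    """Express the holder (and hence the inserted building platform) in the
    scanner optics' coordinate system by chaining the two position datasets.

    All arguments are (x, y) tuples; this simple vector addition is an
    illustrative simplification that ignores rotation between frames.
    """
    x = laser_mark_pos[0] + mark_rel_laser_mark[0] + holder_rel_mark[0]
    y = laser_mark_pos[1] + mark_rel_laser_mark[1] + holder_rel_mark[1]
    return (x, y)
```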
Component supply system
A loose component supply device in which components scattered on a stage are imaged by a camera. Because the viewing angle of the camera is α > 0, a side surface of a component is imaged. Based on the image data of the component, an index value specifying the size of the side surface of the component is calculated. If the calculated index value matches a set value, it is determined that the component can be held. In other words, when there is a certain distance between multiple components, imaging a side surface of a component and finding that the index value specifying the size of that side surface matches the set value indicates that the component can be held.
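The hold decision reduces to a tolerance check between the measured index value and the set value. A minimal sketch, where the relative tolerance is an assumption (the abstract only says the values must "match"):

```python
def can_hold(index_value, set_value, tolerance=0.05):
    """Deem the component holdable when the side-surface size index matches
    the set value within a relative tolerance (hypothetical criterion)."""
    return abs(index_value - set_value) <= tolerance * set_value
```

A measured index of 1.02 against a set value of 1.0 matches within 5 %, while 1.2 does not.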
Fixtureless component assembly
A method of assembling a plurality of subcomponents to form a finished component comprises gripping a first subcomponent with a first end-of-arm tool attached to a first robot arm, and grasping a second subcomponent with a second end-of-arm tool attached to a second robot arm. The method further comprises moving the first and second end-of-arm tools to position the first subcomponent relative to the second subcomponent in a pre-assembly position, then moving the first and second end-of-arm tools to engage interface surfaces of the first and second subcomponents, and forming a joint between the first subcomponent and the second subcomponent with a joining tool attached to a joining robot arm, thereby assembling the finished component.
System and method for controlling appliances using motion gestures
A method and system of controlling an appliance includes: receiving, from a first home appliance, a request to start video image processing for detecting a motion gesture of a user; processing a sequence of image frames captured by a camera corresponding to the first home appliance to identify a first motion gesture; selecting a second home appliance as a target home appliance for the first motion gesture in accordance with one or more target selection criteria, including first target selection criteria based on a location of the user relative to the first home appliance and second target selection criteria based on a level of match between the first motion gesture and a first control gesture corresponding to the second home appliance; and generating a control command to control the second home appliance in accordance with the first control gesture corresponding to the second home appliance.
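The two target selection criteria can be sketched as a filter-then-score pass: appliances outside a proximity radius of the user are excluded, and among the rest the appliance whose control gesture best matches the observed motion gesture wins. Appliance names, coordinates, and the distance cutoff below are illustrative assumptions.

```python
import math

def select_target(appliances, user_location, gesture_scores, max_distance=5.0):
    """Pick a target appliance for a recognized motion gesture.

    appliances     -- dict mapping appliance name to (x, y) location
    user_location  -- (x, y) position of the user
    gesture_scores -- dict mapping appliance name to gesture-match score [0, 1]
    """
    best, best_score = None, 0.0
    for name, (x, y) in appliances.items():
        d = math.hypot(x - user_location[0], y - user_location[1])
        if d > max_distance:
            continue  # first criterion: user must be near the appliance
        score = gesture_scores.get(name, 0.0)  # second criterion: gesture match
        if score > best_score:
            best, best_score = name, score
    return best
```

With a user at the origin, a distant lamp is filtered out even if its gesture score is higher, and the nearby television is selected.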
Device and method for calibrating an irradiation system of an apparatus for producing a three-dimensional work piece
A device for calibrating an irradiation system of an apparatus for producing a three-dimensional work piece includes a control unit to control the irradiation system so as to irradiate a radiation beam onto an irradiation plane according to a calibration pattern. The device also includes a sensor arrangement arranged in the irradiation plane to output signals to the control unit in response to being irradiated with the radiation beam according to the calibration pattern. The control unit generates a digital image of an actual irradiation pattern produced by the radiation beam incident on the sensor arrangement based on the signals output by the sensor arrangement, compares the digital image of the actual irradiation pattern with a digital image of a reference pattern so as to determine a deviation between the actual irradiation pattern and the reference pattern, and calibrates the irradiation system based on the determined deviation between the actual irradiation pattern and the reference pattern.
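One simple way to quantify the deviation between the actual irradiation pattern and the reference pattern is a centroid offset between the two digital images. This is a hedged sketch of that comparison step only; the patent does not specify the deviation metric, and a real system would compare full patterns, not just centroids.

```python
def pattern_deviation(actual, reference):
    """Return the (dx, dy) offset of the actual pattern's centroid relative
    to the reference pattern's centroid.

    Each pattern is a 2-D list of pixel intensities (e.g. 0/1).
    """
    def centroid(img):
        xs = ys = total = 0
        for y, row in enumerate(img):
            for x, v in enumerate(row):
                xs += x * v
                ys += y * v
                total += v
        return xs / total, ys / total

    ax, ay = centroid(actual)
    rx, ry = centroid(reference)
    return ax - rx, ay - ry
```

A one-pixel rightward shift of the beam spot yields a deviation of (1.0, 0.0), which the control unit could feed back as a scanner offset correction.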
Hydroponics System and Method
A hydroponics system that uses image processing techniques to automatically manage nutrients is disclosed. The hydroponics system allows small amounts of nutrients to be added to circulated solution on a frequent basis. Images of plants located in a growing area are processed by a computer processor to determine whether the plants are growing. If the plants are growing, the computer processor causes more nutrients to be added to the system than if plants are not growing.
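The dosing rule can be sketched by comparing plant pixel areas between two successive images: if the area grew beyond a small threshold, a larger dose is added. The dose values and growth threshold are illustrative assumptions, not figures from the patent.

```python
def nutrient_dose(prev_area, curr_area, base_dose=1.0, growth_factor=2.0,
                  growth_threshold=0.02):
    """Return the nutrient amount to add, based on plant area change.

    prev_area, curr_area -- plant pixel areas from consecutive images
    base_dose            -- amount added when no growth is detected
    growth_factor        -- multiplier applied when the plants are growing
    """
    if prev_area == 0:
        return base_dose
    growth = (curr_area - prev_area) / prev_area
    return base_dose * growth_factor if growth > growth_threshold else base_dose
```

A 10 % area increase doubles the dose; a static canopy keeps it at the base amount, matching the frequent-small-additions scheme described above.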
METHOD AND SYSTEM FOR DETECTION OF AN ABNORMAL STATE OF A MACHINE
An object recognition apparatus for automatic detection of an abnormal operation state of a machine including a machine tool operated in an operation space monitored by at least one camera configured to generate camera images of a current operation scene is provided. The generated camera images are supplied to a processor configured to analyze the current operation scene using a trained artificial intelligence module to detect objects present within the current operation scene. The processor is also configured to compare the detected objects with objects expected in an operation scene in a normal operation state of the machine to detect an abnormal operation state of the machine.
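The comparison between detected and expected objects amounts to two set differences: objects present that should not be, and objects missing that should be present. A minimal sketch (object labels are illustrative):

```python
def detect_abnormal_state(detected, expected):
    """Compare detector output against the expected scene.

    Returns (unexpected, missing) as sets; either being non-empty
    flags an abnormal operation state of the machine.
    """
    detected, expected = set(detected), set(expected)
    return detected - expected, expected - detected
```

For instance, a glove detected inside the operation space that is absent from the normal-state scene would appear in the `unexpected` set.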
ADAPTIVE AUTOMATIC FILLING SYSTEMS FOR BEVERAGE DISPENSERS
A system for dispensing a beverage into a cup. Dispensing hardware is configured to dispense the beverage into the cup. A vision system captures an image of the cup within a dispensing area of the system. A model contains data corresponding to the cup being full. A control system compares the image of the cup to the model and controls the dispensing hardware to dispense the beverage into the cup based on the comparison of the image of the cup to the model.
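The control decision reduces to comparing an estimated fill level from the cup image against the full level from the model. A minimal sketch, where the fractional levels and the tolerance are assumptions:

```python
def should_stop_dispensing(fill_level, model_full_level, tolerance=0.02):
    """Decide whether to stop the dispensing hardware.

    fill_level       -- liquid-height fraction estimated from the cup image
    model_full_level -- 'full' fraction taken from the cup model
    tolerance        -- margin to stop slightly early and avoid overflow
    """
    return fill_level >= model_full_level - tolerance
```

A cup imaged at 99 % of the modeled full level stops the pour; at 50 % dispensing continues.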
Machine vision method and system for monitoring manufacturing processes
The invention relates to a method, a computer program product, and a machine vision system (30) comprising at least one lighting device (34), at least one image sensor (31a-c), and a data processing device (32). In a first mode, while a first object (35) is on a second object (33), the system illuminates the first object using a first type of illumination, captures images of the first object at a first image capturing frequency, and transmits the captured image data to the data processing device for analysis. If absence of the first object on the second object is detected from the image data, the system changes to a second mode for monitoring the second object, in which said at least one image sensor (31a-c) is reconfigured to capture images of the second object at a second image capturing frequency.
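The mode switch above can be sketched as a small state machine keyed on presence of the first object; the specific capture frequencies are illustrative assumptions, not values from the patent.

```python
class VisionMonitor:
    """Two-mode monitoring sketch: while the first object is present it is
    imaged at a high frequency; once it is absent, the sensor is
    reconfigured to watch the second object at a lower frequency."""

    def __init__(self, first_hz=60.0, second_hz=5.0):
        self.first_hz, self.second_hz = first_hz, second_hz
        self.mode = 1  # start in the first mode

    def update(self, first_object_present):
        """Switch mode from the latest detection result and return the
        image capturing frequency the sensor should now use."""
        self.mode = 1 if first_object_present else 2
        return self.first_hz if self.mode == 1 else self.second_hz
```

Calling `update(False)` after the first object leaves the second object drops the capture frequency from the first to the second rate.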