Patent classifications
G05B2219/40323
HANDLING ASSEMBLY COMPRISING A HANDLING DEVICE FOR CARRYING OUT AT LEAST ONE WORK STEP, METHOD, AND COMPUTER PROGRAM
A handling assembly having a handling device for carrying out at least one working step with and/or on a workpiece in a working region of the handling device, with stations situated in the working region; with at least one monitoring sensor for optically monitoring the working region and providing the result as monitoring data; and with a localization module designed to recognize the stations and to determine a station position for each of the stations.
Robotic arm processing method and system based on 3D image
A robotic arm processing method and system based on a 3D image are provided. The processing method includes: providing robotic arm 3D model data and processing environment 3D model data; obtaining workpiece 3D model data, and generating a processing path consisting of contact points according to the workpiece 3D model data, wherein a free end of a robotic arm moves along the processing path to complete a processing procedure; generating a posture candidate group for each of the contact points according to its relationship to the free end of the robotic arm; selecting an actual moving posture from the posture candidate group; moving the free end of the robotic arm to each corresponding one of the contact points according to the selected actual moving posture; and moving the free end of the robotic arm along the processing path according to the actual moving postures to perform the processing procedure.
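The candidate-selection step of this abstract can be sketched as follows. The candidate generator (sampling approach angles per contact point) and the cost function (smallest change in approach angle from the previous posture) are illustrative assumptions, not details taken from the patent:

```python
import math

def candidate_postures(contact_point):
    # Hypothetical generator: sample four approach angles around the
    # contact point; a real system would derive these from reachability.
    x, y, z = contact_point
    return [(x, y, z, math.radians(a)) for a in range(0, 360, 90)]

def select_moving_postures(path):
    """For each contact point on the processing path, pick the candidate
    posture closest in approach angle to the previously chosen posture."""
    chosen = []
    prev_angle = 0.0
    for point in path:
        group = candidate_postures(point)
        best = min(group, key=lambda p: abs(p[3] - prev_angle))
        chosen.append(best)
        prev_angle = best[3]
    return chosen

# Three contact points along a straight processing path (assumed data).
path = [(0.0, 0.0, 0.1), (0.0, 0.05, 0.1), (0.0, 0.1, 0.1)]
postures = select_moving_postures(path)
```

With this cost, consecutive postures keep the same approach angle whenever the candidate groups allow it, which is one plausible reading of "selecting an actual moving posture from the posture candidate group".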
Method for Creating an Object Map for a Factory Environment
The disclosure relates to a method for creating an object map for a factory environment by using sensors present in the factory environment, wherein at least one part of an object in the factory environment has information relating to its position recorded by at least two of the sensors, wherein the information recorded by the sensors is transmitted to a server associated with the sensors, and wherein the server is used to take the information recorded and transmitted by the sensors as a basis for creating an object map for the factory environment having a position of the at least one part of the object.
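The server-side fusion described here might look like the following sketch, where reports from at least two sensors on the same object are averaged into a map entry (the report format and the averaging rule are assumptions for illustration):

```python
from collections import defaultdict

def build_object_map(sensor_reports):
    """Server-side fusion: each report is (sensor_id, object_id, (x, y)).
    Only objects recorded by at least two sensors enter the object map,
    matching the abstract's requirement; positions are averaged."""
    seen = defaultdict(list)
    for _sensor, obj, pos in sensor_reports:
        seen[obj].append(pos)
    object_map = {}
    for obj, positions in seen.items():
        if len(positions) >= 2:  # require at least two sensor recordings
            n = len(positions)
            object_map[obj] = (sum(p[0] for p in positions) / n,
                               sum(p[1] for p in positions) / n)
    return object_map

# Two cameras see the pallet; only one sees the crate, so it is excluded.
reports = [("cam1", "pallet", (1.0, 2.0)),
           ("cam2", "pallet", (3.0, 2.0)),
           ("cam1", "crate", (5.0, 5.0))]
object_map = build_object_map(reports)
```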
VOLUMETRIC SUBSTITUTION OF REAL WORLD OBJECTS
Implementations of the present disclosure provide techniques for presenting the objects depicted in an image of a scene in a way that improves perception of each object within the scene. Some implementations include obtaining an image of a scene; identifying an object within the image of the scene; obtaining a particular three-dimensional model that corresponds to the object; generating or updating a three-dimensional representation of the scene based at least on the particular three-dimensional model of the object; and providing at least a portion of the three-dimensional representation of the scene that was generated or updated based on the three-dimensional model of the object to a scene analyzer. The three-dimensional representation of the scene can include data indicating an attribute of the object that is not visible in, or not directly derivable from, the image of the scene.
AUTOMATIC POSITIONING METHOD AND AUTOMATIC CONTROL DEVICE
An automatic positioning method and an automatic control device are provided. The automatic control device includes a processing unit, a memory unit, and a camera unit to automatically control a robotic arm. When the processing unit executes a positioning procedure, the camera unit obtains a first image of the robotic arm. The processing unit analyzes the first image to establish a three-dimensional working environment model and obtains first spatial positioning data. The processing unit controls the robotic arm to move a plurality of times to sequentially obtain a plurality of second images of the robotic arm by the camera unit and analyzes the second images and encoder information of the robotic arm to obtain second spatial positioning data. The processing unit determines whether an error parameter between the first spatial positioning data and the second spatial positioning data is less than a specification value to end the positioning procedure.
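The termination test of the positioning procedure reduces to comparing an error parameter against a specification value. A minimal sketch, assuming a mean absolute per-axis deviation as the error parameter (the abstract does not specify the metric):

```python
def positioning_error(first, second):
    # Mean absolute per-axis deviation between the camera-derived
    # positioning data and the encoder-derived positioning data.
    return sum(abs(a - b) for a, b in zip(first, second)) / len(first)

def positioning_converged(first, second, spec=0.001):
    """End the positioning procedure once the error parameter between
    the first and second spatial positioning data is below the spec."""
    return positioning_error(first, second) < spec
```

In the abstract's loop, the robotic arm would be moved and re-imaged until `positioning_converged` returns true.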
Robot system path planning for asset health management
A robotic system includes at least one robot and a processing system comprising at least one processor. The processor generates a plan to monitor an asset. The plan comprises one or more tasks to be performed by the at least one robot. The processor receives sensor data from at least one sensor indicating one or more characteristics of the asset. The processor adjusts the plan to monitor the asset by adjusting or adding one or more tasks to the plan based on one or both of the quality of the acquired data or a potential defect of the asset. The adjusted plan, when executed, causes the at least one robot to acquire additional data related to the asset.
Determining robot inertial properties
Methods and systems for modifying the inertial parameters used in a virtual robot model that simulates the interactions of a real-world robot with an environment to better reflect the actual inertial properties of the real-world robot. In one aspect, a method includes obtaining joint physical parameter measurements for the joints of a real-world robot, determining simulated joint physical parameter values for each of the joint physical parameter measurements, and adjusting an estimate of inertial properties of the real-world robot used by the virtual robot dynamic model to reduce a difference between the simulated joint physical parameter values and the corresponding joint physical parameter measurements.
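The adjustment step (reducing the gap between simulated and measured joint values) can be illustrated with a deliberately simplified dynamics model: a single-link gravity-torque model in which the only inertial parameter is the link mass, fitted by least squares. The model and the parameter are assumptions for illustration; the patent covers general inertial properties:

```python
import math

def simulated_torque(mass, arm_length, angle, g=9.81):
    # Gravity torque of a single-link pendulum (illustrative dynamics
    # standing in for the virtual robot dynamic model).
    return mass * g * arm_length * math.sin(angle)

def fit_mass(measurements, arm_length, g=9.81):
    """Least-squares estimate of the link mass from (angle, torque)
    pairs, minimizing the difference between simulated joint torques
    and the corresponding measured joint torques."""
    num = sum(t * g * arm_length * math.sin(a) for a, t in measurements)
    den = sum((g * arm_length * math.sin(a)) ** 2 for a, _ in measurements)
    return num / den

# Synthetic measurements generated from a true mass of 2.0 kg.
arm_length = 0.5
measurements = [(a, simulated_torque(2.0, arm_length, a))
                for a in (0.3, 0.6, 0.9)]
estimated_mass = fit_mass(measurements, arm_length)
```

On noise-free data the estimate recovers the true parameter exactly; with real measurements the same fit reduces, rather than eliminates, the sim-to-real difference.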
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD, COMPUTER PROGRAM, AND PROGRAM MANUFACTURING METHOD
Provided is an information processing apparatus used for program development. The information processing apparatus includes: a holding unit that holds a behavior verification scenario defining the order of operations to be invoked; and an evaluation unit that compares the operations the program sequentially invokes with the behavior verification scenario to evaluate or verify the program. A program execution control unit drives the program according to environment information input from the outside, and the evaluation unit compares the operations sequentially invoked by the program execution control unit according to the environment information with the order of operations defined in the behavior verification scenario to evaluate or verify the program.
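The comparison the evaluation unit performs can be sketched as an order check: the scenario's operations must appear in the invoked trace in the defined order. Treating the scenario as a subsequence (so that extra intermediate operations are tolerated) is an assumption; a stricter apparatus could require an exact match:

```python
def verify_behavior(invoked_ops, scenario):
    """Return True if the operations invoked by the program contain the
    behavior verification scenario as an in-order subsequence."""
    it = iter(invoked_ops)
    # Membership tests on an iterator consume it, so each scenario step
    # must be found strictly after the previous one.
    return all(step in it for step in scenario)
```

For example, a trace that interleaves logging between the scenario's steps still verifies, while a trace that releases before moving does not.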
ROBOT PROGRAM GENERATION APPARATUS
Provided is a robot program generation apparatus including: means for selecting a typical arrangement pattern of a robot system; means for individually selecting elements to be arranged in the arrangement pattern; means for automatically generating a layout where the elements in a stationary state do not interfere with each other; means for automatically generating a robot program in accordance with task details corresponding to the arrangement pattern and with the layout; means for executing the robot program in a virtual environment and automatically modifying installation positions of the elements in the layout based on whether or not the robot in an operating state interferes with any other elements and on whether or not the robot can reach a workpiece; and means for correcting the robot program based on the installation positions.
METHOD AND SYSTEM TO GENERATE A 3D MODEL FOR A ROBOT SCENE
A robot is configured to perform a task on an object using a method for generating a 3D model sufficient to determine a collision-free path and identify the object in an industrial scene. The method includes determining a predefined collision-free path and scanning an industrial scene around the robot. Stored images of the industrial scene are retrieved from a memory and analyzed to construct a new 3D model. After an object is detected in the new 3D model, the robot can further scan the industrial scene while moving along a collision-free path until the object is identified at a predefined certainty level. The robot can then perform a robot task on the object.
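The scan-until-identified loop at the core of this abstract can be sketched as follows, where `scan_views` stands in for a hypothetical detector yielding a certainty score at each collision-free viewpoint (both the interface and the threshold value are assumptions):

```python
def identify_object(scan_views, threshold=0.9):
    """Keep scanning from successive viewpoints along the collision-free
    path until the object is identified at the predefined certainty
    level; return None if the path is exhausted first."""
    for viewpoint, certainty in scan_views:
        if certainty >= threshold:
            return viewpoint, certainty
    return None  # object never reached the required certainty

# Certainty improves as the robot moves to better vantage points.
views = [("pose_a", 0.4), ("pose_b", 0.7), ("pose_c", 0.95)]
result = identify_object(views)
```

Only once `identify_object` returns a viewpoint would the robot proceed to perform its task on the object.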