Patent classifications
G05B2219/40613
Arrangement and Method for the Model-Based Calibration of a Robot in a Working Space
An arrangement for the model-based calibration of a mechanism in a workspace uses calibration objects that are either directed laser radiation patterns, together with an associated laser radiation-pattern generator, or radiation-pattern position sensors. Functional operation groups, each made up of at least one laser radiation pattern and at least one position sensor, interact such that, when a radiation pattern impinges on the sensor, measured sensor-position values are passed to computing devices that determine the parameters of a mathematical model of the mechanism from these measured values. At least two different functional operation groups are used to calibrate the mechanism, and at least two calibration objects from different functional operation groups are rigidly connected to one another.
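The parameter-fitting step described above can be sketched as a least-squares estimate. This is a minimal illustration, not the patented method: it assumes the only model error is a constant position offset, and the function name and data are invented for the example.

```python
import numpy as np

def calibrate_offset(nominal_positions, measured_positions):
    """Estimate a constant position offset in the mechanism model.

    For a pure translation error, the least-squares solution is simply
    the mean residual between measured and model-predicted positions.
    """
    nominal = np.asarray(nominal_positions, dtype=float)
    measured = np.asarray(measured_positions, dtype=float)
    residuals = measured - nominal           # one row per functional group
    return residuals.mean(axis=0)            # fitted model parameters (dx, dy)

# Sensor readings collected from different functional operation groups
nominal  = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
measured = [[0.1, 0.2], [1.1, 0.2], [0.1, 1.2]]
print(calibrate_offset(nominal, measured))   # ~[0.1, 0.2]
```

A real calibration would fit a full kinematic model (link lengths, joint offsets, orientations) with nonlinear least squares, but the data flow — sensor measurements in, model parameters out — is the same.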
Appearance inspection system, setting device, image processing device, inspection method, and program
To provide an appearance inspection system capable of reducing the labor a designer spends setting imaging conditions when a plurality of inspection target positions on a target are sequentially imaged. An appearance inspection system includes an imaging condition decision part and a route decision part. The imaging condition decision part decides a plurality of imaging-condition candidates, each including a relative position between a workpiece and an imaging device, for at least one inspection target position among the plurality of inspection target positions. The route decision part decides a change route of the imaging condition for sequentially imaging the plurality of inspection target positions by selecting one imaging condition from the plurality of candidates so that a pre-decided requirement is satisfied.
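One plausible "pre-decided requirement" is minimizing camera motion between targets. The sketch below is an assumed greedy strategy, not the claimed decision logic; the function name and candidate data are illustrative.

```python
import numpy as np

def decide_route(candidates_per_target, start=(0.0, 0.0, 0.0)):
    """Greedy route decision: for each inspection target in order, select
    the imaging-condition candidate (here, a camera position) closest to
    the previously chosen condition, keeping total relative motion small.
    """
    route = []
    current = np.asarray(start, dtype=float)
    for candidates in candidates_per_target:
        cands = np.asarray(candidates, dtype=float)
        best = cands[np.argmin(np.linalg.norm(cands - current, axis=1))]
        route.append(best)
        current = best
    return route

targets = [
    [[0, 0, 1], [5, 5, 1]],   # imaging-condition candidates for target 1
    [[0, 1, 1], [9, 9, 1]],   # imaging-condition candidates for target 2
]
print(decide_route(targets))  # picks [0,0,1] then [0,1,1]
```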
Information processing apparatus, information processing method, and program
An information processing apparatus includes: a determination unit configured to determine a plurality of measurement positions and/or orientations from which a 3D measurement sensor makes three-dimensional measurements; a controller configured to successively move the 3D measurement sensor to the plurality of measurement positions and/or orientations; a measurement unit configured to generate a plurality of 3D measurement data sets through three-dimensional measurement using the 3D measurement sensor at each of the plurality of measurement positions and/or orientations; and a data integration unit configured to integrate the plurality of 3D measurement data sets.
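The data-integration step amounts to transforming each measurement into a common frame using the sensor pose at which it was taken, then merging. A minimal sketch, assuming each pose is known as a 4x4 homogeneous transform (names invented for the example):

```python
import numpy as np

def integrate_scans(scans, poses):
    """Merge multiple 3D measurement data sets into one point cloud.

    Each scan is an (N, 3) array in sensor coordinates; each pose is a
    4x4 homogeneous transform from the sensor frame at that measurement
    position/orientation to the common world frame.
    """
    merged = []
    for pts, T in zip(scans, poses):
        pts = np.asarray(pts, dtype=float)
        world = pts @ T[:3, :3].T + T[:3, 3]   # rotate, then translate
        merged.append(world)
    return np.vstack(merged)

identity = np.eye(4)
shifted = np.eye(4)
shifted[:3, 3] = [1.0, 0.0, 0.0]               # sensor moved 1 m along x
cloud = integrate_scans([[[0, 0, 0]], [[0, 0, 0]]], [identity, shifted])
print(cloud)                                    # [[0,0,0], [1,0,0]]
```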
MACHINE LEARNING CONTROL OF OBJECT HANDOVERS
A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
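The patent relies on a trained deep network to produce the grasp pose; as a stand-in with the same interface (point clouds in, grasp pose out), the toy heuristic below approaches the object from the side opposite the hand so the gripper cannot close on the fingers. Everything here is an invented illustration, not the network described above.

```python
import numpy as np

def grasp_pose_from_cloud(object_pts, hand_pts):
    """Toy stand-in for the trained grasp network: grasp at the object
    centroid, approaching from the side opposite the human hand.
    Returns a pre-grasp position and an approach direction.
    """
    obj = np.asarray(object_pts, dtype=float)
    hand = np.asarray(hand_pts, dtype=float)
    center = obj.mean(axis=0)
    away = center - hand.mean(axis=0)        # direction pointing away from the hand
    away /= np.linalg.norm(away)
    position = center + 0.1 * away           # pre-grasp 10 cm on the far side
    approach = -away                         # gripper moves toward the centroid
    return position, approach

obj = [[0.0, 0.0, 0.0], [0.02, 0.0, 0.0], [0.0, 0.02, 0.0]]
hand = [[0.0, 0.0, -0.1]]                    # hand below the object
position, approach = grasp_pose_from_cloud(obj, hand)
print(position, approach)                    # pre-grasp above, approach downward
```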
Systems, Devices, Components, and Methods for a Compact Robotic Gripper with Palm-Mounted Sensing, Grasping, and Computing Devices and Components
Disclosed are various embodiments of a three-dimensional perception and object manipulation robot gripper configured for connection to and operation in conjunction with a robot arm. In some embodiments, the gripper comprises a palm, a plurality of motors or actuators operably connected to the palm, a mechanical manipulation system operably connected to the palm, a plurality of fingers operably connected to the motors or actuators and configured to manipulate one or more objects located within a workspace or target volume that can be accessed by the fingers. A depth camera system is also operably connected to the palm. One or more computing devices are operably connected to the depth camera and are configured and programmed to process images provided by the depth camera system to determine the location and orientation of the one or more objects within a workspace, and in accordance therewith, provide as outputs therefrom control signals or instructions configured to be employed by the motors or actuators to control movement and operation of the plurality of fingers so as to permit the fingers to manipulate the one or more objects located within the workspace or target volume. The gripper can also be configured to vary controllably at least one of a force, a torque, a stiffness, and a compliance applied by one or more of the plurality of fingers to the one or more objects.
Laser machining system including laser machining head and imaging device
A machining system includes a laser irradiation device, a camera that captures an image of a workpiece, and a display that displays the image captured by the camera. The machining system includes a robot control device that controls the camera and the display. The camera captures the image of the workpiece before machining. The robot control device calculates a laser-beam irradiation position on the workpiece and virtually displays the laser-beam irradiation position on the display so as to overlap the workpiece image captured by the camera.
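Overlaying the calculated irradiation position on the camera image requires projecting the 3D point into pixel coordinates. A minimal pinhole-camera sketch, with illustrative intrinsic values (the patent does not specify a camera model):

```python
import numpy as np

def project_to_image(point_cam, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Project a 3D laser irradiation position (camera coordinates,
    metres) to pixel coordinates for overlay on the workpiece image.
    fx/fy are focal lengths in pixels; (cx, cy) is the principal point.
    """
    x, y, z = point_cam
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# Irradiation point 0.5 m in front of the camera, 10 cm right of the axis
uv = project_to_image((0.1, 0.0, 0.5))
print(uv)   # (440.0, 240.0)
```

The display then draws a marker at (u, v) on top of the captured workpiece image before machining begins.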
Measurement system, measurement device, measurement method, and measurement program
Provided are a measurement system, a measurement device, a measurement method, and a measurement program. The 3D data that a 3D sensor measures of a measurement object at a measurement point other than a specific measurement point, while the robot is in motion, is registered to the 3D data measured at the specific measurement point while the robot is stopped, based on the displacements of the robot's joints at each of the two measurement times. The registration is then refined so that the registration error between the two sets of 3D data is less than a threshold value. Similarly, each remaining set of 3D data is registered to the reference 3D data.
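The two-stage registration described above — a coarse transform from the robot's joint displacements, then refinement until the error falls below a threshold — can be sketched as follows. This is an assumed translation-only refinement for brevity (a full ICP would also re-estimate rotation each iteration); all names are illustrative.

```python
import numpy as np

def registration_error(src, dst):
    """Mean distance from each source point to its nearest target point."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    return d.min(axis=1).mean()

def register(src, dst, T_init, threshold=1e-6, max_iters=50):
    """Apply the initial transform obtained from the robot's joint
    displacements (forward kinematics), then refine the translation until
    the registration error drops below `threshold`.
    """
    src = np.asarray(src, dtype=float) @ T_init[:3, :3].T + T_init[:3, 3]
    dst = np.asarray(dst, dtype=float)
    for _ in range(max_iters):
        if registration_error(src, dst) < threshold:
            break
        idx = np.argmin(np.linalg.norm(src[:, None] - dst[None], axis=2), axis=1)
        src = src + (dst[idx] - src).mean(axis=0)   # shift toward matches
    return src

dst = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0]], dtype=float)
src = dst + np.array([0.5, 0.0, 0.0])               # in-motion scan, offset
aligned = register(src, dst, np.eye(4))
print(registration_error(aligned, dst))             # ~0
```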
MEDICAL ROBOT ARM APPARATUS, MEDICAL ROBOT ARM CONTROL SYSTEM, MEDICAL ROBOT ARM CONTROL METHOD, AND PROGRAM
Provided is a surgical imaging apparatus that includes a multi-link, multi-joint structure including a plurality of joints that interconnect a plurality of links to provide the multi-link, multi-joint structure with a plurality of degrees of freedom, at least one video camera being disposed on a distal end of the multi-link, multi-joint structure; at least one actuator that drives at least one of the plurality of joints; and circuitry that detects a joint force experienced at the at least one of the plurality of joints in response to an applied external force, and controls the at least one actuator based on the joint force so as to position the video camera.
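Controlling the actuator from the detected joint force is commonly done with admittance control: the joint yields in the direction of the applied external force so an operator can reposition the camera by hand. The sketch below is a generic single-joint illustration with invented gains, not the apparatus's actual control law.

```python
def admittance_step(torque_ext, q, dt=0.01, compliance=0.5, deadband=0.2):
    """One control step: if the detected external joint torque exceeds a
    small deadband, move the joint in the direction of the applied force
    (admittance control); otherwise hold position against noise.
    """
    if abs(torque_ext) < deadband:
        return q                       # hold the camera in place
    qdot = compliance * torque_ext     # commanded joint velocity [rad/s]
    return q + qdot * dt

q = 0.0
for _ in range(100):                   # 1 s of a steady 1 Nm push
    q = admittance_step(1.0, q)
print(round(q, 3))                     # 0.5 rad of compliant motion
```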
CREATING TRAINING DATA VARIABILITY IN MACHINE LEARNING FOR OBJECT LABELLING FROM IMAGES
Described is an image labeling system (100) comprising: a support (2) for an object (3) to be labeled; a digital camera (1) configured to capture a plurality of images of a scene including said object (3); a process and control apparatus (5) configured to receive said images and generate corresponding labeling data (21-24, L1-L4) associated with said object (3); a digital display (4) associated with said support (2) and connected to the process and control apparatus (5) to selectively display additional images (7-13) selected from the group comprising: first images (7-11) in the form of backgrounds for the plurality of images, introducing a degree of variability into the scene; second images (12) indicating the position and/or orientation according to which a user is to place said object (3) on the support (2); third images (13) to be captured by the digital camera (1) and provided to the process and control apparatus (5) to evaluate a position of the digital camera (1) with respect to the digital display (4); fourth images to be captured by the digital camera (1) and provided to the process and control apparatus (5) to evaluate at least one of the following data of the object (3): position, orientation, 3D shape.
Controlling a robot in an environment
There is provided a method of controlling a robot within an environment comprising: i) receiving, from a 3D scanner, data relating to at least a portion of the environment for constructing a 3D point cloud representing at least a portion of the environment; ii) comparing the 3D point cloud to a virtual 3D model of the environment and, based upon the comparison, determining a position of the robot; then iii) determining a movement trajectory for the robot based upon the determined position of the robot. Also provided is a control apparatus and a robot control system.
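Steps ii) and iii) can be sketched minimally: compare the scanned point cloud against the virtual model to determine the robot's position, then plan a trajectory from that position. The translation-only comparison below is an assumed simplification (a real system would run full registration, e.g. ICP, to recover orientation too); all names are illustrative.

```python
import numpy as np

def localize(scan_pts, model_pts):
    """Estimate the robot's position offset by comparing the scanned
    3D point cloud to the virtual 3D model of the environment. For a
    translation-only mismatch this is the difference of centroids.
    """
    scan = np.asarray(scan_pts, dtype=float)
    model = np.asarray(model_pts, dtype=float)
    return scan.mean(axis=0) - model.mean(axis=0)

def straight_line_trajectory(start, goal, steps=5):
    """Movement trajectory from the determined position to a goal pose."""
    return [start + (goal - start) * t for t in np.linspace(0.0, 1.0, steps)]

model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
scan = model + np.array([2.0, 0.0, 0.0])        # robot is 2 m off in x
position = localize(scan, model)
traj = straight_line_trajectory(position, np.zeros(3), steps=3)
print(position, traj[-1])                        # [2,0,0], ends at origin
```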