Patent classifications
G05B2219/40613
Sensorized Robotic Gripping Device
A robotic gripping device is provided. The robotic gripping device includes a palm and a plurality of digits coupled to the palm. The robotic gripping device also includes a time-of-flight sensor arranged on the palm such that the time-of-flight sensor is configured to generate time-of-flight distance data in a direction between the plurality of digits. The robotic gripping device additionally includes an infrared camera, including an infrared illumination source, where the infrared camera is arranged on the palm such that the infrared camera is configured to generate grayscale image data in the direction between the plurality of digits.
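A minimal sketch of how the two palm-mounted sensors described above might gate a grasp: a time-of-flight distance reading between the digits plus an IR grayscale frame. The function name, thresholds, and decision rule are illustrative assumptions, not from the patent.

```python
import numpy as np

def object_between_digits(tof_distance_m, ir_frame, max_grasp_depth_m=0.12,
                          min_brightness=30):
    """Return True when something graspable appears between the digits.

    tof_distance_m: scalar distance from the palm ToF sensor (meters).
    ir_frame: 2D uint8 grayscale image from the palm IR camera.
    """
    # The ToF sensor confirms an object lies within the digits' reach.
    in_range = 0.0 < tof_distance_m <= max_grasp_depth_m
    # The IR illumination source makes nearby objects bright; a dark frame
    # suggests an empty workspace.
    visible = ir_frame.mean() >= min_brightness
    return bool(in_range and visible)

frame = np.full((8, 8), 80, dtype=np.uint8)   # bright: object illuminated
print(object_between_digits(0.05, frame))     # within reach and visible
print(object_between_digits(0.50, frame))     # too far for the digits
```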
Machine vision-based method and system for measuring 3D pose of a part or subassembly of parts
A machine vision-based method and system for measuring 3D pose of a part or subassembly of parts having an unknown pose are disclosed. A number of different applications of the method and system are disclosed including applications which utilize a reprogrammable industrial automation machine such as a robot. The method includes providing a reference cloud of 3D voxels which represent a reference surface of a reference part or subassembly having a known reference pose. Using at least one 2D/3D hybrid sensor, a sample cloud of 3D voxels which represent a corresponding surface of a sample part or subassembly of the same type as the reference part or subassembly is acquired. The sample part or subassembly has an actual pose different from the reference pose. The voxels of the sample and reference clouds are processed utilizing a matching algorithm to determine the pose of the sample part or subassembly.
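The cloud-matching step above can be sketched with the Kabsch/SVD method: given corresponding reference and sample voxel centers, recover the rigid transform (rotation R, translation t) mapping the known reference pose onto the sample's actual pose. This is a stand-in for the patent's unspecified matching algorithm; a real system would also need correspondence search (e.g. ICP iterations).

```python
import numpy as np

def estimate_pose(reference, sample):
    """reference, sample: (N, 3) arrays of corresponding voxel centers."""
    ref_c = reference.mean(axis=0)
    smp_c = sample.mean(axis=0)
    H = (reference - ref_c).T @ (sample - smp_c)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = smp_c - R @ ref_c
    return R, t

# Check: rotate a small cloud by 30 degrees about z and shift it.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
ref = np.random.default_rng(0).normal(size=(20, 3))
smp = ref @ R_true.T + np.array([0.1, -0.2, 0.3])
R_est, t_est = estimate_pose(ref, smp)
print(np.allclose(R_est, R_true))   # True
```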
COMPUTER CONTROLLED POSITIONING OF DELICATE OBJECTS WITH LOW-CONTACT FORCE INTERACTION USING A ROBOT
A computer positions an object using a computer-controlled positioning device. The computer is operatively associated with the positioning device via a control interface. The positioning device has a substantially hollow interior chamber. The computer identifies a selected object located at a primary location within the interior chamber and having a primary orientation with respect thereto. The computer identifies a first array of elements constructed and arranged to generate contact-free support forces sufficient to maintain the selected object at the primary location. The computer identifies a second array of elements constructed and arranged to provide contact-free interaction forces sufficient to move the selected object within the interior chamber. The computer interacts with the selected object, using the control interface to adjust at least one of the support forces or the interaction forces, to place the selected object into at least one of a secondary location or a secondary orientation.
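An illustrative sketch (not from the patent) of the two force roles described above: the first array supplies a contact-free support force that cancels gravity, and the second supplies an interaction force that nudges the object toward a secondary location. All names, gains, and the 1-D proportional law are assumptions.

```python
g = 9.81  # gravitational acceleration, m/s^2

def support_force(mass_kg):
    """Upward force needed to hold the object at its primary location."""
    return mass_kg * g

def interaction_force(position, target, gain=2.0):
    """Proportional contact-free push toward the target position (1D)."""
    return gain * (target - position)

m = 0.002                             # a 2 g delicate object
print(support_force(m))               # just balances the object's weight
print(interaction_force(0.00, 0.05))  # push toward the secondary location
```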
METHOD AND DEVICE FOR CREATION OF THREE DIMENSIONAL TOOL FRAME
Embodiments of the disclosure include a method to create a three-dimensional (3D) tool frame for a robot. The method includes identifying a reference point on a calibration grid using a robotic vision system, such as a camera or laser. The identified reference point is used to create a user frame coordinate system with an origin at that point, the reference point coinciding with a field-of-view origin created by a 3D scanner. With the robot at the specific location where the field-of-view origin is created during calibration, a 3D tool frame is created based on the user frame location. The 3D tool frame indicates the location and orientation of the 3D scanner in the field-of-view coordinate system.
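The frame bookkeeping above can be sketched with 4x4 homogeneous transforms, under the assumption that the user frame is placed at the identified reference point and the scanner's field-of-view origin coincides with it at calibration. Function names and coordinates are illustrative.

```python
import numpy as np

def make_frame(rotation, origin):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and origin."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = origin
    return T

# User frame: origin at the reference point found on the calibration grid.
reference_point = np.array([0.40, 0.10, 0.25])
T_base_user = make_frame(np.eye(3), reference_point)

# Tool frame expressed relative to the user frame; at calibration the
# scanner's field-of-view origin sits at the user-frame origin.
T_user_tool = make_frame(np.eye(3), np.zeros(3))
T_base_tool = T_base_user @ T_user_tool

print(T_base_tool[:3, 3])   # tool-frame origin in base coordinates
```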
SYSTEM AND METHOD FOR SCANNING AN OBJECT USING AN ARRAY OF ULTRASONIC TRANSDUCERS
A plan for scanning an object using an array of ultrasonic transducers is prepared by identifying one or more enabled transducers for each of a plurality of selected grid positions defined on a surface of the object. The identification considers the direction of incidence of the planned ultrasonic signal emitted by each transducer, with the array positioned at each of the selected positions, and the surface normal vector at the grid position that would be impinged by the planned ultrasonic signal. The scan plan is complete when all grid positions defined on the surface of the object are covered by moving the array to all of the selected grid positions. The scan plan is executed by moving the array according to the plan while the transducers enabled at each selected grid position collect any responses to the ultrasonic signals from their respective grid positions.
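The transducer-selection rule above can be sketched as: enable a transducer only if its planned beam direction is nearly anti-parallel to the surface normal at the impinged grid position (near-normal incidence). The angular threshold and function names are assumptions, not from the patent.

```python
import numpy as np

def enabled_transducers(beam_dirs, surface_normal, max_incidence_deg=10.0):
    """Return indices of transducers whose beams impinge near-normally.

    beam_dirs: (N, 3) unit vectors, each transducer's emission direction.
    surface_normal: (3,) outward unit normal at the impinged grid position.
    """
    # Incidence angle is measured between the beam and the inward normal.
    cos_inc = beam_dirs @ (-surface_normal)
    threshold = np.cos(np.radians(max_incidence_deg))
    return np.nonzero(cos_inc >= threshold)[0]

normal = np.array([0.0, 0.0, 1.0])                   # flat patch facing +z
beams = np.array([[0.0, 0.0, -1.0],                  # straight down: enabled
                  [0.0, np.sin(0.3), -np.cos(0.3)],  # ~17 deg off: rejected
                  [1.0, 0.0, 0.0]])                  # grazing: rejected
print(enabled_transducers(beams, normal))            # [0]
```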
Object detection device, control device, and object detection computer program
An object detection device detects the position of a target object on an image generated by a camera. When the camera and the target object do not satisfy a predetermined positional relationship, the device detects the position of the target object on the image by inputting the image to a classifier. When the camera and the target object do satisfy the predetermined positional relationship, the device detects the position by comparing the image with a template representing a feature of the target object's appearance when the target object is viewed from a predetermined direction.
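The switching logic above can be sketched as a single branch: far from the object (positional relationship not satisfied) run a learned classifier; close in (relationship satisfied, appearance predictable) use template matching. The threshold and both detectors are hypothetical stand-ins.

```python
def detect(image, camera_to_object_m, near_threshold_m=0.3,
           classifier=None, template_matcher=None):
    """Return the object's position on the image via the appropriate path."""
    if camera_to_object_m > near_threshold_m:
        # Positional relationship not satisfied: use the generic classifier.
        return classifier(image)
    # Relationship satisfied: the appearance from this viewpoint is known,
    # so compare against a template of that appearance.
    return template_matcher(image)

# Toy stand-ins for illustration only.
cls = lambda img: ("classifier", (12, 34))
tmpl = lambda img: ("template", (12, 34))
print(detect(None, 1.0, classifier=cls, template_matcher=tmpl)[0])  # classifier
print(detect(None, 0.1, classifier=cls, template_matcher=tmpl)[0])  # template
```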
ARTICULATED ROBOT ARM AND PRINTING METHOD USING THE SAME
Provided is an articulated robot arm capable of laser printing. The articulated robot arm includes: a communicator for receiving beverage order information; a grip part gripping and moving a cup; an articulation part having one side coupled to the grip part and including a plurality of articulation units; a controller controlling operations of the grip part and the articulation part; and a laser beam irradiation unit provided on at least a partial area of the grip part and irradiating a laser beam to print the beverage order information on the cup.
Systems, devices, components, and methods for a compact robotic gripper with palm-mounted sensing, grasping, and computing devices and components
Disclosed are various embodiments of a three-dimensional perception and object manipulation robot gripper configured for connection to and operation in conjunction with a robot arm. In some embodiments, the gripper comprises a palm, a plurality of motors or actuators operably connected to the palm, a mechanical manipulation system operably connected to the palm, and a plurality of fingers operably connected to the motors or actuators and configured to manipulate one or more objects located within a workspace or target volume that can be accessed by the fingers. A depth camera system is also operably connected to the palm. One or more computing devices are operably connected to the depth camera and are configured and programmed to process images provided by the depth camera system to determine the location and orientation of the one or more objects within the workspace. In accordance therewith, the computing devices provide as outputs control signals or instructions employed by the motors or actuators to control movement and operation of the plurality of fingers, permitting the fingers to manipulate the one or more objects located within the workspace or target volume. The gripper can also be configured to controllably vary at least one of a force, a torque, a stiffness, and a compliance applied by one or more of the plurality of fingers to the one or more objects.
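A sketch of the palm-mounted pipeline described above: process a depth image to locate an object in the workspace, then emit one closing command per finger sized to the object's distance. The pinhole intrinsics, helper names, and command format are illustrative assumptions.

```python
import numpy as np

def locate_object(depth, fx=200.0, fy=200.0, max_range=0.5):
    """Return (x, y, z) of the in-range object's centroid from a depth image."""
    v, u = np.nonzero((depth > 0) & (depth < max_range))
    z = depth[v, u]
    cx, cy = depth.shape[1] / 2, depth.shape[0] / 2
    # Back-project pixel coordinates through a pinhole camera model.
    x = ((u - cx) * z / fx).mean()
    y = ((v - cy) * z / fy).mean()
    return np.array([x, y, z.mean()])

def finger_commands(object_xyz, n_fingers=3, closing_margin_m=0.01):
    """One close-to-depth command per finger (a stand-in control signal)."""
    return [object_xyz[2] - closing_margin_m] * n_fingers

depth = np.full((60, 80), 1.0)       # background beyond max_range
depth[20:40, 30:50] = 0.2            # object 20 cm from the palm
pose = locate_object(depth)
print(finger_commands(pose))         # three commands just short of 0.2 m
```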
Medical robot arm apparatus, medical robot arm control system, medical robot arm control method, and program
Provided is a surgical imaging apparatus that includes a multi-link, multi-joint structure including a plurality of joints that interconnect a plurality of links to provide the multi-link, multi-joint structure with a plurality of degrees of freedom, at least one video camera being disposed on a distal end of the multi-link, multi-joint structure; at least one actuator that drives at least one of the plurality of joints; and circuitry that detects a joint force experienced at the at least one of the plurality of joints in response to an applied external force, and controls the at least one actuator based on the joint force so as to position the video camera.
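The joint-force control loop implied above can be sketched with a simple admittance law: the circuitry senses the joint torque produced by an external push and drives the actuator so the joint yields, repositioning the camera. The patent does not specify a control law; this one-joint law and its gains are assumptions.

```python
def admittance_step(joint_angle, joint_torque, dt=0.01, admittance=0.5):
    """Move the joint in the direction of the sensed external torque."""
    # Velocity proportional to torque: the arm complies with the push.
    velocity = admittance * joint_torque
    return joint_angle + velocity * dt

angle = 0.0
for _ in range(100):                 # 1 s of a constant 0.2 N*m push
    angle = admittance_step(angle, 0.2)
print(round(angle, 4))               # the joint has drifted with the force
```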