Patent classifications
G05B2219/40613
Machine learning control of object handovers
A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
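The pipeline in this abstract (point cloud in, hand-avoiding grasp pose out) can be sketched as follows. The trained deep network is replaced here by a hypothetical geometric heuristic, an assumption for illustration only: pick the object point farthest from any hand point and approach from the side away from the fingers.

```python
import numpy as np

def propose_grasp(object_pts, hand_pts, standoff=0.02):
    """Toy stand-in (NOT the patented network) for the trained model:
    choose the object point farthest from every hand point, so the
    gripper closes on the side away from the human's fingers."""
    # pairwise distances: each object point vs. each hand point
    d = np.linalg.norm(object_pts[:, None, :] - hand_pts[None, :, :], axis=2)
    grasp_pt = object_pts[d.min(axis=1).argmax()]
    # approach direction points away from the hand centroid
    approach = grasp_pt - hand_pts.mean(axis=0)
    approach = approach / np.linalg.norm(approach)
    # pre-grasp pose: stand off slightly along the approach direction
    return grasp_pt + standoff * approach, approach
```

In the patented system this heuristic is learned: the depth camera's point cloud is fed to the deep network, which outputs the gripper pose directly.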
ULTRASOUND INSPECTION SYSTEM AND METHOD
A system for inspecting a structure includes a laser ultrasound device configured to direct laser light onto a surface of the structure to generate ultrasonic waves within the structure and to generate an array of ultrasound data representative of the ultrasonic waves. The system includes a robotic arm configured to move the laser light across the surface. The system includes a multiplex controller configured to trigger generation of the ultrasonic waves within the structure at an inspection location and to receive the array of ultrasound data for the inspection location. The system includes a computer system that includes a motion-control module configured to control movement of the laser light relative to the surface of the structure, a motion-tracking module configured to determine when the laser light is at the inspection location, and an inspection module configured to process the array of ultrasound data to inspect the structure at the inspection location.
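The division of labour among the four modules can be sketched as a coordination loop. All five callables below are hypothetical stand-ins, not APIs from the patent: motion control moves the beam, motion tracking confirms arrival, the multiplex controller fires and returns the ultrasound array, and the inspection module processes it.

```python
def run_inspection(locations, move_to, at_location, trigger, process):
    """Coordination sketch under the abstract's module split (all five
    callables are assumed interfaces, not the patented ones)."""
    results = {}
    for loc in locations:
        move_to(loc)                 # motion-control module
        while not at_location(loc):  # motion-tracking module
            pass
        raw = trigger(loc)           # multiplex controller: fire + acquire
        results[loc] = process(raw)  # inspection module
    return results
```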
TASK-ORIENTED 3D RECONSTRUCTION FOR AUTONOMOUS ROBOTIC OPERATIONS
Autonomous operations, such as robotic grasping and manipulation, in unknown or dynamic environments present various technical challenges. For example, three-dimensional (3D) reconstruction of a given object often focuses on the geometry of the object without considering how the 3D model of the object is used in solving or performing a robot operation task. As described herein, in accordance with various embodiments, models are generated of objects and/or physical environments based on tasks that autonomous machines perform with the objects or within the physical environments. Thus, in some cases, a given object or environment may be modeled differently depending on the task that is performed using the model. Further, portions of an object or environment may be modeled with varying resolutions depending on the task associated with the model.
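The idea of modeling the same object at different resolutions depending on the task can be sketched with a voxel downsampler. The task-to-voxel-size policy and its values are illustrative assumptions, not figures from the patent.

```python
import numpy as np

# Illustrative task -> resolution policy (the values are assumptions):
TASK_VOXEL = {"grasping": 0.002, "navigation": 0.05}  # voxel edge, metres

def reconstruct(points, task):
    """Downsample one scan to the voxel size the task calls for,
    keeping one representative point per occupied voxel."""
    voxel = TASK_VOXEL[task]
    keys = np.floor(points / voxel).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```

The same point cloud thus yields a fine model for grasping and a coarse one for navigation, which is the task-oriented behavior the abstract describes.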
TEACHING SYSTEM AND TEACHING METHOD FOR LASER MACHINING
A teaching system includes a sensor that detects the intensity of reflected light and at least one processor configured to: receive inputs and perform a generation of teaching data that enables laser machining at all machining points by using a laser beam having an angle larger than or equal to the minimum value and smaller than the maximum value; determine whether or not the intensities of reflected light at all the machining points include an intensity exceeding a predetermined threshold value; increase the minimum value at a corresponding machining point by a predetermined increment if the determination result indicates that the threshold value is exceeded; and repeat the generation of the teaching data using the most-recently adjusted minimum value, the determination, and the adjustment of the minimum value until it is determined that the threshold value is not exceeded.
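The generate-check-adjust-repeat loop above can be sketched as follows. The `intensity` callable is a hypothetical model of the reflected-light sensor, and the per-point angle bookkeeping is an assumption about how the teaching data is represented.

```python
def generate_teaching_data(points, intensity, threshold, a_min, a_max, step=1.0):
    """Sketch of the abstract's loop: keep a per-point minimum beam angle,
    generate teaching data at that angle, and raise the minimum wherever
    the reflected light is too strong (intensity() is a stand-in model)."""
    mins = {p: a_min for p in points}
    while True:
        teaching = dict(mins)  # teach each point at its smallest allowed angle
        hot = [p for p in points if intensity(p, teaching[p]) > threshold]
        if not hot:
            return teaching
        for p in hot:
            mins[p] += step
            if mins[p] >= a_max:
                raise ValueError(f"no feasible beam angle at point {p!r}")
```

Tilting the beam away from normal incidence reduces the specular reflection back into the sensor, which is why increasing the minimum angle eventually brings every point under the threshold.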
ROBOT-MOUNTED MOVING DEVICE, SYSTEM, AND MACHINE TOOL
A system includes a machine tool 10, a robot 25 having a camera 31, and a transfer device 35 on which the robot 25 is mounted; an identification figure is arranged in a machining area of the machine tool 10.
NON-INVASIVE ROBOTIC THERAPY SYSTEM
Disclosed herein is a robotic system comprising a robotic arm, an end effector coupled to a distal end of the robotic arm, one or more cameras, and a processor configured to construct a three-dimensional (3D) model of a user based on data received from the one or more cameras, automatically identify a target therapy point on the user based on the constructed 3D model, and actuate the end effector to apply a therapy to the target therapy point.
FOLLOWING ROBOT
A following robot including an arm; at least one visual sensor; a feature-value storage unit that stores, as target data for causing the visual sensor to follow a follow target, first feature values related to at least the position and orientation of the follow target; a feature-value detecting unit that detects, using an image acquired by the visual sensor, second feature values related to at least the current position and orientation of the follow target; a movement-amount computing unit that computes a movement instruction based on differences between the second feature values and the first feature values and adjusts the movement instruction using at least feedforward control; a movement instructing unit that moves the arm based on the movement instruction; and an input-value storage unit that stores a signal acquired when a specific motion of the follow target is started and an input value for the feedforward control.
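One tick of the control law implied above, feedback on the feature difference plus the stored feedforward input, can be sketched as follows. The gains and the list-of-floats feature representation are illustrative assumptions.

```python
def movement_instruction(first_feats, second_feats, ff_input, kp=0.5, kff=1.0):
    """One control tick in miniature: feedback on the difference between
    stored (first) and currently detected (second) feature values, plus a
    feedforward term from the stored input value (gains are illustrative)."""
    return [kp * (f - s) + kff * u
            for f, s, u in zip(first_feats, second_feats, ff_input)]
```

The feedforward term lets the arm start compensating for the follow target's known motion as soon as its start signal is detected, rather than waiting for a tracking error to build up.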
AUTOMATIC REMOVAL SYSTEM FOR RESERVOIR CAP
An automatic removal system for a reservoir cap is provided. The system includes a removal robot having a polyarticular robot arm; a finger that is installed at an end of the robot arm, moves forwards and backwards by means of a cylinder, and grips or releases the reservoir cap; a rotary motor that rotates the finger; and a gripping mechanism equipped with a detection sensor that monitors the area in front of it. The system also includes a controller that recognizes the reservoir cap installed in a reservoir inlet or a reservoir through the detection sensor, controls the robot arm to move the gripping mechanism to a recognition point, and controls the cylinder, the finger, and the rotary motor to attach or detach the reservoir cap to or from the reservoir inlet.
Process and device for direct fabrication of a part on a structure
A process for direct fabrication of a part at a predetermined structural position. The process comprises: a) scanning, via a three-dimensional scanner, the structure in the region of the predetermined position; b) comparing a virtual surface mesh of the predetermined position with a real surface mesh of the predetermined position, the real surface mesh being calculated based on data obtained from the scanning; c) determining the gaps between the two meshes; d) calculating the data for modeling an inserted part, the dimensions of which fill the determined gaps, to obtain an inserted part model; e) merging a virtual model of a part, linked with the predetermined position, with the inserted part model, to obtain a model adjusted to the geometry of the structure in the region of the predetermined position; f) fabricating, by material deposition, an adjusted part at the predetermined position based on the adjusted model.
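Steps c) and d) can be sketched in miniature, under the simplifying assumption (mine, not the patent's) that the virtual and scanned surface meshes share a vertex correspondence.

```python
import numpy as np

def gap_model(virtual_pts, real_pts):
    """Steps c)-d) in miniature, assuming vertex-matched meshes:
    per-vertex gap between the virtual and scanned surfaces, and the
    worst-case gap the inserted part must fill."""
    gaps = np.linalg.norm(virtual_pts - real_pts, axis=1)
    return gaps, float(gaps.max())
```

A real implementation would instead compute signed distances between non-matching meshes; this sketch only shows how the gap field drives the dimensions of the inserted part model.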