Patent classifications
G05B2219/40607
Robotic system with automated package scan and registration mechanism and methods of operating the same
A system and method for operating a robotic system to scan and register unrecognized objects is disclosed. The robotic system may use image data representative of an unrecognized object located at a start location to implement operations for transferring the unrecognized object from the start location. While implementing the operations, the robotic system may obtain additional data, including scanning results of one or more portions of the unrecognized object not included in the image data. The robotic system may use the additional data to register the unrecognized object.
ALGORITHM FOR MIX-SIZE DEPALLETIZING
A system and method for identifying a box to be picked up by a robot from a stack of boxes. The method includes obtaining a 2D red-green-blue (RGB) color image of the boxes and a 2D depth map image of the boxes using a 3D camera. The method employs an image segmentation process that uses a simplified Mask R-CNN, executable by a central processing unit (CPU), to predict which pixels in the RGB image are associated with each box, where the pixels associated with each box are assigned a unique label and together define a mask for that box. The method then identifies a location for picking up the box using the resulting segmentation image.
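The final step of the abstract, going from a per-box segmentation mask plus a depth map to a pick location, could be sketched as follows. The function and data names are hypothetical illustrations, not the patent's implementation; the pick point is simply taken as the mask centroid with its depth.

```python
import numpy as np

def pick_location(label_image, depth_map, box_label):
    """Given a per-pixel label image from segmentation, compute a pick
    point for one box as the centroid of its mask plus its depth."""
    ys, xs = np.nonzero(label_image == box_label)  # pixels of this box's mask
    u, v = int(xs.mean()), int(ys.mean())          # mask centroid in image coords
    z = float(depth_map[v, u])                     # depth at the centroid
    return u, v, z

# toy example: one 4x4 "box" labeled 1 in an 8x8 label image
labels = np.zeros((8, 8), dtype=int)
labels[2:6, 2:6] = 1
depth = np.full((8, 8), 1.5)
print(pick_location(labels, depth, 1))  # -> (3, 3, 1.5)
```

A real system would also reject masks that are too small or partially occluded before committing to a pick.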
COLLISION HANDLING METHODS IN GRASP GENERATION
A robotic grasp generation technique for part picking applications. Part and gripper geometry are provided as inputs, typically from CAD files. Gripper kinematics are also defined as an input. A set of candidate grasps is provided using any known preliminary grasp generation tool. A point model of the part and a model of the gripper contact surfaces with a clearance margin are used in an optimization computation applied to each of the candidate grasps, resulting in an adjusted grasp database. The adjusted grasps optimize grasp quality using a virtual gripper surface, which positions the actual gripper surface a small distance away from the part. A signed distance field calculation is then performed on each of the adjusted grasps, and those with any collision between the gripper and the part are discarded. The resulting grasp database includes high quality collision-free grasps for use in a robotic part pick-and-place operation.
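The collision-discard step described above, checking each adjusted grasp's gripper geometry against the part with a clearance margin, could be sketched as below. A full implementation would query a precomputed signed distance field; this hypothetical sketch computes distances brute-force from a part point cloud, and all names are illustrative.

```python
import numpy as np

def filter_colliding_grasps(part_points, grasps, gripper_points, clearance=0.002):
    """Keep only grasps whose transformed gripper contact points stay at
    least `clearance` (meters) away from every point of the part model."""
    kept = []
    for T in grasps:  # T: 4x4 homogeneous gripper pose in the part frame
        g = (T[:3, :3] @ gripper_points.T).T + T[:3, 3]  # gripper points, part frame
        d = np.linalg.norm(part_points[:, None, :] - g[None, :, :], axis=2)
        if d.min() >= clearance:  # no gripper point penetrates the margin
            kept.append(T)
    return kept

# toy check: a grasp touching the part is discarded, an offset one is kept
part = np.zeros((1, 3))                        # single-point part model
tip = np.zeros((1, 3))                         # single gripper contact point
colliding = np.eye(4)                          # tip coincides with the part
clear = np.eye(4); clear[:3, 3] = [0.1, 0, 0]  # tip offset 100 mm from part
kept = filter_colliding_grasps(part, [colliding, clear], tip)
print(len(kept))  # 1
```

The brute-force pairwise distance is O(parts × gripper points) per grasp, which is exactly what a signed distance field avoids by answering each query in constant time.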
CALIBRATION OF A COMPUTER-NUMERICALLY-CONTROLLED MACHINE
A method for calibrating a computer-numerically-controlled machine can include capturing one or more images of at least a portion of the computer-numerically-controlled machine. The one or more images can be captured with at least one camera located inside an enclosure containing a material bed. A mapping relationship can be created which maps a pixel in the one or more images to a location within the computer-numerically-controlled machine. The creation of the mapping relationship can include compensating for a difference in the one or more images relative to one or more physical parameters of the computer-numerically-controlled machine and/or a material positioned on the material bed. Related systems and/or articles of manufacture, including computer program products, are also provided.
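A minimal version of the pixel-to-machine mapping relationship could be fit from known calibration correspondences (e.g. fiducials at known bed positions). This sketch assumes a simple affine model; the patent's method would additionally compensate for lens distortion, camera tilt, and material height, and all names here are hypothetical.

```python
import numpy as np

def fit_pixel_to_bed(pixels, bed_xy):
    """Fit an affine map from image pixel coordinates to material-bed
    coordinates using known calibration correspondences."""
    A = np.hstack([pixels, np.ones((len(pixels), 1))])  # rows: [u, v, 1]
    M, *_ = np.linalg.lstsq(A, bed_xy, rcond=None)      # 3x2 affine matrix
    return M

def pixel_to_bed(M, u, v):
    """Map one pixel (u, v) to a bed location (x, y)."""
    return np.array([u, v, 1.0]) @ M

# toy calibration: bed coordinates are pixels scaled by 0.1 mm/px
px = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
xy = px * 0.1
M = fit_pixel_to_bed(px, xy)
print(pixel_to_bed(M, 50, 50))  # ~ [5.0, 5.0]
```

With more than three correspondences the least-squares fit averages out measurement noise, which is one reason multiple fiducials are typically used.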
INFORMATION PROCESSING DEVICE, DRIVING CONTROL METHOD, AND PROGRAM-RECORDING MEDIUM
In order to simplify the control configuration for a driving device and improve the reliability of its operation, an information processing device includes a detection unit and a processing unit. The detection unit detects a target object in a captured image using reference data, a learning result obtained by machine learning of the target object that includes the position of the object's center of gravity in the captured image. The processing unit uses the detection result to control the driving device, which acts on the object at the detected center of gravity.
SYNTHETIC REPRESENTATION OF A SURGICAL ROBOT
A synthetic representation of a robot tool for display on a user interface of a robotic system. The synthetic representation may be used to show the position of a view volume of an image capture device with respect to the robot. The synthetic representation may also be used to find a tool that is outside of the field of view, to display range of motion limits for a tool, to remotely communicate information about the robot, and to detect collisions.
ROBOTIC SYSTEM WITH AUTOMATED OBJECT DETECTION MECHANISM AND METHODS OF OPERATING THE SAME
A system and method for operating a robotic system to register unrecognized objects is disclosed. The robotic system may use first image data representative of an unrecognized object to derive an initial minimum viable region (MVR). The robotic system may analyze second image data representative of the unrecognized object to detect a condition representative of an accuracy of the initial MVR. The robotic system may register the initial MVR or an adjustment thereof based on the detected condition.
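The register-or-adjust decision described above, accepting the initial minimum viable region (MVR) when the second image confirms it and adjusting it otherwise, could be sketched as below. The tolerance test and box representation are hypothetical simplifications, not the patent's actual condition-detection logic.

```python
def register_mvr(initial_mvr, observed_box, tol=5):
    """Compare the initial MVR against the region observed in a second
    image; register the initial MVR if every edge agrees within `tol`
    pixels, otherwise register the observed region as the adjusted MVR.
    Regions are (x_min, y_min, x_max, y_max) tuples."""
    agrees = all(abs(a - b) <= tol for a, b in zip(initial_mvr, observed_box))
    return initial_mvr if agrees else observed_box

print(register_mvr((10, 10, 60, 40), (12, 9, 58, 41)))  # within tol -> initial MVR
print(register_mvr((10, 10, 60, 40), (10, 10, 90, 40)))  # edge moved -> adjusted MVR
```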
SYNTHETIC REPRESENTATION OF A SURGICAL INSTRUMENT
A synthetic representation of a tool for display on a user interface of a robotic system. The synthetic representation may be used to show force on the tool, an actual position of the tool, or to show the location of the tool when out of a field of view. A three-dimensional pointer is also provided for a viewer in the surgeon console of a telesurgical system.
LOCATING, SEPARATING, AND PICKING BOXES WITH A SENSOR-GUIDED ROBOT
Techniques are described that enable robotic picking systems to locate and pick boxes from an unstructured pallet using computer vision and/or one or more “exploratory picks” to determine the sizes and locations of boxes on the pallet.
Method for providing power-off command to an automatic apparatus within proximity of a human and control apparatus employing the method
A method for ensuring the safety of humans within the operating area of, or in close proximity to, an automatic apparatus is applied in and by a control apparatus. The control apparatus is coupled to one or more cameras arranged around the operating area of the automatic apparatus. The control apparatus uses deep-learning techniques to analyze the images captured by the cameras to determine whether a person is in the operating area, and powers off the automatic apparatus if any person is deemed present.
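The monitor-and-cutoff loop described above could be sketched as follows. The `detect_person` and `power_off` callables are hypothetical stand-ins for the deep-learning detector and the apparatus power interface; neither is specified by the patent.

```python
def safety_monitor(frames, detect_person, power_off):
    """Poll camera frames; if the detector flags a person in the
    operating area, issue the power-off command and stop."""
    for frame in frames:
        if detect_person(frame):  # deep-learning person detection (stand-in)
            power_off()           # cut power to the automatic apparatus
            return True           # apparatus stopped
    return False                  # no person detected in any frame

# toy run: the third frame contains a "person"
events = []
stopped = safety_monitor(
    frames=[0, 0, 1],
    detect_person=lambda f: f == 1,
    power_off=lambda: events.append("off"),
)
print(stopped, events)  # True ['off']
```

A production system would run this continuously per camera and would favor false positives (unnecessary shutdowns) over missed detections.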