Patent classifications
G05B2219/37567
Virtual teach and repeat mobile manipulation system
A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
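The core step above, recovering a relative transform from matched descriptors and re-parameterizing behaviors with it, can be sketched as follows. This is a minimal illustration, not the patented method: the Kabsch/Procrustes estimator and the `update_behavior_params` helper (which simply shifts each behavior's target point) are assumptions for the sake of the example.

```python
import numpy as np

def relative_transform(teach_pts, task_pts):
    """Estimate the rigid transform (R, t) mapping teaching-image points
    onto task-image points from matched descriptor locations, using the
    Kabsch/Procrustes method. Inputs are (N, 2) or (N, 3) arrays of
    corresponding points."""
    mu_a = teach_pts.mean(axis=0)
    mu_b = task_pts.mean(axis=0)
    A = teach_pts - mu_a
    B = task_pts - mu_b
    H = A.T @ B
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ D @ U.T
    t = mu_b - R @ mu_a
    return R, t

def update_behavior_params(params, R, t):
    """Hypothetical re-parameterization: map each behavior's target
    point through the relative transform found above."""
    return {name: R @ np.asarray(p) + t for name, p in params.items()}
```

Given descriptor correspondences between the task image and the teaching image, `relative_transform` yields the pose offset, and the taught behaviors are replayed with their targets moved by that offset.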
Reconfigurable, fixtureless manufacturing system and method assisted by learning software
Systems and methods for AI-assisted reconfigurable, fixtureless manufacturing are disclosed. The invention eliminates geometry-setting tools (hard points, pins, and nets, traditionally known as 3-2-1 fixturing schemes) and replaces physical geometry setting with virtual datums driven by learning AI algorithms. A first type of part and a second type of part may be located by a machine vision system and moved by material handling devices and robots to locations within an assembly area. The parts may be aligned with one another, and the alignment may be checked by the machine vision system, which is configured to locate datums, in the form of features, of the parts and compare such datums to stored virtual datums. The parts may be joined while being held by the material handling devices or robots to form a subassembly in a fixtureless fashion. The material handling devices are able to grasp a number of different types of parts so that a number of different types of subassemblies can be assembled. The system enables one skilled in the art to develop a product design with self-locating parts that minimizes or eliminates the need for dedicated geometry-setting line tools and fixtures, leading to a manufacturing process that uses Industry 4.0 technologies to eliminate or significantly reduce the need for geometry-setting line tools.
System and method for augmenting a visual output from a robotic device
A method for visualizing data generated by a robotic device is presented. The method includes displaying an intended path of the robotic device in an environment. The method also includes displaying a first area in the environment identified as drivable for the robotic device. The method further includes receiving an input to identify a second area in the environment as drivable and transmitting the second area to the robotic device.
Calibration method and device for robotic arm system
A calibration method for a robotic arm system is provided. The method includes: capturing an image of a calibration object fixed to a front end of the robotic arm by a visual device, wherein a pedestal of the robotic arm has a pedestal coordinate system, and the front end of the robotic arm has a first relative relationship with the pedestal, the front end of the robotic arm has a second relative relationship with the calibration object; receiving the image and obtaining three-dimensional feature data of the calibration object according to the image by a computing device; and computing a third relative relationship between the visual device and the pedestal according to the three-dimensional feature data, the first relative relationship, and the second relative relationship to calibrate a position error between a physical location of the calibration object and a predictive positioning-location generated by the visual device.
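The third relative relationship described above is a straightforward chain of homogeneous transforms: the object's pose in the pedestal frame (via the known front-end relationships) is compared against the camera's measurement of the same object. The sketch below illustrates that chain with assumed 4x4 matrix conventions; it is not taken from the patent.

```python
import numpy as np

def inv_se3(T):
    """Invert a 4x4 homogeneous transform without a general inverse:
    R -> R.T, t -> -R.T @ t."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def camera_to_pedestal(T_ped_front, T_front_obj, T_cam_obj):
    """Third relative relationship (pedestal <- camera) from the two
    known relationships and the camera's 3-D measurement of the
    calibration object:
        T_ped_cam = T_ped_front @ T_front_obj @ inv(T_cam_obj)"""
    return T_ped_front @ T_front_obj @ inv_se3(T_cam_obj)
```

With `T_ped_cam` in hand, any subsequent camera measurement can be expressed in the pedestal frame, exposing the position error between the physical and vision-predicted locations.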
RECONFIGURABLE, FIXTURELESS MANUFACTURING SYSTEM AND METHOD
Systems and methods for reconfigurable, fixtureless manufacturing are provided. Material handling robots grasp and move parts within an assembly area to adjoin one another in a predetermined orientation. While the parts remain grasped and suspended within the assembly area, out of contact with any fixtures, work surfaces, jigs, and locators, a machine vision system performs an alignment scan to determine locations of datums on the parts, which are transmitted to a controller for comparison against stored virtual datums for a subassembly comprising the joined parts. The locations of the datums are transmitted to a joining robot, which joins the parts to form the subassembly. The machine vision system performs an inspection scan of the datums on the parts after joining.
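The comparison of scanned datum locations against stored virtual datums amounts to a per-datum tolerance check. A minimal sketch, with an assumed data layout (name-to-point dictionaries) and an illustrative tolerance value not taken from the patent:

```python
import numpy as np

def datums_in_tolerance(measured, virtual, tol=0.5):
    """Compare scanned datum locations against stored virtual datums.
    Both arguments map datum name -> 3-D point; returns per-datum
    pass/fail based on Euclidean distance against the tolerance."""
    return {
        name: bool(np.linalg.norm(np.asarray(measured[name]) - np.asarray(point)) <= tol)
        for name, point in virtual.items()
    }
```

The controller would gate the joining robot on all datums passing, and run the same check again for the post-joining inspection scan.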
DUAL-MANIPULATOR CONTROL METHOD AND STORAGE MEDIUM
A dual-manipulator control method is configured to be used in a dual-manipulator control system including a first manipulator, a second manipulator, and a central control module. The first manipulator and the second manipulator are controlled by the central control module, and the central control module is configured to execute the dual-manipulator control method. The dual-manipulator control method includes: generating a first instruction sequence to control the first manipulator and a second instruction sequence to control the second manipulator; and controlling the first manipulator and the second manipulator based on the first instruction sequence and the second instruction sequence. In this manner, working efficiency is improved.
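One simple way a central control module can execute two instruction sequences in parallel is to interleave them tick by tick, padding the shorter sequence with idle steps. This is a sketch of that idea only; the `"wait"` instruction and tick-synchronized execution model are assumptions, not details from the patent.

```python
def schedule(seq_a, seq_b):
    """Merge two instruction sequences into a tick-by-tick schedule so
    both manipulators advance concurrently; the shorter sequence idles
    ('wait') once it runs out of instructions."""
    n = max(len(seq_a), len(seq_b))
    pad = lambda s: list(s) + ["wait"] * (n - len(s))
    return list(zip(pad(seq_a), pad(seq_b)))
```

For example, scheduling a three-step sequence against a one-step sequence yields three ticks, with the second manipulator idle for the last two.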
Information processing system, method, and program
An apparatus is provided with: a first sensor unit that obtains two-dimensional or three-dimensional information about a target object from a first position and orientation; a second sensor unit that obtains two-dimensional information about the target object; a first three-dimensional position and orientation measurement unit that measures the three-dimensional position and orientation of the target object based on the information obtained by the first sensor unit; a second sensor position and orientation determination unit that calculates a second position and orientation based on the measurement result of the first measurement unit and model information about the target object; and a second three-dimensional position and orientation measurement unit that measures the three-dimensional position and orientation of the target object based on the information obtained by the second sensor unit, the second position and orientation, and the model information about the target object.
Safety in dynamic 3D healthcare environment
A medical safety system for dynamic 3D healthcare environments, a medical examination system with motorized equipment, an image acquisition arrangement, and a method for providing safe movements in dynamic 3D healthcare environments. The medical safety system includes a detection system, a processing unit, and an interface unit. The detection system includes at least one sensor arrangement to provide depth information for at least part of an observed scene. The processing unit includes a correlation unit to assign the depth information and a generation unit to generate and provide a 3D free space model.
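A free-space model built from depth information can be illustrated very simply: along each pixel's viewing ray, every range bin closer than the measured depth is known to be free. The binning scheme and array layout below are assumptions for illustration, not the patented generation unit.

```python
import numpy as np

def free_space_model(depth, max_range, n_bins):
    """Build a per-pixel free-space model from one depth frame.
    Returns a boolean array of shape (n_bins, H, W) where bin i of
    pixel (u, v) is True when that range interval lies entirely in
    front of the measured surface, i.e. is observed free."""
    # Far edge of each range bin along the viewing ray
    edges = np.linspace(0.0, max_range, n_bins + 1)[1:]
    return edges[:, None, None] <= depth[None, :, :]
```

Motorized equipment can then be allowed to move only through cells marked free, and the model can be regenerated each frame as the scene changes.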
AUTONOMOUS TASK PERFORMANCE BASED ON VISUAL EMBEDDINGS
A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
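Identifying the matching keyframe can be framed as a nearest-neighbor lookup over visual embeddings of the current view and the stored keyframes. The cosine-similarity retrieval below is a common sketch of that step, not necessarily the matching criterion used in the patent.

```python
import numpy as np

def best_keyframe(current_embedding, keyframe_embeddings):
    """Return the index of the keyframe whose visual embedding has the
    highest cosine similarity to the current view's embedding; the task
    associated with that keyframe would then be executed."""
    e = current_embedding / np.linalg.norm(current_embedding)
    K = keyframe_embeddings / np.linalg.norm(keyframe_embeddings, axis=1, keepdims=True)
    return int(np.argmax(K @ e))
```

In practice the embeddings would come from a learned encoder applied to the captured image and to each stored keyframe image.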