Patent classifications
G05B2219/40425
Robotic Cooking Device
Provided is a robotic cooking device including: a chassis; a set of wheels; a processor; an actuator; one or more sensors; one or more motors; and one or more cooking devices. An application of a communication device wirelessly connected to the robotic cooking device is used for one or more of: choosing settings of the robotic cooking device, choosing a location of the robotic cooking device, adjusting or generating a map of the environment, adjusting or generating a navigation path of the robotic cooking device, adjusting or generating boundaries of the robotic cooking device, and monitoring a food item within the one or more cooking devices.
Absolute robot-assisted positioning method
An absolute robot-assisted positioning method is provided which can be performed by a facility. The method optimises an assembly task which has been created theoretically at a computer workstation and which is implemented in reality by the facility. The disclosed facility includes at least one robot, at least one measurement system and a computer, wherein the at least one measurement system monitors the at least one robot while the assembly task is being performed, and the robot and the measurement system are connected to each other via the computer.
Robotic hand tool sharpening and cleaning apparatus
An automated hand tool sharpening and cleaning system for sharpening the two opposed cutting edges of a domestic, industrial, sport, or hobby hand tool, such as a knife blade, is provided by the invention. The apparatus comprises a six-axis robotic arm, a pneumatic gripper, a vision sensor camera for profiling the blade edges, a robotic controller, and sequentially arranged grinding, coarse-sharpening, fine-sharpening, and buffing rotating wheel assemblies used to grind, sharpen, and buff or polish the cutting edges of the knife blade. The blade cutting edges are profiled from the camera image, which is processed by associated software to define the blade by multiple points along its edge; a set of algorithms then cleans up any discrepancies in the profile data. The resulting corrected profile data is translated into a set of machine control commands fed to the robotic arm and pneumatic gripper via the robot controller, which manipulates the knife blade edges with respect to each of the grinding, coarse-sharpening, fine-sharpening, and buffing/polishing wheels and an associated wash station for removing bits of metal and other residue resulting from sharpening.
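The profile-cleanup step above can be illustrated with a minimal sketch. The abstract does not specify the algorithms used, so the following is an assumed median-based outlier filter over edge-profile points (the function name and tolerance are hypothetical, not from the patent):

```python
# Hypothetical cleanup pass over blade-edge profile heights: a point whose
# deviation from the local median exceeds a tolerance is treated as a
# spurious camera detection and replaced by that median, before the
# profile is translated into machine control commands.
def clean_profile(points, window=3, tol=0.5):
    cleaned = list(points)
    half = window // 2
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        neighborhood = sorted(points[lo:hi])
        median = neighborhood[len(neighborhood) // 2]
        if abs(points[i] - median) > tol:
            cleaned[i] = median
    return cleaned

# An edge-height profile with one spurious spike at index 3:
profile = [1.0, 1.1, 1.2, 9.0, 1.3, 1.4]
print(clean_profile(profile))   # → [1.0, 1.1, 1.2, 1.3, 1.3, 1.4]
```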
Machine Vision-Based Method and System for Measuring 3D Pose of a Part or Subassembly of Parts
A machine vision-based method and system for measuring 3D pose of a part or subassembly of parts having an unknown pose are disclosed. A number of different applications of the method and system are disclosed including applications which utilize a reprogrammable industrial automation machine such as a robot. The method includes providing a reference cloud of 3D voxels which represent a reference surface of a reference part or subassembly having a known reference pose. Using at least one 2D/3D hybrid sensor, a sample cloud of 3D voxels which represent a corresponding surface of a sample part or subassembly of the same type as the reference part or subassembly is acquired. The sample part or subassembly has an actual pose different from the reference pose. The voxels of the sample and reference clouds are processed utilizing a matching algorithm to determine the pose of the sample part or subassembly.
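The matching step above can be sketched as a single least-squares rigid alignment between corresponding points of the sample and reference clouds. The patent does not name its matching algorithm; the sketch below assumes known correspondences and uses the Kabsch/SVD method, which is the core step that iterative matchers such as ICP repeat after re-estimating correspondences:

```python
import numpy as np

def estimate_rigid_transform(reference, sample):
    """Estimate rotation R and translation t mapping reference points onto
    sample points (rows are corresponding 3D points), via Kabsch/SVD."""
    ref_c = reference.mean(axis=0)
    smp_c = sample.mean(axis=0)
    H = (reference - ref_c).T @ (sample - smp_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                     # guard against reflections
    R = Vt.T @ D @ U.T
    t = smp_c - R @ ref_c
    return R, t

# A reference cloud, and a sample cloud in a pose rotated 30 degrees
# about z and translated (the "actual pose" differing from the reference):
rng = np.random.default_rng(0)
ref = rng.random((50, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
smp = ref @ R_true.T + t_true

R_est, t_est = estimate_rigid_transform(ref, smp)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # True True
```

Recovering (R, t) recovers the sample pose relative to the known reference pose.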
VISION GUIDED ROBOT ARM AND METHOD FOR OPERATING THE SAME
A method for operating a vision guided robot arm system comprising a robot arm provided with an end effector at a distal end thereof, a display, an image sensor and a controller, the method comprising: receiving from the image sensor an initial image of an area comprising at least one object and displaying the initial image on the display; determining an object of interest amongst the at least one object and identifying the object of interest within the initial image; determining a potential action related to the object of interest and providing a user with an identification of the potential action; receiving a confirmation of the object of interest and the potential action from the user; and automatically moving the robot arm so as to position the end effector of the robot arm at a predefined position relative to the object of interest.
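The confirm-then-move sequence the method recites can be sketched as a small control flow. All names below are illustrative stand-ins, not from any actual controller API:

```python
# Hypothetical sketch of the claimed workflow: detect an object of
# interest, propose an action, wait for the user's confirmation of both,
# and only then command the arm to the predefined position.
def operate(detect, propose_action, confirm, move_arm, image):
    obj = detect(image)                  # identify object of interest
    action = propose_action(obj)         # determine potential action
    if not confirm(obj, action):         # user must confirm both
        return None
    return move_arm(obj, action)         # position the end effector

result = operate(
    detect=lambda img: "cup",
    propose_action=lambda obj: "grasp",
    confirm=lambda obj, act: True,
    move_arm=lambda obj, act: f"arm at {obj} for {act}",
    image="frame0",
)
print(result)   # arm at cup for grasp
```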
Imager for Detecting Visual Light and Projected Patterns
Methods and systems for depth sensing are provided. A system includes a first and second optical sensor each including a first plurality of photodetectors configured to capture visible light interspersed with a second plurality of photodetectors configured to capture infrared light within a particular infrared band. The system also includes a computing device configured to (i) identify first corresponding features of the environment between a first visible light image captured by the first optical sensor and a second visible light image captured by the second optical sensor; (ii) identify second corresponding features of the environment between a first infrared light image captured by the first optical sensor and a second infrared light image captured by the second optical sensor; and (iii) determine a depth estimate for at least one surface in the environment based on the first corresponding features and the second corresponding features.
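The depth estimate in step (iii) follows the standard stereo triangulation relation Z = f·B/d, where f is the focal length, B the baseline between the two sensors, and d the disparity between matched features. A minimal sketch, assuming rectified images and a focal length expressed in pixels:

```python
# Depth of a matched stereo feature pair from its horizontal disparity.
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity

# A feature seen at x=640 in the left image and x=600 in the right,
# with a 700 px focal length and a 7 cm baseline between sensors:
z = depth_from_disparity(640.0, 600.0, focal_px=700.0, baseline_m=0.07)
print(round(z, 3))   # → 1.225
```

The same relation applies whether the matched features come from the visible-light images or the infrared images; combining both kinds of correspondences is what lets the system estimate depth on both textured and texture-poor surfaces.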
Method and apparatus for conveying a meat product and using a knife for automated cutting of meat
The technology as disclosed herein includes a method and apparatus for deboning a meat item, and more particularly for deboning a poultry item, including performing an initial shoulder cut for removing boneless breast meat from the poultry carcass or frame. The technology as disclosed and claimed further includes a method and apparatus for removing a tender meat portion from a poultry item. The method and apparatus disclosed and claimed herein combine a robotic arm, including an ultrasonic knife implement and/or an annular blade knife implement, with a vision system for varying the cut path based on the shape and size of the poultry item. The combination as claimed including the ultrasonic knife can perform a meat cut while penetrating the meat with less force than a traditional knife typically requires. The combination as claimed including the annular blade knife implement can remove the tender meat portion from the keel bone and posterior sheath.
APPARATUS AND METHOD FOR POSITIONING EQUIPMENT RELATIVE TO A DRILL HOLE
An automated vehicle comprising: a control unit configured to control movement of the automated vehicle to a location adjacent an estimated location of a drill hole; a scanning portion including one or more scanning devices configured to scan an area of terrain in the vicinity of the estimated location of the drill hole in order to determine an actual location of the drill hole, and to generate a point cloud representing at least a portion of the interior of the drill hole; at least one arm associated with the scanning portion, the at least one arm configured to move the scanning portion between a home position and one or more scanning positions; and an end effector associated with the at least one arm, the end effector being configured to perform one or more operations;
wherein, upon generating the point cloud, the at least one arm is configured, based on the point cloud, to position the end effector in substantial alignment with the drill hole so that the end effector can perform the one or more operations.
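The alignment step above can be illustrated with a simple assumed geometry: if the point cloud includes samples around the hole collar, the hole centre can be estimated as their centroid and the end effector commanded to a standoff position above it. The function and parameter names are hypothetical, not from the patent:

```python
# Minimal sketch: estimate the drill-hole centre as the centroid of
# scanned collar points, and return a target pose hovering a fixed
# standoff above it for the end effector to begin its operations.
def hole_target_pose(collar_points, standoff=0.3):
    n = len(collar_points)
    cx = sum(p[0] for p in collar_points) / n
    cy = sum(p[1] for p in collar_points) / n
    cz = sum(p[2] for p in collar_points) / n
    return (cx, cy, cz + standoff)   # standoff metres above the hole

# Four scanned points around a collar centred at (2.0, 3.0, 0.0):
ring = [(1.5, 3.0, 0.0), (2.5, 3.0, 0.0), (2.0, 2.5, 0.0), (2.0, 3.5, 0.0)]
print(hole_target_pose(ring))   # → (2.0, 3.0, 0.3)
```

A real system would also estimate the hole axis from the interior points of the cloud so the end effector can be aligned in orientation, not just position.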
Detection system, transport system, detection method, and detection program
An embodiment of the present disclosure relates to a detection system used in a case where a transport robot transports a package in a state where the package in which an opening portion of a container is closed by a lid is placed on a placing portion of the transport robot. The detection system is configured to detect that the lid is opened.