Patent classifications
G05B2219/40564
AUTONOMOUS DEVICES, SYSTEMS, AND METHODS FOR QUEUING FOLDED LAUNDRY
Devices, systems, and methods for autonomously queuing a plurality of folded household laundry articles in a packing queue are described. A packing system includes a rotating platform configured to rotate a folded laundry article for directional placement in the packing queue, at least one packing queue platform receiving the folded laundry article into a packing queue disposed thereon, and a double-ended conveyor. The double-ended conveyor includes a retrieving end and a depositing end. The conveyor is configured to be mounted to a gantry for traveling along the length of the packing queue platform, cantilevering the retrieving end over the rotating platform for retrieving the folded laundry article and the depositing end over the packing queue platform for depositing the folded laundry article onto either a surface of the packing queue platform or another folded laundry article of the plurality of household laundry articles.
Three-dimensional measuring apparatus, robot, and robot system
A three-dimensional measuring apparatus includes a projection unit that projects a first pattern light and a second pattern light by a laser beam on a region containing an object, an imaging unit that images a captured image of the region, a vibration information receiving part that receives vibration information on a vibration of the projection unit or the imaging unit, and a measuring unit that measures a three-dimensional shape of the object based on the captured image, wherein the region on which the first pattern light having a first period is projected is imaged by the imaging unit when the vibration information is equal to or smaller than a first threshold value.
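The vibration-gated capture described above can be sketched as a simple predicate: image the region only while the received vibration information is at or below the threshold associated with the current pattern. This is a minimal illustration; the class name, field names, and threshold values are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class PatternLight:
    period: float               # spatial period of the projected pattern
    vibration_threshold: float  # maximum tolerable vibration for imaging

def should_capture(pattern: PatternLight, vibration: float) -> bool:
    """Capture an image only when vibration is equal to or smaller
    than the threshold for the pattern currently being projected."""
    return vibration <= pattern.vibration_threshold

# Illustrative patterns: a coarse first pattern and a finer second pattern,
# where the finer pattern tolerates less vibration.
first_pattern = PatternLight(period=1.0, vibration_threshold=0.05)
second_pattern = PatternLight(period=0.5, vibration_threshold=0.02)
```

In practice the measuring unit would repeat this check per pattern, deferring capture until the vibration reading settles below the relevant threshold.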
OBJECT PLACEMENT
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing planning for robotic placement tasks. One of the methods includes determining an initial in-hand state for a grasped object. A show pose for the grasped object is determined, and the object is moved to the show pose. A refined in-hand state for the grasped object is determined based on the show pose, and a placement plan is determined based on the refined in-hand state for the grasped object.
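The planning loop described in this abstract (initial in-hand state, show pose, refined in-hand state, placement plan) can be sketched as a short pipeline. Every function and data value here is a hypothetical stand-in for the perception, motion, and planning components the patent leaves abstract.

```python
def choose_show_pose(in_hand_state):
    """Placeholder policy: present the grasped object at a fixed
    inspection pose tagged with the current in-hand estimate."""
    return ("show", in_hand_state)

def plan_placement(obj, perceive, move_to, refine, plan):
    """Pipeline: estimate in-hand state, move to a show pose,
    refine the estimate, then derive a placement plan from it."""
    in_hand = perceive(obj)               # initial in-hand state
    show_pose = choose_show_pose(in_hand) # pose exposing the object to sensors
    move_to(show_pose)                    # move the grasped object there
    refined = refine(obj, show_pose)      # refined in-hand state
    return plan(refined)                  # placement plan from refined state

# Usage with stub components standing in for real perception/planning.
result = plan_placement(
    "mug",
    perceive=lambda o: "coarse",
    move_to=lambda p: None,
    refine=lambda o, p: "refined",
    plan=lambda s: {"state": s},
)
```

The point of the structure is that the placement plan depends only on the refined in-hand state, not the initial estimate.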
SYSTEMS, DEVICES, ARTICLES, AND METHODS FOR PREHENSION
An end-effector may include a base, a plurality of underactuated fingers coupled to the base; and an adhesion gripper coupled to the base. An end-effector may include a base, an actuator, a first underactuated finger comprising a proximal link and a distal link, the proximal link including a distal end, a guide for a first tendon spaced a first distance away from the distal end of the proximal link and the distal link including a lever arm disposed on a proximal side to the distal pad and which extends in a volar direction from a first axis, and a node disposed on the lever arm sized and shaped to receive a first tendon. The end-effector may include a first revolute joint compliant in a first direction disposed between the base and the proximal link; and a second revolute joint compliant in the first direction disposed between the proximal link and the distal link.
Robotic system with handling mechanism and method of operation thereof
A gripper including: an orientation sensor configured to generate an orientation reading for a target object; a first grasping blade; a second grasping blade configured to secure the target object in conjunction with the first grasping blade, at an opposite end of the target object relative to the first grasping blade; a first position sensor, of the first grasping blade, configured to generate a first position reading of the first grasping blade relative to the target object; a second position sensor, of the second grasping blade, configured to generate a second position reading of the second grasping blade relative to the target object; and a blade actuator configured to secure the target object with the first grasping blade and the second grasping blade based on a valid orientation of the orientation reading and based on the first position reading and the second position reading indicating a stable condition.
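The actuation condition above can be sketched as a single gate: the blade actuator fires only when the orientation reading is valid and the two position readings indicate a stable condition. The patent does not define "stable condition"; treating it as the two readings agreeing within a tolerance is purely an assumption for this sketch.

```python
def ready_to_secure(orientation_valid: bool,
                    first_position: float,
                    second_position: float,
                    tolerance: float = 0.01) -> bool:
    """Gate for the blade actuator: require a valid orientation reading
    and a stable condition, modeled here (as an assumption) as the two
    blade position readings agreeing within a tolerance."""
    stable = abs(first_position - second_position) <= tolerance
    return orientation_valid and stable
```

A real controller would poll the three sensors and only energize the actuator while this predicate holds.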
GRIPPING ELEMENT FOR ARRANGEMENT ON A COMPONENT
A gripping element for arrangement on a component, wherein the component, in particular a cable harness, is of flexible design and has a changeable form, wherein the gripping element has at least one optically detectable marker via which a spatial orientation of the gripping element is detectable by an optical recognition device, and wherein the gripping element is grippable by a gripper.
Localization system and methods
A method of determining the position and orientation of a moveable object includes a) connecting a plurality of targets to the object at known points on the object such that the targets will move with the object; b) scanning the surface of the object and at least some of the plurality of targets to obtain target data; c) comparing the scanned target data to at least one known dimension of one of the plurality of targets; and d) if the scanned target data matches the at least one known dimension of one of the plurality of targets, mapping known object model data according to the target data to determine the position and orientation of the object.
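Step c) and the matching test in step d) amount to filtering scanned targets by whether a measured dimension agrees with a known target dimension. A minimal sketch, assuming scanned data arrives as per-target dimension measurements and that matching means agreement within a tolerance (both assumptions):

```python
def match_targets(scanned: dict, known_dimensions: list, tolerance: float = 0.5) -> list:
    """Return the IDs of scanned targets whose measured dimension matches
    one of the known target dimensions within the given tolerance.
    Only these matches would be used to map the object model data."""
    matched = []
    for target_id, dimension in scanned.items():
        for known in known_dimensions:
            if abs(dimension - known) <= tolerance:
                matched.append(target_id)
                break
    return matched
```

The matched targets, being at known points on the object, anchor the model data to the scan and thereby yield the object's position and orientation.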
Method and computing system for object recognition or object registration based on image classification
A computing system and method for object recognition are presented. The method includes the computing system obtaining an image representing one or more objects, and generating a target image portion associated with one of the one or more objects. The computing system determines whether to classify the target image portion as textured or textureless, and selects a template storage space from among a first and a second template storage space, wherein the first template storage space is cleared more often than the second template storage space. The first template storage space is selected in response to a textureless classification, and the second template storage space is selected in response to a textured classification. The computing system then performs object recognition based on the target image portion and the selected template storage space.
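The storage-selection rule above is a two-way branch on the texture classification. A minimal sketch, where classifying by a feature count against a threshold is an assumption (the patent does not specify the classifier):

```python
def classify_textured(feature_count: int, min_features: int = 10) -> bool:
    """Hypothetical classifier: call a target image portion 'textured'
    if it yields at least min_features detectable features."""
    return feature_count >= min_features

def select_store(feature_count: int, first_store, second_store):
    """Route textureless targets to the first store (cleared more often)
    and textured targets to the longer-lived second store."""
    if classify_textured(feature_count):
        return second_store
    return first_store
```

Routing short-lived textureless templates to the frequently cleared store keeps the durable store reserved for the more distinctive textured templates.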
AUTONOMOUS DEVICES, SYSTEMS, AND METHODS FOR PACKING FOLDED LAUNDRY
An autonomously operated system is configured to load at least one unbound deformable article into a container. The system includes a refillable cartridge configured to receive therein the article, at least one extendable conveyor configured to extend into the refillable cartridge and deposit the article within the refillable cartridge, a driven lifter configured to selectively lower and raise the refillable cartridge relative to the at least one extendable conveyor, one or more sensors configured to output a signal indicative of a fill height of the refillable cartridge, and at least one controller configured to, in response to a received signal from the one or more sensors, instruct the driven lifter to raise or lower the refillable cartridge to at least one of the one or more loading heights for receiving the article, or to lower the refillable cartridge to an unloading position within the container.
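The controller behavior described above maps a fill-height reading to a lifter command: step through loading heights as the cartridge fills, then drop to the unloading position once full. This is a sketch under stated assumptions; the specific heights, the unload trigger, and the policy of choosing the lowest loading height above the current fill level are all illustrative.

```python
def lifter_command(fill_height: float, loading_heights: list, unload_trigger: float):
    """Translate a fill-height sensor reading into a lifter instruction.

    Assumed policy: once the fill height reaches the unload trigger,
    lower to the unloading position; otherwise move to the lowest
    loading height still above the current fill level."""
    if fill_height >= unload_trigger:
        return ("lower", "unloading_position")
    for height in sorted(loading_heights):
        if height > fill_height:
            return ("move", height)
    # No loading height remains above the fill level: treat as full.
    return ("lower", "unloading_position")
```

Each deposited article raises the fill height, so successive sensor signals walk the cartridge downward through the loading heights until it is lowered into the container.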
VIRTUAL TEACH AND REPEAT MOBILE MANIPULATION SYSTEM
A method for performing a task by a robotic device includes mapping a group of task image pixel descriptors associated with a first group of pixels in a task image of a task environment to a group of teaching image pixel descriptors associated with a second group of pixels in a teaching image, based on positioning the robotic device within the task environment. The method also includes determining a relative transform between the task image and the teaching image based on mapping the group of task image pixel descriptors. The relative transform indicates a change in one or more points of 3D space between the task image and the teaching image. The method also includes updating one or more parameters of a set of parameterized behaviors associated with the teaching image based on the relative transform, and performing the task associated with the set of parameterized behaviors.
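The transform-and-update step can be sketched with matched descriptor locations: estimate a relative transform from teaching-image points to task-image points, then shift a behavior parameter by it. Real teach-and-repeat systems estimate a full 3D transform; reducing it to an average 2D translation here is only to keep the sketch short, and the behavior dictionary format is an assumption.

```python
def relative_translation(task_points: list, teach_points: list):
    """Estimate a 2D translation as the mean displacement from matched
    teaching-image points to their corresponding task-image points."""
    n = len(task_points)
    dx = sum(t[0] - s[0] for t, s in zip(task_points, teach_points)) / n
    dy = sum(t[1] - s[1] for t, s in zip(task_points, teach_points)) / n
    return (dx, dy)

def update_behavior(behavior: dict, transform):
    """Update a parameterized behavior by shifting its target point
    by the estimated relative transform."""
    dx, dy = transform
    x, y = behavior["target"]
    return {**behavior, "target": (x + dx, y + dy)}
```

With the taught target shifted by the estimated transform, the behavior recorded in the teaching environment can be replayed at the device's current position.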