Patent classifications
B25J13/08
Automatic guiding method for self-propelled apparatus
An automatic guiding method for a self-propelled apparatus (10) is provided. The self-propelled apparatus (10) turns and emits light when a signal light emitted by a charging dock (20) is sensed by a flank sensor (103), and changes its turn direction when a different signal light from the charging dock (20) is sensed by a forward sensor (102). Each time it is triggered by the light emitted by the self-propelled apparatus (10), the charging dock (20) switches to emitting a signal light different from the one currently emitted. These actions are repeated, moving the self-propelled apparatus (10) toward the light-emitting unit (202), until the self-propelled apparatus (10) reaches a charging position. With only two sensors arranged on the self-propelled apparatus (10), the method accurately guides it to the charging position.
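The alternating-signal control loop described in the abstract can be sketched as a simple event-to-action mapping. This is a minimal illustration, not the patented implementation: the event names, the initial turn direction, and the action tuples are all assumptions.

```python
def guiding_actions(events):
    """Map dock-signal sensor events to apparatus actions.

    events: iterable of "flank" / "forward" sensor hits (hypothetical names).
    On a flank-sensor hit the apparatus turns and emits its own light;
    on a forward-sensor hit it reverses its turn direction.
    Returns the list of actions taken, in order.
    """
    direction = "left"  # initial turn direction is an assumption
    actions = []
    for sensor in events:
        if sensor == "flank":
            # dock signal seen on the flank: turn, then trigger the dock
            actions.append(("turn", direction))
            actions.append(("emit_light",))
        elif sensor == "forward":
            # a different dock signal seen ahead: flip the turn direction
            direction = "right" if direction == "left" else "left"
            actions.append(("change_direction", direction))
    return actions
```

In a real device this loop would run until a docked condition is sensed; here it simply records the decisions the abstract describes.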
COORDINATE POSITIONING ARM
A coordinate positioning arm includes: a base end and a head end; a drive frame for moving the head end relative to the base end; and a metrology frame for measuring a position and orientation of the head end relative to the base end. The drive frame includes a plurality of drive axes arranged in series between the base end and the head end. The metrology frame includes a plurality of metrology axes arranged in series between the base end and the head end. The metrology frame is adapted and arranged to be substantially separate and/or independent from the drive frame, for example by supporting the metrology frame substantially only at the base end and head end and by providing the metrology frame with sufficient degrees of freedom (via the metrology axes) to avoid creating an additional constraint between the metrology frame and the drive frame.
CALIBRATING A VIRTUAL FORCE SENSOR OF A ROBOT MANIPULATOR
A method of calibrating a virtual force sensor of a robot manipulator. In each of a plurality of poses, the method comprises: applying an external wrench to the robot manipulator; ascertaining an estimate of the external wrench; ascertaining a respective cost function based on a difference between the ascertained estimate of the external wrench and a specified external wrench; ascertaining a respective calibration function by minimizing the respective cost function; and storing the respective calibration function in a data set of all calibration functions, with the respective calibration function assigned to the respective pose for which it was ascertained.
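A toy version of this per-pose calibration can be written down if one assumes the simplest possible calibration function, an additive bias wrench, for which the least-squares cost has a closed-form minimizer. The 6-D wrench layout and the dict-of-poses storage are assumptions; the patent leaves the form of the calibration function open.

```python
import numpy as np

def calibrate(poses, estimates, specified):
    """Fit one calibration function per pose and store it keyed by pose.

    poses:      list of pose identifiers
    estimates:  per pose, a list of repeated 6-D wrench estimates
    specified:  per pose, the known applied 6-D wrench

    Assumes the calibration function is an additive bias b minimizing
    sum ||(e - b) - w_spec||^2 over the repeated estimates e, which is
    solved in closed form by b = mean(e) - w_spec.
    """
    calib = {}
    for pose, ests, w in zip(poses, estimates, specified):
        ests = np.asarray(ests, dtype=float)      # shape (k, 6)
        b = ests.mean(axis=0) - np.asarray(w, dtype=float)
        calib[pose] = b                           # data set of calibration functions
    return calib
```

A richer calibration function (e.g. pose-dependent gains) would replace the closed form with a numerical minimizer, but the per-pose cost-minimize-store structure stays the same.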
GENERATION OF IMAGE FOR ROBOT OPERATION
A robot control system includes circuitry configured to: generate a command to a robot; receive a frame image in which a capture position changes according to a motion of the robot based on the command; extract a partial region from the frame image according to the command; superimpose a delay mark on the partial region to generate an operation image; and display the operation image on a display device, so as to represent a delay of the motion of the robot with respect to the command.
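The extract-and-superimpose step can be sketched with plain array slicing. The region encoding in the command and the form of the delay mark are hypothetical; the abstract does not fix either.

```python
import numpy as np

def make_operation_image(frame, command, mark_value=255):
    """Crop the partial region the command points at and stamp a small
    'delay mark' in its corner to produce the operation image.

    frame:   2-D grayscale image as a NumPy array (assumption)
    command: dict with a "region" entry (x, y, w, h) (hypothetical encoding)
    """
    x, y, w, h = command["region"]
    region = frame[y:y + h, x:x + w].copy()  # partial region per the command
    region[:4, :4] = mark_value              # 4x4 marker representing the delay
    return region
```

Displaying `region` on the operator's screen then conveys that the shown view lags the issued command, as the abstract describes.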
AUTONOMOUS MANIPULATION OF FLEXIBLE PRIMARY PACKAGING IN DIMENSIONALLY STABLE SECONDARY PACKAGING BY MEANS OF ROBOTS
System for automatically manipulating primary packaging in secondary packaging, comprising a robot having at least one robot arm with a clamping gripper installed at a tool centre point, wherein each tool centre point has a force-torque sensor, an image recording module for recording images of at least the upper segment of the primary packaging, comprising at least two stereo cameras for recording 3-D images, and one or more processors for providing a three-dimensional point cloud, controlling the image recording module and controlling the robot on the basis of the analysis of the three-dimensional point cloud and the measurements from the force-torque sensors.
METHOD AND SYSTEM FOR OBJECT IDENTIFICATION
A method for identifying objects by shape in close proximity to other objects of different shapes obtains point cloud information of multiple objects. The objects are arranged in at least two trays and the trays are stacked. A depth image of the objects is obtained according to the point cloud information, and the depth image is separated and layered to obtain layer information for all the objects. An object identification system is also disclosed. Three-dimensional machine vision is utilized in identifying the objects, improving the accuracy of object identification and enabling the mechanical arm to accurately grasp the required object.
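The separate-and-layer step can be illustrated by binning a point cloud by depth, one bin per stacked tray. Using a fixed layer height is an assumption made here for brevity; the abstract derives the layers from the depth image itself.

```python
import numpy as np

def layer_objects(points, layer_height):
    """Bin a point cloud into depth layers (one per stacked tray).

    points:       (n, 3) array-like of (x, y, z) points
    layer_height: assumed uniform tray spacing along z
    Returns {layer_index: points_in_layer}.
    """
    points = np.asarray(points, dtype=float)
    idx = np.floor(points[:, 2] / layer_height).astype(int)
    return {i: points[idx == i] for i in np.unique(idx)}
```

Per-layer identification (e.g. shape matching against each layer's points) would then run on each bin independently, which is what lets objects in a lower tray be told apart from those stacked above them.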
DEFORMABLE SENSORS AND METHODS FOR MODIFYING RUN-TIME MEMBRANE STIFFNESS USING MAGNETIC ATTRACTION
Deformable sensors and methods for modifying membrane stiffness through magnetic attraction are provided. A deformable sensor may include a membrane coupled to a housing to form a sensor cavity. The deformable sensor may further include magnetically-attractable particles located on or within the membrane. The deformable sensor may additionally include a magnetic object located at a base within the sensor cavity. The magnetic object may be configured to modifiably attract the magnetically-attractable particles and modify stiffness of the deformable sensor by modifying air pressure within the sensor cavity, based on modifiable strength of the magnetic object to attract the magnetically-attractable particles.
AUTONOMOUS MOBILE GRABBING METHOD FOR MECHANICAL ARM BASED ON VISUAL-HAPTIC FUSION UNDER COMPLEX ILLUMINATION CONDITION
The present disclosure discloses an autonomous mobile grabbing method for a mechanical arm based on visual-haptic fusion under a complex illumination condition, which mainly includes approaching control over a target position and feedback control over environment information.
According to the method, under the complex illumination condition, weighted fusion is conducted on visible-light and depth images of a preselected region, the target object is identified and positioned by a deep neural network, and a mobile mechanical arm is driven to continuously approach the target object. In addition, the pose of the mechanical arm is adjusted according to contact-force information from a sensor module, the external environment, and the target object; meanwhile, visual information and haptic information of the target object are fused, and the optimal grabbing pose and an appropriate grabbing force for the target object are selected.
By adopting the method, object positioning precision and grabbing accuracy are improved, collision damage to and instability of the mechanical arm are effectively prevented, and harmful deformation of the grabbed object is reduced.
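The weighted-fusion step above can be sketched as a pixel-wise blend of the visible-light and depth images. A single scalar weight is a deliberate simplification; the method presumably adapts the weighting to the illumination condition, which the abstract does not detail.

```python
import numpy as np

def fuse_images(visible, depth, w_visible):
    """Pixel-wise weighted fusion of a visible-light image and a depth
    image, both assumed normalized to [0, 1].

    w_visible: weight given to the visible-light channel; the depth
    channel receives (1 - w_visible). Under poor lighting, a lower
    w_visible would lean on the depth image instead.
    """
    visible = np.asarray(visible, dtype=float)
    depth = np.asarray(depth, dtype=float)
    return w_visible * visible + (1.0 - w_visible) * depth
```

The fused image would then feed the deep neural network that identifies and positions the target object.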
Determining how to assemble a meal
In an embodiment, a method includes determining a given material to manipulate to achieve a goal state. The goal state can be one or more deformable or granular materials in a particular arrangement. The method further includes, for the given material, determining a respective outcome for each of a plurality of candidate actions to manipulate the given material. In one embodiment, the determining is performed with a physics-based model. The method can further include determining a given action of the candidate actions whose outcome reaches the goal state within at least one tolerance. The method further includes, based on a selected one of the given actions, generating a first motion plan for the selected action.
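The evaluate-then-select loop can be sketched as follows. The `predict` callable stands in for the physics-based model, and the scalar outcome/goal comparison is a simplification of whatever arrangement metric the method actually uses; both are assumptions.

```python
def select_action(candidates, predict, goal, tolerance):
    """Evaluate each candidate action with a predictive model and return
    the first action whose predicted outcome lies within tolerance of
    the goal state, or None if no candidate qualifies.

    predict: callable mapping an action to a scalar outcome (stand-in
    for the physics-based model in the abstract).
    """
    for action in candidates:
        outcome = predict(action)
        if abs(outcome - goal) <= tolerance:
            return action
    return None
```

The returned action would then seed motion planning; in practice one might rank all qualifying actions rather than take the first, but the abstract only requires that the chosen action's outcome fall within tolerance.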