Patent classifications
B25J13/089
SYSTEM AND METHOD FOR CONTROLLING THE ROBOT, ELECTRONIC DEVICE AND COMPUTER READABLE MEDIUM
Systems, devices, and methods for controlling a robot. Some methods include, in response to determining that an object enters a reachable area of the robot, triggering a first sensor to sense a movement of the object; determining first position information of the object based on first data received from the first sensor; determining second position information of the object based on second data received from a second sensor; and generating a first prediction of a target position at which the object is to be operated on by the robot. In this way, the robot can complete an operation on the object on an automated guided vehicle (AGV) within the limited operation time during which the AGV passes through the reachable area of the robot. Meanwhile, by collecting sensing data from different sensor groups, the target position at which the object is handled by the robot may be predicted more accurately.
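The fusion-and-prediction step described above can be sketched as a weighted combination of the two sensors' position estimates, extrapolated forward along the object's motion. This is a minimal illustration; the function name, the fixed fusion weight, and the constant-velocity assumption are all hypothetical, not taken from the patent.

```python
import numpy as np

def predict_target_position(pos_a, pos_b, velocity, dt, w_a=0.5):
    """Fuse two sensor position estimates (weight w_a on the first)
    and extrapolate the object's position after dt seconds,
    assuming constant velocity."""
    fused = w_a * np.asarray(pos_a) + (1.0 - w_a) * np.asarray(pos_b)
    return fused + np.asarray(velocity) * dt

# Two sensors see the object at slightly different positions; it moves
# at 0.2 m/s along x. Predict where the robot should operate in 1.5 s.
target = predict_target_position([1.0, 0.0], [1.1, 0.0], [0.2, 0.0], 1.5)
```

A real system would weight the sensors by their estimated noise and update the prediction as the AGV moves, but the structure is the same.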
CONTROLLING MECHANICAL SYSTEMS BASED ON NATURAL LANGUAGE INPUT
A method is provided. The method includes obtaining an enhanced state graph. The enhanced state graph represents a set of objects within an environment and a set of positions of the set of objects. The enhanced state graph includes a set of object nodes, a set of property nodes and a set of goal nodes to represent a set of objectives. The method also includes generating a set of instructions for a set of mechanical systems based on the enhanced state graph. The set of mechanical systems is configured to interact with one or more of the set of objects within the environment. The method further includes operating the set of mechanical systems to achieve the set of objectives based on the set of instructions.
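The instruction-generation step can be illustrated with a toy enhanced state graph: object nodes carry positions, goal nodes carry target positions, and an instruction is emitted for every object not yet at its goal. The dictionary layout and instruction tuple format are illustrative assumptions, not the patent's data model.

```python
def generate_instructions(state_graph):
    """Emit a (move, object, current, goal) instruction for each object
    whose position differs from its goal node's target position."""
    instructions = []
    for obj, info in state_graph["objects"].items():
        goal = state_graph["goals"].get(obj)
        if goal is not None and info["position"] != goal:
            instructions.append(("move", obj, info["position"], goal))
    return instructions

graph = {
    "objects": {"cup": {"position": (0, 0)}, "plate": {"position": (2, 2)}},
    "goals": {"cup": (1, 1), "plate": (2, 2)},  # plate already in place
}
plan = generate_instructions(graph)  # one instruction, for the cup only
```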
Teaching apparatus, robot system, and teaching program
A teaching apparatus includes a display unit and a display control unit that controls actuation of the display unit. The display unit displays a command display area in which a plurality of input motion commands of a robot are displayed, an extraction display area in which at least one motion command extracted from the plurality of motion commands displayed in the command display area is displayed, and a settings input area in which details of the extracted motion command are set. The display control unit extracts, from the plurality of motion commands displayed in the command display area, a motion command related to one of position information, velocity information, and acceleration information of the robot, and displays it in the extraction display area.
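The extraction behavior amounts to filtering the displayed commands by the kind of information they relate to. A minimal sketch, assuming a hypothetical command record format (the command names and `kind` field are illustrative):

```python
def extract_commands(commands, kind):
    """Filter motion commands for the extraction display area, keeping
    only those related to the selected kind of information
    ('position', 'velocity', or 'acceleration')."""
    return [c for c in commands if c["kind"] == kind]

commands = [
    {"name": "MoveJ", "kind": "position"},
    {"name": "SetSpeed", "kind": "velocity"},
    {"name": "MoveL", "kind": "position"},
]
extracted = extract_commands(commands, "position")  # MoveJ and MoveL
```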
Position correction device, robot, and connection jig
A position correction device according to an embodiment includes a movable part and a pressing part. The movable part can move a holding part, which holds a connection object, back and forth in a second direction orthogonal to a first direction in which the holding part is moved to connect the connection object to a target connector, and in a rotational direction around an axis along a third direction orthogonal to both the first and second directions. The pressing part presses the movable part when it moves in the second direction to return it to a neutral position in the second direction, and presses the movable part when it moves in the rotational direction to return it to a neutral position in the rotational direction.
Systems for determining location using robots with deformable sensors
Systems and methods for determining a location of a robot are provided. A method includes receiving, by a processor, a signal from a deformable sensor including data with respect to a deformation region in a deformable membrane of the deformable sensor resulting from contact with a first object. The data associated with contact with the first object is compared, by the processor, to information associated with a plurality of objects stored in a database. The first object is identified, by the processor, as a first identified object of the plurality of objects stored in the database, the first identified object being the object in the database that is most similar to the first object. The location of the robot is determined, by the processor, based on a location of the first identified object.
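The identify-then-localize logic can be sketched as a nearest-neighbor lookup over stored contact features: pick the database object most similar to the sensed deformation, then adopt its known location. Feature vectors, the distance metric, and the database layout are all assumptions for illustration.

```python
import math

def identify_object(contact_features, database):
    """Return the name of the database entry whose stored contact
    features are closest (Euclidean distance) to the sensed ones."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda name: dist(contact_features,
                                               database[name]["features"]))

db = {
    "mug":  {"features": [0.9, 0.2], "location": (3.0, 1.0)},
    "door": {"features": [0.1, 0.8], "location": (0.0, 5.0)},
}
best = identify_object([0.85, 0.25], db)   # most similar entry
robot_location = db[best]["location"]      # locate the robot from it
```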
Method and apparatus for managing robot system
Embodiments of the present disclosure provide methods for managing a robot system. In one method, orientations for links in the robot system may be obtained when the links are arranged in at least one posture, where each orientation indicates a direction pointed by one of the links. At least one image of an object placed in the robot system may be obtained from a vision device equipped on one of the links. Based on the orientations and the at least one image, a first mapping may be determined between a vision coordinate system of the vision device and a link coordinate system of the link. Further, embodiments of the present disclosure provide apparatuses, systems, and computer readable media for managing a robot system. The vision device may be calibrated by the first mapping and may be used to manage operations of the robot system.
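A simplified stand-in for the first mapping is a least-squares affine transform estimated from corresponding point pairs observed in both coordinate systems. Real hand-eye calibration is considerably more involved; this sketch only shows the mapping idea, and all names are illustrative.

```python
import numpy as np

def estimate_mapping(vision_pts, link_pts):
    """Least-squares affine mapping from vision coordinates to link
    coordinates, estimated from corresponding point pairs.
    Returns a (3, 2) matrix: linear part plus a translation row."""
    V = np.hstack([np.asarray(vision_pts, float),
                   np.ones((len(vision_pts), 1))])
    L = np.asarray(link_pts, float)
    A, *_ = np.linalg.lstsq(V, L, rcond=None)
    return A

vision = [[0, 0], [1, 0], [0, 1]]
link = [[2, 3], [3, 3], [2, 4]]          # pure translation by (2, 3)
A = estimate_mapping(vision, link)
mapped = np.array([1.0, 1.0, 1.0]) @ A   # vision point (1, 1) in link frame
```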
ROBOTIC SYSTEM WITH IMAGE-BASED SIZING MECHANISM AND METHODS FOR OPERATING THE SAME
A system and method for estimating aspects of target objects and/or associated task implementations is disclosed.
Robot navigating through waypoints based on obstacle avoidance and method of robot's navigation
Disclosed herein are a robot navigating based on obstacle avoidance and a navigation method. In the robot or the navigation method of the robot according to an embodiment, a navigation route may be generated on the basis of position information about waypoints and objects sensed by a sensor, such that the robot may move via one or more waypoints.
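The route-generation idea can be sketched as visiting waypoints in order while adjusting any waypoint that lies too close to a sensed obstacle. The sidestep rule below is deliberately crude and purely illustrative; real planners replan the whole route.

```python
def plan_route(start, waypoints, obstacles, clearance=1.0):
    """Visit waypoints in order, nudging any waypoint that lies within
    `clearance` (Manhattan distance) of a sensed obstacle."""
    route = [start]
    for wp in waypoints:
        if any(abs(wp[0] - ox) + abs(wp[1] - oy) < clearance
               for ox, oy in obstacles):
            wp = (wp[0], wp[1] + clearance)  # sidestep away from obstacle
        route.append(wp)
    return route

# An obstacle sits on the first waypoint, so that waypoint is shifted.
route = plan_route((0, 0), [(2, 0), (4, 0)], obstacles=[(2, 0)])
```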
Method of localization using multi sensor and robot implementing same
Disclosed herein are a method of localization using multiple sensors and a robot implementing the same. The method includes sensing a distance between the robot and an object placed outside the robot and generating a first LiDAR frame by a LiDAR sensor of the robot while a moving unit moves the robot; capturing an image of an object placed outside the robot and generating a first visual frame by a camera sensor of the robot; comparing a LiDAR frame stored in a map storage of the robot with the first LiDAR frame; comparing a visual frame registered in a frame node of a pose graph with the first visual frame; determining the accuracy of the comparison results of the first LiDAR frame; and calculating a current position of the robot by a controller.
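The fused comparison can be sketched as scoring each candidate pose by the similarity of its stored LiDAR and visual frames to the current ones, then returning the best-scoring pose. The frame encoding, similarity function, and weighting are toy assumptions, not the patent's method.

```python
def localize(lidar_frame, visual_frame, map_frames, w_lidar=0.6):
    """Return the stored pose whose LiDAR and visual frames best match
    the current frames, using a weighted similarity score."""
    def similarity(a, b):
        return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))
    best_pose, best_score = None, -1.0
    for pose, frames in map_frames.items():
        score = (w_lidar * similarity(lidar_frame, frames["lidar"])
                 + (1 - w_lidar) * similarity(visual_frame, frames["visual"]))
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose

stored = {
    (0, 0): {"lidar": [1.0, 2.0], "visual": [0.1, 0.2]},
    (5, 5): {"lidar": [3.0, 4.0], "visual": [0.8, 0.9]},
}
pose = localize([3.1, 3.9], [0.75, 0.95], stored)  # matches (5, 5) best
```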
GENERATION OF IMAGE FOR ROBOT OPERATION
A robot control system includes circuitry configured to: generate a command to a robot; receive a frame image in which a capture position changes according to a motion of the robot based on the command; extract a partial region from the frame image according to the command; superimpose a delay mark on the partial region to generate an operation image; and display the operation image on a display device, so as to represent a delay of the motion of the robot with respect to the command.
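The extract-and-superimpose step can be sketched with array images: crop the region the command targets, then stamp a simple mark whose extent encodes the robot's lag behind the command. Representing the delay mark as a top-row bar is an illustrative choice, not the patent's rendering.

```python
import numpy as np

def make_operation_image(frame, commanded_xy, size, delay_steps):
    """Crop the square region the command targets from the frame image
    and draw a top-row bar whose length grows with the delay."""
    x, y = commanded_xy
    region = frame[y:y + size, x:x + size].copy()
    region[0, :min(delay_steps, size)] = 255  # delay mark
    return region

frame = np.zeros((8, 8), dtype=np.uint8)          # stand-in camera frame
img = make_operation_image(frame, (2, 2), size=4, delay_steps=3)
```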