Patent classifications
B25J9/1679
METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM FOR CONTROLLING A TRANSPORT ROBOT
A method for controlling a transport robot is provided. The method includes the steps of: acquiring, when a user makes a request for transport of a target object, information on the user and information on the transport of the target object including a delivery place of the target object; identifying the user on the basis of the information on the user, and determining a place associated with the user as a destination where a transport robot is to transport the target object from the delivery place, with reference to a result of the identification; and causing the target object to be transported to the destination by the transport robot.
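The destination-selection step above can be sketched as follows. This is a minimal illustration only: the user-to-place mapping, the dictionary lookup, and all names are assumptions for the sketch, not taken from the patent.

```python
# Sketch of the destination-selection step: identify the user, then
# determine a place associated with that user as the destination.
# The user_places mapping and function names are illustrative assumptions.

def determine_destination(user_info, user_places):
    """Return the place associated with the identified user, or None."""
    user_id = user_info["id"]          # result of identifying the user
    return user_places.get(user_id)    # place associated with the user

user_places = {"alice": "Room 301", "bob": "Front desk"}
dest = determine_destination({"id": "alice"}, user_places)
print(dest)  # Room 301
```

In practice the identification step would involve credentials or biometrics rather than a plain dictionary key, but the control flow is the same.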
GROUND BASED ROBOT WITH AN OGI CAMERA MODULE AND COOLING SYSTEM
Provided is a process including: receiving inspection path information indicating a path for a robot to travel, and a plurality of locations along the path to inspect; determining, based on information received via a location sensor, that a distance between a location of the robot and a first location of the plurality of locations is greater than a threshold distance; in response, sending a first command to a sensor system, wherein the first command causes a refrigeration system of an optical gas imaging (OGI) camera to decrease cooling; moving along the path; in response to determining that the robot is at the first location, sending a second command to the sensor system, wherein the second command causes the refrigeration system of the OGI camera to increase cooling; causing the sensor system to record a first video with the OGI camera; and causing the sensor system to store the first video in memory.
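The distance-gated cooling logic described above can be sketched as follows. The threshold value, the command strings, and the 2-D distance computation are illustrative assumptions, not details from the patent.

```python
import math

# Sketch of the distance-gated cooling decision: decrease OGI refrigeration
# while far from the next inspection location, increase it on arrival so the
# sensor is cold before recording. Threshold and command names are assumed.

THRESHOLD_M = 5.0

def distance(a, b):
    """Planar distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cooling_command(robot_pos, inspection_pos, threshold=THRESHOLD_M):
    if distance(robot_pos, inspection_pos) > threshold:
        return "decrease_cooling"   # conserve power while traveling
    return "increase_cooling"       # cool the sensor before recording video

print(cooling_command((0.0, 0.0), (10.0, 0.0)))  # decrease_cooling
print(cooling_command((9.0, 0.0), (10.0, 0.0)))  # increase_cooling
```

The design point is that the cryocooler of an OGI camera is power-hungry, so cooling only near inspection points trades warm-up time for battery life.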
Fast and Robust Initialization Method for Feature-Based Monocular Visual SLAM Using Inertial Odometry Assistance
A method and system for capturing, by a camera, a sequence of frames at respective locations within a portion of an environment; capturing, by an inertial measurement unit, a sequence of inertial odometry data corresponding to the sequence of frames at the respective locations; storing in a queue a data record that includes information extracted from processing the respective frame and information from the inertial measurement unit; in accordance with a determination that the sequence of inertial odometry data satisfies a first criterion: calculating a first relative pose between the first frame and the second frame; and in accordance with a determination that a difference between the first relative pose and the information extracted from processing the respective frame satisfies a first threshold: generating an initial map of the portion of the environment based on the first data record and the second data record.
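The initialization gate described above can be sketched as follows, with poses reduced to scalar translation magnitudes for brevity. The motion criterion, the consistency threshold, and all numeric values are illustrative assumptions, not taken from the patent.

```python
# Sketch of the SLAM initialization gate: queue frame/IMU records, require
# sufficient IMU-measured motion (first criterion), then initialize the map
# only if the IMU-derived and vision-derived relative poses agree (first
# threshold). Values and field names are illustrative assumptions.

def imu_motion_sufficient(imu_records, min_translation=0.05):
    """First criterion: enough accumulated translation between frames."""
    total = sum(r["delta_translation"] for r in imu_records)
    return total >= min_translation

def poses_consistent(imu_pose, visual_pose, max_error=0.1):
    """First threshold: discrepancy between the two relative-pose estimates."""
    return abs(imu_pose - visual_pose) <= max_error

def try_initialize(imu_records, imu_pose, visual_pose):
    if imu_motion_sufficient(imu_records) and poses_consistent(imu_pose, visual_pose):
        return "initialize_map"     # build initial map from the two data records
    return "keep_queueing"          # wait for more frames

records = [{"delta_translation": 0.03}, {"delta_translation": 0.04}]
print(try_initialize(records, imu_pose=0.20, visual_pose=0.17))  # initialize_map
```

Gating on agreement between the two modalities is what makes the initialization robust: a degenerate visual estimate (pure rotation, low parallax) will disagree with the inertial odometry and be rejected.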
Gripper, an apparatus, and a method for assembling kits of sanitary products
A gripper, an apparatus, and a method for assembling kits of sanitary products in an automated fashion are disclosed. The sanitary products are pre-loaded into containers including a plurality of independent housings for the sanitary products, each of the housings being configured to allow loading of a sanitary product therein and withdrawal of the sanitary product therefrom independently of the other housings, and the kit is assembled in batches on the gripper, and released by the latter into a package.
Robot for making coffee and method for controlling the same
A robot for making coffee and a method for controlling the same are provided to couple or decouple a portafilter to or from an espresso machine without damage to the espresso machine or the portafilter due to a collision between the espresso machine and the portafilter. The robot includes a robot arm to move with a predetermined degree of freedom, a gripper provided in the robot arm to grip a portafilter, a torque sensor provided in the robot arm to detect repulsive force (Fr) when the portafilter makes contact with a group head of an espresso machine, and a controller configured to set a virtual spring having a predetermined elastic modulus (C) based on the repulsive force (Fr) detected by the torque sensor, and to control driving torque (T) of the robot arm depending on the restoring force (Fe) of the virtual spring.
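The virtual-spring control law above can be sketched as follows. The Hooke's-law form of the restoring force, the lever-arm mapping from force to joint torque, and all numeric values are illustrative assumptions, not taken from the patent.

```python
# Sketch of the virtual-spring (impedance-style) control: once the torque
# sensor reports a repulsive force Fr at contact, a virtual spring with
# elastic modulus C is set, and the arm's driving torque T is derived from
# the spring's restoring force Fe. Gains and the Fe = -C * x form are assumed.

def restoring_force(C, displacement):
    """Hooke's-law restoring force Fe of the virtual spring."""
    return -C * displacement

def driving_torque(C, displacement, lever_arm=0.1):
    """Driving torque T obtained from Fe through an assumed lever arm."""
    Fe = restoring_force(C, displacement)
    return Fe * lever_arm

T = driving_torque(C=200.0, displacement=0.002)  # 2 mm contact deflection
print(round(T, 6))  # -0.04 (N*m), pushing back against the contact
```

The compliant behavior is the point: instead of rigidly driving the portafilter into the group head, the arm yields like a spring, limiting the contact force on both parts.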
System for item placement into non-rigid containers
Examples provide a system and method for autonomously placing items into non-rigid containers. An image analysis component analyzes image data generated by one or more cameras associated with picked items ready for bagging and/or a non-rigid container, such as, but not limited to, a bag. The image analysis component generates dynamic placement data identifying how much space is available inside the bag, bag tension, and/or contents of the bag. A dynamic placement component generates a per-item assigned placement for a selected item ready for bagging based on a per-bag placement sequence and the dynamic placement data. Instructions, including the per-item assigned placement designating a location within the interior of the non-rigid container for the selected item and an orientation for the selected item after bagging, are sent to at least one robotic device. The robotic device places the selected item into the non-rigid container in accordance with the instructions.
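The per-item placement decision above can be sketched as follows. The field names, the volume-based capacity check, and the way the sequence indexes off the bag's current contents are all illustrative assumptions, not details from the patent.

```python
# Sketch of per-item assigned placement: combine a per-bag placement
# sequence with dynamic placement data (remaining space, bag contents)
# to assign a location and orientation for the selected item.
# Field names and the capacity check are illustrative assumptions.

def assign_placement(item, bag_state, sequence):
    if item["volume"] > bag_state["space_available"]:
        return None                                  # item does not fit this bag
    slot = sequence[len(bag_state["contents"])]      # next slot in the sequence
    return {"item": item["name"], "location": slot, "orientation": "flat"}

bag = {"space_available": 3.0, "contents": ["bread"]}
seq = ["bottom", "middle", "top"]
print(assign_placement({"name": "eggs", "volume": 1.0}, bag, seq))
# {'item': 'eggs', 'location': 'middle', 'orientation': 'flat'}
```

A real system would also weigh bag tension and item fragility from the image analysis, but the shape of the decision (sequence position plus dynamic state) is as sketched.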
Handling device and computer program product
A handling device according to an embodiment includes a manipulator, a normal grid generation unit, a hand kernel generation unit, a calculation unit, and a control unit. The normal grid generation unit converts a depth image into a point cloud, generates spatial data including an object to be grasped that is divided into a plurality of grids from the point cloud, and calculates a normal vector of the point cloud included in the grid using spherical coordinates. The hand kernel generation unit generates a hand kernel of each suction pad. The calculation unit calculates ease of grasping the object to be grasped by a plurality of suction pads based on a 3D convolution calculation using a grid including the spatial data and the hand kernel. The control unit controls a grasping operation of the manipulator based on the ease of grasping the object to be grasped by the plurality of suction pads.
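The 3D-convolution scoring step above can be sketched as follows. The grid sizes, the all-ones hand kernel standing in for one suction-pad footprint, and the direct cross-correlation loop are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

# Sketch of scoring grasp ease by 3D convolution: cross-correlate an
# occupancy grid of the scene with a hand kernel describing the suction-pad
# layout; high responses mark poses where the pads are well supported.
# Grid sizes and the kernel itself are illustrative assumptions.

def grasp_ease(grid, kernel):
    """Valid-mode 3D cross-correlation of the occupancy grid with the kernel."""
    gz, gy, gx = grid.shape
    kz, ky, kx = kernel.shape
    out = np.zeros((gz - kz + 1, gy - ky + 1, gx - kx + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(grid[z:z+kz, y:y+ky, x:x+kx] * kernel)
    return out

grid = np.zeros((4, 4, 4))
grid[1:3, 1:3, 1:3] = 1.0          # a small object occupying part of the grid
kernel = np.ones((2, 2, 2))        # stand-in footprint of one suction pad
scores = grasp_ease(grid, kernel)
print(scores.max())  # 8.0  (kernel fully covered by the object)
```

The device described uses one kernel per suction pad and normal vectors in spherical coordinates; the sketch keeps only the core idea that convolution response measures how well a pad footprint lands on graspable surface.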
Doubles end-effector for robotic harvesting
An example system includes a nozzle having an inlet; an outlet mechanism disposed longitudinally adjacent to the nozzle; a conduit longitudinally adjacent to the outlet mechanism, where the conduit includes a distal chamber, a middle chamber, and a proximal chamber that are longitudinally disposed along a length of the conduit; a partition block configured to move between (i) a first position at which the partition block is disposed laterally adjacent to the middle chamber, such that the partition block is offset from a longitudinal axis of the conduit, and (ii) a second position at which the partition block resides in the middle chamber between the distal chamber and the proximal chamber; and a deceleration structure disposed at a proximal end of the conduit and bounding the proximal chamber, where the deceleration structure is configured to decelerate fruit that has traversed the conduit.
Intelligent vehicle transfer robot for executing parking and unparking by loading vehicle
A vehicle transfer robot (10) of the present invention has four vertical frames (110) disposed vertically on the ground at a predetermined distance apart from each other so as to form a quadrangle, and an upper frame (120) forming a quadrangle by connecting the upper end parts of the four vertical frames (110), respectively, wherein the vehicle transfer robot (10) includes: a frame part (100) including the vertical frames (110) and the upper frame (120); a driving part (200) installed at each of the lower end parts of the vertical frames (110) for moving the frame part (100); and a carriage (300) installed in the frame part (100) for loading a vehicle.
BEAD APPEARANCE INSPECTION DEVICE, BEAD APPEARANCE INSPECTION METHOD, PROGRAM, AND BEAD APPEARANCE INSPECTION SYSTEM
A bead appearance inspection device includes an input unit configured to enter input data related to a welding bead of a workpiece produced by welding, a first determination unit configured to perform a first inspection determination related to a shape of the welding bead based on a comparison between the input data and master data, k second determination units, where k is an integer of 1 or more, that are equipped with k types of artificial intelligence and that are configured to perform a second inspection determination related to a welding defect of the welding bead based on processing by the k types of artificial intelligence targeting the input data, and a comprehensive determination unit configured to output a result of an appearance inspection of the welding bead to an output device based on determination results of the first determination unit and the k second determination units.
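The comprehensive determination above can be sketched as follows. The pass/fail rule (every determiner must pass), the tolerance value, and the stand-in determiner functions are illustrative assumptions; the patent's k determination units would be trained models, not lambdas.

```python
# Sketch of the comprehensive determination: combine a master-data shape
# comparison (first determination) with k AI-based defect determinations
# (second determinations). The all-must-pass rule is an assumption.

def shape_ok(input_data, master_data, tolerance=0.5):
    """First determination: bead profile within tolerance of master data."""
    return all(abs(a - b) <= tolerance for a, b in zip(input_data, master_data))

def comprehensive_result(input_data, master_data, ai_determiners):
    first = shape_ok(input_data, master_data)
    second = [determine(input_data) for determine in ai_determiners]  # k results
    return "PASS" if first and all(second) else "FAIL"

# Two stand-in "AI" determiners (k = 2); real ones would be trained models.
no_porosity = lambda data: max(data) < 10.0
no_undercut = lambda data: min(data) > 0.0

print(comprehensive_result([5.0, 5.2], [5.0, 5.0], [no_porosity, no_undercut]))  # PASS
```

Splitting shape checking (deterministic, against master data) from defect detection (learned, per defect type) lets each determiner specialize, with the comprehensive unit merging their verdicts.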