Patent classifications
B25J9/1679
Method of localization using multi sensor and robot implementing same
Disclosed herein are a method of localization using multiple sensors and a robot implementing the same. The method includes: sensing, by a LiDAR sensor of the robot, a distance between the robot and an object outside the robot and generating a first LiDAR frame while a moving unit moves the robot; capturing, by a camera sensor of the robot, an image of an object outside the robot and generating a first visual frame; and, by a controller, comparing a LiDAR frame stored in a map storage of the robot with the first LiDAR frame, comparing a visual frame registered in a frame node of a pose graph with the first visual frame, determining the accuracy of the comparison results for the first LiDAR frame, and calculating a current position of the robot.
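The two-stage comparison described in the abstract can be sketched as follows. This is a minimal illustration, not the patented method: the similarity metric, the stored frames, the pose list, and the `accuracy_threshold` are all hypothetical stand-ins. The point is the control flow — use the LiDAR-frame match when its comparison result is accurate enough, and otherwise fall back to the visual frames registered in the pose graph.

```python
import numpy as np

def frame_similarity(a, b):
    # Toy similarity score in (0, 1]: higher means a closer match.
    # A real system would use scan matching or feature descriptors.
    return 1.0 / (1.0 + np.mean(np.abs(a - b)))

def localize(lidar_frame, visual_frame, stored_lidar, stored_visual, poses,
             accuracy_threshold=0.8):
    # Compare the new LiDAR frame against each LiDAR frame in map storage.
    lidar_scores = [frame_similarity(lidar_frame, f) for f in stored_lidar]
    best = int(np.argmax(lidar_scores))
    # If the LiDAR comparison is accurate enough, use its pose directly.
    if lidar_scores[best] >= accuracy_threshold:
        return poses[best]
    # Otherwise fall back to the visual frames of the pose graph.
    visual_scores = [frame_similarity(visual_frame, f) for f in stored_visual]
    return poses[int(np.argmax(visual_scores))]
```

A usage example: with two stored poses, a clean LiDAR match returns the first pose immediately, while an ambiguous LiDAR scan defers to the visual comparison.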
Agricultural Weed Removal System
An apparatus for efficient targeting or removal of weeds or other plants. The apparatus may include a vehicle having a frame, a motor, and a plurality of ground-engaging members adapted to propel the vehicle over a surface. It may also include a robotic arm comprising a distal portion and a proximal portion coupled to the frame, and an implement, such as a tool or hoe, connected to the distal portion of the robotic arm. The implement can be raised and lowered, and also moved relative to the surface, by pivoting or rotating the robotic arm at or near the proximal portion.
Object capturing device, capture target, and object capturing system
An object capturing device includes a light emission unit, a light receiving unit, and a scanning unit, together with a distance calculation unit and an object determination unit. The scanning unit directs measurement light from the light emission unit toward a measurement target space to perform scanning, and guides light reflected from an object in response to the measurement light to the light receiving unit. The distance calculation unit calculates a distance to the object in association with a scanning angle of the scanning unit. The object determination unit determines whether the object is a capture target based on whether a scanning angle range, within which a difference between distances is equal to or less than a predetermined threshold value, corresponds to a reference scanning angle range of the capture target, and on whether an intensity distribution of the reflected light within the scanning angle range corresponds to a reference intensity distribution of reflected light from the capture target.
Determining a Configuration of a Medical Robotic Arm
A computer-implemented method for determining a configuration of a medical robotic arm, wherein the configuration comprises a pose of the robotic arm and a position of a base of the robotic arm, the method comprising the steps of: acquiring treatment information data representing information about the treatment to be performed by use of the robotic arm; acquiring patient position data representing the position of a patient to be treated; and calculating the configuration from the treatment information data and the patient position data.
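The calculation step can be sketched under strong simplifying assumptions: treatment information reduced to a target offset and reach limit, patient position reduced to a 2D point, and candidate base positions scored by distance. None of these choices come from the patent; they only make the "configuration = base position + pose" output concrete.

```python
import math

def plan_arm_configuration(patient_pos, treatment_offset, candidate_bases, reach):
    # Treatment target in room coordinates: patient position plus a
    # treatment-specific offset (both hypothetical inputs).
    target = tuple(p + o for p, o in zip(patient_pos, treatment_offset))
    feasible = []
    for base in candidate_bases:
        d = math.dist(base, target)
        if d <= reach:  # the target must lie within the arm's workspace
            feasible.append((d, base))
    if not feasible:
        return None  # no base position can serve this treatment
    _, base = min(feasible)  # pick the closest feasible base
    # "Pose" reduced to a yaw angle pointing the arm at the target.
    yaw = math.atan2(target[1] - base[1], target[0] - base[0])
    return {"base": base, "yaw": yaw}
```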
Method for automating transfer of plants within an agricultural facility
One variation of a method for automating transfer of plants within an agricultural facility includes: dispatching a loader to autonomously deliver a first module—defining a first array of plant slots at a first density and loaded with a first set of plants at a first growth stage—from a first grow location within an agricultural facility to a transfer station within the agricultural facility; dispatching the loader to autonomously deliver a second module—defining a second array of plant slots at a second density less than the first density and empty of plants—to the transfer station; recording a module-level optical scan of the first module; extracting a viability parameter of the first set of plants from features detected in the module-level optical scan; and if the viability parameter falls outside of a target viability range, rejecting transfer of the first set of plants from the first module.
AUTOMATED CREEL SYSTEMS AND METHODS FOR USING SAME
Systems and methods for loading and delivering stalk subassemblies and yarn packages are disclosed herein. Such systems and methods can have at least one processor, at least one automated guided vehicle, at least one creel assembly, and an automated creel loading assembly. The at least one automated guided vehicle can be communicatively coupled to the at least one processor. The at least one processor can be configured to selectively direct an automated guided vehicle to engage a respective stalk subassembly. Upon engagement between the automated guided vehicle and the stalk subassembly, the processor can be configured to selectively direct the automated guided vehicle to move about and between a selected operative position within the creel assembly and a loading position proximate the automated creel loading assembly.
Digital-Twin-Enabled Artificial Intelligence System for Distributed Additive Manufacturing
An information technology system for a distributed manufacturing network includes an additive manufacturing platform configured to manage workflows for a set of distributed manufacturing network entities associated with the distributed manufacturing network. The information technology system includes a set of digital twins generated by the additive manufacturing platform. The information technology system includes an artificial intelligence system configured to be executed by a data processing system in communication with the additive manufacturing platform. The artificial intelligence system is trained to generate process parameters for the workflows managed by the additive manufacturing platform using data collected from the set of distributed manufacturing network entities. The information technology system includes a control system configured to adjust the process parameters during an additive manufacturing process performed by at least one of the set of distributed manufacturing network entities.
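The closed loop in which the control system adjusts AI-generated process parameters during a build can be reduced to a one-line correction rule. The proportional-gain form, the bounds, and every numeric value here are assumptions for illustration — the abstract says only that parameters are adjusted during the additive manufacturing process.

```python
def adjust_parameter(setpoint, measured, param, gain=0.1, bounds=(0.0, 1.0)):
    # Proportional correction of a normalized process parameter based on
    # the deviation of a measured quantity (e.g. melt-pool temperature)
    # from its setpoint, clamped to safe bounds. All values hypothetical.
    error = setpoint - measured
    new = param + gain * error
    lo, hi = bounds
    return max(lo, min(hi, new))
```

In a digital-twin setting, `measured` would come from the twin or from in-situ sensors at a distributed manufacturing entity, and the loop would run once per layer or per time step.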
CONTROL DEVICE, INSPECTION SYSTEM, CONTROL METHOD, AND STORAGE MEDIUM
A control device according to an embodiment receives first posture data of a posture of a first robot. The first robot includes a first manipulator and a first end effector. Furthermore, the control device sets the posture of the first robot based on the first posture data and causes the first robot to perform a first task on a first member. The first posture data is generated based on second posture data, which represents a posture assumed when a second robot that includes a second manipulator and a second end effector performs a second task on the first member.
Smart home robot assistant
Methods and systems are described for robot transportation of objects into or out of a home automation system. One example may include determining, by a mobile robotic device, that an object is available to cross a boundary of the home automation system. The method may include deactivating at least a portion of the home automation system. The method may also include retrieving, by the mobile robotic device, the object and transporting, by the mobile robotic device, the object across the boundary. The method may further include leaving, by the mobile robotic device, the object at a drop-off location. The method may also include reactivating at least the portion of the home automation system.
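The deactivate-retrieve-transport-leave-reactivate sequence can be sketched with stub interfaces. The `HomeSystem` and `Robot` classes and their method names are invented for illustration; the one design point worth showing is reactivating the disarmed portion in a `finally` block, so the home automation system is restored even if the robot fails mid-task.

```python
class HomeSystem:
    """Stub home automation system that records the calls it receives."""
    def __init__(self):
        self.log = []
    def deactivate(self, zone):
        self.log.append(("deactivate", zone))  # e.g. disarm a door sensor
    def reactivate(self, zone):
        self.log.append(("reactivate", zone))  # re-arm the sensor

class Robot:
    """Stub mobile robotic device sharing the same call log."""
    def __init__(self, log):
        self.log = log
    def retrieve(self, obj):
        self.log.append(("retrieve", obj))
    def cross(self, boundary):
        self.log.append(("cross", boundary))
    def leave(self, obj, spot):
        self.log.append(("leave", obj, spot))

def transport(robot, home, obj, boundary, zone, drop_off):
    # Deactivate only the affected portion of the home automation system,
    # move the object across the boundary, then restore the system.
    home.deactivate(zone)
    try:
        robot.retrieve(obj)
        robot.cross(boundary)
        robot.leave(obj, drop_off)
    finally:
        home.reactivate(zone)  # always re-arm, even if a step fails
```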
Method and device for controlling a robot, and robot
A method and device for controlling a robot, and a robot. The device detects whether an article is being put into or taken out of a storage container of the robot; if such an event is detected, the device updates an information list according to the article put in or taken out, the information list recording relevant information about the articles in the storage container.
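The information-list update can be sketched as a small inventory class. The class name, the use of a quantity counter, and the event-handler method names are all hypothetical; the abstract specifies only that the list is updated when an article is detected being put in or taken out.

```python
from collections import Counter

class StorageContainer:
    """Hypothetical information list for the robot's storage container."""
    def __init__(self):
        self.inventory = Counter()  # article name -> quantity

    def on_article_put(self, article, qty=1):
        # Called when the detection step sees an article being put in.
        self.inventory[article] += qty

    def on_article_taken(self, article, qty=1):
        # Called when an article is detected being taken out; never
        # removes more than the list says is present.
        taken = min(qty, self.inventory[article])
        self.inventory[article] -= taken
        if self.inventory[article] == 0:
            del self.inventory[article]
        return taken
```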