B25J9/1697

Method and apparatus for managing robot system
11577400 · 2023-02-14

Embodiments of the present disclosure provide methods for managing a robot system. In one method, orientations of the links in the robot system may be obtained when the links are arranged in at least one posture, where each orientation indicates a direction pointed to by one of the links. At least one image of an object placed in the robot system may be obtained from a vision device equipped on one of the links. Based on the orientations and the at least one image, a first mapping may be determined between a vision coordinate system of the vision device and a link coordinate system of the link. Further, embodiments of the present disclosure provide apparatuses, systems, and computer-readable media for managing a robot system. The vision device may be calibrated by the first mapping and may be used to manage operations of the robot system.
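The first mapping described above amounts to estimating a rigid transform between the vision coordinate system and the link coordinate system. As a minimal sketch (not the patented procedure), assuming matched 3D points of a calibration object are available in both frames, such a transform can be estimated with the SVD-based Kabsch method:

```python
import numpy as np

def estimate_vision_to_link(points_vision, points_link):
    """Estimate the rigid transform (R, t) mapping vision-frame
    coordinates to link-frame coordinates from matched 3D points,
    using the SVD-based Kabsch/Procrustes method."""
    P = np.asarray(points_vision, dtype=float)
    Q = np.asarray(points_link, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                   # rotation with det(R) = +1
    t = cq - R @ cp                      # translation
    return R, t
```

Given noise-free correspondences from at least three non-collinear points, this recovers the exact transform; with noisy data it gives the least-squares fit.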

THREE-DIMENSIONAL IMAGE-CAPTURING DEVICE AND IMAGE-CAPTURING CONDITION ADJUSTING METHOD
20230040615 · 2023-02-09

A 3D image-capturing device includes at least one camera that acquires a 2D image and distance information of an object, a monitor that displays the 2D image acquired by the camera, and at least one processor including hardware. The processor acquires a first area, for which distance information is not required, in the 2D image displayed on the monitor, and sets an image-capturing condition so that the amount of distance information acquired by the camera in the first area is less than or equal to a prescribed first threshold, while the amount acquired in a second area, which is at least part of the area other than the first area, is greater than a prescribed second threshold that is larger than the first threshold.
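The image-capturing condition above reduces to counting valid depth samples per area. A minimal sketch, assuming boolean masks for the two areas and for pixels carrying distance information (the names and mask representation are illustrative, not from the patent):

```python
import numpy as np

def condition_satisfied(depth_valid, first_mask, second_mask, t1, t2):
    """Check the abstract's condition: the amount of distance information
    in the first area is <= t1, and in the second area is > t2 (t2 > t1)."""
    n1 = int(np.count_nonzero(depth_valid & first_mask))   # samples in first area
    n2 = int(np.count_nonzero(depth_valid & second_mask))  # samples in second area
    return n1 <= t1 and n2 > t2
```

A controller could iterate camera settings (e.g., projector power or exposure) until this predicate holds.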

ROBOTIC SYSTEM WITH IMAGE-BASED SIZING MECHANISM AND METHODS FOR OPERATING THE SAME

A system and method for estimating aspects of target objects and/or associated task implementations is disclosed.

FUSION OF SPATIAL AND TEMPORAL CONTEXT FOR LOCATION DETERMINATION FOR VISUALIZATION SYSTEMS

A computer-implemented method is described for generating a control signal by locating at least one instrument through a combination of machine learning systems operating on digital images. The method includes determining parameter values of a movement context using at least two of the digital images, and determining an influence parameter value that controls how one of the digital images and the movement-context parameter values influence the input data used by a first trained machine learning system, which has a first learning model, to generate the control signal.

SYSTEMS AND METHODS FOR OBJECT DETECTION

A computing system including a processing circuit in communication with a camera having a field of view. The processing circuit is configured to perform operations related to detecting, identifying, and retrieving objects disposed amongst a plurality of objects. The processing circuit may be configured to perform operations related to object recognition template generation, feature generation, hypothesis generation, hypothesis refinement, and hypothesis validation.

SYSTEM AND/OR METHOD FOR ROBOTIC FOODSTUFF ASSEMBLY

The foodstuff assembly system can include: a robot arm, a frame, a set of foodstuff bins, a sensor suite, a set of food utensils, and a computing system. The system can optionally include a container management system and a human-machine interface (HMI), and can additionally or alternatively include any other suitable set of components. The system functions to enable picking of foodstuff from a set of foodstuff bins and placement into a container (such as a bowl, tray, or other foodstuff receptacle). Additionally or alternatively, the system can function to facilitate transferal of bulk material (e.g., bulk foodstuff) into containers, such as containers moving along a conveyor line.

Medical holding apparatus and medical observation system

A medical holding apparatus includes: a support including a plurality of arms, and a plurality of joints configured to connect the plurality of arms, the support being configured to support an imaging unit at a distal end thereof; a load applying mechanism arranged in at least one of the joints and configured to apply a resistance load against operation of the at least one of the joints to the support; and a processor comprising hardware, the processor being configured to: set torque to be applied by the load applying mechanism based on an operating state of the imaging unit; and apply a load corresponding to the set torque to the load applying mechanism when a rotation inhibit state of each of the arms of the support is released.
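The torque-setting logic in the abstract can be sketched as a lookup keyed on the imaging unit's operating state, with the load applied only once the rotation-inhibit state is released. The state names and torque values below are hypothetical, purely for illustration:

```python
# Hypothetical operating states and holding torques (N·m); illustrative only.
HOLD_TORQUE = {"imaging": 1.5, "repositioning": 0.3, "idle": 0.8}

def applied_load(operating_state, rotation_inhibited):
    """Return the resistance load for the load applying mechanism:
    zero while the arms' rotation-inhibit state is engaged, otherwise
    the torque set for the imaging unit's operating state."""
    if rotation_inhibited:
        return 0.0
    return HOLD_TORQUE.get(operating_state, HOLD_TORQUE["idle"])
```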

Method of localization using multi sensor and robot implementing same

Disclosed herein are a method of localization using multiple sensors and a robot implementing the same. The method includes: sensing the distance between the robot and an object outside it and generating a first LiDAR frame with the robot's LiDAR sensor while a moving unit moves the robot; capturing an image of an object outside the robot and generating a first visual frame with the robot's camera sensor; comparing a LiDAR frame stored in a map storage of the robot with the first LiDAR frame; comparing a visual frame registered in a frame node of a pose graph with the first visual frame; determining the accuracy of the comparison results of the first LiDAR frame; and calculating, by a controller, a current position of the robot.
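The final step, turning the LiDAR and visual comparisons into a position estimate, could take many forms. One hypothetical rule (the patent only states that comparison accuracy informs the result) is a score-weighted blend of the two pose estimates:

```python
def fuse_pose(lidar_pose, lidar_score, visual_pose, visual_score):
    """Blend two pose estimates (e.g., (x, y, theta) tuples), weighting
    each by its comparison match score. A hypothetical fusion rule for
    illustration, not the patented computation."""
    w = lidar_score / (lidar_score + visual_score)  # LiDAR weight in [0, 1]
    return tuple(w * a + (1.0 - w) * b
                 for a, b in zip(lidar_pose, visual_pose))
```

With equal scores this reduces to the midpoint of the two estimates; as one score dominates, the fused pose approaches that sensor's estimate.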

Robot navigation using 2D and 3D path planning
11554488 · 2023-01-17

Methods, systems, and apparatus, including computer-readable storage devices, for robot navigation using 2D and 3D path planning. In the disclosed method, a robot accesses map data indicating a two-dimensional layout of objects in a space and evaluates candidate paths for the robot to traverse. In response to determining that the candidate paths do not include a collision-free path across the space for a two-dimensional profile of the robot, the robot evaluates a three-dimensional shape of the robot with respect to a three-dimensional shape of an object in the space. Based on the evaluation of the three-dimensional shapes, the robot determines a collision-free path to traverse through the space.
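The 2D-then-3D fallback can be illustrated with a simple grid planner: plan against the 2D footprint first, and only on failure re-check cells against a 3D criterion. The grid representation and the passability rule (an obstacle is traversable if it is lower than the robot's underside clearance) are hypothetical, purely to show the control flow:

```python
from collections import deque

def bfs_path(blocked, start, goal):
    """Breadth-first search on a 4-connected grid; returns a path or None."""
    rows, cols = len(blocked), len(blocked[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:                      # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not blocked[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None

def plan_2d_then_3d(occupied, height, clearance, start, goal):
    """First plan against the 2D profile (any occupied cell blocks).
    If no collision-free path exists, fall back to a 3D check where a
    cell only blocks if its obstacle height reaches the clearance."""
    path = bfs_path(occupied, start, goal)
    if path is not None:
        return path, "2d"
    blocked3d = [[occ and h >= clearance for occ, h in zip(orow, hrow)]
                 for orow, hrow in zip(occupied, height)]
    path = bfs_path(blocked3d, start, goal)
    return path, ("3d" if path is not None else "none")
```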

Feature detection by deep learning and vector field estimation
11554496 · 2023-01-17

A system and method for extracting features from a 2D image of an object using a deep learning neural network and a vector field estimation process. The method includes extracting a plurality of possible feature points, generating a mask image that defines the pixels in the 2D image where the object is located, and generating a vector field image for each extracted feature point that includes an arrow directed towards that feature point. The method also includes generating a vector intersection image by identifying, for every combination of two pixels in the 2D image, the point where their arrows intersect. The method assigns a score to each intersection point based on its distance from each of the two pixels in the combination, and generates a point voting image that identifies a feature location from a cluster of points.
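The pairwise-intersection voting can be sketched in a few lines. The version below simplifies the patent's scored point-voting image to a mean of all pairwise intersections; the function names, 2D ray parameterization, and averaging rule are illustrative assumptions:

```python
import numpy as np

def ray_intersection(p1, d1, p2, d2):
    """Intersection of two 2D lines p + t*d; returns None if parallel."""
    A = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]], dtype=float)
    b = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:
        return None                      # parallel directions, no vote
    t = np.linalg.solve(A, b)            # solve p1 + t0*d1 = p2 + t1*d2
    return np.asarray(p1, dtype=float) + t[0] * np.asarray(d1, dtype=float)

def vote_feature_location(pixels, directions):
    """Accumulate the intersections of every pixel pair's direction
    vectors and return their mean as the voted feature location."""
    pts = []
    n = len(pixels)
    for i in range(n):
        for j in range(i + 1, n):
            q = ray_intersection(pixels[i], directions[i],
                                 pixels[j], directions[j])
            if q is not None:
                pts.append(q)
    return np.mean(pts, axis=0) if pts else None
```

With noise-free direction vectors, every non-parallel pair intersects at the true feature point; with a learned vector field the intersections scatter, and the cluster center (here, the mean) identifies the feature location.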