Patent classifications
G05B2219/40577
AUTONOMOUS MOBILE GRABBING METHOD FOR MECHANICAL ARM BASED ON VISUAL-HAPTIC FUSION UNDER COMPLEX ILLUMINATION CONDITION
The present disclosure provides an autonomous mobile grabbing method for a mechanical arm based on visual-haptic fusion under complex illumination conditions, which mainly comprises approach control toward a target position and feedback control based on environment information.
According to the method, under complex illumination conditions, weighted fusion is performed on visible-light and depth images of a preselected region, the target object is identified and positioned by a deep neural network, and the mobile mechanical arm is driven to continuously approach the target object. The pose of the mechanical arm is then adjusted according to contact force information from a sensor module, the external environment, and the target object. Finally, visual and haptic information about the target object are fused to select the optimal grabbing pose and an appropriate grabbing force.
By adopting the method, object positioning precision and grabbing accuracy are improved, collision damage to and instability of the mechanical arm are effectively prevented, and harmful deformation of the grabbed object is reduced.
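A minimal Python sketch of the two ideas above, weighted visible-light/depth fusion and force-limited grip closure; the camera, detector, arm, and force_sensor interfaces are hypothetical and not part of the patent text:

```python
import numpy as np

def fuse_rgb_depth(rgb_gray, depth, w_rgb=0.6, w_depth=0.4):
    """Weighted fusion of a visible-light (grayscale) image and a depth map
    over the same preselected region; both inputs are HxW float arrays."""
    # Normalize each modality to [0, 1] so the weights are comparable.
    rgb_n = (rgb_gray - rgb_gray.min()) / (np.ptp(rgb_gray) + 1e-6)
    d_n = (depth - depth.min()) / (np.ptp(depth) + 1e-6)
    return w_rgb * rgb_n + w_depth * d_n  # fused map passed to the detector

def grasp_loop(camera, detector, arm, force_sensor, force_limit=5.0):
    """Approach the detected object, then stop closing the gripper when the
    measured contact force reaches the limit (placeholder control loop)."""
    rgb, depth = camera.read()                     # hypothetical camera API
    fused = fuse_rgb_depth(rgb.mean(axis=2), depth)
    target_pose = detector.localize(fused)         # e.g. a trained CNN head
    arm.move_towards(target_pose)
    while force_sensor.read() < force_limit:       # adjust grip by touch
        arm.close_gripper(step=0.001)
```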
Robotic Grasping Via RF-Visual Sensing And Learning
Described is the design, implementation, and evaluation of a robotic system configured to search for and retrieve RFID-tagged items in line-of-sight, non-line-of-sight, and fully-occluded settings. The robotic system comprises a robotic arm having a camera and antenna strapped around a portion thereof (e.g. a gripper) and a controller configured to receive information from the camera and radio frequency (RF) information via the antenna, and configured to use the information provided thereto to implement a method that geometrically fuses at least RF and visual information. This technique reduces uncertainty about the location of a target object even when the object is fully occluded. Also described is a reinforcement-learning network that uses fused RF-visual information to efficiently localize, maneuver toward, and grasp a target object. The systems and techniques described herein find use in many applications including robotic retrieval tasks in complex environments such as warehouses, manufacturing plants, and smart homes.
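A rough sketch of the geometric RF-visual fusion idea, assuming an RF-derived position estimate with an uncertainty and a set of visually detected candidate locations; the names and the Gaussian weighting are illustrative, not the patent's actual algorithm:

```python
import numpy as np

def fuse_rf_visual(rf_position, rf_sigma, visual_candidates):
    """Score visually detected candidate locations (Nx3 array) by how well
    they agree with an RF-derived position estimate; illustrative only."""
    d = np.linalg.norm(visual_candidates - rf_position, axis=1)
    scores = np.exp(-0.5 * (d / rf_sigma) ** 2)     # Gaussian agreement weight
    if scores.max() < 1e-3:
        # Fully occluded case: no visual candidate is consistent with the RF
        # reading, so fall back to the RF estimate alone.
        return rf_position
    return visual_candidates[np.argmax(scores)]
```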
METHOD FOR DETECTING TARGET OBJECT, DETECTION APPARATUS AND ROBOT
A method for detecting a target object includes: identifying a target object in a monitored region and calculating first state information of the target object with respect to a detection apparatus; estimating second state information of the target object after a pre-set delay duration according to the first state information; and performing a processing operation on the target object according to the estimated second state information. The present disclosure can quickly and accurately track a target object and estimate its state, adding new functions and satisfying a user's automation and intelligence requirements for object tracking and movement-state estimation.
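The delay-compensated estimate can be illustrated with a simple constant-velocity sketch (the patent does not fix a particular motion model):

```python
import numpy as np

def estimate_after_delay(position, velocity, delay_s):
    """Second state = first state propagated by a pre-set delay duration,
    here under a simple constant-velocity assumption."""
    return np.asarray(position) + np.asarray(velocity) * delay_s

# e.g. an object 2 m ahead moving 0.5 m/s to the left, 0.2 s processing delay
predicted = estimate_after_delay([2.0, 0.0], [0.0, -0.5], 0.2)  # -> [2.0, -0.1]
```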
Robot localization in a workspace via detection of a datum
An apparatus and method are disclosed for determining the position of a robot relative to objects in a workspace, using a camera, scanner, or other suitable device in conjunction with object recognition. The camera or other device receives information from which a point cloud of the viewed scene can be developed; this point cloud is in a camera-centric frame of reference. Information about a known datum is compared to the point cloud through object recognition. For example, a link of the robot could be the identified datum, so that, once it is recognized, the coordinates of the point cloud can be converted to a robot-centric frame of reference, since the position of the datum is known relative to the robot.
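A small sketch of the frame conversion the abstract describes, assuming the recognized datum's 4x4 pose is known both in the robot frame (from kinematics) and in the camera frame (from object recognition):

```python
import numpy as np

def camera_to_robot(points_cam, T_robot_datum, T_cam_datum):
    """Convert an Nx3 point cloud from the camera frame to the robot frame.
    T_robot_datum: 4x4 pose of the recognized datum (e.g. a robot link) in
    the robot frame, known from kinematics; T_cam_datum: 4x4 pose of the
    same datum in the camera frame, found by object recognition."""
    T_robot_cam = T_robot_datum @ np.linalg.inv(T_cam_datum)
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_robot_cam @ homog.T).T[:, :3]
```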
Sensing system, work system, augmented-reality-image displaying method, augmented-reality-image storing method, and program
A sensing system includes a detecting device that detects a position of a target and a controller. For display on a display device or projection by a projection apparatus, the controller creates an augmented-reality image that shows at least one of: a setting related to detection of the target using the detecting device, a setting of a moving apparatus, a setting of a robot that performs work on the target, a position of the target as recognized by the controller, a result of the detection of the target, a work plan of the moving apparatus, a work plan of the robot, a determination result of the controller, and a parameter related to the target.
METHOD AND APPARATUS FOR ALERTING THREATS TO USERS
Embodiments of the present disclosure relate to a method and an apparatus for alerting users to threats. The apparatus may capture a plurality of signals including at least one of Electro-Magnetic (E-M) signals and sound signals, which are used to detect objects around the user. A threat to the user is predicted based on the objects around the user, and one or more alerts are generated so that the user avoids the threat. Because the threat is predicted rather than merely detected, the alerts enable the user to take action well before the threat actually occurs.
Method for detecting target object, detection apparatus and robot
A method for tracking a target object using a detection apparatus includes: calculating movement state information of the target object with respect to the detection apparatus from one or more monitored images; calculating a distance between the target object and the detection apparatus; in response to the distance being no greater than a pre-set distance threshold, estimating, based on first state information including the movement state information, second state information of the target object after a pre-set delay duration; and performing an operation with respect to the target object according to the estimated second state information.
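For illustration, the distance-gated control flow might look like the following sketch, with a hypothetical detector API and the same constant-velocity assumption as above:

```python
def track_and_act(detector, delay_s=0.2, distance_threshold=1.5):
    """Gate the delay-compensated estimate on range: only when the target is
    within the pre-set distance threshold is the second state estimated and
    an operation performed (illustrative control flow only)."""
    position, velocity = detector.movement_state()   # hypothetical: (pos, vel)
    distance = (position[0] ** 2 + position[1] ** 2) ** 0.5
    if distance <= distance_threshold:
        predicted = [p + v * delay_s for p, v in zip(position, velocity)]
        detector.perform_operation(predicted)        # hypothetical actuator call
```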
Article searching method and robot thereof
An article searching method includes: receiving a search task for an article to be searched; acquiring, based on the search task, a three-dimensional model corresponding to the article; determining a search task group for searching for the article; and searching for the article based on the acquired three-dimensional model in combination with the search task group, wherein the members of the search task group share a search result during the search.
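One way to picture the shared search result within a search task group, with hypothetical robot and model-store interfaces that are not specified by the abstract:

```python
def shared_search(robots, article_id, model_db, shared_result):
    """Each robot in the search task group searches with the same 3D model;
    the first positive detection is written to a shared result so that the
    remaining robots can stop (illustrative coordination only)."""
    model = model_db.load(article_id)              # hypothetical 3D model store
    for robot in robots:
        if shared_result.get("found"):
            break                                  # another robot already found it
        location = robot.search_with_model(model)  # hypothetical search primitive
        if location is not None:
            shared_result["found"] = location      # share the search result
    return shared_result.get("found")
```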
Dynamically determining workspace safe zones with speed and separation monitoring
Systems and methods for determining safe zones in a workspace calculate safe actions in real time based on all sensed relevant objects and on the current state of the machinery (e.g., a robot) in the workspace. Various embodiments forecast, in real time, both the motion of the machinery and the possible motion of a human within the space, and continuously update the forecast as the machinery operates and humans move in the workspace.
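A hedged sketch of such a real-time check, in the spirit of speed-and-separation monitoring; the terms and margin used here are illustrative, not the embodiment's actual formulation:

```python
def protective_distance(v_human, v_robot, t_reaction, t_stop, c_margin=0.2):
    """Minimum separation that must be maintained right now: distance covered
    by the human while the robot reacts and stops, plus the robot's own travel
    during its reaction time, plus a safety margin (all in metres/seconds)."""
    return v_human * (t_reaction + t_stop) + v_robot * t_reaction + c_margin

def is_safe(current_separation, **kwargs):
    """The robot may keep moving only while the measured separation exceeds
    the dynamically computed protective distance; otherwise slow or stop."""
    return current_separation > protective_distance(**kwargs)

# e.g. human at 1.6 m/s, robot at 0.5 m/s, 0.1 s reaction, 0.3 s stopping time
print(is_safe(1.2, v_human=1.6, v_robot=0.5, t_reaction=0.1, t_stop=0.3))
```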
AI-BASED NEW LEARNING MODEL GENERATION SYSTEM FOR VISION INSPECTION ON PRODUCT PRODUCTION LINE
An AI-based new learning model generation system for vision inspection on a product production line is proposed. In this system, a candidate set extraction module extracts two or more candidate data sets, on the basis of determination type information, from among a plurality of training data sets previously used to train existing learning models for vision inspection on the product production line. An additional set determination module calculates the similarity between the training images of new training data and each candidate data set, and determines any candidate data set whose similarity is greater than or equal to a reference value to be an additional training data set. A new model generation module may then generate a new learning model by training a pre-trained model with the additional training data set and the new training data.
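An illustrative sketch of the similarity-based selection of additional training data sets; cosine similarity over feature embeddings is an assumption here, since the abstract does not specify the similarity metric:

```python
import numpy as np

def select_additional_sets(new_embeddings, candidate_sets, reference_value=0.8):
    """Pick the candidate training data sets whose images are similar enough
    to the new training images. new_embeddings: (N, d) feature array for the
    new training images; candidate_sets: dict mapping set name -> (M, d) array."""
    def mean_cosine(a, b):
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return float((a @ b.T).mean())   # average pairwise cosine similarity
    return [name for name, emb in candidate_sets.items()
            if mean_cosine(new_embeddings, emb) >= reference_value]
```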