Patent classifications
B25J9/161
Method of localization using multi sensor and robot implementing same
Disclosed herein are a method of localization using multi sensors and a robot implementing the same, the method including: sensing, by a LiDAR sensor of the robot, a distance between the robot and an object placed outside of the robot and generating a first LiDAR frame while a moving unit moves the robot; capturing, by a camera sensor of the robot, an image of an object placed outside of the robot and generating a first visual frame; and, by a controller of the robot, comparing a LiDAR frame stored in a map storage of the robot with the first LiDAR frame, comparing a visual frame registered in a frame node of a pose graph with the first visual frame, determining the accuracy of the comparison results for the first LiDAR frame, and calculating a current position of the robot.
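A minimal sketch of the fused localization idea described in this abstract, assuming hypothetical frame and map structures, a placeholder similarity metric, and an invented accuracy threshold (none of these come from the patent itself):

    import numpy as np

    def match_score(frame_a, frame_b):
        # Placeholder similarity between two frames/descriptors of equal shape.
        return 1.0 / (1.0 + float(np.mean(np.abs(np.asarray(frame_a) - np.asarray(frame_b)))))

    def localize(current_lidar, current_visual, map_lidar_frames, pose_graph_nodes,
                 lidar_accuracy_threshold=0.8):
        # 1) Compare the first LiDAR frame with LiDAR frames stored in the map.
        best_score, best_pose = max(((match_score(current_lidar, f["scan"]), f["pose"])
                                     for f in map_lidar_frames), key=lambda s: s[0])
        # 2) If the LiDAR comparison is judged accurate enough, use its pose.
        if best_score >= lidar_accuracy_threshold:
            return best_pose
        # 3) Otherwise rely on comparing the first visual frame against visual
        #    frames registered in the frame nodes of the pose graph.
        visual_best = max(((match_score(current_visual, n["visual_frame"]), n["pose"])
                           for n in pose_graph_nodes), key=lambda s: s[0])
        return visual_best[1]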
Self-learning industrial robotic system
Example implementations described herein are directed to a simulation environment for a real world system involving one or more robots and one or more sensors. Scenarios are loaded into a simulation environment having one or more virtual robots corresponding to the one or more robots, and one or more virtual sensors corresponding to the one or more sensors, to train a control strategy model through reinforcement learning, which is subsequently deployed to the real world environment. In cases of failure in the real world environment, the failures are provided to the simulation environment to generate an updated control strategy model for the real world environment.
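A hedged sketch of the train-in-simulation, deploy, and feed-back-failures loop outlined above; the simulator, learner, and real-system interfaces are assumptions, not the patent's API:

    def train_in_simulation(simulator, scenarios, learner, episodes_per_scenario=100):
        for scenario in scenarios:
            simulator.load(scenario)             # virtual robots and virtual sensors
            for _ in range(episodes_per_scenario):
                trajectory = simulator.rollout(learner.policy)
                learner.update(trajectory)       # reinforcement-learning update
        return learner.policy

    def deploy_and_refine(real_system, simulator, scenarios, learner):
        policy = train_in_simulation(simulator, scenarios, learner)
        failures = real_system.run(policy)       # failure cases observed in the real world
        while failures:
            scenarios.extend(failures)           # replay failures as new simulation scenarios
            policy = train_in_simulation(simulator, failures, learner)
            failures = real_system.run(policy)
        return policy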
Feature detection by deep learning and vector field estimation
A system and method for extracting features from a 2D image of an object using a deep learning neural network and a vector field estimation process. The method includes extracting a plurality of possible feature points, generating a mask image that defines pixels in the 2D image where the object is located, and generating a vector field image for each extracted feature point that includes an arrow directed towards the extracted feature point. The method also includes generating a vector intersection image by identifying an intersection point where the arrows for every combination of two pixels in the 2D image intersect. The method assigns a score to each intersection point depending on the distance between the intersection point and each pixel of the corresponding combination of two pixels, and generates a point voting image that identifies a feature location from a number of clustered points.
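A rough sketch of the pairwise vector-intersection voting step, in simplified 2-D geometry; the exponential scoring and the cluster average are illustrative assumptions rather than the claimed scoring function:

    import itertools
    import numpy as np

    def vote_for_feature(pixels, directions, sigma=2.0):
        # pixels: (N, 2) coordinates inside the object mask.
        # directions: (N, 2) unit vectors (arrows) pointing toward the candidate feature.
        points, scores = [], []
        for i, j in itertools.combinations(range(len(pixels)), 2):
            p, d = np.asarray(pixels[i], float), np.asarray(directions[i], float)
            q, e = np.asarray(pixels[j], float), np.asarray(directions[j], float)
            a = np.array([[d[0], -e[0]], [d[1], -e[1]]])
            if abs(np.linalg.det(a)) < 1e-9:
                continue                          # parallel arrows never intersect
            t, s = np.linalg.solve(a, q - p)      # solve p + t*d = q + s*e
            x = p + t * d                         # intersection point of the two arrows
            dist = np.linalg.norm(x - p) + np.linalg.norm(x - q)
            points.append(x)
            scores.append(np.exp(-dist / sigma))  # nearer pixels vote more strongly
        # Take the feature location where the weighted intersection points cluster.
        return np.average(np.array(points), axis=0, weights=np.array(scores))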
AUTONOMOUS MANIPULATION OF FLEXIBLE PRIMARY PACKAGING IN DIMENSIONALLY STABLE SECONDARY PACKAGING BY MEANS OF ROBOTS
System for automatically manipulating primary packaging in secondary packaging, comprising a robot having at least one robot arm with a clamping gripper installed at a tool centre point, wherein each tool centre point has a force-torque sensor, an image recording module for recording images of at least the upper segment of the primary packaging, comprising at least two stereo cameras for recording 3-D images, and one or more processors for providing a three-dimensional point cloud, controlling the image recording module and controlling the robot on the basis of the analysis of the three-dimensional point cloud and the measurements from the force-torque sensors.
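A short sketch of the control flow this system implies, with hypothetical robot, camera, and sensor interfaces and an invented force limit:

    def pick_from_secondary_packaging(robot, stereo_cameras, ft_sensor, force_limit=15.0):
        cloud = stereo_cameras.capture_point_cloud()   # 3-D points of the upper segment
        grasp_pose = plan_grasp_from_cloud(cloud)
        robot.move_gripper_to(grasp_pose)
        robot.close_gripper()
        while not robot.at_target():
            fx, fy, fz, tx, ty, tz = ft_sensor.read()  # force-torque reading at the TCP
            if max(abs(fx), abs(fy), abs(fz)) > force_limit:
                robot.stop()                           # excessive force: abort and re-plan
                return False
            robot.step_lift()
        return True

    def plan_grasp_from_cloud(cloud):
        # Toy grasp planner: hover slightly above the highest point of the cloud.
        x, y, z = max(cloud, key=lambda p: p[2])
        return (x, y, z + 0.02)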
AUTONOMOUS MOBILE GRABBING METHOD FOR MECHANICAL ARM BASED ON VISUAL-HAPTIC FUSION UNDER COMPLEX ILLUMINATION CONDITION
The present disclosure provides an autonomous mobile grabbing method for a mechanical arm based on visual-haptic fusion under a complex illumination condition, which mainly comprises approach control toward a target position and feedback control based on environment information.
According to the method, under the complex illumination condition, weighted fusion is conducted on visible light and depth images of a preselected region, identification and positioning of a target object are completed based on a deep neural network, and a mobile mechanical arm is driven to continuously approach the target object; in addition, the pose of the mechanical arm is adjusted according to contact force information of a sensor module, the external environment and the target object; and meanwhile, visual information and haptic information of the target object are fused, and the optimal grabbing pose and the appropriate grabbing force of the target object are selected.
By adopting the method, the object positioning precision and the grabbing accuracy are improved, the collision damage and instability of the mechanical arm are effectively prevented, and the harmful deformation of the grabbed object is reduced.
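A minimal sketch of the described pipeline, assuming invented fusion weights, a hypothetical detector, and a simple force ramp; none of these constants come from the disclosure:

    import numpy as np

    def fuse_visible_and_depth(rgb, depth, w_rgb=0.6, w_depth=0.4):
        # Weighted fusion of visible-light and depth images of the preselected region.
        depth_3ch = np.repeat(np.asarray(depth)[..., None], 3, axis=-1)
        return w_rgb * np.asarray(rgb) + w_depth * depth_3ch

    def approach_and_grasp(arm, camera, tactile, detector, target_force=5.0):
        while True:
            rgb, depth = camera.capture()
            target = detector.locate(fuse_visible_and_depth(rgb, depth))  # deep network
            if arm.distance_to(target) < 0.02:
                break
            arm.step_towards(target)               # keep approaching the target object
            if tactile.contact_force() > 0.5:      # unexpected contact: adjust the pose
                arm.back_off()
        # Increase grip force until the tactile sensor reports a stable grasp,
        # without exceeding the target force (limits deformation of the object).
        force = 0.0
        while force < target_force and not tactile.stable_grasp():
            force += 0.5
            arm.set_grip_force(force)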
Determining how to assemble a meal
In an embodiment, a method includes determining a given material to manipulate to achieve a goal state. The goal state can be one or more deformable or granular materials in a particular arrangement. The method further includes, for the given material, determining a respective outcome for each of a plurality of candidate actions to manipulate the given material. The determining can be performed with a physics-based model, in one embodiment. The method can further include determining a given action of the candidate actions, where the outcome of the given action reaching the goal state is within at least one tolerance. The method further includes, based on a selected one of the given actions, generating a first motion plan for the selected action.
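A compact sketch of the candidate-action selection described above; the physics model's interface and the scalar tolerance test are assumptions:

    def choose_action(material, goal_state, candidate_actions, physics_model, tolerance=0.05):
        feasible = []
        for action in candidate_actions:
            outcome = physics_model.simulate(material, action)    # predicted arrangement
            error = physics_model.distance(outcome, goal_state)   # deviation from the goal
            if error <= tolerance:
                feasible.append((error, action))
        if not feasible:
            return None                                           # no action reaches the goal
        _, selected = min(feasible, key=lambda item: item[0])
        return selected                                           # then plan motion for it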
ROBOT SERVICE METHOD AND ROBOT APPARATUS USING SOCIAL NETWORK SERVICE
The present invention relates to a robot service system and a robot apparatus using a social network service, and comprises: (a) a step in which a terminal device connects to a robot apparatus by executing a social network service program and displays a service screen showing an image captured by the robot apparatus; (b) a step in which the terminal device transmits a robot control command input on the service screen to the robot apparatus; (c) a step in which the robot apparatus performs an operation according to the robot control command and transmits operation performance data to the terminal device; and (d) a step in which the terminal device displays the operation performance data transmitted from the robot apparatus on the service screen.
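An illustrative sketch of the (a)-(d) message flow, using a hypothetical channel between the terminal device and the robot apparatus:

    def terminal_session(channel, ui):
        ui.show(channel.receive_image())         # (a) display the robot's camera image
        while ui.running():
            command = ui.get_robot_command()     # (b) command entered on the service screen
            if command is None:
                continue
            channel.send(command)
            report = channel.receive_report()    # (c) robot performs and reports back
            ui.show_operation_result(report)     # (d) show performance data on the screen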
Automated functional testing systems and methods of making and using the same
An automatic robot control system and methods relating thereto are described. These systems include components such as a touch screen panel (“TSP”) robot controller for controlling a TSP robot, a camera robot controller for controlling a camera robot and an audio robot controller for controlling an audio robot. The TSP robot operates inside a TSP testing subsystem, the camera robot operates inside a camera testing subsystem, and the audio robot operates inside an audio testing subsystem. Inside the audio testing subsystem, an audio signals measurement system, using a bi-directional coupling, controls the operation of the audio robot controller. In this control scheme, a test application controller is designed to control the different types of subsystem robots. Methods relating to TSP, camera, and audio robots, and their controllers, taken individually or in combination, for automatic testing of device functionalities are also described.
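A brief sketch of how a test application controller could dispatch test steps to the three subsystem robot controllers named above; the class and step format are assumptions:

    class TestApplicationController:
        def __init__(self, tsp_controller, camera_controller, audio_controller):
            self.controllers = {
                "touch": tsp_controller,     # drives the TSP robot in the TSP subsystem
                "camera": camera_controller, # drives the camera robot
                "audio": audio_controller,   # drives the audio robot
            }

        def run_test(self, test_case):
            results = []
            for subsystem, action in test_case.steps:   # e.g. ("touch", "tap_home_button")
                results.append(self.controllers[subsystem].execute(action))
            return results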
Initial reference generation for robot optimization motion planning
A robot optimization motion planning technique using a refined initial reference path. When a new path is to be computed using motion optimization, a previously computed candidate reference path whose start and goal points and collision avoidance environment constraints are similar to those of the new path is selected from storage. The candidate reference path is adjusted at all state points along its length to account for the difference between the start and goal points of the new path and those of the previously computed path, creating the initial reference path. The initial reference path, adjusted to fit the start and goal points, is then used as the starting state for the motion optimization computation. Because the initial reference path is similar to the final converged new path, the optimization computation converges more quickly than it would with a naïve initial reference path.
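A minimal sketch of the endpoint adjustment described above; the linear blending of start and goal offsets along the path is one plausible scheme, not necessarily the patented one:

    import numpy as np

    def adjust_reference_path(stored_path, new_start, new_goal):
        # stored_path: (N, D) array of state points from a previously computed path.
        stored_path = np.asarray(stored_path, dtype=float)
        start_offset = np.asarray(new_start, dtype=float) - stored_path[0]
        goal_offset = np.asarray(new_goal, dtype=float) - stored_path[-1]
        # Blend the two offsets linearly along the path so that every state point
        # is shifted and the endpoints match the new start and goal exactly.
        alphas = np.linspace(0.0, 1.0, len(stored_path))[:, None]
        return stored_path + (1.0 - alphas) * start_offset + alphas * goal_offset

The adjusted path is then passed to the motion optimizer as its initial reference.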
SURGICAL MANIPULATOR AND METHOD OF OPERATING THE SAME USING VIRTUAL RIGID BODY MODELING
A surgical manipulator and method of operating the same. The surgical manipulator includes an arm with a plurality of links and joints, wherein an angle between adjacent links forms a joint angle. The arm includes a distal end configured to support a surgical instrument with an energy applicator. At least one controller is coupled to the arm and models the surgical instrument and the energy applicator as a virtual rigid body. The controller(s) determine a commanded pose for the surgical instrument and the energy applicator based on a summation of a plurality of forces and/or torques, wherein the plurality of forces and/or torques are selectively applied to the virtual rigid body to emulate orientation and movement of the surgical instrument and the energy applicator. The controller(s) determine commanded joint angles for the arm that place the surgical instrument and the energy applicator according to the commanded pose.
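A hedged sketch of the virtual-rigid-body step: sum the selectively applied forces and torques, integrate a toy rigid-body model to obtain a commanded pose, and hand that pose to an inverse-kinematics routine. The dynamics, parameters, and IK interface are stand-ins:

    import numpy as np

    def commanded_pose(position, orientation, forces, torques,
                       mass=1.0, inertia=0.01, dt=0.001):
        # One control step; orientation is kept as a 3-vector here for brevity.
        net_force = np.sum(np.asarray(forces, dtype=float), axis=0)
        net_torque = np.sum(np.asarray(torques, dtype=float), axis=0)
        position = np.asarray(position, float) + (net_force / mass) * dt * dt
        orientation = np.asarray(orientation, float) + (net_torque / inertia) * dt * dt
        return position, orientation

    def commanded_joint_angles(pose, inverse_kinematics):
        # Joint angles that place the instrument and energy applicator at the pose.
        return inverse_kinematics(pose)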