Patent classifications
G05B2219/40604
PICKING SYSTEM
A picking system is provided, which is capable of picking up an object even when the object is not registered in advance. The picking system includes: a picking device holding the object; an RGB-D camera acquiring three-dimensional point cloud data of the object to be picked up by the picking device; and a control device controlling the picking device based on a detection result by the RGB-D camera. The control device generates a geometric model of the object by fitting combinations of simple geometric primitives to the three-dimensional point cloud data, and calculates a holding position of the object for the picking device based on the geometric model.
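The primitive-fitting step can be sketched with a single planar primitive: fit a plane to the point cloud and take its centroid as a candidate holding position. A minimal NumPy illustration, not the patent's actual algorithm; the function names and the planar-top-face assumption are mine.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an N x 3 point cloud via SVD.

    Returns the plane centroid and unit normal -- one planar primitive
    of the kind a geometric model could be assembled from.
    """
    centroid = points.mean(axis=0)
    # Right-singular vector of the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def holding_pose(points):
    """Illustrative holding position: approach the fitted plane's
    centroid along its upward-oriented normal."""
    centroid, normal = fit_plane(points)
    if normal[2] < 0:          # make the normal point up (+z)
        normal = -normal
    return centroid, normal

# Synthetic point cloud: a flat patch at z = 0.1 m.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.05, 0.05, size=(200, 2))
cloud = np.column_stack([xy, np.full(200, 0.1)])
pos, approach = holding_pose(cloud)
```

A real system would fit several primitives (boxes, cylinders, planes) and select a graspable surface; the single-plane case shows the core fitting step.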
METHOD AND SYSTEM FOR FIXTURELESS ASSEMBLY OF A VEHICLE PLATFORM
A system for assembling a vehicle platform includes a robotic assembly system having at least two robotic arms, a vision system capturing images of an assembly frame, and a control system configured to control the robotic assembly system to assemble the vehicle platform based on images from the vision system, force feedback from the at least two robotic arms, and a component location model. The control system is further configured to identify assembly features of a first component and a second component of the vehicle platform from the images, operate the robotic arms to orient the first component and the second component to respective nominal positions based on the images and the component location model, and operate the robotic arms to assemble the first component to the second component based on the force feedback.
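The force-feedback assembly step can be sketched as a simple admittance law: each control cycle, measured contact forces drive a small, clamped lateral correction of the arm. The gains and limits below are illustrative assumptions, not values from the patent.

```python
def admittance_correction(force_xyz, compliance=0.001, max_step=0.002):
    """Map a measured contact force (N) to a small position correction (m):
    move away from the reaction force, clamped per control cycle."""
    return [max(-max_step, min(max_step, -compliance * f)) for f in force_xyz]

# A +5 N lateral reaction force in x requests a -5 mm move,
# clamped to the 2 mm per-cycle limit.
step = admittance_correction([5.0, 0.0, 0.0])
```

In a fixtureless cell this kind of correction would run inside the arm's servo loop while the vision system supplies the coarse nominal positions.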
Wide-field-of-view anti-shake high-dynamic bionic eye
The present application discloses a wide-field-of-view anti-shake high-dynamic bionic eye. A trajectory tracking method based on a bionic eye robot includes: establishing a linear model of the bionic eye robot; establishing a full state feedback control system on the basis of the linear model; and, in the full state feedback control system, acquiring the angle and angular acceleration required for a joint during target tracking by the bionic eye on the basis of a preset trajectory expectation value and a preset joint angle expectation value. The method further includes adopting a linear quadratic regulator (LQR) to calculate the feedback gain K of the full state feedback control system, and minimizing energy consumption by establishing an energy function, so as to optimize the coordinated head-eye motion control of the bionic eye. The present application thereby achieves optimal control of target tracking.
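The LQR step can be sketched for a generic linear joint model. The double-integrator model and the weights Q and R below are illustrative assumptions; the gain K is obtained from the Riccati equation via the stable invariant subspace of the Hamiltonian matrix.

```python
import numpy as np

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: solve the algebraic Riccati equation through
    the Hamiltonian matrix's stable subspace, then K = R^-1 B^T P."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    eigvals, eigvecs = np.linalg.eig(H)
    stable = eigvecs[:, eigvals.real < 0]      # n stable eigenvectors
    X, Y = stable[:n, :], stable[n:, :]
    P = np.real(Y @ np.linalg.inv(X))          # Riccati solution
    return Rinv @ B.T @ P

# Illustrative double-integrator joint: state = [angle, angular rate].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])   # tracking-error weight ("energy function" terms)
R = np.array([[0.1]])      # control-effort weight
K = lqr_gain(A, B, Q, R)

# Full state feedback u = -K x stabilizes the closed loop A - B K.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

For this model the gain has the closed form K = [sqrt(q1/r), sqrt(q2/r + 2·sqrt(q1/r))], which the numerical solution reproduces.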
3D COMPUTER-VISION SYSTEM WITH VARIABLE SPATIAL RESOLUTION
One embodiment can provide a robotic system. The robotic system can include a robotic arm comprising an end-effector, a robotic controller configured to control movements of the robotic arm, and a dual-resolution computer-vision system. The dual-resolution computer-vision system can include a low-resolution three-dimensional (3D) camera module and a high-resolution 3D camera module. The low-resolution 3D camera module and the high-resolution 3D camera module can be arranged in such a way that a viewing region of the high-resolution 3D camera module is located inside a viewing region of the low-resolution 3D camera module, thereby allowing the dual-resolution computer-vision system to provide 3D visual information associated with the end-effector in two different resolutions when at least a portion of the end-effector enters the viewing region of the high-resolution camera module.
Robot control system simultaneously performing workpiece selection and robot task
A robot control system includes: a selector configured to select a task object from among a plurality of workpieces by using a first vision sensor; and an operation control section configured to control a robot to perform a task on the task object by using a tool. The selection and the task are executed simultaneously and in parallel: the selector transmits information on the selected task object to the operation control section before the task begins, and the operation control section controls the robot based on the transmitted information.
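The simultaneous selection/task split can be sketched as a producer-consumer pipeline: the selector pushes each chosen workpiece onto a queue while the operation control section works through it in parallel. A toy Python sketch; the thread structure and names are assumptions, not the patent's implementation.

```python
import queue
import threading

task_queue = queue.Queue(maxsize=4)
done = []

def selector(workpieces):
    """Vision-based selection: transmits each chosen workpiece to the
    operation control section while the robot is still working."""
    for wp in workpieces:
        task_queue.put(wp)        # transmit object info before the task
    task_queue.put(None)          # sentinel: no more workpieces

def operation_control():
    """Controls the robot on each transmitted task object."""
    while True:
        wp = task_queue.get()
        if wp is None:
            break
        done.append(wp)           # stand-in for performing the task

t1 = threading.Thread(target=selector, args=(["wp1", "wp2", "wp3"],))
t2 = threading.Thread(target=operation_control)
t1.start(); t2.start()
t1.join(); t2.join()
```

The bounded queue keeps the selector from running arbitrarily far ahead of the robot while still overlapping vision processing with motion.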
PRODUCTION SYSTEM
A production system includes a machine tool (10), a robot (25) having a camera (31), an automatic guided vehicle (35) having the robot (25) mounted thereon, and a controller (40) controlling the automatic guided vehicle (35) and the robot (25), and has an identification figure arranged in a machining area of the machine tool (10). The controller (40) stores, as a reference image, an image of the identification figure captured by the camera (31) with the robot (25) in an image capturing pose in a teaching operation. When repeatedly operating the automatic guided vehicle (35) and the robot (25), the controller (40) estimates an amount of error between a pose of the robot (25) in the teaching operation and a current pose of the robot (25) based on the reference image and an image of the identification figure captured by the camera (31) with the robot (25) in the image capturing pose, and corrects operating poses of the robot (25) based on the estimated amount of error.
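The error estimation between the teaching pose and the current pose can be sketched, assuming corner points of the identification figure are extracted from both images, as a 2D rigid (Kabsch) alignment. This is an illustration of one plausible method, not the patent's specified procedure.

```python
import numpy as np

def rigid_error_2d(ref_pts, cur_pts):
    """Estimate the rotation (rad) and translation mapping the current
    marker corners onto the reference corners (2D Kabsch alignment)."""
    ref_c, cur_c = ref_pts.mean(axis=0), cur_pts.mean(axis=0)
    H = (cur_pts - cur_c).T @ (ref_pts - ref_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:         # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ref_c - R @ cur_c
    return np.arctan2(R[1, 0], R[0, 0]), t

# Reference corners of a square marker, and the same corners as seen
# after the vehicle stops with a small rotation and offset.
ref = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
theta = 0.05
Rm = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
cur = ref @ Rm.T + np.array([0.02, -0.01])
angle, t = rigid_error_2d(ref, cur)   # recovered pose error to correct
```

The recovered angle and translation would then be applied as a correction to every taught operating pose of the robot.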
ROBOT SYSTEM, CONTROL METHOD, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, METHOD OF MANUFACTURING PRODUCTS, AND RECORDING MEDIUM
A robot system includes a robot, an image capture apparatus, an image processing portion, and a control portion. The image processing portion is configured to specify in an image of a plurality of objects captured by the image capture apparatus, at least one area in which a predetermined object having a predetermined posture exists, and obtain information on position and/or posture of the predetermined object in the area. The control portion is configured to control the robot, based on the information on position and/or posture of the predetermined object, for the robot to hold the predetermined object.
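Specifying an area where an object with a predetermined posture exists can be sketched as template matching: a normalized cross-correlation search for the view of the object in that posture. A brute-force NumPy illustration, not the patent's image processing.

```python
import numpy as np

def match_area(image, template):
    """Normalized cross-correlation search: returns the top-left corner
    of the window best matching the template (a posture-specific view)."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * np.linalg.norm(t)
            score = float((wc * t).sum() / denom) if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

rng = np.random.default_rng(1)
scene = rng.random((40, 40))
template = scene[10:18, 22:30].copy()   # the object in its known posture
pos, score = match_area(scene, template)
```

The matched area's coordinates would then seed the position/posture estimate passed to the robot controller for holding the object.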
End of Arm Sensing Device
A sensing device is described for mounting on a movable component of a robotic device. The sensing device includes a plurality of illumination sources comprising at least one ultraviolet (UV) illumination source. The sensing device further includes at least two cameras arranged in a stereo pair. The sensing device additionally includes a camera with a UV filter, wherein the UV filter is configured to pass wavelengths corresponding to UV light and to block wavelengths corresponding to visible and near-infrared light, and wherein the UV filter transmits light within an angular range such that transmission at one end of the angular range is equivalent to transmission at the opposite end.
Robot controller that controls robot, learned model, method of controlling robot, and storage medium
A robot controller that controls a robot by automatically obtaining a control law capable of suitably controlling a wide range of robots. An image is acquired from an image capturing apparatus that photographs an environment including the robot. The robot is driven based on an output result obtained by inputting the image to a neural network. The neural network is updated according to a reward that is generated when a plurality of virtual images, photographed while changing an environmental condition of a virtual environment generated by virtualizing the environment and a state of a virtual robot, are input to the neural network and the policy of the virtual robot output from the neural network satisfies a predetermined condition.
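The update loop can be sketched as reward-driven search over a policy parameter, evaluated across randomized virtual-environment conditions (domain randomization). A toy stand-in: a scalar gain and hill-climbing replace the neural network and its training, purely for illustration.

```python
import random

random.seed(0)

def randomize_environment():
    """Sample one virtual-environment condition (parameters illustrative)."""
    return {"lighting": random.uniform(0.5, 1.5),
            "robot_pose": random.uniform(-0.1, 0.1)}

def reward(gain, env):
    """Stand-in for a rollout: reward grows as the policy's action drives
    the virtual robot's pose toward zero (the 'predetermined condition')."""
    residual = env["robot_pose"] * (1.0 - gain)
    return -residual ** 2

# Hill-climb the policy parameter over batches of randomized environments,
# keeping a candidate update only when it raises the average reward.
gain = 0.0
for _ in range(50):
    envs = [randomize_environment() for _ in range(16)]
    base = sum(reward(gain, e) for e in envs) / 16
    cand = gain + random.uniform(-0.2, 0.2)
    if sum(reward(cand, e) for e in envs) / 16 > base:
        gain = cand
```

Averaging the reward over randomized conditions is what lets a policy trained in the virtual environment transfer to a range of real robots and scenes.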