G05B2219/39391

Integrated robotic system and method for autonomous vehicle maintenance

A robotic system includes a controller configured to obtain image data from one or more optical sensors and to determine a location and/or pose of a vehicle component based on the image data. The controller is also configured to determine a model of the external environment of the robotic system based on the image data and to determine tasks to be performed by components of the robotic system to perform maintenance on the vehicle component. The controller is also configured to assign the tasks to the components of the robotic system and to communicate control signals to those components to autonomously control the robotic system to perform the maintenance on the vehicle component.
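The assignment step in this abstract can be sketched as a simple greedy dispatcher that matches tasks to capable components. All names here (`Controller`, `Task`, `Component`, the capability sets) are illustrative assumptions, not the patent's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    target: str  # vehicle component the task addresses

@dataclass
class Component:
    name: str
    capabilities: set
    queue: list = field(default_factory=list)

class Controller:
    def __init__(self, components):
        self.components = components

    def assign(self, tasks):
        """Assign each task to the first component able to perform it."""
        plan = {}
        for task in tasks:
            for comp in self.components:
                if task.name in comp.capabilities:
                    comp.queue.append(task)
                    plan[task.name] = comp.name
                    break
        return plan

arm = Component("arm", {"unscrew", "lift"})
camera = Component("camera", {"inspect"})
controller = Controller([arm, camera])
plan = controller.assign([Task("inspect", "brake"), Task("unscrew", "panel")])
```

In the claimed system the controller would then emit control signals for each component's queue; here the `plan` dictionary stands in for that dispatch.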

Determining a virtual representation of an environment by projecting texture patterns
10967506 · 2021-04-06

Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between the sensor data associated with the first viewpoint and the sensor data associated with the second viewpoint. Based on the determined corresponding features, the computing device may then determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
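Once corresponding features are found between the two viewpoints, each correspondence yields a depth measurement via the standard stereo triangulation relation depth = focal length × baseline / disparity. A minimal sketch, with illustrative pixel and calibration values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate the distance to a matched feature from its stereo disparity.

    focal_px:     focal length in pixels (from calibration)
    baseline_m:   distance between the two optical sensors, in meters
    disparity_px: horizontal shift of the feature between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A matched feature seen at x = 410 px in the first view and x = 375 px
# in the second (values are made up for illustration):
disparity = 410 - 375  # 35 px
depth = depth_from_disparity(focal_px=700, baseline_m=0.1, disparity_px=disparity)
```

Repeating this over every matched feature produces the set of depth measurements that populates the virtual representation of the environment; the projected random textures exist only to give otherwise featureless surfaces something to match.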

IMAGE PROCESSING APPARATUS THAT PROCESSES IMAGE PICKED UP BY IMAGE PICKUP APPARATUS ATTACHED TO ROBOT, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR
20210107161 · 2021-04-15

An image processing apparatus capable of simplifying operations for determining an image pickup posture of an image pickup apparatus attached to a robot. The apparatus processes images picked up by the image pickup apparatus attached to the robot. It includes a memory device that stores a set of instructions and at least one processor that executes the set of instructions to specify a working area of the robot based on teaching point information indicating a plurality of designated teaching points, to specify an image pickup area of the image pickup apparatus so as to include the specified working area, and to determine an image pickup posture of the robot based on the specified image pickup area.
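The three steps (working area from teaching points, pickup area enclosing it, posture covering the pickup area) can be sketched in 2D as bounding-box arithmetic plus a field-of-view height calculation. The function names, the margin, and the "camera centered above the area" posture model are all assumptions for illustration:

```python
import math

def working_area(teaching_points):
    """Axis-aligned bounding box (x0, y0, x1, y1) of the teaching points."""
    xs = [p[0] for p in teaching_points]
    ys = [p[1] for p in teaching_points]
    return (min(xs), min(ys), max(xs), max(ys))

def pickup_area(area, margin):
    """Expand the working area so the image pickup area fully includes it."""
    x0, y0, x1, y1 = area
    return (x0 - margin, y0 - margin, x1 + margin, y1 + margin)

def pickup_posture(area, fov_half_angle_rad):
    """Camera pose (cx, cy, height) centered above the area, high enough
    that the field of view covers its larger extent."""
    x0, y0, x1, y1 = area
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half_extent = max(x1 - x0, y1 - y0) / 2
    height = half_extent / math.tan(fov_half_angle_rad)
    return (cx, cy, height)
```

Given four taught corner points, `working_area` replaces the manual step of outlining where the robot operates, and `pickup_posture` replaces the trial-and-error positioning of the camera.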

Techniques for detecting errors or loss of accuracy in a surgical robotic system
11844495 · 2023-12-19

Systems and methods for detecting an error in a surgical system. The surgical system includes a manipulator with a base and a plurality of links and the manipulator supports a surgical tool. The system includes a navigation system with a tracker and a localizer to monitor a state of the tracker. Controller(s) determine values of a first transform between a state of the base of the manipulator and a state of one or both of the localizer and the tracker of the navigation system. The controller(s) determine values of a second transform between the state of the localizer and the state of the tracker. The controller(s) combine values of the first transform and the second transform to determine whether an error has occurred relating to one or both of the manipulator and the localizer.
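The core check (compose the two measured transforms and compare against what the manipulator's kinematics predict) can be sketched with 2D homogeneous transforms. The 2D simplification, the tolerance, and all names are illustrative assumptions; the patent's system works with full 6-DOF poses:

```python
import numpy as np

def transform(tx, ty, theta):
    """2D homogeneous rigid transform: rotation theta, translation (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1]])

def error_detected(T_base_localizer, T_localizer_tracker, T_base_tracker_expected,
                   tol=1e-3):
    """Combine the first transform (base -> localizer) with the second
    (localizer -> tracker) and compare against the expected base -> tracker
    transform; a large residual flags a manipulator or localizer error."""
    T_combined = T_base_localizer @ T_localizer_tracker
    residual = np.linalg.inv(T_base_tracker_expected) @ T_combined
    trans_err = np.linalg.norm(residual[:2, 2])
    rot_err = abs(np.arctan2(residual[1, 0], residual[0, 0]))
    return trans_err > tol or rot_err > tol
```

When the two measured transforms compose to the expected one the residual is (near) identity and no error is reported; any drift in either chain shows up as a translation or rotation residual.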

Robotic system with error detection and dynamic packing mechanism
10953549 · 2021-03-23

A method for operating a robotic system includes determining a discretized object model based on source sensor data; comparing the discretized object model to a packing plan or to master data; determining a discretized platform model based on destination sensor data; determining height measures based on the destination sensor data; comparing the discretized platform model and/or the height measures to an expected platform model and/or expected height measures; and determining one or more errors by (i) determining at least one source matching error by identifying one or more disparities between (a) the discretized object model and (b) the packing plan or the master data or (ii) determining at least one destination matching error by identifying one or more disparities between (a) the discretized platform model or the height measures and (b) the expected platform model or the expected height measures, respectively.
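Both error checks reduce to comparing two discretized (grid) models cell by cell and flagging disparities. A minimal sketch, with grids as nested lists and an illustrative disparity threshold:

```python
def matching_errors(observed, expected):
    """Count cell-wise disparities between two discretized models
    (e.g. a discretized object model vs. the packing plan, or a
    discretized platform model / height map vs. its expected version)."""
    return sum(
        1
        for row_obs, row_exp in zip(observed, expected)
        for a, b in zip(row_obs, row_exp)
        if a != b
    )

def detect_error(observed, expected, max_disparities=0):
    """Report an error when disparities exceed the allowed count."""
    return matching_errors(observed, expected) > max_disparities

# Source check: discretized object model vs. the packing plan's footprint.
object_model = [[1, 1], [1, 0]]
packing_plan = [[1, 1], [1, 1]]

# Destination check: measured platform heights vs. expected heights.
height_map = [[0, 2], [0, 0]]
expected_heights = [[0, 2], [0, 0]]
```

The same comparison serves both directions: a source matching error (object vs. plan/master data) and a destination matching error (platform/heights vs. their expected models).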

Robotic manipulation using an independently actuated vision system, an adversarial control scheme, and a multi-tasking deep learning architecture

An automation system includes a manipulation system with a manipulator for moving an object to a target location, a vision system for detecting landmarks on the object and the target location, and a learning and control module. The vision system is movable. The learning and control module is configured to control a movement of the manipulator and to change the field of view of the vision system independently of the movement of the manipulator.

ROBOT DEVICE CONTROLLER FOR CONTROLLING POSITION OF ROBOT
20210031374 · 2021-02-04

A first characteristic portion of a first workpiece and a second characteristic portion of a second workpiece are determined in advance. A characteristic amount detection unit detects a first characteristic amount related to the position of the first characteristic portion and a second characteristic amount related to the position of the second characteristic portion in an image captured by a camera. A calculation unit calculates the difference between the first characteristic amount and the second characteristic amount as a relative position amount. A command generation unit generates a movement command for operating the robot based on the relative position amount in the image captured by the camera and the relative position amount in a predetermined reference image.
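The computation chain (feature positions → relative position amount → movement command from the gap to the reference image) can be sketched as plain 2D vector arithmetic. The proportional-gain command and all names are illustrative assumptions:

```python
def relative_position(p_first, p_second):
    """Relative position amount: difference between the detected positions
    of the first and second characteristic portions in one image."""
    return (p_first[0] - p_second[0], p_first[1] - p_second[1])

def movement_command(current_rel, reference_rel, gain=1.0):
    """Command that drives the current relative position amount toward the
    one measured in the predetermined reference image. The sign convention
    (which workpiece the robot carries) is an assumption here."""
    return (gain * (reference_rel[0] - current_rel[0]),
            gain * (reference_rel[1] - current_rel[1]))

# Detected feature positions in the current camera image (illustrative):
current_rel = relative_position((10, 5), (4, 2))     # -> (6, 3)
reference_rel = (8, 3)                               # from the reference image
command = movement_command(current_rel, reference_rel)
```

Because the command depends only on the *difference* of the two relative position amounts, the control works in image space without an explicit camera-to-robot calibration of absolute positions.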

METHOD FOR GRIPPING AN OBJECT AND SUCTION GRIPPER

The invention relates to a method for gripping an object with a handling system. The system includes a robot with at least one robot arm; a gripping device connected to the robot arm, having a pneumatically operated suction gripper with an elastically deformable contact portion for contact with an outer surface of the object to be gripped; an identification means for identifying the outer surface of the object to be gripped; and a control means that interacts with the identification means and is designed to control the robot. In the method, an outer surface of an object to be gripped is identified, a distinction is made between planar portions of the outer surface on the one hand and convex elevations or outer edges on the other, and the suction gripper is made to approach the outer surface in such a way that at least a part of the contact portion of the suction gripper clings to a convex elevation or outer edge of the outer surface. The invention also relates to a suction gripper for use in such a method.

Method and apparatus for optimizing a target working line

A method and an apparatus for optimizing a target working line are disclosed. The target working line includes at least one robot manipulator, at least one conveyor, and at least one item on the conveyor to be displaced by the robot manipulator. The method includes: obtaining an evaluation model for the target working line, the evaluation model yielding an overall success rate of moving the item from one conveyor to another based on at least one measuring parameter, the measuring parameter being a physical attribute of the target working line; yielding the overall success rate for the target working line as a function of a value of the measuring parameter; and, if the yielded overall success rate is lower than a predetermined threshold rate, updating a value of a configuring parameter based on the overall success rate, the configuring parameter corresponding to the measuring parameter and representing a state of the working line. Optimizing the evaluation model requires neither an on-site process nor an experienced engineer or worker. Instead, simulation software can be used to obtain customized parameters for the target working line, increasing the success rate within a short period of time.
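The evaluate-compare-update loop described above can be sketched as follows. The toy evaluation model (success rate falling with conveyor speed), the halving update rule, and all names are purely illustrative assumptions standing in for the simulated working line:

```python
def optimize_line(evaluate, initial_value, update, threshold, max_iters=100):
    """Repeatedly evaluate the overall success rate for the current
    configuring-parameter value; while it is below the threshold,
    update the value based on the rate and re-evaluate."""
    value = initial_value
    rate = evaluate(value)
    for _ in range(max_iters):
        if rate >= threshold:
            break
        value = update(value, rate)
        rate = evaluate(value)
    return value, rate

# Toy evaluation model: the success rate drops as conveyor speed rises.
evaluate = lambda speed: max(0.0, 1.0 - speed / 2.0)
# Toy update rule: halve the speed whenever the rate is below threshold.
update = lambda speed, rate: speed * 0.5
speed, rate = optimize_line(evaluate, initial_value=1.0, update=update,
                            threshold=0.8)
```

In the disclosed method the `evaluate` function would be backed by simulation software rather than a closed-form expression, which is what removes the need for on-site trials.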

VISION-BASED SENSOR SYSTEM AND CONTROL METHOD FOR ROBOT ARMS

A method for determining the joint positions of a kinematic chain uses only an imaging sensor and a computing unit. Characteristic features on the links and joints of the kinematic chain are identified, and the joint positions are calculated from these visual measurements, so the robot can be controlled without joint encoders. A sensor system for monitoring the status of a kinematic chain includes a computing unit and an imaging sensor. The imaging sensor may be mounted on the kinematic chain or in its surroundings and monitors the kinematic chain and/or its surroundings. The computing unit determines a pose and/or movement parameters of at least one element of the kinematic chain by analyzing the output signal of the imaging sensor, in particular its characteristic features, and determines each rotational joint position from those features.
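For a single rotational joint, the joint position can be recovered from three detected feature points (one on each adjacent link and one at the joint) as the angle between the incoming and outgoing link directions. This 2D sketch is an illustrative simplification of the visual measurement, not the patent's method:

```python
import math

def joint_angle(p_prev_link, p_joint, p_next_link):
    """Rotational joint position from three characteristic feature points
    detected in the image: the signed angle between the link directions
    meeting at the joint (pi for a fully extended joint)."""
    a = (p_prev_link[0] - p_joint[0], p_prev_link[1] - p_joint[1])
    b = (p_next_link[0] - p_joint[0], p_next_link[1] - p_joint[1])
    cross = a[0] * b[1] - a[1] * b[0]
    dot = a[0] * b[0] + a[1] * b[1]
    return math.atan2(cross, dot)
```

Applying this at every joint of the chain, frame by frame, yields the joint-position signal that would otherwise come from joint encoders; movement parameters follow from differencing the angles over time.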