G05B2219/40543

INFORMATION PROCESSING METHOD, INFORMATION PROCESSING SYSTEM, AND PROGRAM
20220253039 · 2022-08-11 ·

To easily and quickly detect the contour shape of an object from three-dimensional point group data, and to control a robot arm and a tool using it. An information processing method comprising: a step of acquiring three-dimensional point group data of an object with a sensor; a step of specifying contour point group data that constitutes a contour of the object from the three-dimensional point group data; a step of acquiring, from the contour point group data, tool control information including tool position information and tool posture information for specifying a trajectory of a tool connected to an arm of a working robot; and a step of controlling the tool based on the tool control information.
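
A minimal sketch of such a pipeline, assuming a roughly planar top surface and using NumPy/SciPy rather than anything specified in the publication: contour points are selected from the point cloud, and a tool position plus a simple orientation frame is derived for each contour segment. All names and thresholds below are illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def contour_tool_poses(points, z_tol=2e-3):
    """points: (N, 3) point group data from the sensor, in metres."""
    # Keep the top surface of the object with a simple height filter.
    top = points[points[:, 2] > points[:, 2].max() - z_tol]
    # Approximate the outer contour of the top surface by its 2D convex hull.
    hull = ConvexHull(top[:, :2])
    contour = top[hull.vertices]                      # ordered contour points
    poses = []
    for i, p in enumerate(contour):
        nxt = contour[(i + 1) % len(contour)]
        tangent = (nxt - p) / (np.linalg.norm(nxt - p) + 1e-9)
        tool_z = np.array([0.0, 0.0, -1.0])           # tool approaches from above
        tool_y = np.cross(tool_z, tangent)            # assumes a nearly horizontal tangent
        poses.append((p, np.column_stack([tangent, tool_y, tool_z])))
    return poses                                      # (tool position, tool orientation) pairs
```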

CONTROL METHOD FOR GOODS RETRIEVAL AND STORAGE, APPARATUS, CARRYING APPARATUS, AND TRANSPORT ROBOT

The present disclosure provides a control method for goods retrieval and storage, a control apparatus, and a transport robot. The control method for goods retrieval includes: receiving a retrieval instruction and acquiring location information of target goods according to the retrieval instruction; moving a transport robot to a target position according to the location information; obtaining status information of the target goods and/or position relationship information between a carrying apparatus and the target goods; and adjusting a position and posture of the carrying apparatus according to the status information and/or the position relationship information so that the carrying apparatus takes out the target goods. According to the present disclosure, the position of the target goods can be accurately determined by obtaining the status information of the target goods and/or the position relationship information between the carrying apparatus and the target goods, so the target goods can be retrieved accurately.
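
A hypothetical control-flow sketch of the retrieval procedure described above; every interface here (robot, carrier, warehouse_map and their methods) is a placeholder, not the disclosed apparatus or any vendor API.

```python
def retrieve_goods(robot, carrier, warehouse_map, instruction):
    # 1. Resolve the retrieval instruction to the location of the target goods.
    goods_id = instruction["goods_id"]
    location = warehouse_map.locate_goods(goods_id)

    # 2. Move the transport robot to the target position.
    robot.move_to(location.x, location.y, location.approach_yaw)

    # 3. Obtain status information of the goods and/or the position relationship
    #    between the carrying apparatus and the goods.
    status = robot.sense_goods_status(goods_id)
    relative = robot.sense_relative_pose(carrier, goods_id)

    # 4. Adjust the position and posture of the carrying apparatus accordingly.
    carrier.adjust(height=status.shelf_height + relative.dz,
                   lateral=relative.dy,
                   tilt=status.tilt)

    # 5. Take out the target goods.
    return carrier.take_out(goods_id)
```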

OBJECT MESH BASED ON A DEPTH IMAGE

A depth image is used to obtain a three-dimensional (3D) geometry of an object as an object mesh. The object mesh is obtained using an object shell representation. The object shell representation is based on a series of depth images denoting the entry and exit points on the object surface that camera rays would pass through. Given a set of entry points in the form of a masked depth image of an object, an object shell (an entry image and an exit image) is generated. Since the entry and exit images contain neighborhood information given by pixel adjacency, they provide partial meshes of the object, which are stitched together in linear time using the contours of the entry and exit images. A complete object mesh is provided in the camera coordinate frame.
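
A hedged sketch of how an entry/exit depth-image pair can yield a mesh through pixel adjacency, under assumed pinhole intrinsics (fx, fy, cx, cy); the contour-stitching step mentioned in the abstract is noted but omitted. Names are illustrative.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth image to camera-frame 3D points, shape (H, W, 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    return np.stack([(u - cx) * depth / fx, (v - cy) * depth / fy, depth], axis=-1)

def grid_faces(mask, offset=0):
    """Two triangles per fully valid 2x2 pixel block, indexing a flattened grid."""
    h, w = mask.shape
    idx = np.arange(h * w).reshape(h, w) + offset
    faces = []
    for r in range(h - 1):
        for c in range(w - 1):
            if mask[r:r + 2, c:c + 2].all():
                a, b, d, e = idx[r, c], idx[r, c + 1], idx[r + 1, c], idx[r + 1, c + 1]
                faces += [(a, b, d), (b, e, d)]
    return faces

def shell_to_mesh(entry_depth, exit_depth, mask, fx, fy, cx, cy):
    entry_pts = backproject(entry_depth, fx, fy, cx, cy).reshape(-1, 3)
    exit_pts = backproject(exit_depth, fx, fy, cx, cy).reshape(-1, 3)
    verts = np.concatenate([entry_pts, exit_pts], axis=0)
    faces = grid_faces(mask) + grid_faces(mask, offset=mask.size)
    # A full implementation would also stitch the two partial meshes along the
    # mask contour (the linear-time step described in the abstract).
    return verts, np.array(faces)
```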

Object association using machine learning models
11440196 · 2022-09-13 ·

A method includes receiving sensor data representing a first object in an environment and generating, based on the sensor data, a first state vector that represents physical properties of the first object. The method also includes generating, by a first machine learning model and based on the first state vector and a second state vector that represents physical properties of a second object previously observed in the environment, a metric indicating a likelihood that the first object is the same as the second object. The method further includes determining, based on the metric, to update the second state vector and updating, by a second machine learning model configured to maintain the second state vector over time and based on the first state vector, the second state vector to incorporate into the second state vector information concerning physical properties of the second object as represented in the first state vector.
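
An illustrative sketch (not the claimed models) of the two-model pattern: one network scores whether a newly observed state vector matches a tracked object's state vector, and a recurrent network maintains and updates the track state. The state dimension and architectures are assumptions.

```python
import torch
import torch.nn as nn

STATE_DIM = 32   # assumed size of the physical-property state vector

class AssociationModel(nn.Module):
    """Outputs a metric in [0, 1]: likelihood two states describe the same object."""
    def __init__(self, dim=STATE_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, new_state, track_state):
        return torch.sigmoid(self.net(torch.cat([new_state, track_state], dim=-1)))

class UpdateModel(nn.Module):
    """Maintains the track's state vector over time, folding in new observations."""
    def __init__(self, dim=STATE_DIM):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)

    def forward(self, new_state, track_state):
        return self.cell(new_state, track_state)

# Usage: only update the track when the association metric clears a threshold.
assoc, update = AssociationModel(), UpdateModel()
new_state, track_state = torch.randn(1, STATE_DIM), torch.randn(1, STATE_DIM)
if assoc(new_state, track_state).item() > 0.5:
    track_state = update(new_state, track_state)
```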

SINGLE-STAGE CATEGORY-LEVEL OBJECT POSE ESTIMATION

Apparatuses, systems, and techniques to determine a pose and relative dimensions of an object from an image. In at least one embodiment, a pose and relative dimensions of an object are determined from an image based at least in part on, for example, features of the image.

Indoor location system with energy consumption controlled mobile transceiver units

An indoor location system includes mobile transceiver units to support manufacturing control of process courses in the industrial manufacturing of workpieces in a manufacturing plant. The indoor location system includes an analysis unit configured to determine the position of a mobile transceiver unit to be localized from the runtimes of electromagnetic signals between transceiver units, and an energy consumption control unit configured to output a control signal that deactivates the localizing mode of a position signal module of at least one of the mobile transceiver units when that unit's participation in position determination operations is not required, and to output a control signal that activates the localizing mode again when its participation in a position determination operation is required.
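
A hedged sketch of the two ideas in the abstract: estimating a unit's position from signal runtimes (times of flight) to anchors at known positions, and switching a unit's localizing mode off when it is not needed. The anchor layout, one-way runtimes, and interface names are assumptions.

```python
import numpy as np

C = 299_792_458.0   # propagation speed of the electromagnetic signals, m/s

def position_from_runtimes(anchors, runtimes):
    """anchors: (N, 2) known positions in metres; runtimes: (N,) one-way times of flight."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(runtimes, dtype=float) * C
    # Linearize the range equations against the first anchor and solve by least squares.
    x0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - x0)
    b = r0**2 - ranges[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def set_localizing_mode(unit, participation_required):
    # Energy consumption control: keep the position signal module deactivated
    # unless the unit must take part in a position determination operation.
    if participation_required and not unit.localizing_active:
        unit.activate_localizing()
    elif not participation_required and unit.localizing_active:
        unit.deactivate_localizing()
```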

Autonomous task performance based on visual embeddings

A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
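
One plausible reading of the matching step, sketched with OpenCV ORB features purely as an example (the abstract does not specify a matcher): the stored keyframe with the most feature matches to the current view is selected, and its associated task is performed.

```python
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def best_keyframe_task(current_image, keyframes):
    """keyframes: list of (grayscale image, task_fn) pairs; returns the best task_fn."""
    _, cur_desc = orb.detectAndCompute(current_image, None)
    best_task, best_matches = None, 0
    for image, task_fn in keyframes:
        _, desc = orb.detectAndCompute(image, None)
        if cur_desc is None or desc is None:
            continue
        n_matches = len(matcher.match(cur_desc, desc))
        if n_matches > best_matches:
            best_task, best_matches = task_fn, n_matches
    return best_task

# task = best_keyframe_task(camera_frame, stored_keyframes)
# if task is not None:
#     task()   # perform the task corresponding to the matched keyframe
```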

Determining a Virtual Representation of an Environment By Projecting Texture Patterns
20210187736 · 2021-06-24 ·

Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. And based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
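
An illustrative sketch of the downstream geometry step, assuming a rectified two-camera setup: with the projected random texture providing dense correspondences, standard block matching recovers disparity, which converts to depth via depth = focal_length * baseline / disparity. Parameters are assumptions, not the application's values.

```python
import cv2
import numpy as np

def depth_from_textured_stereo(left_gray, right_gray, focal_px, baseline_m):
    """left_gray/right_gray: rectified 8-bit grayscale images of the textured scene."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]   # metres
    return depth
```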

Positioning a Robot Sensor for Object Classification
20210187735 · 2021-06-24 ·

In one embodiment, a method includes receiving, from a first sensor on a robot, first sensor data indicative of an environment of the robot. The method also includes identifying, based on the first sensor data, an object of an object type in the environment of the robot, where the object type is associated with a classifier that takes sensor data from a predetermined pose relative to the object as input. The method further includes causing the robot to position a second sensor on the robot at the predetermined pose relative to the object. The method additionally includes receiving, from the second sensor, second sensor data indicative of the object while the second sensor is positioned at the predetermined pose relative to the object. The method further includes determining, by inputting the second sensor data into the classifier, a property of the object.
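
A minimal sketch of the pose arithmetic implied here: the classifier expects data from a fixed pose relative to the object, so the robot composes the object's detected pose with that relative pose and moves the second sensor there. The 4x4-transform convention and all helper names are assumptions.

```python
import numpy as np

def target_sensor_pose(T_world_object, T_object_sensor):
    """Compose the object's world pose with the classifier's predetermined relative pose."""
    return T_world_object @ T_object_sensor

# Example: suppose the classifier expects the sensor 0.3 m along the object's z-axis.
T_object_sensor = np.eye(4)
T_object_sensor[:3, 3] = [0.0, 0.0, 0.3]

# T_world_object would come from detecting the object with the first sensor;
# robot.move_sensor_to(...) and classifier(...) are hypothetical calls.
# robot.move_sensor_to(target_sensor_pose(T_world_object, T_object_sensor))
# prop = classifier(second_sensor.capture())
```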

Training methods for deep networks

A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of common objects between the pairs of 3D images. The method also includes using the reference image from the training of the deep neural network to determine correlations for identifying detected objects in future images.
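
A hedged sketch of one common way to train per-pixel descriptors from image pairs with known correspondences (which the constructed 3D model can supply): descriptors at matching pixels are pulled together and descriptors at non-matching pixels are pushed apart. The loss form and margin are assumptions, not the described method's specifics.

```python
import torch
import torch.nn.functional as F

def descriptor_loss(desc_a, desc_b, matches_a, matches_b, non_matches_b, margin=0.5):
    """desc_*: (D, H, W) descriptor maps; matches/non-matches: (N, 2) long tensors of (row, col)."""
    da = desc_a[:, matches_a[:, 0], matches_a[:, 1]]           # (D, N) descriptors in image A
    db = desc_b[:, matches_b[:, 0], matches_b[:, 1]]           # corresponding descriptors in image B
    dn = desc_b[:, non_matches_b[:, 0], non_matches_b[:, 1]]   # non-corresponding descriptors
    match_loss = (da - db).pow(2).sum(dim=0).mean()
    non_match_loss = F.relu(margin - (da - dn).norm(dim=0)).pow(2).mean()
    return match_loss + non_match_loss
```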