
Depth estimation using biometric data

A method of generating a depth estimate based on biometric data starts with a server receiving positioning data from a first device associated with a first user. The first device generates the positioning data based on analysis of a data stream comprising images of a second user associated with a second device. The server then receives biometric data of the second user from the second device. The biometric data is based on output from a sensor or a camera included in the second device. The server then determines a distance of the second user from the first device using the positioning data and the biometric data of the second user. Other embodiments are described herein.
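The abstract does not specify how biometric data yields a distance. One plausible reading is a pinhole-model estimate in which a known biometric dimension (the example below assumes interpupillary distance) is compared with its apparent size in the image; the function name and values here are illustrative, not from the patent.

```python
def estimate_distance(focal_px, real_size_m, apparent_size_px):
    # Pinhole camera model: distance = focal_length * real_size / apparent_size.
    # real_size_m is a known biometric dimension of the second user.
    return focal_px * real_size_m / apparent_size_px

# Hypothetical values: an interpupillary distance of 0.063 m spanning 30 px
# in an image taken with a 1000 px focal length.
d = estimate_distance(1000.0, 0.063, 30.0)  # -> 2.1 (metres)
```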

Viewpoint dependent brick selection for fast volumetric reconstruction

A method for culling parts of a 3D reconstruction volume is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications with low use of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from the field of view of the image sensor from which the image data used to create the 3D reconstruction is obtained.
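A minimal sketch of the frustum test, assuming each brick is reduced to its centre point and the frustum is approximated as a cone around the camera's view axis (the patent's actual brick and frustum geometry may differ):

```python
import math

def in_frustum(point, cam_pos, cam_dir, half_fov_rad, max_depth):
    # cam_dir is assumed to be a unit vector along the view axis.
    v = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist == 0 or dist > max_depth:
        return False
    # Keep the brick if its direction lies within the half field of view.
    cos_angle = sum(a * b for a, b in zip(v, cam_dir)) / dist
    return cos_angle >= math.cos(half_fov_rad)

bricks = [(0, 0, 2), (0, 0, -2), (5, 0, 1)]
visible = [b for b in bricks
           if in_frustum(b, (0, 0, 0), (0, 0, 1), math.radians(30), 10)]
# Only the brick in front of the camera and inside the cone survives culling.
```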

Virtual teach and repeat mobile manipulation system

A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
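The relative transform between matched task- and teaching-image descriptors could be estimated in closed form; the sketch below uses a 2D Kabsch-style fit (rotation plus translation) over matched keypoint coordinates, which is one standard technique and not necessarily the one the patent uses:

```python
import math

def rigid_transform_2d(src, dst):
    # Least-squares rotation + translation mapping src points onto dst.
    n = len(src)
    cs = (sum(p[0] for p in src) / n, sum(p[1] for p in src) / n)
    cd = (sum(p[0] for p in dst) / n, sum(p[1] for p in dst) / n)
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cs[0], sy - cs[1]   # centred source point
        bx, by = dx - cd[0], dy - cd[1]   # centred destination point
        num += ax * by - ay * bx          # cross terms -> sin component
        den += ax * bx + ay * by          # dot terms  -> cos component
    theta = math.atan2(num, den)
    # Translation carries the rotated source centroid onto the destination centroid.
    tx = cd[0] - (cs[0] * math.cos(theta) - cs[1] * math.sin(theta))
    ty = cd[1] - (cs[0] * math.sin(theta) + cs[1] * math.cos(theta))
    return theta, (tx, ty)
```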

Methods and apparatus for inspecting an engine

A computer-implemented method comprising: receiving data comprising two-dimensional data and three-dimensional data of a component of an engine; identifying a feature of the component using the two-dimensional data; determining coordinates of the feature in the two-dimensional data; determining coordinates of the feature in the three-dimensional data using: the determined coordinates of the feature in the two-dimensional data; and a pre-determined transformation between coordinates in two-dimensional data and coordinates in three-dimensional data; and measuring a parameter of the feature of the component using the determined coordinates of the feature in the three-dimensional data.
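The "pre-determined transformation" step could take many forms; a minimal sketch, assuming it is a 3x3 matrix applied to homogeneous pixel coordinates (the calibration values below are invented for illustration):

```python
def map_2d_to_3d(uv, transform):
    # Apply a pre-computed 3x3 transform to homogeneous pixel coordinates (u, v, 1).
    u, v = uv
    h = (u, v, 1.0)
    return tuple(sum(row[i] * h[i] for i in range(3)) for row in transform)

# Hypothetical calibration: pixels scale to millimetres with an offset,
# at a constant working distance of 5 mm.
T = [(0.5, 0.0, 10.0),
     (0.0, 0.5, 20.0),
     (0.0, 0.0, 5.0)]
p = map_2d_to_3d((100, 40), T)  # -> (60.0, 40.0, 5.0)
```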

System and method for large-scale lane marking detection using multimodal sensor data
11580754 · 2023-02-14

A system and method for large-scale lane marking detection using multimodal sensor data are disclosed. A particular embodiment includes: receiving image data from an image generating device mounted on a vehicle; receiving point cloud data from a distance and intensity measuring device mounted on the vehicle; fusing the image data and the point cloud data to produce a set of lane marking points in three-dimensional (3D) space that correlate to the image data and the point cloud data; and generating a lane marking map from the set of lane marking points.
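One common way to fuse the two modalities is to project each LiDAR point into the image and keep points whose projection lands on a detected lane-marking pixel. The sketch below assumes points already in the camera frame (z forward) and a binary lane-marking mask; these details are not specified by the abstract.

```python
def fuse_lane_points(points, mask, fx, fy, cx, cy):
    # Project each 3D point with the pinhole model and test it against
    # a per-pixel lane-marking mask (rows = v, columns = u).
    lane_points = []
    for x, y, z in points:
        if z <= 0:
            continue  # behind the camera
        u = int(fx * x / z + cx)
        v = int(fy * y / z + cy)
        if 0 <= v < len(mask) and 0 <= u < len(mask[0]) and mask[v][u]:
            lane_points.append((x, y, z))
    return lane_points
```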

Spatial construction using guided surface detection
11580658 · 2023-02-14

Described herein are a system and methods for efficiently using depth and image information for a space to generate a 3D representation of that space. In some embodiments, an indication of one or more points is received with respect to image information, and those points are then mapped to corresponding points within depth information. A boundary may then be calculated for each of the points based on the depth information at, and surrounding, each point. Each of the boundaries is extended outward until junctions are identified that bound it in a given direction. The system may determine whether the process is complete based on whether any of the calculated boundaries remain unlimited in extent in any direction. Once the system determines that each of the boundaries is limited in extent, a 3D representation of the space may be generated based on the identified junctions and/or boundaries.
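The outward extension can be pictured in one dimension: grow from a seed pixel until a depth discontinuity (a candidate junction) bounds the boundary on each side. A minimal sketch under that reading, with an invented discontinuity threshold:

```python
def extend_boundary(depth_row, start, jump=0.1):
    # Grow left and right from a seed index until the depth changes by more
    # than `jump` between neighbours, which we treat as a bounding junction.
    left = right = start
    while left > 0 and abs(depth_row[left - 1] - depth_row[left]) < jump:
        left -= 1
    while right < len(depth_row) - 1 and abs(depth_row[right + 1] - depth_row[right]) < jump:
        right += 1
    return left, right

# A flat surface at ~2 m meets a step to 3 m between indices 3 and 4.
span = extend_boundary([2.0, 2.0, 2.0, 2.05, 3.0, 3.0], start=1)  # -> (0, 3)
```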

Electrical power grid modeling

Methods, systems, and apparatus, including computer programs encoded on a storage device, for electric grid asset detection are disclosed. An electric grid asset detection method includes: obtaining overhead imagery of a geographic region that includes electric grid wires; identifying the electric grid wires within the overhead imagery; and generating a polyline graph of the identified electric grid wires. The method includes replacing curves in polylines within the polyline graph with a series of fixed lines and endpoints; identifying, based on characteristics of the fixed lines and endpoints, a location of a utility pole that supports the electric grid wires; detecting an electric grid asset from street level imagery at the location of the utility pole; and generating a representation of the electric grid asset for use in a model of the electric grid.
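Replacing curves with fixed line segments and endpoints is the effect of a standard polyline simplification such as Ramer-Douglas-Peucker; the patent does not name its algorithm, so this is one plausible sketch:

```python
import math

def rdp(points, eps):
    # Ramer-Douglas-Peucker: replace a curve with straight segments and endpoints.
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of each interior point from the end-to-end chord.
    dmax, idx = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], 1):
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax > eps:
        # Split at the farthest point and simplify each half recursively.
        return rdp(points[:idx + 1], eps)[:-1] + rdp(points[idx:], eps)
    return [points[0], points[-1]]
```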

Surveying data processing device, surveying data processing method, and surveying data processing program
11580696 · 2023-02-14

A surveying data processing device includes a point cloud data acquiring unit, a three-dimensional model acquiring unit, a first correspondence relationship determining unit, an extended three-dimensional data generating unit, and a second correspondence relationship determining unit. The point cloud data acquiring unit acquires first point cloud data obtained by laser scanning at a first viewpoint and second point cloud data obtained by laser scanning at a second viewpoint. The three-dimensional model acquiring unit acquires data of a three-dimensional model. The first correspondence relationship determining unit obtains a correspondence relationship between the first point cloud data and the three-dimensional model. The extended three-dimensional data generating unit generates extended three-dimensional data in which the first point cloud data is extended, on the basis of the correspondence relationship. The second correspondence relationship determining unit determines a correspondence relationship between the extended three-dimensional data and the second point cloud data.
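The abstract leaves the correspondence mechanism open; the simplest baseline is nearest-neighbour matching between the two data sets, sketched below (the patent's determining units may use something more elaborate, e.g. feature-based or ICP-style alignment):

```python
def nearest_correspondences(cloud_a, cloud_b):
    # For each point in cloud A, pair it with its closest point in cloud B
    # by squared Euclidean distance (brute force, fine for small clouds).
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [min(cloud_b, key=lambda q: d2(p, q)) for p in cloud_a]
```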

Three-dimensional optical measuring mobile apparatus for ropes with rope attachment device

A calibrated three-dimensional optical measuring apparatus for the three-dimensional measurement of geometric parameters of a rope has a frame defining and arranged around a rope receiving cavity. A plurality of image acquisition devices is configured to acquire a plurality of digital images of at least one region of an outer surface of the rope. The image acquisition devices are fixed to the frame and arranged around the rope when the calibrated three-dimensional optical measuring apparatus receives the rope in the rope receiving cavity. An attachment device is configured to constrain the calibrated three-dimensional optical measuring apparatus to the rope in a relatively translatable manner with respect to the rope. An electronic digital image processing device is configured to process a multiplicity of digital images and obtain a three-dimensional photogrammetric reconstruction of points of the digital images of the rope acquired by the image acquisition devices.
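Photogrammetric reconstruction from the multiple calibrated cameras reduces, for any two rectified views, to standard triangulation; a minimal sketch, with invented calibration values:

```python
def triangulate_stereo(f_px, baseline_m, u_left, u_right):
    # Rectified-stereo depth from two calibrated cameras: Z = f * B / disparity.
    disparity = u_left - u_right
    return f_px * baseline_m / disparity

# Hypothetical rig: 700 px focal length, 0.1 m baseline between two of the
# frame-mounted cameras, and a rope surface point with 10 px disparity.
z = triangulate_stereo(700.0, 0.1, 350.0, 340.0)  # -> 7.0 (metres)
```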

System and Methods for Depth Estimation
20230037958 · 2023-02-09

A system includes a computing device. The computing device is configured to perform a set of functions. The set of functions includes receiving an image, wherein the image comprises a two-dimensional array of data. The set of functions includes extracting, by a two-dimensional neural network, a plurality of two-dimensional features from the two-dimensional array of data. The set of functions includes generating a linear combination of the plurality of two-dimensional features to form a single three-dimensional input feature. The set of functions includes extracting, by a three-dimensional neural network, a plurality of three-dimensional features from the single three-dimensional input feature. The set of functions includes determining a two-dimensional depth map. The two-dimensional depth map contains depth information corresponding to the plurality of three-dimensional features.
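The step that forms a single 3D input feature from 2D features can be illustrated with plain nested lists: each depth plane of the output volume is a weighted (linear) combination of the 2D feature maps, and stacking the planes yields a (D, H, W) volume. The weights below are invented; the application does not state how they are obtained.

```python
def combine_to_volume(features, weights):
    # features: list of 2D maps, each H x W; weights: one weight vector per
    # output depth plane. Stacking the planes gives one 3D input feature.
    volume = []
    for plane_w in weights:
        plane = [[sum(w * f[r][c] for w, f in zip(plane_w, features))
                  for c in range(len(features[0][0]))]
                 for r in range(len(features[0]))]
        volume.append(plane)
    return volume

f1 = [[1, 2], [3, 4]]
f2 = [[10, 20], [30, 40]]
vol = combine_to_volume([f1, f2], [[1, 0], [0, 1], [1, 1]])  # shape (3, 2, 2)
```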