G06T7/579

DETERMINING CAMERA ROTATIONS BASED ON KNOWN TRANSLATIONS
20230007962 · 2023-01-12

In example embodiments, techniques are provided for calculating camera rotation using translations between sensor-derived camera positions (e.g., from GPS) and pairwise information, producing a sensor-derived camera pose that may be integrated at an early stage of SfM reconstruction. A software process of a photogrammetry application may obtain metadata including sensor-derived camera positions for a plurality of cameras for a set of images and determine optical centers based thereupon. The software process may estimate unit vectors along epipoles from a given camera of the plurality of cameras to two or more other cameras. The software process then may determine a camera rotation that best maps unit vectors derived from differences in the optical centers onto the unit vectors along the epipoles. The determined camera rotation and the sensor-derived camera position together form a sensor-derived camera pose that may be returned and used.
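The abstract does not specify the solver, but finding the rotation that best maps one set of unit vectors onto another is a classic Wahba's problem, commonly solved via SVD (the Kabsch method). The sketch below, with illustrative names, assumes that form: `v_world` are unit vectors from optical-center differences and `u_cam` are the corresponding epipole unit vectors.

```python
import numpy as np

def rotation_from_correspondences(v_world, u_cam):
    """Find the rotation R minimizing sum ||R v_i - u_i||^2, mapping
    world-frame unit vectors (from sensor-derived optical-center
    differences) onto camera-frame epipole unit vectors (Wahba's
    problem, solved with the SVD-based Kabsch method)."""
    B = sum(np.outer(u, v) for u, v in zip(u_cam, v_world))
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

With two or more non-parallel correspondences the rotation is fully determined, which is why the process uses epipoles to at least two other cameras.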

Methods and systems for predicting pressure maps of 3D objects from 2D photos using deep learning
11574421 · 2023-02-07

A structured 3D model of a real-world object is generated from a series of 2D photographs of the object, using photogrammetry, a keypoint detection deep learning network (DLN), and retopology. In addition, object parameters of the object are received. A pressure map of the object is then generated by a pressure estimation DLN based on the structured 3D model and the object parameters. The pressure estimation DLN was trained on structured 3D models, object parameters, and pressure maps of a plurality of objects belonging to a given object category. The pressure map of the real-world object can be used in downstream processes, such as custom manufacturing.
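The abstract describes a staged pipeline: photogrammetry and a keypoint DLN feed retopology to produce the structured 3D model, which the pressure-estimation DLN consumes together with the object parameters. A minimal sketch of that data flow, with each stage passed in as a stand-in callable (all names here are illustrative, not from the patent):

```python
def predict_pressure_map(photos, object_params,
                         photogrammetry, keypoint_dln, retopology,
                         pressure_dln):
    """Pipeline sketch: 2D photos -> structured 3D model -> pressure map.
    Each argument after object_params is a callable standing in for the
    corresponding component described in the abstract."""
    mesh = photogrammetry(photos)             # unstructured mesh from photos
    keypoints = keypoint_dln(photos)          # learned keypoint detection
    model_3d = retopology(mesh, keypoints)    # structured 3D model
    return pressure_dln(model_3d, object_params)
```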

Method, system and apparatus for dynamic loop closure in mapping trajectories

A method for dynamic loop closure in a mobile automation apparatus includes: obtaining mapping trajectory data defining a plurality of trajectory segments traversing a facility to be mapped; controlling a locomotive mechanism of the apparatus to traverse a current segment; generating a sequence of keyframes for the current segment using sensor data captured via a navigational sensor of the apparatus; and, for each keyframe: determining an estimated apparatus pose based on the sensor data and a preceding estimated pose corresponding to a preceding keyframe; and, determining a noise metric defining a level of uncertainty associated with the estimated pose relative to the preceding estimated pose; determining, for a selected keyframe, an accumulated noise metric based on the noise metrics for the selected keyframe and each previous keyframe; and when the accumulated noise metric exceeds a threshold, updating the mapping trajectory data to insert a repetition of one of the segments.
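The core control logic above is an accumulate-and-threshold loop. The sketch below illustrates it under simplifying assumptions not stated in the abstract: noise is summarized per segment rather than per keyframe, the repeated segment is the one just traversed, and the accumulator resets after a closure (the patent leaves the choice of repeated segment and the post-closure behavior open).

```python
def plan_with_loop_closures(segments, noise_for_segment, threshold):
    """Traverse planned trajectory segments, accumulating a noise
    metric; when the accumulated noise exceeds `threshold`, insert a
    repetition of a segment (here: the most recent one) into the plan,
    mimicking the dynamic loop-closure trigger described above."""
    plan, accumulated = [], 0.0
    for seg in segments:
        plan.append(seg)
        accumulated += noise_for_segment(seg)
        if accumulated > threshold:
            plan.append(seg)       # repeat a segment to close the loop
            accumulated = 0.0      # assume pose is re-anchored afterwards
    return plan
```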

Multipoint SLAM capture

“Feature points” in “point clouds” that are visible to multiple respective cameras (i.e., aspects of objects imaged by the cameras) are reported via wired and/or wireless communication paths to a compositing processor which can determine whether a particular feature point “moved” a certain amount relative to another image. In this way, the compositing processor can determine, e.g., using triangulation and recognition of common features, how much movement occurred and where any particular camera was positioned when a latter image from that camera is captured. Thus, “overlap” of feature points in multiple images is used so that the system can close the loop to generate a SLAM map. The compositing processor, which may be implemented by a server or other device, generates the SLAM map by merging feature point data from multiple imaging devices.
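The triangulation step the abstract mentions can be illustrated with midpoint triangulation: given two camera centers and the viewing-ray directions to a shared feature point, the point is recovered as the midpoint of the shortest segment between the rays. This is one standard formulation, not necessarily the patent's; names are illustrative.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Recover a feature point seen from two camera poses: c1, c2 are
    camera centers, d1, d2 unit viewing-ray directions. Returns the
    midpoint of the shortest segment between the two rays (assumes the
    rays are not parallel)."""
    a = d1 @ d2
    b1, b2 = (c2 - c1) @ d1, (c2 - c1) @ d2
    t2 = (b2 - a * b1) / (a * a - 1.0)
    t1 = b1 + a * t2
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

Applied across many shared feature points and camera pairs, this kind of triangulation lets the compositing processor relate camera positions and merge per-device point clouds into one SLAM map.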

System for generating a three-dimensional scene of a physical environment

A system configured to assist a user in scanning a physical environment to generate a three-dimensional scan or model. In some cases, the system may include an interface to assist the user in capturing data usable to determine a scale or depth of the physical environment and to perform the scan in a manner that minimizes gaps.

SELECTION OF OBJECTS IN THREE-DIMENSIONAL SPACE

A user may select or interact with objects in a scene using gaze tracking and movement tracking. In some examples, the scene may comprise a virtual reality scene or a mixed reality scene. A user may move an input object in an environment while facing toward the movement of the input object. A computing device may use sensors to obtain movement data corresponding to the movement of the input object, and gaze tracking data indicating a location of the eyes of the user. One or more modules of the computing device may use the movement data and gaze tracking data to determine a three-dimensional selection space in the scene. In some examples, objects included in the three-dimensional selection space may be selected or otherwise interacted with.
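One simple way to realize such a selection space, assuming the combined gaze/movement data yields an origin and a direction, is a cone test: objects whose centers fall within a fixed angle of the gaze ray are selected. The abstract does not specify the geometry of the selection space; this sketch and its names are illustrative.

```python
import math

def select_in_cone(eye, gaze_dir, objects, half_angle_deg=15.0):
    """Return names of objects whose centers lie inside a cone from
    `eye` along `gaze_dir` -- a minimal 3D selection space derived
    from gaze-tracking and movement data."""
    cos_limit = math.cos(math.radians(half_angle_deg))
    norm = math.sqrt(sum(g * g for g in gaze_dir))
    g = tuple(x / norm for x in gaze_dir)
    selected = []
    for name, center in objects.items():
        v = tuple(c - e for c, e in zip(center, eye))
        dist = math.sqrt(sum(x * x for x in v))
        # keep the object if its direction is within the cone half-angle
        if dist > 0 and sum(a * b for a, b in zip(v, g)) / dist >= cos_limit:
            selected.append(name)
    return selected
```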

Product purchase support system, product purchase support device and method, POS terminal device, and non-transitory computer readable medium
11710114 · 2023-07-25

Provided are a product purchase support system (100), a product purchase support device (10), and a POS terminal device (20) that improve the efficiency of checkout processing for product sales and enhance the convenience of customers when purchasing a product. The product purchase support system (100) according to the present invention includes the product purchase support device (10) including a depth camera (12) and a first display device (13), and a POS terminal device (20) that performs checkout processing for a product. When the product purchase support device (10) detects a product selection motion of selecting a product by a customer, it identifies a product selected by the customer and generates product information related to this product, and displays the product information on the first display device (13). Further, the product purchase support device (10) outputs the product information to the POS terminal device (20).

Image Registration with Device Data
20180012371 · 2018-01-11

Systems and methods are provided for image registration using data collected by an electronic device, such as a mobile device, capable of simultaneous localization and mapping. The electronic device can be configured to collect data using a variety of sensors as the device is carried or transported through a space. The collected data can be processed and analyzed to generate a three-dimensional representation of the space and objects in the space in near real time as the device is carried through the space. The data can be used for a variety of purposes, including registering imagery for localization and image processing.