G01B11/22

Compact metalens depth sensors

Disclosed is a depth sensor for determining depth. The depth sensor can include a photosensor, a metalens configured to manipulate light to simultaneously produce at least two images having different focal distances on a surface of the photosensor, and processing circuitry configured to receive, from the photosensor, a measurement of the at least two images having different focal distances. The depth sensor can determine, according to the measurement, a depth associated with at least one feature in the at least two images.
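The abstract leaves the depth-recovery computation open, but a common way to use two simultaneously captured images with different focal distances is a depth-from-defocus cue: a scene feature is sharper in whichever sub-image is focused nearer its true depth. The sketch below is illustrative only and assumes the two sub-images are registered grayscale arrays; `patch_sharpness`, the Laplacian focus measure, and the 0.5 decision boundary are all choices made for this example, not part of the disclosure.

```python
import numpy as np

def patch_sharpness(img, size=8):
    """Per-patch focus measure: mean squared Laplacian response.
    A sharper (in-focus) patch has more high-frequency energy."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    h, w = img.shape
    h, w = h - h % size, w - w % size          # crop to whole patches
    sq = lap[:h, :w] ** 2
    return sq.reshape(h // size, size, w // size, size).mean(axis=(1, 3))

def relative_depth_cue(img_near, img_far, size=8, eps=1e-9):
    """Normalized sharpness ratio per patch: values above 0.5 suggest the
    feature lies closer to the near focal plane, below 0.5 closer to the
    far one. Mapping this cue to metric depth would need calibration."""
    s_near = patch_sharpness(img_near, size)
    s_far = patch_sharpness(img_far, size)
    return s_near / (s_near + s_far + eps)
```

Given a lens calibration (sharpness ratio versus object distance), the per-patch cue could then be inverted into the depth measurement the abstract describes.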

Methods and apparatus for absolute and relative depth measurements using camera focus distance

A depth measuring apparatus includes a camera assembly configured to capture a plurality of images of a target at a plurality of distances from the target. The depth measuring apparatus further includes a controller configured to, for each of a plurality of regions within the plurality of images: determine corresponding gradient values within the plurality of images; determine a corresponding maximum gradient value from the corresponding gradient values; and determine, based on the corresponding maximum gradient value, a depth measurement for a region of the plurality of regions.
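The per-region procedure in this abstract (gradient values per region, maximum gradient across captures, depth from the maximizing capture) can be sketched directly. The sketch below assumes the images are already aligned into a stack with one known camera-to-target distance per image; the square-region tiling and forward-difference gradient are illustrative choices.

```python
import numpy as np

def depth_from_focus(stack, distances, region=16):
    """stack: (n, h, w) images captured at the given distances.
    For each region, take as its depth the capture distance whose image
    has the largest mean gradient magnitude (i.e. is sharpest there)."""
    stack = np.asarray(stack, dtype=float)
    n, h, w = stack.shape
    h, w = h - h % region, w - w % region      # crop to whole regions
    # gradient magnitude via forward differences, zero at the border
    gx = np.zeros((n, h, w))
    gy = np.zeros((n, h, w))
    gx[:, :, :-1] = np.diff(stack[:, :h, :w], axis=2)
    gy[:, :-1, :] = np.diff(stack[:, :h, :w], axis=1)
    g = np.hypot(gx, gy)
    # mean gradient per region, for each capture distance
    score = g.reshape(n, h // region, region,
                      w // region, region).mean(axis=(2, 4))
    best = score.argmax(axis=0)                # capture with max gradient
    return np.asarray(distances)[best]         # (h//region, w//region)
```

A real implementation would likely interpolate around the maximum for sub-step depth resolution rather than snapping to the nearest capture distance.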

Image sensor comprising entangled pixel

A depth sensor includes a first pixel including a plurality of first photo transistors each receiving a first photo gate signal, a second pixel including a plurality of second photo transistors each receiving a second photo gate signal, a third pixel including a plurality of third photo transistors each receiving a third photo gate signal, a fourth pixel including a plurality of fourth photo transistors each receiving a fourth photo gate signal, and a photoelectric conversion element shared by first to fourth photo transistors of the plurality of first to fourth photo transistors.

Training of joint depth prediction and completion

Systems, methods, and other embodiments described herein relate to training a depth model for joint depth completion and prediction. In one arrangement, a method includes generating depth features from sparse depth data according to a sparse auxiliary network (SAN) of a depth model. The method includes generating a first depth map from a monocular image and a second depth map from the monocular image and the depth features using the depth model. The method includes generating a depth loss from the second depth map and the sparse depth data and an image loss from the first depth map and the sparse depth data. The method includes updating the depth model including the SAN using the depth loss and the image loss.
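The two losses in this arrangement both compare a predicted depth map against the sparse depth data: the image loss supervises the prediction branch (first map, image only) and the depth loss supervises the completion branch (second map, image plus SAN features). A minimal numpy sketch of that loss computation, with an L1 penalty and a simple sum as illustrative assumptions (the abstract does not fix the penalty or the weighting):

```python
import numpy as np

def sparse_l1(pred, sparse_depth, mask):
    """L1 error evaluated only at pixels where sparse depth exists."""
    return np.abs(pred - sparse_depth)[mask].mean()

def joint_losses(pred_image_only, pred_with_sparse, sparse_depth, mask):
    """image_loss: first depth map (monocular image only) vs sparse data.
    depth_loss: second depth map (image + SAN depth features) vs sparse
    data. Both gradients would flow into the shared model and the SAN."""
    image_loss = sparse_l1(pred_image_only, sparse_depth, mask)
    depth_loss = sparse_l1(pred_with_sparse, sparse_depth, mask)
    return depth_loss + image_loss, depth_loss, image_loss
```

In a real training loop these scalars would be computed on framework tensors (e.g. with autograd) and backpropagated to update the depth model including the SAN, as the abstract describes.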

TIME OF FLIGHT SENSING METHOD
20230011969 · 2023-01-12

A method of time of flight sensing. The method comprises using an emitter to emit pulses of radiation and using an array of photo-detectors to detect radiation reflected from an object. For a given group of photo-detectors of the array, the method determines, based upon measured times of flight of the radiation, whether to use a first mode of operation in which outputs from individual photo-detectors of the group are combined together or to use a second mode of operation in which outputs from individual photo-detectors are processed separately. The array of photo-detectors comprises a plurality of groups of photo-detectors. One or more groups of photo-detectors operate in the first mode whilst, in parallel, one or more groups of photo-detectors operate in the second mode.
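The abstract specifies that the mode choice is driven by the measured times of flight but leaves the decision rule open. One plausible rule, used purely for illustration here, is agreement within the group: if the detectors in a group report similar times (likely one surface), combining their outputs improves signal-to-noise; if the times disagree (likely a depth edge), processing them separately preserves spatial detail. The threshold and group size below are assumptions.

```python
import numpy as np

def choose_modes(tof_ns, group=2, spread_thresh=1.0):
    """tof_ns: (h, w) measured times of flight, one per photo-detector.
    Returns a per-group mode map: 1 = combine outputs (first mode),
    2 = process outputs separately (second mode). The within-group
    spread criterion is an illustrative assumption, not the patent's."""
    h, w = tof_ns.shape
    g = tof_ns.reshape(h // group, group, w // group, group)
    spread = g.max(axis=(1, 3)) - g.min(axis=(1, 3))
    return np.where(spread < spread_thresh, 1, 2)
```

Because the map is computed per group, different groups naturally run in different modes at the same time, matching the parallel first-mode/second-mode operation the abstract describes.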

Mesh updates via mesh frustum cutting

Various implementations or examples set forth a method for scanning a three-dimensional (3D) environment. The method includes generating, based on sensor data captured by a depth sensor on a device, one or more 3D meshes representing a physical space, wherein each of the 3D meshes comprises a corresponding set of vertices and a corresponding set of faces comprising edges between pairs of vertices; determining that a mesh is visible in a current frame captured by an image sensor on the device; determining, based on the corresponding set of vertices and the corresponding set of faces for the mesh, a portion of the mesh that lies within a view frustum associated with the current frame; and updating the one or more 3D meshes by texturing the portion of the mesh with one or more pixels in the current frame onto which the portion is projected.
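The frustum-cutting step (finding the portion of a visible mesh that lies within the current frame's view frustum) can be sketched as a clip-space test: project each vertex with the frame's view-projection matrix and keep the faces whose vertices all land inside the frustum. The clip-space convention below (-w ≤ x, y, z ≤ w, w > 0) is an assumption; the disclosure does not fix one, and a full implementation would also split faces that straddle the frustum boundary rather than drop them.

```python
import numpy as np

def frustum_cut(vertices, faces, view_proj):
    """vertices: (n, 3) mesh vertices; faces: (m, 3) vertex indices;
    view_proj: 4x4 combined view-projection matrix for the current frame.
    Returns the faces fully inside the view frustum, i.e. the portion of
    the mesh eligible for texturing from the current frame's pixels."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    clip = homo @ view_proj.T                     # project to clip space
    w = clip[:, 3:4]
    inside = (np.all((clip[:, :3] >= -w) & (clip[:, :3] <= w), axis=1)
              & (w[:, 0] > 0))                    # in front of the camera
    keep = inside[np.asarray(faces)].all(axis=1)  # all 3 vertices inside
    return np.asarray(faces)[keep]
```

The surviving faces would then be projected into the current frame and textured with the pixels they cover, updating the 3D meshes as in the final step of the method.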