H04N13/271

THREE-DIMENSIONAL MODELING USING HEMISPHERICAL OR SPHERICAL VISIBLE LIGHT-DEPTH IMAGES
20210398303 · 2021-12-23

Three-dimensional modeling includes obtaining a hemispherical or spherical visible light-depth image capturing an operational environment of a user device, generating a perspective converted hemispherical or spherical visible light-depth image, generating a three-dimensional model of the operational environment based on the perspective converted hemispherical or spherical visible light-depth image, and outputting the three-dimensional model. Obtaining the hemispherical or spherical visible light-depth image includes obtaining a hemispherical or spherical visual light image and obtaining a hemispherical or spherical non-visual light depth image. Generating the perspective converted hemispherical or spherical visible light-depth image includes generating a perspective converted hemispherical or spherical visual light image and generating a perspective converted hemispherical or spherical non-visual light depth image.
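
As a rough illustration of the perspective-conversion step, the sketch below resamples an equirectangular (spherical) image into a pinhole-perspective view. The function name, the equirectangular layout, and the nearest-neighbour sampling are illustrative assumptions, not details taken from the patent.

```python
import math

def equirect_to_perspective(img, width, height, out_w, out_h, fov_deg,
                            yaw=0.0, pitch=0.0):
    """Sample a pinhole-perspective view from an equirectangular image.

    `img` is a row-major list of width*height pixel values. The virtual
    camera looks along `yaw`/`pitch` (radians) with horizontal field of
    view `fov_deg`. Nearest-neighbour sampling keeps the sketch short.
    """
    f = (out_w / 2.0) / math.tan(math.radians(fov_deg) / 2.0)  # focal length in pixels
    out = []
    for v in range(out_h):
        for u in range(out_w):
            # Ray through the output pixel in camera coordinates.
            x, y, z = u - out_w / 2.0, v - out_h / 2.0, f
            # Rotate the ray by pitch (about x), then yaw (about y).
            y, z = (y * math.cos(pitch) - z * math.sin(pitch),
                    y * math.sin(pitch) + z * math.cos(pitch))
            x, z = (x * math.cos(yaw) + z * math.sin(yaw),
                    -x * math.sin(yaw) + z * math.cos(yaw))
            # Direction -> spherical angles -> equirectangular coordinates.
            lon = math.atan2(x, z)                               # [-pi, pi]
            lat = math.asin(y / math.sqrt(x * x + y * y + z * z))  # [-pi/2, pi/2]
            src_u = int((lon / (2 * math.pi) + 0.5) * width) % width
            src_v = min(height - 1, int((lat / math.pi + 0.5) * height))
            out.append(img[src_v * width + src_u])
    return out
```

The same resampling applies unchanged to the non-visual-light depth channel, which is what lets the visual and depth images be perspective-converted consistently.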

SCENE CROP VIA ADAPTIVE VIEW-DEPTH DISCONTINUITY
20210398343 · 2021-12-23

A method, apparatus, and system provide the ability to crop a three-dimensional (3D) scene. The 3D scene is acquired and includes multiple 3D images (with each image from a view angle of an image capture device) and a depth map for each image. The depth values in each depth map are sorted. Multiple initial cutoff depths are determined for the scene based on the view angles of the images (in the scene). A cutoff relaxation depth is determined based on a jump between depth values. A confidence map is generated for each depth map and indicates whether each depth value is above or below the cutoff relaxation depth. The confidence maps are aggregated into an aggregated model. A bounding volume is generated out of the aggregated model. Points are cropped from the scene based on the bounding volume.
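
The cutoff-relaxation and crop steps could be sketched as below. The jump threshold, function names, and the use of an axis-aligned bounding box are illustrative choices; the patent does not prescribe a specific bounding-volume type or threshold.

```python
def cutoff_relaxation_depth(depths, initial_cutoff, jump=1.0):
    """Relax an initial cutoff depth to the first large discontinuity.

    Depth values are sorted, then scanned for the first gap larger than
    `jump` at or beyond the initial cutoff; that gap marks the
    foreground/background break.
    """
    s = sorted(depths)
    for a, b in zip(s, s[1:]):
        if a >= initial_cutoff and b - a > jump:
            return a  # keep everything up to the discontinuity
    return initial_cutoff

def crop_scene(points, confident_points):
    """Crop scene points to the axis-aligned bounding box of the
    aggregated confident (below-cutoff) points."""
    los = [min(p[i] for p in confident_points) for i in range(3)]
    his = [max(p[i] for p in confident_points) for i in range(3)]
    return [p for p in points
            if all(los[i] <= p[i] <= his[i] for i in range(3))]
```

In practice the confident points would come from aggregating the per-view confidence maps, and `crop_scene` would run over the full point set of the acquired scene.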

IMAGING METHOD, IMAGING SYSTEM, MANUFACTURING SYSTEM, AND METHOD FOR MANUFACTURING A PRODUCT
20210400252 · 2021-12-23

An imaging system includes a plurality of cameras, and a controller, wherein the controller detects synchronous deviation of image capturing timing of the plurality of cameras by using images respectively captured by the plurality of cameras.
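
One generic way to detect such a synchronization deviation from the captured images themselves is to compare per-frame brightness traces across cameras and pick the best-aligning lag. This is a hedged sketch of that idea, not the patent's actual detection method.

```python
def frame_offset(series_a, series_b, max_lag=5):
    """Estimate the capture-timing offset (in frames) between two cameras.

    `series_a` and `series_b` are per-frame mean-brightness traces from
    the two cameras; the lag with the smallest mean squared difference
    is taken as the synchronization deviation.
    """
    best_lag, best_err = 0, float("inf")
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(series_a[i], series_b[i + lag])
                 for i in range(len(series_a))
                 if 0 <= i + lag < len(series_b)]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best_lag, best_err = lag, err
    return best_lag
```

A nonzero result would indicate that one camera's capture timing has drifted relative to the other, which a controller could then correct.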

Image processing apparatus and image processing method for aligning polarized images based on a depth map and acquiring a polarization characteristic using the aligned polarized images

A depth map generation unit generates a depth map from images obtained by picking up a subject at a plurality of viewpoint positions by an image pickup unit. On the basis of the depth map, an alignment unit aligns polarized images obtained by the image pickup unit picking up the subject at the plurality of viewpoint positions through polarizing filters with different polarization directions at the different viewpoint positions. A polarization characteristic acquisition unit acquires a polarization characteristic of the subject from a desired viewpoint position by using the aligned polarized images, obtaining a high-precision polarization characteristic with little degradation in temporal resolution and spatial resolution. This makes it possible to acquire the polarization characteristic of the subject at the desired viewpoint position.
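
Once the polarized images are aligned, a per-pixel polarization characteristic can be recovered by fitting the standard sinusoidal intensity model I(θ) = m + a·cos 2θ + b·sin 2θ to the measured intensities. The sketch below assumes exactly three polarizer angles (0°, 45°, 90°); the function name and the closed-form three-angle solve are illustrative, not taken from the patent.

```python
import math

def polarization_characteristic(i0, i45, i90):
    """Fit I(theta) = m + a*cos(2*theta) + b*sin(2*theta) from three
    aligned polarized intensities at 0, 45 and 90 degrees.

    Returns (mean intensity, degree of linear polarization, angle of
    linear polarization in radians).
    """
    m = (i0 + i90) / 2.0   # unpolarized mean: cos terms cancel
    a = (i0 - i90) / 2.0   # cos(2*theta) coefficient
    b = i45 - m            # sin(2*theta) coefficient
    dolp = math.hypot(a, b) / m if m else 0.0
    aolp = 0.5 * math.atan2(b, a)
    return m, dolp, aolp
```

Applied per pixel across the aligned images, this yields the polarization characteristic at the desired viewpoint.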

SYSTEMS AND METHODS FOR CORRECTING ROLLING SHUTTER ARTIFACTS

Systems having rolling shutter sensors with a plurality of sensor rows are configured to compensate for rolling shutter artifacts that result from different sensor rows in the plurality of sensor rows outputting sensor data at different times. The systems compensate for the rolling shutter artifacts by identifying readout timepoints for the plurality of sensor rows of the rolling shutter sensor while the rolling shutter sensor captures an image of an environment, identifying readout poses at each readout timepoint, and obtaining a depth map based on the image. The depth map includes a plurality of different rows of depth data that correspond to the different sensor rows. The system further compensates for the rolling shutter artifacts by generating a 3D representation of the environment while unprojecting the rows of depth data into 3D space using the readout poses.
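
The per-row unprojection step could be sketched as below: each depth row is back-projected through a pinhole model and then transformed by the pose captured at that row's readout time. The function name is illustrative, and the row poses are reduced to translations for brevity; a full implementation would apply complete rigid transforms.

```python
def unproject_rows(depth_rows, row_poses, fx, fy, cx, cy):
    """Unproject each depth row into 3D with the camera pose captured at
    that row's readout time, compensating rolling-shutter motion.

    `row_poses[r]` is the (tx, ty, tz) camera translation at row r's
    readout (rotation omitted to keep the sketch short); (fx, fy, cx, cy)
    are pinhole intrinsics.
    """
    points = []
    for r, (row, (tx, ty, tz)) in enumerate(zip(depth_rows, row_poses)):
        for c, d in enumerate(row):
            # Pinhole back-projection of pixel (c, r) at depth d ...
            x = (c - cx) / fx * d
            y = (r - cy) / fy * d
            # ... then transform by that row's readout pose.
            points.append((x + tx, y + ty, d + tz))
    return points
```

Because every row uses its own readout pose rather than a single per-frame pose, the resulting 3D representation no longer carries the skew introduced by the rolling shutter.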

NAVIGATED SURGICAL SYSTEM WITH EYE TO XR HEADSET DISPLAY CALIBRATION
20210386503 · 2021-12-16

A camera tracking system for computer assisted navigation during surgery operatively determines a first pose of a second extended-reality (XR) headset relative to stereo tracking cameras located on a first XR headset based on first tracking information from the stereo tracking cameras. The camera tracking system determines a second pose of eyes of a user wearing the second XR headset relative to the stereo tracking cameras located on the first XR headset based on second tracking information from the stereo tracking cameras. The camera tracking system also calibrates an eye-to-display relationship defining pose of the eyes of the user wearing the second XR headset to a display device of the second XR headset based on the determined first and second poses. The camera tracking system also controls where symbols are displayed on the display device of the second XR headset based on the eye-to-display relationship.
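
The calibration step amounts to chaining the two poses measured in the first headset's tracking-camera frame. The sketch below reduces the poses to translations for brevity; a real system would compose full rigid transforms (T_eye_display = inv(T_tracker_display) @ T_tracker_eye), and the function name is an illustrative assumption.

```python
def calibrate_eye_to_display(display_in_tracker, eyes_in_tracker):
    """Chain two poses measured in the first headset's tracking-camera
    frame into an eye-to-display relationship (translations only, as a
    sketch; rotations are omitted).
    """
    return tuple(e - d for e, d in zip(eyes_in_tracker, display_in_tracker))
```

The resulting offset is what the system would then use to decide where symbols should be rendered on the second headset's display.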

Systems and Methods for Producing a Light Field from a Depth Map

A system includes an electronic display, a computer processor, one or more memory units, and a module stored in the one or more memory units. The module is configured to access a source image stored in the one or more memory units and determine depth data for each pixel of a plurality of pixels of the source image. The module is further configured to map, using the plurality of pixels and the determined depth data for each of the plurality of pixels, the source image to a four-dimensional light field. The module is further configured to send instructions to the electronic display to display the mapped four-dimensional light field.
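
The mapping step could be illustrated as below: each lens-plane view (u, v) of the four-dimensional light field is synthesized by shifting every source pixel with a depth-dependent parallax. The inverse-depth disparity model, the forward warp, and the zero-filled holes are simplifying assumptions, not details from the patent.

```python
def depth_to_light_field(image, depth, width, height, n_u, n_v, gain=1.0):
    """Expand one image plus per-pixel depth into a (u, v, s, t) light
    field, one warped view per (u, v) lens-plane sample.

    Nearest-pixel forward warp; disoccluded holes are left as 0.
    """
    field = {}
    for vu in range(n_u):
        for vv in range(n_v):
            view = [0] * (width * height)
            # Offset of this view from the central viewpoint.
            du, dv = vu - n_u // 2, vv - n_v // 2
            for y in range(height):
                for x in range(width):
                    d = depth[y * width + x]
                    disparity = gain / d if d else 0.0  # closer pixels shift more
                    sx = int(round(x + du * disparity))
                    sy = int(round(y + dv * disparity))
                    if 0 <= sx < width and 0 <= sy < height:
                        view[sy * width + sx] = image[y * width + x]
            field[(vu, vv)] = view
    return field
```

The central view reproduces the source image exactly, while off-center views exhibit the parallax a light-field display needs.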
