H04N13/10

Methods and apparatus for determining adjustment parameter during encoding of spherical multimedia content

Provided are methods and apparatus for determining an adjustment parameter during encoding of spherical multimedia content, which comprise generating an energy map of the spherical multimedia content, finding the region in which energy is most concentrated in the generated energy map, measuring the width of that maximum-energy region, and deriving an optimal adjustment parameter from the measured width.
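The abstract leaves the region-finding, width measurement, and parameter derivation unspecified. A minimal sketch of one plausible reading, assuming a half-maximum cutoff for the region boundary and a linear width-to-quantization-parameter mapping (both illustrative, not from the patent):

```python
import numpy as np

def derive_adjustment_parameter(energy_map, qp_min=22, qp_max=42):
    """Sketch: locate the row band where energy is most concentrated,
    measure its width, and map the width to an adjustment parameter.
    The half-maximum cutoff and linear QP mapping are assumptions."""
    row_energy = energy_map.sum(axis=1)       # energy per latitude row
    peak = int(np.argmax(row_energy))         # row of maximum energy
    threshold = 0.5 * row_energy[peak]        # assumed region cutoff
    # expand outward from the peak while rows stay above the cutoff
    top = peak
    while top > 0 and row_energy[top - 1] >= threshold:
        top -= 1
    bottom = peak
    while bottom < len(row_energy) - 1 and row_energy[bottom + 1] >= threshold:
        bottom += 1
    width = bottom - top + 1
    # narrower energy concentration -> lower (finer) QP, assumed mapping
    frac = width / energy_map.shape[0]
    qp = round(qp_min + frac * (qp_max - qp_min))
    return width, qp
```

For an equirectangular projection, summing per row matches the intuition that energy concentrates in latitude bands around the viewer's attention region.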

Augmented Three Dimensional Point Collection of Vertical Structures

Automated methods and systems are disclosed, including a method comprising: obtaining a first three-dimensional-data point cloud of a horizontal surface of an object of interest, the first three-dimensional-data point cloud having a first resolution and having a three-dimensional location associated with each point in the first three-dimensional-data point cloud; capturing one or more aerial images, at one or more oblique angles, depicting at least a vertical surface of the object of interest; analyzing the one or more aerial images with a computer system to determine three-dimensional locations of additional points on the object of interest; and updating the first three-dimensional-data point cloud with the three-dimensional locations of the additional points on the object of interest to create a second three-dimensional-data point cloud having a second resolution greater than the first resolution of the first three-dimensional-data point cloud.
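The final updating step can be sketched as a fusion of the base point cloud with the photogrammetrically derived vertical-surface points. The deduplication tolerance below is an illustrative assumption, not part of the claim:

```python
import numpy as np

def augment_point_cloud(base_cloud, oblique_points):
    """Sketch: fuse additional 3D points (e.g., points on vertical surfaces
    triangulated from oblique aerial images) into a base point cloud of
    horizontal surfaces, yielding a higher-resolution second cloud.
    The millimetre deduplication tolerance is an assumption."""
    merged = np.vstack([base_cloud, oblique_points])   # (N1 + N2, 3)
    # drop points that coincide within the assumed tolerance
    merged = np.unique(np.round(merged, 3), axis=0)
    return merged
```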

Image processing apparatus, image processing system, image processing method, and storage medium
11128813 · 2021-09-21

An image processing system includes an image obtaining unit that obtains images captured from plural directions by plural cameras, an information obtaining unit that obtains viewpoint information indicating a virtual viewpoint, and a generation unit configured to generate virtual viewpoint images on the basis of the obtained images and viewpoint information. The generation unit generates a first virtual viewpoint image, which is output to a display apparatus that displays an image for a user to specify a virtual viewpoint, and a second virtual viewpoint image, which is output to a destination different from the display apparatus and has a higher image quality than the first. The second virtual viewpoint image is generated using at least one of the first virtual viewpoint image and data produced while generating the first virtual viewpoint image by image processing on the plural obtained images.
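The two-image idea can be illustrated as a two-pass render: a cheap preview for interactive viewpoint selection, and a full-quality pass that reuses data computed in the first pass. This is a toy stand-in (brightness-weighted blending of subsampled frames), not the patented view-synthesis method:

```python
import numpy as np

def synthesize_views(camera_images, preview_scale=4):
    """Illustrative sketch: a low-cost first virtual-viewpoint image for
    viewpoint selection, plus a higher-quality second image that reuses
    data (here, per-camera blend weights) generated in the first pass.
    The weighting scheme is an assumption, not the patented algorithm."""
    stack = np.stack(camera_images).astype(float)      # (N, H, W)
    # pass 1: coarse preview on subsampled pixels
    coarse = stack[:, ::preview_scale, ::preview_scale]
    weights = coarse.mean(axis=(1, 2))                 # data from pass 1
    weights = weights / weights.sum()
    preview = np.tensordot(weights, coarse, axes=1)    # first (low-quality) image
    # pass 2: full-resolution image reusing the pass-1 weights
    final = np.tensordot(weights, stack, axes=1)       # second (high-quality) image
    return preview, final
```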

Employing three-dimensional (3D) data predicted from two-dimensional (2D) images using neural networks for 3D modeling applications and other applications

The disclosed subject matter is directed to employing machine learning models that use deep learning techniques to predict 3D data from 2D images. In some embodiments, a system is described comprising a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory. The computer executable components comprise a reception component configured to receive two-dimensional images, and a three-dimensional data derivation component configured to employ one or more 3D-from-2D neural network models to derive three-dimensional data for the two-dimensional images.
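The reception/derivation split can be sketched as follows. The trained neural network is replaced here by a trivial inverse-brightness depth heuristic, purely so the component structure is runnable; the class and method names are illustrative:

```python
import numpy as np

class DepthFrom2D:
    """Sketch of a 3D-from-2D pipeline: a reception component accepts a 2D
    image, and a derivation component produces per-pixel 3D data. The
    inverse-brightness 'model' is a stand-in for a trained network."""

    def receive(self, image):
        """Reception component: accept a 2D image."""
        self.image = np.asarray(image, dtype=float)
        return self

    def derive_3d(self):
        """Derivation component: map each pixel to an (x, y, depth) point."""
        h, w = self.image.shape
        depth = 1.0 / (1.0 + self.image)     # assumed depth heuristic
        ys, xs = np.mgrid[0:h, 0:w]
        return np.stack([xs, ys, depth], axis=-1)   # (H, W, 3) point map
```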

Data Processing Method and Electronic Device
20210201568 · 2021-07-01

Embodiments of the present application provide a data processing method and an electronic device. The data processing method includes: determining whether a current collection scene satisfies a condition for enabling a high-dynamic range (HDR) collection function; automatically enabling the HDR collection function in response to the current collection scene satisfying the condition for enabling the HDR collection function; and collecting at least two two-dimensional images with different exposures within a collection time of one frame of three-dimensional video data based on the HDR collection function; wherein the at least two two-dimensional images are configured to enable a mobile edge computing (MEC) server to build a three-dimensional video.
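The enabling condition and the multi-exposure capture step can be sketched with a toy clipping-camera model. The highlight-clipping condition, exposure values, and function names are illustrative assumptions, not the patented method:

```python
import numpy as np

def needs_hdr(scene_radiance, clip_fraction=0.2):
    """Assumed enabling condition: turn HDR collection on when a large
    share of pixels would clip at nominal exposure."""
    clipped = np.mean(scene_radiance >= 1.0)
    return clipped >= clip_fraction

def capture_hdr_pair(scene_radiance, exposures=(0.25, 1.0)):
    """Sketch of the capture step: within the collection time of one 3D
    video frame, grab two 2D images at different exposures (short exposure
    preserves highlights, long exposure preserves shadows). The simple
    clip-to-[0, 1] sensor model is an assumption."""
    return [np.clip(scene_radiance * ev, 0.0, 1.0) for ev in exposures]
```

The two captured images would then be sent to the MEC server, which fuses them when building the three-dimensional video.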

CAPTURING AND ALIGNING THREE-DIMENSIONAL SCENES
20210166495 · 2021-06-03

Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor.
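The align-then-reconstruct flow can be sketched as registering each capture to a reference and merging the results. A centroid shift stands in for a full registration method such as ICP; all names here are illustrative:

```python
import numpy as np

def align_captures(reference, capture):
    """Sketch of the alignment step: register a captured point set to a
    reference set. The centroid-shift translation is an illustrative
    stand-in for a real registration algorithm (e.g., ICP)."""
    offset = reference.mean(axis=0) - capture.mean(axis=0)
    return capture + offset, offset

def build_composite(captures):
    """Sketch of reconstruction: align every capture to the first one and
    merge them into a single composite scene point cloud."""
    reference = captures[0]
    aligned = [reference]
    for cap in captures[1:]:
        moved, _ = align_captures(reference, cap)
        aligned.append(moved)
    return np.vstack(aligned)
```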