Patent classifications
H04N13/10
DIRECTED INTERPOLATION AND DATA POST-PROCESSING
An encoding device evaluates a plurality of processing and/or post-processing algorithms and/or methods to be applied to a video stream, and signals a selected method, algorithm, class or category of methods/algorithms either in an encoded bitstream or as side information related to the encoded bitstream. A decoding device or post-processor utilizes the signaled algorithm, or selects an algorithm/method based on the signaled method or algorithm. The selection is based, for example, on the availability of the algorithm/method at the decoder/post-processor and/or the cost of implementation. The video stream may comprise, for example, downsampled multiplexed stereoscopic images, and the selected algorithm may include upconversion and/or error-correction techniques that contribute to a restoration of the downsampled images.
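The signaling scheme described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the side-information key, method names, category table, and fallback default are all hypothetical.

```python
# Hypothetical sketch of signaled post-processing selection: the encoder
# attaches its chosen method as side information; the decoder uses it if
# available, otherwise falls back to an available method in the same
# category. All identifiers here are illustrative assumptions.

SIDE_INFO_KEY = "postproc_method"

def encode_side_info(selected_method: str) -> dict:
    """Attach the chosen upconversion/error-correction method to the stream."""
    return {SIDE_INFO_KEY: selected_method}

def choose_decoder_method(side_info: dict, available: set,
                          category_fallbacks: dict) -> str:
    """Use the signaled method if the decoder supports it; otherwise pick
    a supported method from the same category of methods/algorithms."""
    signaled = side_info[SIDE_INFO_KEY]
    if signaled in available:
        return signaled
    for candidate in category_fallbacks.get(signaled, []):
        if candidate in available:
            return candidate
    return "bilinear_upsample"  # minimal default, assumed for this sketch

side = encode_side_info("edge_adaptive_upconversion")
method = choose_decoder_method(
    side,
    available={"bilinear_upsample", "lanczos_upconversion"},
    category_fallbacks={"edge_adaptive_upconversion": ["lanczos_upconversion"]},
)
print(method)  # lanczos_upconversion
```

The fallback table models the "class or category" signaling: even when the exact algorithm is unavailable, the decoder can honor the signaled category at lower implementation cost.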
Capturing and aligning three-dimensional scenes
Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor.
Data processing method and apparatus, acquisition device, and storage medium
Disclosed is a data processing method, comprising: acquiring spatial information of the audio acquisition devices of an acquisition device, where the acquisition space corresponding to the acquisition device forms a geometry, the orientations covered by the video acquisition devices of the acquisition device span the entire geometry, and the set orientation of each video acquisition device is correspondingly provided with N audio acquisition devices, N being a positive integer; and, for the N audio acquisition devices corresponding to the set orientation of each video acquisition device, encoding the audio data acquired by those N audio acquisition devices according to the spatial information of the audio acquisition devices, to form M pieces of audio data, the M pieces of audio data carrying spatial information of the audio. Embodiments of the present invention further provide an acquisition device, a data processing device, and a storage medium.
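The encoding step can be illustrated with a small sketch. The record layout, field names, and sample values below are assumptions for illustration; the abstract does not specify a concrete format.

```python
# Illustrative sketch: each video-capture orientation has N microphones;
# the audio they acquire is encoded into pieces of audio data that carry
# the microphones' spatial information. The dict layout is an assumption.

def encode_audio_with_space(orientations: dict) -> list:
    """orientations maps an orientation label to a list of
    (mic_position, samples) pairs for its N audio acquisition devices.
    Returns M encoded pieces, each carrying spatial information."""
    encoded = []
    for orientation, mics in orientations.items():
        for position, samples in mics:
            encoded.append({
                "orientation": orientation,
                "space_info": position,   # e.g. (x, y, z) on the geometry
                "payload": bytes(samples),
            })
    return encoded

streams = encode_audio_with_space({
    "front": [((0.0, 0.1, 0.5), [1, 2, 3]),
              ((0.0, -0.1, 0.5), [4, 5, 6])],
})
print(len(streams))  # 2 pieces of audio data, each carrying space info
```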
Imaging control apparatus, method for controlling imaging control apparatus, and mobile body
The present disclosure relates to an imaging control apparatus, a method for controlling the imaging control apparatus, and a mobile body that can improve the distance measurement accuracy of a stereo camera mounted in a vehicle. A set of cameras included in a stereo camera system is arranged in line, on a side surface of a main body of a vehicle, in a vertical direction relative to a road surface. Further, in order from a front side of columns of pixels arranged in an array, imaged pixel signals are sequentially read in the vertical direction in units of pixels for each of the columns of the pixels. The present disclosure can be applied to an in-vehicle system.
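The column-wise readout order described above can be sketched with a toy pixel array. The array contents and function name are illustrative; real sensors read analog pixel signals, not Python lists.

```python
# Toy sketch of vertical readout: with the camera pair arranged vertically,
# imaged pixel signals are read sequentially in the vertical direction,
# in units of pixels, for each column in order from the front side.

def read_columns(frame):
    """Yield pixels column by column (vertical readout order)."""
    rows, cols = len(frame), len(frame[0])
    for c in range(cols):          # front column first
        for r in range(rows):      # top-to-bottom within each column
            yield frame[r][c]

frame = [[1, 2],
         [3, 4]]
print(list(read_columns(frame)))  # [1, 3, 2, 4]
```

Reading column by column means the rolling-shutter skew of both vertically stacked cameras runs along the same axis, which is what lets the stereo pair stay row-consistent for distance measurement.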
Stereoscopic image projection device and stereoscopic display glasses
A stereoscopic image projection device and stereoscopic display glasses include: a light source system for sequentially generating a first broad-spectrum light and a second broad-spectrum light; a light splitter for splitting the first broad-spectrum light into a first wavelength light and a second wavelength light of different wavelengths, and splitting the second broad-spectrum light into a third wavelength light and a fourth wavelength light of different wavelengths; and a controller for simultaneously controlling the first wavelength light to display a corresponding color in a left-eye image and the second wavelength light to display a corresponding color in a right-eye image, and likewise simultaneously controlling the third wavelength light for the left-eye image and the fourth wavelength light for the right-eye image. The left eye of a viewer sees the first wavelength light or the third wavelength light while the right eye sees the second wavelength light or the fourth wavelength light, so both eyes receive light rays simultaneously, relieving eye fatigue.
Mapping of spherical image data into rectangular faces for transport and decoding across networks
A system captures a first hemispherical image and a second hemispherical image, each hemispherical image including an overlap portion, the overlap portions capturing a same field of view, the two hemispherical images collectively comprising a spherical FOV and separated along a longitudinal plane. The system maps a modified first hemispherical image to a first portion of the 2D projection of a cubic image, the modified first hemispherical image including a non-overlap portion of the first hemispherical image, and maps a modified second hemispherical image to a second portion of the 2D projection of the cubic image, the modified second hemispherical image also including a non-overlap portion. The system maps the overlap portions of the first hemispherical image and the second hemispherical image to the 2D projection of the cubic image, and encodes the 2D projection of the cubic image to generate an encoded image representative of the spherical FOV.
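The packing step can be sketched schematically. This is only an illustration of combining non-overlap and overlap portions into one 2D frame for encoding; the tiny array sizes and the concatenation layout are assumptions, not the patented cube-face arrangement.

```python
# Schematic sketch of the packing step: the non-overlap portion of each
# hemispherical image occupies its own part of a 2D layout, and the
# shared overlap strips are packed into a reserved region, yielding one
# 2D frame to encode. Layout and sizes are illustrative assumptions.

def pack_layout(hemi1, hemi2, overlap1, overlap2):
    """Place the modified hemispheres side by side and append the
    overlap strips as extra rows of the same 2D frame."""
    top = [r1 + r2 for r1, r2 in zip(hemi1, hemi2)]
    bottom = [o1 + o2 for o1, o2 in zip(overlap1, overlap2)]
    return top + bottom

frame = pack_layout(
    hemi1=[[1, 1], [1, 1]], hemi2=[[2, 2], [2, 2]],
    overlap1=[[9, 9]], overlap2=[[8, 8]],
)
print(frame)  # [[1, 1, 2, 2], [1, 1, 2, 2], [9, 9, 8, 8]]
```

Keeping the overlap portions in the encoded frame, rather than discarding them, is what lets a decoder blend the stitch seam after transport.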
WEARABLE DISPLAY DEVICE AND CONTROL METHOD THEREFOR
A wearable apparatus may include an image display; a content receiver; and a processor configured to process image data received through the content receiver to generate an image frame, and control the image display to display the image frame. The processor may be configured to compare a vertical pixel line of a left edge portion of the image frame with a vertical pixel line of a right edge portion of the image frame, determine from the comparison whether the image frame is a 360-degree Virtual Reality (VR) image, and, when it is, process it as a 360-degree VR image.
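The edge-comparison idea can be sketched as follows. In an equirectangular 360-degree frame the leftmost and rightmost columns depict the same seam of the scene, so near-identical edge columns suggest a wrap-around image. The threshold value and function name are assumptions for illustration.

```python
# Minimal sketch of the edge-continuity check: compare the left and right
# edge pixel columns; a small mean absolute difference indicates a
# wrap-around (360-degree VR) image. Threshold is an assumed value.

def is_360_vr(frame, threshold=8):
    """Return True if the left and right edge columns nearly match."""
    left = [row[0] for row in frame]
    right = [row[-1] for row in frame]
    diff = sum(abs(a - b) for a, b in zip(left, right)) / len(left)
    return diff <= threshold

frame_360 = [[10, 50, 11],
             [20, 60, 21],
             [30, 70, 29]]
print(is_360_vr(frame_360))  # True
```

A real implementation would compare full-height pixel lines of a decoded frame, and likely on more than one channel, but the decision rule is the same.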