Patent classifications
G06T9/00
Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device
A three-dimensional data encoding method includes: extracting, from first three-dimensional data, second three-dimensional data having an amount of a feature greater than or equal to a threshold; and encoding the second three-dimensional data to generate first encoded three-dimensional data. For example, the three-dimensional data encoding method may further include encoding the first three-dimensional data to generate second encoded three-dimensional data.
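The extraction step described above can be sketched as a simple threshold filter over per-point feature amounts. The point representation and the tuple layout below are hypothetical; the abstract does not fix a data format.

```python
def extract_feature_points(points, threshold):
    """Keep only points whose feature amount meets the threshold.

    `points` is a list of (x, y, z, feature_amount) tuples -- an
    assumed representation for illustration only.
    """
    return [p for p in points if p[3] >= threshold]

# First three-dimensional data: positions plus a per-point feature amount.
first_data = [(0, 0, 0, 0.9), (1, 0, 0, 0.2), (0, 1, 0, 0.7)]

# Second three-dimensional data: the high-feature subset that would be
# encoded into the first encoded stream.
second_data = extract_feature_points(first_data, threshold=0.5)
print(second_data)  # -> [(0, 0, 0, 0.9), (0, 1, 0, 0.7)]
```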
IMAGE PROCESSING DEVICE AND METHOD
The present invention relates to an image processing device and method that enable noise removal adapted to the image content and bit rate. A low-pass filter setting unit 93 selects, from filter coefficients stored in a built-in filter coefficient memory 94, a filter coefficient corresponding to intra prediction mode information and a quantization parameter. A neighboring image setting unit 81 uses the filter coefficient set by the low-pass filter setting unit 93 to apply filtering processing to neighboring pixel values of a current block from frame memory 72. A prediction image generating unit 82 performs intra prediction using the filtered neighboring pixel values from the neighboring image setting unit 81 and generates a prediction image. The present invention can be applied, for example, to an image encoding device that encodes in the H.264/AVC format.
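The filtering of neighboring reference samples before intra prediction can be sketched with a [1, 2, 1]/4 low-pass kernel (the kind of smoothing used for 8x8 intra prediction in H.264/AVC). In the patent the coefficient is looked up per prediction mode and quantization parameter; here it is fixed for illustration.

```python
def smooth_neighbors(samples, coeffs=(1, 2, 1)):
    """Apply a [1,2,1]/4 low-pass filter to a row of neighboring
    reference samples; the two edge samples are left unfiltered.
    The coefficient choice per mode/QP is a table lookup in the
    patent -- fixed here as an assumption."""
    norm = sum(coeffs)
    out = list(samples)
    for i in range(1, len(samples) - 1):
        out[i] = (coeffs[0] * samples[i - 1]
                  + coeffs[1] * samples[i]
                  + coeffs[2] * samples[i + 1] + norm // 2) // norm
    return out

# An isolated noise spike in the reference row is attenuated before
# the samples feed intra prediction.
print(smooth_neighbors([10, 20, 90, 20, 10]))  # -> [10, 35, 55, 35, 10]
```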
SUBSET BASED COMPRESSION AND DECOMPRESSION OF GRAPHICS DATA
Techniques related to graphics rendering including techniques for compression and/or decompression of graphics data by use of indexed subsets are described.
IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD
The present disclosure relates to an image processing device and an image processing method for instantaneously displaying an image of a user's field of view.
An encoder encodes a celestial sphere image of a cube formed by images of multiple planes generated from omnidirectional images, the encoding being performed plane by plane at a high resolution, to generate a high-resolution encoded stream corresponding to each of the planes. The encoder further encodes, at a low resolution, the celestial sphere image to generate a low-resolution encoded stream. The present disclosure may be applied, for example, to image display systems that generate a celestial sphere image so as to display an image of the user's field of view derived therefrom.
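The two-resolution scheme above (one high-resolution stream per cube face, plus low-resolution coverage of the whole sphere) can be sketched as follows. Both `encode` and the averaging downsampler are stand-ins; the abstract does not name a codec or a downsampling method.

```python
def downsample(face, factor=2):
    """Average factor x factor blocks -- a stand-in for producing the
    low-resolution version; the actual method is unspecified."""
    h, w = len(face), len(face[0])
    return [[sum(face[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) // factor**2
             for x in range(w // factor)]
            for y in range(h // factor)]

def encode(pixels):
    """Hypothetical encoder: simply the flattened samples as bytes."""
    return bytes(v for row in pixels for v in row)

faces = {"front": [[100, 100], [100, 100]], "back": [[50, 50], [50, 50]]}

# One high-resolution encoded stream per cube face...
high_res_streams = {name: encode(face) for name, face in faces.items()}
# ...plus low-resolution encoded coverage of the celestial sphere.
low_res_streams = {name: encode(downsample(face)) for name, face in faces.items()}
```

A client can then fetch the high-resolution stream only for the face in the user's field of view, falling back to the low-resolution data elsewhere.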
ENCODING, DECODING, AND REPRESENTING HIGH DYNAMIC RANGE IMAGES
Techniques are provided to encode and decode image data comprising a tone mapped (TM) image with HDR reconstruction data in the form of luminance ratios and color residual values. In an example embodiment, luminance ratio values and residual values in color channels of a color space are generated on an individual pixel basis based on a high dynamic range (HDR) image and a derivative tone-mapped (TM) image that comprises one or more color alterations that would not be recoverable from the TM image with a luminance ratio image. The TM image with HDR reconstruction data derived from the luminance ratio values and the color-channel residual values may be outputted in an image file to a downstream device, for example, for decoding, rendering, and/or storing. The image file may be decoded to generate a restored HDR image free of the color alterations.
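The per-pixel luminance ratio and color-residual construction can be sketched as below. The (Y, Cb, Cr)-style tuple layout and the residual prediction (scaling the TM chroma by the luminance ratio) are assumptions for illustration; the patent works in an unspecified color space.

```python
def hdr_reconstruction_data(hdr_pixel, tm_pixel):
    """Per-pixel luminance ratio plus color-channel residuals.
    Pixels are (luma, chroma1, chroma2) tuples -- an assumed layout."""
    ratio = hdr_pixel[0] / tm_pixel[0]
    residuals = tuple(h - t * ratio for h, t in zip(hdr_pixel[1:], tm_pixel[1:]))
    return ratio, residuals

def restore_hdr(tm_pixel, ratio, residuals):
    """Invert the mapping: scale the TM pixel by the luminance ratio
    and add the color residuals back, recovering color alterations
    that the ratio alone could not."""
    luma = tm_pixel[0] * ratio
    chroma = tuple(t * ratio + r for t, r in zip(tm_pixel[1:], residuals))
    return (luma,) + chroma

hdr = (200.0, 30.0, -10.0)
tm = (50.0, 12.0, -4.0)
ratio, res = hdr_reconstruction_data(hdr, tm)
assert restore_hdr(tm, ratio, res) == hdr  # round-trips exactly here
```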
Systems and methods for compressing three-dimensional image data
Disclosed is a compression system for compressing image data. The compression system receives an uncompressed image file with data points that are defined with absolute values for elements representing the data point position in a space. The compression system stores the absolute values defined for a first data point in a compressed image file, determines a difference between the absolute values of the first data point and the absolute values of a second data point, derives a relative value for the absolute values of the second data point from the difference, and stores the relative value in place of the absolute values of the second data point in the compressed image file.
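The absolute-then-relative scheme described above is delta encoding of point coordinates: store the first point in full, then each later point as its per-element difference from the previous one. A minimal sketch, with the on-disk packing of the small deltas left out:

```python
def compress_points(points):
    """Store the first point's absolute values; every later point as
    the element-wise difference from its predecessor."""
    out = [points[0]]
    for prev, cur in zip(points, points[1:]):
        out.append(tuple(c - p for c, p in zip(cur, prev)))
    return out

def decompress_points(rel):
    """Rebuild absolute values by accumulating the deltas."""
    out = [rel[0]]
    for delta in rel[1:]:
        out.append(tuple(a + d for a, d in zip(out[-1], delta)))
    return out

pts = [(1000, 2000, 3000), (1001, 2002, 2999), (1003, 2001, 3001)]
compressed = compress_points(pts)
print(compressed)  # -> [(1000, 2000, 3000), (1, 2, -1), (2, -1, 2)]
assert decompress_points(compressed) == pts
```

Because neighboring points in 3D scans tend to be close together, the relative values are small and can be stored in far fewer bits than the absolute coordinates.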
MACHINE LEARNING IMAGE PROCESSING
A machine learning image processing system performs natural language processing (NLP) and auto-tagging for an image matching process. The system facilitates an interactive process, e.g., through a mobile application, to obtain an image and supplemental user input from a user to execute an image search. The supplemental user input may be provided from a user as speech or text, and NLP is performed on the supplemental user input to determine user intent and additional search attributes for the image search. Using the user intent and the additional search attributes, the system performs image matching on stored images that are tagged with attributes through an auto-tagging process.
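The final matching stage, ranking auto-tagged stored images against attributes extracted from the user's speech or text, can be sketched as a tag-overlap score. The catalog, tag sets, and scoring below are illustrative; the NLP and auto-tagging stages themselves are not shown.

```python
def match_images(tagged_images, search_attributes):
    """Rank stored images by overlap between their auto-tags and the
    attributes a (hypothetical) NLP stage derived from user input.
    Images with no overlapping tags are dropped."""
    scored = [(len(tags & search_attributes), name)
              for name, tags in tagged_images.items()]
    return [name for score, name in sorted(scored, reverse=True) if score]

catalog = {
    "img1.jpg": {"sofa", "leather", "brown"},
    "img2.jpg": {"sofa", "fabric", "gray"},
    "img3.jpg": {"lamp", "brass"},
}
# Attributes assumed to come from NLP on "brown leather sofa".
print(match_images(catalog, {"sofa", "leather", "brown"}))
# -> ['img1.jpg', 'img2.jpg']
```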
TOMOGRAPHIC IMAGE CAPTURING DEVICE
The tomographic image capturing device of the present invention includes a tomographic image capturing means that scans measurement light over a subject's eye fundus (E) to capture tomographic images of the subject's eye fundus, and an image processing means that compresses a picture of the captured tomographic images in a scan direction to generate a new tomographic picture. The tomographic image capturing means performs scanning at a second scan pitch (P.sub.L) narrower than a first scan pitch (P.sub.H) to capture the tomographic images of the subject's eye fundus. The image processing means compresses the picture (B11) of the tomographic images captured at the second scan pitch (P.sub.L) in the scan direction to generate the new tomographic picture (B12). The measurement width in the scan direction of the new tomographic picture (B12) is a width of a picture corresponding to a measurement width in the scan direction of a tomographic picture (Bn (n=1 to 10)) obtained by scanning at the first scan pitch (P.sub.H).
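The scan-direction compression above can be sketched by averaging groups of adjacent columns (A-scans): a picture captured at the finer pitch P_L is reduced to the width it would occupy at pitch P_H = factor * P_L. Simple averaging is an assumption; the patent does not specify the compression operator.

```python
def compress_scan_direction(bscan, factor):
    """Compress a B-scan (a list of A-scan columns, each a list of
    depth samples) along the scan direction by averaging groups of
    `factor` adjacent columns."""
    return [
        [sum(col[i] for col in bscan[g:g + factor]) // factor
         for i in range(len(bscan[0]))]
        for g in range(0, len(bscan), factor)
    ]

# Four columns of depth samples (picture B11 at pitch P_L),
# compressed 2:1 to the width of a pitch-P_H picture (B12).
b11 = [[10, 20], [30, 40], [50, 60], [70, 80]]
print(compress_scan_direction(b11, 2))  # -> [[20, 30], [60, 70]]
```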