Patent classifications
H04N13/172
Directed interpolation and data post-processing
An encoding device evaluates a plurality of processing and/or post-processing algorithms and/or methods to be applied to a video stream, and signals a selected method, algorithm, class or category of methods/algorithms either in an encoded bitstream or as side information related to the encoded bitstream. A decoding device or post-processor utilizes the signaled algorithm or selects an algorithm/method based on the signaled method or algorithm. The selection is based, for example, on availability of the algorithm/method at the decoder/post-processor and/or cost of implementation. The video stream may comprise, for example, downsampled multiplexed stereoscopic images, and the selected algorithm may include upconversion and/or error-correction techniques that contribute to a restoration of the downsampled images.
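The scheme described in the abstract can be sketched in miniature: an encoder evaluates candidate post-processing (here, upsampling) filters against a full-resolution reference and signals the index of the best one as side information; the decoder applies the signaled filter if it is available, otherwise falling back to one it supports. The filter set, the selection metric, and all names below are illustrative assumptions, not the patent's actual method.

```python
def nearest_upsample(row):
    """Upsample 1-D samples 2x by sample repetition."""
    out = []
    for s in row:
        out += [s, s]
    return out

def linear_upsample(row):
    """Upsample 1-D samples 2x by linear interpolation."""
    out = []
    for i, s in enumerate(row):
        out.append(s)
        nxt = row[i + 1] if i + 1 < len(row) else s
        out.append((s + nxt) / 2.0)
    return out

# Candidate post-processing algorithms, indexed by the ID that is signaled.
FILTERS = {0: nearest_upsample, 1: linear_upsample}

def mse(a, b):
    """Mean squared error between two equal-length sample rows."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def encoder_select(reference, downsampled):
    """Evaluate each candidate filter and return the best one's index,
    to be transmitted as side information with the bitstream."""
    return min(FILTERS, key=lambda i: mse(FILTERS[i](downsampled), reference))

def decoder_apply(downsampled, signaled_id, supported_ids):
    """Use the signaled filter if available at the decoder; otherwise
    fall back to the cheapest supported one."""
    chosen = signaled_id if signaled_id in supported_ids else min(supported_ids)
    return FILTERS[chosen](downsampled)

reference = [0, 1, 2, 3, 4, 5, 6, 7]
downsampled = reference[::2]                      # [0, 2, 4, 6]
sid = encoder_select(reference, downsampled)
restored = decoder_apply(downsampled, sid, supported_ids={0, 1})
```

In this toy case linear interpolation restores the reference more closely than sample repetition, so the encoder signals its index and the decoder reproduces the choice without re-running the evaluation.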
Method for processing immersive video and method for producing immersive video
An immersive video processing method according to the present disclosure includes determining a priority order of pruning for source view videos, generating a residual video for an additional view video based on the priority order of pruning, packing a patch generated based on the residual video into an atlas video, and encoding the atlas video.
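The pruning-and-packing pipeline in the abstract can be illustrated with 1-D integer "views" standing in for video frames: each view is pruned against all higher-priority views, the surviving samples form the residual, and residual patches are packed into a single atlas with metadata for unpacking. The pruning order, the redundancy test, and the packing layout below are simplifying assumptions for illustration only.

```python
def prune(view, higher_priority_views):
    """Keep only samples not already represented in a higher-priority
    view; pruned samples become None (the residual)."""
    return [s if all(s != hv[i] for hv in higher_priority_views) else None
            for i, s in enumerate(view)]

def extract_patches(residual):
    """Group consecutive surviving samples into (offset, samples) patches."""
    patches, cur, start = [], [], None
    for i, s in enumerate(residual):
        if s is not None:
            if start is None:
                start = i
            cur.append(s)
        elif cur:
            patches.append((start, cur))
            cur, start = [], None
    if cur:
        patches.append((start, cur))
    return patches

def pack_atlas(all_patches):
    """Concatenate patches into one atlas, recording per-patch metadata
    (view id, offset in the view, position in the atlas, length)."""
    atlas, meta, pos = [], [], 0
    for view_id, (offset, samples) in all_patches:
        atlas.extend(samples)
        meta.append((view_id, offset, pos, len(samples)))
        pos += len(samples)
    return atlas, meta

views = {0: [1, 1, 2, 3], 1: [1, 9, 2, 8], 2: [1, 9, 7, 8]}
pruning_order = [0, 1, 2]                 # view 0 is kept whole (basic view)
kept = []
for rank, vid in enumerate(pruning_order):
    higher = [views[p] for p in pruning_order[:rank]]
    residual = views[vid] if not higher else prune(views[vid], higher)
    for patch in extract_patches(residual):
        kept.append((vid, patch))
atlas, metadata = pack_atlas(kept)
```

Only the samples unique to each additional view survive pruning, so the atlas carries far less data than the sum of the source views while the metadata preserves enough information to place each patch back at render time.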
Methods, systems, and media for generating an immersive light field video with a layered mesh representation
Mechanisms for generating compressed images are provided. More particularly, methods, systems, and media for capturing, reconstructing, compressing, and rendering view-dependent immersive light field video with a layered mesh representation are provided.
Recording device, image observation device, observation system, control method of observation system, and computer-readable recording medium
A recording device includes: a memory; and a processor including hardware. The processor is configured to generate, based on temporal change in plural sets of image data that have been generated by an endoscope and arranged chronologically, biological information on a subject, associate the plural sets of image data with the biological information to record the plural sets of image data with the biological information into the memory, and select, based on the biological information, image data from the plural sets of image data that have been recorded in the memory to generate three-dimensional image data.
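The selection logic described above can be sketched as follows, with "biological information" approximated as a per-frame temporal-change score (e.g., a motion or respiration proxy derived from consecutive frames), and frames with a low score selected as stable candidates for three-dimensional reconstruction. The scoring function and the threshold are assumptions for illustration, not the device's actual processing.

```python
def temporal_change(frames):
    """Mean absolute difference between consecutive frames; the first
    frame has no predecessor, so its score is 0."""
    scores = [0.0]
    for prev, cur in zip(frames, frames[1:]):
        scores.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur))
    return scores

def record(frames):
    """Associate each frame with its derived biological-information score,
    as the recording device associates image data with biological info."""
    return list(zip(frames, temporal_change(frames)))

def select_for_3d(records, threshold):
    """Select the recorded frames whose change score is below the
    threshold, i.e., the stable frames suitable for 3-D reconstruction."""
    return [frame for frame, score in records if score < threshold]

frames = [[10, 10], [10, 11], [30, 31], [30, 30]]   # toy 2-pixel frames
records = record(frames)
selected = select_for_3d(records, threshold=1.0)
```

Here the third frame exhibits a large temporal change (a motion event) and is excluded, while the stable frames around it are retained for reconstruction.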
Apparatus, a method and a computer program for omnidirectional video
There are disclosed various methods, apparatuses and computer program products for video encoding and decoding. In some embodiments an encoding method comprises obtaining information of a region (63) for overlaying at least a part of an omnidirectional video (61), information of a current viewport (62) in the omnidirectional video (61), and information of an overlaying method that determines whether a part of the current viewport (62) or the whole current viewport (62) is overlaid by the overlaying region (63). The method may further comprise encoding the information of the overlaying region (63), the current viewport (62) and the overlaying method. In some embodiments a decoding method comprises receiving and decoding information of a region (63) for overlaying at least a part of an omnidirectional video (61), information of a current viewport (62) in the omnidirectional video (61), and information of an overlaying method that determines whether a part of the current viewport (62) or the whole current viewport (62) is overlaid by the overlaying region (63). The method may further comprise examining the decoded information of the overlaying method: if the examining reveals that the overlaying method is a partial overlaying method, overlaying a part of the image information of the current viewport (62) with the image information of the overlaying region (63); or, if the examining reveals that the overlaying method is a whole overlaying method, overlaying the whole current viewport (62) with the image information of the overlaying region (63).
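The signaled overlay decision can be sketched minimally: the encoder packs the overlay region, its placement, and a one-flag overlaying method as side information; the decoder examines the flag and either composites the region over part of the viewport or replaces the whole viewport. The dict-based "bitstream", the field names, and the 1-D pixel model are assumptions, not the actual syntax.

```python
PARTIAL, WHOLE = 0, 1   # illustrative values for the overlaying-method flag

def encode_overlay(region_pixels, region_offset, method):
    """Pack overlay information (region, placement, method) as side
    information; a dict stands in for real bitstream syntax here."""
    return {"region": region_pixels, "offset": region_offset,
            "method": method}

def decode_and_compose(viewport, info):
    """Examine the signaled overlaying method and composite accordingly."""
    if info["method"] == WHOLE:
        # Whole overlaying: the region image replaces the entire viewport.
        return list(info["region"])[:len(viewport)]
    # Partial overlaying: the region overlays only part of the viewport.
    out = list(viewport)
    off = info["offset"]
    out[off:off + len(info["region"])] = info["region"]
    return out

viewport = [0, 0, 0, 0, 0, 0]                      # toy 1-D viewport
partial = decode_and_compose(viewport, encode_overlay([7, 7], 2, PARTIAL))
whole = decode_and_compose(viewport, encode_overlay([9] * 6, 0, WHOLE))
```

Keeping the decision to a single signaled flag means the decoder never has to infer the compositing mode from the region's geometry, which is the point of signaling the overlaying method explicitly.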