H04N19/112

Live Teleporting System and Apparatus
20180007314 · 2018-01-04 ·

A method of producing a Pepper's Ghost includes projecting an image of a subject onto a reflective, transparent screen to create a virtual image of the subject alongside an object, the subject in the virtual image having a colour temperature. The object is illuminated with light whose colour and intensity bring the colour temperature of the object at least approximately into line with that of the subject in the virtual image. The subject in the virtual image also has a luminance, and the object may be illuminated with light whose colour and intensity bring the luminance of the object at least approximately into line with that of the subject in the virtual image.
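As a rough illustration of the matching criterion, a minimal sketch follows; the function names, the 200 K tolerance, and the use of nits are illustrative assumptions, not values from the patent:

```python
def cct_matches(subject_cct_k, object_cct_k, tol_k=200.0):
    """True when the object's correlated colour temperature (CCT)
    approximately matches that of the subject in the virtual image."""
    return abs(subject_cct_k - object_cct_k) <= tol_k

def target_illumination(subject_cct_k, subject_luminance_nits):
    """Hypothetical helper: set the stage light to the subject's CCT and
    luminance so the real object blends with the virtual image."""
    return {"cct_k": subject_cct_k, "luminance_nits": subject_luminance_nits}
```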

Encoding device and encoding method

An encoding method includes determining video format information, (i) setting each of the frames or fields included in the video as a picture, regardless of whether the video format is the interlace format or the progressive format, (ii) assigning a POC indicating display order to each of the set pictures, one by one, each POC being different from the others, and encoding a picture to be encoded, which is a frame or a field, with reference to a picture encoded before it. In the encoding, the video is encoded with a syntax structure that does not depend on the video format, the video format information is encoded in a header of a sequence, which is a unit of the video, and an encoded bit stream is generated.
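The format-independent POC assignment can be sketched as follows; this is a schematic illustration, not the patented method's actual data structures:

```python
def assign_pocs(pictures):
    """Treat every frame (progressive) or field (interlaced) as a picture
    and give each one a distinct POC in display order. The assignment
    works the same way regardless of the video format."""
    return [(poc, pic) for poc, pic in enumerate(pictures)]
```

For interlaced content, each field receives its own POC, so display order stays unambiguous without format-specific syntax.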

VIDEO DECODING IMPLEMENTATIONS FOR A GRAPHICS PROCESSING UNIT

Video decoding innovations for multithreading implementations and graphics processing unit (“GPU”) implementations are described. For example, for multithreaded decoding, a decoder uses innovations in the areas of layered data structures, picture extent discovery, a picture command queue, and/or task scheduling for multithreading. For a GPU implementation, a decoder uses innovations in the areas of inverse transforms, inverse quantization, fractional interpolation, intra prediction using waves, loop filtering using waves, memory usage, and/or performance-adaptive loop filtering. Innovations are also described in the areas of error handling and recovery, determination of neighbor availability for operations such as context modeling and intra prediction, CABAC decoding, computation of collocated information for direct mode macroblocks in B slices, reduction of memory consumption, implementation of trick play modes, and picture dropping for quality adjustment.
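The "waves" idea can be sketched with a standard anti-diagonal wavefront schedule; this illustrates the general technique, not necessarily the exact grouping the patent uses. Since intra prediction for a macroblock typically depends on its left, top-left, top, and top-right neighbours, macroblock (x, y) can run once wave x + 2·y is reached, and all macroblocks sharing a wave index are mutually independent:

```python
def waves(mb_cols, mb_rows):
    """Group macroblocks into waves that can be processed in parallel.
    Block (x, y) joins wave x + 2*y, which guarantees its left, top-left,
    top, and top-right neighbours belong to earlier waves."""
    order = {}
    for y in range(mb_rows):
        for x in range(mb_cols):
            order.setdefault(x + 2 * y, []).append((x, y))
    return [order[w] for w in sorted(order)]
```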

DISPLAY DEVICE
20230011698 · 2023-01-12 ·

A display device includes a first display layer including display elements disposed on an object; a power supply that supplies a power signal to the display elements; and a signal controller having an encoder that encodes first image data into second image data and supplies the second image data to the display elements. Each display element includes a base member; a pixel; a driving circuit unit having a decoder that decodes the second image data into the first image data and provides a pixel driving signal to the pixel; a first antenna unit that receives the power signal and provides it to the driving circuit unit; a second antenna unit that receives the second image data and provides it to the decoder; and a third antenna unit that transmits and receives an addressing signal for detecting a relative position between the display elements.

CODING AND DECODING OF INTERLEAVED IMAGE DATA

Sampled data is packaged in checkerboard format for encoding and decoding. The sampled data may be quincunx-sampled multi-image video data (e.g., 3D video or a multi-program stream), and the data may also be divided into sub-images of each image which are then multiplexed, or interleaved, in frames of a video stream to be encoded and then decoded using a standardized video encoder. A system for viewing may utilize a standard video decoder and a formatting device that de-interleaves the decoded sub-images of each frame and reformats the images for a display device. A 3D video may be encoded using the most advantageous interleaving format such that a preferred quality and compression ratio is reached. In one embodiment, the invention includes a display device that accepts data in multiple formats.
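The checkerboard packing of two views can be sketched as below; this is a simplified illustration (images as lists of sample rows), not the patent's full sub-image multiplexing pipeline:

```python
def checkerboard_interleave(left, right):
    """Interleave two equally sized images in a checkerboard pattern:
    positions where (x + y) is even take the left-view sample,
    odd positions take the right-view sample."""
    rows, cols = len(left), len(left[0])
    return [[left[y][x] if (x + y) % 2 == 0 else right[y][x]
             for x in range(cols)] for y in range(rows)]
```

A decoder-side formatter would apply the inverse pattern to recover the quincunx-sampled sub-images of each view.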

Transmission apparatus, transmission method, reception apparatus, reception method, recording apparatus, and recording method
11523120 · 2022-12-06 ·

The present technology makes it easy to present an image having appropriate image quality at a receiver side that receives high-frame-rate moving image data. A video stream obtained by encoding moving image data having a high frame rate is generated. A container containing the video stream is transmitted. Blur control information for controlling blur is inserted into a layer of the container and/or a layer of the video stream. The blur control information gives, for example, weighting coefficients for individual frames in a blurring process for adding image data of neighboring frames to image data of a current frame.
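The weighted blending that the blur control information describes can be sketched as follows; the per-offset coefficient map and edge clamping are illustrative assumptions:

```python
def apply_blur(frames, index, coeffs):
    """Blend the current frame with its neighbours using per-frame
    weighting coefficients. `coeffs` maps a relative frame offset to a
    weight; weights are assumed to sum to 1 so brightness is preserved.
    Frames are lists of sample values; indices are clamped at the edges."""
    n = len(frames[index])
    out = [0.0] * n
    for offset, w in coeffs.items():
        j = min(max(index + offset, 0), len(frames) - 1)  # clamp at sequence edges
        for i in range(n):
            out[i] += w * frames[j][i]
    return out
```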
