H04N19/40

Perceptual luminance nonlinearity-based image data exchange across different display capabilities

A handheld imaging device has a data receiver configured to receive reference encoded image data. The data includes reference code values encoded by an external coding system. The reference code values represent reference gray levels, which are selected using a reference grayscale display function based on the perceptual non-linearity of human vision as adapted to spatial frequencies at different light levels. The imaging device also has a data converter configured to access a code mapping between the reference code values and device-specific code values of the imaging device. The device-specific code values are configured to produce gray levels specific to the imaging device. Based on the code mapping, the data converter transcodes the reference encoded image data into device-specific image data, which is encoded with the device-specific code values.
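As a rough illustration of such a code mapping (a hypothetical Python sketch, not taken from the patent), the functions below build a lookup table by matching each reference gray level to the device code whose gray level is closest, then transcode reference code values through that table. The grayscale display functions are assumed to be simple lists mapping code values to luminances:

```python
def build_code_mapping(ref_gsdf, device_gsdf):
    """Build a lookup table from reference code values to device-specific
    code values by matching gray levels between the two grayscale display
    functions (GSDFs). Each GSDF is a list: code value -> luminance."""
    mapping = []
    for target_luma in ref_gsdf:
        # pick the device code whose gray level is closest to the target
        best = min(range(len(device_gsdf)),
                   key=lambda c: abs(device_gsdf[c] - target_luma))
        mapping.append(best)
    return mapping

def transcode(ref_codes, mapping):
    """Transcode reference encoded image data into device-specific codes."""
    return [mapping[c] for c in ref_codes]
```

Here the device may have fewer (or differently spaced) codes than the reference; the mapping absorbs that difference, which is the point of transcoding at the receiver.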

Reuse of block tree pattern in video compression
11601660 · 2023-03-07

A method includes transcoding a first block of a video. The first block is associated with a first block tree pattern that defines a structure for splitting a block into smaller blocks. A string of bits representing the first block tree pattern is included in an encoded bitstream for the video. The method determines that the first block tree pattern of the first block can be reused as a second block tree pattern of a second block, and includes information in the encoded bitstream indicating that the first block tree pattern is to be used to decode the second block from the encoded bitstream.
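One way to picture the reuse signaling (an illustrative sketch; the patent's actual bitstream syntax is not specified here): serialize each block tree depth-first as split/leaf bits, and when a block's pattern matches the previous block's, emit a one-bit reuse flag instead of repeating the full pattern:

```python
def encode_tree(pattern):
    """Serialize a block tree pattern depth-first as a bit string.
    A tree is nested lists of children; None marks a leaf (no split)."""
    if pattern is None:
        return "0"                       # leaf: stop splitting
    bits = "1"                           # split flag
    for child in pattern:
        bits += encode_tree(child)
    return bits

def encode_blocks(patterns):
    """Emit per-block bitstream fields. If a block's tree pattern matches
    the previous block's, send only a set reuse flag ('1'); otherwise a
    clear flag ('0') followed by the full pattern bits."""
    stream, prev_bits = [], None
    for p in patterns:
        bits = encode_tree(p)
        if bits == prev_bits:
            stream.append("1")           # reuse previous block's pattern
        else:
            stream.append("0" + bits)    # transmit pattern explicitly
            prev_bits = bits
    return stream
```

For a quadtree block split into four leaves, the pattern costs five bits; reusing it for the next block costs one, which is the compression win the abstract describes.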

System and Method of Controlling Equipment Based on Data Transferred In-Band in Video via Optically Encoded Images
20230119262 · 2023-04-20

Data is encoded into one or more optically encoded images. The optically encoded images are then inserted as image data into a video sequence, i.e., into video frames. The data is transmitted in-band within the video, via any conceivable video distribution channel or format. The video may be transcoded as required; because the data is optically encoded, any video processing that even crudely preserves the frame images will preserve the optically encoded data, making this scheme of in-band data transfer very robust. A video receiving apparatus receives the video, inspects the image data from video frames in memory, detects optically encoded images in the image data, and decodes the optically encoded images to recover the data. The frames carrying optically encoded images are typically discarded and not rendered to a display. The receiver controls connected equipment other than a display (e.g., a musical instrument) based on the extracted data.
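The robustness claim can be sketched with a toy optical code (hypothetical layout, not the patent's format): each data bit is rendered as a large black or white cell in the frame, and the receiver recovers bits by sampling cell centers and thresholding, so mild compression noise from transcoding does not flip bits:

```python
CELL = 4           # each bit becomes a CELL x CELL pixel block
WIDTH_CELLS = 16   # cells per row in the encoded image (assumed layout)

def encode_frame(data: bytes):
    """Render bytes as an optically encoded image (2D list of 0/255
    pixel values), one cell per bit, most significant bit first."""
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    rows = (len(bits) + WIDTH_CELLS - 1) // WIDTH_CELLS
    frame = [[0] * (WIDTH_CELLS * CELL) for _ in range(rows * CELL)]
    for n, bit in enumerate(bits):
        r, c = divmod(n, WIDTH_CELLS)
        for y in range(r * CELL, (r + 1) * CELL):
            for x in range(c * CELL, (c + 1) * CELL):
                frame[y][x] = 255 if bit else 0
    return frame

def decode_frame(frame, nbytes):
    """Recover bytes by sampling the center of each cell and thresholding,
    which tolerates small pixel-level errors from lossy transcoding."""
    bits = []
    for n in range(nbytes * 8):
        r, c = divmod(n, WIDTH_CELLS)
        sample = frame[r * CELL + CELL // 2][c * CELL + CELL // 2]
        bits.append(1 if sample >= 128 else 0)
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)
```

A real system would add synchronization marks and error correction; the sketch only shows why image-domain encoding survives any processing that roughly preserves the picture.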

METHOD AND APPARATUS FOR DECODING VIDEO, AND METHOD AND APPARATUS FOR ENCODING VIDEO

Provided are a video decoding method and apparatus that, when a merge candidate list of a current block is constructed during video encoding and decoding, determine whether the number of merge candidates in the list is greater than 1 and smaller than a predetermined maximum merge candidate number. When it is, the method determines an additional merge candidate by using a first merge candidate and a second merge candidate of the merge candidate list of the current block, adds the determined additional merge candidate to the merge candidate list, and performs prediction on the current block based on the merge candidate list.
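A minimal sketch of the list-extension rule described above (illustrative only; the abstract does not say how the two candidates are combined, so averaging their motion vectors is assumed here as one plausible rule):

```python
def extend_merge_list(candidates, max_candidates):
    """If the merge candidate list holds more than one candidate but fewer
    than the maximum, derive an additional candidate from the first two.
    Candidates are (x, y) motion vectors; the combination rule (component-
    wise average) is an assumption for illustration."""
    if 1 < len(candidates) < max_candidates:
        (x1, y1), (x2, y2) = candidates[0], candidates[1]
        avg = ((x1 + x2) // 2, (y1 + y2) // 2)
        if avg not in candidates:     # avoid duplicate entries
            candidates.append(avg)
    return candidates
```

Since encoder and decoder apply the same deterministic rule, the extra candidate costs no extra bits to signal.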

DYNAMIC INSERTION OF CONTENT VIA MACROBLOCK MODIFICATION
20230067258 · 2023-03-02

Systems, methods, and devices for inserting content into a video frame are disclosed herein. A frame of video data encoded to include a plurality of macroblocks is received. An insertion region of the frame for inserting content is defined, the insertion region spanning a subset of the macroblocks. The frame is augmented with a duplication region configured as a non-displayed region, the duplication region including duplicated macroblocks that duplicate the macroblocks of the insertion region. The macroblocks of the insertion region are then replaced with replacement macroblocks that encode the replacement content.
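The duplicate-then-replace step can be sketched over a flat list of macroblocks (an assumed data structure; a real codec would signal the non-displayed duplication region through bitstream headers rather than a Python tuple):

```python
def augment_frame(macroblocks, insertion_idx, replacement):
    """Copy the original insertion-region macroblocks into a duplication
    region (kept but not displayed), then overwrite the insertion region
    with replacement macroblocks carrying the new content.

    macroblocks:   list of macroblock payloads for the frame
    insertion_idx: indices of macroblocks inside the insertion region
    replacement:   replacement macroblocks, one per index
    """
    duplication = [macroblocks[i] for i in insertion_idx]  # preserve originals
    for i, mb in zip(insertion_idx, replacement):
        macroblocks[i] = mb                                # insert new content
    return macroblocks, duplication
```

Keeping the originals in a non-displayed region means later frames that predict from the untouched pixels still decode correctly.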

Virtual file system for cloud-based shared content

A server in a cloud-based environment interfaces with storage devices that store shared content accessible by two or more users. Individual items within the shared content are associated with respective object metadata that is also stored in the cloud-based environment. Download requests initiate downloads of instances of a virtual file system module to two or more user devices associated with two or more users. The downloaded virtual file system modules capture local metadata that pertains to local object operations directed by the users over the shared content. Changed object metadata attributes are delivered to the server and to other user devices that are accessing the shared content. Peer-to-peer connections can be established between the two or more user devices. Objects can be divided into smaller portions such that processing the individual portions of a larger object reduces the likelihood of a conflict between user operations over the shared content.
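The conflict-reduction idea in the last sentence can be illustrated with a simple sketch (hypothetical helper names, not from the patent): split an object into fixed-size portions, and treat two users' concurrent edits as conflicting only where they touch the same portion:

```python
def chunk_object(data: bytes, chunk_size: int):
    """Divide a large object into fixed-size portions so that operations
    on different portions can proceed independently."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def detect_conflicts(edited_a, edited_b):
    """Two users' edit sets conflict only on the chunk indices both
    touched; edits to disjoint chunks merge without conflict."""
    return sorted(set(edited_a) & set(edited_b))
```

With whole-object granularity, any two concurrent edits would conflict; with chunk granularity, only overlapping portions do.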