H04N5/505

Display-side adaptive video processing

Adaptive video processing for a target display panel may be implemented in or by a decoding/display pipeline associated with that panel. The adaptive video processing methods may take into account video content, display characteristics, and environmental conditions, including but not limited to ambient lighting and viewer location, when processing and rendering video content for the target display panel in its ambient setting or environment. The display-side methods may use this information to adjust one or more video processing functions applied to the video data so that the rendered video is adapted both to the display panel and to the ambient viewing conditions.
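As a sketch of the idea, the toy function below picks rendering parameters from ambient lux, viewer distance, and panel peak brightness. The function name, the linear lux-to-nits rule, and the distance threshold are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of display-side adaptation: the parameter choices and
# formulas below are illustrative assumptions, not the patent's method.

def adapt_display_params(ambient_lux, viewer_distance_m, panel_peak_nits):
    """Pick a target brightness and shadow lift from viewing conditions."""
    # A brighter room calls for a brighter panel; clamp to the panel's peak.
    target_nits = min(panel_peak_nits, 80.0 + 0.5 * ambient_lux)
    # Distant viewers lose shadow detail, so lift the shadows slightly.
    shadow_lift = 0.05 if viewer_distance_m > 3.0 else 0.0
    return {"target_nits": target_nits, "shadow_lift": shadow_lift}

params = adapt_display_params(ambient_lux=400, viewer_distance_m=4.0,
                              panel_peak_nits=600)
```

A downstream tone-mapping stage would then consume these parameters when rendering each frame.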

Non-linear display brightness adjustment
10264266 · 2019-04-16

Display brightness adjustment apparatus and methods are described in which the average brightness of a display may be scaled up or down using a non-linear function. When the non-linear function is applied to scale brightness down, the contrast of the output signal may be left unreduced so that the dynamic range and highlights are preserved. The non-linear brightness adjustment may be performed automatically, for example in response to the ambient light level as detected by sensor(s), but may also be applied in response to a user adjustment of a brightness control knob or slider. The non-linear brightness adjustment may be performed globally, or alternatively on local regions of an image or display panel. The non-linear function may be a piecewise linear function or some other non-linear function.
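The piecewise-linear scale-down described above can be sketched as follows; the particular `scale` and `knee` values, and the two-segment shape, are illustrative assumptions. Values below the knee are dimmed, while the segment above the knee ramps back up to full white so the highlight range survives.

```python
def brightness_scale_down(x, scale=0.6, knee=0.7):
    """Piecewise-linear brightness scale-down for a normalized value x
    in [0, 1]: dims values below `knee` by `scale`, then ramps back up
    so that full white (1.0) still maps to 1.0, preserving highlights.
    Illustrative sketch; not the patent's specific function."""
    if x <= knee:
        return scale * x
    # Linear segment from (knee, scale * knee) up to (1.0, 1.0).
    slope = (1.0 - scale * knee) / (1.0 - knee)
    return scale * knee + slope * (x - knee)
```

Note that the slope above the knee is greater than 1, so average brightness drops while highlight contrast is not reduced. Applying the same function per region rather than per frame would give the local variant the abstract mentions.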

High dynamic range video capture with backward-compatible distribution
10212429 · 2019-02-19

Video processing techniques and pipelines are described that support capture, distribution, and display of high dynamic range (HDR) image data on both HDR-enabled display devices and display devices that do not support HDR imaging. A sensor pipeline may generate standard dynamic range (SDR) data from HDR data captured by a sensor using tone mapping, for example local tone mapping. Information used to generate the SDR data may be provided to a display pipeline as metadata with the generated SDR data. If a target display does not support HDR imaging, the SDR data may be rendered directly by the display pipeline. If the target display does support HDR imaging, an inverse mapping technique may be applied to the SDR data according to the metadata to render HDR data for display. Information used in performing color gamut mapping may also be provided in the metadata and used to recover clipped colors for display.
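A minimal sketch of this backward-compatible path, with a trivial per-pixel gain standing in for the patent's (possibly local) tone mapping; the function names and the gain-as-metadata format are assumptions for illustration.

```python
# Illustrative sketch: a per-pixel gain stands in for the tone mapping;
# real pipelines use far richer local operators and compact metadata.

def hdr_to_sdr(hdr_pixels, sdr_peak=1.0):
    """Tone-map HDR to SDR and record the per-pixel gain as metadata."""
    sdr, gains = [], []
    for v in hdr_pixels:
        g = min(1.0, sdr_peak / v) if v > 0 else 1.0  # compress above SDR peak
        sdr.append(v * g)
        gains.append(g)
    return sdr, gains  # gains travel with the SDR stream as metadata

def sdr_to_hdr(sdr_pixels, gains):
    """Invert the tone mapping using the metadata to recover HDR values."""
    return [s / g for s, g in zip(sdr_pixels, gains)]

hdr = [0.2, 1.0, 4.0]          # linear-light values; 4.0 exceeds the SDR peak
sdr, meta = hdr_to_sdr(hdr)
recovered = sdr_to_hdr(sdr, meta)
```

An SDR-only display would consume `sdr` directly and ignore the metadata; an HDR display would apply the inverse mapping, which is the backward-compatibility property the abstract describes.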

Digital TV reception using OTT backchannel communication

Techniques are described for expanding and/or improving the Advanced Television Systems Committee (ATSC) 3.0 television protocol. In an ATSC 3.0 environment, receivers (including consumer and professional receivers) have signal reception parameters and antenna factors available to them. These reception parameters, together with time and location data, are transmitted to one or more servers that maintain databases of reception characteristics. This data is analyzed to identify a set of likely receivable signals (based on reception parameters, date/time, location, geographical features, transmitter information, etc.). Receivers query the servers for this set of likely receivable signals to reduce channel scan time by scanning only, or first, for the more-receivable channels. Also, difficult reception locations identified in the data collected by the servers are used in aggregate to guide RF improvements (e.g., adding single-frequency network (SFN) transmitters). Further, the data collected by the servers may be used to inform multi-frequency network (MFN) planning.
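The scan-ordering idea can be sketched as below, with the OTT backchannel query mocked by a local dictionary; in practice the receiver would send its reception parameters, antenna factors, location, and time to the server and get back a ranked channel list. All names here are illustrative assumptions.

```python
# Hypothetical sketch: a dict stands in for the server's aggregated
# reception database reached over the OTT backchannel.

def likely_channels(server_db, location):
    """Mock of the backchannel query: channels the server's aggregated
    reception data predicts are receivable at this location."""
    return server_db.get(location, [])

def scan_order(all_channels, ranked):
    """Scan the likely channels first, then fall back to the remainder,
    so a full scan still finds everything but good channels come first."""
    remainder = [c for c in all_channels if c not in ranked]
    return ranked + remainder

server_db = {"downtown": [7, 13, 31]}
order = scan_order(all_channels=list(range(2, 37)),
                   ranked=likely_channels(server_db, "downtown"))
```

Scanning the ranked channels first (or only) is what cuts the scan time; falling back to the remainder preserves completeness when the prediction is wrong.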

Adaptive transfer function for video encoding and decoding
12244820 · 2025-03-04

A video encoding and decoding system that implements an adaptive transfer function method internally within the codec for signal representation. A focus dynamic range representing an effective dynamic range of the human visual system may be dynamically determined for each scene, sequence, frame, or region of input video. The video data may be cropped and quantized into the bit depth of the codec according to a transfer function for encoding within the codec. The transfer function may be the same as the transfer function of the input video data or may be a transfer function internal to the codec. The encoded video data may be decoded and expanded into the dynamic range of display(s). The adaptive transfer function method enables the codec to use fewer bits for the internal representation of the signal while still representing the entire dynamic range of the signal in output.
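As a sketch of the focus-range mechanism, the toy codec below determines a per-frame range, quantizes into a 10-bit internal representation, and expands back on decode. The per-frame min/max and the linear transfer function are simplifying assumptions; the patent contemplates per-scene, per-sequence, and per-region ranges and non-linear transfer functions.

```python
# Minimal sketch: per-frame min/max stands in for the "focus dynamic
# range" determination, and a linear transfer function stands in for
# the codec-internal one. Assumes the frame is not flat (hi > lo).

def encode_frame(values, bit_depth=10):
    """Crop values to the frame's focus range and quantize to bit_depth."""
    lo, hi = min(values), max(values)   # focus dynamic range for this frame
    levels = (1 << bit_depth) - 1
    codes = [round((v - lo) / (hi - lo) * levels) for v in values]
    return codes, (lo, hi)              # (lo, hi) is signaled as metadata

def decode_frame(codes, focus_range, bit_depth=10):
    """Expand quantized codes back into the signaled focus range."""
    lo, hi = focus_range
    levels = (1 << bit_depth) - 1
    return [lo + c / levels * (hi - lo) for c in codes]

frame = [0.01, 0.5, 4.0]                # linear-light, normalized values
codes, rng = encode_frame(frame)
reconstructed = decode_frame(codes, rng)
```

Because the 10 bits span only the focus range rather than the full signal range, the internal representation spends its codes where the content actually lives, which is the bit savings the abstract claims.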

Adaptive Transfer Function for Video Encoding and Decoding
20250159190 · 2025-05-15

A video encoding and decoding system that implements an adaptive transfer function method internally within the codec for signal representation. A focus dynamic range representing an effective dynamic range of the human visual system may be dynamically determined for each scene, sequence, frame, or region of input video. The video data may be cropped and quantized into the bit depth of the codec according to a transfer function for encoding within the codec. The transfer function may be the same as the transfer function of the input video data or may be a transfer function internal to the codec. The encoded video data may be decoded and expanded into the dynamic range of display(s). The adaptive transfer function method enables the codec to use fewer bits for the internal representation of the signal while still representing the entire dynamic range of the signal in output.