H04N2017/008

Testing rendering of screen objects
11647249 · 2023-05-09

The present disclosure relates to methods and devices for testing video data being rendered at or using a media device. A plurality of video frames to be rendered is received, each frame comprising one or more primary screen objects and at least one further screen object. The received frames are rendered at or using the media device, wherein the at least one further screen object is superimposed on the one or more primary screen objects of a given frame during rendering. The rendered frames are provided to a data model, which outputs extracted metadata indicating the presence or absence of further screen objects in the rendered video frames. The data model is also provided with original metadata associated with the video frames prior to rendering. The rendering of each further screen object is then tested based on the original metadata and extracted metadata relating to a given video frame. The disclosure also extends to associated methods and devices for generating training data for testing rendering of video frames and training a data model using the training data.

Video quality assessment method and device

A video quality assessment method and device are provided. The video quality assessment method includes: obtaining a to-be-assessed video, where the to-be-assessed video includes a forward error correction (FEC) redundancy data packet; when a quantity of lost data packets of a first source block in the to-be-assessed video is less than or equal to a quantity of FEC redundancy data packets of the first source block, generating a first summary packet for a non-lost data packet of the first source block, and generating a second summary packet for a lost data packet of the first source block; and calculating a mean opinion score of video (MOSV) of the to-be-assessed video based on the first summary packet and the second summary packet. The MOSV calculated according to the method is more consistent with real video experience of a user, so accuracy of video quality assessment can be improved.
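The recoverability condition and the two summary-packet types described above can be sketched as follows. The packet fields (`seq`, `status`) are illustrative assumptions; the actual MOSV computation is not reproduced here:

```python
def assess_source_block(received_ids, lost_ids, fec_redundancy_count):
    """Apply the abstract's recoverability test: the source block can be
    repaired only if the quantity of lost data packets does not exceed
    the quantity of FEC redundancy data packets for that block."""
    if len(lost_ids) > fec_redundancy_count:
        return None  # unrecoverable block; scored differently upstream
    # "First" summary packets describe packets that arrived intact.
    first = [{"seq": s, "status": "received"} for s in received_ids]
    # "Second" summary packets describe packets lost but FEC-recoverable.
    second = [{"seq": s, "status": "recovered"} for s in lost_ids]
    return first, second
```

Both summary-packet lists would then feed the MOSV calculation, letting recovered packets be scored as playable rather than as losses.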

TESTING RENDERING OF SCREEN OBJECTS
20230353814 · 2023-11-02

The present disclosure relates to methods and devices for testing video data being rendered at or using a media device. A plurality of video frames to be rendered is received, each frame comprising one or more primary screen objects and at least one further screen object. The received frames are rendered at or using the media device, wherein the at least one further screen object is superimposed on the one or more primary screen objects of a given frame during rendering. The rendered frames are provided to a data model, which outputs extracted metadata indicating the presence or absence of further screen objects in the rendered video frames. The data model is also provided with original metadata associated with the video frames prior to rendering. The rendering of each further screen object is then tested based on the original metadata and extracted metadata relating to a given video frame. The disclosure also extends to associated methods and devices for generating training data for testing rendering of video frames and training a data model using the training data.

TESTING RENDERING OF SCREEN OBJECTS
20210314651 · 2021-10-07

The present disclosure relates to methods and devices for testing video data being rendered at or using a media device. A plurality of video frames to be rendered is received, each frame comprising one or more primary screen objects and at least one further screen object. The received frames are rendered at or using the media device, wherein the at least one further screen object is superimposed on the one or more primary screen objects of a given frame during rendering. The rendered frames are provided to a data model, which outputs extracted metadata indicating the presence or absence of further screen objects in the rendered video frames. The data model is also provided with original metadata associated with the video frames prior to rendering. The rendering of each further screen object is then tested based on the original metadata and extracted metadata relating to a given video frame. The disclosure also extends to associated methods and devices for generating training data for testing rendering of video frames and training a data model using the training data.

Digital closed caption corruption reporting
10681343 · 2020-06-09

Concepts and technologies disclosed herein are directed to closed caption corruption detection and reporting. In accordance with one aspect disclosed herein, a system can ingest a digital video channel bitstream. The system can locate a digital closed caption flag in the digital video channel bitstream. The digital closed caption flag can indicate that a digital closed caption content packet is present within the digital video channel bitstream. The system can determine that at least a portion of the digital closed caption content packet cannot be rendered to display closed caption content associated with at least the portion of the digital closed caption content packet. The system can instantiate an alert based on the determination that at least the portion of the digital closed caption content packet cannot be rendered to display the closed caption content associated with at least the portion of the digital closed caption content packet.
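The flow described above (locate a CC flag, check whether the CC packet can be rendered, raise an alert if not) can be illustrated with a simplified sketch. The packet dictionary shape and the use of UTF-8 decodability as a stand-in for "cannot be rendered" are assumptions for illustration only:

```python
def scan_for_cc_corruption(bitstream_packets):
    """Scan a (simplified) bitstream: wherever the closed-caption flag is
    set, attempt to decode the CC payload; an undecodable payload stands
    in for a packet that cannot be rendered and triggers an alert."""
    alerts = []
    for pkt in bitstream_packets:
        if not pkt.get("cc_flag"):
            continue  # no digital closed caption flag in this packet
        try:
            pkt["cc_payload"].decode("utf-8")
        except (UnicodeDecodeError, AttributeError):
            alerts.append({
                "packet_id": pkt["id"],
                "reason": "CC content packet cannot be rendered",
            })
    return alerts
```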

VIDEO QUALITY ASSESSMENT METHOD AND DEVICE
20200067629 · 2020-02-27

A video quality assessment method and device are provided. The video quality assessment method includes: obtaining a to-be-assessed video, where the to-be-assessed video includes a forward error correction (FEC) redundancy data packet; when a quantity of lost data packets of a first source block in the to-be-assessed video is less than or equal to a quantity of FEC redundancy data packets of the first source block, generating a first summary packet for a non-lost data packet of the first source block, and generating a second summary packet for a lost data packet of the first source block; and calculating a mean opinion score of video (MOSV) of the to-be-assessed video based on the first summary packet and the second summary packet. The MOSV calculated according to the method is more consistent with real video experience of a user, so accuracy of video quality assessment can be improved.

Digital Closed Caption Corruption Reporting
20190089951 · 2019-03-21

Concepts and technologies disclosed herein are directed to closed caption corruption detection and reporting. In accordance with one aspect disclosed herein, a system can ingest a digital video channel bitstream. The system can locate a digital closed caption flag in the digital video channel bitstream. The digital closed caption flag can indicate that a digital closed caption content packet is present within the digital video channel bitstream. The system can determine that at least a portion of the digital closed caption content packet cannot be rendered to display closed caption content associated with at least the portion of the digital closed caption content packet. The system can instantiate an alert based on the determination that at least the portion of the digital closed caption content packet cannot be rendered to display the closed caption content associated with at least the portion of the digital closed caption content packet.

Using closed-captioning data to output an alert indicating a functional state of a back-up video-broadcast system

In one aspect, an example method for outputting an alert indicating a functional state of a back-up video-broadcast system involves a computing device receiving first closed-captioning data that corresponds to a first video-stream; the computing device receiving second closed-captioning data that corresponds to a second video-stream; the computing device making a determination that the received first closed-captioning data and the received second closed-captioning data lack a threshold extent of similarity; and responsive to the determination that the received first closed-captioning data and the received second closed-captioning data lack the threshold extent of similarity, the computing device outputting an alert.
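The method above reduces to a similarity test between the two closed-captioning streams against a threshold. A minimal sketch, assuming text CC data and using a generic sequence-similarity ratio as the (unspecified) similarity measure, with the 0.8 threshold chosen arbitrarily:

```python
from difflib import SequenceMatcher

def backup_alert_needed(primary_cc, backup_cc, threshold=0.8):
    """Compare CC text from the primary and back-up broadcast streams;
    if the two lack the threshold extent of similarity, signal an alert
    (returned here as True) about the back-up system's functional state."""
    similarity = SequenceMatcher(None, primary_cc, backup_cc).ratio()
    return similarity < threshold
```

In practice the comparison would run continuously over aligned time windows of the two streams rather than over single strings.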

Caption rendering automation test framework

Exemplary methods for automatically testing closed caption (CC) rendering include receiving a set of one or more reference audio video (AV) streams from an AV source, and generating reference CC images from the set of one or more reference AV streams starting from a recording start time to a recording stop time. In one embodiment, the method further includes receiving a set of one or more test AV streams from the AV source, and generating test CC images from the set of one or more test AV streams starting from the recording start time to the recording stop time. In one embodiment, the methods further include determining whether the AV source is performing CC rendering properly by automatically comparing the test CC images against the reference CC images.
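The final step, comparing test CC images against reference CC images captured over the same recording window, can be sketched as below. Representing each image as an opaque byte string and using exact equality are simplifying assumptions; a real comparison would tolerate minor pixel differences:

```python
def compare_cc_images(reference_images, test_images):
    """Compare reference and test CC images frame-index by frame-index
    over the same recording window; return the indices where the test
    rendering does not match the reference rendering."""
    mismatches = []
    for i, (ref, test) in enumerate(zip(reference_images, test_images)):
        if ref != test:  # exact match as a stand-in for image comparison
            mismatches.append(i)
    return mismatches

reference = [b"img0", b"img1", b"img2"]
test = [b"img0", b"imgX", b"img2"]
print(compare_cc_images(reference, test))  # → [1]
```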

Methods and systems for real time automated caption rendering testing

Exemplary methods and processing systems for determining whether an audio video (AV) source is performing closed captioning (CC) rendering properly are described. An AV stream including one or more AV frames is received from the AV source; for each frame from the AV stream the following operations are performed: detecting a CC image in the frame; cropping the CC image from the frame; and outputting the CC image and metadata associated with the frame. A caption file is generated based on the AV stream, where the caption file includes captioning information for the AV stream. The CC image and metadata output for a frame from the AV stream are compared with CC information within the caption file related to that frame to determine whether the AV source is performing CC rendering properly.
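The per-frame check described above can be sketched as a comparison between the CC text recovered from each frame and the caption-file entry for that frame's timestamp. The frame fields (`timestamp`, `cc_text`, assumed to hold text already extracted from the cropped CC image) and the timestamp-keyed caption file are illustrative assumptions:

```python
def verify_cc_rendering(frames, caption_file):
    """For each frame, compare the CC text recovered from the cropped CC
    image against the caption file entry for the frame's timestamp;
    return the timestamps where the rendered CC does not match."""
    failures = []
    for frame in frames:
        expected = caption_file.get(frame["timestamp"])
        if frame["cc_text"] != expected:
            failures.append(frame["timestamp"])
    return failures

frames = [
    {"timestamp": 0.0, "cc_text": "Hello"},
    {"timestamp": 1.0, "cc_text": "wrld"},   # rendering dropped a letter
]
caption_file = {0.0: "Hello", 1.0: "world"}
print(verify_cc_rendering(frames, caption_file))  # → [1.0]
```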