H04N5/92

Automatic generation of video playback effects

In various examples, recordings of gameplay sessions are enhanced by the application of special effects to relatively high(er) and/or low(er) interest durations of the gameplay sessions. Durations of relatively high(er) or low(er) predicted interest in a gameplay session are identified, for instance, based upon level of activity engaged in by a gamer during a particular gameplay session duration. Once identified, different variations of video characteristic(s) are applied to at least a portion of the identified durations for implementation during playback. The recordings may be generated and/or played back in real-time with a live gameplay session, or after completion of the gameplay session. Further, video data of the recordings themselves may be modified to include the special effects and/or indications of the durations and/or variations may be included in metadata and used for playback.
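As a hedged illustration only (not the patented implementation), the flow described in the abstract, tagging gameplay durations by activity level and mapping them to playback-time effects recorded in metadata, might be sketched as follows. The activity thresholds, the metadata layout, and the speed factors are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Duration:
    start_s: float   # segment start, in seconds
    end_s: float     # segment end, in seconds
    activity: float  # assumed proxy: gamer actions per second in the segment

def tag_interest(durations, high_thresh=5.0, low_thresh=1.0):
    """Label each segment as high, low, or normal predicted interest
    based on the level of activity during that duration."""
    tags = []
    for d in durations:
        if d.activity >= high_thresh:
            tags.append((d, "high"))
        elif d.activity <= low_thresh:
            tags.append((d, "low"))
        else:
            tags.append((d, "normal"))
    return tags

def playback_metadata(tagged):
    """Emit metadata entries mapping tagged segments to a playback-time
    variation of a video characteristic, e.g. slow motion for high-interest
    durations and fast-forward for low-interest durations (assumed effects)."""
    effect = {"high": {"speed": 0.5},
              "low": {"speed": 2.0},
              "normal": {"speed": 1.0}}
    return [{"start": d.start_s, "end": d.end_s, "effect": effect[tag]}
            for d, tag in tagged]
```

A player consuming this metadata could apply the speed variations during playback without modifying the recorded video data itself, matching the abstract's alternative of metadata-driven effects.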

System and method for reinforcing proficiency skill using multi-media
10741090 · 2020-08-11 ·

A novel system and method to implement a modified Learn-by-Teaching (LdL) method within a video-graphic context. The system and method include a narration step that is structured to cause the student or group completing a video project to simultaneously engage different parts of the brain so that they work in concert to reinforce the proficiency skill being taught. During the narration step, the student must view a silent video while composing a textual script to serve as a voice-over narration. The student must then practice audibly reciting the script. The student is positively challenged during the narration step in two ways: to achieve synchronicity with the video, and to teach an inanimate object in the form of an abstract audience of unknown scope. The combination of multi-modal actions and challenges facilitates implementation of an LdL method in a new and powerful way.

Luminance characteristics generation method

A luminance characteristics generation method includes: determining, for each of frames included in a video, a value as first luminance characteristics, the value being obtained by dividing the number of pixels having luminances less than or equal to a first luminance among all pixels included in the frame by the number of all the pixels included in the frame; and outputting the first luminance characteristics determined in the determining of the value.
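The determining step above is a simple ratio, and can be sketched directly. This is an illustrative reading of the abstract, not the claimed implementation; representing a frame as a flat sequence of per-pixel luminance values is an assumption:

```python
def first_luminance_characteristics(frame, first_luminance):
    """Return the first luminance characteristics of a frame: the number of
    pixels with luminance less than or equal to `first_luminance`, divided
    by the total number of pixels in the frame."""
    total = len(frame)
    if total == 0:
        raise ValueError("frame has no pixels")
    dark = sum(1 for lum in frame if lum <= first_luminance)
    return dark / total
```

Repeating this per frame and outputting the resulting values yields the per-frame characteristics the method describes.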

METHOD TO CHANGE THE SERVICE OF A DIGITAL TELEVISION DECODER EQUIPPED WITH A PLURALITY OF TUNERS
20200252675 · 2020-08-06 ·

A method for changing service within a digital television decoder, the digital decoder including a plurality of tuners, each tuner being capable of receiving a stream of signals including data relating to television services, the method including configuring each tuner for receiving a particular service; restoring, on a screen, a first service corresponding to the service for the reception of which a first tuner has been configured; receiving, via the decoder, a first change of service command with a view to displaying a second service for the reception of which a second tuner has been configured; receiving, via the decoder, a second change of service command with a view to displaying a third service for the reception of which a third tuner has been configured; applying a forced delay before displaying the second service and/or the third service.
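A minimal sketch of the zapping step, assuming each tuner is already configured for its service as the abstract describes; the delay value, the `tuners` mapping, and the injectable `sleep` hook are assumptions for illustration, and the purpose attributed to the delay (pacing rapid service changes) is an inference, not stated in the abstract:

```python
import time

def zap(tuners, target_idx, forced_delay_s=0.3, sleep=time.sleep):
    """Display the service a pre-configured tuner is receiving, applying
    the forced delay before display. `tuners` maps tuner index -> service."""
    sleep(forced_delay_s)          # forced delay applied before displaying
    service = tuners[target_idx]   # tuner was pre-configured for this service
    return f"displaying {service}"
```

Because each tuner is pre-tuned, successive change-of-service commands only select among already-received streams rather than retuning.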

Systems and Methods for Time-Based Athletic Activity Measurement and Display
20200246662 · 2020-08-06 ·

An athletic parameter measurement device worn by an athlete during an athletic activity session includes a housing which attaches to the athlete, a display, a processor associated with the display, and an athletic parameter measurement sensor. During the athletic activity, the device detects, using the sensor, a vertical jump height of the athlete, and displays, during the performance of the athletic activity session, a representation of the vertical jump height on the display.

VIDEO DISPLAY DEVICE AND VIDEO DISPLAY METHOD

A video display device includes: a video receiver that receives video data including a video and dynamic luminance characteristics indicating a time-dependent change in luminance characteristics of the video; a tone mapping processor that, in the case where a luminance region having a luminance less than or equal to a first luminance is defined as a low luminance region, and a luminance region having a luminance exceeding the first luminance is defined as a high luminance region, (i) performs first tone mapping using first conversion characteristics when first luminance characteristics exceed a predetermined threshold value, and (ii) performs second tone mapping using second conversion characteristics when the first luminance characteristics are less than or equal to the predetermined threshold value.
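The threshold-based selection between the two tone mappings can be sketched as follows. This is an illustrative reading of the abstract, assuming the conversion characteristics are representable as per-pixel callables; the concrete curves and values are placeholders:

```python
def tone_map(frame, first_lum_char, threshold, first_curve, second_curve):
    """Apply first tone mapping (first conversion characteristics) when the
    first luminance characteristics exceed the threshold, otherwise apply
    second tone mapping, to every luminance value in the frame."""
    curve = first_curve if first_lum_char > threshold else second_curve
    return [curve(lum) for lum in frame]
```

Here `first_lum_char` would be the per-frame value produced by the luminance characteristics generation method described above: the fraction of pixels in the low luminance region.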

Method and system for synchronously reproducing multimedia multi-information

Disclosed are a method and system for synchronously reproducing multimedia multi-information. The method combines a plurality of relevant files or information streams that have an associated information relationship using a multi-information modulation unit, and then synchronously reproduces the relevant files or information streams using a dedicated player capable of synchronously reproducing and playing back the multi-information. The step of synchronously recording the multi-information is to insert non-audio/video information into an audio/video stream, before or after compression, or into a file thereof, using the multi-information modulation unit; that is, to embed additional information blocks carrying the non-audio/video information into necessary video frames or audio frames, and/or to create or insert additional information frames carrying the non-audio/video information between the necessary video frames and audio frames.
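The interleaving of additional information frames between the necessary audio/video frames might be sketched as below. This is a simplified illustration, not the patented modulation scheme; representing frames as `(timestamp, payload)` pairs and synchronizing purely by timestamp are assumptions:

```python
def interleave_info_frames(av_frames, info_blocks):
    """Insert additional information frames carrying non-audio/video data
    between audio/video frames, tagging each frame type so a dedicated
    player can demultiplex them and reproduce everything in sync.
    Both inputs are lists of (timestamp, payload) pairs."""
    stream = [("av", ts, payload) for ts, payload in av_frames] + \
             [("info", ts, payload) for ts, payload in info_blocks]
    stream.sort(key=lambda frame: frame[1])  # order by timestamp for sync
    return stream
```

On playback, a dedicated player would strip the `"info"` frames out of the stream and render their payloads alongside the audio/video at the matching timestamps.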

Augmented reality in a virtual reality environment

Methods, systems, and techniques for projecting streamed video are provided. An example Surround Video Projection System (SVPS) provides support for displaying augmented reality (AR) elements in a virtual reality (VR) environment such that the AR elements appear inside the target viewing environment upon which the VR environment is rendered. The SVPS may also change the displayed VR environment responsive to attributes and/or characteristics of the user. In one example, the SVPS comprises a real-time, interactive rendering system, a display system, and one or more display units. The rendering system comprises a high-resolution graphics engine capable of generating high-resolution video. The projection system comprises video capture cards that capture the generated video stream and forward it to a projection mapping engine. The projection mapping engine consolidates and stitches together the received video stream as appropriate to render it over the display units to the target viewing environment.