Patent classifications
H04N21/4307
DYNAMIC VISUAL INTENSITY RENDERING
The present technology can provide a mechanism for adjusting a visual effect associated with an audio artifact in a frequency band that is attenuated by speaker characteristics. The intensity of the adjusted visual effect can also be attributed to a change in the volume settings of a processing device, as well as to the intensity of the multimedia skin in which the visual effect is encoded. The multimedia skin includes filters, transitions/animations, and/or universal image processing that can be applied to any set of photos, videos, and/or songs in order to create, in real time, many variations of the same digital multimedia file, wherein each multimedia skin leads to a specific video rendering.
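The abstract does not specify how the intensity factors combine; the sketch below assumes a simple multiplicative weighting (the function name and clamping range are illustrative, not from the patent):

```python
def effect_intensity(base_intensity: float, volume: float,
                     skin_intensity: float, attenuation_db: float) -> float:
    """Hypothetical weighting: the rendered effect scales with the device
    volume setting and the multimedia skin's intensity, and is reduced by
    how strongly the speaker attenuates the artifact's frequency band."""
    gain = 10 ** (-attenuation_db / 20.0)  # dB attenuation -> linear gain
    value = base_intensity * volume * skin_intensity * gain
    return max(0.0, min(1.0, value))       # clamp to a displayable range
```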
MULTIMEDIA CONTENT PROTECTION
A method is disclosed for protecting a multimedia content distributed by a content service to a user device, the multimedia content being related to a live event, wherein the method comprises, during the live event: identifying a significant segment of the live event; generating a trigger signal associated with the significant segment; at the user device, on the basis of the trigger signal, generating a marking comprising information identifying the user device; and, at the user device, applying the marking to a selected portion of the multimedia content, the selected portion corresponding to the significant segment of the live event.
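A minimal sketch of the marking step, assuming the marking is a short digest binding the device identity to the trigger time and that it is embedded reversibly into the selected portion (the digest length and XOR embedding are assumptions, not claimed by the patent):

```python
import hashlib

def generate_marking(device_id: str, trigger_time: float) -> bytes:
    """Derive a compact marking that identifies the user device and the
    significant segment signaled by the trigger."""
    payload = f"{device_id}:{trigger_time:.3f}".encode()
    return hashlib.sha256(payload).digest()[:8]

def apply_marking(segment: bytearray, marking: bytes) -> bytearray:
    """Illustrative only: XOR the marking into the leading bytes of the
    selected portion of the multimedia content."""
    marked = bytearray(segment)
    for i, b in enumerate(marking):
        marked[i] ^= b
    return marked
```

Because XOR is an involution, applying the same marking twice restores the original bytes, which makes the embedding easy to verify in this toy model.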
ELECTRONIC DEVICE FOR PERFORMING SYNCHRONIZATION OF VIDEO DATA AND AUDIO DATA, AND CONTROL METHOD THEREFOR
An electronic device for use with an external electronic device includes a touchscreen display, at least one speaker, and at least one processor. The at least one processor may obtain a user input for outputting video data of a first medium while audio data of the first medium is output through the at least one speaker, identify a point of time when the audio data is output through the at least one speaker, based on the obtained user input, determine a point of time when the video data is to be output through the touchscreen display or an external electronic device, by a delay time calculated at least based on the identified point of time, and control the touchscreen display or the external electronic device such that the video data is output through the touchscreen display or the external electronic device at the determined point of time.
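The delay-time calculation is not spelled out in the abstract; a plausible sketch is that the video output point is the identified audio output point plus the display path's latency (the parameter names below are assumptions):

```python
def scheduled_video_time(audio_point: float, display_latency: float,
                         link_delay: float = 0.0) -> float:
    """Given the identified point of time at which the audio data is
    output through the speaker, determine when the video data should be
    output so the two are perceived together; an external display adds
    its link delay to the calculation."""
    return audio_point + display_latency + link_delay
```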
Synchronized playback and control of media
Methods and systems provide synchronized sharing of multimedia between multiple devices. The multiple devices may form an ad-hoc network for sharing of multimedia. In an embodiment, group members may have playlist manipulation privileges such as pausing, rewinding, fast forwarding, or adding tracks to the playlist. A system may stream or distribute content according to the shared playlist. Playback may be synchronized for group members so that everyone is exposed to the same part of the content at the same time.
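One common way to keep group members at the same playback position is to derive it from a shared start timestamp rather than from each device's local player state; the class below is a minimal sketch of that idea (the class and method names are illustrative):

```python
class SharedPlayback:
    """All members compute the same playback offset from a shared start
    timestamp, so everyone is exposed to the same part of the content
    at the same time. Pause/resume shift the shared reference point."""

    def __init__(self, start_time: float):
        self.start_time = start_time
        self.paused_at = None

    def position(self, now: float) -> float:
        if self.paused_at is not None:
            return self.paused_at - self.start_time
        return now - self.start_time

    def pause(self, now: float) -> None:
        self.paused_at = now

    def resume(self, now: float) -> None:
        # Shift the start time so the elapsed position is preserved.
        self.start_time = now - (self.paused_at - self.start_time)
        self.paused_at = None
```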
SYSTEM AND METHOD TO SYNCHRONIZE RENDERING OF MULTI-CHANNEL AUDIO TO VIDEO PRESENTATION
A system and method are provided for an AV device for use with a video player, one or more speakers, and encoded AV data. The encoded AV data includes multiplexed encoded video data and encoded audio data. The AV device is connected to the speakers via wireless channels. The AV device is able to determine channel delays associated with each wireless channel; synchronize program clocks of the video player and speakers; determine and modify buffer levels of each speaker; demultiplex the encoded AV data to obtain encoded video data and encoded audio data; and provide prefetched portions of encoded audio data based on buffer levels.
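A standard way to use the measured channel delays is to align every speaker to the slowest wireless channel; the sketch below assumes that approach (the abstract does not state the exact compensation scheme):

```python
def speaker_start_offsets(channel_delays_ms):
    """Each speaker holds back its audio by the difference between the
    worst channel delay and its own channel's delay, so that audio on
    all wireless channels renders in lockstep with the video."""
    worst = max(channel_delays_ms)
    return [worst - d for d in channel_delays_ms]
```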
Dynamic client buffering and usage of received video frames for cloud gaming
A method is disclosed including setting, at a server, a server VSYNC signal to a server VSYNC frequency defining a plurality of frame periods. The server VSYNC signal corresponds to generation of a plurality of video frames at the server during the plurality of frame periods. The method includes setting, at a client, a client VSYNC signal to a client VSYNC frequency. The method includes sending a plurality of compressed video frames based on the plurality of video frames from the server to the client over a network using the server VSYNC signal. The method includes decoding and displaying, at the client, the plurality of compressed video frames. The method includes analyzing the timing of one or more client operations to set the amount of frame buffering used by the client as the client receives the plurality of compressed video frames.
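The abstract leaves the buffering heuristic open; one plausible reading is that the client buffers enough frames to absorb the measured timing variation of its operations, rounded up to whole frame periods (the formula below is an assumption, not the patent's method):

```python
import math

def frames_to_buffer(jitter_ms: float, decode_ms: float,
                     frame_period_ms: float) -> int:
    """Hypothetical heuristic: hold enough received frames to ride out
    observed network jitter plus per-frame decode time, measured in
    whole frame periods of the client VSYNC signal."""
    return max(1, math.ceil((jitter_ms + decode_ms) / frame_period_ms))
```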
Interactive Media Events
An Interactive Media Event (IME) system includes a sync server, a first user device, and a second user device, each device coupled to the server. The server executes computer instructions instantiating a content segment engine which outputs a Party matter to the second user device and instantiates an IME engine which receives, from the second user device, a later reaction to the Party matter. The IME engine synchronizes the later reaction with the Party matter. The Party matter may include a media event and a prior reaction to the media event received from the first user device. The media event includes a primary content segment and synchronization information associated therewith. The prior reaction and/or the later reaction may be synchronized to the primary content segment and/or to each other using the synchronization information. A reaction may include chat data captured during the Party.
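A minimal sketch of the synchronization step, assuming the synchronization information includes the segment's start time on a shared clock and that reactions carry capture timestamps on the same clock (the field names are assumptions):

```python
def align_reactions(reactions, segment_start):
    """Map each reaction's capture timestamp onto the primary content
    segment's timeline, so prior and later reactions replay at the
    matching content offsets for every party member."""
    return [(r["user"], r["captured_at"] - segment_start) for r in reactions]
```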
Audio transitions when streaming audiovisual media titles
A playback application is configured to analyze audio frames associated with transitions between segments within a media title to identify one or more portions of extraneous audio. The playback application is configured to analyze the one or more portions of extraneous audio and then determine which of the one or more corresponding audio frames should be dropped. In doing so, the playback application can analyze a topology associated with the media title to determine whether any specific portions of extraneous audio are to be played outside of a logical ordering of audio samples set forth in the topology. These specific portions of extraneous audio are preferentially removed.
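The core filtering step can be sketched as partitioning frames at the segment boundary by presentation timestamp; the representation of frames as dicts with a `pts` key is an assumption for illustration:

```python
def drop_extraneous(audio_frames, segment_end_pts):
    """Keep audio frames whose presentation timestamps fall inside the
    segment; frames at or past the boundary carry extraneous audio from
    the neighboring segment and are candidates for dropping."""
    kept = [f for f in audio_frames if f["pts"] < segment_end_pts]
    dropped = [f for f in audio_frames if f["pts"] >= segment_end_pts]
    return kept, dropped
```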
Systems and methods for video splicing and displaying
The present disclosure relates to a system and method for synchronous video display on at least one display. The method may comprise receiving a channel of video signal from each data acquisition port of a plurality of data acquisition ports during a time interval, each channel of video signal comprising a plurality of video frames captured during the time interval. The method may also comprise assigning a count value for each video frame of the channel of video signal as synchronization information for each video frame of the channel of video signal, to form a pool of video frames each corresponding to a count value. The method may further comprise selecting video frames with the same count value from the pool of video frames as synchronized video frames, and transmitting, through the plurality of output ports, the synchronized video frames for synchronous display on the at least one display.
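The count-value matching step can be sketched as a set intersection over the per-channel pools; the representation of each channel as a dict from count value to frame is an assumption for illustration:

```python
def select_synchronized(channels):
    """channels: one dict per acquisition port, mapping count value ->
    frame. For every count value present in all channels, collect the
    matching frames so they can be transmitted as synchronized frames
    for synchronous display."""
    common = set(channels[0]).intersection(*channels[1:])
    return {count: [ch[count] for ch in channels] for count in sorted(common)}
```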
METHOD, SYSTEM, AND COMPUTER-READABLE RECORDING MEDIUM FOR IMPLEMENTING FAST-SWITCHING MODE BETWEEN CHANNELS IN MULTI-LIVE TRANSMISSION ENVIRONMENT
A method, a system, and a computer-readable recording medium implement a fast-switching mode between channels in a multi-live transmission environment. A composite image, in which the images of multiple channels are synthesized into a single image, is received as one stream to configure a multi-view composed of the images of the multiple channels. When the image of a specific channel is selected in the multi-view, the original image of that channel is received and the multi-view may be switched to a full view of the selected channel's image.
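The switching logic can be sketched as swapping between the single composite stream and a per-channel original stream; the class and the URL-keyed mapping below are illustrative assumptions:

```python
class MultiViewPlayer:
    """The multi-view is served as one composite stream; selecting a
    channel switches to that channel's original stream for full view,
    and the player can switch back to the composite multi-view."""

    def __init__(self, composite_url, channel_urls):
        self.composite_url = composite_url
        self.channel_urls = channel_urls   # channel id -> original stream
        self.active = composite_url        # start in multi-view

    def select_channel(self, channel_id):
        # Fast switch: the composite stream is replaced by the selected
        # channel's original image stream.
        self.active = self.channel_urls[channel_id]

    def back_to_multiview(self):
        self.active = self.composite_url
```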