Patent classifications
H04N21/4302
SYNCHRONIZATION OF A VSYNC SIGNAL OF A FIRST DEVICE TO A VSYNC SIGNAL OF A SECOND DEVICE
A method is disclosed including setting, at a server, a server VSYNC signal to a server VSYNC frequency, the server VSYNC signal corresponding to generation of video frames during frame periods of the server VSYNC frequency. The method includes setting, at a client, a client VSYNC signal to a client VSYNC frequency. The method includes sending compressed video frames from the server to the client over a network using the server VSYNC signal, wherein the compressed video frames are based on the generated video frames. The method includes decoding and displaying, at the client, the compressed video frames. The method includes analyzing the timing of one or more client operations to adjust the relative timing between the server VSYNC signal and the client VSYNC signal as the client receives the compressed video frames.
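One way the client-side timing analysis could work is to observe the phase at which compressed frames arrive within each frame period and shift the client VSYNC so frames land just before scan-out. This is a minimal sketch; the function name, the decode margin, and the alignment rule are assumptions, not taken from the patent.

```python
def vsync_phase_adjustment_ms(arrival_times_ms, frame_period_ms, margin_ms=2.0):
    """Estimate how far to shift the client VSYNC, in milliseconds.

    arrival_times_ms: times at which compressed frames arrived at the client.
    frame_period_ms:  1000 / VSYNC frequency (about 16.67 at 60 Hz).
    margin_ms:        assumed safety margin left for decoding a frame.
    """
    # Phase of each arrival within one frame period.
    phases = [t % frame_period_ms for t in arrival_times_ms]
    # Align the client VSYNC shortly after the latest observed arrival phase,
    # so even the slowest frame is decoded before it must be displayed.
    return (max(phases) + margin_ms) % frame_period_ms
```

A positive result would delay the client VSYNC by that amount; repeating the measurement as frames continue to arrive keeps the two signals aligned despite clock drift.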
SYSTEMS AND METHODS FOR DYNAMICALLY SYNCING FROM TIME-SHIFTED FRAME TO LIVE STREAM OF CONTENT
Systems and methods for skipping presentation of a portion of segments to catch up to a live stream based on a priority level value are disclosed. For example, a streaming application generates a content item for live streaming, where the content item comprises a plurality of segments. In response to determining that playing of the content item lags behind the live streaming of the content item, the streaming application identifies the duration of the lag and determines a priority threshold based on the lag. Based on a manifest that includes priority level information, the streaming application determines whether a respective segment needs to be cached. For example, if a segment within the duration of the lag has a priority level that is higher than the priority threshold, then the segment is stored in a cache from a respective network address and is played from the cache.
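The caching decision can be sketched as a filter over the manifest. The manifest schema and the lag-to-threshold rule below are illustrative assumptions; the abstract only says the threshold is derived from the lag.

```python
def priority_threshold(lag_s):
    # Assumption: a longer lag raises the bar, so more low-priority
    # segments are skipped and playback catches up to live faster.
    return 1 if lag_s < 10 else (2 if lag_s < 30 else 3)

def segments_to_cache(manifest, playhead_s, lag_s):
    """Return segments within the lag window whose priority exceeds the
    lag-derived threshold; everything else is skipped.

    manifest: list of dicts like {"start": s, "priority": p, "url": u}
              (an assumed schema for illustration).
    """
    threshold = priority_threshold(lag_s)
    return [seg for seg in manifest
            if playhead_s <= seg["start"] < playhead_s + lag_s
            and seg["priority"] > threshold]
```

Selected segments would then be fetched from their network addresses into the cache and played from there, while the rest of the lag window is dropped.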
CONSISTENCE OF ACOUSTIC AND VISUAL SCENES
Media content data of an object is received. Whether a first parameter indicated by a first description of the object in an acoustic scene and a second parameter indicated by a second description of the object in a visual scene are inconsistent is determined. If the two parameters are inconsistent, one of the first description and the second description is modified based on the other, unmodified description, so that the modified description is consistent with the description that is not modified.
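A minimal sketch of the reconciliation step, assuming the compared parameter is a single numeric value (e.g. an object's azimuth) and assuming the visual description is treated as authoritative; the patent leaves the choice of which description to modify open.

```python
def reconcile(acoustic, visual, key, tolerance):
    """Compare one parameter across the two scene descriptions; if they
    disagree beyond the tolerance, rewrite the acoustic description to
    match the unmodified visual one (an assumed policy).
    """
    if abs(acoustic[key] - visual[key]) <= tolerance:
        return acoustic, visual          # already consistent, nothing to do
    return {**acoustic, key: visual[key]}, visual
```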
Distributed control of media content item during webcast
Disclosed embodiments include systems and methods for distributed control of media-playback components of a webcast. In an example, a webcast presenter's device can include a webcast compositing engine for creating a webcast from a variety of data sources, a media-playback engine for playing media content items, and a message processing engine for processing messages sent from audience members to the presenter. The message processing engine can obtain the messages and parse the messages for tokens indicative of a requested media content item. The message processing engine, having identified the requested media content item, can then cause the media-playback engine to play the requested content or add it to a queue.
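The token-matching step of the message processing engine could be sketched as an n-gram lookup against a catalog of titles. The catalog schema (lowercase title mapped to an item id) and the tokenization are assumptions for illustration.

```python
import re

def find_requested_item(message, catalog):
    """Scan an audience message for tokens naming a media content item.

    catalog: dict mapping lowercase item title -> item id (assumed schema).
    Tries progressively shorter n-grams so multi-word titles match before
    single words. Returns the matching item id, or None.
    """
    tokens = re.findall(r"[\w']+", message.lower())
    for n in range(len(tokens), 0, -1):
        for i in range(len(tokens) - n + 1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in catalog:
                return catalog[candidate]
    return None
```

On a match, the engine would hand the item id to the media-playback engine to play immediately or enqueue.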
ELECTRONIC DEVICE AND MULTIMEDIA PLAYBACK METHOD THEREOF
An example electronic device according to various embodiments of this document may include: at least one auxiliary processor; and a main processor operably connected to the at least one auxiliary processor, wherein the main processor may be configured to: obtain moving image data; separate the moving image data into image data including plural consecutive image frames, first audio data including plural consecutive audio frames, and plural timestamps corresponding respectively to the plural consecutive audio frames; generate second audio data using the first audio data by adding header data to each of the plural audio frames; transmit the second audio data to the at least one auxiliary processor; generate, based on first time information successively received from the at least one auxiliary processor, second time information; and play back the image data based on the second time information, and wherein the at least one auxiliary processor may be configured to: play back an audio signal based on the received second audio data; and generate the first time information about a playback time of the audio signal based on the header data.
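The pacing step can be sketched as deriving a video presentation clock (the "second time information") from the audio playback time reported by the auxiliary processor. All parameter names are illustrative assumptions.

```python
def video_frame_index(first_time_ms, audio_frames_played,
                      audio_frame_ms, video_fps):
    """Derive the video clock from audio playback reports.

    first_time_ms:       playback time the auxiliary processor reported,
                         generated from the per-frame header data.
    audio_frames_played: audio frames rendered since that report.
    audio_frame_ms:      duration of one audio frame.
    Returns the index of the image frame that should currently be shown.
    """
    second_time_ms = first_time_ms + audio_frames_played * audio_frame_ms
    return int(second_time_ms * video_fps / 1000)
```

Driving video from the audio clock this way keeps lip sync even when the two processors run from separate timers.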
Content-modification system with jitter effect mitigation feature
One high-level aspect of a content-modification system and related methods may involve facilitating modification, by a content-presentation device, of received broadcast content in a controlled manner, under circumstances in which some events that can impact timing may be unpredictable. In particular, certain operations by a content-presentation device may involve matching received content with specific expected content, as determined by one or another component of the content-modification system, in order to confirm that proper conditions are met for the content-presentation device to proceed with, or continue, content-modification operations. It can happen that the matching procedure becomes subject or susceptible to timing irregularities, or jitter. In some instances, jitter may impact the ability to derive the benefits of content modification. Accordingly, example embodiments herein are directed to systems and methods for compensating for and/or mitigating the effects of jitter.
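One common way to quantify such timing irregularity is the exponentially weighted jitter estimator used for interarrival jitter in RTP (RFC 3550). The sketch below applies that estimator and then widens the match window accordingly; the base window and the 3x multiplier are assumptions, not values from the abstract.

```python
def update_jitter(jitter_ms, deviation_ms, gain=1 / 16):
    """RFC 3550-style smoothed jitter estimate: blend in the absolute
    deviation between expected and observed timing."""
    return jitter_ms + gain * (abs(deviation_ms) - jitter_ms)

def match_ok(expected_ms, observed_ms, jitter_ms, base_window_ms=50.0):
    # Assumption: accept a content match if the observed timestamp falls
    # within the base window plus a few times the current jitter estimate.
    return abs(observed_ms - expected_ms) <= base_window_ms + 3 * jitter_ms
```

The window grows when timing becomes irregular and shrinks back as conditions stabilize, so matches are neither missed during jitter nor accepted too loosely afterward.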
ELECTRONIC DEVICE AND METHOD FOR SYNCHRONIZATION OF VIDEOS
An electronic device including circuitry is provided. The circuitry generates a synchronization signal. The circuitry controls drive of one or more light-emitting devices based on the synchronization signal to generate a pattern of alternating light pulses. The circuitry further acquires a plurality of videos of the pattern of alternating light pulses from a plurality of imaging devices. The one or more light-emitting devices are positioned in a field of view of each of the plurality of imaging devices. The circuitry determines a frame in each video of the plurality of videos that includes a specific portion of the pattern of alternating light pulses. The determined frame in each of the plurality of videos corresponds to the same time instant. The circuitry synchronizes the plurality of videos based on the determination.
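Locating the frame that shows a specific portion of the pulse pattern can be sketched as thresholding per-frame brightness into bits and searching for the target sub-pattern. The brightness thresholding is an assumed detection method for illustration.

```python
def locate_sync_frame(frame_brightness, pattern, on_threshold=128):
    """Find the first frame index where the captured on/off light pulses
    match a specific sub-pattern (e.g. a unique sync word).

    frame_brightness: per-frame brightness of the image region containing
                      the light-emitting devices.
    pattern:          the target on/off sub-pattern, e.g. [1, 0, 1, 1].
    Returns the first matching frame index, or -1 if not found.
    """
    bits = [1 if b >= on_threshold else 0 for b in frame_brightness]
    n = len(pattern)
    for i in range(len(bits) - n + 1):
        if bits[i:i + n] == pattern:
            return i
    return -1
```

Because the located frame in every video corresponds to the same time instant, the videos can then be synchronized by offsetting each one so those frame indices coincide.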
Expiring synchronized supplemental content in time-shifted media
Systems and methods are described for providing interactive content contextually related to an occurrence. An illustrative method generates for display, at a media consumption device, a display of the live event, wherein the display of the live event comprises the occurrence, determines a beginning of the occurrence in the display of the live event, in response to determining the beginning of the occurrence in the display of the live event, generates for simultaneous display, with the display of the live event, interactive content related to the occurrence, determines whether the occurrence in the live event has ended in real time, and in response to determining that the occurrence in the live event has ended in real time, ceases the generating for display of the interactive content related to the occurrence.
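The expiry rule reduces to a small real-time check; the sketch below is a minimal illustration with assumed names, where the occurrence's end time is unknown (None) while it is still ongoing.

```python
def show_interactive_content(now_s, occurrence_start_s, occurrence_end_s):
    """The interactive overlay is generated for display once the
    occurrence begins and ceases when the occurrence ends in real time,
    even if the viewer is watching a time-shifted copy of the live event.
    occurrence_end_s is None while the occurrence is still ongoing.
    """
    if now_s < occurrence_start_s:
        return False
    return occurrence_end_s is None or now_s < occurrence_end_s
```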
ADAPTIVE VIDEO SLEW RATE FOR VIDEO DELIVERY
Systems and methods are described for adaptively adjusting a slew rate of a dejitter buffer in a remote device in a distributed access architecture. The slew rate may be adjusted based on measurements of a fullness state of the buffer made over time. The measurements may be used to calculate a frequency offset value between the rate at which data leaves the buffer and the rate at which data enters the buffer, and/or used to calculate a current working depth of the buffer. The adaptive slew rate adjustments may be based on the frequency offset value and/or the current working depth.
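The two quantities can be sketched as follows: the frequency offset from the drift in buffer depth over a measurement interval, and the slew correction from that offset plus a proportional depth-error term. The sign convention and the gain are assumptions for illustration.

```python
def frequency_offset_ppm(depth_t0, depth_t1, interval_s, sample_rate):
    """Estimate the output/input clock offset from buffer-fullness
    measurements. Positive result (assumed convention): data leaves the
    dejitter buffer faster than it enters.
    """
    drained = depth_t0 - depth_t1            # samples lost over the interval
    return 1e6 * drained / (interval_s * sample_rate)

def slew_correction_ppm(offset_ppm, depth, target_depth, gain=0.05):
    """Adaptive slew applied to the output clock: cancel the measured
    offset, plus a proportional term (gain is an assumed tuning value)
    steering the working depth back toward its target."""
    return -offset_ppm + gain * (depth - target_depth)
```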
MEDICAL-USE CONTROL SYSTEM, IMAGE PROCESSING SERVER, IMAGE CONVERTING APPARATUS, AND CONTROL METHOD
A sending-side image converting apparatus sends a transmission image to a network. An image processing server performs image processing on the transmission image and sends the image generated by the image processing to the network. A receiving-side image converting apparatus outputs display images, converted from the transmission image and from the image generated by the image processing, to a display apparatus. Further, a controlled delay time, based on the difference between a delay time in a first transmission path that does not pass through the image processing server and a delay time in a second transmission path that passes through the image processing server, is obtained on the basis of a characteristic of the display apparatus, and the timing at which the images are output to the display apparatus is controlled accordingly. The present technology is applicable to, for example, a medical-use image transmission system.
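At its core, the controlled delay holds the faster direct-path image back by the path-delay difference so both images reach the display apparatus at the same timing. A minimal sketch with assumed names:

```python
def direct_path_delay_ms(t_direct_ms, t_processed_ms):
    """Delay applied to the image on the first (direct) transmission path,
    equal to the path-delay difference, so it is output in step with the
    image that traveled via the image processing server."""
    return max(0.0, t_processed_ms - t_direct_ms)
```

In practice the display apparatus's own latency characteristic would shift both output timings by the same device-specific amount, which is why the abstract derives the delay on the basis of that characteristic.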