Patent classifications
H04N5/931
Digital media player behavioral parameter modification
One embodiment of the present invention is a method for playing a portion of a media work, which includes the steps of: (a) playing the media work; (b) receiving input from a user; (c) analyzing parameters to determine the portion of the media work to play; (d) altering at least a part of the portion; and (e) playing the portion.
System and method of generating subtitling for media
A method for media subtitling is described, wherein subtitles and/or captions for media are first created on a web interface in a first language along with the appropriate synchronization information with respect to the media. The document content may be created via the web interface, or it may be created locally and uploaded to the interface. Subsequent to creation and/or upload of at least a portion of the subtitling, personnel in different locations (e.g., different terminals or different countries) then access the web interface, which includes the first language and the synchronization information, to create foreign/alternative subtitling.
Synchronization of cameras using wireless beacons
Timing metadata is generated and added to captured video to compensate for synchronization error between video captured concurrently from multiple cameras. A wireless beacon including timer data is transmitted from an access point to each station camera. A radio circuit of the station camera synchronizes to the timer of the access point based on timing information in the wireless beacon. An image processor in each station camera includes an image processor timer separate from the radio circuit timer. During video capture, timing metrics are generated indicating deviation between the image processor timer and the radio circuit timer. The timing metrics are stored as metadata and can be used to compensate for synchronization error in post-processing.
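The compensation scheme above reduces to storing, per frame, the deviation between the two timers and subtracting it in post-processing. A minimal sketch in Python, assuming microsecond timers and one deviation metric per frame; the function and field names are illustrative, not the patent's:

```python
def record_timing_metric(radio_timer_us: int, image_timer_us: int) -> dict:
    """Store the deviation between the image-processor timer and the
    beacon-synchronized radio-circuit timer as frame metadata."""
    return {"deviation_us": image_timer_us - radio_timer_us}

def corrected_timestamp(frame_ts_us: int, metric: dict) -> int:
    """Compensate a frame timestamp in post-processing: map an
    image-processor-timer timestamp back to the shared radio timebase."""
    return frame_ts_us - metric["deviation_us"]

# Usage: the image processor timer runs 250 us ahead of the radio timer,
# so a frame stamped at 2_000_250 us really occurred at 2_000_000 us.
m = record_timing_metric(radio_timer_us=1_000_000, image_timer_us=1_000_250)
ts = corrected_timestamp(2_000_250, m)
```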
Self-organized time-synchronization network
Devices, systems and methods are disclosed for creating a self-organized time-synchronization network, enabling accurate synchronization. For example, multiple devices may wirelessly broadcast and receive beacon signals and self-organize into an ad hoc network. The wireless ad hoc network may be optimized as individual devices choose new connections to reduce a number of hops to a source device. Individual devices may include a counter that may increment continuously for timing purposes, and counters may be synchronized between connected devices. For example, a local device may adjust contents of a local counter on the local device based on a drift between the local counter and a remote counter on a remote device. Thus, counters between connected devices may be synchronized, beginning at the source device and propagating through the ad hoc network. Based on the synchronization, the devices may offer additional functionality and/or more sophisticated applications.
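The drift-based counter adjustment can be pictured with a toy Python model in which the correction propagates hop by hop outward from the source device. The device names and hop list are invented for illustration, and the patent's actual adjustment may be gradual rather than the one-shot correction shown here:

```python
def synchronize(counters: dict, links: list) -> dict:
    """Propagate the source counter through the ad hoc network: each device
    adjusts its local counter by the drift measured against its upstream
    neighbor. `links` is ordered outward from the source device."""
    synced = dict(counters)
    for upstream, local in links:
        drift = synced[upstream] - synced[local]  # observed drift vs. upstream
        synced[local] += drift                    # apply the correction
    return synced

# Usage: src → a → b; a runs 10 counts behind, b runs 12 counts ahead.
counters = {"src": 1000, "a": 990, "b": 1012}
links = [("src", "a"), ("a", "b")]
synced = synchronize(counters, links)  # all counters converge to 1000
```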
System and methods for recording a compressed video and audio stream
A system for recording a compressed video-audio stream includes a decoder for decoding the video and audio packets of the stream, a multimedia recorder for recording the video and audio portions of the stream, and a video frame editor. In one embodiment, the multimedia recorder receives and ignores initial delta frames of the video portion of the stream while buffering the received audio portion until a first key frame arrives and is buffered and decoded. Upon receiving a command to record, the system writes a copy of the key frame at a predefined interval, the first interval corresponding with the start of the recording of the audio portion of the stream. The write interval is repeated successively until a next key frame arrives, whereupon the video and audio are recorded as received.
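The key-frame repetition scheme can be sketched as a toy Python model. The packet tuples, and the simplification of repeating the held key frame once per delta-frame slot rather than on a timed interval, are assumptions for illustration, not the patented implementation:

```python
def assemble(packets):
    """Toy model of the recording scheme: the first key frame is decoded
    and held; a copy of it is then written in place of each delta frame
    until the next key frame arrives, after which video is recorded as
    received. Audio is always recorded as received."""
    out, held_key, live = [], None, False
    for kind, payload in packets:
        if kind == "audio":
            out.append(("audio", payload))
        elif kind == "key":
            if held_key is None:
                held_key = payload               # first key frame: hold it
            else:
                live = True                      # next key frame: pass through
            out.append(("video", payload))
        elif kind == "delta":
            if live:
                out.append(("video", payload))
            elif held_key is not None:
                out.append(("video", held_key))  # repeat the held key frame
            # delta frames arriving before any key frame are ignored

    return out
```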
Video editing
A method, an apparatus and a computer program are provided. The method comprises: causing a computing device to enter a video editing mode in which first video content, captured by a first camera in a location, is played back on a display of the computing device and included in a video edit; and causing the display to overlay at least one visual indicator on the first video content, the at least one visual indicator indicating that second video content, captured by a second camera at the location, is available for inclusion in the video edit.
Method and apparatus for adapting audio delays to picture frame rates
To synchronize sound information with corresponding picture information for digital cinema compositions at different frame rates in a play list during play out of the digital cinema compositions, associated audio latency settings are first established for the corresponding picture information of the digital cinema compositions in the play list in accordance with the digital cinema composition frame rates. The timing between the sound information and the picture information is then adjusted during play out of the digital cinema compositions in accordance with the associated audio latency settings for the corresponding digital cinema composition frame rates.
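In essence, the method keeps an audio-latency setting per composition frame rate and applies it at play-out. A hedged Python sketch; the table, its millisecond values, and the function names are invented for illustration and are not the patent's actual settings:

```python
# Illustrative latency settings, keyed by composition frame rate (fps).
LATENCY_MS_BY_FPS = {24: 80, 25: 75, 48: 40, 60: 33}

def audio_offset_ms(frame_rate: int, default_ms: int = 0) -> int:
    """Look up the audio latency setting established for this frame rate."""
    return LATENCY_MS_BY_FPS.get(frame_rate, default_ms)

def schedule_audio(picture_start_ms: int, frame_rate: int) -> int:
    """Delay audio play-out so it lines up with the picture information of
    the composition currently playing at this frame rate."""
    return picture_start_ms + audio_offset_ms(frame_rate)

# Usage: a 24 fps composition starting at t = 1000 ms plays audio at 1080 ms.
t_audio = schedule_audio(1000, 24)
```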
Signature device and signature method
A signature device includes a storage unit configured to store moving image data, and a processor. The processor is configured to extract original metadata from the moving image data for the image data of each of a plurality of images forming the moving image data, the original metadata including location data of the image data and identification data of the moving image data; to encode the image data of each of the images into still image data in accordance with an image format; to write the still image data into a first area and the extracted original metadata into a second area, the first area and the second area being included in a storage area of a still image data file in which the still image data is filed; and to generate summary data for the still image data file.
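The two-area file layout and summary generation can be sketched in Python. The dictionary layout, field names, and the choice of SHA-256 as the summary data are assumptions for illustration only, not the patent's file format:

```python
import hashlib
import json

def build_still_image_file(still_image_bytes: bytes, metadata: dict) -> dict:
    """Place the encoded still image data in a first area and the extracted
    original metadata in a second area of the same file object, then
    generate summary data over both areas (here, a SHA-256 digest)."""
    file_obj = {
        "first_area": still_image_bytes.hex(),  # encoded still image data
        "second_area": metadata,                # original metadata
    }
    blob = json.dumps(file_obj, sort_keys=True).encode()
    file_obj["summary"] = hashlib.sha256(blob).hexdigest()
    return file_obj

# Usage: one frame's JPEG-like bytes plus its location and source-video id.
f = build_still_image_file(b"\xff\xd8\xff", {"location": 3, "video_id": "clip-01"})
```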