Patent classifications
G11B27/3036
Methods and apparatus for ordered serial synchronization of multimedia streams upon sensor changes
An apparatus includes a processor with first and second input ports and a memory operably coupled to the processor. The processor can detect streams of media samples at the input ports and determine, in response to the detection of the streams of media samples, a capture start time. The processor can also capture a first frame of a first stream of media samples beginning at the capture start time, and a first frame of a second stream of media samples beginning at a first time subsequent to the capture start time. The processor can also calculate a relative offset time based on the capture start time, the first time, and a rate associated with the second stream of media samples, and store, in the memory, an indication of an association between the captured first frame of the second stream of media samples and the relative offset time.
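The relative-offset computation described above can be sketched as follows. This is an illustrative reading of the abstract, not the patent's method: the function names, and the choice to snap the elapsed time to the second stream's frame grid using its rate, are assumptions.

```python
# Hypothetical sketch: compute the offset of the second stream's first frame
# relative to the capture start time, quantized by the stream's frame rate.

def relative_offset(capture_start: float, first_time: float, rate_hz: float) -> float:
    """Offset (seconds) of the second stream's first frame, snapped to its frame grid."""
    elapsed = first_time - capture_start      # seconds after capture start
    frame_period = 1.0 / rate_hz              # seconds per frame of the second stream
    frames_elapsed = round(elapsed / frame_period)
    return frames_elapsed * frame_period

# e.g. a 30 fps stream whose first frame arrives 0.034 s after capture start
offset = relative_offset(capture_start=10.0, first_time=10.034, rate_hz=30.0)
```

Storing this offset alongside the captured frame, as the abstract describes, lets a player later align the two streams without a shared hardware clock.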
System and method for enhanced video image recognition using motion sensors
Disclosed are systems and methods for improving image recognition by using information from sensor data. In one embodiment, the method comprises receiving one or more sensor records, the sensor records representing timestamped sensor data collected by a sensor recording device; selecting an event based on the sensor records; identifying a time associated with the event; retrieving a plurality of timestamped video frames; synchronizing the sensor records and the video frames, wherein synchronizing the sensor records and the video frames comprises synchronizing the timestamped sensor data with individual frames of the timestamped video frames according to a common timeframe; and selecting a subset of video frames from the plurality of timestamped video frames based on the selected event.
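A minimal sketch of the two operations the abstract combines, assuming frames and sensor records are already expressed on a common timeframe; the data layout (timestamp, id) tuples, the window size, and the nearest-frame pairing rule are all assumptions for illustration.

```python
# Illustrative only: align timestamped sensor data with video frames on a
# common clock, then keep the frames near a selected event time.

def frames_near_event(frames, event_time, window=1.0):
    """frames: list of (timestamp, frame_id); return those within +/- window seconds."""
    return [f for f in frames if abs(f[0] - event_time) <= window]

def nearest_frame(frames, sensor_ts):
    """Pair one sensor-record timestamp with the closest video frame."""
    return min(frames, key=lambda f: abs(f[0] - sensor_ts))

frames = [(0.0, "f0"), (0.5, "f1"), (1.0, "f2"), (1.5, "f3"), (2.0, "f4")]
subset = frames_near_event(frames, event_time=1.2, window=0.5)  # keeps f2 and f3
```

Running image recognition only on `subset`, rather than every frame, is the efficiency gain the abstract is after.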
Accessibility-compatible control elements for media players
A computing device includes: an input assembly; a display; and a controller configured to: execute a browser application to control the display to render a page containing a media presentation element to present multimedia content received at the computing device, a primary seek element selectable to control a playback position in the media presentation element, wherein the primary seek element includes an attribute hiding the primary seek element from a screen reader process of the computing device, and an auxiliary seek element selectable to control the playback position in the media presentation element; receive input data indicating a selection of the auxiliary seek element generated via execution of the screen reader process; and in response, control the display to adjust the playback position in the media presentation element according to the input data.
IMAGE ACQUISITION SYSTEM AND METHOD
A method of capturing free viewpoint content at a location includes recording video on each of a plurality of portable video recording devices at the location; each portable video recording device detecting a wireless synchronisation signal transmitted at the location; and each portable video recording device periodically adding a timestamp to its respective recorded video; where the timestamp is responsive to the detected wireless synchronisation signal, thereby enabling synchronisation of a plurality of recorded videos responsive to the timestamps.
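One way the beacon-relative timestamping could work, sketched under assumptions (the function name and the linear clock mapping are not from the abstract): each device notes its local clock reading when the wireless synchronisation signal arrives, then expresses subsequent video timestamps on the shared beacon timeline.

```python
# Hypothetical sketch: map a device's local clock onto the shared timeline
# defined by the wireless synchronisation signal.

def shared_timestamp(local_clock: float, beacon_local: float, beacon_shared: float) -> float:
    """Local clock reading -> shared-timeline timestamp.

    beacon_local:  this device's local clock when the sync signal was detected
    beacon_shared: the timeline value the sync signal represents
    """
    return beacon_shared + (local_clock - beacon_local)

# Device A detected the beacon at local time 10.0; the beacon marks shared time 1000.0.
ts = shared_timestamp(local_clock=12.5, beacon_local=10.0, beacon_shared=1000.0)
```

Because every device stamps against the same beacon, clips recorded on independent clocks can later be aligned for free-viewpoint reconstruction.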
Dynamic pairing of device data based on proximity for event data retrieval
A system for dynamic pairing includes an interface and a processor. The interface is configured to receive an indication to identify paired event data based on an event time and an event location. The processor is configured to determine a paired video data region; retrieve a subset of video data stored in a pairing database or from a vehicle event recorder; and provide the subset of the video data as the paired event data. The paired video data region includes locations spatially near the event location and times temporally near the event time. The pairing database or the vehicle event recorder stores video data that is retrievable based on associated time data and/or location data. Video data is placed in the subset in response to its associated time and location data falling within the paired video data region.
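The "paired video data region" membership test can be illustrated as a joint time/space filter. This is a sketch under assumptions: the thresholds, the flat (x, y) coordinates in metres, and the Euclidean distance are illustrative choices, not the patent's definition.

```python
# Hypothetical illustration: a clip belongs to the paired video data region
# when its time is near the event time AND its location is near the event location.

def in_paired_region(clip_time, clip_loc, event_time, event_loc,
                     max_dt=60.0, max_dist=100.0):
    """clip_loc / event_loc are (x, y) in metres on a local grid (an assumption)."""
    dt = abs(clip_time - event_time)
    dx = clip_loc[0] - event_loc[0]
    dy = clip_loc[1] - event_loc[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return dt <= max_dt and dist <= max_dist

clips = [(100.0, (10, 0)), (500.0, (5, 5)), (120.0, (500, 0))]
subset = [c for c in clips if in_paired_region(c[0], c[1], 90.0, (0, 0))]
```

Only the first clip survives here: the second is temporally distant, the third spatially distant, matching the abstract's requirement that both time and location data fall within the region.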
RECORDING DEVICE, RECORDING METHOD, REPRODUCING DEVICE, REPRODUCING METHOD, AND RECORDING/REPRODUCING DEVICE
The invention enables a viewer to readily and accurately reach a desired image/audio playback start position.
A time code is added to moving image data, obtained by imaging a person who is explaining while writing in a description area, and to the corresponding audio data, and both are recorded in a recording unit. The moving image data is processed to identify the written portions of the description area, and index image data is generated that displays each identified written portion as an index description; this index image data is also recorded in the recording unit. Within the index image data, the time code value corresponding to the moment of writing is attached as a timestamp to each pixel constituting an index description.
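A minimal sketch of the per-pixel timestamp association, under an assumed data layout (a pixel-to-timecode mapping; the patent does not specify the structure): each stroke written in the description area records, for every pixel it covers, the time code at which it was drawn, so selecting a pixel of the index later yields a playback start position.

```python
# Illustrative only: associate each pixel of an index description with the
# time code (here, a frame count) at which that pixel was written.

index_timestamps = {}  # (x, y) pixel -> time code value

def record_stroke(pixels, time_code):
    """Stamp every pixel of a newly written stroke with the current time code."""
    for xy in pixels:
        index_timestamps[xy] = time_code

record_stroke([(3, 4), (3, 5)], time_code=1500)
start = index_timestamps[(3, 5)]  # seek playback to this time code
```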
Toolboxes, systems, kits and methods relating to supplying precisely timed, synchronized music
Disclosed are systems, devices, and methods that provide digital audio toolboxes, music kits, digital audio tracks, and the like, which supply digital audio tracks such as music for combination and synchronization with pre-existing digital media tracks. The toolkits provide users with visual tracks in media to create, provide, and/or synchronize precisely timed tracks used in audio media productions, or otherwise provide multiple, precisely timed and synced tracks in which a music/sound-design track from the toolkit is added to a pre-made media track such as visual footage.
Modular software based video production server, method for operating the video production server and distributed video production system
A video production server comprising at least one processor and a storage is suggested. Software modules composed of executable program code are loaded into a working memory of the at least one processor. Each software module, when executed by the at least one processor, provides an elementary service. A concatenation of elementary services provides for a functionality involving processing of video and/or audio signals needed for producing a broadcast program. The video production server includes a set of software components that runs on conventional hardware. Each functionality of the video production server is achieved by using a specific piece of software that is assembled from reusable functional software blocks and that can run on any compatible hardware platform. Furthermore, a method for operating the video production server and a distributed video production system including the video production server is suggested.