Patent classifications
H04N21/80
Viewer-authored content acquisition and management system for in-the-moment broadcast in conjunction with media programs
A method, apparatus, and system for providing viewer-derived content for broadcast presentation in conjunction with a broadcast of a media program by the program's provider is disclosed. The disclosed system and method (1) simplifies the process by which viewers provide viewer-authored media to broadcasters while minimizing the data transmission requirements between portable viewer devices and the broadcaster, (2) obtains advance approval for broadcasters to use that viewer-generated content to generate and disseminate viewer-authored content, (3) provides for management of viewer-generated content, and (4) integrates with social networks that can be used to at least preliminarily assess the popularity and suitability of the viewer-generated content for broadcast to other viewers.
Method and apparatus for video composition, synchronization, and control
A system includes a smart-phone processor configured to receive a video recording feed from a camera. The processor is also configured to receive a vehicle data feed from a vehicle connected to the processor. The processor is further configured to convert the vehicle data feed into images. The processor is additionally configured to add the images to the video recording feed in real-time and save a combined feed including the video and images resulting from the adding.
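The composition step this abstract describes can be sketched roughly as follows: telemetry samples from the vehicle data feed are rendered as overlay "images" and attached to the matching video frames to form the combined feed. All names here (`render_overlay`, `combine_feeds`) and the frame/telemetry record layout are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: pair each video frame with the most recent vehicle
# telemetry sample, rendered as a simple text overlay.

def render_overlay(sample):
    """Convert one telemetry sample into a minimal overlay description."""
    return {"text": f"{sample['speed_kmh']} km/h | RPM {sample['rpm']}"}

def combine_feeds(video_frames, telemetry):
    """Attach the latest telemetry overlay (by timestamp) to each frame."""
    combined = []
    for frame in video_frames:
        # pick the most recent telemetry sample at or before the frame time
        latest = max(
            (s for s in telemetry if s["t"] <= frame["t"]),
            key=lambda s: s["t"],
            default=None,
        )
        overlay = render_overlay(latest) if latest else None
        combined.append({"t": frame["t"], "pixels": frame["pixels"], "overlay": overlay})
    return combined
```

A real implementation would rasterize the overlay into the frame pixels in real time; the timestamp-matching logic is the part the sketch focuses on.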
Methods, Systems, and Products for Indexing Scenes in Digital Media
Methods, systems, and products index digital scenes in digital media. A uniform resource locator is assigned to each different digital scene within the digital media. The uniform resource locator uniquely identifies a resource from which each different digital scene may be retrieved. Individual scenes may thus be retrieved separately, conserving bandwidth and memory.
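The per-scene indexing described here amounts to a mapping from scene boundaries to unique retrieval URLs. A minimal sketch, assuming a hypothetical `base/media_id/scene/N` URL layout (the patent does not specify one):

```python
# Illustrative sketch: assign each scene of a media item its own URL so
# individual scenes can be retrieved without fetching the whole file.

def index_scenes(base_url, media_id, scene_starts):
    """Map each scene's start time (seconds) to a unique retrieval URL."""
    return {
        start: f"{base_url}/{media_id}/scene/{i}"
        for i, start in enumerate(scene_starts)
    }
```

For example, `index_scenes("https://cdn.example", "ep42", [0, 93, 310])` yields one URL per scene, keyed by start time.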
METHOD, APPARATUS AND SYSTEM FOR DISCOVERING AND DISPLAYING INFORMATION RELATED TO VIDEO CONTENT
Methods, apparatus and systems for processing and tagging at least a portion of a video with metadata are provided herein. In some embodiments, a method for processing and tagging at least a portion of a video with metadata includes extracting a plurality of frames from the video, generating a fingerprint for each frame of the plurality of frames, or for a set of frames of the plurality of frames, determining contextual data within at least one frame or set of frames, associating the generated fingerprint of each frame or set of frames with the determined contextual data, and storing the association of the fingerprint of each frame or set of frames and the contextual data.
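The fingerprint-to-metadata association in this abstract can be sketched as a hash index. Here the "fingerprint" is a plain SHA-256 over raw frame bytes, a stand-in for whatever perceptual hashing the actual system would use; the function names are illustrative.

```python
# Sketch: fingerprint frames, associate fingerprints with contextual data,
# and look contextual data up for a re-seen frame.
import hashlib

def fingerprint(frame_bytes):
    """Stand-in fingerprint: exact hash of the frame's raw bytes."""
    return hashlib.sha256(frame_bytes).hexdigest()

def tag_frames(frames, contextual_data):
    """Store the association of each frame's fingerprint with its context."""
    return {fingerprint(f): ctx for f, ctx in zip(frames, contextual_data)}

def lookup(index, frame_bytes):
    """Retrieve stored contextual data for a frame, or None if unknown."""
    return index.get(fingerprint(frame_bytes))
```

An exact hash only matches bit-identical frames; a production system would use a perceptual fingerprint robust to re-encoding, which is why the abstract also allows fingerprinting sets of frames.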
SYSTEM AND METHOD FOR MATCHING AUDIO CONTENT TO VIRTUAL REALITY VISUAL CONTENT
A system and method for matching audio content to virtual reality visual content. The method includes analyzing received visual content and received metadata to determine an optimal audio source associated with the received visual content; configuring the optimal audio source to capture audio content; synthesizing the captured audio content with the received visual content; and providing the synthesized captured audio content and received visual content to a virtual reality (VR) device.
PREDICTION MODEL TRAINING VIA LIVE STREAM CONCEPT ASSOCIATION
In certain embodiments, training of a neural network or other prediction model may be facilitated via live stream concept association. In some embodiments, a live video stream may be loaded on a user interface for presentation to a user. A user selection related to a frame of the live video stream may be received via the user interface during the presentation of the live video stream on the user interface, where the user selection indicates a presence of a concept in the frame of the live video stream. In response to the user selection related to the frame, an association of at least a portion of the frame of the live video stream and the concept may be generated, and the neural network or other prediction model may be trained based on the association of at least the portion of the frame with the concept.
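The selection-to-training-pair flow described above can be sketched simply: a user click on a frame region labels that region with a concept, and the resulting (frame-portion, concept) pairs become training examples. The crop logic below treats a frame as a nested list of pixels; all names are illustrative assumptions.

```python
# Sketch: turn live-stream user selections into labeled training pairs.

def crop(frame, region):
    """Extract a rectangular region (x0, y0, x1, y1) from a 2-D frame."""
    x0, y0, x1, y1 = region
    return [row[x0:x1] for row in frame[y0:y1]]

def associate(selections):
    """Build (frame-portion, concept) examples from user selections,
    where each selection is a (frame, region, concept) tuple."""
    return [(crop(frame, region), concept) for frame, region, concept in selections]
```

The pairs produced here would then be fed to whatever trainer updates the neural network or other prediction model.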
SYSTEM AND METHOD FOR CREATING METADATA MODEL TO IMPROVE MULTI-CAMERA PRODUCTION
A system and method are provided for using camera metadata from multiple cameras in a live environment to improve video production workflow. Each camera in the system captures media content of a live scene and stores camera metadata that includes its lens, position, and gyro settings. This metadata can then be provided to other cameras in the system and/or to a controller that can generate a 3D metadata feed from the camera metadata. Moreover, based on the metadata feed, control instructions can be generated and transmitted to one or more of the cameras to control camera operations for capturing the media content.
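The metadata-driven control loop this abstract outlines can be sketched as: cameras report metadata, a controller merges the reports into one feed, and control instructions are derived from the merged feed. The instruction format below (matching outlier cameras to the median focal length) is purely an assumption for the example.

```python
# Sketch: merge per-camera metadata into one feed, then derive control
# instructions for cameras whose settings deviate from the group.
import statistics

def build_metadata_feed(reports):
    """Merge per-camera metadata reports into one feed keyed by camera id."""
    return {r["camera_id"]: r for r in reports}

def control_instructions(feed):
    """Instruct outlier cameras to match the group's median focal length."""
    target = statistics.median(r["focal_mm"] for r in feed.values())
    return {
        cid: {"set_focal_mm": target}
        for cid, r in feed.items()
        if r["focal_mm"] != target
    }
```

A real controller would work over the full lens/position/gyro state to build the 3D metadata feed; the sketch shows only the report-aggregate-instruct shape of the loop.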