Patent classifications
G11B27/19
Comprehensive video collection and storage
A video collection system comprising a body-wearable video camera, a camera dock, and a video collection manager. The camera dock is configured to interface with the body-wearable video camera having a camera-memory element. The camera dock includes a dock-memory element configured to receive and store video data from the camera-memory element. The video collection manager is communicatively coupled with the camera dock. The camera dock sends at least a portion of the video data to the video collection manager.
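The dock-and-manager pipeline described above can be sketched minimally as follows. This is an illustrative model only, not the patented implementation: the class and method names (`CameraDock`, `VideoCollectionManager`, `ingest`, `upload`) and the list-based memory elements are assumptions introduced for the sketch.

```python
class VideoCollectionManager:
    """Receives video data forwarded by one or more camera docks."""
    def __init__(self):
        self.store = []

    def receive(self, data):
        self.store.extend(data)


class CameraDock:
    """Interfaces with a docked camera and forwards stored video data."""
    def __init__(self, manager):
        self.dock_memory = []    # dock-memory element
        self.manager = manager   # communicatively coupled video collection manager

    def ingest(self, camera_memory):
        # Receive and store video data from the camera-memory element.
        self.dock_memory.extend(camera_memory)

    def upload(self, portion=1.0):
        # Send at least a portion of the stored video data to the manager.
        n = max(1, int(len(self.dock_memory) * portion))
        self.manager.receive(self.dock_memory[:n])
```

For example, a dock holding two clips and uploading half its contents would forward only the first clip to the manager.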
Data synchronisation
The present invention relates to a method and apparatus for synchronising audio and video data. More particularly, the present invention relates to a loop-based audio-visual mixing apparatus and method for synchronising a plurality of videos and their corresponding audio streams to create audio-visual compositions. According to one aspect, there is provided a method for creating a synchronised lineal sequence from multiple inputs of audio and video data, comprising the steps of: providing a first input comprising audio and video data; providing one or more subsequent inputs, each comprising audio and video data; determining at least one rhythm metric unit for each input; and queueing the or each subsequent input such that it is triggered at the beginning of the next said rhythm metric unit of a determined input.
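The queueing step above amounts to snapping each subsequent input's trigger time to the next rhythm-metric-unit boundary. A minimal sketch, assuming the rhythm metric unit is a bar derived from a tempo in beats per minute (the function name and parameters are illustrative, not from the patent):

```python
import math

def next_trigger_time(current_time, bpm, beats_per_unit=4):
    """Return the time of the next rhythm-metric-unit boundary at or
    after current_time, e.g. the start of the next bar."""
    unit = beats_per_unit * 60.0 / bpm   # duration of one unit in seconds
    return math.ceil(current_time / unit) * unit
```

At 120 BPM with four beats per bar, a unit lasts 2 s, so an input queued at t = 3.5 s would be triggered at the next bar boundary, t = 4.0 s.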
VIDEO INFORMATION GENERATION METHOD, APPARATUS, AND SYSTEM AND STORAGE MEDIUM
This application provides a video information generation method, apparatus, and system and a storage medium. The video information generation method includes: obtaining a plurality of temporally consecutive target images; obtaining first information of a target object in the target images; and associating first information of a same target object located in different target images to generate target information. In the video information generation method provided in this application, the first information of the target object in the target images is obtained, and the first information of the same target object located in different target images is associated. In this way, target information with a relatively small amount of data can be obtained, thereby improving the efficiency with which a user remotely views the video.
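The association step can be sketched as grouping per-frame detections of the same object into a compact track. This is a minimal illustration under the assumption that "first information" is a per-frame record keyed by an object identifier; the data layout is not specified by the abstract.

```python
def associate(detections):
    """Group per-frame object records into per-object tracks.

    detections: list of (frame_index, object_id, bbox) tuples, one per
    appearance of a target object in a target image.
    Returns a dict mapping object_id -> list of (frame_index, bbox),
    i.e. the associated 'target information'.
    """
    tracks = {}
    for frame, obj_id, bbox in detections:
        tracks.setdefault(obj_id, []).append((frame, bbox))
    return tracks
```

The resulting per-object tracks carry far less data than the frames themselves, which is the efficiency gain the abstract attributes to the method.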
Independent content tagging of media files
Methods for generating meta-tagged media files, in which features of an event recorded in the media file are tagged to identify content, are disclosed herein. The methods include independent and simultaneous generation of a media file and meta-tags, and the combination of the media file and the meta-tags based on a correlation of device times to generate the meta-tagged media file.
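The device-time correlation described above can be sketched as mapping each tag's device timestamp onto the media file's timeline via a known clock offset. A minimal sketch: the function signature and the single-offset clock model are assumptions, not details from the patent.

```python
def merge_tags(media_start_device_time, clock_offset, tags):
    """Place independently generated meta-tags on the media timeline.

    media_start_device_time: recording start per the media device's clock.
    clock_offset: media-device clock minus tagging-device clock, the
    correlation between the two device times.
    tags: list of (tag_device_time, label) pairs.
    Returns (seconds_into_media, label) pairs, sorted by media time.
    """
    merged = []
    for tag_time, label in tags:
        media_time = (tag_time + clock_offset) - media_start_device_time
        merged.append((media_time, label))
    return sorted(merged)
```

For example, with the recording starting at device time 100 s and a +5 s clock offset, a tag stamped at 98 s on the tagging device lands 3 s into the media file.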
Systems and methods for modifying a segment of an uploaded media file
Systems and techniques for modifying a subsection of uploaded media are presented. An instruction component receives a media file and a media enhancement instruction that includes enhancement data and media interval data for a first segment of the media file. A processing component modifies the first segment of the media file associated with the media interval data based on the enhancement data to generate an edited first segment of the media file. A finalization component generates an edited version of the media file that includes the edited first segment of the media file and at least a second segment of the media file that is not modified based on the enhancement data.
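The modify-then-reassemble flow above can be sketched by treating the media file as a sequence of samples, applying the enhancement only to the interval named by the media interval data, and concatenating the edited first segment with the untouched remainder. The sample-list representation and function names are illustrative assumptions.

```python
def apply_enhancement(samples, start, end, enhance):
    """Edit only the segment [start, end) of a media file.

    samples: the media file as a flat sequence of samples.
    start, end: the media interval data for the first segment.
    enhance: callable implementing the enhancement data, applied
    only to the first segment.
    Returns the edited version: edited first segment plus the
    unmodified remaining segments, reassembled in order.
    """
    edited_segment = enhance(samples[start:end])
    return samples[:start] + edited_segment + samples[end:]
```

Only the selected interval passes through the enhancement; everything outside it is copied through byte-for-byte, matching the abstract's distinction between the edited first segment and the unmodified second segment.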