Patent classifications
H04N5/9202
Method and system for synchronously reproducing multimedia multi-information
Disclosed are a method and system for synchronously reproducing multimedia multi-information. The method combines a plurality of relevant files, or a plurality of information streams having an associated information relationship, using a multi-information modulation unit, and then synchronously reproduces the relevant files or information streams using a dedicated player capable of synchronously reproducing and playing back the multi-information. The multi-information synchronous recording step of the method inserts non-audio/video information into an audio/video stream, before or after compression, or into a file thereof, using the multi-information modulation unit; that is, additional information blocks carrying the non-audio/video information are embedded into the necessary video frames or audio frames, and/or additional information frames carrying the non-audio/video information are created or inserted between the necessary video frames and audio frames.
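The embedding step described above can be sketched as a minimal illustration: additional information frames are interleaved between audio/video frames at regular intervals. The frame dictionaries, field names, and interval are hypothetical assumptions for illustration, not the patent's actual format.

```python
# Hedged sketch: interleave non-audio/video "additional information frames"
# between A/V frames. All structures here are illustrative assumptions.

def multiplex(av_frames, info_blocks, every=2):
    """Insert one info frame after every `every` A/V frames."""
    out = []
    info = iter(info_blocks)
    for i, frame in enumerate(av_frames, start=1):
        out.append(frame)
        if i % every == 0:
            block = next(info, None)
            if block is not None:
                # hypothetical wrapper for the non-A/V payload
                out.append({"type": "info", "payload": block})
    return out

stream = multiplex(
    [{"type": "video", "pts": t} for t in range(4)],
    ["gps:31.2,121.5", "sensor:ok"],
)
# info frames now sit between the video frames in `stream`
```

A synchronizing player would then demultiplex by filtering on the frame type, reproducing the non-A/V payloads in step with the frames they were inserted among.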
METHODS AND SYSTEMS OF VIDEO PROCESSING
A method of processing a video includes capturing a first set of video data at a first definition, transmitting the first set of video data at a second definition lower than the first definition wirelessly to a user terminal, receiving a video edit request wirelessly from the user terminal, and finding video corresponding to edited video data described by the video edit request, thereby forming a second set of video data at a third definition. The video edit request is formed from editing the received first set of video data at the second definition at the user terminal.
MEDIA MESSAGE CREATION WITH AUTOMATIC TITLING
In some implementations, a user device can be configured to create media messages with automatic titling. For example, a user can create a media messaging project that includes multiple video clips. The video clips can be generated based on video data and/or audio data captured by the user device and/or based on pre-recorded video data and/or audio data obtained from various storage locations. When the user device captures the audio data for a clip, the user device can obtain a speech-to-text transcription of the audio data in near real time and present the transcription data (e.g., text) overlaid on the video data while the video data is being captured or presented by the user device.
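The overlay behavior described above can be sketched as picking, for each frame time, the transcription text received so far. The word/timestamp transcript format is an illustrative assumption, not the described speech-to-text API.

```python
# Hedged sketch: choose the caption text shown over a video frame from a
# near-real-time transcript of (word, timestamp) pairs. Format is assumed.

def caption_at(transcript, t):
    """Return the overlay text for a frame at time t (seconds)."""
    return " ".join(word for word, ts in transcript if ts <= t)

transcript = [("hello", 0.2), ("from", 0.6), ("the", 0.9), ("beach", 1.3)]
caption_at(transcript, 1.0)   # -> "hello from the"
```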
Methods and systems of video processing
Methods and systems are provided for video processing. Video may be captured using an image capture device at a first definition. The image capture device may optionally be, or may be on board, an aerial vehicle, such as an unmanned aerial vehicle. A first set of video data may be transmitted to a user terminal at a second definition, which may be less than the first definition. A user may interact with the user terminal to edit the video and generate a video edit request. The video edit request may be transmitted to the image capture device, which may accordingly produce a second set of video data in accordance with the video edit request, at a third definition. The third definition may be greater than the second definition.
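The proxy-edit round trip described above can be sketched as follows: edits are made against the low-definition copy, and the resulting edit request (clip in/out times) is applied to the high-definition source to produce the second set of video data. The request structure and frame representation are illustrative assumptions, not the patent's protocol.

```python
# Hedged sketch: apply a video edit request (time ranges chosen on a
# low-definition proxy) to the high-definition source. Structures assumed.

def apply_edit_request(high_def_frames, edit_request, fps=30):
    """Select high-definition frames matching the requested time ranges."""
    selected = []
    for clip in edit_request["clips"]:
        start = int(clip["start_s"] * fps)
        end = int(clip["end_s"] * fps)
        selected.extend(high_def_frames[start:end])
    return selected

source = list(range(300))          # 10 s of frame indices at 30 fps
request = {"clips": [{"start_s": 1.0, "end_s": 2.0}]}
second_set = apply_edit_request(source, request)
# second_set holds frames 30..59 of the high-definition source
```

Only the requested segments need to be produced at the third (higher) definition, which is what makes editing over a low-bandwidth wireless link practical.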
Information processor, information processing method, and program
There is provided an information processor including circuitry configured to identify a part of a moving image in response to an audible sound input of a user, wherein the moving image is generated by a capturing of an imaging unit which is attached to the user.
Audio routing for audio-video recording
Systems and methods for routing audio for audio-video recordings allow a user to record desired audio with captured video at the time the video is being captured. Audio from one or more sources may be routed to the video capture application and recorded with the video. In one or more examples, audio may be routed from another application, e.g., an audio playback application, running on the same device as the video capture application. In another example, audio may be received from a remote device through a wireless connection. Multiple streams of audio content may be mixed together prior to storing with video. The audio, upon reception, may then be routed to the video capture application for recordation. An audio progression bar may also be provided to indicate duration and elapsed time information associated with the audio being recorded.
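The mixing step described above can be sketched as a per-sample sum of the routed streams before they are stored with the video. Samples as plain floats and clipping to [-1.0, 1.0] are assumptions for illustration, not the patent's mixing method.

```python
# Hedged sketch: mix multiple routed audio streams into one track before
# recording it with the captured video. Sample format and clipping assumed.

def mix(streams):
    """Sum per-sample across streams, clipping to [-1.0, 1.0]."""
    length = max(len(s) for s in streams)
    mixed = []
    for i in range(length):
        total = sum(s[i] for s in streams if i < len(s))
        mixed.append(max(-1.0, min(1.0, total)))
    return mixed

playback = [0.5, 0.5, 0.5]    # e.g. from an audio playback application
mic = [0.2, 0.9, -0.1]        # e.g. from a remote device
track = mix([playback, mic])  # second sample clips at 1.0
```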
VIDEO RECORDING METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM
Provided are a video-recording method and apparatus, and an electronic device. The method comprises: during playback of a target video, obtaining a plurality of first pictures by capturing an editing process carried out by a user on an edit panel, and obtaining a plurality of second pictures by performing picture conversion on the played target video multiple times; then obtaining a plurality of frame images by superimposing each of the first pictures onto the corresponding second picture; and then generating a recorded video segment from a video stream synthesized from the plurality of frame images together with the audio stream of the target video, thereby satisfying users' personalized requirements during video recording.
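The superimposition step above can be sketched as compositing each first picture (the user's edit-panel overlay) onto the matching second picture (a converted frame of the playing video). The RGB-tuple pixels, the `None`-means-transparent convention, and the alpha blend are assumptions for illustration, not the patent's compositing method.

```python
# Hedged sketch: blend an edit-panel overlay onto a converted video frame
# to form one frame image. Pixel format and blend rule are assumed.

def superimpose(first, second, alpha=0.5):
    """Blend overlay pixels onto base pixels; None in the overlay is transparent."""
    return [
        base if over is None else tuple(
            round(alpha * o + (1 - alpha) * b) for o, b in zip(over, base)
        )
        for over, base in zip(first, second)
    ]

base_frame = [(100, 100, 100), (200, 200, 200)]
overlay = [None, (0, 0, 0)]          # user drew only on the second pixel
frame = superimpose(overlay, base_frame)
# -> [(100, 100, 100), (100, 100, 100)]
```

Repeating this per frame yields the sequence of frame images that is synthesized into the recorded video stream.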
Apparatus and method for video-audio processing, and program for separating an object sound corresponding to a selected video object
The present technique relates to an apparatus, a method, and a program for video-audio processing, each of which enables a desired object sound to be separated more simply and accurately. A video-audio processing apparatus includes a display control portion configured to cause a video object based on a video signal to be displayed; an object selecting portion configured to select a predetermined video object from the displayed video object or objects; and an extraction portion configured to extract, as an audio object signal, the audio signal of the video object selected by the object selecting portion. The present technique can be applied to a video-audio processing apparatus.
Digital camera with audio, visual and motion analysis
A digital camera with audio, visual and motion analysis includes a digital processor, an input processing system, and one or more imaging sensors, sound sensors, and motion sensors. In a non-limiting embodiment, the input processing system includes non-transitory computer-readable media with code segments, executable by the digital processor, for real-time audio, visual and motion analysis that develops a digital model of the digital camera's ambient environment from data derived from the imaging sensor(s), sound sensor(s) and motion sensor(s).