Patent classifications
H04N9/806
Broadcast receiving device
A broadcast receiving device is provided with: a broadcast receiving unit which receives a digital broadcast signal including broadcast program video and application-related information; a storage unit which stores the received broadcast program video and application-related information; a video decoding unit which decodes the video; an application acquisition unit which acquires an application on the basis of location information included in the application-related information; an application execution unit which executes the acquired application; an output unit which is able to output the video; and a control unit for reproducing and decoding the broadcast program video from the storage unit, reproducing the application-related information from the storage unit, acquiring the application on the basis of the location information included in the reproduced application-related information, and executing the acquired application.
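The playback flow described above (reproduce stored video, reproduce stored application-related information, fetch the application from its location, execute it) can be sketched as follows. This is purely illustrative: `storage`, `fetch`, `decode_video`, and `execute_app` are hypothetical stand-ins for the storage unit, application acquisition unit, video decoding unit, and application execution unit; the abstract does not specify any concrete interfaces.

```python
def play_recorded_program(storage, fetch, decode_video, execute_app):
    """Replay a stored broadcast: decode the recorded program video,
    then acquire and run the application referenced by the recorded
    application-related information. All names are assumptions;
    `fetch` abstracts whatever transport serves the location URL."""
    # Reproduce and decode the broadcast program video from storage.
    decode_video(storage['video'])
    # Reproduce the application-related information and read the
    # location information it contains.
    location = storage['app_info']['location']
    # Acquire the application on the basis of that location, then run it.
    execute_app(fetch(location))
```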
Data processing device, data processing method, and data processing system
Provided is a data processing device that includes a sound extracting unit that extracts, on the basis of a predetermined characteristic quantity, one or more sound blocks from sound data corresponding to sound captured within a period in which a plurality of intermittent images was captured, the sound blocks to be reproduced together with video data based on the plurality of images.
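The extraction step might look like the following sketch. The abstract does not name the characteristic quantity, so short-time energy is used here as an assumed proxy; the function names and thresholds are likewise illustrative.

```python
import numpy as np

def extract_sound_blocks(samples, rate, capture_start, capture_end,
                         frame_len=1024, energy_threshold=0.01):
    """Find candidate sound blocks within the image-capture period,
    using short-time energy as a hypothetical characteristic quantity."""
    # Restrict to the period in which the intermittent images were captured.
    segment = samples[int(capture_start * rate):int(capture_end * rate)]
    blocks, start = [], None
    for i in range(0, len(segment) - frame_len + 1, frame_len):
        frame = segment[i:i + frame_len]
        energy = float(np.mean(frame ** 2))
        if energy >= energy_threshold and start is None:
            start = i                      # a sound block begins
        elif energy < energy_threshold and start is not None:
            blocks.append((start, i))      # the block ends
            start = None
    if start is not None:
        blocks.append((start, len(segment)))
    return blocks  # (begin, end) sample offsets within the capture period
```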
Automatically curating video to fit display time
A system is configured to synchronize a first video, a second video, and an audio track. The system analyzes image content associated with the first video and the second video to obtain a first subset of images of the first video and a second subset of images of the second video. The system then determines a music beat of the audio track to be synchronized with one of the first subset of images or the second subset of images. The system then adjusts a framerate of the first subset of images or the second subset of images based on the determined music beat to synchronize the first subset of images or the second subset of images. The first subset of images and the second subset of images may then be combined, which the system then plays back from a designated playback slot along with the audio track.
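The beat-driven framerate adjustment can be reduced to a small calculation: pick a playback rate so that a fixed number of frames of the curated subset spans exactly one beat interval. The frames-per-beat policy and function names below are assumptions, not the patented method.

```python
def framerate_for_beats(bpm, frames_per_beat=8):
    """Return a framerate (fps) such that every `frames_per_beat`-th
    frame of a subset lands on a beat of a track at `bpm` beats/min.
    The fixed frames-per-beat policy is an illustrative assumption."""
    beat_interval = 60.0 / bpm          # seconds between beats
    return frames_per_beat / beat_interval

def clip_duration(num_frames, fps):
    """Playback duration of a subset at the adjusted framerate,
    useful for fitting the clip into a designated playback slot."""
    return num_frames / fps
```

For example, a 120 bpm track with 8 frames per beat yields 16 fps, so a 240-frame subset fills a 15-second slot.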
Method and electronic device for generating multiple point of view video
The present disclosure provides an electronic device for generating a multiple point of view (MPOV) video, and a method thereof. The electronic device obtains a plurality of media contents and identifies a first media content related to a second media content in time and location according to time information, audio information, and location information, the location information including a geographic tag and surrounding signal information. The first media content and the second media content are then provided as relevant media contents for generating an MPOV video of an event, the relevant media contents having been captured from different points of view.
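A relevance test over time, geographic tags, and surrounding signal information might be sketched as below. The record schema (`start`, `end`, `lat`, `lon`, `ssids`), the thresholds, and the use of Wi-Fi SSIDs as the "surrounding signal information" are all assumptions for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two geographic tags."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def are_relevant(a, b, max_gap_s=60.0, max_dist_m=200.0):
    """Decide whether media contents a and b belong to the same event:
    close in time, and close either by geographic tag or by shared
    surrounding signals (hypothetical schema and thresholds)."""
    time_gap = max(a['start'] - b['end'], b['start'] - a['end'], 0.0)
    close_in_time = time_gap <= max_gap_s
    close_in_space = haversine_m(a['lat'], a['lon'], b['lat'], b['lon']) <= max_dist_m
    shared_signals = bool(a['ssids'] & b['ssids'])
    return close_in_time and (close_in_space or shared_signals)
```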
Imaging system
An imaging system includes a sound recording apparatus and an imaging apparatus that is connected to the sound recording apparatus and records, as moving-image sounds, sounds collected by the sound recording apparatus. The imaging apparatus detects whether the sound recording apparatus has been connected thereto. Upon detecting that the sound recording apparatus has been connected to the imaging apparatus, the imaging apparatus sets a sound-recording condition associated with the sound recording apparatus as a sound-recording condition for the imaging apparatus in a sound recording process and records sounds under the sound-recording condition that has been set.
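The connection-driven configuration behavior reads naturally as a small state machine: detect the recorder, look up its associated sound-recording condition, and record under whatever condition is currently set. The condition table, device identifiers, and attribute names below are assumptions.

```python
class ImagingApparatus:
    """Sketch of the behavior described above; the condition table
    and field names are illustrative, not from the patent."""
    # Sound-recording conditions associated with known recorders.
    CONDITIONS = {'ext-mic-01': {'sample_rate': 48000, 'wind_filter': True}}
    DEFAULT = {'sample_rate': 44100, 'wind_filter': False}

    def __init__(self):
        self.condition = dict(self.DEFAULT)

    def on_device_connected(self, device_id):
        # Upon detecting a sound recording apparatus, set its
        # associated condition as the apparatus's own condition.
        self.condition = dict(self.CONDITIONS.get(device_id, self.DEFAULT))

    def record(self, sounds):
        # Record moving-image sounds under the condition that has been set.
        return {'condition': self.condition, 'sounds': sounds}
```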
Automatic generation of video and directional audio from spherical content
A spherical content capture system captures spherical video and audio content. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may include a non-spherical reduced field of view such as those commonly associated with conventional camera systems. Particularly, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest. For each sub-frame, a corresponding portion of an audio track is generated that includes a directional audio signal having a directionality based on the selected sub-frame.
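Two pieces of the pipeline above lend themselves to a short sketch: locating a reduced-FOV sub-frame within an equirectangular spherical frame, and deriving a directional stereo gain from the sub-frame's viewing direction. The equirectangular layout, the constant-power panning law, and all names are assumptions; the patent does not commit to these specifics.

```python
import math

def subframe_bounds(yaw_deg, fov_deg=90.0, width=3840):
    """Pixel-column range of a reduced-FOV sub-frame centered at
    `yaw_deg` in an equirectangular frame (assumed layout)."""
    center = (yaw_deg % 360.0) / 360.0 * width
    half = fov_deg / 360.0 * width / 2.0
    return int(center - half) % width, int(center + half) % width

def directional_gains(yaw_deg):
    """Constant-power stereo pan: weight left/right channels by the
    sub-frame's yaw so audio appears to come from the view direction."""
    pan = math.sin(math.radians(yaw_deg))  # -1 (left) .. +1 (right)
    left = math.sqrt((1.0 - pan) / 2.0)
    right = math.sqrt((1.0 + pan) / 2.0)
    return left, right
```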
Audio alteration techniques
A method of altering audio output from an electronic device based on image data is provided. In one embodiment, the method includes acquiring image data and determining one or more characteristics of the image data. Such characteristics may include sharpness, brightness, motion, magnification, zoom setting, and so forth, as well as variation in any of the preceding characteristics. The method may also include producing audio output, wherein at least one characteristic of the audio output is determined based on one or more of the image data characteristics. Various audio output characteristics that may be varied based on the image data characteristics may include, for instance, pitch, reverberation, tempo, volume, filter frequency response, added sound effects, or the like. Additional methods, devices, and manufactures are also disclosed.
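A minimal sketch of that image-to-audio mapping follows. Mean intensity and mean absolute gradient serve as assumed proxies for brightness and sharpness, and the mappings to volume and filter cutoff are illustrative choices, not the patented ones.

```python
import numpy as np

def image_characteristics(frame):
    """Estimate brightness and sharpness from a grayscale frame
    (simple proxies; the actual measures are not specified)."""
    brightness = float(np.mean(frame))
    # Mean absolute gradient as a crude sharpness measure.
    gy, gx = np.gradient(frame.astype(float))
    sharpness = float(np.mean(np.abs(gx)) + np.mean(np.abs(gy)))
    return brightness, sharpness

def audio_params(brightness, sharpness, base_volume=0.5, base_cutoff=2000.0):
    """Map image characteristics to audio output characteristics:
    brighter images raise the volume, sharper images open the filter.
    Both mappings are illustrative assumptions."""
    volume = min(1.0, base_volume + 0.5 * brightness / 255.0)
    cutoff_hz = base_cutoff * (1.0 + sharpness / 10.0)
    return volume, cutoff_hz
```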