Patent classifications
H04N5/9207
Media message creation with automatic titling
In some implementations, a user device can be configured to create media messages with automatic titling. For example, a user can create a media messaging project that includes multiple video clips. The video clips can be generated based on video data and/or audio data captured by the user device and/or based on pre-recorded video data and/or audio data obtained from various storage locations. When the user device captures the audio data for a clip, the user device can obtain a speech-to-text transcription of the audio data in near real time and present the transcription data (e.g., text) overlaid on the video data while the video data is being captured or presented by the user device.
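The near-real-time captioning flow described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `transcribe` callback, the chunk/frame pairing, and all names are hypothetical stand-ins for whatever speech-to-text service and capture pipeline a real device would use.

```python
from typing import Callable, Iterable, List, Tuple

def caption_clip(
    audio_chunks: Iterable[str],
    frames: Iterable[str],
    transcribe: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Pair each captured video frame with the transcription accumulated so far.

    As each audio chunk arrives, it is transcribed and appended to the running
    caption, which is then overlaid on (here, simply paired with) the frame
    captured alongside it -- approximating "near real time" titling.
    """
    pairs = []
    caption = ""
    for frame, chunk in zip(frames, audio_chunks):
        caption = (caption + " " + transcribe(chunk)).strip()
        pairs.append((frame, caption))
    return pairs
```

In a real device the `transcribe` step would be an asynchronous call to a speech recognizer and the overlay would be rendered onto the video surface; the incremental accumulation per captured frame is the point being illustrated.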
VIDEO CONTENT MEDIUM AND VIDEO REPRODUCTION APPARATUS
In formats where a graphic is combined with the content video before transmission to a display apparatus, dynamic metadata control can cause the luminance of the graphic to fluctuate. A video content medium according to the technology disclosed in the present application records one or more video streams, control information, and graphic information including a menu or a subtitle, wherein at least one of the video streams is a video having a wide luminance range, and graphic transition time information, indicating the transition time at which to switch between luminance range adjustment functions when a graphic is combined, is recorded together with them.
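The role of the graphic transition time information can be sketched as below. This is an illustrative model only, not the disc format's actual data layout: the field names and the "dynamic vs. static" labels are hypothetical, standing in for whichever luminance range adjustment functions the player switches between when a menu or subtitle is composited.

```python
from dataclasses import dataclass

@dataclass
class GraphicTransitionInfo:
    """Hypothetical record of the stored graphic transition time metadata."""
    transition_time_s: float  # playback time at which the adjustment switches

def active_adjustment(playback_time_s: float, info: GraphicTransitionInfo) -> str:
    """Select the luminance range adjustment function for the current time.

    Before the stored transition time, the player follows the dynamic
    (per-scene) metadata; from the transition time onward it holds a static
    adjustment, so a composited graphic keeps a stable luminance instead of
    fluctuating with the underlying wide-luminance-range video.
    """
    if playback_time_s < info.transition_time_s:
        return "dynamic"
    return "static"
```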
System and method for movie karaoke
While watching a movie, a user speaks lines of dialogue. The system records the speech, compares it with the dialogue in the movie, and reports a score to the user. The system can share scores through an online service to create a community experience. In particular, the systems and methods disclosed implement a technique for matching user input to media content. A computer system receives audio input from a user (speech) and compares the received speech to dialogue in a movie or television program. For example, the computer system may convert the received speech to text and compare the converted text against dialogue text drawn from closed captioning or subtitle data. Alternatively, waveform data may be compared. The computer system generates a score for the speech based on how closely it matches the dialogue, and reports the score to the user through a user interface.
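The text-comparison variant of the scoring step above can be sketched with a standard sequence-similarity measure. This is a hypothetical illustration, not the patent's method: it assumes the spoken audio has already been converted to text, normalizes both strings, and uses `difflib.SequenceMatcher` as one plausible similarity metric.

```python
import re
from difflib import SequenceMatcher
from typing import List

def normalize(text: str) -> List[str]:
    """Lowercase, strip punctuation, and split into words for comparison."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).split()

def score_line(spoken_text: str, subtitle_text: str) -> int:
    """Score how closely transcribed speech matches a subtitle line (0-100).

    Compares the two normalized word sequences; identical lines score 100,
    completely disjoint lines score 0.
    """
    matcher = SequenceMatcher(None, normalize(spoken_text), normalize(subtitle_text))
    return round(100 * matcher.ratio())
```

A production system would also need to align the recognized speech with the correct subtitle cue by timestamp before scoring; that alignment step is omitted here.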