Patent classifications
G11B27/102
METHOD AND APPARATUS FOR GENERATING VIDEO WITH 3D EFFECT, METHOD AND APPARATUS FOR PLAYING VIDEO WITH 3D EFFECT, AND DEVICE
A method and an apparatus for generating a video with a three-dimensional (3D) effect, a method and an apparatus for playing a video with a 3D effect, and a device are provided. The method includes: obtaining an original video; segmenting at least one raw image frame of the original video to obtain a foreground image sequence that includes a moving object, the foreground image sequence including at least one foreground image frame; determining, based on the foreground image sequence, a target raw image in which a target occlusion image is to be placed and an occlusion method of the target occlusion image in the target raw image; adding the target occlusion image to the target raw image based on the occlusion method to obtain a final image; and generating a target video with a 3D effect based on the final image and the original video.
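For illustration only, the compositing step this abstract describes can be pictured as: paste an occlusion image into the chosen raw frame, then re-draw the segmented foreground on top so the moving object appears to pass in front of it (the "pop-out" effect). The sketch below assumes NumPy image arrays and a per-pixel foreground mask; the function name and placement inputs are hypothetical, not taken from the patent.

```python
import numpy as np

def composite_with_occlusion(raw_frame: np.ndarray,
                             foreground: np.ndarray,
                             mask: np.ndarray,
                             occlusion: np.ndarray,
                             occlusion_pos: tuple[int, int]) -> np.ndarray:
    """Paste an occlusion image into the target raw frame, then re-draw
    the segmented foreground on top so the moving object appears to
    cross in front of the occlusion."""
    final = raw_frame.copy()
    y, x = occlusion_pos
    h, w = occlusion.shape[:2]
    # Place the occlusion image (e.g. a frame or bar) over the raw image.
    final[y:y + h, x:x + w] = occlusion
    # Re-composite the foreground so it occludes the occlusion image.
    fg_region = mask.astype(bool)
    final[fg_region] = foreground[fg_region]
    return final
```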
SCRATCHPAD CREATION METHOD AND ELECTRONIC DEVICE
A scratchpad creation method and an electronic device are disclosed. The method includes: receiving a first input performed by a user on a target identifier, where the target identifier is associated with a first video file; and displaying a first scratchpad in response to the first input, where the first scratchpad is a scratchpad created based on content of the first video file, the first scratchpad includes at least one video identifier and at least one progress identifier, the video identifier is used to indicate a video clip in the first video file, and the progress identifier is used to indicate completion progress of an operation corresponding to the video clip.
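Purely as an illustrative data-structure sketch (the class and field names are assumptions, not taken from the disclosure), the scratchpad can be modelled as a list of entries that pair a video identifier (a clip in the first video file) with a progress identifier (completion progress of the operation for that clip):

```python
from dataclasses import dataclass, field

@dataclass
class ClipProgress:
    clip_id: str      # video identifier: a clip in the first video file
    progress: float   # progress identifier: completion from 0.0 to 1.0

@dataclass
class Scratchpad:
    """Scratchpad created from a video file: one entry per clip."""
    source_video: str
    entries: list[ClipProgress] = field(default_factory=list)

def create_scratchpad(video_file: str, clip_ids: list[str]) -> Scratchpad:
    # Built and displayed in response to the user's first input on the
    # target identifier associated with the video file.
    return Scratchpad(source_video=video_file,
                      entries=[ClipProgress(cid, 0.0) for cid in clip_ids])
```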
Segment action detection
Aspects of the present disclosure involve a system comprising a storage medium storing a program and method for receiving a video comprising a plurality of video segments; selecting a target action sequence that includes a sequence of action phases; receiving features of each of the video segments; computing, based on the received features, for each of the plurality of video segments, a plurality of action phase confidence scores indicating a likelihood that a given video segment includes a given action phase of the sequence of action phases; identifying a set of consecutive video segments of the plurality of video segments that corresponds to the target action sequence, wherein video segments in the set of consecutive video segments are arranged according to the sequence of action phases; and generating a display of the video that includes the set of consecutive video segments and skips other video segments in the video.
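A minimal sketch of the window search this abstract implies, assuming the per-segment, per-phase confidence scores are already available as a matrix; the scoring model and any thresholds are out of scope here, and all names are hypothetical:

```python
import numpy as np

def find_action_window(scores: np.ndarray, num_phases: int) -> tuple[int, float]:
    """scores[i, p] is the confidence that video segment i contains
    action phase p. Return (start_index, total_score) for the best run
    of num_phases consecutive segments arranged in phase order."""
    num_segments = scores.shape[0]
    best_start, best_score = -1, float("-inf")
    for start in range(num_segments - num_phases + 1):
        # Phase p must land on segment start + p to respect the sequence order.
        total = float(sum(scores[start + p, p] for p in range(num_phases)))
        if total > best_score:
            best_start, best_score = start, total
    return best_start, best_score
```

The display step would then keep only segments best_start through best_start + num_phases - 1 and skip the other video segments.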
Methods, devices, and systems for video segmentation and annotation
Methods, devices, and systems for segmenting and annotating videos for analysis are disclosed. A user identifies specific moments of the video that provide a teachable moment. A pre-context and a post-context portion of the video surrounding the identified moment are used to create a tile video. One or more tile videos are compiled in a user-defined order to generate a weave video with a specific focus or theme. The generated weave video is shared with one or more users and can be annotated to facilitate teaching and/or discussion.
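As a rough illustration (the time units and function names are assumptions, not the disclosed implementation), a tile and the resulting weave can be sketched as clip ranges built around each identified teachable moment:

```python
from dataclasses import dataclass

@dataclass
class Tile:
    start: float  # seconds into the source video
    end: float    # seconds into the source video

def make_tile(moment_s: float, pre_s: float, post_s: float,
              duration_s: float) -> Tile:
    """Build a tile around an identified moment using pre-context and
    post-context windows, clamped to the video bounds."""
    return Tile(start=max(0.0, moment_s - pre_s),
                end=min(duration_s, moment_s + post_s))

def weave(tiles: list[Tile], order: list[int]) -> list[Tile]:
    """Arrange tiles in the user-defined order to form the weave video."""
    return [tiles[i] for i in order]
```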
System for the automated, context sensitive, and non-intrusive insertion of consumer-adaptive content in video
Described herein is a method and system for automated, context-sensitive, and non-intrusive insertion of consumer-adaptive content in video. It assesses 'context' in the video that a consumer is viewing through multiple modalities and metadata about the video. The method and system analyze relevance for a consumer based on multiple factors, such as the end-user's profile information, content history, social media activity, consumer interests, and professional or educational background, through patterns from multiple sources. The system also implements local context through search techniques that localize sufficiently large, homogeneous regions in the image that do not obfuscate protagonists or objects in focus but are viable candidate regions for insertion of the intended content. This makes relevant, curated content available to a user in the most effortless manner without hampering the viewing experience of the main video.
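One simple way to picture the search for "sufficiently large, homogeneous regions" is a sliding-window scan that keeps low-variance patches; this is only a sketch under the assumption that low pixel variance is a reasonable proxy for homogeneity, whereas the disclosed system likely uses richer saliency and context cues:

```python
import numpy as np

def candidate_regions(gray: np.ndarray, win: int = 120,
                      stride: int = 40, max_std: float = 12.0):
    """Slide a window over a grayscale frame and yield regions whose
    pixel standard deviation is low, i.e. homogeneous enough to host
    inserted content without covering salient objects."""
    h, w = gray.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = gray[y:y + win, x:x + win]
            if patch.std() <= max_std:
                yield (x, y, win, win)  # candidate insertion rectangle
```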
Processing and formatting video for interactive presentation
Systems and methods are described for determining a first media item related to an event, of a plurality of stored media items each comprising video content related to the event, that was captured in a device orientation corresponding to a first device orientation detected for the first computing device; providing, to the first computing device, the first media item to be displayed on the first computing device; in response to a detected change to a second device orientation for the first computing device, determining a second media item that was captured in a device orientation corresponding to the second device orientation detected for the first computing device; and providing, to the first computing device, the second media item to be displayed on the first computing device.
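A minimal sketch of the orientation-driven selection, assuming each stored media item records the orientation it was captured in; the enum and field names are hypothetical, not from the disclosure:

```python
from enum import Enum

class Orientation(Enum):
    PORTRAIT = "portrait"
    LANDSCAPE = "landscape"

def pick_media(media_items: list[dict], device_orientation: Orientation) -> dict | None:
    """Choose the stored media item whose capture orientation matches
    the viewer's current device orientation; called again whenever a
    change to a new device orientation is detected."""
    return next((m for m in media_items
                 if m["captured_orientation"] == device_orientation), None)
```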
Subtitle presentation based on volume control
Systems and methods are provided for presenting subtitles. The systems and methods include accessing, by a user device, a video discovery graphical user interface that includes a plurality of videos; receiving a user input that gradually reduces the volume of the user device; determining that the volume of the user device has been gradually reduced by the user input until a mute state has been reached in which audio output of the user device is disabled; and, in response to determining that the volume has been gradually reduced until the mute state has been reached, automatically causing subtitles of a first video of the plurality of videos to be displayed during playback of the first video.
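A small sketch of the volume-tracking logic this abstract describes, assuming the client receives discrete volume-change events; the class and method names are hypothetical:

```python
class SubtitleOnMute:
    """Track volume changes; when the user gradually lowers the volume
    all the way to mute, report that subtitles should be shown."""

    def __init__(self, initial_volume: int):
        self.last_volume = initial_volume
        self.gradual_decrease = False

    def on_volume_change(self, volume: int) -> bool:
        """Return True when subtitles should be displayed for the
        currently playing video."""
        if volume < self.last_volume:
            self.gradual_decrease = True    # user is still lowering the volume
        elif volume > self.last_volume:
            self.gradual_decrease = False   # volume raised again: reset
        self.last_volume = volume
        return self.gradual_decrease and volume == 0
```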
User interface for managing audible descriptions for visual media
The present disclosure generally relates to user interfaces and techniques for managing audible descriptions for visual media. In some embodiments, the user interfaces and techniques provide different audible descriptions for a portion of a representation of the media: one audible description is provided before that portion of the representation has been changed, and a different audible description is provided after it has been changed.
Method and System for Seamless Media Synchronization and Handoff
A method performed by a portable media player device. The method receives a microphone signal that includes audio content output by an audio playback device via a loudspeaker. The method determines identification information regarding the audio content, wherein the identification information is determined through an acoustic signal analysis of the microphone signal. In response to determining that the audio playback device has ceased outputting the audio content, the method retrieves from local memory of the portable media player device or a remote device with which the portable media player device is communicatively coupled, an audio signal that corresponds to the audio content and drives a speaker that is not part of the audio playback device using the audio signal to continue outputting the audio content and any additional audio content related to the audio content.
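A minimal sketch of the handoff decision, assuming acoustic identification has already been reduced to a lookup keyed by some signature of the microphone signal; the class, library mapping, and resume position are illustrative assumptions, not the patented acoustic analysis:

```python
class HandoffPlayer:
    """Continue playback locally once the external audio device stops."""

    def __init__(self, library: dict[str, str]):
        # Maps an acoustic signature to a locally stored (or remotely
        # retrievable) audio file path.
        self.library = library

    def identify(self, signature: str) -> str | None:
        # Stand-in for acoustic signal analysis of the microphone feed.
        return self.library.get(signature)

    def on_playback_ceased(self, signature: str, position_s: float) -> tuple[str, float] | None:
        """When the external device ceases output, return (audio file,
        resume position) so a local speaker can continue the content."""
        path = self.identify(signature)
        return (path, position_s) if path else None
```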
Video processing method, video playing method, devices and storage medium
Aspects of the invention provide a video processing method, a video playing method, devices thereof, and a storage medium. The video processing method is applied to a terminal device and can include detecting a video-frame tagging operation during recording of a video. According to the video-frame tagging operation, video frames in the recorded video can be tagged at different times during recording. A final recorded video is then generated based on the tagged video frames and untagged video frames. Because video frames are tagged through the detected tagging operation and the final recorded video carries those tags, a highlight moment in the video can be rapidly located from the frame tags during playback, and user experience may be improved.
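For illustration, the tag-and-seek behaviour can be sketched as a recorder that stores tag timestamps while capturing and a lookup used during playback; the names and time representation are assumptions, not taken from the abstract:

```python
class TaggingRecorder:
    """Record tag timestamps while capturing video, then use them to
    jump straight to tagged moments during playback."""

    def __init__(self):
        self.tags: list[float] = []  # seconds from the start of recording

    def on_tag_gesture(self, elapsed_s: float) -> None:
        # Called when the user triggers the video-frame tagging operation.
        self.tags.append(elapsed_s)

    def next_tag(self, position_s: float) -> float | None:
        # During playback, return the first tagged time after position_s,
        # so the player can seek directly to a tagged moment.
        return next((t for t in self.tags if t > position_s), None)
```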