Patent classifications
H04N21/47205
Method and apparatus for synthesizing video
The present disclosure provides a method and an apparatus for synthesizing a video, and a storage medium. The method is implemented as follows. A video to be synthesized is displayed on a video editing interface. In response to a video sharing instruction, a friend recommendation list is acquired and displayed. The friend recommendation list indicates a plurality of sharing objects. A synthesized video is generated based on the video and a target sharing object selected from the friend recommendation list. A reminding mark indicating the target sharing object is displayed on a video picture of the synthesized video.
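The flow in this abstract can be sketched as a minimal data model. All names below (`SharingObject`, `SynthesizedVideo`, the `"@name"` mark format) are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class SharingObject:
    """One entry in the friend recommendation list (hypothetical fields)."""
    user_id: str
    display_name: str

@dataclass
class SynthesizedVideo:
    source_video: str       # the video being edited
    target: SharingObject   # sharing object chosen from the recommendation list

    @property
    def reminding_mark(self) -> str:
        # reminding mark rendered on the video picture, e.g. "@Alice"
        return f"@{self.target.display_name}"

# Friend recommendation list indicating a plurality of sharing objects
friends = [SharingObject("u1", "Alice"), SharingObject("u2", "Bob")]
video = SynthesizedVideo("clip.mp4", friends[0])
print(video.reminding_mark)  # → @Alice
```

The mark is derived from the selected target object, so the overlay stays consistent if the target changes before the synthesis step.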
METHOD AND APPARATUS FOR CREATING INTERACTIVE VIDEO, DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
A method for creating an interactive video is provided. In the method, an interactive video creation interface is displayed. The interactive video creation interface includes a video editing preview region and a component editing region. A first interactive clip is added to the video editing preview region. The first interactive clip includes a display scene. An information viewing component is added to a target position in the display scene of the first interactive clip based on selection of an information viewing option that is included in the component editing region. The information viewing component is configured to present information in the display scene when selected by a user of the interactive video. The interactive video is generated according to the information viewing component added to the first interactive clip.
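The component-placement step can be illustrated with a small sketch. The class and field names here are assumptions for illustration only; the patent does not specify a data model:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class InfoViewingComponent:
    """Information viewing component placed in the display scene."""
    position: Tuple[float, float]   # normalized (x, y) target position
    info: str                       # information presented when selected

@dataclass
class InteractiveClip:
    """First interactive clip with a display scene (hypothetical model)."""
    scene: str
    components: List[InfoViewingComponent] = field(default_factory=list)

    def add_component(self, pos: Tuple[float, float], info: str) -> None:
        # Adds a component at a target position, as when the user picks
        # the information viewing option in the component editing region.
        self.components.append(InfoViewingComponent(pos, info))

clip = InteractiveClip(scene="intro")
clip.add_component((0.5, 0.4), "Product details")
print(len(clip.components))  # → 1
```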
Method for collaborative comments or metadata annotation of video
A logger or annotator views video in a window or user interface (UI) of a computing device and enters time-stamped metadata or commentary; that metadata or commentary is then automatically displayed on a timeline or other time-based index in a different window or UI of a second computing device used by a viewer or editor of that video. The metadata or commentary is represented by a marker or icon appearing on the timeline displayed in the window or UI of the second computing device, and the metadata or commentary is shown when the viewer or editor selects that marker or icon.
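The time-based index described above can be sketched as a sorted list of time-stamped markers. This is one plausible structure, not the patent's implementation; `Timeline` and its methods are invented names:

```python
import bisect
from dataclasses import dataclass

@dataclass(order=True)
class Marker:
    """Time-stamped commentary entered by the logger/annotator."""
    timestamp: float      # seconds into the video
    comment: str = ""

class Timeline:
    """Time-based index of annotations, kept sorted for timeline display."""
    def __init__(self):
        self.markers: list[Marker] = []

    def log(self, timestamp: float, comment: str) -> None:
        # insort keeps markers ordered by timestamp for the timeline view
        bisect.insort(self.markers, Marker(timestamp, comment))

    def select(self, timestamp: float, tolerance: float = 0.5) -> list[str]:
        """Return comments near a clicked marker position."""
        return [m.comment for m in self.markers
                if abs(m.timestamp - timestamp) <= tolerance]

tl = Timeline()
tl.log(12.0, "check audio sync here")
tl.log(3.5, "color looks off")
print(tl.select(12.2))  # → ['check audio sync here']
```

Keeping the list sorted on insert means the viewer's timeline can be redrawn without re-sorting after each new annotation arrives.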
Method and apparatus for displaying music points, and electronic device and medium
Disclosed are a method and apparatus for displaying music points, an electronic device, and a medium. One specific embodiment of the method includes: acquiring audio material; analyzing initial music points in the audio material, wherein the initial music points include beat points and/or note starting points in the audio material; and, on an operation interface of video clipping, displaying identifiers of target music points on the clip timeline according to the position of the audio material on the clip timeline and the positions of the target music points in the audio material, wherein the target music points are some or all of the initial music points. According to the embodiment, the time a user spends processing audio material and marking music points is reduced, while the flexibility of the tool is preserved.
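The mapping from music points in the audio material to identifier positions on the clip timeline can be sketched as follows. The function name, the `stride` way of selecting "some or all" points, and the parameter names are assumptions for illustration:

```python
def music_point_identifiers(beat_times, clip_start, clip_end, stride=1):
    """Place identifiers on the clip timeline for target music points.

    beat_times: detected initial music points (seconds within the audio material)
    clip_start/clip_end: where the audio material sits on the clip timeline
    stride: keep every Nth point (target points are some or all initial points)
    """
    targets = beat_times[::stride]
    # Shift each point by the material's timeline position; drop points
    # falling past the end of the material's span on the timeline.
    return [clip_start + t for t in targets if clip_start + t <= clip_end]

print(music_point_identifiers([0.5, 1.0, 1.5, 2.0],
                              clip_start=10.0, clip_end=11.6))
# → [10.5, 11.0, 11.5]
```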
Communication exchange system for remotely communicating instructions
A server may communicatively couple to a user device and an instructor device. The server may receive location information from the user device. The location information may define visual content captured by the user device. The server may transmit the location information to the instructor device. The instructor device may present the visual content based on the received location information and receive input defining an instruction from an instructor. The server may receive instruction information defining the instruction from the instructor device. The server may transmit the instruction information to the user device. The user device may present the instruction overlaid on top of the visual content based on the received instruction information.
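The relay pattern described above can be sketched as a minimal in-memory server. The class, queue layout, and message schema are hypothetical; the patent does not specify wire formats:

```python
import json

class RelayServer:
    """Relays location info (user → instructor) and instructions (instructor → user)."""
    def __init__(self):
        self.queues = {"user": [], "instructor": []}

    def from_user(self, location_info: dict) -> None:
        # Location information defines the visual content the user device captured;
        # it is forwarded to the instructor device.
        self.queues["instructor"].append(
            json.dumps({"type": "location", "data": location_info}))

    def from_instructor(self, instruction: dict) -> None:
        # Instruction information is forwarded to the user device, which
        # overlays the instruction on top of the visual content.
        self.queues["user"].append(
            json.dumps({"type": "instruction", "data": instruction}))

    def deliver(self, device: str) -> list:
        # Drain and decode the pending messages for one device.
        msgs, self.queues[device] = self.queues[device], []
        return [json.loads(m) for m in msgs]

server = RelayServer()
server.from_user({"lat": 52.5, "lon": 13.4})
server.from_instructor({"shape": "arrow", "at": [0.3, 0.7]})
print(server.deliver("user")[0]["type"])  # → instruction
```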
Asynchronous short video communication platform based on animated still images and audio
A communication platform enables a sender to create a video message comprising one or more still images accompanied by sender audio that pertains to the still images as they are rendered as video, with effects and filters applied to the still images. The video message further comprises a timeline for synchronizing the sender audio to the still images, along with associated metadata, and is sent to a recipient. The recipient can view the video message, that is, the one or more still images with the associated effects and filters and the accompanying sender audio synchronized to the still images. The recipient can then reply using the received still images, with the option of using preset animation events from the sender, to create a new video message comprising recipient audio and new effects and filters, and send it to the sender or other recipients.
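The timeline that synchronizes sender audio to the still images can be sketched as a list of entries keyed by audio time. The structure and names below are assumptions, not the platform's actual format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TimelineEntry:
    """One still image's slot on the video message timeline (hypothetical)."""
    image: str          # still image file
    start: float        # seconds into the sender audio when this image appears
    effects: List[str]  # effects/filters applied to this image

def image_at(timeline: List[TimelineEntry], t: float) -> TimelineEntry:
    """Return the still image shown while the audio plays at time t."""
    current = timeline[0]
    for entry in timeline:
        if entry.start <= t:
            current = entry
    return current

msg = [TimelineEntry("a.jpg", 0.0, ["sepia"]),
       TimelineEntry("b.jpg", 4.2, ["zoom"])]
print(image_at(msg, 5.0).image)  # → b.jpg
```

A recipient's reply could reuse the same entries with new audio and a new `effects` list, matching the reuse described in the abstract.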
Systems and methods for changing a user's perspective in virtual reality based on a user-selected position
Systems and methods are described for a media guidance application (e.g., implemented on a user device) that allows users to select any arbitrary position in a virtual reality environment from which to view the virtual reality content, and that changes a user's perspective based on the selected position.
INCORPORATING INTERACTION ACTIONS INTO VIDEO DISPLAY THROUGH PIXEL DISPLACEMENT
A video processing method includes obtaining, in response to an interaction operation received on a portion of a first image, an adjustment parameter corresponding to the interaction operation. The adjustment parameter indicates an adjustment range of a display position of one or more pixels corresponding to the portion of the first image based on the interaction operation. The method further includes obtaining a displacement parameter of the one or more pixels in the portion of the first image, the displacement parameter representing a displacement of the one or more pixels between the first image and a second image displayed after the first image. The method also includes adjusting a display position of one or more pixels in the second image based on the adjustment parameter and the displacement parameter, and displaying the second image based on the adjusted display position of the one or more pixels.
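The final adjustment step can be sketched numerically. The simple additive combination below is one plausible reading of "adjusting ... based on the adjustment parameter and the displacement parameter", not the patent's exact formula, and all names are illustrative:

```python
def adjust_positions(pixels, adjustment, displacement):
    """Shift interacted pixels when displaying the second image.

    pixels: (x, y) display positions in the second image
    adjustment: (dx, dy) range implied by the interaction (e.g. a drag)
    displacement: (dx, dy) motion of those pixels between the first
        and second image
    One plausible combination: add the interaction-driven adjustment on
    top of the pixels' own inter-frame displacement.
    """
    ax, ay = adjustment
    mx, my = displacement
    return [(x + ax + mx, y + ay + my) for x, y in pixels]

print(adjust_positions([(100, 50)], adjustment=(5, 0), displacement=(2, -1)))
# → [(107, 49)]
```

Accounting for the inter-frame displacement keeps the interaction effect anchored to the moving content rather than to fixed screen coordinates.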
METHOD AND APPARATUS FOR PLAYING VIDEO SIGNAL OF MULTI-USER INTERACTION, AND DEVICE
A method for playing a video signal of multi-user interaction includes: generating a video playing interface of a live streaming room; switching an operation mode of the video playing interface from a viewer mode to an anchor mode in response to an anchor mode switching instruction, where in the anchor mode, the video playing interface includes a display item of at least one multi-user interaction activity; generating a live streaming instruction for the first multi-user interaction activity in the at least one multi-user interaction activity in response to a trigger operation for a display item corresponding to a first multi-user interaction activity in the at least one multi-user interaction activity; and playing a live streaming video signal of the first multi-user interaction activity in response to the live streaming instruction for the first multi-user interaction activity.
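The mode switch and activity trigger can be sketched as a small state machine. The class name, mode strings, and activity labels are hypothetical illustrations:

```python
class VideoPlayingInterface:
    """Viewer/anchor mode switch for a live streaming room (illustrative)."""
    def __init__(self, activities):
        self.mode = "viewer"
        self.activities = activities   # multi-user interaction activities
        self.playing = None

    def switch_to_anchor(self):
        # Anchor mode switching instruction: the interface now shows a
        # display item for each multi-user interaction activity.
        self.mode = "anchor"
        return self.activities

    def trigger(self, activity):
        # Trigger operation on a display item generates a live streaming
        # instruction and plays that activity's live streaming video signal.
        if self.mode != "anchor" or activity not in self.activities:
            raise ValueError("activity not available")
        self.playing = activity
        return f"playing:{activity}"

ui = VideoPlayingInterface(["co-host PK", "group chat"])
ui.switch_to_anchor()
print(ui.trigger("co-host PK"))  # → playing:co-host PK
```

Guarding `trigger` on the current mode mirrors the abstract's ordering: the display items, and hence the live streaming instruction, exist only after the viewer-to-anchor switch.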