Patent classifications
H04N21/47205
VIDEO-LOG PRODUCTION SYSTEM
Methods, computer-readable media, and apparatuses for composing a video in accordance with a user goal and an audience preference are described. For example, a processing system having at least one processor may obtain a plurality of video clips of a user, determine at least one goal of the user for a production of a video from the plurality of video clips, determine at least one audience preference of an audience, and compose the video comprising at least one video clip of the plurality of video clips of the user in accordance with the at least one goal of the user and the at least one audience preference. The processing system may then upload the video to a network-based publishing platform.
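As an illustration only (not taken from the patent), the clip-selection step described above could be sketched as a tag-overlap scorer that weighs the user's goal against audience preferences under a duration budget; all names, the weighting, and the greedy strategy are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    name: str
    tags: set = field(default_factory=set)
    duration: float = 0.0

def compose(clips, goal_tags, audience_tags, max_duration):
    """Score clips by tag overlap with the user's goal and the audience
    preference, then greedily select the highest-scoring clips that fit
    within the duration budget."""
    def score(clip):
        # Goal matches are weighted higher than audience-preference matches.
        return 2 * len(clip.tags & goal_tags) + len(clip.tags & audience_tags)

    selected, total = [], 0.0
    for clip in sorted(clips, key=score, reverse=True):
        if score(clip) > 0 and total + clip.duration <= max_duration:
            selected.append(clip)
            total += clip.duration
    return selected
```

A real system would derive the tags from video analysis and the preferences from audience engagement data; the sketch only shows the selection logic.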
HYPER-CONNECTED AND SYNCHRONIZED AR GLASSES
Systems and methods are described for selectively sharing audio and video streams amongst electronic eyewear devices. Each electronic eyewear device includes a camera arranged to capture a video stream in an environment of the wearer, a microphone arranged to capture an audio stream in the environment of the wearer, and a display. A processor of each electronic eyewear device executes instructions to establish an always-on session with other electronic eyewear devices and selectively shares an audio stream, a video stream, or both with other electronic eyewear devices in the session. Each electronic eyewear device also generates annotations, and receives annotations from other users in the session, for display with the selectively shared video stream on the display of the electronic eyewear device that provided that stream. The annotation may include manipulation of an object in the shared video stream or overlay images registered with the shared video stream.
Short segment generation for user engagement in vocal capture applications
User interface techniques provide user vocalists with mechanisms for solo audiovisual capture and for seeding subsequent performances by other users (e.g., joiners). Audiovisual capture may be against a full-length work or seed spanning much or all of a pre-existing audio (or audiovisual) work and in some cases may mix, to seed further contributions of one or more joiners, a user's captured media content for at least some portions of the audio (or audiovisual) work. A short seed or short segment may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a short seed or short segment. Computational techniques are described that allow a system to automatically identify suitable short seeds or short segments. After audiovisual capture against the short seed or short segment, a resulting, solo or group, full-length or short-form performance may be posted, livestreamed, or otherwise disseminated in a social network.
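Purely as an illustration (not the patent's computational technique), one simple way to pick a short seed automatically is to choose the most-repeated section of a song, on the assumption that a repeated section such as a chorus or hook makes an engaging chunk; the `find_short_seed` helper and the section-label input format are hypothetical:

```python
from collections import Counter

def find_short_seed(sections):
    """Given (label, start_sec, end_sec) sections for a song, pick the
    most-repeated label (e.g. the chorus) and return its first occurrence
    as the candidate short seed."""
    counts = Counter(label for label, _, _ in sections)
    if not counts:
        return None
    best_label, n = counts.most_common(1)[0]
    if n < 2:
        # Nothing repeats; fall back to the opening section.
        return sections[0]
    for section in sections:
        if section[0] == best_label:
            return section
```

A production system would first have to derive the section labels themselves, e.g. from audio self-similarity or lyric alignment, which this sketch leaves out.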
Methods and systems of display edge interactions in a gesture-controlled device
Methods and systems for controlling a display device, including detecting a mid-air gesture using a sensing device; mapping the detected mid-air gesture to locations of an interaction region, the interaction region including an on-screen region of the display device and an off-screen region that is located outside an edge of the on-screen region; and performing a display device control action upon detecting an edge interaction based on the mapping of the detected mid-air gesture to locations that interact with the edge of the on-screen region.
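As a hypothetical sketch of the mapping step (not the patent's implementation), a gesture-cursor location in the extended interaction region can be classified as on-screen or off-screen, with an edge interaction flagged when the cursor sits in an off-screen band within some margin of a screen edge; the function name, coordinate convention, and margin parameter are all assumptions:

```python
def classify(x, y, width, height, margin):
    """Classify a mapped gesture location relative to a width x height
    on-screen region. Returns (on_screen, edges), where edges lists the
    screen edges the off-screen location interacts with."""
    on_screen = 0 <= x < width and 0 <= y < height
    edges = []
    if not on_screen:
        if -margin <= x < 0:
            edges.append("left")
        if width <= x < width + margin:
            edges.append("right")
        if -margin <= y < 0:
            edges.append("top")
        if height <= y < height + margin:
            edges.append("bottom")
    return on_screen, edges
```

A control action (e.g. opening a panel from that edge) could then be dispatched on any non-empty `edges` result.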
METHOD OF MERGING DIGITAL MEDIA
Embodiments herein provide methods of dividing selected areas of a first video clip having a first composition, e.g., by generating individual video data corresponding to the selected areas, arranging the selected areas to provide a second composition, e.g., by combining the individual video data to generate composite video data corresponding to the second composition, and compiling the composite video data to provide a second video clip having the second composition.
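For illustration only (not the patent's method), the divide-and-rearrange idea can be sketched on frames modeled as lists of pixel rows: crop out the selected areas, then combine them into a new composition; both helper names and the side-by-side layout are hypothetical choices:

```python
def crop(frame, x, y, w, h):
    """Extract a w x h rectangular region from a frame (list of pixel rows)."""
    return [row[x:x + w] for row in frame[y:y + h]]

def compose_side_by_side(regions):
    """Arrange equally-tall cropped regions left-to-right into one
    composite frame, row by row."""
    height = len(regions[0])
    return [sum((region[i] for region in regions), []) for i in range(height)]
```

Applying this per frame over a whole clip would yield the composite video data that is then compiled into the second video clip.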
Separation of graphics from natural video in streaming video content
Aspects of the subject disclosure may include, for example, a method that includes obtaining, by a processing system including a processor, video frames over a network. The processing system uses a machine learning algorithm to identify in each frame a first region comprising a natural image and a second region comprising a synthetic graphic image. The processing system separates the natural image from the synthetic graphic image to generate a natural video and a graphics video, encodes the natural video, and processes the graphics video to generate instructions for rendering graphic images at a client system. The client system performs a decoding procedure for the encoded video, a rendering procedure for client-side graphics in accordance with the instructions, and a compositing procedure to obtain a presentable video stream including the natural image and a client-side graphic corresponding to the synthetic graphic image. Other embodiments are disclosed.
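As an illustrative sketch only (not the disclosed method), the separate-then-composite flow can be shown with a per-pixel mask standing in for the machine-learning classifier's output: the mask splits each frame into a natural layer and a graphics layer, and the client later overlays the graphics on the decoded natural video; the mask format and function names are assumptions:

```python
def split_frame(frame, mask):
    """Separate a frame into a natural layer and a graphics layer using a
    per-pixel mask (True = synthetic graphic). Pixels belonging to the
    other layer become None so each layer has its own pipeline."""
    natural = [[None if m else p for p, m in zip(row, mrow)]
               for row, mrow in zip(frame, mask)]
    graphics = [[p if m else None for p, m in zip(row, mrow)]
                for row, mrow in zip(frame, mask)]
    return natural, graphics

def composite(natural, graphics):
    """Client-side compositing: graphics pixels overlay the natural video."""
    return [[g if g is not None else n for n, g in zip(nrow, grow)]
            for nrow, grow in zip(natural, graphics)]
```

In the sketch, splitting and compositing round-trip to the original frame; the real gain comes from encoding the natural layer as video while transmitting the graphics layer as compact rendering instructions.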
Systems and methods for creating and navigating broadcast-ready social content items in a live produced video
Systems and methods for incorporating social content items into a produced video are provided. The system presents a producer interface to a user that allows the user to query for social content items. The user may then select and arrange social content items in an on-air queue. In an on-air mode, the system generates a broadcast-ready on-air format of the social content items and provides a video stream including the broadcast-ready social content items in the on-air queue to a video production system. The broadcast-ready social content items are incorporated into a produced video by the video production system. The user may navigate through the social content items in the on-air queue while on camera as part of the produced video.
Video file playing method and apparatus, and storage medium
This application discloses a video file playing method and apparatus, and a storage medium. The video file playing method includes: playing an animation file frame by frame according to a playback time of a video file, the video file comprising at least one displayed object and the animation file comprising an animation element generated according to the displayed object; determining, in response to detecting a screen clicking/tapping event, click/tap position information of the event; determining, according to the click/tap position information, a corresponding animation element display area in the animation file; determining, according to the corresponding animation element display area, an animation element triggered by the screen clicking/tapping event; and determining and performing an interactive operation corresponding to the triggered animation element.
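The display-area lookup described above is essentially a hit test. As a hypothetical sketch (not the application's implementation), with elements listed back-to-front and each carrying a rectangular display area, the triggered element is the topmost one containing the tap position; the element dictionary layout is an assumption:

```python
def hit_test(elements, x, y):
    """Return the topmost animation element whose display area (x, y, w, h)
    contains the tap position; elements are listed back-to-front."""
    hit = None
    for elem in elements:
        ex, ey, ew, eh = elem["area"]
        if ex <= x < ex + ew and ey <= y < ey + eh:
            hit = elem  # later elements draw on top, so keep the last match
    return hit
```

The interactive operation would then be dispatched from an operation table keyed by the returned element.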
VIDEO TRANSMISSION METHOD, ELECTRONIC DEVICE AND COMPUTER READABLE MEDIUM
The present disclosure relates to a video transmission method, an electronic device and a computer readable medium. The method is applied to a first application installed in a first terminal, where the first terminal is further installed with a second application. The method includes: acquiring a current video frame; performing special effect processing on the current video frame according to a received special effect setting instruction, and generating a target video frame; and sending the target video frame to the second application. Thus, the first application can work with any second application that needs a special effect processing function, and the second application can obtain special effect-processed video without being redeveloped.
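As a minimal sketch of the first-application side only (all names, the effect table, and the `send` callback are hypothetical, not from the disclosure): apply the effect named by the setting instruction to the current frame, then hand the target frame to the second application:

```python
# Hypothetical effect table; frames are modeled as 2D lists of 0-255 values.
EFFECTS = {
    "invert": lambda frame: [[255 - p for p in row] for row in frame],
    "identity": lambda frame: frame,
}

def process_and_send(frame, instruction, send):
    """First-application side: apply the effect named in the special effect
    setting instruction, then deliver the target frame to the second
    application via the `send` callback (e.g. an IPC channel)."""
    effect = EFFECTS.get(instruction, EFFECTS["identity"])
    send(effect(frame))
```

In practice `send` would wrap whatever inter-application channel the terminal provides (shared memory, a virtual camera, a socket); the sketch only shows the decoupling that lets the second application stay unmodified.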
VIDEO FILE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER STORAGE MEDIUM
The present invention provides a video file processing method and apparatus, an electronic device, and a computer readable storage medium, relating to the field of video processing. The method comprises: in a preset first editing interface for an original video file, when a trigger instruction for a preset first interaction function is received, displaying a preset second editing interface, the second editing interface comprising a preset interaction label; receiving, in the interaction label, first identification information of an interaction object as determined by an editor, to obtain an interaction label comprising the first identification information; and when an editing completion instruction initiated by the editor is received, generating a target video file comprising the interaction label, and publishing the target video file.