G11B27/036

Method and device for processing video

The present disclosure provides a method and device for processing a video. The method includes: determining a special effect video frame of a video, where a target feature area of the special effect video frame includes a preset special effect map; and modifying a display effect of the special effect map upon determining that a shielded area exists in the target feature area.
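
The shielding check described above can be sketched as a simple heuristic: compare an occlusion mask against the target feature area and fade the effect map in proportion to the overlap. All names, the binary-mask representation, and the linear fade rule are illustrative assumptions, not the patented method itself:

```python
def adjust_effect_alpha(feature_mask, occlusion_mask, base_alpha=1.0):
    """Reduce the special-effect map's opacity in proportion to how much
    of the target feature area is shielded (hypothetical heuristic).

    feature_mask / occlusion_mask: flat lists of 0/1 pixel flags.
    """
    overlap = sum(f and o for f, o in zip(feature_mask, occlusion_mask))
    area = sum(feature_mask)
    if area == 0:
        return base_alpha          # no feature area: leave effect unchanged
    shielded_ratio = overlap / area
    # Fade the effect out as more of the feature area becomes shielded.
    return base_alpha * (1.0 - shielded_ratio)
```

A real implementation would derive the masks from face landmarks or segmentation rather than take them as inputs.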

Personalized videos using selfies and stock videos
11704851 · 2023-07-18

A method is provided that includes displaying, by a computing device, representations of a plurality of stock videos to a user. Each representation may be a still image, a partial clip, or a full play of the stock video, and includes a face outline for insertion of a facial image of the user. When the user has provided a self-image to the computing device, the facial image of the user, extracted from the self-image, is inserted in the face outline of the representations. The method may include receiving a selection of one of the representations of the plurality of stock videos, and displaying a personalized video including the selected stock video with the facial image positioned within a further face outline corresponding to the face outline of the selected representation.
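
The face-insertion step can be sketched as pasting an extracted face patch into a frame at the outline's position. Representing frames as 2-D pixel grids and the outline as a top-left coordinate is an illustrative simplification (real systems would resize and blend the patch):

```python
def insert_face(frame, face, outline):
    """Paste an extracted facial image into a frame at the face-outline
    position (in place). `frame` and `face` are 2-D lists of pixel
    values; `outline` is the (row, col) of the outline's top-left corner.
    """
    r0, c0 = outline
    for r, face_row in enumerate(face):
        for c, pixel in enumerate(face_row):
            frame[r0 + r][c0 + c] = pixel
    return frame
```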

Methods, devices, and systems for video segmentation and annotation
11705161 · 2023-07-18

Methods, devices, and systems for segmenting and annotating videos for analysis are disclosed. A user identifies specific moments of the video that provide a teachable moment. A pre-context and a post-context portion of the video surrounding the identified moment are used to create a tile video. One or more tile videos are compiled in a user-defined order to generate a weave video with a specific focus or theme. The generated weave video is shared with one or more users and can be annotated to facilitate teaching and/or discussion.
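
The tile-and-weave workflow reduces to two small operations: cut a clip around an identified moment using pre- and post-context windows, then concatenate tiles in a user-defined order. The function names, the seconds-based timeline, and the default window sizes below are assumptions for illustration:

```python
def make_tile(moment_t, pre=5.0, post=5.0, duration=None):
    """Clip boundaries (start, end) in seconds around a teachable moment,
    clamped to the video's bounds."""
    start = max(0.0, moment_t - pre)
    end = moment_t + post if duration is None else min(duration, moment_t + post)
    return (start, end)

def weave(tiles, order):
    """Compile tiles into a weave video in a user-defined order."""
    return [tiles[i] for i in order]
```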

NEURAL NETWORK FOR AUDIO AND VIDEO DUBBING WITH 3D FACIAL MODELLING
20230015971 · 2023-01-19

A computer-implemented method includes obtaining source video data comprising a plurality of image frames, and using a face tracker to detect one or more instances of faces within respective sequences of image frames of the source video data. For a first instance of a given face detected within a first sequence of image frames, the method includes determining a framewise location and size of the first instance of the given face in the first sequence of image frames, using a neural renderer to obtain replacement video data comprising a replacement instance of the given face, and using the determined framewise location and size to replace at least part of the first instance of the given face with at least part of the replacement instance of the given face.
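
The replacement step can be sketched as: for each frame where the tracker reports a box, overwrite that region with the renderer's output. The dict-based track structure and 2-D pixel grids below are illustrative stand-ins for real tracker and neural-renderer outputs:

```python
def replace_faces(frames, track, replacements):
    """Overwrite the tracked face box in each frame with a replacement
    patch, using the framewise location and size. `track` maps a frame
    index to a (row, col, height, width) box; `replacements` maps a
    frame index to a patch of that size."""
    out = []
    for i, frame in enumerate(frames):
        frame = [row[:] for row in frame]   # copy so the input is untouched
        if i in track:
            r, c, h, w = track[i]
            patch = replacements[i]
            for dr in range(h):
                for dc in range(w):
                    frame[r + dr][c + dc] = patch[dr][dc]
        out.append(frame)
    return out
```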

System for the automated, context sensitive, and non-intrusive insertion of consumer-adaptive content in video

Described herein is a method and system for automated, context-sensitive, and non-intrusive insertion of consumer-adaptive content in video. The system assesses 'context' in the video a consumer is viewing through multiple modalities and metadata about the video. It analyzes relevance for a consumer based on factors such as the end-user's profile information, content history, social media activity, consumer interests, and professional or educational background, using patterns drawn from multiple sources. The system also establishes local context through search techniques that localize sufficiently large, homogeneous regions in the image that do not obscure protagonists or objects in focus but are viable candidate regions for inserting the intended content. This makes relevant, curated content available to a user effortlessly, without hampering the viewing experience of the main video.
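
The local-context search can be sketched as a sliding-window scan for a low-variance (homogeneous) block that does not overlap a saliency mask marking protagonists or in-focus objects. Grayscale grids, the window size, and the variance threshold are all illustrative assumptions:

```python
def find_insert_region(img, salient, win=2, var_thresh=1.0):
    """Return the (row, col) top-left of the first win x win block that
    is homogeneous (variance <= var_thresh) and contains no salient
    pixel, or None if no candidate region exists."""
    rows, cols = len(img), len(img[0])
    for r in range(rows - win + 1):
        for c in range(cols - win + 1):
            if any(salient[r + i][c + j]
                   for i in range(win) for j in range(win)):
                continue                     # would obscure a focus object
            block = [img[r + i][c + j]
                     for i in range(win) for j in range(win)]
            mean = sum(block) / len(block)
            var = sum((p - mean) ** 2 for p in block) / len(block)
            if var <= var_thresh:
                return (r, c)
    return None
```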

VIDEO GENERATION METHOD, VIDEO PLAYING METHOD, VIDEO GENERATION DEVICE, VIDEO PLAYING DEVICE, ELECTRONIC APPARATUS AND COMPUTER-READABLE STORAGE MEDIUM
20230224429 · 2023-07-13

A video generation method, a video playing method, a video generation device, a video playing device, an electronic apparatus and a computer-readable storage medium are provided. The video generation method includes: acquiring a first video; applying Gaussian-blur special-effect processing to the first video to obtain a second video; superimposing an additional dynamic effect on the second video to generate a third video, where the additional dynamic effect is a dynamic image that presents information related to the first video; and splicing the first video with the third video to generate a fourth video, where the image area of the fourth video comprises a first image area configured to display an image of the first video and a second image area configured to display an image of the third video.
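
Two of the steps above can be sketched in miniature: the blur step as a 1-D Gaussian smoothing pass over a pixel row (a real pipeline applies a 2-D kernel to every frame), and the splicing step as placing the first-video image beside the third-video image to form a frame with two image areas. The kernel weights and list-based frames are illustrative:

```python
def gaussian_blur_row(row, kernel=(0.25, 0.5, 0.25)):
    """1-D Gaussian smoothing with edge clamping."""
    n = len(row)
    out = []
    for i in range(n):
        left = row[max(0, i - 1)]
        right = row[min(n - 1, i + 1)]
        out.append(kernel[0] * left + kernel[1] * row[i] + kernel[2] * right)
    return out

def splice(frame_a, frame_b):
    """Join two frames side by side: frame_a fills the first image area,
    frame_b the second."""
    return [ra + rb for ra, rb in zip(frame_a, frame_b)]
```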

Apparatuses and methods for selectively inserting text into a video resume
11557323 · 2023-01-17

Aspects relate to apparatuses and methods for selectively inserting text into a video resume. An exemplary apparatus includes a processor and a memory communicatively connected to the processor, the memory containing instructions configuring the processor to receive a video resume from a user, divide the video resume into temporal sections, acquire a plurality of textual inputs from the user, wherein the plurality of textual inputs pertain to the same user as the received video resume, classify the plurality of textual inputs to corresponding temporal sections of the received video resume, and display, as a function of the classification, the received video resume with the corresponding plurality of textual inputs.
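
The classification step can be sketched as matching each textual input to the temporal section whose transcript it most resembles. The keyword-overlap scoring below is a deliberately simple stand-in for whatever classifier the apparatus actually uses; section transcripts as plain strings are an assumption:

```python
def classify_inputs(sections, inputs):
    """Assign each textual input to the temporal section whose transcript
    shares the most words with it (keyword-overlap heuristic).

    sections: list of per-section transcript strings.
    Returns a dict mapping section index -> list of assigned inputs.
    """
    assignments = {}
    for text in inputs:
        words = set(text.lower().split())
        best = max(range(len(sections)),
                   key=lambda i: len(words & set(sections[i].lower().split())))
        assignments.setdefault(best, []).append(text)
    return assignments
```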