G11B27/02

Tracker assisted image capture

A method for picture processing is described. A first tracking area is obtained. A second tracking area is also obtained. The method includes beginning to track the first tracking area and the second tracking area. Picture processing is performed once the portion of the first tracking area that overlaps the second tracking area exceeds a threshold.
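The overlap-threshold trigger described above can be sketched as follows. This is a minimal illustration only, assuming tracking areas are axis-aligned boxes given as (x, y, w, h); the function names and the default threshold are hypothetical, not from the patent.

```python
# Hypothetical sketch of an overlap-threshold trigger for tracked areas.
# Boxes are (x, y, w, h) tuples; all names are illustrative assumptions.

def overlap_fraction(a, b):
    """Fraction of box `a` covered by box `b`."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    area_a = aw * ah
    return (ix * iy) / area_a if area_a else 0.0

def should_process(first_area, second_area, threshold=0.5):
    """Trigger picture processing once the overlap passes the threshold."""
    return overlap_fraction(first_area, second_area) >= threshold
```

For example, two 10x10 boxes offset horizontally by 5 overlap by half, which meets a 0.5 threshold.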

VIDEO REENACTMENT TAKING INTO ACCOUNT TEMPORAL INFORMATION
20220392490 · 2022-12-08 ·

Apparatuses, methods, and computer readable media for inserting identity information from a source image (static image or video) (301) into a destination video (302), while mimicking motion of the destination video (302). In an apparatus embodiment, an identity encoder (304) is configured to encode identity information of the source image (301). When the source image (301) is a multi-frame static image or a video, an identity code aggregator (307) is positioned at an output of the identity encoder (304), and produces an identity vector (314). A driver encoder (313) is coupled to the destination (driver) video (302), and has two components: a pose encoder (305) configured to encode pose information of the destination video (302), and a motion encoder (315) configured to separately encode motion information of the destination video (302). The driver encoder (313) produces two vectors: a pose vector (308) and a motion vector (316). A neural network generator (310) has three inputs: the identity vector (314), the pose vector (308), and the motion vector (316). The neural network generator (310) is configured to generate, in response to these three inputs, a composite video (303) comprising identity information of the source image (301) inserted into the destination video (302), where the composite video (303) has substantially the same temporal information as the destination video (302).
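The data flow among the claimed components can be sketched as below. The encoder and generator bodies here are placeholder stubs (simple averages and differences standing in for learned networks); every function name and numeric detail is an illustrative assumption, not the patent's implementation. The point is only the wiring: identity codes are aggregated into one vector, pose and motion are encoded separately, and the generator consumes all three while preserving the destination video's temporal length.

```python
# Illustrative data-flow sketch of the described architecture.
# All bodies are stubs; names are assumptions, not the patent's networks.

def identity_encoder(frames):
    # One "identity code" per source frame (stub: per-frame mean).
    return [sum(f) / len(f) for f in frames]

def identity_code_aggregator(codes):
    # Aggregate per-frame codes into a single identity vector.
    return [sum(codes) / len(codes)]

def driver_encoder(destination_video):
    # Pose encoded per frame; motion encoded separately from frame deltas.
    pose_vector = [sum(f) / len(f) for f in destination_video]
    motion_vector = [b - a for a, b in zip(pose_vector, pose_vector[1:])]
    return pose_vector, motion_vector

def generator(identity_vec, pose_vec, motion_vec):
    # Stub generator: one composite "frame" per destination frame, so the
    # output keeps the destination video's temporal structure.
    return [(identity_vec[0], p) for p in pose_vec]

source_frames = [[1.0, 2.0], [3.0, 4.0]]            # multi-frame source
destination = [[0.0, 2.0], [1.0, 3.0], [2.0, 4.0]]  # driver video
identity_vec = identity_code_aggregator(identity_encoder(source_frames))
pose_vec, motion_vec = driver_encoder(destination)
composite = generator(identity_vec, pose_vec, motion_vec)
```

Note the composite has the same number of frames as the destination video, mirroring the claim that temporal information is preserved.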

VIDEO TRANSLATION METHOD AND APPARATUS, STORAGE MEDIUM, AND ELECTRONIC DEVICE

A video translation method includes: converting speech in a video to be translated into text; displaying the text together with first time information, second time information, and a reference translation of the text; in response to a user operation on the text or the reference translation, displaying an editing area that allows the user to input a translation; as the user inputs text, providing a translation suggestion drawn from the reference translation; when a confirmation operation by the user for the translation suggestion is detected, using the translation suggestion as the translation result and displaying it; when a non-confirmation operation by the user for the translation suggestion is detected, receiving a translation input by the user that differs from the translation suggestion, using the input translation as the translation result and displaying it, and updating the reference translation in the translation area according to the input translation.
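The confirm/non-confirm branch for a single subtitle segment can be sketched as follows. This is a minimal sketch under assumed names; the real method operates on displayed UI elements, which are abstracted away here.

```python
# Hypothetical sketch of the suggestion confirmation flow for one segment.
# Function and argument names are illustrative assumptions.

def resolve_translation(suggestion, user_input=None):
    """Return (translation_result, updated_reference).

    Confirmation path: the user accepts the suggestion, which becomes the
    translation result. Non-confirmation path: the user types a different
    translation, which becomes the result, and the reference translation
    is updated to match it.
    """
    if user_input is None or user_input == suggestion:
        return suggestion, suggestion   # confirmation operation
    return user_input, user_input       # non-confirmation: update reference
```

A design note: updating the reference translation on the non-confirmation path is what lets later suggestions reflect the user's earlier corrections.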

INFORMATION PROCESSING DEVICE AND VIDEO EDITING METHOD
20220362676 · 2022-11-17 ·

A ring buffer 136 records a game video provided by running game software 110 together with time information. When the unlock condition of a trophy, a virtual award, is satisfied, a trophy processing section 124 gives the trophy to the user playing the game. A video acquiring section 140 reads, from the ring buffer 136, the video including the game image from the time at which the unlock condition is satisfied, and records that video in a second recording section 160. A video processing section 152 carries out an editing process on the video recorded in the second recording section 160.
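The ring-buffer capture can be sketched as below: frames are recorded continuously with timestamps, old frames are discarded when capacity is reached, and a clip around the trophy unlock time is copied out. The class name, capacity, and clip window are assumptions for illustration.

```python
import collections

# Minimal sketch of timestamped ring-buffer capture around a trophy unlock.
# Names and the capacity/window values are illustrative assumptions.

class GameVideoRingBuffer:
    def __init__(self, capacity=30):
        # deque with maxlen silently drops the oldest frame when full
        self.frames = collections.deque(maxlen=capacity)

    def record(self, timestamp, frame):
        self.frames.append((timestamp, frame))

    def clip(self, unlock_time, before=5, after=5):
        """Frames within [unlock_time - before, unlock_time + after]."""
        return [(t, f) for t, f in self.frames
                if unlock_time - before <= t <= unlock_time + after]

buf = GameVideoRingBuffer(capacity=10)
for t in range(20):                 # record 20 frames; only the last 10 kept
    buf.record(t, f"frame{t}")
trophy_clip = buf.clip(unlock_time=15, before=3, after=2)
```

The fixed-capacity buffer is what makes retroactive capture possible: the moment the unlock condition fires, footage from before that moment already exists and only needs to be copied to durable storage.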

PERSONALIZED VIDEOS FEATURING MULTIPLE PERSONS

Provided are systems and methods for providing personalized videos featuring multiple persons. An example method includes enabling a communication chat between a user of a computing device and at least one further user of at least one further computing device, receiving a user selection of a video from one or more personalized videos, receiving an image of a source face and a further image of a further source face, modifying the image of the source face to generate an image of a modified source face, modifying the further image of the further source face to generate an image of a modified further source face, replacing, in the video, a target face with the image of the modified source face and at least one further target face with the image of the modified further source face to generate a personalized video, and sending the personalized video to the at least one further user.
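The multi-person replacement step can be sketched as a mapping from target faces to modified source faces applied across the video's frames. The modification step is a stub here, and all names are hypothetical; real face modification and compositing are far more involved.

```python
# Hypothetical sketch of multi-person face replacement in a selected video.
# `modify` stands in for the source-face modification step; all names are
# illustrative assumptions.

def modify(source_face):
    return f"modified({source_face})"

def personalize(video_frames, target_to_source):
    """Replace each target face with its modified source face, frame by frame."""
    mapping = {t: modify(s) for t, s in target_to_source.items()}
    return [[mapping.get(face, face) for face in frame]
            for frame in video_frames]

frames = [["targetA", "targetB"], ["targetB"]]
result = personalize(frames, {"targetA": "alice", "targetB": "bob"})
```

Building the target-to-source mapping once, then applying it per frame, keeps each target face consistently paired with the same modified source face throughout the video.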
