G06T13/00

Method for displaying an animation during the starting phase of an electronic device and associated electronic device
11523180 · 2022-12-06

A method for displaying an animation by a display chip of an electronic device, which includes a non-volatile memory and a random-access memory. The display chip includes a video output register and a display register. The method includes a first static programming phase including configuring the video output register; writing n images in the memory, n being an integer higher than or equal to two; writing a plurality of nodes into the memory, such that each node includes the address in the memory of at least one portion of an image, as well as the address of the following node in the memory, the last node including the address in the random-access memory of the first node; and configuring the display register. The method also includes a second phase in which the n images are read by the display chip via the display register, to display the animation.
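A minimal sketch of the node chain described above: each node holds the memory address of one image (or image portion) plus the address of the following node, and the last node points back to the first so the display chip can loop through the n frames. Addresses, node stride, and frame size here are hypothetical placeholders, not values from the patent.

```python
FRAME_SIZE = 0x1000  # hypothetical size of one frame buffer, in bytes
NODE_SIZE = 8        # hypothetical node stride: frame address + next address

def build_node_ring(frame_base, node_base, n):
    """Build n nodes; node i references frame i, last node links to node 0."""
    nodes = []
    for i in range(n):
        frame_addr = frame_base + i * FRAME_SIZE
        next_addr = node_base + ((i + 1) % n) * NODE_SIZE  # wrap to first
        nodes.append({"frame": frame_addr, "next": next_addr})
    return nodes

ring = build_node_ring(0x8000_0000, 0x2000_0000, 3)
```

Because the last node's `next` field is the address of the first node, a display engine walking the chain revisits the frames indefinitely without further CPU involvement.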

Method for generating special effect program file package, method for generating special effect, electronic device, and storage medium

A method for generating a special effect program file package and a method for generating a special effect are provided. The method for generating a special effect program file package includes: importing a sub-material; obtaining a parameter value of a playback parameter of the sub-material and establishing a correspondence between a display position of the sub-material and at least one predetermined key point; and generating a special effect program file package according to the sub-material, the correspondence and the parameter value. The method for generating a special effect includes: importing a special effect program file package; obtaining a parameter value of a playback parameter of a sub-material in the special effect program file package; performing key point detection on a video image; and generating a special effect of the sub-material on the video image based on the detected key point and the parameter value of the playback parameter.
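A hedged sketch of what such a special effect program file package could look like: the sub-material, its playback-parameter values, and the correspondence between its display position and predetermined key points bundled into one archive. The file layout and field names are assumptions for illustration only.

```python
import io
import json
import zipfile

def build_effect_package(sub_material_name, sub_material_bytes,
                         playback_params, key_point_ids):
    """Bundle a sub-material, its playback parameters, and its key-point
    correspondence into a single zip archive (hypothetical layout)."""
    manifest = {
        "sub_material": sub_material_name,
        "playback": playback_params,         # e.g. loop count, frame timing
        "anchor_key_points": key_point_ids,  # display-position binding
    }
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("manifest.json", json.dumps(manifest))
        zf.writestr(sub_material_name, sub_material_bytes)
    return buf.getvalue()

pkg = build_effect_package("sticker.png", b"\x89PNG...",
                           {"loops": 3, "frame_ms": 40}, [33, 42])
```

At generation time, an effect engine would read the manifest back, run key point detection on each video image, and render the sub-material at the positions of the anchored key points using the stored playback parameters.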

Switch control for animations
11520473 · 2022-12-06 · ·

In one general aspect, a method can include receiving, in a user interface of a first page of an application executing on a computing device, a selection of an animation option, and receiving, in a user interface of a second page of the application executing on the computing device, a selection of an icon. In response to receiving the selection of the icon, the method can further include launching a third page of the application, and performing an animation of a visual presentation of the launching of the third page of the application from the second page of the application. The animation can be based on the received animation option selection.
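The flow above can be sketched as follows: an animation option chosen on one page is stored, and later consulted when an icon tap on another page launches a third page. The option names and page identifiers are hypothetical, not taken from the patent.

```python
ANIMATION_OPTIONS = {"none", "slide", "zoom"}  # hypothetical options

class App:
    def __init__(self):
        self.animation_option = "none"
        self.launch_log = []

    def select_animation(self, option):
        """First page: record the user's animation option."""
        if option not in ANIMATION_OPTIONS:
            raise ValueError(f"unknown animation option: {option}")
        self.animation_option = option

    def on_icon_tap(self, target_page):
        """Second page: launch the target page, animating the transition
        according to the previously selected option."""
        self.launch_log.append(
            f"launch {target_page} with {self.animation_option} animation")

app = App()
app.select_animation("zoom")   # chosen on the first page
app.on_icon_tap("page3")       # icon tapped on the second page
```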

UTILIZING A MACHINE LEARNING MODEL TO DETERMINE ANONYMIZED AVATARS FOR EMPLOYMENT INTERVIEWS

A device receives interviewer data, associated with interviewers conducting interviews with interviewees, that includes data identifying avatars presented to the interviewers. The device receives interviewee data, associated with the interviewees, that includes data identifying genders of the interviewees. The device processes the interviewer data and the interviewee data, with a model, to generate unbiased training data, and trains a machine learning model, with the unbiased training data, to generate a trained machine learning model. The device receives particular interviewer data identifying a particular role, location, and/or gender of a particular interviewer, and receives particular interviewee data identifying a gender of a particular interviewee. The device processes the particular interviewer data and the particular interviewee data, with the trained machine learning model, to determine one or more anonymized avatars to present to the particular interviewer, and performs one or more actions based on the one or more anonymized avatars.
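An illustrative-only sketch of the inference step described above: a trained model (mocked here as a simple lookup table) maps particular interviewer and interviewee attributes to one or more anonymized avatars, with a neutral fallback. The attribute names and avatar labels are assumptions, not the patent's actual model or features.

```python
def choose_avatars(model, interviewer, interviewee):
    """Return anonymized avatars for this interviewer/interviewee pairing.

    `model` stands in for the trained machine learning model; here it is a
    dict keyed on (role, location, interviewee gender).
    """
    features = (interviewer["role"], interviewer["location"],
                interviewee["gender"])
    return model.get(features, ["neutral-avatar"])

mock_model = {("engineer", "NYC", "female"): ["avatar-a", "avatar-b"]}
avatars = choose_avatars(
    mock_model,
    {"role": "engineer", "location": "NYC", "gender": "male"},
    {"gender": "female"},
)
```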

VIRTUAL OBJECT LIP DRIVING METHOD, MODEL TRAINING METHOD, RELEVANT DEVICES AND ELECTRONIC DEVICE
20220383574 · 2022-12-01

A virtual object lip driving method performed by an electronic device includes: obtaining a speech segment and target face image data about a virtual object; and inputting the speech segment and the target face image data into a first target model to perform a first lip driving operation, so as to obtain first lip image data about the virtual object driven by the speech segment. The first target model is trained in accordance with a first model and a second model, the first model is a lip-speech synchronization discriminative model with respect to lip image data, and the second model is a lip-speech synchronization discriminative model with respect to a lip region in the lip image data.
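A hedged sketch of the training idea: the first target model's loss combines two lip-speech synchronization discriminators, one scoring the full lip image data and one scoring only the cropped lip region. Every function below is a placeholder stand-in, not the patent's actual networks.

```python
def train_step(generate, disc_full, disc_lip_region, speech, face, crop):
    """One training step of the first target model (sketch).

    `generate` performs the first lip driving operation; `disc_full` and
    `disc_lip_region` are the two sync discriminators (first and second
    models); `crop` extracts the lip region from generated frames.
    """
    frames = generate(speech, face)                 # first lip driving op
    loss_full = 1.0 - disc_full(speech, frames)     # full lip image data
    loss_region = 1.0 - disc_lip_region(speech, crop(frames))  # lip region
    return loss_full + loss_region                  # combined sync loss

# Toy stand-ins: discriminators return fixed sync scores in [0, 1].
loss = train_step(lambda s, f: "frames",
                  lambda s, x: 0.8,
                  lambda s, x: 0.6,
                  "speech", "face", lambda x: x)
```

Supervising both the whole image and the cropped region would push the generator to keep global appearance plausible while tightening sync in the area that matters most, the lips.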

FRAME INTERPOLATION FOR RENDERED CONTENT

One embodiment of the present invention sets forth a technique for performing frame interpolation. The technique includes generating (i) a first set of feature maps based on a first set of rendering features associated with a first key frame, (ii) a second set of feature maps based on a second set of rendering features associated with a second key frame, and (iii) a third set of feature maps based on a third set of rendering features associated with a target frame. The technique also includes applying one or more neural networks to the first, second, and third set of feature maps to generate a set of mappings from a first set of pixels in the first key frame to a second set of pixels in the target frame. The technique further includes generating the target frame based on the set of mappings.
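A minimal NumPy sketch of the pipeline above: feature maps are derived from rendering features of the two key frames and the target frame, a (mocked) network turns them into a per-pixel mapping from the first key frame into the target frame, and the target frame is produced by gathering pixels through that mapping. The "network" here is an identity-mapping placeholder, not the patented model.

```python
import numpy as np

H, W = 4, 4  # toy frame size

def feature_maps(rendering_features):
    """Toy feature extraction: normalize rendering features to [0, 1]."""
    return rendering_features.astype(np.float32) / 255.0

def mock_network(f1, f2, ft):
    """Stand-in for the neural networks: returns an (H, W, 2) array of
    key-frame pixel coordinates for each target-frame pixel (identity)."""
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    return np.stack([ys, xs], axis=-1)

def interpolate(key1, key2, target_features):
    f1 = feature_maps(key1)             # first key frame features
    f2 = feature_maps(key2)             # second key frame features
    ft = feature_maps(target_features)  # target frame features
    mapping = mock_network(f1, f2, ft)
    # Generate the target frame by gathering mapped key-frame pixels.
    return key1[mapping[..., 0], mapping[..., 1]]

key1 = np.arange(H * W).reshape(H, W)
target = interpolate(key1, key1, key1)
```

With the identity mapping the target frame reproduces the first key frame; a learned network would instead predict correspondences that warp key-frame pixels to their positions at the target frame's time.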
