Patent classifications
G06T13/80
Media animation selection using a graph
A media sequence includes media items arranged in a sequence. A graph is generated to represent animations available for the media items in the media sequence. The graph includes nodes that represent the available animations. The animations to be used in generating the media sequence are selected via selection of a path through the graph, and the media sequence is generated using the selected animations.
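The abstract describes choosing animations by selecting a path through a graph whose nodes are candidate animations. A minimal sketch of one way such a selection could work, assuming per-item candidate lists and a toy transition cost between consecutive animations (all names here are illustrative, not from the patent):

```python
# Hypothetical sketch: pick one animation per media item by finding the
# path through the candidate graph with the lowest total transition cost.
from itertools import product

# One list of candidate animations (graph nodes) per media item.
ANIMATIONS = [
    ["fade_in", "slide_in"],    # item 0
    ["fade_mid", "slide_mid"],  # item 1
    ["fade_out", "slide_out"],  # item 2
]

def transition_cost(a, b):
    """Toy edge weight: prefer transitions within the same motion family."""
    return 0 if a.split("_")[0] == b.split("_")[0] else 1

def select_path(candidates):
    """Exhaustively choose the path with minimal total transition cost."""
    best_path, best_cost = None, float("inf")
    for path in product(*candidates):
        cost = sum(transition_cost(a, b) for a, b in zip(path, path[1:]))
        if cost < best_cost:
            best_path, best_cost = list(path), cost
    return best_path, best_cost

path, cost = select_path(ANIMATIONS)
```

A real system would likely use a shortest-path algorithm rather than exhaustive enumeration, but the structure is the same: nodes are available animations, edges link animations of adjacent items, and the chosen path determines the animations used to generate the sequence.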
Animation preparing device, animation preparing method and recording medium
The animation preparing device includes at least one processor. The processor acquires exercise data concerning an exercise that a user has done or is doing. When a plurality of causes exist for a characteristic point of the user detected based on the acquired exercise data, the processor prepares, from the exercise data, a first user animation of the exercise rendered from a first direction of line of sight to indicate a first cause for the characteristic point, and a second user animation rendered from a second direction of line of sight, different from the first, to indicate a second cause for the characteristic point that is distinct from the first cause.
Using text for avatar animation
Systems and processes for animating an avatar are provided. An example process of animating an avatar includes, at an electronic device having one or more processors and memory, receiving text, determining an emotional state, and generating, using a neural network, a speech data set representing the received text and a set of parameters representing one or more movements of an avatar based on the received text and the determined emotional state.
METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR ADAPTIVE VENUE ZOOMING IN A DIGITAL MAP INTERFACE
A method, apparatus, and computer program product are provided for adaptive zoom control for zooming in on a venue beyond the maximum zoom level available in a digital map. An apparatus may be provided including at least one processor and at least one non-transitory memory including computer program code instructions. The computer program code instructions may be configured to, when executed, cause the apparatus to at least: provide for presentation of a map of a region including a venue; receive an input corresponding to a zoom-in action to view an enlarged portion of the region, where the enlarged portion of the region includes the venue; and in response to receiving the input corresponding to a zoom-in action to view the enlarged portion of the region, transition from the presentation of the map of the region to a presentation of a venue object corresponding to the venue.
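The core decision in this abstract is when to transition from the map presentation to the venue object: a zoom-in input that would exceed the map's maximum zoom level while the venue is in view triggers the switch. A minimal sketch of that decision, with all function and parameter names assumed for illustration:

```python
# Hypothetical sketch: decide which presentation to show after a zoom-in
# input, switching to a venue object once the requested zoom exceeds the
# maximum zoom level available in the digital map.

def presentation_for(requested_zoom, max_map_zoom, venue_in_view):
    """Return 'venue_object' when zooming past the map's limit onto a venue."""
    if requested_zoom > max_map_zoom and venue_in_view:
        return "venue_object"
    return "map"

# Zooming within the map's range keeps the map presentation;
# zooming past it while a venue fills the enlarged portion switches views.
```

This captures only the transition condition; the patent's apparatus would also handle rendering the venue object itself and zoom interactions within it.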
METHODS, APPARATUS AND SYSTEM FOR ANALYTICS REPLAY UTILIZING RANDOM SAMPLING
Methods, systems, and computer program products for visually representing and displaying data are described. The visual representation may be a data animation. A data query may be submitted, a time measurement for processing the query may be obtained, and a sample size of the query may be adjusted based on the time measurement and a frame refresh rate of a data animation. A data animation may be generated based on one or more results of the query.
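The abstract's feedback loop — time the query, compare against the animation's frame budget, and scale the sample size accordingly — can be sketched as follows. The scaling rule and the bounds are assumptions for illustration, not the patent's method:

```python
# Hypothetical sketch: adjust a query's random-sample size so that query
# time stays within the per-frame budget of a data animation.

def adjust_sample_size(sample_size, query_time_ms, frame_rate_hz,
                       min_size=100, max_size=100_000):
    """Scale the sample size toward the per-frame time budget.

    A scale factor > 1 means the query finished with headroom, so the
    sample can grow; < 1 means it overran the frame interval, so shrink.
    """
    frame_budget_ms = 1000.0 / frame_rate_hz
    scale = frame_budget_ms / query_time_ms
    new_size = int(sample_size * scale)
    return max(min_size, min(new_size, max_size))

# e.g. at 10 frames/s (100 ms budget), a 200 ms query halves the sample,
# while a 50 ms query doubles it, within the configured bounds.
```

Each animation frame would then rerun the query over the adjusted sample and render the result, keeping the replay smooth at the target refresh rate.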
ENHANCING VIDEO CHATTING
A method for a computing device to enhance video chatting includes receiving a live video stream, processing a frame in the live video stream in real time, and transmitting the frame to another computing device. Processing the frame in real time includes detecting a face, an upper torso, or a gesture in the frame, and applying a visual effect to the frame. The method includes processing a next frame in the live video stream in real time by repeating the detecting and the applying.
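The per-frame loop the abstract describes — detect a feature, apply an effect, transmit, repeat for the next frame — can be sketched as below. The detector, effect, and frame representation are stand-in stubs, not the patent's implementation:

```python
# Hypothetical sketch of the described real-time pipeline: for each frame
# of a live stream, detect a feature (here, a stubbed face check), apply
# a visual effect when one is found, and transmit the frame.

def detect_face(frame):
    """Stub detector; a real system would run a face/torso/gesture model."""
    return "face" in frame

def apply_effect(frame):
    """Stub visual effect; a real system would modify pixel data."""
    return frame + "+effect"

def process_stream(frames, send):
    """Process and transmit each frame, repeating detection per frame."""
    for frame in frames:
        if detect_face(frame):
            frame = apply_effect(frame)
        send(frame)

sent = []
process_stream(["face1", "bg", "face2"], sent.append)
```

In a live setting the same loop would run against frames as they arrive from the camera, with `send` transmitting to the other computing device.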