Patent classification: G06T13/00 (Animation)
SWEPT PARAMETER OSCILLOSCOPE
A test and measurement instrument has a user interface configured to allow a user to provide one or more user inputs, a display to display results to the user, a memory, and one or more processors configured to execute code that causes the one or more processors to receive a waveform array containing waveforms resulting from sweeping one or more parameters from a set of parameters, recover a clock signal from the waveform array, generate a waveform image for each waveform, render the waveform images into video frames to produce an image array of the video frames, select at least some of the video frames to form a video sequence, and play the video sequence on a display. A method of animating waveform data includes receiving a waveform array containing waveforms resulting from sweeping one or more parameters from a set of parameters, recovering a clock signal from the waveforms, generating a waveform image from each of the waveforms, rendering the waveform images into video frames to produce an image array of the video frames, selecting at least some of the video frames to play as a video sequence, and playing the video sequence on a display.
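The claimed pipeline (acquire a family of waveforms across a parameter sweep, render each waveform to an image, play the images back as video) maps naturally onto a few lines of plotting code. Below is a minimal sketch assuming NumPy and Matplotlib; the swept parameter (a sine amplitude), the test signal, and the frame rate are invented for illustration, and the clock-recovery step is omitted.

```python
# Minimal sketch of the swept-parameter animation pipeline (hypothetical
# signal and parameter choices; clock recovery omitted for brevity).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Simulated waveform array: one acquisition per value of the swept parameter
# (here, the amplitude of a noisy 5 MHz sine swept across 30 acquisitions).
t = np.linspace(0, 1e-6, 500)                    # 1 us record, 500 samples
sweep_values = np.linspace(0.5, 2.0, 30)         # swept parameter values
waveform_array = [a * np.sin(2 * np.pi * 5e6 * t) + 0.05 * np.random.randn(t.size)
                  for a in sweep_values]

# Render one waveform image (video frame) per acquisition, then play the
# frames back as a video sequence.
fig, ax = plt.subplots()
line, = ax.plot(t, waveform_array[0])
ax.set_xlabel("time (s)")
ax.set_ylabel("amplitude (V)")
ax.set_ylim(-2.5, 2.5)

def update(i):
    line.set_ydata(waveform_array[i])
    ax.set_title(f"sweep value = {sweep_values[i]:.2f}")
    return (line,)

anim = FuncAnimation(fig, update, frames=len(waveform_array), interval=100)
plt.show()
```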
ELECTRONIC DEVICE FOR GENERATING MOUTH SHAPE AND METHOD FOR OPERATING THEREOF
An electronic device includes at least one processor, and at least one memory operatively connected to the at least one processor and storing instructions executable by the at least one processor, where the at least one processor is configured to acquire voice data to be synthesized with at least one first image, generate a plurality of mouth shape candidates by using the voice data, select a mouth shape candidate among the plurality of mouth shape candidates, generate at least one second image based on the selected mouth shape candidate and at least a portion of each of the at least one first image, and generate at least one third image by applying at least one super-resolution model to the at least one second image.
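The generate/select/super-resolve flow can be sketched as three stages. In the sketch below, all three model calls (generate_mouth_candidates, score_candidate, super_resolve) are hypothetical placeholders built from random data and nearest-neighbor upsampling; they stand in for the patent's learned models, which are not specified here.

```python
# Skeleton of the mouth-shape pipeline; every function below is a hypothetical
# placeholder, not a model or API from the patent.
import numpy as np

def generate_mouth_candidates(voice_features: np.ndarray, n: int = 5) -> list:
    """Stand-in for a generative model mapping voice data to n candidate
    low-resolution mouth-region images."""
    return [np.random.rand(32, 32, 3) for _ in range(n)]

def score_candidate(candidate: np.ndarray, reference_face: np.ndarray) -> float:
    """Stand-in for a selection metric (e.g. lip-sync or identity consistency)."""
    return -float(np.abs(candidate.mean() - reference_face.mean()))

def super_resolve(image: np.ndarray, scale: int = 4) -> np.ndarray:
    """Stand-in for a super-resolution model (nearest-neighbor upsampling)."""
    return image.repeat(scale, axis=0).repeat(scale, axis=1)

voice_features = np.random.rand(80)           # e.g. one mel-spectrogram frame
first_image = np.random.rand(128, 128, 3)     # the face image to animate

candidates = generate_mouth_candidates(voice_features)                 # generate
best = max(candidates, key=lambda c: score_candidate(c, first_image))  # select
second_image = first_image.copy()
second_image[48:80, 48:80] = best             # composite mouth into the face
third_image = super_resolve(second_image)     # final high-resolution frame
```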
AUDIO FILTER EFFECTS VIA SPATIAL TRANSFORMATIONS
An audio system of a client device applies transformations to audio received over a computer network. The transformations (e.g., HRTFs) effect changes in apparent source positions of the received audio, or of segments thereof. Such transformations may be used to achieve “animation” of audio, in which the source positions of the audio or audio segments appear to change over time (e.g., circling around the listener). Additionally, segmentation of audio into distinct semantic audio segments, and application of separate transformations for each audio segment, can be used to intuitively differentiate the different audio segments by causing them to sound as if they emanated from different positions around the listener.
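A faithful implementation would convolve each audio segment with measured HRTFs for the desired position and crossfade between filter states as the position moves. The sketch below substitutes a much cruder approximation, interaural level and time differences applied block by block, to show the "circling the listener" animation; every parameter value is invented for illustration.

```python
# Crude block-wise spatialization: the source appears to circle the listener
# once over a 2 s clip. Real systems would use HRTF convolution instead.
import numpy as np

sr = 44100
t = np.arange(sr * 2) / sr                    # 2 s of audio
mono = 0.3 * np.sin(2 * np.pi * 440 * t)      # the received mono signal

azimuth = 2 * np.pi * t / t[-1]               # animated source position
max_itd = int(0.0007 * sr)                    # ~0.7 ms max interaural delay
block = 512

left = np.zeros_like(mono)
right = np.zeros_like(mono)
for start in range(0, mono.size - block, block):
    seg = mono[start:start + block]
    pan = np.sin(azimuth[start])              # -1 = hard left, +1 = hard right
    delay = int(abs(pan) * max_itd)           # far ear hears the sound later
    if pan >= 0:                              # source on the listener's right
        right[start:start + block] += seg
        left[start + delay:start + delay + block] += (1 - 0.6 * pan) * seg
    else:                                     # source on the listener's left
        left[start:start + block] += seg
        right[start + delay:start + delay + block] += (1 + 0.6 * pan) * seg

stereo = np.stack([left, right], axis=1)      # ready for stereo playback
```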
System and method for immersive telecommunications
A system and method for immersive telecommunication that tracks the movement of objects and/or persons with sensors. The tracked movement is then used to animate an avatar that represents the person or object. Movement may be tracked in real time, which at least reduces communication latency. The sensors comprise any type of movement sensor that may be attached to a person and/or object to track motion, including, but not limited to, an IMU (Inertial Measurement Unit), an accelerometer, a gyroscope, or other such sensors.
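The core loop is simple: stream orientation samples from a worn IMU and retarget them onto the avatar's skeleton, so only low-bandwidth pose data, not rendered video, crosses the network. The sketch below is a toy stand-in; Quaternion, Joint, and read_imu_sample are all hypothetical names, and the "sensor" is a synthetic rotation.

```python
# Toy sensor-to-avatar loop; Quaternion, Joint, and read_imu_sample are
# hypothetical stand-ins for a real IMU driver and avatar rig.
import math
from dataclasses import dataclass

@dataclass
class Quaternion:
    w: float
    x: float
    y: float
    z: float

@dataclass
class Joint:
    name: str
    orientation: Quaternion

def read_imu_sample(t: float) -> Quaternion:
    """Synthetic 'IMU': the tracked limb slowly rotates about the Z axis."""
    half = 0.5 * t
    return Quaternion(math.cos(half), 0.0, 0.0, math.sin(half))

# Each incoming sample immediately retargets the joint, so the avatar is
# animated in real time from pose data rather than transmitted video.
forearm = Joint("right_forearm", Quaternion(1.0, 0.0, 0.0, 0.0))
for step in range(10):
    forearm.orientation = read_imu_sample(step * 0.033)   # ~30 Hz updates
    print(forearm.name, forearm.orientation)
```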
Animation modification for optical see-through displays
In one implementation, a method of displaying an animation is performed at a device including an optical see-through display, one or more processors, and a non-transitory memory. The method includes receiving a request to display a first animation of an object exhibiting a response characteristic. The method includes determining a metric characterizing an amount of processing power for the device to display the first animation on the optical see-through display. The method includes, in response to a determination that the metric exceeds a threshold associated with the device, selecting a second animation of the object exhibiting the response characteristic. The method includes displaying the second animation.
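The described selection reduces to a budget check with a characteristic-preserving fallback. A minimal sketch under assumed names (Animation, estimated_cost, and the GPU-milliseconds budget are illustrative, not from the patent):

```python
# Budget check with a characteristic-preserving fallback; all names and cost
# units (predicted GPU ms per frame) are illustrative.
from dataclasses import dataclass

@dataclass
class Animation:
    name: str
    response_characteristic: str
    estimated_cost: float          # predicted GPU ms per frame

def select_animation(requested, alternatives, budget):
    if requested.estimated_cost <= budget:
        return requested
    # Metric exceeds the device threshold: fall back to the cheapest
    # alternative that exhibits the same response characteristic.
    viable = [a for a in alternatives
              if a.response_characteristic == requested.response_characteristic
              and a.estimated_cost <= budget]
    return min(viable, key=lambda a: a.estimated_cost) if viable else requested

full = Animation("fluid_splash_full", "splash", estimated_cost=9.0)
lite = Animation("fluid_splash_sprite", "splash", estimated_cost=2.0)
print(select_animation(full, [lite], budget=5.0).name)   # fluid_splash_sprite
```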
Generating digital avatar
In one embodiment, a method includes, by one or more computing systems: receiving one or more non-video inputs, where the one or more non-video inputs include at least one of a text input, an audio input, or an expression input, accessing a K-NN graph including several sets of nodes, where each set of nodes corresponds to a particular semantic context out of several semantic contexts, identifying one or more of the semantic contexts based on the one or more non-video inputs, determining one or more actions to be performed by a digital avatar based on the one or more identified semantic contexts, generating, in real-time in response to receiving the one or more non-video inputs and based on the determined one or more actions, a video output of the digital avatar including one or more human characteristics corresponding to the one or more identified semantic contexts, and sending, to a client device, instructions to present the video output of the digital avatar.
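One plausible reading of the context-identification step is a k-nearest-neighbor vote over node embeddings grouped by semantic context. The toy sketch below makes that concrete with invented embeddings, contexts, and action names; none of it is specified by the abstract.

```python
# Toy k-NN context lookup; embeddings, contexts, and action names are invented.
import numpy as np

# "K-NN graph": node embeddings grouped by the semantic context they belong to.
nodes = {
    "greeting": np.array([[0.9, 0.1], [0.8, 0.2]]),
    "farewell": np.array([[0.1, 0.9], [0.2, 0.8]]),
}
actions = {"greeting": "wave_and_smile", "farewell": "nod_and_wave_goodbye"}

def embed(text: str) -> np.ndarray:
    """Stand-in for a real text/audio/expression encoder."""
    return np.array([1.0, 0.0]) if "hello" in text else np.array([0.0, 1.0])

def identify_context(query: np.ndarray, k: int = 2) -> str:
    # Collect (distance, context) pairs over all nodes, keep the k nearest,
    # and let them vote on the semantic context.
    pairs = [(float(np.linalg.norm(query - e)), ctx)
             for ctx, embs in nodes.items() for e in embs]
    votes = [ctx for _, ctx in sorted(pairs)[:k]]
    return max(set(votes), key=votes.count)

context = identify_context(embed("hello there"))   # identify semantic context
print(context, "->", actions[context])             # greeting -> wave_and_smile
```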