G06T13/00

Generating digital avatar

In one embodiment, a method includes, by one or more computing systems: receiving one or more non-video inputs, where the one or more non-video inputs include at least one of a text input, an audio input, or an expression input; accessing a K-NN graph including several sets of nodes, where each set of nodes corresponds to a particular semantic context out of several semantic contexts; identifying, based on the K-NN graph, one or more semantic contexts associated with the one or more non-video inputs; determining one or more actions to be performed by a digital avatar based on the one or more identified semantic contexts; generating, in real-time in response to receiving the one or more non-video inputs and based on the determined one or more actions, a video output of the digital avatar including one or more human characteristics corresponding to the one or more identified semantic contexts; and sending, to a client device, instructions to present the video output of the digital avatar.
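A minimal Python sketch of the context-identification step described above, under stated assumptions: node embeddings, the context labels, and the context-to-action mapping are all hypothetical stand-ins, and a simple k-nearest-neighbors majority vote is used as one plausible reading of how the K-NN graph identifies a semantic context.

```python
import math
from collections import Counter

# Hypothetical node sets: each node stores an embedding and the semantic
# context its node set represents (labels here are illustrative only).
GRAPH_NODES = [
    {"embedding": (0.9, 0.1), "context": "greeting"},
    {"embedding": (0.8, 0.2), "context": "greeting"},
    {"embedding": (0.1, 0.9), "context": "farewell"},
    {"embedding": (0.2, 0.8), "context": "farewell"},
]

# Assumed mapping from an identified semantic context to avatar actions.
CONTEXT_ACTIONS = {"greeting": ["smile", "wave"], "farewell": ["nod", "wave"]}

def identify_context(input_embedding, k=3):
    """Return the majority semantic context among the k nearest nodes."""
    ranked = sorted(
        GRAPH_NODES,
        key=lambda n: math.dist(n["embedding"], input_embedding),
    )
    votes = Counter(n["context"] for n in ranked[:k])
    return votes.most_common(1)[0][0]

def determine_actions(input_embedding):
    """Map the identified context to the actions the avatar should perform."""
    return CONTEXT_ACTIONS[identify_context(input_embedding)]
```

In a real system the input embedding would come from encoding the text, audio, or expression input; here it is passed in directly to keep the sketch self-contained.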

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM

A device and a method for performing AR image display control in which flicker in a boundary region between a virtual object and a real object is not noticeable are provided. The device includes a real object detection unit that executes processing of detecting a real object in the real world, and an augmented reality (AR) image display control unit that generates an AR image visually recognized in such a manner that a virtual object exists in the same space as the real object and outputs the AR image to a display unit. In a case where the position of the real object detected by the real object detection unit is on the near side in the depth direction with respect to the position where the virtual object to be displayed on the display unit is arranged, the AR image display control unit outputs an additional virtual object to the display unit on the near side in the depth direction with respect to the position of the real object in such a manner as to hide at least a partial region of the boundary region between the virtual object and the real object.
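The control rule above can be sketched in a few lines of Python. This is a hypothetical illustration, not the patented implementation: `SceneObject`, the depth convention (smaller = nearer), and the `margin` parameter are all assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    depth: float  # distance from the viewer; smaller = nearer

def plan_display(virtual: SceneObject, real: SceneObject, margin: float = 0.1):
    """When the real object is on the near side of the virtual object, add an
    additional virtual object slightly nearer than the real object so that it
    hides the flicker-prone boundary region between the two."""
    layers = [virtual]
    if real.depth < virtual.depth:  # real object is nearer in the depth direction
        cover = SceneObject("boundary_cover", depth=real.depth - margin)
        layers.append(cover)
    # Render back-to-front: farthest object first.
    return sorted(layers, key=lambda o: o.depth, reverse=True)
```

When the real object is farther than the virtual object, no extra object is emitted, since the boundary region is not at risk of visible flicker in that configuration.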

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
20220414962 · 2022-12-29

An information processing system includes a memory and processing circuitry configured to generate a display image by rendering plural objects in a virtual space according to the object information; acquire a user input relating to a user object of the plural objects, the user object associated with the user; determine movement of the user object in the virtual space in accordance with the user input; generate positional relationship information indicating a positional relationship in the virtual space between the user object and a predetermined object of the plural objects; store first rendering information for rendering a first animation with a combination of the predetermined object and the user object; and generate the first animation according to the first rendering information in a case that a distance between the predetermined object and the user object is shorter than or equal to a predetermined distance according to the positional relationship information.
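The distance-triggered selection described above can be sketched as follows. This is a minimal illustration under stated assumptions: the threshold value, the position tuples, and the `first_rendering_info` placeholder are hypothetical stand-ins for the stored rendering information.

```python
import math

TRIGGER_DISTANCE = 2.0  # assumed predetermined distance

def positional_relationship(user_pos, object_pos):
    """Distance in the virtual space between the user object and the
    predetermined object (the positional relationship information)."""
    return math.dist(user_pos, object_pos)

def select_animation(user_pos, object_pos, first_rendering_info):
    """Return the stored first rendering information when the user object is
    within the predetermined distance; otherwise render normally (None)."""
    if positional_relationship(user_pos, object_pos) <= TRIGGER_DISTANCE:
        return first_rendering_info  # render the combined animation
    return None
```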

System and method for modifying content of a virtual environment

A system for modifying data representing a virtual environment includes: an environment navigation unit operable to control navigation within the virtual environment to generate one or more viewpoints within the virtual environment, an environment identification unit operable to identify one or more aspects of the geometry of the virtual environment visible in the one or more viewpoints, a geometry evaluation unit operable to evaluate the visibility of one or more aspects of the geometry based upon the identification for each of one or more viewpoints, and a data modification unit operable to modify one or more elements of data representing the virtual environment.
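One way to read the evaluation-and-modification loop above is sketched below. This is an assumption-laden simplification: each viewpoint is reduced to the set of geometry element ids visible from it, and the modification chosen here (dropping elements that never reach a minimum visibility count) is just one possible policy.

```python
from collections import Counter

def evaluate_visibility(viewpoints):
    """Count, per geometry element, how many viewpoints it is visible in.
    `viewpoints` is a list of sets of visible element ids, standing in for
    the identification step run on each generated viewpoint."""
    counts = Counter()
    for visible in viewpoints:
        counts.update(visible)
    return counts

def modify_environment(elements, viewpoints, min_views=1):
    """Keep only elements that are visible in at least `min_views`
    viewpoints; rarely seen geometry is removed from the data."""
    counts = evaluate_visibility(viewpoints)
    return [e for e in elements if counts[e] >= min_views]
```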

COMMUNICATION ASSISTANCE SYSTEM AND COMMUNICATION ASSISTANCE PROGRAM

A communication assistance system according to one embodiment assists communication performed by a user using a terminal. The communication assistance system includes a control data generating unit configured to generate control data for controlling a movement of an avatar of the user that is displayed on the terminal and participates in the communication, based on video data including voice data of the user and image data of the user. When there is a deficiency of image information in the image data of the user, the control data generating unit supplements the deficient image information by using the voice data of the user and a learned model, and the learned model is generated by using training data such that the control data of the avatar is output when the voice data of the user is input.
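The fallback behavior described above can be sketched as a per-frame decision. Everything here is a hypothetical stand-in: `None` marks a deficient image frame, and `toy_voice_model` is a placeholder for the learned voice-to-motion model.

```python
def generate_control_data(image_frames, voice_samples, voice_model):
    """Drive the avatar from image data when it is available, and supplement
    frames with deficient image information (None) by running the voice
    sample through the learned model instead."""
    control = []
    for frame, voice in zip(image_frames, voice_samples):
        if frame is not None:
            control.append({"source": "image", "pose": frame})
        else:
            # Deficient image information: supplement from voice data.
            control.append({"source": "voice", "pose": voice_model(voice)})
    return control

def toy_voice_model(sample):
    """Placeholder learned model: maps loudness to mouth openness."""
    return {"mouth_open": min(1.0, abs(sample))}
```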

Animations
11532113 · 2022-12-20

At least certain embodiments of the present disclosure include a method for animating a display region, windows, or views displayed on a display of a device. The method includes starting at least two animations. The method further includes determining the progress of each animation. The method further includes completing each animation based on a single timer.
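A minimal sketch of the single-timer scheme described above, under stated assumptions: the `Animation` class and the tick interval are illustrative, with the key point being that one shared timer callback advances every animation, checks each animation's progress, and reports which animations completed.

```python
class Animation:
    """An animation with its own duration, advanced from a shared timer."""
    def __init__(self, name, duration):
        self.name = name
        self.duration = duration
        self.elapsed = 0.0

    @property
    def progress(self):
        """Fraction complete, clamped to 1.0."""
        return min(1.0, self.elapsed / self.duration)

def tick(animations, dt):
    """Single timer callback: advance every running animation by dt and
    return the names of those that completed on this tick."""
    completed = []
    for anim in animations:
        if anim.progress < 1.0:
            anim.elapsed += dt
            if anim.progress >= 1.0:
                completed.append(anim.name)
    return completed
```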

Generating and rendering motion graphics effects based on recognized content in camera view finder

Systems and methods are described for providing co-presence in an augmented reality environment. The method may include receiving a visual scene within a viewing window depicting a multi-frame real-time visual scene captured by a camera onboard an electronic device associated with the augmented reality environment, identifying a plurality of elements of the visual scene, detecting at least one graphic indicator associated with at least one of the plurality of elements, detecting at least one boundary associated with the at least one element, and generating, in the viewing window and based on the detection of the at least one graphic indicator, Augmented Reality (AR) motion graphics within the detected boundary. In response to determining that content related to the at least one element is available, the method may include retrieving the content and visually indicating an AR tracked control on the at least one element within the viewing window.
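The pipeline above can be sketched as a single pass over the identified elements. This is a hypothetical simplification: elements are plain dictionaries, the boundary is a pre-detected tuple, and `content_lookup` stands in for the content-availability check.

```python
def process_viewfinder(elements, content_lookup):
    """For each identified element carrying a graphic indicator, emit AR
    motion graphics clipped to its detected boundary, and attach an AR
    tracked control when related content is available."""
    overlays = []
    for element in elements:
        if not element.get("graphic_indicator"):
            continue
        overlay = {
            "element": element["id"],
            "graphics_bounds": element["boundary"],  # clip region for AR graphics
        }
        content = content_lookup.get(element["id"])
        if content is not None:
            overlay["tracked_control"] = {"content": content}
        overlays.append(overlay)
    return overlays
```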