G06T13/80

ENHANCING VIDEO CHATTING
20180012390 · 2018-01-11

A method for a computing device to enhance video chatting includes receiving a live video stream, processing a frame in the live video stream in real-time, and transmitting the frame to another computing device. Processing the frame in real-time includes detecting a face, an upper torso, or a gesture in the frame, and applying a visual effect to the frame. The method includes processing a next frame in the live video stream in real-time by repeating the detecting and the applying.
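The per-frame loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the frame representation, `detect`, `apply_effect`, and `process_stream` are all assumed names standing in for real detectors and effect renderers.

```python
# Hypothetical sketch of the per-frame pipeline: detect features, apply a
# visual effect, transmit, then repeat for the next frame. Frames are dicts
# with boolean feature flags; all names here are illustrative assumptions.

def detect(frame):
    """Return which features (face, upper torso, gesture) the stub finds."""
    return [f for f in ("face", "upper_torso", "gesture") if frame.get(f)]

def apply_effect(frame, features):
    """Attach a visual-effect entry for each detected feature."""
    enhanced = dict(frame)  # leave the input frame untouched
    enhanced["effects"] = [f"overlay_for_{f}" for f in features]
    return enhanced

def process_stream(frames, transmit):
    """Process each live frame in real time: detect, apply, transmit."""
    for frame in frames:
        transmit(apply_effect(frame, detect(frame)))

sent = []
process_stream([{"face": True}, {"gesture": True, "face": False}], sent.append)
```

After the loop, each transmitted frame carries the effects matched to whatever was detected in it.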

SOFTWARE WITH MOTION RECORDING FEATURE TO SIMPLIFY ANIMATION
20230237726 · 2023-07-27

Features of a software program designed to facilitate animation by users of handheld or portable electronic devices are described. The software program may be in the form of instructions suitable to be carried out by the microprocessor of such a device in response to inputs from the user. The software provides a motion recording feature in which a user input in the form of a pointer, touch point, or other position-related input is monitored over the course of a recording session, converted to a data string of attribute values, and stored in memory. The software displays an animation of a virtual object over an animation period by retrieving the data string of attribute values from the memory and causing the processor to generate the animation using the retrieved data string of attribute values.
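The record-then-replay idea can be sketched as a round trip through a data string of attribute values. The semicolon/comma encoding below is an assumption for illustration; the abstract does not specify a serialization format.

```python
# Hedged sketch of the motion-recording feature: pointer samples captured
# during a recording session are serialized into a data string of attribute
# values, stored, then retrieved and parsed back to drive the animation.

def record(samples):
    """Convert (x, y) pointer samples into a stored data string."""
    return ";".join(f"{x},{y}" for x, y in samples)

def replay(data_string):
    """Parse the stored string back into per-frame object positions."""
    frames = []
    for token in data_string.split(";"):
        x, y = token.split(",")
        frames.append((int(x), int(y)))
    return frames

stored = record([(0, 0), (5, 3), (10, 6)])   # recording session
animation = replay(stored)                   # animation period
```

Storing the session as a flat string keeps the recorded motion cheap to persist in device memory and trivial to stream back frame by frame.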

SYSTEMS AND METHODS FOR CREATING A 2D FILM FROM IMMERSIVE CONTENT

Systems, methods, and non-transitory computer-readable media can obtain data associated with a computer-based experience. The computer-based experience can be based on interactive real-time technology. At least one virtual camera can be configured within the computer-based experience in a real-time engine. Data associated with an edit cut of the computer-based experience can be obtained based on content captured by the at least one virtual camera. A plurality of shots that correspond to two-dimensional content can be generated from the edit cut of the computer-based experience in the real-time engine. Data associated with a two-dimensional version of the computer-based experience can be generated with the real-time engine based on the plurality of shots. The two-dimensional version can be rendered based on the generated data.
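The edit-cut-to-shots step described above can be modeled as slicing captured time ranges into shots and laying them on a flat 2D timeline. The data shapes below are assumptions; the patent's real-time engine is far richer than this sketch.

```python
# Illustrative pipeline for deriving a 2D version from virtual-camera capture:
# an "edit cut" is modeled as (start, end) time ranges, each range becomes a
# shot, and the shots are concatenated into one 2D timeline. Names are assumed.

def shots_from_edit_cut(edit_cut):
    """Turn each (start, end) range of the edit cut into a shot record."""
    return [{"start": s, "end": e, "duration": e - s} for s, e in edit_cut]

def render_2d_version(shots):
    """Lay the shots end to end on a single two-dimensional timeline."""
    timeline, t = [], 0
    for shot in shots:
        timeline.append({"at": t, "duration": shot["duration"]})
        t += shot["duration"]
    return timeline, t  # shot sequence and total running time

shots = shots_from_edit_cut([(0, 4), (10, 13), (20, 25)])
timeline, total = render_2d_version(shots)
```

Non-contiguous captures (gaps between ranges) collapse into a continuous film, which is the essential difference between the interactive experience and its 2D cut.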

BIOMETRIC ENABLED VIRTUAL REALITY SYSTEMS AND METHODS FOR DETECTING USER INTENTIONS AND MODULATING VIRTUAL AVATAR CONTROL BASED ON THE USER INTENTIONS FOR CREATION OF VIRTUAL AVATARS OR OBJECTS IN HOLOGRAPHIC SPACE, TWO-DIMENSIONAL (2D) VIRTUAL SPACE, OR THREE-DIMENSIONAL (3D) VIRTUAL SPACE
20230236667 · 2023-07-27

Biometric enabled virtual reality (VR) systems and methods are disclosed for detecting user intention(s) and modulating virtual avatar control based on the user intention(s) for creation of virtual avatar(s) or object(s) in holographic space, two-dimensional (2D) virtual space, or three-dimensional (3D) virtual space. A virtual representation of an intended motion of a user, corresponding to an intention of muscle activation of the user, is determined based on analysis of biometric signal data of the user as collected by a biometric detection device. The virtual representation of the intended motion is used to modulate virtual avatar control or output to create at least one of a virtual avatar representing aspect(s) of the user or an object manipulated by the user in a holographic space, virtual 2D space, or virtual 3D space. The avatar or the object is created based on: (1) the biometric signal data of a user, or (2) user-specific specifications as provided by the user.
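The signal-to-intention-to-control chain can be reduced to a toy example. The threshold, motion names, and avatar state below are illustrative assumptions; real systems infer intent from rich biometric signals, not a single scalar.

```python
# Hedged sketch of modulating avatar control from biometric signal data:
# a normalized muscle-activation sample is mapped to an intended motion,
# which then updates the avatar state. All names/values are assumptions.

def intended_motion(signal_level, threshold=0.5):
    """Map a normalized biometric sample to an intended motion."""
    return "raise_arm" if signal_level >= threshold else "rest"

def modulate_avatar(avatar, motion):
    """Apply the intended motion to a copy of the avatar state."""
    updated = dict(avatar)
    updated["arm_raised"] = (motion == "raise_arm")
    return updated

avatar = modulate_avatar({"arm_raised": False}, intended_motion(0.8))
```

The key point the sketch preserves is indirection: the avatar is driven by the inferred intention, not by the raw signal itself.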

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

An information processing apparatus determines, according to an operation of a user, a target state after transition of a display element displayed on a screen of a display apparatus, and displays a transition moving image that indicates a procedure of change of the display element from an initial state to the determined target state. The transition moving image includes a procedure of change of the display element from the initial state to a first intermediate state and a procedure of change from a second intermediate state to the target state, and does not include a procedure of change from the first intermediate state to the second intermediate state.
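The abbreviated transition can be demonstrated with a simple interpolation that omits the middle stretch of the change. The cut fractions (30%–70%) are assumptions chosen for illustration; the abstract only requires that the first-to-second-intermediate portion be excluded.

```python
# Sketch of the shortened transition moving image: interpolate from the
# initial state up to a first intermediate state, skip ahead, then
# interpolate from a second intermediate state to the target state.

def transition_frames(initial, target, steps=10, cut_from=0.3, cut_to=0.7):
    """Return interpolated values, omitting the middle of the change."""
    frames = []
    for i in range(steps + 1):
        t = i / steps
        if cut_from < t < cut_to:
            continue  # first->second intermediate stretch is not shown
        frames.append(initial + (target - initial) * t)
    return frames

frames = transition_frames(0.0, 100.0, steps=10)
```

Dropping the middle frames shortens the animation while still conveying both where the element started and where it ends up.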

GENERATION AND IMPLEMENTATION OF 3D GRAPHIC OBJECT ON SOCIAL MEDIA PAGES
20230237754 · 2023-07-27

Disclosed herein is a digital object generator that builds unique digital objects based on user-specific input. The unique digital objects are part of a graphic presentation to users. The user-specific input is positioned on pre-configured regions of a 3D object such as a polygon. Examples of the pre-configured regions include faces of the 3D object, orbits around the 3D object, or identifiable regions associated with the 3D object. The 3D object is rendered as part of a social media page and enables social interactions between users. In the social media page, the 3D object rotates, displaying its regions/faces to page visitors. In some embodiments, the 3D object is implemented as a pet or companion of a user avatar in a virtual, augmented, or extended reality space.
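The place-content-on-regions-then-rotate behavior can be modeled with a small class. The class, method names, and four-face polygon are assumptions for illustration only.

```python
# Illustrative model of the rotating 3D social object: user-specific input is
# placed on pre-configured regions (here, polygon faces), and each rotation
# step presents the next face to page visitors. Names are assumptions.

class SocialObject3D:
    def __init__(self, num_faces):
        self.faces = [None] * num_faces  # pre-configured regions
        self.front = 0                   # index of the face shown to visitors

    def place(self, face_index, user_content):
        """Position user-specific input on one region of the object."""
        self.faces[face_index] = user_content

    def rotate(self):
        """Advance the rotation so the next face is displayed."""
        self.front = (self.front + 1) % len(self.faces)
        return self.faces[self.front]

obj = SocialObject3D(num_faces=4)
obj.place(1, "profile photo")
shown = obj.rotate()  # face 1 now faces the visitor
```

Cycling through faces lets one object surface several pieces of user content in a single slot on the page, which is what makes the rotating presentation useful on a social media layout.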