Patent classifications
G06T2213/08
DISPLAYING VIRTUAL CONTENT IN AUGMENTED REALITY USING A MAP OF THE WORLD
An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor configured to communicate with one or more individual augmented reality display systems and to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems.
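The location-dependent passing of a portion of the world model can be sketched as a spatial query over the map points. The function name, tuple representation, and radius criterion below are illustrative assumptions, not the patent's implementation:

```python
import math

def select_world_portion(map_points, location, radius):
    """Return the subset of map points within `radius` of `location`.

    A minimal sketch of passing a location-dependent portion of a
    passable world model to an individual display system.
    """
    portion = []
    for point in map_points:
        # Keep only map points near the display system's location.
        if math.dist(point, location) <= radius:
            portion.append(point)
    return portion
```

In practice the portion would likely be selected by spatial index (octree, grid cells) rather than a linear scan, but the location-based filtering idea is the same.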
SYSTEMS AND METHODS FOR CROSS-APPLICATION AUTHORING, TRANSFER, AND EVALUATION OF RIGGING CONTROL SYSTEMS FOR VIRTUAL CHARACTERS
Various examples of cross-application systems and methods for authoring, transferring, and evaluating rigging control systems for virtual characters are disclosed. A first application, which implements a first rigging control protocol, can provide an input associated with a request for a behavior from the rig for the virtual character. The input can be converted to be compatible with a second rigging control protocol that is different from the first rigging control protocol. One or more control systems can be evaluated based on the input to determine an output to provide the requested behavior from the virtual character rig. The one or more control systems can be defined according to the second rigging control protocol. The output can be converted to be compatible with the first rigging control protocol and provided to the first application to manipulate the virtual character according to the requested behavior.
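The convert-evaluate-convert round trip described above has the shape of an adapter pipeline. The following is a hypothetical sketch, assuming the two protocols and the evaluation step are supplied as callables; none of these names come from the patent:

```python
import math

def evaluate_cross_application(request, to_protocol_b, evaluate_b, to_protocol_a):
    """Convert a protocol-A input to protocol B, evaluate the control
    systems defined under protocol B, and convert the output back to
    protocol A for the requesting application."""
    input_b = to_protocol_b(request)    # first protocol -> second protocol
    output_b = evaluate_b(input_b)      # evaluate rig control systems
    return to_protocol_a(output_b)      # second protocol -> first protocol

# Toy usage: protocol A expresses angles in degrees, protocol B in
# radians, and the control system doubles the joint angle.
result = evaluate_cross_application(
    90.0, math.radians, lambda r: r * 2.0, math.degrees)
```

The round trip keeps each application unaware of the other's rigging control protocol; only the two converters need to know both.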
GENERATING TEXTURED POLYGON STRIP HAIR FROM STRAND-BASED HAIR FOR A VIRTUAL CHARACTER
Computer generated (CG) hair for a virtual character can include strand-based (instanced) hair in which many thousands of digital strands represent real human hair strands. Strand-based hair can appear highly realistic, but rendering strand-based hair in real-time presents challenges. Techniques for generating textured polygon strip (poly strip) hair for a virtual character can use as an input previously-generated strand-based hair for the virtual character. Poly strips can be generated for a sampled set of strands in the strand-based hair. Additional poly strips may be generated near hairlines or part lines. Hair textures from a hair texture library can be matched to the poly strips. The matched textures can be scaled and packed into a region of texture space (e.g., a square region), which provides improved computer access, efficiency, and speed. A rendering engine can use the poly strips and the packed hair textures to render the character's hair in real-time.
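Packing the matched, scaled hair textures into a square region of texture space can be illustrated with a simple greedy shelf packer. The patent does not specify a packing algorithm; this is an assumed stand-in:

```python
def pack_textures(sizes, atlas_size):
    """Greedy shelf packing of (width, height) textures into a square
    atlas of side `atlas_size`. Returns (x, y) origins, one per texture,
    or raises if they do not fit."""
    placements = []
    x = y = shelf_height = 0
    for w, h in sizes:
        if x + w > atlas_size:          # current shelf is full: start a new one
            x, y = 0, y + shelf_height
            shelf_height = 0
        if y + h > atlas_size or w > atlas_size:
            raise ValueError("textures do not fit in atlas")
        placements.append((x, y))
        x += w
        shelf_height = max(shelf_height, h)
    return placements
```

Keeping all hair textures in one square atlas lets the renderer bind a single texture and look up strips by offset, which is the access-efficiency benefit the abstract mentions.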
Parameterized animation modifications
Techniques for animation are provided. A first trajectory for a first element in a first animation is determined. A first approximation is generated based on the first trajectory, and the first approximation is modified based on an updated state of the first element. The first trajectory is then refined based on the modified first approximation.
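The approximate-modify-refine loop can be sketched with the simplest possible approximation, a straight line through the trajectory's endpoints, shifted to the element's updated end state and blended back into the original samples. All specifics here are illustrative assumptions:

```python
def refine_trajectory(trajectory, updated_end):
    """Approximate a 1-D trajectory by its endpoints, shift the
    approximation to the element's updated end state, and refine the
    original samples by blending in the offset along the path."""
    offset = updated_end - trajectory[-1]   # change implied by the updated state
    n = len(trajectory) - 1
    # Blend the offset in proportionally: the start stays fixed,
    # the end lands exactly on the updated state.
    return [p + offset * (i / n) for i, p in enumerate(trajectory)]
```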
MULTI-MODAL MODEL FOR DYNAMICALLY RESPONSIVE VIRTUAL CHARACTERS
The disclosed embodiments relate to a method for controlling a virtual character (or "avatar") using a multi-modal model. The multi-modal model may receive various input information relating to a user and process that information using multiple internal models. The multi-modal model may combine the internal models so that the virtual character makes believable and emotionally engaging responses. A link to the virtual character may be embedded in a web browser, and the avatar may be dynamically generated when a user selects it to interact with the virtual character. A report may be generated for a client, providing insights into the characteristics of users interacting with a virtual character associated with the client.
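One plausible way to combine several internal models is a weighted blend of each model's scored candidate responses. The patent abstract does not disclose a combination rule, so the following is purely a hypothetical sketch:

```python
def combine_models(model_outputs, weights):
    """Blend per-model response scores into one chosen response.

    model_outputs: {model_name: {response: score}}
    weights: {model_name: weight}
    """
    totals = {}
    for name, scores in model_outputs.items():
        w = weights.get(name, 1.0)
        for response, score in scores.items():
            totals[response] = totals.get(response, 0.0) + w * score
    # Pick the response with the highest combined score.
    return max(totals, key=totals.get)

choice = combine_models(
    {"emotion": {"smile": 0.9, "wave": 0.1},
     "speech":  {"smile": 0.2, "wave": 0.4}},
    {"emotion": 1.0, "speech": 1.0})
```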
PERFORMANCE-BASED CODE ALTERATION FOR ANIMATION CONTROL RIGS
An animation system is provided for generating an animation control rig configured to manipulate a skeleton of an animated object. A partition separation process enables software changes to be inserted into uncompiled computer code associated with the animation control rig. Analysis of the uncompiled computer code is performed relative to a performance metric. Based on the analysis in view of the performance metric, one or more partitions are determined in the uncompiled computer code, partitioning the code into separate code blocks. The uncompiled code is separated at the partitions and updated with the software change. The updated code is compiled to generate the animation control rig.
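Choosing partition points relative to a performance metric can be illustrated with a toy cost-budget scheme: split the code into blocks whenever accumulated cost would exceed a budget. The metric, budget, and greedy rule are assumptions for illustration only:

```python
def choose_partitions(block_costs, budget):
    """Given per-block costs (the performance metric) and a budget,
    return indices at which to partition the code so that each
    partition's summed cost stays within the budget."""
    partitions = []
    accumulated = 0.0
    for i, cost in enumerate(block_costs):
        if accumulated + cost > budget and accumulated > 0:
            partitions.append(i)   # split before block i
            accumulated = 0.0
        accumulated += cost
    return partitions
```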
FORCED CONTIGUOUS DATA FOR EXECUTION OF EVALUATION LOGIC USED IN ANIMATION CONTROL
An aspect provides a computer-implemented method for constructing evaluation logic associated with an animation software package. The method comprises receiving at least one software module, the at least one software module including at least one evaluator; writing the at least one software module to at least one executable code object; and maintaining data for the at least one software module in a contiguous block of memory for use by the software module.
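The contiguous-block idea can be sketched in Python by backing every evaluator's working data with slices of one shared buffer instead of separate per-module allocations. The class and buffer layout are illustrative assumptions:

```python
from array import array

class Evaluator:
    """An evaluator whose data is a view into one shared contiguous
    block of memory, rather than its own allocation."""

    def __init__(self, block, offset, length):
        # A zero-copy window onto the shared block (offsets in items).
        self.view = memoryview(block)[offset:offset + length]

    def evaluate(self):
        # Stand-in evaluation logic: reduce over the module's data.
        return sum(self.view)

# One contiguous block backs the data of every evaluator.
block = array("d", [1.0, 2.0, 3.0, 4.0])
first = Evaluator(block, 0, 2)
second = Evaluator(block, 2, 2)
```

Keeping the modules' data adjacent in one block improves cache locality when the evaluators run back to back, which is the motivation the abstract implies.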
OPERATING ANIMATION CONTROLS USING EVALUATION LOGIC
An aspect provides a computer-implemented method for operating animation controls associated with an animation control rig. The method comprises determining a node graph used to operate one or more animation controls; receiving an executable code object configured to replace at least two nodes of the node graph at runtime, wherein the executable code object is configured to execute animation control inputs used to control the one or more animation controls at runtime as a single execution block configured to merge at least two data evaluation processes into a single data evaluation process to reduce execution overhead; processing the animation control inputs using the executable code object; and operating the one or more animation controls via the single execution block in response to the animation control inputs.
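Replacing two or more graph nodes with a single execution block amounts to fusing their evaluation functions into one callable, so per-node dispatch overhead is paid once. A minimal sketch, with assumed names:

```python
def merge_nodes(node_fns):
    """Fuse a chain of node evaluation functions into one executable
    code object, so a single execution block replaces per-node
    evaluation in the graph."""
    def fused(value):
        for fn in node_fns:
            value = fn(value)   # each former node's evaluation, inlined
        return value
    return fused

# Two former nodes (offset, then scale) become one execution block.
fused_block = merge_nodes([lambda x: x + 1, lambda x: x * 2])
```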
Animation production system
To produce animations in a virtual space, an animation production method comprising: a step of placing a virtual camera in a virtual space; a step of placing one or more objects in the virtual space; a step of detecting an input of a user from at least one of a head mounted display and a controller worn by the user; a step of accepting at least one choice of the object in response to the input; and a step of removing the chosen object from the virtual space in response to the input.
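The sequence of steps above maps onto a small scene-editing interface. The class and method names are illustrative, not the application's API:

```python
class VirtualSpace:
    """Toy model of the method's steps: place a camera and objects,
    accept a selection derived from user input, and remove the
    selected object from the space."""

    def __init__(self):
        self.camera = None
        self.objects = []

    def place_camera(self, camera):
        self.camera = camera

    def place_object(self, obj):
        self.objects.append(obj)

    def remove_selected(self, selected):
        # Accept the user's choice of object and remove it.
        if selected in self.objects:
            self.objects.remove(selected)

space = VirtualSpace()
space.place_camera("camera_1")
space.place_object("tree")
space.place_object("rock")
space.remove_selected("tree")
```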
VIRTUAL AND AUGMENTED REALITY SYSTEMS AND METHODS
A method for displaying virtual content to a user, the method includes determining an accommodation of the user's eyes. The method also includes delivering, through a first waveguide of a stack of waveguides, light rays having a first wavefront curvature based at least in part on the determined accommodation, wherein the first wavefront curvature corresponds to a focal distance of the determined accommodation. The method further includes delivering, through a second waveguide of the stack of waveguides, light rays having a second wavefront curvature, the second wavefront curvature associated with a predetermined margin of the focal distance of the determined accommodation.
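The relation between wavefront curvature and focal distance follows the standard optics convention that curvature in diopters is the reciprocal of distance in meters, D = 1/f. The margin value below is an assumed example; the patent leaves it predetermined but unspecified:

```python
def wavefront_curvature(focal_distance_m):
    """Wavefront curvature in diopters for a given focal distance in
    meters, using the standard relation D = 1 / f."""
    return 1.0 / focal_distance_m

accommodation_m = 0.5                                # determined accommodation: 0.5 m
first_curvature = wavefront_curvature(accommodation_m)   # first waveguide: 2.0 D
margin_d = 0.5                                       # assumed predetermined margin
second_curvature = first_curvature + margin_d        # second waveguide: 2.5 D
```

The second waveguide thus presents content at a depth plane offset from the accommodated focal distance by the margin, which is what lets the stack cover a range of depths with discrete waveguides.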