Patent classifications
G06T2210/28
Technologies for time-delayed augmented reality presentations
Technologies for time-delayed augmented reality (AR) presentations include determining the locations of a plurality of user AR systems within a presentation site, and determining, for each user AR system, a time delay of an AR sensory stimulus event of an AR presentation to be presented at the presentation site, based on the location of the corresponding user AR system within the presentation site. The AR sensory stimulus event is presented to each user AR system based on the determined time delay associated with that user AR system. Each user AR system generates the AR sensory stimulus event based on a timing parameter that defines the time delay for that system, such that generation of the AR sensory stimulus event is time-delayed according to the location of the user AR system within the presentation site.
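The abstract does not specify how the timing parameter is derived from location. A minimal sketch, assuming the delay mimics acoustic propagation from the event's origin (so distant users perceive the virtual stimulus in sync with real sound); the function names and the propagation model are illustrative, not from the patent:

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degrees C

def stimulus_delay(event_origin, user_location, speed=SPEED_OF_SOUND_M_S):
    """Time delay (seconds) for one user AR system, proportional to its
    distance from the point where the sensory stimulus event originates."""
    return math.dist(event_origin, user_location) / speed

def timing_parameters(event_origin, user_locations):
    """Compute a per-user timing parameter for the AR sensory stimulus event."""
    return {uid: stimulus_delay(event_origin, loc)
            for uid, loc in user_locations.items()}

# Users 34.3 m and 68.6 m from a stage at the origin get 0.1 s and 0.2 s delays.
params = timing_parameters((0.0, 0.0), {"u1": (34.3, 0.0), "u2": (68.6, 0.0)})
```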
AUGMENTED REALITY SYSTEM
An augmented reality system is provided, including a physical apparatus operable to change, detectably by a human, between a first state and a second state, and an augmented reality (AR) application. The physical apparatus includes a signal receiver for receiving a signal, and at least one controllable element operable to effect the change between the first state and the second state upon receipt of the signal. The AR application, when executed by at least one processor of a computing device having at least one camera and a display, causes the computing device to capture at least one image of the physical apparatus, generate a virtual reality object that is presented in the at least one image on the display, and transmit the signal to the physical apparatus to cause the at least one controllable element of the physical apparatus to switch between the first state and the second state.
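The claimed control flow can be modeled in a few lines. A toy sketch (class and method names are invented; image capture and overlay generation are stubbed, since the abstract does not constrain them):

```python
class PhysicalApparatus:
    """Toy model of the claimed apparatus: a controllable element that
    switches between two human-detectable states when a signal arrives."""
    def __init__(self):
        self.state = "first"

    def receive_signal(self):
        # The controllable element effects the change between the two states.
        self.state = "second" if self.state == "first" else "first"

class ARApplication:
    """Sketch of the AR application's loop on the computing device."""
    def __init__(self, apparatus):
        self.apparatus = apparatus

    def interact(self):
        image = "frame"                      # capture an image of the apparatus (stub)
        virtual_object = ("overlay", image)  # generate the overlaid virtual object (stub)
        self.apparatus.receive_signal()      # transmit the signal to the apparatus
        return virtual_object
```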
Method of adjusting grid spacing of height map for autonomous driving
A method of adjusting a grid spacing of a height map for autonomous driving may include acquiring a 2D image of a region ahead of a vehicle, generating a depth map using depth information on an object present in the 2D image, converting the generated depth map into a 3D point cloud, generating the height map by mapping the 3D point cloud onto a grid of a predetermined size, and adjusting the grid spacing of the height map in consideration of the vehicle's driving state relative to the object.
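The mapping and adjustment steps can be sketched as follows. The height-map cell rule (keep the maximum height per cell) and the spacing policy are assumptions for illustration; the abstract does not state the exact rule:

```python
import numpy as np

def build_height_map(points, spacing):
    """Map a 3D point cloud (N x 3 array of x, y, z) onto a grid with the
    given spacing, keeping the maximum height observed in each cell."""
    cells = np.floor(points[:, :2] / spacing).astype(int)
    hmap = {}
    for cell, z in zip(map(tuple, cells), points[:, 2]):
        hmap[cell] = max(hmap.get(cell, float("-inf")), float(z))
    return hmap

def adjust_spacing(base_spacing, distance_to_object_m, speed_mps):
    """Illustrative policy: refine the grid when the object is close or the
    vehicle is fast, coarsen it when the object is far and the vehicle slow."""
    factor = np.clip(distance_to_object_m / (10.0 * max(speed_mps, 1.0)), 0.25, 4.0)
    return base_spacing * float(factor)

points = np.array([[0.2, 0.1, 1.5], [0.3, 0.2, 2.0], [1.2, 0.1, 0.5]])
hmap = build_height_map(points, spacing=1.0)
```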
VOICE DRIVEN MODIFICATION OF PHYSICAL PROPERTIES AND PHYSICS PARAMETERIZATION IN A CLOSED SIMULATION LOOP FOR CREATING STATIC ASSETS IN COMPUTER SIMULATIONS
A computer simulation object, such as a chair, is described by voice or photo input to render a 2D image. Machine learning may be used to convert the voice input to the 2D image. The 2D image is converted to a 3D object, and the 3D object, or portions thereof, is used in the computer simulation, such as a computer game, as the described object. A physics engine can be used to modify the 3D object.
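The patent's 2D-to-3D conversion is learned; as a stand-in, the step can be illustrated with a naive extrusion of a 2D silhouette into a voxel volume (function name and approach are assumptions, not the patent's method):

```python
import numpy as np

def silhouette_to_voxels(mask, depth):
    """Extrude a 2D binary silhouette into a 3D voxel volume by repeating
    it along the depth axis. Only illustrates the 2D-to-3D pipeline step;
    the patent relies on learned image-to-3D conversion."""
    mask = np.asarray(mask, dtype=bool)
    return np.repeat(mask[:, :, None], depth, axis=2)

# A 2x2 "chair seat" silhouette on a 4x4 image, extruded to 2 voxels deep.
chair_mask = np.zeros((4, 4), dtype=bool)
chair_mask[1:3, 1:3] = True
voxels = silhouette_to_voxels(chair_mask, depth=2)
```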
3D MODELLING AND REPRESENTATION OF FURNISHED ROOMS AND THEIR MANIPULATION
A computer-implemented method for producing a visualisation, comprising the steps of:
a. obtaining
   i. a first element set, wherein the first element set has a first volume,
   ii. a second element set, wherein the second element set has a second volume;
b. displaying
   i. a primary representation of a three-dimensional space,
   ii. the first element set at a first position in the primary representation,
   iii. the second element set at a second position in the primary representation, wherein a distance between the first position and the second position has a first length;
c. moving the first element set from the first position to a further position in the primary representation, wherein
   i. the moving of the first element set causes the second element set to move from the second position to an even-further position in the primary representation,
   ii. the distance between the further position and the even-further position has a further length, wherein the first length and the further length vary by less than 5% with respect to each other,
   iii. less than 5% of the first volume overlaps with the second volume.
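The linked-move behavior in step c can be sketched as a shared translation, which keeps the separation exactly constant (a 0% change, well inside the claimed 5% tolerance). The function name and rigid-translation choice are assumptions:

```python
import math

def move_linked_sets(first_pos, second_pos, target_pos):
    """Move the first element set to target_pos and carry the second set
    along by the same offset, preserving the distance between them."""
    offset = [t - f for f, t in zip(first_pos, target_pos)]
    new_second = tuple(s + o for s, o in zip(second_pos, offset))
    return tuple(target_pos), new_second

# Two furnishings 2 m apart; moving the first drags the second with it.
first, second = (0.0, 0.0, 0.0), (2.0, 0.0, 0.0)
new_first, new_second = move_linked_sets(first, second, (5.0, 1.0, 0.0))
```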
BODY FITTED ACCESSORY WITH PHYSICS SIMULATION
Methods and systems are disclosed for performing operations comprising: receiving video that includes a depiction of a real-world object; generating a three-dimensional (3D) body mesh associated with the real-world object that tracks movement of the real-world object across frames of the video; obtaining an external mesh associated with an augmented reality (AR) element; determining that a first portion of the external mesh is associated with movement information of the 3D body mesh; determining that a second portion of the external mesh is associated with movement information of an external force model; deforming the first and second portions of the external mesh separately based on movement information of the 3D body mesh and movement information of the external force model; and modifying the video to include a display of the AR element based on the deformed first and second portions of the external mesh.
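The two-portion deformation can be sketched with rigid per-portion translations standing in for the patent's more general deformation (the mask-based split and the offsets are illustrative assumptions):

```python
import numpy as np

def deform_external_mesh(vertices, body_portion_mask, body_motion, force_motion):
    """Deform the two portions of the external mesh separately: vertices in
    the body portion follow the tracked 3D body mesh's motion; the remaining
    vertices follow an external force model (e.g., gravity or wind)."""
    out = np.asarray(vertices, dtype=float).copy()
    mask = np.asarray(body_portion_mask, dtype=bool)
    out[mask] += np.asarray(body_motion, dtype=float)
    out[~mask] += np.asarray(force_motion, dtype=float)
    return out

# Four vertices: the first two are rigged to the body mesh, the last two
# (say, a dangling strap of the AR accessory) respond to the force model.
verts = np.zeros((4, 3))
mask = np.array([True, True, False, False])
deformed = deform_external_mesh(verts, mask,
                                body_motion=[1.0, 0.0, 0.0],
                                force_motion=[0.0, -0.1, 0.0])
```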
Strain Based Dynamics for Rendering Special Effects
A strain-based dynamics technique, for rendering special effects, includes simulation as a function of a Green-St. Venant strain tensor constraint. The behavior of a soft body may be controlled independently of the mesh structure by assigning a different stiffness value to each constraint of the Green-St. Venant strain tensor.
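The Green-St. Venant (Green-Lagrange) strain tensor being constrained is E = ½(FᵀF − I) for a deformation gradient F. It vanishes for any rigid rotation, which is why constraining E penalizes stretch and shear without penalizing rotation. A minimal computation (the solver itself is not shown; this is only the quantity the constraints act on):

```python
import numpy as np

def green_st_venant_strain(F):
    """Green-St. Venant strain tensor E = 1/2 (F^T F - I) for a
    deformation gradient F. Zero for any pure rotation."""
    F = np.asarray(F, dtype=float)
    return 0.5 * (F.T @ F - np.eye(F.shape[0]))

# A pure rotation yields zero strain; a uniform stretch by factor s yields
# (s^2 - 1)/2 on the diagonal, e.g. 1.5 for s = 2.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
E_rotation = green_st_venant_strain(R)
E_stretch = green_st_venant_strain(2.0 * np.eye(2))
```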
NEURAL NETWORK MOTION CONTROLLER
Apparatuses, systems, and techniques to animate objects in computer-generated graphics. In at least one embodiment, one or more neural networks are trained to identify one or more forces to be applied to one or more objects based, at least in part, on training data corresponding to two or more aspects of motion of the one or more objects.
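The training setup (predict a force from two or more aspects of motion) can be illustrated with a single linear layer fit in closed form on synthetic data; the patent describes neural networks trained on such data, so the least-squares fit here is a deliberately simplified stand-in, and the drag-plus-inertia force model is invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: two aspects of motion (velocity, acceleration) per sample,
# paired with the force that produced them under a synthetic linear model.
mass, drag = 2.0, 0.5
vel = rng.uniform(-1.0, 1.0, size=(200, 1))
acc = rng.uniform(-1.0, 1.0, size=(200, 1))
force = mass * acc + drag * vel

# Stand-in for the patent's neural network: one linear layer, solved in
# closed form. A real controller would train a deeper network by SGD.
X = np.hstack([vel, acc])
weights, *_ = np.linalg.lstsq(X, force, rcond=None)

def predict_force(velocity, acceleration):
    """Predict the force to apply to an object from its motion aspects."""
    return float(np.array([velocity, acceleration]) @ weights)
```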