Patent classifications
G06T2210/28
METHOD OF ADJUSTING GRID SPACING OF HEIGHT MAP FOR AUTONOMOUS DRIVING
A method of adjusting a grid spacing of a height map for autonomous driving may include acquiring a 2D image of a region ahead of a vehicle, generating a depth map using depth information on an object present in the 2D image, converting the generated depth map into a 3D point cloud, generating the height map by mapping the 3D point cloud onto a grid having a predetermined size, and adjusting the grid spacing of the height map in consideration of the driving state of the vehicle relative to the object.
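The pipeline in this abstract can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the patented method: the camera intrinsics (`fx`, `fy`, `cx`, `cy`), the grid extent, and the two spacing values are all illustrative assumptions, and adjusting the spacing is shown simply as re-binning the same point cloud at a finer resolution.

```python
import numpy as np

def depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Back-project a depth map (H x W, metres) into an N x 3 point cloud."""
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def height_map(points, spacing, extent=20.0):
    """Bin points onto an (x, z) grid, keeping the maximum height per cell."""
    n = int(extent / spacing)
    hm = np.full((n, n), -np.inf)
    ix = np.clip((points[:, 0] + extent / 2) / spacing, 0, n - 1).astype(int)
    iz = np.clip(points[:, 2] / spacing, 0, n - 1).astype(int)
    np.maximum.at(hm, (iz, ix), -points[:, 1])  # image y points down, so negate
    return hm

depth = np.ones((4, 4))                  # toy 4x4 depth map, 1 m everywhere
pts = depth_to_point_cloud(depth)
coarse = height_map(pts, spacing=1.0)    # far from the object: coarse grid
fine = height_map(pts, spacing=0.5)      # close to the object: finer grid
```

Halving the spacing quadruples the cell count, which is the trade-off the claim manages by tying the spacing to the vehicle's driving state relative to the object.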
FUSION METHOD FOR MOVEMENTS OF TEACHER IN TEACHING SCENE
A fusion method for movements of a teacher in a teaching scene includes normalization, motion perception, and fusion of movements. To meet interaction needs in an enhanced teaching scene, this application establishes a collection of movement information and a conversion of movement position and range to realize the normalization of movements.
COMPOUND NEURAL NETWORK ARCHITECTURE FOR STRESS DISTRIBUTION PREDICTION
A neural network architecture and a method for determining a stress of a structure. The neural network architecture includes a first neural network and a second neural network. A neuron of a last hidden layer of the first neural network is connected to a neuron of a last hidden layer of the second neural network. A first data set is input into the first neural network. A second data set is input into the second neural network. Data from the last hidden layer of the first neural network is combined with data from the last hidden layer of the second neural network. The stress of the structure is determined from the combined data.
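The compound architecture described above can be sketched as two sub-networks whose last hidden layers are concatenated before a shared output layer. This is a minimal NumPy illustration under assumed details: the input dimensions, hidden sizes, ReLU activations, and random untrained weights are all placeholders, not the patent's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# First sub-network: e.g. 8 geometry features -> 16 hidden units.
W1 = rng.normal(size=(8, 16)); b1 = np.zeros(16)
# Second sub-network: e.g. 4 load features -> 16 hidden units.
W2 = rng.normal(size=(4, 16)); b2 = np.zeros(16)
# Shared head on the combined hidden activations -> scalar stress.
Wo = rng.normal(size=(32, 1)); bo = np.zeros(1)

def predict_stress(x1, x2):
    h1 = relu(x1 @ W1 + b1)                       # last hidden layer, network 1
    h2 = relu(x2 @ W2 + b2)                       # last hidden layer, network 2
    combined = np.concatenate([h1, h2], axis=-1)  # connect the two hidden layers
    return combined @ Wo + bo                     # stress from the combined data

stress = predict_stress(rng.normal(size=(5, 8)), rng.normal(size=(5, 4)))
```

Feeding the two data sets through separate sub-networks lets each learn its own representation before the shared head combines them, which is the point of joining the networks only at their last hidden layers.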
AUGMENTED REALITY SYSTEM
An augmented reality system is provided, including a physical apparatus operable to change detectably by a human between a first state and a second state, and an augmented reality application. The physical apparatus includes a signal receiver for receiving a signal, and at least one controllable element operable to effect the change between the first state and the second state upon receiving the signal. The AR application, when executed by at least one processor of a computing device, the computing device having at least one camera and a display, causes the computing device to capture at least one image of the physical apparatus, generate a virtual reality object that is presented in the at least one image on the display, and transmit the signal to the physical apparatus to cause the at least one controllable element of the physical apparatus to switch between the first state and the second state.
Voice driven modification of physical properties and physics parameterization in a closed simulation loop for creating static assets in computer simulations
A computer simulation object such as a chair is described by voice or photo input to render a 2D image. Machine learning may be used to convert voice input to the 2D image. The 2D image is converted to a 3D object and the 3D object or portions thereof are used in the computer simulation, such as a computer game, as the object such as a chair. A physics engine can be used to modify the 3D objects.
TECHNOLOGIES FOR TIME-DELAYED AUGMENTED REALITY PRESENTATIONS
Technologies for time-delayed augmented reality (AR) presentations include determining a location of each of a plurality of user AR systems located within a presentation site and determining a time delay of an AR sensory stimulus event of an AR presentation to be presented in the presentation site for each user AR system based on the location of the corresponding user AR system within the presentation site. The AR sensory stimulus event is presented to each user AR system based on the determined time delay associated with the corresponding user AR system. Each user AR system generates the AR sensory stimulus event based on a timing parameter that defines the time delay for the corresponding user AR system such that the generation of the AR sensory stimulus event is time-delayed based on the location of the user AR system within the presentation site.
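The location-based timing parameter can be illustrated with a short sketch. This is an assumed reading of the abstract, not the patented computation: here each user system's delay is simply its distance from the stimulus origin divided by a propagation speed, so the event sweeps across the presentation site like a wavefront. The choice of the speed of sound is purely illustrative.

```python
import math

SPEED = 343.0  # m/s; an acoustic-wavefront speed, chosen for illustration

def timing_parameters(origin, user_positions, speed=SPEED):
    """Return a time delay (seconds) for each user AR system, keyed by user ID."""
    ox, oy = origin
    return {
        user: math.hypot(x - ox, y - oy) / speed
        for user, (x, y) in user_positions.items()
    }

delays = timing_parameters((0.0, 0.0), {"A": (0.0, 0.0), "B": (343.0, 0.0)})
# User A stands at the origin and sees the event immediately;
# user B, 343 m away, receives a one-second timing parameter.
```

Each user system would then schedule its local rendering of the stimulus event against this timing parameter rather than a single shared trigger time.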
Map of body cavity
In one embodiment, a medical analysis system includes a display, and processing circuitry to receive a three-dimensional map of an interior surface of a cavity within a body of a living subject, positions on the interior surface being defined in a spherical coordinate system wherein each position is defined by an angular coordinate pair and an associated radial distance from an origin, project the angular coordinate pair of respective positions from the interior surface to respective locations in a two-dimensional plane according to a coordinate transformation, compute respective elevation values from the plane at the respective locations based on at least the radial distance associated with the respective projected angular coordinate pair, and render to the display an image of a partially flattened surface of the interior surface with the partially flattened surface being elevated from the plane according to the computed respective elevation values at the respective locations.
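The projection step can be sketched with one possible coordinate transformation. This is a hypothetical illustration of the abstract, not the claimed method: an equirectangular mapping is assumed (the angular pair maps directly to plane coordinates), and the elevation is taken as the radial distance relative to an assumed reference radius `r0`.

```python
def flatten(points, r0=1.0):
    """Map (theta, phi, r) surface positions to (x, y, elevation) in the plane.

    theta/phi are the angular coordinate pair (radians); r is the radial
    distance from the origin. Elevation is measured relative to a reference
    sphere of radius r0, so bumps rise above the plane and pits dip below it.
    """
    flattened = []
    for theta, phi, r in points:
        x = phi                # longitude -> horizontal plane axis
        y = theta              # colatitude -> vertical plane axis
        elevation = r - r0     # radial deviation becomes height above the plane
        flattened.append((x, y, elevation))
    return flattened

surface = [(0.5, 1.0, 1.2), (1.0, 2.0, 0.9)]
plane = flatten(surface)
```

Rendering the resulting (x, y) grid with its elevation values gives the "partially flattened" view the abstract describes: the cavity is unrolled into a plane while local surface relief is preserved as height.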
Interactive Animation Generation
An interactive animation generation method is provided. The method includes: in response to receiving an instruction for viewing a comment by a first user, obtaining information about a comment section and information about an animation material based on the instruction; in response to receiving an interaction instruction of the first user for a second user in the comment section, obtaining first information of the first user and second information of the second user; and generating an interactive animation in the comment section based on the first information, the second information, and the information about the animation material.