Patent classifications
G06T2210/28
Neural network motion controller
Apparatuses, systems, and techniques to animate objects in computer-generated graphics. In at least one embodiment, one or more neural networks are trained to identify one or more forces to be applied to one or more objects based, at least in part, on training data corresponding to two or more aspects of motion of the one or more objects.
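As an illustrative sketch only (not the patent's embodiment): a minimal model trained on data corresponding to two aspects of motion — position and velocity — to identify the force applied to an object. A real embodiment would use a deep neural network; a linear layer suffices here because the synthetic "physics" is an assumed spring-damper, with invented constants `k` and `c`.

```python
import numpy as np

rng = np.random.default_rng(0)
k, c = 4.0, 0.5                      # assumed spring and damping constants

# Training data: sampled motion states and the force that produced them.
X = rng.uniform(-1.0, 1.0, size=(256, 2))   # columns: position, velocity
y = -k * X[:, 0] - c * X[:, 1]              # ground-truth force

w = np.zeros(2)                      # model weights, one per motion aspect
lr = 0.1
for _ in range(500):                 # plain gradient descent on MSE loss
    grad = X.T @ (X @ w - y) / len(y)
    w -= lr * grad

def predict_force(position, velocity):
    """Force the trained model identifies for an object in this motion state."""
    return w @ np.array([position, velocity])
```

Because the training data is noiseless and the model is well specified, the learned weights recover the negated spring and damping constants.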
VOICE DRIVEN MODIFICATION OF PHYSICAL PROPERTIES AND PHYSICS PARAMETERIZATION IN A CLOSED SIMULATION LOOP FOR CREATING STATIC ASSETS IN COMPUTER SIMULATIONS
A computer simulation object such as a chair is described by voice or photo input to render a 2D image. Machine learning may be used to convert the voice input to the 2D image. The 2D image is converted to a 3D object, and the 3D object or portions thereof are used in the computer simulation, such as a computer game, as the described object. A physics engine can be used to modify the 3D object.
Systems and methods for blind and visually impaired person environment navigation assistance
A method, performed by a mobile device, for assisting blind or visually impaired users in navigating a room or an unfamiliar environment. The method includes a blind user acquiring one or more images using the mobile device and invoking processing algorithms. The processing algorithms include one of Multi View Stereo and Structure from Motion, whereby a 3D representation of the imaged environment is constructed. Further algorithms are applied to identify and assign attributes to objects in the imaged environment. The 3D representation is responsive to mobile device orientation. The environment is presented to the user via a touch screen, enabling the user to virtually explore it by touch, whereby touched objects are identified and associated with dimensional and other attributes.
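The geometric core of Structure from Motion / Multi View Stereo is recovering a 3D point from its projections in two images with known camera poses. A hedged sketch of that step via linear triangulation (the DLT); the camera intrinsics and poses below are invented purely for illustration, not taken from the patent.

```python
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])            # camera 1 at origin
R2 = np.eye(3)
t2 = np.array([[-1.0], [0], [0]])                            # camera 2 shifted 1 m
P2 = K @ np.hstack([R2, t2])

X_true = np.array([0.2, -0.1, 4.0, 1.0])       # homogeneous 3D point
x1 = P1 @ X_true; x1 /= x1[2]                  # its pixel in image 1
x2 = P2 @ X_true; x2 /= x2[2]                  # its pixel in image 2

def triangulate(x1, x2, P1, P2):
    """Least-squares 3D point from two pixel observations (DLT)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)    # null vector of A is the homogeneous point
    X = Vt[-1]
    return X / X[3]

X_est = triangulate(x1, x2, P1, P2)
```

With exact (noise-free) observations, the triangulated point matches the ground truth to numerical precision; a full SfM pipeline would first estimate the poses themselves from image correspondences.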
Strain based dynamics for rendering special effects
A strain-based dynamics technique for rendering special effects includes simulation as a function of a Green-St. Venant strain tensor constraint. The behavior of a soft body may be controlled independently of the mesh structure by assigning different stiffness values to each constraint of the Green-St. Venant strain tensor.
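The constraint quantity named above is the Green-St. Venant strain tensor E = ½(FᵀF − I), computed from a deformation gradient F. A minimal sketch: assigning an independent stiffness to each component of E is what lets stretch and shear response be tuned separately; the stiffness values and energy form below are illustrative assumptions, not the patent's solver.

```python
import numpy as np

def green_st_venant_strain(F):
    """Green-St. Venant strain tensor E = 1/2 (F^T F - I)."""
    return 0.5 * (F.T @ F - np.eye(3))

# A pure stretch of 10% along x: F = diag(1.1, 1, 1).
F = np.diag([1.1, 1.0, 1.0])
E = green_st_venant_strain(F)

# Per-constraint stiffness: diagonal entries weight stretch constraints,
# off-diagonal entries weight shear constraints (values are illustrative).
stiffness = np.array([[1.0, 0.5, 0.5],
                      [0.5, 1.0, 0.5],
                      [0.5, 0.5, 1.0]])
constraint_energy = 0.5 * np.sum(stiffness * E * E)
```

For this pure stretch, only the E₀₀ component is nonzero (½(1.1² − 1) = 0.105), so only the x-stretch constraint contributes energy.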
TECHNOLOGIES FOR TIME-DELAYED AUGMENTED REALITY PRESENTATIONS
Technologies for time-delayed augmented reality (AR) presentations include determining the location of each of a plurality of user AR systems within a presentation site and determining, for each user AR system, a time delay of an AR sensory stimulus event of an AR presentation based on the location of the corresponding user AR system within the presentation site. The AR sensory stimulus event is presented to each user AR system based on the determined time delay associated with that system. Each user AR system generates the AR sensory stimulus event based on a timing parameter that defines its time delay, such that generation of the event is time-delayed based on the location of the user AR system within the presentation site.
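A hedged sketch of the per-user timing computation described above: each user AR system gets a time delay derived from its location in the presentation site. Here the stimulus is modeled, as an assumption, as sound propagating from a point-source event, so the delay is simply distance over the speed of sound; the event position and seat coordinates are invented.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at ~20 degrees C

def stimulus_delay(event_pos, user_pos):
    """Timing parameter (seconds) for one user AR system,
    derived from its distance to the stimulus event."""
    distance = math.dist(event_pos, user_pos)
    return distance / SPEED_OF_SOUND

# User AR systems at different locations receive different timing parameters.
event = (0.0, 0.0, 2.0)
users = {"front_row": (0.0, 3.0, 1.5), "back_row": (0.0, 30.0, 1.5)}
delays = {name: stimulus_delay(event, pos) for name, pos in users.items()}
```

A system farther from the event receives a larger timing parameter, so every user perceives the stimulus coherently with the rest of the presentation.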
Haptic authoring tool for animated haptic media production
Systems, methods, and computer program products to perform an operation comprising: receiving input specifying one or more positional and dimensional properties of a first haptic animation object in an animation tool displaying a representation of a vibrotactile array comprising a plurality of actuators configured to output haptic feedback; computing, based on a rendering algorithm applied to the first haptic animation object, a vector profile for each of the actuators; and computing an intensity value for each of the actuators based on the vector profile of the respective actuator.
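A minimal sketch of the rendering step in the abstract: for a haptic animation object with a position and a size, compute a vector profile (the displacement from each actuator to the object) and, from it, an intensity value per actuator. The Gaussian distance falloff is an assumed rendering algorithm for illustration, not the one claimed in the patent.

```python
import math

def render_intensities(obj_pos, obj_radius, actuator_positions):
    """Per-actuator intensity for one haptic animation object."""
    intensities = []
    for ax, ay in actuator_positions:
        # Vector profile: displacement from this actuator to the object.
        vx, vy = obj_pos[0] - ax, obj_pos[1] - ay
        dist = math.hypot(vx, vy)
        # Assumed falloff: intensity decays with distance, scaled by object size.
        intensities.append(math.exp(-(dist / obj_radius) ** 2))
    return intensities

# A 2x2 vibrotactile array; the animation object sits on the first actuator.
array_2x2 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
levels = render_intensities((0.0, 0.0), 0.5, array_2x2)
```

The actuator under the object vibrates at full intensity, while actuators farther away in the array taper off smoothly.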
METHOD OF AND SYSTEM FOR FACILITATING STRUCTURED BLOCK PLAY IN A VIRTUAL REALITY ENVIRONMENT
A system and a method for facilitating structured block play in a VR environment that includes a set of computer-generated images providing a spatial representation of a predefined arrangement of blocks for analysis by a user in a VR environment, and a set of intangible, computer-generated blocks configured to be positioned by the user into a replication of the predefined arrangement in the VR environment.
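The replication step above implies a check that the user's positioned blocks match the predefined arrangement. A small sketch under assumed data shapes: each placement is a (block type, grid position) pair, and order of placement does not matter. The exact-grid matching with no tolerance is an assumption for illustration.

```python
def arrangement_matches(target, placed):
    """True when the user's placed blocks replicate the target arrangement."""
    return sorted(target) == sorted(placed)

# Predefined arrangement presented for analysis in the VR environment.
target = [("cube", (0, 0, 0)), ("cube", (1, 0, 0)), ("arch", (0, 0, 1))]

# The user's replication attempt, placed in a different order.
attempt = [("arch", (0, 0, 1)), ("cube", (1, 0, 0)), ("cube", (0, 0, 0))]
```

Sorting both placement lists makes the comparison order-independent, so the user is free to build the replication in any sequence.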
GENERATING ADDITIONAL IMAGES FROM PREVIOUSLY GENERATED IMAGES
An example operation may include one or more of: training a generative artificial intelligence (GenAI) model to generate images based on user data using a dataset of images; executing the GenAI model based on input data from a user interface of a software application to generate an image corresponding to the input data; displaying the image via the user interface; receiving feedback about the image via the user interface; and retraining the GenAI model based on the generated image and the received feedback.
Methods and Systems for Generating Surface Models of Cardiac Structures
A system for generating a surface model of a cardiac structure includes a display device, a medical device, and a model reconstruction system. The medical device includes one or more sensors configured to generate sensor output used to generate location data for points disposed on a surface of the cardiac structure. The model reconstruction system is configured to: (a) process the sensor output to generate a point cloud that represents measured locations on the surface of the cardiac structure and/or within the cardiac structure, (b) extract a surface point cloud from the point cloud, (c) generate a final signed distance field (SDF) representing a shape of a surface of the cardiac structure via a machine learning model, (d) construct a surface model representing the shape of the surface of the cardiac structure based on the final SDF, and (e) output or display the surface model to a user via the display device.
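A signed distance field gives, at every point in space, the distance to a surface (negative inside, positive outside), and the surface itself is the zero level set. A hedged sketch of that idea, with a sphere standing in for the cardiac structure: a real system would regress the SDF from the measured point cloud with a machine learning model and then extract the surface mesh with an algorithm such as marching cubes.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Analytic SDF of a sphere: negative inside, zero on the surface."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Evaluate the SDF on a coarse grid and keep samples near the zero level
# set -- these approximate the surface the final model would be built from.
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3, indexing="ij"),
                axis=-1)
sdf = sphere_sdf(grid.reshape(-1, 3), np.zeros(3), 0.8)
surface_pts = grid.reshape(-1, 3)[np.abs(sdf) < 0.05]
```

Every retained grid sample lies within the chosen tolerance of the true surface, which is exactly the property a surface-construction step relies on when it meshes the SDF's zero crossing.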