METHODS AND SYSTEMS FOR MEASURING AND MODELING SPACES USING MARKERLESS PHOTO-BASED AUGMENTED REALITY PROCESS
20230110919 · 2023-04-13 ·

Described herein are platforms, systems, media, and methods for measuring a space by launching an active augmented reality (AR) session on a device comprising a camera and at least one processor; calibrating the AR session by establishing a fixed coordinate system, receiving a position and orientation of one or more horizontal or vertical planes in the space in reference to the fixed coordinate system, and receiving a position and orientation of the camera in reference to the fixed coordinate system; constructing a backing model; providing an interface allowing a user to capture at least one photo of the space during the active AR session; extracting camera data from the AR session for the at least one photo; extracting the backing model from the AR session; and storing the camera data and the backing model in association with the at least one photo.
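The claimed capture flow (calibrate, photograph, then store camera data and the backing model alongside the photo) could be sketched as follows. This is a minimal illustration only; the class names, fields, and plane representation are assumptions, not the patent's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CameraData:
    """Camera pose for one photo, expressed in the session's fixed coordinate system."""
    position: tuple      # (x, y, z)
    orientation: tuple   # quaternion (w, x, y, z)
    intrinsics: tuple    # (fx, fy, cx, cy)

@dataclass
class BackingModel:
    """Detected horizontal/vertical planes that back later measurements."""
    planes: list = field(default_factory=list)  # each plane: (origin, normal)

@dataclass
class CapturedPhoto:
    """A photo stored in association with its camera data and backing model."""
    image_path: str
    camera: CameraData
    backing_model: BackingModel

def capture_photo(image_path, camera, backing_model):
    """Persist the extracted camera data and backing model with the photo."""
    return CapturedPhoto(image_path, camera, backing_model)

photo = capture_photo(
    "room_001.jpg",
    CameraData((0.0, 1.5, 0.0), (1.0, 0.0, 0.0, 0.0),
               (1400.0, 1400.0, 960.0, 540.0)),
    BackingModel(planes=[((0.0, 0.0, 0.0), (0.0, 1.0, 0.0))]),  # floor plane
)
```

The stored pose and planes are what let measurements be taken later against the photo without live AR tracking.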

Information processing device, information processing method, and program

There is provided an information processing device, an information processing method, and a program for enabling display of AR content that has been generated for a predetermined environment and is applied to the real environment. The information processing device according to one aspect of the present technology generates a template environment map showing the environment of a three-dimensional space that is to be a template and in which a predetermined object exists, and generates template content that is a template to be used in generating display content for displaying an object superimposed on the environment of a real space, the template content including information about the object disposed at a position in the three-dimensional space, the position having a predetermined positional relationship with the predetermined object. The present technology can be applied to a transmissive HMD, for example.

GENERATING THREE-DIMENSIONAL VIRTUAL SCENE

A method and system for generating a three-dimensional (3D) virtual scene are disclosed. The method includes: identifying a two-dimensional (2D) object in a 2D picture and the position of the 2D object in the 2D picture; obtaining a three-dimensional model of the 3D object corresponding to the 2D object; calculating the corresponding position, in the horizontal plane of the 3D scene, of the 3D object corresponding to the 2D object according to the position of the 2D object in the picture; and simulating the falling of the model of the 3D object onto the 3D scene from a predetermined height above the 3D scene, wherein the position of the landing point of the model of the 3D object in the horizontal plane is the corresponding position of the 3D object in the horizontal plane of the 3D scene.
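The drop step above can be sketched as a simple vertical settle: the model keeps its mapped (x, z) position and comes to rest on the highest support surface beneath it. The axis-aligned surface representation is an assumed simplification, not the patent's physics simulation.

```python
def place_by_falling(drop_xz, surfaces, floor_y=0.0):
    """Simulate dropping a 3D model straight down onto the scene.

    The model keeps its mapped horizontal position (x, z) and rests on the
    highest surface under that point, or on the floor if none is below it.
    Each surface is an axis-aligned patch (x0, z0, x1, z1, top_y).
    """
    x, z = drop_xz
    rest_y = floor_y
    for (x0, z0, x1, z1, top_y) in surfaces:
        if x0 <= x <= x1 and z0 <= z <= z1:
            rest_y = max(rest_y, top_y)
    return (x, rest_y, z)

# A vase mapped to (1, 1) lands on a table top at height 0.75:
table = (0.0, 0.0, 2.0, 2.0, 0.75)
print(place_by_falling((1.0, 1.0), [table]))  # (1.0, 0.75, 1.0)
```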

ANCHORING A SCENE DESCRIPTION TO A USER ENVIRONMENT FOR STREAMING IMMERSIVE MEDIA CONTENT
20220335694 · 2022-10-20 ·

An example device for presenting media data includes a memory configured to store media data defining one or more virtual objects in a virtual scene; and one or more processors implemented in circuitry and configured to: receive, in a bitstream, a scene description including data describing the one or more virtual objects in the virtual scene and a scene anchor, the scene anchor representing a correspondence between the virtual scene and a real-world presentation environment; determine the correspondence between the virtual scene and the real-world presentation environment using the scene anchor; and present the one or more virtual objects at locations within the real-world presentation environment according to the determined correspondence.
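One common way to represent such a scene anchor is a rigid transform mapping virtual-scene coordinates into the real-world presentation frame; a minimal sketch, assuming a 4x4 row-major homogeneous matrix (an illustration, not the bitstream's actual anchor encoding):

```python
def apply_anchor(anchor, point):
    """Map a virtual-scene point into real-world presentation coordinates
    using a scene anchor given as a 4x4 row-major homogeneous transform."""
    x, y, z = point
    v = (x, y, z, 1.0)
    return tuple(sum(anchor[r][c] * v[c] for c in range(4)) for r in range(3))

# Anchor placing the virtual origin 2 m in front of the viewer (assumed pose):
anchor = [
    [1, 0, 0,  0.0],
    [0, 1, 0,  0.0],
    [0, 0, 1, -2.0],
    [0, 0, 0,  1.0],
]
print(apply_anchor(anchor, (0.0, 0.0, 0.0)))  # (0.0, 0.0, -2.0)
```

Every virtual object position is pushed through the same anchor transform, so the whole scene stays rigidly registered to the environment.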

VIRTUAL SCENARIO GENERATION METHOD, APPARATUS AND DEVICE AND STORAGE MEDIUM
20230071213 · 2023-03-09 ·

A virtual scenario generation method includes acquiring scenario characteristic information corresponding to a target virtual scenario to be generated; generating a scenario division mesh in an initial virtual scenario based on the scenario characteristic information, the scenario division mesh comprising division marking data configured to divide the initial virtual scenario; generating a scenario object collection, comprising one or more scenario objects, to be added to the scenario division mesh; performing attribute matching on the one or more scenario objects and the division marking data to obtain one or more candidate scenario objects allocated to the division marking data; selecting a target scenario object from the one or more candidate scenario objects according to position-associated information between the candidate scenario objects and the division marking data; and matching the target scenario object with the division marking data to generate the target virtual scenario.
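The match-then-select stage might look like the sketch below: attribute matching filters candidates against a cell's marking data, then a position-based score picks the target. The dictionary schema and the distance-to-cell-centre criterion are assumptions for illustration.

```python
def select_scene_object(cell, objects):
    """Pick a target object for one division cell.

    First, attribute matching: keep candidates whose type is allowed by
    the cell's division marking data. Then, position association: choose
    the candidate whose preferred position lies closest to the cell centre.
    """
    candidates = [o for o in objects if o["type"] in cell["allowed_types"]]
    if not candidates:
        return None
    cx, cy = cell["center"]

    def dist2(obj):
        ox, oy = obj["preferred_pos"]
        return (ox - cx) ** 2 + (oy - cy) ** 2

    return min(candidates, key=dist2)

cell = {"allowed_types": {"tree", "rock"}, "center": (0.0, 0.0)}
objects = [
    {"type": "tree", "preferred_pos": (3.0, 0.0)},
    {"type": "rock", "preferred_pos": (1.0, 1.0)},
    {"type": "house", "preferred_pos": (0.0, 0.0)},  # filtered out by type
]
print(select_scene_object(cell, objects)["type"])  # rock
```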

Broker for instancing

A first set of instance layer data that describes a scene to be represented by one or more computer-generated images is obtained. The set of instance layer data specifies a plurality of object instances within the scene, with each instance of the plurality of object instances corresponding to a position that an instance of a digital object is to appear in the scene. The set of instance layer data further specifies a first set of characteristics of the plurality of object instances that includes the position. A second set of instance layer data that indicates changes to be made to the scene described by the first set of instance layer data is obtained. A third set of instance layer data is generated to include the changes to the scene by overlaying the second set of instance layer data onto the first set of instance layer data. The scene is caused to be rendered by providing the third set of instance layer data to an instancing service.
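The overlay of a change layer onto a base layer can be sketched as a keyed merge, treating each layer as a map from instance id to characteristics. The use of `None` to mark a removed instance is an assumed convention, not the patent's actual encoding.

```python
def overlay_instance_layers(base, changes):
    """Generate a third instance layer by overlaying `changes` on `base`.

    Each layer maps instance id -> characteristics (position, etc.). A
    changed instance overrides matching keys; a value of None removes the
    instance; unknown ids introduce new instances.
    """
    merged = {iid: dict(attrs) for iid, attrs in base.items()}
    for iid, attrs in changes.items():
        if attrs is None:
            merged.pop(iid, None)       # instance deleted by the change layer
        else:
            merged.setdefault(iid, {}).update(attrs)
    return merged

base = {
    "i1": {"pos": (0, 0, 0), "model": "tree"},
    "i2": {"pos": (1, 0, 0), "model": "rock"},
}
changes = {
    "i1": {"pos": (5, 0, 0)},            # move i1
    "i2": None,                          # remove i2
    "i3": {"pos": (2, 2, 2), "model": "bush"},  # add i3
}
third = overlay_instance_layers(base, changes)
```

The merged result is what would then be handed to the instancing service for rendering.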

Automatic Rendering Of 3D Sound
20230106884 · 2023-04-06 ·

Simulating a 3D audio environment, including receiving a visual representation of an object at a location in a scene, wherein the location represents a point in 3D space, receiving a sound element, and binding the sound element to the visual representation of the object such that a characteristic of the sound element is dynamically modified coincident with a change in location in the scene of the visual representation of the object in 3D space.
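One characteristic that is naturally modified with location is gain: as the bound object moves, volume follows its distance from the listener. A minimal sketch, assuming an inverse-distance attenuation model (the attenuation curve is an assumption, not the patent's):

```python
import math

class BoundSound:
    """A sound element bound to a visual object: its gain is recomputed
    whenever the object's 3D location in the scene changes."""

    def __init__(self, listener=(0.0, 0.0, 0.0), ref_distance=1.0):
        self.listener = listener
        self.ref_distance = ref_distance  # distance at which gain is 1.0

    def gain_at(self, location):
        """Inverse-distance attenuation, clamped to full volume up close."""
        d = math.dist(self.listener, location)
        return min(1.0, self.ref_distance / max(d, self.ref_distance))

s = BoundSound()
print(s.gain_at((0.0, 0.0, 2.0)))  # 0.5  (twice the reference distance)
print(s.gain_at((0.0, 0.0, 0.5)))  # 1.0  (inside the reference distance)
```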

Three dimensional reconstruction of objects based on geolocation and image data

Embodiments of the systems and methods described herein provide a three dimensional reconstruction system that can receive an image from a camera, and then utilize machine learning algorithms to identify objects in the image. The three dimensional reconstruction system can identify a geolocation of a user, identify features of the surrounding area, such as structures or geographic features, and reconstruct the scene including the identified features. The three dimensional reconstruction system can generate three dimensional object data for the features and/or objects, modify the three dimensional objects, arrange the objects in a scene, and render a two dimensional view of the scene.

Image processing device, image processing method, and program

There is provided an image processing device including: a data storage unit storing feature data indicating a feature of appearance of one or more physical objects; an environment map building unit for building an environment map based on an input image obtained by imaging a real space and the feature data, the environment map representing a position of a physical object present in the real space; a control unit for acquiring procedure data for a set of procedures of operation to be performed in the real space, the procedure data defining a correspondence between a direction for each procedure and position information designating a position at which the direction is to be displayed; and a superimposing unit for generating an output image by superimposing the direction for each procedure at a position in the input image determined based on the environment map and the position information, using the procedure data.
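The superimposing step can be sketched as a lookup-and-project loop: each procedure's direction is anchored to a physical object's position in the environment map and projected into image coordinates. The data shapes and the pinhole projection below are illustrative assumptions.

```python
def overlay_directions(procedure_data, environment_map, project):
    """For each procedure step, find the anchoring object's 3D position in
    the environment map, project it into the input image, and pair the
    step's direction text with the 2D point where it should be drawn."""
    overlays = []
    for step in procedure_data:
        pos3d = environment_map[step["object_id"]]
        overlays.append((step["direction"], project(pos3d)))
    return overlays

# Assumed simple pinhole projection (focal length 100, centre (320, 240)):
project = lambda p: (100.0 * p[0] / p[2] + 320, 100.0 * p[1] / p[2] + 240)

environment_map = {"valve": (1.0, 0.0, 2.0)}   # object position in real space
procedure_data = [{"object_id": "valve", "direction": "Turn the valve clockwise"}]
overlays = overlay_directions(procedure_data, environment_map, project)
```

Because positions come from the environment map rather than the raw image, the direction stays attached to the object even as the camera moves.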

Augmented reality system for viewing an event with mode based on crowd sourced images

Augmented reality systems provide graphics over views from a mobile device for both in-venue and remote viewing of a sporting or other event. A server system can provide a transformation between the coordinate system of a mobile device (smart phone, tablet computer, head mounted display) and a real world coordinate system. Requested graphics for the event are displayed over a view of an event.
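The device-to-world transformation the server supplies can be sketched, in 2D for brevity, as a rotation by the device's heading plus a translation to its world position. The planar simplification and parameter names are assumptions; a deployed system would use a full 3D pose.

```python
import math

def device_to_world(device_pt, heading_deg, device_origin_world):
    """Transform a point from a mobile device's local frame into the
    venue's real-world frame, given the device's heading (degrees,
    counterclockwise) and its position in world coordinates."""
    x, y = device_pt
    th = math.radians(heading_deg)
    wx = x * math.cos(th) - y * math.sin(th) + device_origin_world[0]
    wy = x * math.sin(th) + y * math.cos(th) + device_origin_world[1]
    return (wx, wy)

# A point 1 m ahead of a device heading 90° at world position (10, 20):
pt = device_to_world((1.0, 0.0), 90.0, (10.0, 20.0))
```

With this mapping, graphics authored in world coordinates can be drawn at the correct spot in each device's view.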