G06T15/205

Efficient capture and delivery of walkable and interactive virtual reality or 360 degree video

Disclosed are systems and methods for generating a walkable 360-degree video or virtual reality (VR) environment. 360-degree video data is obtained for a real-world environment and comprises a plurality of chronologically ordered frames captured by traversing a first path through the real-world environment. One or more processing operations are applied to generate a processed 360-degree video, which can be displayed to a user of an omnidirectional treadmill. Locomotion information is received from one or more sensors of the omnidirectional treadmill, wherein the locomotion information is generated based on a physical movement on or within the omnidirectional treadmill. Using the received locomotion information, one or more playback commands for controlling playback of the processed 360-degree video are generated. One or more selected frames of the processed 360-degree video are rendered for presentation and display to the user, based on the one or more playback commands.
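The locomotion-to-playback mapping described above can be sketched as follows. This is a minimal illustration, not the claimed method: the function names, the 1.4 m/s default capture walking speed, and the frame-advance formula are all assumptions for the example.

```python
def playback_command(locomotion_speed_mps, capture_speed_mps=1.4):
    """Map treadmill locomotion speed to a playback command.

    `capture_speed_mps` is an assumed average walking speed at which
    the original 360-degree footage was captured.
    """
    if locomotion_speed_mps <= 0.0:
        return {"action": "pause", "rate": 0.0}
    # Walking faster than the capture speed plays the video faster than 1x.
    return {"action": "play", "rate": locomotion_speed_mps / capture_speed_mps}


def select_frame(current_frame, rate, fps=30.0, dt=1.0 / 30.0, n_frames=9000):
    """Advance through the chronologically ordered frames at the given rate,
    clamped to the valid frame range."""
    next_frame = current_frame + rate * fps * dt
    return min(max(next_frame, 0.0), n_frames - 1)
```

Standing still pauses playback, so the user's physical position on the treadmill stays synchronized with the camera's position along the captured path.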

AUGMENTING A VIEW OF A REAL-WORLD ENVIRONMENT WITH A VIEW OF A VOLUMETRIC VIDEO OBJECT

A method of augmenting a view of a real-world environment with a view of a volumetric video object on a user device is disclosed. The method includes determining current pose information (CPI) indicating a current pose of the view of the real-world environment and a desired pose of the volumetric video object in the real-world environment. The method further includes sending the CPI to a remote server. The method further includes receiving, from the remote server, a rendered view of the volumetric video object that has been rendered in accordance with the CPI. The method also includes augmenting the view of the real-world environment by at least mapping the rendered view of the volumetric video object onto a planar mapping surface arranged according to the desired pose of the volumetric video object.
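Orienting a planar mapping surface toward the viewer is essentially billboard placement. The sketch below assumes a world-up vector of (0, 1, 0) and a non-vertical line of sight; the function name and return convention are illustrative, not from the disclosure.

```python
import numpy as np

def billboard_rotation(camera_pos, object_pos):
    """Build a rotation for a planar mapping surface at `object_pos` whose
    normal (third column) is parallel to the line of sight from the camera,
    so the rendered view of the volumetric object faces the viewer.
    Assumes the line of sight is not parallel to world-up (0, 1, 0)."""
    view = np.asarray(object_pos, float) - np.asarray(camera_pos, float)
    normal = -view / np.linalg.norm(view)      # plane faces back toward the camera
    world_up = np.array([0.0, 1.0, 0.0])
    right = np.cross(world_up, normal)
    right /= np.linalg.norm(right)
    up = np.cross(normal, right)               # re-orthogonalized up vector
    return np.column_stack([right, up, normal])
```

Because the server renders the object for the reported CPI, a flat billboard oriented this way is sufficient: perspective cues are already baked into the rendered image.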

VIRTUAL THERMAL CAMERA IMAGING SYSTEM
20230007862 · 2023-01-12 ·

System and method that includes mapping temperature values from a two dimensional (2D) thermal image of a component to a three dimensional (3D) drawing model of the component to generate a 3D thermal model of the component; mapping temperature values from the 3D thermal model to a 2D virtual thermal image corresponding to a virtual thermal camera perspective; and predicting an attribute for the component by applying a prediction function to the 2D virtual thermal image.
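The first mapping step, assigning 2D thermal-image temperatures to the 3D model, can be sketched by projecting each model vertex into the thermal image with a pinhole camera model. This is a simplified reading of the abstract: the intrinsic matrix `K`, the nearest-pixel sampling, and the omission of occlusion handling are all assumptions of the example.

```python
import numpy as np

def map_temperatures_to_vertices(vertices, K, temps_2d):
    """Assign each 3D model vertex (in the thermal camera's frame) the
    temperature of the pixel it projects to in the 2D thermal image.
    `K` is an assumed 3x3 intrinsic matrix; occlusion is ignored here."""
    temps = np.empty(len(vertices))
    h, w = temps_2d.shape
    for i, v in enumerate(vertices):
        p = K @ np.asarray(v, float)
        u = int(round(p[0] / p[2]))            # pixel column
        r = int(round(p[1] / p[2]))            # pixel row
        u = min(max(u, 0), w - 1)
        r = min(max(r, 0), h - 1)
        temps[i] = temps_2d[r, u]
    return temps
```

The second mapping, from the 3D thermal model to a virtual thermal camera perspective, is the same projection run in reverse for the virtual viewpoint.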

Rendering portions of a three-dimensional environment with different sampling rates utilizing a user-defined focus frame
11551409 · 2023-01-10 ·

Methods, systems, and non-transitory computer readable storage media are disclosed for rendering portions of a three-dimensional environment at different sampling rates based on a focus frame within a graphical user interface. Specifically, the disclosed system provides a tool for marking a region of a graphical user interface displaying a three-dimensional environment. The disclosed system generates a focus frame based on the marked region of the graphical user interface and attaches the focus frame to a portion of the three-dimensional environment. The disclosed system assigns a first sampling rate to the portion of the three-dimensional environment displayed within the focus frame and a second sampling rate to portions of the three-dimensional environment outside the focus frame. The disclosed system renders the three-dimensional environment by sampling the portion within the focus frame at the first sampling rate and the portions outside the focus frame at the second sampling rate.
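The two-rate assignment can be sketched as a per-pixel sampling-rate map. The rectangle representation of the focus frame and the specific rate values are illustrative assumptions, not the disclosed data structure.

```python
def sampling_rate_map(width, height, focus_frame, inside_rate=4, outside_rate=1):
    """Per-pixel sampling rates: `inside_rate` samples per pixel for the
    region covered by the user-defined focus frame, `outside_rate`
    elsewhere. `focus_frame` is (x0, y0, x1, y1) in pixel coordinates."""
    x0, y0, x1, y1 = focus_frame
    return [[inside_rate if (x0 <= x < x1 and y0 <= y < y1) else outside_rate
             for x in range(width)]
            for y in range(height)]
```

A renderer consuming this map spends most of its sample budget inside the focus frame, which is the computational point of the technique.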

Generating and validating a virtual 3D representation of a real-world structure

A computer system maintains structure data indicating geometrical constraints for each structure category of a plurality of structure categories. The computer system generates a virtual 3D representation of a structure based on a set of images depicting the structure. For each image in the set of images, one or more landmarks are identified. Based on the landmarks, a candidate structure category is selected. The virtual 3D representation is generated based on the geometrical constraints of the candidate structure category and the landmarks identified in the set of images.
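One plausible reading of the category-selection step is a coverage score over landmark types. The representation below, mapping each category to a set of required landmark types, is a stand-in for the richer geometrical constraints the abstract describes.

```python
def select_candidate_category(landmarks, structure_data):
    """Pick the structure category whose required landmark types are best
    covered by the landmarks identified in the set of images.
    `structure_data` maps category name -> set of required landmark types
    (an illustrative stand-in for the maintained geometrical constraints)."""
    found = set(landmarks)

    def coverage(category):
        required = structure_data[category]
        return len(found & required) / len(required)

    return max(structure_data, key=coverage)
```

The selected category's geometrical constraints then restrict the solution space when the virtual 3D representation is fitted to the identified landmarks.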

System and Methods for Interactive Hybrid-Dimension Map Visualization
20180005434 · 2018-01-04 ·

A navigational system includes a hybrid-dimensional visualization scheme with a multi-modal interaction flow that serves digital mapping applications, such as in-car infotainment systems and online map services. The hybrid-dimensional visualization uses an importance-driven or focus-and-context visualization approach to combine the display of different map elements, including the 2D map, 2D building footprints, the 3D map, weather visualization, realistic day-night lighting, and POI information, into a single map view. The combination of these elements is guided by intuitive user interactions using multiple modalities simultaneously, such that the map information is filtered to best respond to the user's request and presented in a way that shows both the focus and the context of the map in an aesthetic manner. The system facilitates several common use cases, including destination preview, destination search, and virtual map exploration.

CREATING THREE DIMENSIONAL MODELS WITH ACCELERATION DATA

Obtaining physical model data for CAD model generation with a process that includes: receiving a first acceleration-based path data set including acceleration data for an accelerometer device as it was traced over a first path along the surface of a physical object; converting the first acceleration-based path data set to a first position-based data set including position data for the accelerometer device as it was traced over the first path along the surface of the physical object; and generating a three-dimensional object model data set based, at least in part, on the position data of the first position-based data set.
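The acceleration-to-position conversion amounts to double integration. The sketch below uses simple Euler integration on one spatial axis and assumes the device starts at rest at the origin with gravity already removed from the readings; real implementations must also handle sensor bias and drift.

```python
def acceleration_to_position(accels, dt):
    """Convert an acceleration-based path data set (m/s^2, sampled every
    `dt` seconds) to a position-based data set by double Euler
    integration. Assumes the device starts at rest at the origin and
    that gravity has been removed. One axis shown for clarity."""
    velocity, position = 0.0, 0.0
    positions = []
    for a in accels:
        velocity += a * dt       # integrate acceleration -> velocity
        position += velocity * dt  # integrate velocity -> position
        positions.append(position)
    return positions
```

Running the same integration on all three axes yields the traced 3D path from which the object model is generated.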

Model and technical documents

Document spaces are used for intermediate technical documents created for a model with a displayed 3D model view. A document space is a finite volume that has a location in the global three-dimensional coordinate system of the model and a reference to a technical document. A document space displayed in the model view illustrates where the model parts, or corresponding pieces of model parts, included in the technical document are located.
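As a data structure, a document space pairs a finite volume with a document reference. The axis-aligned box representation and the field names below are assumptions for illustration; the abstract only requires a finite volume with a location.

```python
from dataclasses import dataclass

@dataclass
class DocumentSpace:
    """A finite volume in the model's global 3D coordinate system with a
    reference to a technical document (illustrative field names)."""
    min_corner: tuple   # (x, y, z) lower corner of the volume
    max_corner: tuple   # (x, y, z) upper corner of the volume
    document_ref: str   # reference to the technical document

    def contains(self, point):
        """True if a model part at `point` lies inside this document space."""
        return all(lo <= p <= hi
                   for lo, p, hi in zip(self.min_corner, point, self.max_corner))
```

Rendering such a box in the displayed model view shows the user which region of the model the referenced document covers.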

REDUCING COMPUTATIONAL COMPLEXITY IN THREE-DIMENSIONAL MODELING BASED ON TWO-DIMENSIONAL IMAGES
20180012399 · 2018-01-11 ·

A method for three-dimensional (3D) modeling using two-dimensional (2D) image data includes obtaining a first image of an object oriented in a first direction and a second image of the object oriented in a second direction, determining a plurality of feature points of the object in the first image, and determining a plurality of matching feature points of the object in the second image that correspond to the plurality of feature points of the object in the first image. The method further includes calculating similarity values between the plurality of feature points and the corresponding plurality of matching feature points, calculating depth values of the plurality of feature points, calculating weighted depth values based on the similarity values and depth values, and performing 3D modeling of the object based on the weighted depth values.
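The weighting step can be sketched as scaling each feature point's depth by its match similarity, so poorly matched correspondences contribute less to the 3D model. This is one plausible reading of "based on the similarity values and depth values," not the claimed formula.

```python
def weighted_depth_values(similarities, depths):
    """Weight each feature point's depth value by its match similarity
    (assumed to lie in [0, 1]). A simple product is used here as an
    illustrative weighting, not the patented calculation."""
    return [s * d for s, d in zip(similarities, depths)]
```

Down-weighting unreliable matches this way reduces the influence of mismatched feature points without the cost of a full outlier-rejection pass, which is consistent with the title's goal of reducing computational complexity.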

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
20180012529 · 2018-01-11 ·

An information processing apparatus is configured to: paste a full-spherical panoramic image along an inner wall of a virtual three-dimensional sphere; calculate an arrangement position for arranging a planar image closer to the center point of the virtual three-dimensional sphere than the inner wall, in such an orientation that a line-of-sight direction from the center point to the inner wall and a perpendicular line of the planar image are parallel to each other, the planar image being obtained by pasting, on a two-dimensional plane, an embedding image to be embedded in the full-spherical panoramic image; and display a display image on a display unit. The display image is a two-dimensional image viewed from the center point in the line-of-sight direction in a state in which the full-spherical panoramic image is pasted along the inner wall of the virtual three-dimensional sphere and the planar image is arranged at the arrangement position.
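The arrangement-position calculation can be sketched as placing the planar image on the line-of-sight ray at some fraction of the sphere radius, with its normal parallel to the ray. The `fraction` parameter is an assumed placement distance; the abstract only requires the plane to be closer to the center point than the inner wall.

```python
import numpy as np

def arrangement_position(center, direction, sphere_radius, fraction=0.8):
    """Place the planar image on the line-of-sight ray from the sphere's
    center point, at `fraction` of the radius (so closer to the center
    than the inner wall), oriented so that its perpendicular (normal)
    is parallel to the line-of-sight direction."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    position = np.asarray(center, float) + fraction * sphere_radius * d
    normal = d  # plane normal parallel to the line-of-sight direction
    return position, normal
```

Because the plane always faces the center point, the embedded image appears undistorted from the viewing position while the surrounding panorama fills the rest of the field of view.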