
Synthesizing three-dimensional visualizations from perspectives of onboard sensors of autonomous vehicles
11593996 · 2023-02-28

Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, an image of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay on the image, so that the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
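The matrix-set-and-projection step described above can be sketched as follows, assuming a standard pinhole virtual camera model (the abstract does not specify the model type; the intrinsic parameters `fx, fy, cx, cy` and the extrinsic pose `R, t` here are illustrative):

```python
import numpy as np

def camera_matrices(fx, fy, cx, cy, R, t):
    """Build a matrix set for a pinhole virtual camera model:
    a 3x3 intrinsic matrix K and a 3x4 extrinsic matrix [R | t]."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    Rt = np.hstack([R, t.reshape(3, 1)])
    return K, Rt

def project_points(points_3d, K, Rt):
    """Apply the matrix set to 3D content, yielding pixel coordinates
    that can be drawn as a graphical overlay on the camera image."""
    n = points_3d.shape[0]
    homog = np.hstack([points_3d, np.ones((n, 1))])  # (n, 4) homogeneous
    cam = Rt @ homog.T                               # (3, n) camera frame
    pix = K @ cam                                    # (3, n) image plane
    return (pix[:2] / pix[2]).T                      # (n, 2) pixel coords
```

Projected 2D points would then be rasterized over the received camera image to form the final visualization.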

METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR ADAPTIVE VENUE ZOOMING IN A DIGITAL MAP INTERFACE

A method, apparatus, and computer program product are provided for adaptive zoom control for zooming in on a venue beyond the maximum zoom level available in a digital map. An apparatus may be provided including at least one processor and at least one non-transitory memory including computer program code instructions. The computer program code instructions may be configured to, when executed, cause the apparatus to at least: provide for presentation of a map of a region including a venue; receive an input corresponding to a zoom-in action to view an enlarged portion of the region, where the enlarged portion of the region includes the venue; and in response to receiving the input corresponding to a zoom-in action to view the enlarged portion of the region, transition from the presentation of the map of the region to a presentation of a venue object corresponding to the venue.
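The zoom-transition logic can be sketched as a small state function, assuming a discrete zoom scale with an illustrative maximum level of 20 (the actual maximum and state representation are not specified in the abstract):

```python
MAX_MAP_ZOOM = 20  # assumed maximum zoom level of the base map

def handle_zoom_in(current_zoom, viewport_contains_venue):
    """Return the next presentation state for a zoom-in input.
    Below the map's maximum zoom, zoom in normally; past it, and
    with a venue inside the enlarged region, transition from the
    map presentation to the venue-object presentation."""
    if current_zoom < MAX_MAP_ZOOM:
        return ("map", current_zoom + 1)
    if viewport_contains_venue:
        return ("venue_object", current_zoom)
    return ("map", MAX_MAP_ZOOM)  # nothing further to zoom into
```

The key behavior is that the same zoom-in gesture drives both the ordinary map zoom and, once the map is exhausted, the transition to the venue object.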

Method for Using a Physical Object to Manipulate a Corresponding Virtual Object in a Virtual Environment, and Associated Apparatus and Computer Program Product
20180008355 · 2018-01-11

Systems and methods are provided for planning a procedure. A display device is configured to display a first virtual element. A controller device having a processor is configured to be in communication with the display device, and the controller device is further configured to direct the display device to display the first virtual element. A physical control element is in communication with the controller device, and is configured to correspond to the first virtual element such that an actual manipulation of the control element is displayed, via the processor of the controller device and on the display device, as a corresponding response of the first virtual element to the actual manipulation of the control element. Associated systems, methods, and computer program products are also provided.

Computer-Implemented Method For Positioning Patterns Around An Avatar

A computer-implemented method for designing a virtual garment or upholstery (G) in a three-dimensional scene comprising the steps of: a) providing a three-dimensional avatar (AV) in the three-dimensional scene; b) providing at least one pattern (P) of said virtual garment or upholstery in the three-dimensional scene; c) determining a distance field from a surface of the avatar; d) positioning the pattern relative to the avatar by keeping a fixed orientation with respect to said distance field; and e) assembling the positioned pattern or patterns around the avatar to form said virtual garment or upholstery, and draping it onto the avatar. A computer program product, a non-volatile computer-readable data-storage medium, and a Computer Aided Design system for carrying out such a method. Application of the method to the manufacturing of a garment or upholstery.
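Steps c) and d) can be sketched as follows, using a sphere as a deliberately simplified stand-in for the avatar surface (a real avatar would use a sampled or analytic distance field over its mesh; the functions and offset here are illustrative):

```python
import numpy as np

def distance_field(p, center, radius):
    """Signed distance from point p to a spherical stand-in for the
    avatar surface (negative inside, zero on the surface)."""
    return np.linalg.norm(p - center) - radius

def field_gradient(p, center, radius, eps=1e-5):
    """Numerical gradient of the distance field; its direction is the
    outward surface normal, which fixes the pattern's orientation."""
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (distance_field(p + d, center, radius)
                - distance_field(p - d, center, radius)) / (2 * eps)
    return g / np.linalg.norm(g)

def position_pattern_point(p, center, radius, offset):
    """Move a pattern point to a fixed offset from the avatar surface,
    keeping its orientation locked to the distance-field gradient."""
    n = field_gradient(p, center, radius)
    surface = p - distance_field(p, center, radius) * n
    return surface + offset * n, n
```

Applying this to every point of a pattern keeps the whole pattern at a constant clearance from the avatar, oriented consistently with the field, before the assembly and draping step.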

SURGEON HEAD-MOUNTED DISPLAY APPARATUSES

An augmented reality surgical system includes a head mounted display (HMD) with a see-through display screen, a motion sensor, a camera, and computer equipment. The motion sensor outputs a head motion signal indicating measured movement of the HMD. The computer equipment computes the relative location and orientation of reference markers connected to the HMD and to the patient based on processing a video signal from the camera. The computer equipment generates a three-dimensional anatomical model using patient data created by medical imaging equipment, rotates and scales at least a portion of the three-dimensional anatomical model based on the relative location and orientation of the reference markers, and further rotates at least a portion of the three-dimensional anatomical model based on the head motion signal to track measured movement of the HMD. The rotated and scaled three-dimensional anatomical model is displayed on the display screen.
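The rotate-and-scale stage can be sketched as a single transform, assuming for illustration that the head motion signal reduces to a yaw angle and the marker geometry reduces to a scale factor (a full system would use the complete 6-DoF marker poses):

```python
import numpy as np

def head_yaw_matrix(yaw_rad):
    """Rotation about the vertical axis driven by the HMD motion signal."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def track_model(model_points, marker_scale, yaw_rad):
    """Scale the anatomical model from the HMD/patient marker geometry,
    then rotate it to track measured head movement, yielding the
    vertex positions to render on the see-through display."""
    R = head_yaw_matrix(yaw_rad)
    return (marker_scale * model_points) @ R.T
```

In practice this transform would be recomputed every frame from the latest marker poses and head motion signal.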

Systems and methods for modeling spines and treating spines based on spine models

Disclosed are systems and methods for rapid generation of simulations of a patient's spinal morphology that enable pre-operative viewing of a patient's condition and assist surgeons in determining the best corrective procedure, as well as in the selection, augmentation, or manufacture of spinal devices based on the patient-specific simulated condition. The simulation is generated by morphing a generic spine model with a three-dimensional curve representation of the patient's particular spinal morphology derived from existing images of the patient's condition.

INFORMATION PROCESSING APPARATUS, SIMULATOR RESULT DISPLAY METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
20180012395 · 2018-01-11

An information processing apparatus is disclosed. A processor selects, from a memory, cross-section shape information and texture information corresponding to a view direction. The memory stores the cross-section shape information representing a cross-section shape and the texture information representing a texture of a cross-section for each of the cross-sections in a vicinity of a line segment pertinent to a phenomenon portion. The processor generates visualization data used to visualize the line segment in a three-dimensional image by using the selected cross-section shape information and texture information, and displays the line segment on a display part based on the visualization data.
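The view-direction-dependent selection can be sketched as a nearest-direction lookup, assuming a store keyed by canonical view directions (the store contents and the matching criterion here are illustrative, not from the abstract):

```python
import numpy as np

# Assumed store: each entry pairs a canonical view direction with the
# cross-section shape and texture information held for that direction.
SECTION_STORE = [
    (np.array([1.0, 0.0, 0.0]), "shape_x", "texture_x"),
    (np.array([0.0, 1.0, 0.0]), "shape_y", "texture_y"),
    (np.array([0.0, 0.0, 1.0]), "shape_z", "texture_z"),
]

def select_section(view_dir):
    """Select the stored cross-section shape/texture pair whose
    canonical direction best matches the current view direction
    (opposite directions see the same cross-section, hence abs)."""
    v = view_dir / np.linalg.norm(view_dir)
    best = max(SECTION_STORE, key=lambda e: abs(float(e[0] @ v)))
    return best[1], best[2]
```

The selected pair would then drive generation of the visualization data for the line segment.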

EXTENDED REALITY SERVICE PROVIDING METHOD AND SYSTEM FOR OPERATION OF INDUSTRIAL INSTALLATION
20230237744 · 2023-07-27

The present application relates to an extended reality service providing method and system for operation of an industrial installation. More specifically, various types of data required for operation (e.g., inspection, examination, maintenance, repair, and reinforcement) of an industrial installation are digitized; extended reality content, such as an augmented reality image or a mixed reality image based on the digitized data, is provided to a site worker or a remote manager; and the worker and the manager can communicate via a video call in real time, whereby the work efficiency of the worker and the manager can be enhanced.

BIOMETRIC ENABLED VIRTUAL REALITY SYSTEMS AND METHODS FOR DETECTING USER INTENTIONS AND MODULATING VIRTUAL AVATAR CONTROL BASED ON THE USER INTENTIONS FOR CREATION OF VIRTUAL AVATARS OR OBJECTS IN HOLOGRAPHIC SPACE, TWO-DIMENSIONAL (2D) VIRTUAL SPACE, OR THREE-DIMENSIONAL (3D) VIRTUAL SPACE
20230236667 · 2023-07-27

Biometric enabled virtual reality (VR) systems and methods are disclosed for detecting user intention(s) and modulating virtual avatar control based on the user intention(s) for creation of virtual avatar(s) or object(s) in holographic space, two-dimensional (2D) virtual space, or three-dimensional (3D) virtual space. A virtual representation of an intended motion of a user, corresponding to an intention of muscle activation of the user, is determined based on analysis of biometric signal data of the user as collected by a biometric detection device. The virtual representation of the intended motion is used to modulate virtual avatar control or output to create at least one of a virtual avatar representing aspect(s) of the user or an object manipulated by the user in a holographic space, virtual 2D space, or virtual 3D space. The avatar or the object is created based on: (1) the biometric signal data of the user, or (2) user-specific specifications as provided by the user.

GENERATION AND IMPLEMENTATION OF 3D GRAPHIC OBJECT ON SOCIAL MEDIA PAGES
20230237754 · 2023-07-27

Disclosed herein is a digital object generator that builds unique digital objects based on user-specific input. The unique digital objects are part of a graphic presentation to users. The user-specific input is positioned on pre-configured regions of a 3D object such as a polygon. Examples of the pre-configured regions include faces of the 3D object, orbits around the 3D object, or identifiable regions associated with the 3D object. The 3D object is rendered as a part of a social media page and enables social interactions between users. In the social media page, the 3D object rotates, displaying regions/faces to page visitors. In some embodiments, the 3D object is implemented as a pet or companion of a user avatar in a virtual, augmented, or extended reality space.
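The region-assignment and rotation behavior can be sketched as follows, assuming faces of a polygonal object as the pre-configured regions and an even angular split per face (both are illustrative choices; the abstract also allows orbits and other region types):

```python
def assign_regions(face_count, user_inputs):
    """Place user-specific input items onto the pre-configured regions
    (here, faces of a polygonal 3D object), cycling over the faces if
    there are more inputs than faces."""
    faces = {i: [] for i in range(face_count)}
    for k, item in enumerate(user_inputs):
        faces[k % face_count].append(item)
    return faces

def visible_face(angle_deg, face_count):
    """As the object rotates on the social media page, report which
    face is presented to visitors at the given rotation angle,
    assuming faces evenly spaced around the rotation axis."""
    per_face = 360.0 / face_count
    return int(angle_deg % 360 // per_face)
```

A page renderer would advance `angle_deg` over time and draw the contents of `visible_face(...)` toward the viewer.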