Patent classifications
G06T2215/16
AUGMENTED REALITY CONTENT RENDERING VIA ALBEDO MODELS, SYSTEMS AND METHODS
Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
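The abstract's core step, estimating environmental shading by comparing the observed object to its known albedo and reapplying that shading to the AR content, can be sketched as a per-pixel ratio. This is an illustrative simplification, not the patent's claimed method; the function and array names are hypothetical:

```python
import numpy as np

def estimate_shading(observed, albedo, eps=1e-6):
    """Estimate the environmental shading model as the per-pixel
    ratio of the observed object to its a priori albedo."""
    return observed / (albedo + eps)

def adjust_ar_content(ar_content, shading):
    """Apply the estimated shading so the AR content picks up the
    scene's lighting and appears as a natural part of the scene."""
    return np.clip(ar_content * shading, 0.0, 1.0)

# Toy example: an object lit at half intensity yields shading ~0.5,
# which darkens the rendered AR content accordingly.
albedo = np.full((4, 4, 3), 0.8)      # known reflectance of the object
observed = albedo * 0.5               # object as captured in the scene
shading = estimate_shading(observed, albedo)
adjusted = adjust_ar_content(np.full((4, 4, 3), 1.0), shading)
```

Real implementations would estimate shading robustly (handling occlusion, specularities, and sensor noise) rather than dividing raw pixel values.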
ACCURATE POSITIONING OF AUGMENTED REALITY CONTENT
A system for accurately positioning augmented reality (AR) content within a coordinate system such as the World Geodetic System (WGS) may include AR content tethered to trackable physical features. As the system is used by mobile computing devices, each mobile device may calculate and compare relative positioning data between the trackable features. The system may connect and group the trackable features hierarchically, as measurements are obtained. As additional measurements are made of the trackable features in a group, the relative position data may be improved, e.g., using statistical methods.
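One plausible reading of "improved using statistical methods" is that repeated measurements of the same feature-to-feature offset are fused, for example by inverse-variance weighting. The sketch below is an assumption for illustration, not the patent's specified procedure:

```python
import numpy as np

def fuse_measurements(measurements, variances):
    """Combine repeated measurements of the relative position between
    two trackable features using inverse-variance weighting; the fused
    estimate tightens as more devices contribute measurements."""
    m = np.asarray(measurements, dtype=float)      # shape (n, 3): x, y, z
    w = 1.0 / np.asarray(variances, dtype=float)   # shape (n,)
    fused = (w[:, None] * m).sum(axis=0) / w.sum()
    fused_var = 1.0 / w.sum()                      # shrinks with each measurement
    return fused, fused_var

# Two devices measure the same feature-to-feature offset (metres);
# the second device is more precise, so it dominates the fused value.
measurements = [[1.00, 2.00, 0.50], [1.10, 1.90, 0.55]]
variances = [0.04, 0.01]
est, var = fuse_measurements(measurements, variances)
```

Grouping features hierarchically, as the abstract describes, would then let fused relative offsets propagate up to anchor whole groups in WGS coordinates.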
System and Methods for Interactive Hybrid-Dimension Map Visualization
A navigational system includes a hybrid-dimensional visualization scheme with a multi-modal interaction flow to serve digital mapping applications, such as in-car infotainment systems and online map services. The hybrid-dimensional visualization uses an importance-driven or focus-and-context visualization approach to combine the display of different map elements, including 2D map, 2D building footprint, 3D map, weather visualization, realistic day-night lighting, and POI information, into a single map view. The combination of these elements is guided by intuitive user interactions using multiple modalities simultaneously, such that the map information is filtered to best respond to the user's request and is presented in a way that shows both the focus and the context of the map in an aesthetic manner. The system facilitates several common use cases, including destination preview, destination search, and virtual map exploration.
Dynamic Entering and Leaving of Virtual-Reality Environments Navigated by Different HMD Users
Systems and methods for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes are provided. A computer-implemented method includes providing a first perspective of a VR scene to a first HMD of a first user and receiving an indication that a second user is requesting to join the VR scene provided to the first HMD. The method further includes obtaining real-world position and orientation data of the second HMD relative to the first HMD and then providing, based on said data, a second perspective of the VR scene. The method also provides that the first and second perspectives are each controlled by respective position and orientation changes while viewing the VR scene.
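The second perspective can be understood as composing the first HMD's pose in the VR scene with the measured real-world pose of the second HMD relative to the first. A minimal homogeneous-transform sketch, with illustrative matrix names not taken from the patent:

```python
import numpy as np

def pose(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation
    matrix and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Pose of the first HMD in the VR scene (identity rotation, eye height 1.6 m).
scene_from_hmd1 = pose(np.eye(3), [0.0, 1.6, 0.0])

# Measured real-world pose of the second HMD relative to the first:
# one metre to the first user's right.
hmd1_from_hmd2 = pose(np.eye(3), [1.0, 0.0, 0.0])

# Composing the two places the second user's viewpoint in the scene,
# so both perspectives share one consistent VR coordinate frame.
scene_from_hmd2 = scene_from_hmd1 @ hmd1_from_hmd2
```

Each user's subsequent position and orientation changes would then update their own transform independently, as the abstract's final sentence describes.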
Systems and methods for evaluating and reducing negative dysphotopsia
Systems and methods for evaluating negative dysphotopsia (ND) are described herein. An example method can include constructing a non-sequential (NSC) ray-tracing model of an eye with an ophthalmic lens, and modelling a light source and a detector. The detector can be configured to mimic a retina of the eye. The method can also include computing irradiance data using the light source, the NSC ray-tracing model, and the detector. Irradiance data can be computed for each of a plurality of pupil sizes. The method can further include evaluating ND by analyzing the respective irradiance data for each of the pupil sizes. Also described herein are methods for designing an ophthalmic lens edge that reduces the incidence of ND for a given ophthalmic lens by adjusting the edge thickness and/or the scatter.
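The analysis step, evaluating ND from per-pupil-size irradiance data, could take the form of a peripheral-to-central irradiance ratio on the retina detector, since ND presents as a dark temporal crescent. The metric and the synthetic detector data below are purely illustrative assumptions, not the patent's method:

```python
import numpy as np

def nd_metric(irradiance, inner=0.7):
    """Illustrative ND metric: mean irradiance in the peripheral
    annulus of the detector divided by the central mean.
    A low ratio flags the dark region associated with ND."""
    h, w = irradiance.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - w / 2, yy - h / 2) / (min(h, w) / 2)
    periph = irradiance[(r >= inner) & (r <= 1.0)].mean()
    centre = irradiance[r < inner].mean()
    return periph / centre

# Synthetic irradiance maps standing in for ray-traced detector data:
# the "large pupil" map has a dimmed periphery, mimicking ND.
uniform = np.ones((64, 64))
shadowed = uniform.copy()
yy, xx = np.mgrid[0:64, 0:64]
shadowed[np.hypot(xx - 32, yy - 32) / 32 >= 0.7] *= 0.4

scores = {"small pupil": nd_metric(uniform), "large pupil": nd_metric(shadowed)}
```

In the claimed workflow the irradiance maps would come from the NSC ray-tracing model rather than synthetic arrays, and the comparison across pupil sizes would drive the lens-edge design.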
METHOD FOR DEPICTING AN OBJECT
The invention relates to technologies for visualizing a three-dimensional (3D) image. According to the claimed method, a 3D model is generated, images of an object are produced, and the 3D model is visualized. The 3D model, together with a reference pattern and the coordinates of texturing portions corresponding to polygons of the 3D model, is stored in a depiction device. At least one frame of the image of the object is produced, the object in the frame is identified on the basis of the reference pattern, and a matrix for converting photo-image coordinates into dedicated coordinates is generated. Elements of the 3D model are coloured in the colours of the corresponding elements of the image by generating a texture of the image sensing area using the coordinate conversion matrix and data interpolation, with subsequent assignment of the texture to the 3D model.
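The texturing step, mapping texture coordinates through a conversion matrix into the photo and sampling with interpolation, can be sketched as a homography transform followed by bilinear sampling. The function names and the choice of a 3x3 homography are illustrative assumptions, not the patent's specification:

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Sample a 2D image at fractional (x, y) via bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, image.shape[1] - 1)
    y1 = min(y0 + 1, image.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bot = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bot

def colour_texture(photo, conversion, texture_coords):
    """Map each texture coordinate through the 3x3 conversion matrix
    into photo coordinates and sample the photo there."""
    texels = []
    for u, v in texture_coords:
        x, y, w = conversion @ np.array([u, v, 1.0])
        texels.append(bilinear_sample(photo, x / w, y / w))
    return np.array(texels)

# Toy example: an identity conversion matrix reads the photo directly,
# so (1, 2) lands on a pixel and (0.5, 0.5) interpolates four neighbours.
photo = np.arange(16, dtype=float).reshape(4, 4)
texels = colour_texture(photo, np.eye(3), [(1.0, 2.0), (0.5, 0.5)])
```

A production pipeline would vectorize the sampling, guard against coordinates falling outside the photo, and write the resulting texels into the model's texture atlas.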
METHOD AND SYSTEM FOR REPRESENTING AVATAR FOLLOWING MOTION OF USER IN VIRTUAL SPACE
A non-transitory computer-readable recording medium storing instructions that, when executed by a processor, cause the processor to set a communication session in which a plurality of users participate through a server, generate data for a virtual space, share motion data related to motions of the plurality of users through the communication session, generate a video in which avatars following the motions of the plurality of users are represented in the virtual space, based on the motion data, and share the generated video with the plurality of users through the communication session.
Pet treat
A composition and process for making pet food treats is described herein. Auxiliary ingredients are combined to form a meat mixture. The meat mixture is formed into portions. The portions of meat mixture are positioned on a chew stick that comprises rawhide. The pet treat gives the appearance of a grilled shish kabob, where the meat portions are meant for initial taste, while the chew stick provides the dog with a longer-lasting chewing portion.
COMPUTER SYSTEM AND METHOD FOR CONTROLLING GENERATION OF VIRTUAL MODEL
Model data of a virtual model imitating an object model is generated based on photographed data obtained by photographing the object model including a joint structure. A given applied joint structure is applied to the virtual model. The virtual model based on the model data is disposed in a given virtual space. Virtual model management data including the model data and data of the applied joint structure is stored in a predetermined storage section or is externally output as data for causing a joint of the virtual model to function.