Patent classifications
G06T2215/16
Methods for collecting and processing image information to produce digital assets
Paired images of substantially the same scene are captured with the same freestanding sensor. The paired images include reflected light illuminated with controlled polarization states that are different between the paired images. Information from the images is applied to a convolutional neural network (CNN) configured to derive a spatially varying bi-directional reflectance distribution function (SVBRDF) for objects in the paired images. Alternatively, the sensor is fixed and oriented to capture images of an object of interest in the scene while a light source traverses a path that intersects the sensor's field of view. Information from the paired images of the scene and from the images captured of the object of interest when the light source traverses the field of view is applied to a CNN to derive a SVBRDF for the object of interest. The image information and the SVBRDF are used to render a representation with artificial lighting conditions.
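The final relighting step can be illustrated with a minimal Lambertian sketch. A CNN-derived SVBRDF would carry richer terms (specular lobes, roughness); the per-pixel albedo and normal maps here are simplified stand-ins, not the patent's representation:

```python
import numpy as np

def relight_lambertian(albedo, normals, light_dir):
    """Render per-pixel shading under an artificial directional light.

    albedo    -- (H, W, 3) diffuse reflectance, a simplified SVBRDF slice
    normals   -- (H, W, 3) unit surface normals
    light_dir -- (3,) direction toward the light
    """
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    # Cosine term, clamped so back-facing pixels receive no light.
    n_dot_l = np.clip(normals @ l, 0.0, None)
    return albedo * n_dot_l[..., None]

# Toy example: a flat patch facing +z, lit head-on vs. at a grazing angle.
albedo = np.full((2, 2, 3), 0.8)
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0
head_on = relight_lambertian(albedo, normals, (0, 0, 1))
grazing = relight_lambertian(albedo, normals, (1, 0, 0))
```

Swapping `light_dir` between renders is what "artificial lighting conditions" amounts to once the reflectance maps are in hand.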
SYSTEMS AND METHOD FOR USING VISUAL LANDMARKS IN INITIAL NAVIGATION
A route from a current location of a portable device to a destination is determined, where the route includes a sequence of directed sections. Navigation instructions to guide a user of the portable device along the route to the destination are generated. To this end, candidate navigation landmarks perceptible within a 360-degree range of the current location of the portable device are identified, a navigation landmark disposed in a direction substantially opposite to the direction of the first one in the sequence of directed sections is selected from among the candidate navigation landmarks, and an initial instruction in the navigation instructions is generated and provided via a user interface of the portable device. The initial instruction references the selected navigation landmark.
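The landmark-selection step reduces to an angular comparison: reverse the bearing of the first directed section and choose the candidate landmark whose bearing is closest to that opposite direction. A minimal sketch, with candidate names and bearing conventions that are illustrative rather than from the patent:

```python
def select_initial_landmark(candidates, first_section_bearing):
    """Pick the candidate whose bearing is closest to the direction
    opposite the first directed section of the route.

    candidates -- list of (name, bearing_degrees) pairs perceptible
                  within a 360-degree range of the current location
    """
    opposite = (first_section_bearing + 180.0) % 360.0

    def angular_distance(bearing):
        diff = abs(bearing - opposite) % 360.0
        return min(diff, 360.0 - diff)

    return min(candidates, key=lambda c: angular_distance(c[1]))

# Route starts heading roughly north (20 deg); the clock tower sits behind.
candidates = [("cafe", 10.0), ("clock tower", 195.0), ("fountain", 90.0)]
name, _ = select_initial_landmark(candidates, 20.0)
```

The initial instruction can then reference the selection, e.g. "Walk away from the clock tower."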
Image rendering of laser scan data
A method of rendering an image of three-dimensional laser scan data is described. The method includes providing a range cube map and a corresponding image cube map, generating a tessellation pattern using the range cube map, and rendering an image based on the tessellation pattern by sampling the image cube map.
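One way to read the tessellation step: let depth variation in the range cube map drive per-tile subdivision density, so flat regions stay coarse while depth discontinuities get finer geometry before the image cube map is sampled. A sketch over a single face, with tile size and thresholds that are assumptions, not values from the patent:

```python
import numpy as np

def tessellation_levels(range_face, tile=4, thresholds=(0.05, 0.2)):
    """Assign a subdivision level per tile of one range cube-map face.

    Tiles with larger depth spread get finer tessellation so the
    reconstructed surface follows the scan geometry; flat tiles stay coarse.
    """
    h, w = range_face.shape
    levels = np.zeros((h // tile, w // tile), dtype=int)
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            spread = np.ptp(range_face[i:i + tile, j:j + tile])
            levels[i // tile, j // tile] = sum(spread > t for t in thresholds)
    return levels

# A face that is flat on the left and has a depth step on the right.
face = np.ones((8, 8))
face[:, 6:] = 2.0
levels = tessellation_levels(face)
```

The resulting levels would feed a tessellation stage, after which each generated vertex samples the image cube map for its color.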
Vehicle driver feedback device
The disclosure relates generally to an in-vehicle feedback system, and more particularly, to an in-vehicle device with a display or graphical interface that collects driving data and provides feedback based on the driving data. The system may comprise an in-vehicle device that includes a graphical user interface and a processor and a data collection device wirelessly connected to the in-vehicle device. The in-vehicle device may be configured to receive vehicle telematics data from the data collection device and the processor may process the telematics data in real time and cause the telematics data to be displayed on the graphical user interface. The graphical user interface may include a speed display and an acceleration display.
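The real-time processing step can be as simple as deriving acceleration from successive speed samples before updating the two displays. A minimal sketch; the sample format and field names are illustrative:

```python
def derive_display_values(samples):
    """Turn timestamped speed readings from the data collection device
    into values for the speed and acceleration displays.

    samples -- list of (timestamp_seconds, speed_m_per_s) tuples
    """
    readings = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        accel = (v1 - v0) / (t1 - t0)  # finite-difference acceleration
        readings.append({"speed": v1, "acceleration": accel})
    return readings

# Vehicle accelerates from 10 to 14 m/s over two one-second intervals.
display = derive_display_values([(0.0, 10.0), (1.0, 12.0), (2.0, 14.0)])
```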
PROCESSING METHOD AND APPARATUS WITH AUGMENTED REALITY
A method and apparatus for processing augmented reality (AR) are disclosed. The method includes determining a compensation parameter to compensate for light attenuation of visual information caused by a display area of an AR device as the visual information corresponding to a target scene is displayed through the display area, generating a background image without the light attenuation by capturing the target scene using a camera of the AR device, generating a compensation image by reducing brightness of the background image using the compensation parameter, generating a virtual object image to be overlaid on the target scene, generating a display image by synthesizing the compensation image and the virtual object image, and displaying the display image in the display area.
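The compensation arithmetic can be sketched directly: if the display area transmits only a fraction t of real-world light, adding a compensation image equal to (1 - t) times the camera's full-brightness background restores the perceived scene brightness, and virtual pixels replace the compensation wherever the object is drawn. The transmittance value below is an assumption for illustration:

```python
import numpy as np

def compose_ar_frame(background, virtual_rgb, virtual_mask, transmittance=0.7):
    """Build the display image from a full-brightness background capture.

    background    -- (H, W, 3) camera image of the target scene
    virtual_rgb   -- (H, W, 3) rendered virtual object image
    virtual_mask  -- (H, W) 1 where the virtual object covers a pixel
    transmittance -- fraction of real-world light passing the display area
    """
    # Compensation image: a dimmed copy of the background that, added to the
    # attenuated see-through view, restores the original brightness.
    compensation = background * (1.0 - transmittance)
    mask = virtual_mask[..., None]
    display = np.where(mask > 0, virtual_rgb, compensation)
    # What the wearer perceives: attenuated reality plus the display image.
    perceived = background * transmittance + display
    return display, perceived

bg = np.full((2, 2, 3), 0.5)
display, perceived = compose_ar_frame(bg, np.zeros((2, 2, 3)), np.zeros((2, 2)))
```

With no virtual object present, the perceived image matches the unattenuated background, which is the point of the compensation parameter.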
System and method for providing personalized transactions based on 3D representations of user physical characteristics
The disclosed systems, components, methods, and processing steps are directed to determining user-item fit characteristics of an item for a user body part by accessing a three-dimensional (3D) reconstructed model of the user body part, accessing information about one or more 3D reference models of the item, the information for each 3D reference model including respective dimensional measurement, spatial, and geometrical attributes, performing a 3D matching process based on the 3D reconstructed model and the accessed information of the one or more 3D reference models to determine a best-fitting 3D reference model from the one or more 3D reference models, integrating the best-fitting 3D reference model with the 3D reconstructed model to provide a 3D best fit representation, and displaying the 3D best fit representation along with visual indications of user-item fit characteristics.
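The 3D matching process can be sketched as a nearest-neighbour search over dimensional attributes: compare the reconstructed body-part measurements against each reference model and keep the one with the smallest deviation. The attribute names and values below are illustrative, not the patent's measurement set:

```python
def best_fitting_model(reconstructed, reference_models):
    """Return the reference model whose dimensional attributes deviate
    least from the 3D reconstructed model of the user body part.

    reconstructed    -- dict of measurement name -> value (millimetres)
    reference_models -- dict of model id -> dict of the same measurements
    """
    def deviation(model):
        return sum((model[k] - reconstructed[k]) ** 2 for k in reconstructed)

    best_id = min(reference_models, key=lambda m: deviation(reference_models[m]))
    return best_id, deviation(reference_models[best_id])

foot = {"length": 262.0, "width": 98.0}
shoes = {
    "size_41": {"length": 258.0, "width": 96.0},
    "size_42": {"length": 264.0, "width": 99.0},
    "size_43": {"length": 271.0, "width": 101.0},
}
best, err = best_fitting_model(foot, shoes)
```

The selected model would then be integrated with the reconstructed model to produce the displayed best-fit representation.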
PRESENTING REAL WORLD VIEW DURING VIRTUAL REALITY PRESENTATION
In one aspect, a device includes a processor, a display accessible to the processor, and storage accessible to the processor. The storage includes instructions executable by the processor to present first virtual reality (VR) content on the display. The instructions are also executable to determine that an end user-defined trigger has occurred for transitioning from presenting VR content to presenting the real-world environment. Responsive to the determination, the instructions are executable to transition presentation of at least part of the first VR content to presentation of the real-world environment.
SYSTEM AND METHOD FOR RENDERING 6 DEGREE-OF-FREEDOM VIRTUAL REALITY
A system for rendering 6 degree-of-freedom virtual reality according to an embodiment of the present disclosure includes a visibility test module that performs a visibility test determining whether a current point of interest, at which a main viewpoint is directed, is visible from each of a plurality of reference viewpoints, and that generates visibility information by counting the invisible fragments of each reference viewpoint based on the test result; a reference viewpoint selection module that selects a final reference viewpoint for the rendering process of the current frame based on the visibility information for each of the plurality of reference viewpoints and a preset selection criterion; and a rendering process module that performs an image-based rendering process using a color image and a depth image corresponding to the final reference viewpoint.
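The selection stage can be read as: per reference viewpoint, count fragments of the current point of interest that fail the visibility test, then apply the preset criterion. A sketch under the assumption that the criterion keeps the viewpoints with the fewest invisible fragments; the counts and the criterion's exact form are illustrative:

```python
def select_reference_viewpoints(invisible_counts, max_selected=2):
    """Choose final reference viewpoints for the current frame.

    invisible_counts -- dict of viewpoint id -> number of fragments of the
                        point of interest found invisible by the test
    max_selected     -- assumed preset criterion: keep the viewpoints with
                        the fewest occluded fragments
    """
    ranked = sorted(invisible_counts, key=invisible_counts.get)
    return ranked[:max_selected]

# Viewpoint C sees the point of interest almost fully; A is mostly occluded.
counts = {"A": 120, "B": 15, "C": 3, "D": 40}
final = select_reference_viewpoints(counts)
```

The color and depth images of the selected viewpoints then drive the image-based rendering pass.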
Creating and distributing interactive addressable virtual content
Systems and methods create and distribute addressable virtual content with interactivity. The virtual content may depict a live event and may be customized for each individual user based on dynamic characteristics (e.g., habits, preferences, etc.) of the user that are captured during user interaction with the virtual content. The virtual content is generated with low latency between the actual event and the live content that allows the user to interactively participate in actions related to the live event. The virtual content may represent a studio with multiple display screens that each show different live content (of the same or different live events), and may also include graphic displays that include related data such as statistics corresponding to the live event, athletes at the event, and so on. The content of the display screens and graphics may be automatically selected based on the dynamic characteristics of the user.
DIARISATION AUGMENTED REALITY AIDE
An image of a real-world environment including one or more users is received from an image capture device. A mask status of a first user is determined by a processor based on the image. A stream of audio including speech from the one or more users is captured from one or more audio transceivers. A first user speech is identified from the stream of audio by the processor. The stream of audio is parsed, by the processor and based on the first user speech and an audio processing technique, to create a first user speech element. An augmented view that includes the first user speech element is generated, for a wearable computing device, based on the first user speech and the mask status.
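The mask-aware flow can be sketched as: attribute diarised speech segments to users, then surface a speech element in the augmented view for speakers whose mask status obscures their mouth. The segment structure and statuses below are illustrative:

```python
def build_speech_elements(segments, mask_status):
    """Create augmented-view speech elements from diarised audio.

    segments    -- list of (user_id, transcribed_text) from the parsed stream
    mask_status -- dict of user_id -> True if the user is wearing a mask
    """
    elements = []
    for user_id, text in segments:
        if mask_status.get(user_id, False):
            # Masked speakers get on-screen captions in the augmented view.
            elements.append({"user": user_id, "caption": text})
    return elements

segments = [("alice", "the elevator is this way"), ("bob", "thanks")]
elements = build_speech_elements(segments, {"alice": True, "bob": False})
```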