Patent classifications
G06F3/011
Analyzing augmented reality content item usage data
Usage metrics for augmented reality content may be identified and analyzed to determine measures of fitness for respective usage metrics. The measures of fitness may indicate a level of correlation with an outcome specified by an augmented reality content creator and an amount of interaction with an augmented reality content item by users of a client application. Recommendations may be provided to augmented reality content creators indicating modifications to augmented reality content items that have at least a threshold probability of increasing the level of interaction between users of the client application and the augmented reality content item.
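The core idea above, scoring each usage metric by how strongly it correlates with a creator-specified outcome, can be sketched roughly as follows. This is an illustrative Python sketch, not the patented implementation; the metric names, data, and use of Pearson correlation as the "measure of fitness" are assumptions.

```python
# Hypothetical sketch: ranking usage metrics by correlation with a
# creator-specified outcome. Metric names and values are illustrative.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def rank_metrics_by_fitness(metrics, outcome):
    """Return (metric, correlation) pairs sorted by |correlation|."""
    scored = {name: pearson(series, outcome) for name, series in metrics.items()}
    return sorted(scored.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Example: share counts track the outcome closely; dwell time does not.
metrics = {
    "shares":     [1, 2, 3, 4, 5],
    "dwell_time": [5, 1, 4, 2, 3],
}
outcome = [2, 4, 6, 8, 10]
ranking = rank_metrics_by_fitness(metrics, outcome)
```

A recommendation step could then surface the top-ranked metrics as the ones most worth optimizing in a modified content item.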
Wireless devices with flexible monitors and keyboards
A portable device (e.g., a wireless device such as a cell phone) is provided with a flexible keyboard and a flexible display screen. Such flexible components may be stored in the housing of the portable device when not in use. The flexible display screen and flexible keyboard may be expanded from the housing when the flexible components are utilized by a user. Non-flexible display and input components may be provided on the exterior of the portable device such that the device may be used, in some form, while the flexible components are stored. In one embodiment, a portion of the flexible display (or flexible keyboard) may be utilized while the flexible display (or flexible keyboard) is stored in the housing.
Systems, methods, and media for displaying interactive augmented reality presentations
Systems, methods, and media for displaying interactive augmented reality presentations are provided. In some embodiments, a system comprises: a plurality of head mounted displays, a first head mounted display comprising a transparent display; and at least one processor, wherein the at least one processor is programmed to: determine that a first physical location of a plurality of physical locations in a physical environment of the first head mounted display is located closest to the first head mounted display; receive first content comprising a first three dimensional model; receive second content comprising a second three dimensional model; present, using the transparent display, a first view of the first three dimensional model at a first time; and present, using the transparent display, a first view of the second three dimensional model at a second time subsequent to the first time based on one or more instructions received from a server.
Mid-air volumetric visualization movement compensation
A wearable computing device generates a volumetric visualization at a first position located in a three-dimensional space. The wearable computing device includes a volumetric source configured to create the volumetric visualization, and one or more sensors configured to detect movement of the wearable computing device. When the wearable computing device identifies a movement of itself, it adjusts the volumetric source based on that movement.
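One plausible reading of this compensation is that the device shifts the visualization's render offset by the inverse of its own sensed translation, so the visualization stays fixed in world space. The sketch below illustrates that idea; the class and field names are hypothetical, not from the patent.

```python
# Hedged sketch: keeping a mid-air volumetric visualization anchored at a
# fixed world position by compensating for sensed device movement.

class VolumetricSource:
    def __init__(self, render_offset):
        # Offset (x, y, z) at which the source projects the visualization,
        # expressed in the device's own coordinate frame (metres).
        self.render_offset = list(render_offset)

    def compensate(self, device_delta):
        """Apply the inverse of the device's movement to the render offset."""
        for axis in range(3):
            self.render_offset[axis] -= device_delta[axis]

# The visualization starts 0.5 m in front of the device; the wearer then
# steps 0.2 m forward, so the offset shrinks to keep the world position fixed.
source = VolumetricSource((0.0, 0.0, 0.5))
source.compensate((0.0, 0.0, 0.2))
```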
Systems and methods for adaptive input thresholding
The disclosed computer-implemented method may include: detecting, by a computing system, a gesture that appears to be intended to trigger a response by the computing system; identifying, by the computing system, a context in which the gesture was performed; and adjusting, based at least on that context, a threshold for determining whether to trigger the response, such that the computing system performs an action based on the detected gesture.
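A minimal sketch of such context-dependent thresholding is shown below, assuming gesture detection yields a confidence score in [0, 1] and a small table of per-context adjustments. The contexts and numbers are made up for illustration; the patent does not specify them.

```python
# Illustrative adaptive input thresholding: the trigger threshold shifts
# with the context in which the gesture was performed.

BASE_THRESHOLD = 0.6
CONTEXT_ADJUSTMENTS = {
    "walking":  +0.2,   # demand more confidence while the user is moving
    "seated":    0.0,
    "in_menu":  -0.1,   # be more permissive when a gesture is expected
}

def should_trigger(confidence, context):
    """Decide whether a detected gesture triggers its response."""
    threshold = BASE_THRESHOLD + CONTEXT_ADJUSTMENTS.get(context, 0.0)
    return confidence >= threshold

triggered_seated  = should_trigger(0.7, "seated")   # 0.7 >= 0.6
triggered_walking = should_trigger(0.7, "walking")  # 0.7 <  0.8
```

The same gesture thus triggers while seated but is suppressed while walking, which is the kind of false-positive reduction the abstract describes.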
Eye image selection
Systems and methods for eye image set selection, eye image collection, and eye image combination are described. Embodiments of the systems and methods for eye image set selection can include comparing a determined image quality metric with an image quality threshold to identify eye images that pass the threshold, and selecting, from a plurality of eye images, a set of eye images that passes the threshold.
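The selection step above amounts to a threshold filter over per-image quality metrics. The sketch below assumes each candidate image carries a precomputed quality score (e.g., a sharpness measure); the field names and threshold value are illustrative.

```python
# Minimal sketch of eye image set selection by quality threshold.

QUALITY_THRESHOLD = 0.75  # assumed cutoff for an acceptable eye image

def select_eye_images(candidates, threshold=QUALITY_THRESHOLD):
    """Keep only the eye images whose quality metric passes the threshold."""
    return [img for img in candidates if img["quality"] >= threshold]

candidates = [
    {"id": "a", "quality": 0.91},
    {"id": "b", "quality": 0.40},   # blurred capture, rejected
    {"id": "c", "quality": 0.78},
]
selected = select_eye_images(candidates)
```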
Blending virtual environments with situated physical reality
Various embodiments are provided herein for tracking a user's physical environment, to facilitate on-the-fly blending of a virtual environment with detected aspects of the physical environment. Embodiments can be employed to facilitate virtual roaming by compositing virtual representations of detected physical objects into virtual environments. A computing device coupled to a HMD can select portions of a depth map generated based on the user's physical environment, to generate virtual objects that correspond to the selected portions. The computing device can composite the generated virtual objects into an existing virtual environment, such that the user can traverse the virtual environment while remaining aware of their physical environment. Among other things, the computing device can employ various blending techniques for compositing, and further provide image pass-through techniques for selective viewing of the physical environment while remaining fully-immersed in virtual reality.
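Selecting "portions of a depth map" for compositing can be pictured as masking depth samples nearer than some cutoff, so virtual stand-ins for nearby physical objects get blended into the virtual environment. The sketch below is an assumption-laden simplification; the cutoff and grid values are illustrative metres.

```python
# Hedged sketch: flag depth-map cells closer than a cutoff so virtual
# representations of nearby physical objects can be composited.

NEAR_CUTOFF = 1.5  # composite only objects within 1.5 m of the user

def select_near_regions(depth_map, cutoff=NEAR_CUTOFF):
    """Return a boolean mask marking depth samples nearer than the cutoff."""
    return [[d < cutoff for d in row] for row in depth_map]

depth_map = [
    [0.8, 2.2],
    [1.4, 3.0],
]
mask = select_near_regions(depth_map)
```

A compositor could then generate virtual geometry only for masked regions, keeping the user aware of obstacles while fully immersed.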
Methods and systems for displaying virtual objects from an augmented reality environment on a multimedia device
Methods and systems are disclosed for displaying an augmented reality virtual object on a multimedia device. One method comprises detecting, in an augmented reality environment displayed using a first device, a virtual object; detecting, within the augmented reality environment, a second device, the second device comprising a physical multimedia device; and generating, at the second device, a display comprising a representation of the virtual object.
Device for color-based detection of image contents, computing device, and motor vehicle including the device
An apparatus for color-dependent detection of image contents includes a light input coupling apparatus, a carrier medium, a measuring region, an output coupling region, and a camera apparatus. The light input coupling apparatus includes a light source to emit light at a first wavelength. The carrier medium receives the light and transmits it by internal reflection to the measuring region. The measuring region includes a first diffraction structure that outputs light at the first wavelength. The first diffraction structure is formed as a multiplex diffraction structure to input-couple light in a second wavelength range. The output coupling region includes a second diffraction structure, also formed as a multiplex diffraction structure, that outputs light at the first wavelength and in the second wavelength range. The camera apparatus captures the light output from the carrier medium and provides it in the form of image data that correlates with the captured light.
Depth estimation using biometric data
A method of generating a depth estimate based on biometric data starts with a server receiving positioning data from a first device associated with a first user. The first device generates the positioning data based on analysis of a data stream comprising images of a second user who is associated with a second device. The server then receives biometric data of the second user from the second device. The biometric data is based on output from a sensor or a camera included in the second device. The server then determines a distance of the second user from the first device using the positioning data and the biometric data of the second user. Other embodiments are described herein.
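One common way such a biometric can anchor a depth estimate is the pinhole-camera relation: the biometric fixes the real-world size of a feature (here, an assumed interpupillary distance), and the positioning data supplies its apparent size in pixels. This sketch is illustrative only; the patent does not disclose this specific formula, and all names and numbers are hypothetical.

```python
# Illustrative pinhole-camera depth estimate: a known real-world feature
# size plus its apparent pixel size yields distance from the camera.

def estimate_distance(focal_px, real_size_m, apparent_size_px):
    """Distance (m) = focal length (px) * real size (m) / apparent size (px)."""
    return focal_px * real_size_m / apparent_size_px

# A 0.063 m interpupillary distance spanning 63 px under a 1000 px focal
# length places the second user about 1 m from the first device.
distance_m = estimate_distance(focal_px=1000.0, real_size_m=0.063,
                               apparent_size_px=63.0)
```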