Patent classification: G02B2027/014
Wearable heads-up display with optical path fault detection
A wearable heads-up display includes a power source, laser sources, and a lightguide. A photodetector is positioned to detect an intensity of a test light emitted at a perimeter of the lightguide from an optical path within the lightguide. A laser safety circuit provides a control to reduce or shut off a supply of electrical power from the power source to the laser sources in response to an output signal from the photodetector indicating that the detected intensity is below a threshold.
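The shutoff decision described above amounts to a simple threshold comparison on the photodetector output. Below is a minimal Python sketch of that logic; the threshold value and names are illustrative assumptions, not part of the patent.

```python
INTENSITY_THRESHOLD = 0.2  # assumed normalized photodetector reading

def laser_power_control(detected_intensity: float,
                        threshold: float = INTENSITY_THRESHOLD) -> str:
    """Return the power command for the laser sources."""
    if detected_intensity < threshold:
        # Test light is not emerging at the lightguide perimeter as expected:
        # treat this as an optical-path fault and cut laser power.
        return "shutoff"
    return "normal"

print(laser_power_control(0.05))  # fault case → shutoff
print(laser_power_control(0.90))  # nominal case → normal
```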
Systems and methods for eye tracking using modulated radiation
Eye-tracking systems of the present disclosure may include at least one light source configured to emit modulated radiation toward an intended location for a user's eye. The modulated radiation may be modulated in a manner that enables the light source to be identified by detection and analysis of the modulated radiation. At least one optical sensor including at least one sensing element may be configured to detect at least a portion of the modulated radiation. A processor may be configured to identify, based on the modulated radiation detected by the optical sensor, the light source that emitted the modulated radiation. Various other methods, systems, and devices are also disclosed.
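One plausible modulation scheme (an illustrative assumption, not the patent's stated method) assigns each light source a distinct modulation frequency, so the emitting source can be identified from the dominant frequency in the sensed signal:

```python
import numpy as np

SOURCE_FREQS_HZ = {"led_a": 1000.0, "led_b": 1500.0}  # assumed assignment
SAMPLE_RATE_HZ = 10_000.0

def identify_source(signal: np.ndarray) -> str:
    """Identify the emitting source from the dominant modulation frequency."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE_HZ)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    # Report the registered source whose frequency is closest to the peak.
    return min(SOURCE_FREQS_HZ, key=lambda s: abs(SOURCE_FREQS_HZ[s] - peak))

t = np.arange(0, 0.05, 1.0 / SAMPLE_RATE_HZ)  # 50 ms of samples
print(identify_source(np.sin(2 * np.pi * 1500.0 * t)))  # → led_b
```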
Ambient light management systems and methods for wearable devices
Techniques are described for operating an optical system. In some embodiments, light associated with a world object is received at the optical system. Virtual image light is projected onto an eyepiece of the optical system. A portion of a system field of view of the optical system to be at least partially dimmed is determined based on information detected by the optical system. A plurality of spatially-resolved dimming values for the portion of the system field of view may be determined based on the detected information. The detected information may include light information, gaze information, and/or image information. A dimmer of the optical system may be adjusted to reduce an intensity of light associated with the world object in the portion of the system field of view according to the plurality of dimming values.
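The spatially-resolved dimming step can be sketched as follows; the scaling rule, array layout, and names are illustrative assumptions rather than the patent's actual algorithm:

```python
import numpy as np

def dimming_map(virtual_alpha: np.ndarray, ambient_lux: float,
                max_lux: float = 10_000.0) -> np.ndarray:
    """Return per-pixel dimming values in [0, 1]; 1.0 = fully dimmed."""
    # Dim more where virtual content is opaque, scaled by ambient brightness.
    ambient_factor = min(ambient_lux / max_lux, 1.0)
    return np.clip(virtual_alpha * ambient_factor, 0.0, 1.0)

alpha = np.array([[0.0, 0.5],   # opacity of virtual content per pixel
                  [1.0, 0.25]])
print(dimming_map(alpha, ambient_lux=5_000.0))
```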
Apparatus and method for pixelated occlusion
An apparatus and method for providing pixelated occlusion are disclosed. The apparatus includes a pixelated display, a unitary reflective and transmissive optical component, and a contact lens. The pixelated display provides a display image. The unitary reflective and transmissive optical component receives the display image and forms a reflected display image having a first polarization, and receives a scene image and forms a transmitted scene image. The contact lens forms a combined image including the reflected display image and the transmitted scene image. The pixelated display includes one or more occluding pixels having a second polarization, with the first polarization substantially orthogonal to the second polarization. The pixelated display is positioned anterior to the unitary reflective and transmissive optical component.
Systems and methods for mask-based temporal dithering
In one embodiment, a computing system may determine a target grayscale value associated with a target image to be represented by a plurality of subframes. The system may determine grayscale ranges based on the target grayscale value. Each grayscale range may correspond to a combination of zero or more subframes of the plurality of subframes. The system may select dot subsets from a dithering mask based on the grayscale ranges. Each of the dot subsets may correspond to a grayscale range. The system may generate the subframes based on (1) the selected dot subsets and (2) the respective combinations of zero or more subframes. The subframes may have a smaller number of bits per color than the target image. The system may display the subframes sequentially in the time domain on a display to represent the target image.
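A simplified reconstruction of the mask-based dithering step is sketched below, assuming binary subframes and an ordered-dither mask that decides, per pixel, whether the fractional part of the grayscale value rounds up or down. The details are illustrative, not the patent's exact procedure.

```python
import numpy as np

def make_subframes(target: np.ndarray, mask: np.ndarray, n: int) -> np.ndarray:
    """target: HxW floats in [0, n]; mask: HxW thresholds in [0, 1)."""
    # The mask decides, per pixel, whether the fractional part rounds up.
    on_count = np.floor(target) + (np.mod(target, 1.0) > mask)
    # Subframe i lights a pixel if fewer than on_count subframes precede it.
    return np.stack([(on_count > i).astype(np.uint8) for i in range(n)])

target = np.full((2, 2), 1.5)                  # halfway between 1 and 2
mask = np.array([[0.25, 0.75], [0.75, 0.25]])  # tiny dithering mask
frames = make_subframes(target, mask, n=3)
print(frames.sum(axis=0))  # per-pixel "on" counts average to the target
```

Displayed sequentially in time, the binary subframes average out to the fractional target value over the pixel neighbourhood.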
Mobile device for viewing of dental treatment outcomes
A mobile computing device comprises an AR display, an image capture device that generates image data of a face of a viewer of the AR display, and a processing device. The processing device receives the image data; processes the image data to identify a position of a dental arch in the image data; determines a treatment outcome for the dental arch; generates a post-treatment image of the dental arch that shows the treatment outcome; generates updated image data comprising a superimposition of the post-treatment image of the dental arch over the received image data depicting the face of the viewer; and outputs the updated image data to the AR display, wherein the post-treatment image of the dental arch is superimposed over the dental arch in the received image data such that the post-treatment image is visible in the AR display rather than a true depiction of the dental arch.
Eyewear with integrated peripheral display
Systems and methods are provided for projecting each image in a chronology of images as a sequence of image portions using a shifting element as part of a near-eye display system, for use in virtual reality, augmented reality, or mixed reality systems. In some example embodiments, a chronology of images is received by a peripheral sequencing system. The system divides each image into image portions and generates sequences of image portions that recreate the images based on arrangement data. The system then causes a high-speed display of each sequence of image portions such that the portions appear simultaneous to a viewer. In some embodiments, the projection is transmitted to a shifting optical element, such as a rotating micromirror, that propagates the display to a user. In some embodiments, the system further detects and corrects for image and environmental distortions.
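The divide-and-sequence step can be illustrated with a toy sketch, assuming vertical strips and an arrangement list giving the display order (both are assumptions for clarity, not the patent's scheme):

```python
def sequence_strips(image_rows, n_strips, arrangement):
    """Split an image (list of equal-length row strings) into vertical strips
    and return them in the order given by the arrangement data."""
    width = len(image_rows[0]) // n_strips
    strips = [[row[i * width:(i + 1) * width] for row in image_rows]
              for i in range(n_strips)]
    return [strips[i] for i in arrangement]

image = ["abcdef",
         "ghijkl"]
seq = sequence_strips(image, n_strips=3, arrangement=[2, 0, 1])
print(seq)  # strips shown right, left, middle; viewer perceives one image
```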
System and method for interactive 360 video playback based on user location
A system, method, and head-mounted display (HMD) apparatus for recording a video and playing it back to a viewer in a virtual reality (VR) environment. A geographical area is recorded with an omnidirectional video camera by dividing the geographical area into a plurality of area portions and recording each area portion in a separate video section while moving the camera in different directions. Time points in each video section are associated with virtual locations of the viewer. At a time point that provides the viewer with a choice of directions to proceed, the system receives the viewer's choice and presents the video section corresponding to the viewer's virtual location and desired direction of movement. The viewer's choice may be indicated by detecting the direction of the viewer's field of view or by receiving from the viewer a response to a banner notification.
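The branching playback can be sketched as a lookup from (virtual location, chosen direction) to a recorded video section; the data layout and names below are assumptions for illustration:

```python
SECTIONS = {  # (virtual location, direction) -> recorded video section
    ("plaza", "north"): "sec_03",
    ("plaza", "east"): "sec_07",
}

def next_section(location: str, chosen_direction: str) -> str:
    """Select the video section for the viewer's location and chosen direction."""
    try:
        return SECTIONS[(location, chosen_direction)]
    except KeyError:
        raise ValueError(f"no recording from {location} heading {chosen_direction}")

print(next_section("plaza", "east"))  # → sec_07
```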
Multi-channel depth estimation using census transforms
A depth estimation system is described capable of determining depth information using two images from two cameras. A first camera captures a first image and a second camera captures a second image, both images including a plurality of light channels. A scan direction is selected from a plurality of scan directions. For the selected scan direction, along each of a plurality of scanlines, the system compares pixels from the first image to pixels from the second image. The comparison is based on calculating a census transform for each pixel in the first image and a census transform for each pixel in the second image. This comparison is used to determine a stereo correspondence between the pixels in the first image and the pixels in the second image. The system generates a depth map based on the stereo correspondence.
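A compact sketch of census-based scanline matching is shown below, using a 3x3 census window and Hamming-distance costs on a single channel (the patent compares per light channel across multiple scan directions). This is a generic illustration of the technique, not the patent's implementation.

```python
import numpy as np

def census_3x3(img: np.ndarray) -> np.ndarray:
    """8-bit census code per pixel: each bit is (neighbour < centre)."""
    codes = np.zeros(img.shape, dtype=np.uint16)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes = (codes << 1) | (neighbour < img).astype(np.uint16)
    return codes

def disparity(left: np.ndarray, right: np.ndarray, max_disp: int) -> np.ndarray:
    """Per-pixel disparity by minimum Hamming distance between census codes."""
    lc, rc = census_3x3(left), census_3x3(right)
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            costs = [bin(int(lc[y, x]) ^ int(rc[y, x - d])).count("1")
                     for d in range(min(max_disp, x) + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

left = np.zeros((6, 16), dtype=np.uint8)
left[:, 7] = 100                   # a bright vertical stripe
right = np.roll(left, -2, axis=1)  # same scene seen with disparity 2
print(disparity(left, right, max_disp=4)[2, 7])  # → 2
```

The census code makes the match cost depend on local intensity ordering rather than absolute values, which is why the comparison is robust across differing channel gains between the two cameras.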
Near interaction mode for far virtual object
A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
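The mode selection reduces to a distance test over the object's control points. The sketch below, with assumed names, an assumed 1.5 m threshold, and an assumed 0.5 m placement offset for the near-interaction proxy, illustrates the decision:

```python
import math

THRESHOLD_M = 1.5  # assumed threshold distance (user's comfortable reach)

def interaction_mode(user_pos, control_points, threshold=THRESHOLD_M):
    """Far mode if any control point lies beyond the threshold distance."""
    if any(math.dist(user_pos, cp) > threshold for cp in control_points):
        return "far"
    return "near"

def on_trigger(mode, user_pos, threshold=THRESHOLD_M):
    """In far mode, a trigger input places a near-interaction proxy in reach."""
    if mode == "far":
        # Assumed placement: 0.5 m in front of the user along +z.
        return (user_pos[0], user_pos[1], user_pos[2] + 0.5)
    return None

mode = interaction_mode((0, 0, 0), [(0, 0, 3.0), (0.2, 0, 2.8)])
print(mode)                        # → far
print(on_trigger(mode, (0, 0, 0)))
```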