Patent classifications
G02B27/0093
Shuttered Light Field Display
A method of displaying a light field to one or more viewers using a light field display module comprising an optical image generator for generating an optical image and an array of shutters for selectively providing the viewers with partial views of the optical image, the method comprising determining a shutter pattern that ensures no partial views ever overlap, and repeatedly generating the partial views according to the shutter pattern and from a digital representation of the light field, generating the optical image from the partial views, and shifting the shutter pattern, until each viewer's set of partial views comprises a full 3D view.
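The cycle described above (non-overlapping pattern, repeated generation, pattern shift until each viewer has a full view set) can be sketched as follows. This is a minimal illustration, assuming a 1-D shutter array with one open aperture per viewer; the shutter count, viewer labels, and view model are not from the abstract.

```python
# Minimal sketch of the shutter-cycling idea: each frame, a shifted shutter
# pattern opens disjoint apertures so viewers' partial views never overlap;
# after a full cycle every viewer has accumulated all view positions.

def shutter_cycle(num_shutters, viewers):
    """Yield one non-overlapping shutter pattern per frame.

    Each pattern maps a shutter index to the viewer it serves (or None
    when closed). Shifting the pattern each frame rotates every open
    aperture across all shutter positions.
    """
    for shift in range(num_shutters):
        pattern = [None] * num_shutters
        # One distinct shutter per viewer: disjoint by construction.
        for i, viewer in enumerate(viewers):
            pattern[(shift + i) % num_shutters] = viewer
        yield pattern

# After num_shutters frames, each viewer has seen the optical image through
# every shutter position, i.e. a full set of partial views.
views = {v: set() for v in ("A", "B")}
for pattern in shutter_cycle(8, ["A", "B"]):
    for slot, viewer in enumerate(pattern):
        if viewer is not None:
            views[viewer].add(slot)

print(views["A"] == views["B"] == set(range(8)))  # True
```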
WORLD LOCK SPATIAL AUDIO PROCESSING
A method for providing a world-locked experience to a user of a headset in an immersive reality application includes receiving, from an immersive reality application, a first audio waveform from a first acoustic source to provide to a user of a headset, determining a direction of arrival for the first acoustic source relative to the headset, and providing, to a speaker in the headset, an audio signal including the first audio waveform and intended for an ear of the user of the headset, wherein the audio signal includes a time delay and an amplitude for the first audio waveform based on the direction of arrival for the first acoustic source relative to the user of the headset. A non-transitory, computer-readable medium storing instructions which, when executed by a processor, cause a system to perform the above method, and the system, are also provided.
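The core signal step above (a per-ear time delay and amplitude derived from the direction of arrival) can be sketched as below. The spherical-head ITD formula and the gain curve are textbook approximations chosen for illustration; none of the constants or names come from the abstract.

```python
import math

# Rough sketch: derive a per-ear time delay (interaural time difference)
# and amplitude (interaural level difference) from a source's direction
# of arrival relative to the headset.

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, average adult head radius (assumption)

def ear_signal_params(azimuth_rad, ear):
    """Return (time_delay_s, amplitude_gain) for "left" or "right".

    azimuth_rad: direction of arrival in head coordinates, 0 = straight
    ahead, positive toward the right ear.
    """
    # Woodworth-style interaural time difference for a spherical head.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))
    # Positive azimuth (source on the right) delays the far (left) ear.
    delay = max(0.0, itd) if ear == "left" else max(0.0, -itd)
    # Crude level difference: attenuate the ear turned away from the source.
    ear_axis = -math.pi / 2 if ear == "left" else math.pi / 2
    gain = 0.6 + 0.4 * max(0.0, math.cos(azimuth_rad - ear_axis))
    return delay, gain
```

Applying `delay` and `gain` to the first audio waveform before mixing would yield the per-ear audio signal the method describes.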
METHOD AND SYSTEM FOR GAZE-BASED CONTROL OF MIXED REALITY CONTENT
Systems and methods are presented for discovering and positioning content into augmented reality space. A method includes forming a three-dimensional (3D) map of surroundings of a user of an augmented reality (AR) head mounted display (HMD); determining a depth-wise location of a gaze point of the user based on eye gaze direction and eye vergence; determining a visual guidance line pathway in the 3D map; guiding an action of the user along the visual guidance line pathway at one or more identified focal points; and rendering a mixed reality (MR) object along the visual guidance line pathway at a location corresponding to a direction of the user's gaze.
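The depth-wise gaze localization step can be sketched with simple geometry: given the interpupillary distance (IPD) and the vergence angle between the two eyes' gaze rays, the fixated point lies roughly where the rays cross. The IPD value below is a typical figure, not one from the abstract.

```python
import math

# Sketch of depth-from-vergence: for gaze rays converging symmetrically,
# depth = (IPD / 2) / tan(vergence / 2).

IPD_M = 0.063  # typical interpupillary distance in meters (assumption)

def gaze_depth(vergence_rad):
    """Depth of the gaze point along the viewing direction, in meters."""
    if vergence_rad <= 0.0:
        return math.inf  # parallel gaze rays: fixation at optical infinity
    return (IPD_M / 2.0) / math.tan(vergence_rad / 2.0)
```

Combined with the eye gaze direction, this depth places the gaze point in the 3D map of the surroundings.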
Video Processing Systems and Methods
Example video processing systems and methods are described. In one implementation, compressed video data is received from a recording device. Additionally, metadata associated with the compressed video data is received such that the metadata includes frame-specific metadata associated with frames in the compressed video data. Further, an application program is received and configured to generate a real-time interactive experience for a user based on the compressed video data and the metadata associated with the compressed video data. A non-fungible token (NFT) is generated that includes the compressed video data, the metadata associated with the compressed video data, and the application program.
SYSTEM AND METHOD FOR ENHANCING VISUAL ACUITY
A head wearable display system comprising a target object detection module receiving multiple image pixels of a first portion and a second portion of a target object, and the corresponding depths; a first light emitter emitting multiple first-eye light signals to display a first-eye virtual image of the first portion and the second portion of the target object for a viewer; a first light direction modifier for respectively varying a light direction of each of the multiple first-eye light signals emitted from the first light emitter; a first collimator; a first combiner, for redirecting and converging the multiple first-eye light signals towards a first eye of the viewer. The first-eye virtual image of the first portion of the target object in a first field of view has a greater number of the multiple first-eye light signals per degree than that of the first-eye virtual image of the second portion of the target object in a second field of view.
Apparatuses, Methods and Computer Programs for Controlling a Microscope System
Examples relate to apparatuses, methods and computer programs for controlling a microscope system, and to a corresponding microscope system. An apparatus for controlling a microscope system comprises an interface for communicating with a camera module. The camera module is suitable for providing camera image data of a head of a user of the microscope system. The apparatus comprises a processing module configured to obtain the camera image data from the camera module via the interface. The processing module is configured to process the camera image data to determine information on an angular orientation of the head of the user relative to a display of the microscope system. The processing module is configured to provide a control signal for a robotic adjustment system of the microscope system based on the information on the angular orientation of the head of the user.
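The final step, turning the head's angular orientation relative to the display into a control signal for the robotic adjustment system, might look like the proportional controller below. The deadband and gain are made-up tuning parameters for illustration, not values from the abstract.

```python
import math

# Illustrative sketch: map head yaw/pitch (relative to the display) to
# adjustment commands, ignoring small head motions via a deadband.

DEADBAND_DEG = 5.0  # ignore small head motions (assumption)
GAIN = 0.5          # adjustment degrees commanded per degree of head angle

def control_signal(yaw_deg, pitch_deg):
    """Map head yaw/pitch to (yaw, pitch) adjustment commands."""
    def axis(angle_deg):
        if abs(angle_deg) < DEADBAND_DEG:
            return 0.0
        # Proportional command on the angle beyond the deadband.
        return GAIN * (angle_deg - math.copysign(DEADBAND_DEG, angle_deg))
    return axis(yaw_deg), axis(pitch_deg)

print(control_signal(15.0, -25.0))  # (5.0, -10.0)
```

The deadband keeps the display still while the user's head is roughly facing it; only sustained turns drive the robotic adjustment.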
OPTICAL ELEMENT FOR INFLUENCING LIGHT DIRECTIONS, ARRANGEMENT FOR IMAGING A MULTIPLICITY OF ILLUMINATED OR SELF-LUMINOUS SURFACES, AND ILLUMINATION DEVICE
An optical element including a plate-shaped substrate with a light-entrance surface and a light-exit surface, a multiplicity of imaging elements formed on the light-exit surface and a multiplicity of diaphragms formed on the light-entrance surface. Each diaphragm includes a transparent geometric region in an opaque region. The optical element can be switched between two operating modes B1 and B2 such that some of the imaging elements change their focal length between values f1 and f2 and/or some of the diaphragms change their aperture width and/or their position. Exactly one diaphragm is associated with each imaging element in mode B1 so that light passing through the diaphragm is imaged or collimated by the associated imaging element. Consequently, light arriving in the optical element through the diaphragms and then through the light-entrance surface has, after passing through the associated imaging elements in the two operating modes B1 and B2, different propagation angles.
Light field near-eye display and method thereof for generating virtual reality images
A method for generating virtual reality images, used in a light field near-eye display, includes the steps of: shifting a display image according to at least one change vector of a plurality of eye movement parameters; calculating a compensation mask according to a simulated image; and superimposing the compensation mask on a target image to generate a superimposed target image, wherein the brightness distributions of the simulated image and the compensation mask are opposite to each other. The light field near-eye display is also provided. In this way, the light field near-eye display and the method can improve the uniformity of the image and expand the eye box size.
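The compensation-mask step can be sketched as below: the mask's brightness distribution is the inverse of the simulated (non-uniform) image, so regions the simulation predicts as dim receive a boost when the mask is superimposed on the target image. The nested-list image representation and the example values are illustrative, not from the abstract.

```python
# Sketch of mask compensation for display non-uniformity. Images are plain
# nested lists of normalized brightness values in [0, 1].

def compensation_mask(simulated):
    """Mask whose brightness distribution is opposite to the simulated image's."""
    peak = max(max(row) for row in simulated)
    return [[peak - v for v in row] for row in simulated]

def superimpose(target, mask):
    """Add the mask to the target image, clamped to the displayable range."""
    return [[min(1.0, t + m) for t, m in zip(t_row, m_row)]
            for t_row, m_row in zip(target, mask)]

simulated = [[1.0, 0.8], [0.6, 1.0]]   # non-uniform brightness, as simulated
target = [[0.5, 0.5], [0.5, 0.5]]      # uniform target image
out = superimpose(target, compensation_mask(simulated))
# Dimmer regions of the simulation get a larger boost: out[1][0] > out[0][0]
```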
NEAR-EYE DISPLAY DEVICE
The present invention relates to a near-eye display device. The near-eye display device includes a display, a first lens disposed in front of the display so as to be spaced apart from the display by a predetermined distance, a dynamic aperture adjustment element disposed adjacent to the first lens to dynamically control an aperture size of the first lens and a horizontal position of the aperture on a plane perpendicular to an optical axis, a main optics lens disposed to be spaced apart from the first lens by a predetermined distance, and a control system configured to control the dynamic aperture adjustment element.
HEAD-MOUNTED DEVICE
A head-mounted device includes a first light field camera, a second light field camera, a first light field display, a second light field display and a supporting structure. Each of the first light field camera and the second light field camera includes, in order from an object side to an image side, a lens group, a collimator and an image sensor. Each of the lens groups includes a plurality of lens units. The lens units are arranged in a two-dimensional lens array, and each of the lens units includes a lens container and a plurality of lens elements. A first engaging structure is disposed between at least two adjacent lens elements of the lens elements.