Patent classifications
H04N13/327
Binocular See-Through AR Head-Mounted Display Device and Information Display Method Therefor
A binocular see-through AR head-mounted display device is disclosed. With the mapping relationships f_c→s and f_d→i pre-stored in the head-mounted device, the position of the target object in the camera image is obtained through an image tracking method and mapped to the screen coordinate system of the head-mounted device to calculate the left/right image display position. Through a monocular distance-finding method, the distance between the target object and the camera is calculated in real time from the imaging scale of the camera, so as to obtain a left-right image distance and thereby the left or the right image display position. Correspondingly, the present invention also provides an information display method for a binocular see-through AR head-mounted display device and an augmented reality information display system. The present invention is highly reliable and low-cost. Conventional depth-of-field adjustment changes the image distance of an optical element; the present invention breaks with this convention by calculating the left and right image display positions for depth-of-field adjustment without changing the structure of the optical device. The present invention is novel and practical compared to changing an optical focal length.
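The two-step calculation the abstract describes (monocular distance from the camera's imaging scale, then a symmetric left/right display offset from that distance) can be sketched as below. The pinhole-model formulas and every parameter value are illustrative assumptions, not the patent's stored mappings:

```python
# Sketch of the pipeline in the abstract. All values (focal length,
# IPD, target width) are hypothetical examples, not calibration data.

def monocular_distance(focal_px: float, real_width_m: float,
                       image_width_px: float) -> float:
    """Estimate object distance from the camera imaging scale
    (pinhole model): d = f * W_real / w_image."""
    return focal_px * real_width_m / image_width_px

def stereo_disparity_px(ipd_m: float, focal_px: float,
                        distance_m: float) -> float:
    """Left-right image separation so the fused virtual image
    appears at the measured distance."""
    return ipd_m * focal_px / distance_m

def display_positions(x_screen: float, y_screen: float,
                      disparity_px: float):
    """Offset the mapped screen position symmetrically to obtain
    the left and right image display positions."""
    half = disparity_px / 2.0
    return (x_screen + half, y_screen), (x_screen - half, y_screen)

# Example: a 0.2 m wide target imaged 100 px wide by a camera with a
# 500 px focal length lies 1 m away; a 64 mm IPD then needs 32 px of
# disparity.
d = monocular_distance(500.0, 0.2, 100.0)    # 1.0 m
disp = stereo_disparity_px(0.064, 500.0, d)  # 32.0 px
left, right = display_positions(960.0, 540.0, disp)
```

Moving the virtual image nearer or farther only changes `disp`, which is the sense in which depth of field is adjusted without touching the optics.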
Measuring quality-of-experience (QoE) for virtual reality (VR) streaming content
Measuring quality-of-experience (QoE) for virtual reality (VR) streaming content is disclosed. A network computing device receives a client-side VR stream capture and a client pose data set that are generated by a client computing device based on a VR content and one or more induced network impairments (e.g., latency, packet loss, and/or jitter, as non-limiting examples). Using the same VR content and the client pose data set, the network computing device generates a source VR stream capture that is not subjected to the one or more induced network impairments. The network computing device performs a frame-by-frame comparison of the client-side VR stream capture and the source VR stream capture. Based on the frame-by-frame comparison, the network computing device generates a QoE metric that indicates a degree of degradation of the client-side VR stream capture relative to the source VR stream capture.
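The frame-by-frame comparison step could be realized with a standard full-reference metric such as PSNR; the sketch below uses per-frame PSNR over grayscale frames as a stand-in for whatever comparison the patent actually specifies:

```python
import math

def frame_mse(frame_a, frame_b):
    """Mean squared error between two equally sized grayscale
    frames, given as lists of pixel rows."""
    total, n = 0.0, 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += (pa - pb) ** 2
            n += 1
    return total / n

def stream_qoe_psnr(source_frames, client_frames, max_val=255.0):
    """Frame-by-frame comparison of the client-side capture against
    the impairment-free source capture: one PSNR value per frame
    (higher = less degradation; identical frames give infinity)."""
    psnrs = []
    for src, cli in zip(source_frames, client_frames):
        mse = frame_mse(src, cli)
        psnrs.append(float("inf") if mse == 0
                     else 10.0 * math.log10(max_val ** 2 / mse))
    return psnrs
```

A scalar QoE metric can then be derived by averaging or taking the minimum of the per-frame values, depending on whether average quality or worst-case degradation matters more.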
CALIBRATION OF STEREOSCOPIC DISPLAY USING WAVEGUIDE COMBINER
Examples are disclosed that relate to calibration of a stereoscopic display system of an HMD via an optical calibration system comprising a waveguide combiner. One example provides an HMD device comprising a first image projector and a second image projector configured to project a stereoscopic image pair, and an optical calibration system. The optical calibration system comprises a first optical path indicative of an alignment of the first image projector, a second optical path indicative of an alignment of the second image projector, a waveguide combiner in which the first and second optical paths combine into a shared optical path, and one or more boresight sensors configured to detect calibration image light traveling along one or more of the first optical path or the second optical path.
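A boresight sensor observing calibration image light along each path would report where a known feature lands on the sensor; the pixel offset from the expected position converts into a small misalignment angle. The sketch below shows that conversion under a pinhole-model assumption; none of these functions or parameters come from the patent:

```python
import math

def angular_misalignment(detected_px, expected_px, focal_px):
    """Convert the pixel offset of a calibration feature on a
    boresight sensor into (yaw, pitch) misalignment in radians,
    assuming a pinhole model with the given focal length."""
    dx = detected_px[0] - expected_px[0]
    dy = detected_px[1] - expected_px[1]
    return math.atan2(dx, focal_px), math.atan2(dy, focal_px)

def stereo_correction(left_angles, right_angles):
    """Relative left/right misalignment, split into equal and
    opposite per-eye image-shift corrections."""
    rel_yaw = right_angles[0] - left_angles[0]
    rel_pitch = right_angles[1] - left_angles[1]
    return ((-rel_yaw / 2, -rel_pitch / 2),
            (rel_yaw / 2, rel_pitch / 2))
```

Because both paths exit through the shared waveguide combiner, comparing the two measurements isolates the relative projector misalignment, which is what matters for comfortable stereo fusion.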
STEREOSCOPIC DISPLAY DEVICE AND METHOD OF CALIBRATING SAME, AND STORAGE MEDIUM
Provided is a method of calibrating a stereoscopic display device. The device includes a motor and a display panel, and the display panel is driven by the motor to rotate to realize a stereoscopic display. The method includes acquiring a control strategy of the motor and display parameters of the display panel matching the control strategy, wherein the control strategy indicates that each time the motor runs for a preset period of time, the motor is restarted; controlling the motor to run according to the control strategy, so as to calibrate the motor by restarting; and driving the display panel to display according to the display parameters during the rotation of the motor.
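The control strategy amounts to a run/restart loop: run the motor for the preset period, restart it to clear accumulated drift, and repeat. A minimal sketch, with the motor abstracted behind two hypothetical callbacks:

```python
def run_with_periodic_restart(run_for, restart, preset_s, total_s):
    """Control-strategy sketch: each time the motor has run for
    preset_s seconds, restart it so accumulated timing drift is
    cleared. run_for(seconds) and restart() are caller-supplied
    motor-driver callbacks (hypothetical, for illustration).
    Returns the number of restarts performed."""
    elapsed = 0.0
    restarts = 0
    while elapsed < total_s:
        step = min(preset_s, total_s - elapsed)
        run_for(step)
        elapsed += step
        if elapsed < total_s:
            restart()
            restarts += 1
    return restarts
```

For example, a 35-second session with a 10-second preset period yields three restarts, with display parameters reapplied after each one.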
SYSTEM AND METHOD FOR THE DIAGNOSIS AND TREATMENT OF AMBLYOPIA USING A 3D DISPLAY
Methods, systems, and storage media for projecting viewer-specific 3D object perspectives from a single 3D display are disclosed. Implementations may: acquire face and eye region image data of a plurality of viewers within a field of view of at least one camera associated with a 3D-enabled digital display; analyze the eye region image data to determine at least one 3D eye position, at least one eye state, at least one gaze angle, and at least one point-of-regard for a viewer relative to at least one camera associated with the 3D-enabled digital display; and calculate a plurality of processed image projections for display by the single 3D display. Digital processing of the input image projections delivers a separate optical input to each of the user's eyes, and visual-acuity pre-processing of the image via a visual-field kernel enables the treatment of eye aberrations, including an amblyopic eye, without the need for any additional eyewear or head-mounted displays (HMDs).
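The per-eye pre-processing step can be pictured as filtering only the amblyopic eye's projection with a visual-field kernel while the fellow eye receives the unmodified image. The convolution below is a generic stand-in; the patent's actual kernel design is not specified here:

```python
def convolve2d(image, kernel):
    """Naive 'valid'-mode 2D convolution over lists of pixel rows
    (no padding, kernel not flipped: a correlation, which is
    sufficient for symmetric kernels)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            acc = 0.0
            for ki in range(kh):
                for kj in range(kw):
                    acc += image[i + ki][j + kj] * kernel[ki][kj]
            row.append(acc)
        out.append(row)
    return out

def per_eye_projections(image, weak_eye_kernel):
    """The fellow eye sees the image as-is; the amblyopic eye's
    projection is pre-processed with a visual-field kernel
    (illustrative placeholder for the patent's kernel)."""
    return image, convolve2d(image, weak_eye_kernel)
```

In practice the kernel could boost contrast or spatial frequencies the weak eye undersees, so that both eyes receive perceptually balanced input from the same autostereoscopic panel.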
Computer-readable non-transitory storage medium, web server, and calibration method for interpupillary distance
An object of the present invention is to obtain calibration data more easily in a VR (Virtual Reality) device. A user wearing a pair of VR goggles visually recognizes overlapped marker images displayed in the 360-degree VR space, and a stationary state is detected when the images for the right and left eyes are overlapped. When the stationary state satisfies a predetermined condition set in advance, the marker image at the center of the plurality of marker images displayed on the display in this state is set as the marker image for calibration setting. Calibration data of the interpupillary distance is acquired based on the marker image for calibration setting thus set, and the acquired calibration data is set as the calibration data used for subsequent reproduction of images.
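The "stationary state satisfies a predetermined condition" step boils down to checking that the head pose has stayed within a small tolerance for long enough. A minimal sketch, assuming pose is reduced to a single scalar (e.g. yaw) and the condition is a fixed sample count, neither of which is specified by the abstract:

```python
def detect_stationary(pose_samples, window, max_motion):
    """Return the index of the first run of `window` consecutive
    pose samples that all stay within max_motion of the run's
    first sample (the user holding still on an overlapped marker),
    or -1 if no such run exists."""
    for start in range(len(pose_samples) - window + 1):
        ref = pose_samples[start]
        if all(abs(p - ref) <= max_motion
               for p in pose_samples[start + 1:start + window]):
            return start
    return -1
```

Once a stationary run is found, the marker nearest the display center at that moment becomes the calibration marker, and the IPD value associated with it is stored for all subsequent image reproduction.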