Patent classifications
H04N2213/002
THREE-DIMENSIONAL AUTO-FOCUSING DISPLAY METHOD AND SYSTEM THEREOF
A 3D auto-focusing display method comprises: executing an eye-tracking step on a 3D image to obtain the focal-point coordinates (x1, y1) of a viewer of the image; mapping the focal-point coordinates (x1, y1) to a coordinate location of a display to obtain display coordinates (x2, y2), which relate the coordinate location of the display to a depth map of the 3D image; determining the region where the image is located, using the display coordinates (x2, y2) as an input parameter together with the depth map of the image; determining whether the image at that region is a 3D stereoscopic image; executing a depth-map step that revises the 3D image, based on the image and a plurality of depth data of the region, so that the display coordinates (x2, y2) are rendered as the focused image; and outputting the revised focused image to the display.
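The gaze-to-display mapping and depth-region lookup described above can be sketched roughly as follows. This is a minimal illustration, not the patented method; the function names, the linear coordinate scaling, and the fixed sampling radius are all assumptions.

```python
def map_gaze_to_display(gaze_xy, eye_res, disp_res):
    """Map eye-tracker coordinates (x1, y1) to display coordinates (x2, y2),
    assuming a simple linear scaling between the two coordinate spaces."""
    x1, y1 = gaze_xy
    sx = disp_res[0] / eye_res[0]
    sy = disp_res[1] / eye_res[1]
    return (round(x1 * sx), round(y1 * sy))

def region_depth(depth_map, xy, radius=1):
    """Average the depth-map samples around (x2, y2) to characterize
    the region the viewer is focusing on."""
    x2, y2 = xy
    h, w = len(depth_map), len(depth_map[0])
    samples = [depth_map[j][i]
               for j in range(max(0, y2 - radius), min(h, y2 + radius + 1))
               for i in range(max(0, x2 - radius), min(w, x2 + radius + 1))]
    return sum(samples) / len(samples)
```

The averaged depth of the gazed region would then drive the refocusing of the 3D image.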
VEHICLE TERRAIN CAPTURE SYSTEM AND DISPLAY OF 3D DIGITAL IMAGE AND 3D SEQUENCE
A system for simulating a 3D image of a terrain includes a vehicle having a geocoding detector to identify coordinate reference data as the vehicle traverses the terrain; a memory device for storing instructions; and a capture module in communication with a processor and connected to the vehicle. The capture module has a 2D RGB digital camera to capture a series of 2D digital images of the terrain and a digital elevation capture device to capture a series of digital elevation scans from which a digital elevation model of the terrain is generated. Using the coordinate reference data, the series of 2D digital images is overlaid on the digital elevation model of the terrain while the coordinate reference data is maintained, and a key subject point is identified in the series of 2D digital images. A display is configured to display the resulting multidimensional digital image or sequence.
EXERCISE EQUIPMENT
An objective of the present invention is to provide exercise equipment that gives a user an immersive, "being there" experience while reducing unease or discomfort on the part of the user. The exercise equipment comprises: an exercise device for allowing the user to carry out a prescribed exercise; a measurement device which measures the viewpoint position of the user in a prescribed reference coordinate system while the user exercises on the exercise device; and a video device which generates and displays, on a fixed screen, a display image of an object in a virtual space of the reference coordinate system according to the viewpoint position, the display image simulating how the object would appear when viewed from the viewpoint position through the screen.
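Rendering an object "as it would appear when viewed from the viewpoint position through the screen" amounts to projecting each virtual point onto the screen plane along the ray from the eye. A minimal sketch, assuming the screen lies in the plane z = 0 with the viewer at z > 0 and virtual objects behind the screen at z < 0 (all names and conventions are illustrative):

```python
def project_through_screen(eye, point):
    """Intersect the ray from the viewer's eye through a virtual 3D point
    with the screen plane z = 0, returning the 2D screen location.
    Assumes eye z > 0 and point z < 0 (object behind the screen)."""
    ex, ey, ez = eye
    px, py, pz = point
    t = ez / (ez - pz)  # ray parameter where the eye-to-point line crosses z = 0
    return (ex + t * (px - ex), ey + t * (py - ey))
```

As the measured viewpoint moves, the projected screen positions shift accordingly, producing the motion-parallax effect the abstract describes.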
IMAGE-PROCESSING METHOD, CONTROL DEVICE, AND ENDOSCOPE SYSTEM
In an image-processing method, a first image and a second image having parallax with each other are acquired. In each of the two images, a first region of a predetermined shape, which includes the image center, is set, and a second region surrounding the outer edge of the first region is set. Image processing is then performed on a processing region that includes the second region in at least one of the two images, so as to change the amount of parallax of the processing region.
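The region setup and the parallax change could be sketched as below: a central rectangle serves as the first region, the surrounding border as the second region, and parallax in the second region is changed by shifting its pixels horizontally in one image of the pair. This is an illustrative simplification, not the patented processing; the region shapes, labels, and shift operation are assumptions.

```python
def make_regions(w, h, frac=0.5):
    """Label pixels: 1 = first (central) region of predetermined shape,
    2 = second region surrounding the first region's outer edge."""
    x0, x1 = int(w * (1 - frac) / 2), int(w * (1 + frac) / 2)
    y0, y1 = int(h * (1 - frac) / 2), int(h * (1 + frac) / 2)
    return [[1 if x0 <= x < x1 and y0 <= y < y1 else 2
             for x in range(w)] for y in range(h)]

def shift_region(img, labels, label, dx):
    """Horizontally shift only the pixels carrying the given label,
    changing the parallax of that region between the image pair."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if labels[y][x] == label:
                sx = x - dx
                out[y][x] = img[y][sx] if 0 <= sx < w else 0
    return out
```

Applying the shift only to the peripheral (second) region leaves the central stereo impression intact while reducing parallax strain at the image edges.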
Wearable 3D augmented reality display
A wearable 3D augmented reality display and method, which may include 3D integral imaging optics.
METHODS AND SYSTEMS FOR MULTIPLE ACCESS TO A SINGLE HARDWARE DATA STREAM
A target is output at an ideal position in 3D space. A viewer indicates the apparent position of the target, and the indication is sensed. An offset between the ideal and apparent positions is determined, and an adjustment is computed from the offset such that, with the adjustment applied, the apparent position of the target matches the ideal position as rendered without the adjustment. The adjustment is applied to the target and/or a second entity, such that the entities appear to the viewer in the ideal position. The indication may be monocular, with a separate indication for each eye, or binocular, with a single viewer indication for both eyes. The indication also may serve as communication, such as a PIN input, so that calibration is transparent to the viewer. The method may be continuous, intermittent, or otherwise ongoing over time.
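The offset-and-adjustment step reduces to simple per-axis arithmetic. A minimal sketch (function names are illustrative, and a purely additive correction is assumed):

```python
def calibration_offset(ideal, indicated):
    """Per-axis offset between where the target was rendered (ideal)
    and where the viewer indicated seeing it (apparent)."""
    return tuple(a - i for a, i in zip(indicated, ideal))

def apply_adjustment(position, offset):
    """Shift the render position opposite the offset so that the
    apparent position coincides with the ideal one."""
    return tuple(p - o for p, o in zip(position, offset))
```

For example, if a target rendered at (10, 10) is seen at (12, 9), the offset is (2, -1), so rendering at (8, 11) makes the target appear at (10, 10).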
Virtual Reality System
Methods and systems for a virtual and/or augmented reality device may include a light emitting device that includes one or more light emitting elements configured to generate collimated light beams. A scanning mirror may include one or more microelectromechanical systems (MEMS) mirrors. Each MEMS mirror of the scanning mirror may be configured to dynamically tilt in at least one of two orthogonal degrees of freedom to raster scan the light beams over multiple angles corresponding to a field of view of an image. A curved mirror may include curves in two orthogonal directions configured to reflect the collimated light beams from the scanning mirror into a subject's eye in proximity to the curved mirror to form a virtual image. The curved mirror may allow external light to pass through, thus allowing the virtual image to be combined with a real image to provide an augmented reality.
DEPTH BASED FOVEATED RENDERING FOR DISPLAY SYSTEMS
Methods and systems for depth-based foveated rendering in a display system are disclosed. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different amounts of wavefront divergence. Some embodiments include monitoring eye orientations of a user of the display system based on detected sensor information. A fixation point is determined from the eye orientations, the fixation point representing a three-dimensional location with respect to a field of view. Location information of the virtual objects to be presented is obtained, the location information indicating three-dimensional positions of the virtual objects. The resolution of at least one virtual object is adjusted based on its proximity to the fixation point. The virtual objects are presented to the user by the display system, with the at least one virtual object rendered at the adjusted resolution.
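A resolution schedule driven by 3D distance to the fixation point might look like the sketch below. The linear falloff, the full-resolution radius, and the minimum scale floor are all assumptions for illustration, not values from the disclosure.

```python
import math

def render_scale(obj_pos, fixation, full_res_radius=0.1, min_scale=0.25):
    """Resolution scale for a virtual object: full resolution near the
    3D fixation point, falling off with distance down to a floor."""
    d = math.dist(obj_pos, fixation)
    if d <= full_res_radius:
        return 1.0
    return max(min_scale, 1.0 / (1.0 + (d - full_res_radius)))
```

Because the fixation point is three-dimensional, an object at the fixated screen direction but at a different depth plane also gets reduced resolution, which is the key difference from purely 2D foveation.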
Dynamic parallax correction for visual sensor fusion
An augmented reality (AR) vision system is disclosed. A display is configured to present a surrounding environment to the eyes of a user of the AR vision system. A depth tracker is configured to produce a measurement of the focal depth of a focus point in the surrounding environment. Two or more image sensors receive illumination from the focus point and each generate a respective image. A controller receives the measurement of the focal depth, generates an interpolated look-up-table (LUT) function by interpolating between two or more precalculated LUTs, applies the interpolated LUT function to the images to correct parallax error and distortion error at the measured focal depth, generates a single image of the surrounding environment, and displays the single image to the user.
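The LUT interpolation step can be illustrated with a simple linear blend between two precalculated tables, weighted by where the measured focal depth falls between their calibration depths. The flat per-entry representation and the clamping behavior are assumptions for this sketch.

```python
def interpolate_lut(lut_a, depth_a, lut_b, depth_b, depth):
    """Linearly blend two precalculated correction LUTs according to the
    measured focal depth's position between their calibration depths.
    Depths outside [depth_a, depth_b] clamp to the nearer LUT."""
    t = (depth - depth_a) / (depth_b - depth_a)
    t = min(1.0, max(0.0, t))
    return [a + t * (b - a) for a, b in zip(lut_a, lut_b)]
```

Precomputing LUTs at a few calibration depths and interpolating at runtime avoids recomputing the full parallax/distortion correction for every measured depth.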
Viewer-adjusted stereoscopic image display
A stereoscopic video playback device is provided that processes original stereoscopic image pairs, taken using parallel-axis cameras and intended for viewing under original viewing conditions, by scaling and cropping them to provide stereoscopic video suited to new viewing conditions on a single screen.
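One reason such rescaling is needed: perceived depth depends on the physical on-screen disparity, which changes when the same pixels are shown on a screen of different size or resolution. A minimal sketch of keeping physical disparity constant across viewing conditions (the function name and the purely linear model are assumptions):

```python
def rescale_disparity(disparity_px, orig_screen_w, orig_img_w,
                      new_screen_w, new_img_w):
    """Convert a pixel disparity under the original viewing conditions
    (screen width in metres, image width in pixels) into the pixel
    disparity that reproduces the same physical on-screen disparity
    under the new viewing conditions."""
    physical = disparity_px * (orig_screen_w / orig_img_w)  # metres on screen
    return physical * (new_img_w / new_screen_w)            # back to pixels
```

For example, a 10-pixel disparity on a 1 m wide, 1000-pixel screen corresponds to 5 pixels on a 2 m wide, 1000-pixel screen; scaling and cropping the image pair achieves the equivalent effect globally.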