Patent classifications
H04N13/04
Systems and methods of direct pointing detection for interaction with a digital device
Systems, methods, and non-transitory computer-readable media are disclosed. For example, a touch-free gesture recognition system is disclosed that includes at least one processor. The processor may be configured to enable presentation of first display information to a user to prompt a first touch-free gesture at at least a first location on a display. The processor may also be configured to receive first gesture information from at least one image sensor corresponding to a first gesturing location on the display correlated to a first touch-free gesture by the user, wherein the first gesturing location differs from a location of the first display information at least in part as a result of one eye of the user being dominant over another eye of the user. In addition, the processor may be configured to determine a first offset associated with the location of the first display information and the first gesturing location. Further, the processor may be configured to enable presentation of second information to prompt the user to make a subsequent touch-free gesture at at least a second location on the display. Additionally, the processor may be configured to receive subsequent gesture information from at least one image sensor corresponding to a subsequent touch-free gesture by the user. Also, the processor may be configured to use the first offset to determine a location on the display affected by the subsequent touch-free gesture.
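The calibration flow in this abstract can be sketched as a small Python example. All names here are illustrative, not from the patent: a first prompt yields a gesturing location that misses the displayed target because of eye dominance, the offset between the two is stored, and later gestures are corrected by that offset.

```python
# Illustrative sketch of the offset-calibration idea (names are
# hypothetical, not the patent's implementation).

def calibrate_offset(target_xy, gestured_xy):
    """First offset: difference between where the prompt was displayed
    and where the user's touch-free gesture actually landed."""
    return (gestured_xy[0] - target_xy[0], gestured_xy[1] - target_xy[1])

def resolve_gesture(raw_xy, offset):
    """Apply the stored offset to a subsequent gesture to recover the
    display location the user intended to affect."""
    return (raw_xy[0] - offset[0], raw_xy[1] - offset[1])

# Calibration: icon shown at (400, 300); the gesture lands at (412, 296)
# because one eye dominates the user's pointing direction.
offset = calibrate_offset((400, 300), (412, 296))   # (12, -4)

# A later gesture detected at (612, 296) is corrected to (600, 300).
intended = resolve_gesture((612, 296), offset)
```

The key design point the abstract describes is that calibration happens once, against a known prompt location, and the resulting offset is reused for all subsequent gestures.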
Method and apparatus for photographing and projecting moving images in three dimensions
A digital cinematography and projection process provides 3D stereoscopic imagery that is not adversely affected by the standard frame rate of 24 frames per second, the worldwide convention in the motion picture industry. A method for photographing and projecting moving images in three dimensions includes recording a moving image with a first and a second camera simultaneously and interleaving a plurality of frames recorded by the first camera with a plurality of frames recorded by the second camera. The step of interleaving includes retaining odd numbered frames recorded by the first camera and deleting the even numbered frames, retaining even numbered frames recorded by the second camera and deleting the odd numbered frames, and creating an image sequence by alternating the retained images from the first and second camera.
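The interleaving step above is straightforward to express in code. This is a minimal sketch (frame names and the function are illustrative), keeping odd-numbered frames from the first camera and even-numbered frames from the second, using the 1-based frame numbering the abstract implies:

```python
def interleave(frames_a, frames_b):
    """Alternate odd-numbered frames from camera A with even-numbered
    frames from camera B (1-based numbering) into one image sequence."""
    out = []
    for i, (a, b) in enumerate(zip(frames_a, frames_b), start=1):
        out.append(a if i % 2 == 1 else b)   # odd -> A, even -> B
    return out

left  = ["L1", "L2", "L3", "L4"]   # frames from the first camera
right = ["R1", "R2", "R3", "R4"]   # frames from the second camera
sequence = interleave(left, right)  # ['L1', 'R2', 'L3', 'R4']
```

The result is a single 24 fps stream in which left- and right-eye frames alternate, which is what lets the process stay within the standard projection frame rate.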
Display system
A display system is configured to display a stereoscopic three dimensional relief effect in aerial space from a two dimensional image source. The display system has a reflector having a generalized cylindrical concave surface. The two dimensional image source is arranged between a first side and a second side of the reflector, facing the generalized cylindrical concave surface. The reflector reflects light from the two dimensional image source outward as an aerial image. The aerial image exhibits the stereoscopic three dimensional relief effect. A support structure is operably connected to the reflector and the two dimensional image source. The reflector and the two dimensional image source are adapted to be individually rotated and tilted relative to one another while their positions are physically secured.
Overlay Display
Some embodiments provide a system which includes a layered transparent surface which includes a UV absorption layer configured to be located between a user environment and an external environment and a phosphor layer configured to be located between the user environment and the UV absorption layer. An image projection system can project an ultraviolet image upon the phosphor layer, which can generate a visual image based on a fluorescent reaction of the phosphor layer to the ultraviolet image which can be perceived by a user in the user environment. The image projection system can include a plurality of image projection systems which can project separate images on separate projection fields, which can result in the phosphor layer generating an image which can be perceived by a user, in the user environment, as a stereoscopic image.
METHOD, APPARATUS, AND DEVICE FOR REALIZING VIRTUAL STEREOSCOPIC SCENE
A method and a system for realizing a virtual stereoscopic scene based on mapping are provided. The method comprises: acquiring a distance E_R between an observer's two eyes, a maximum convex displaying distance N_R of a real screen, a distance Z_R from the observer's eyes to the real screen, and a maximum concave displaying distance F_R of the real screen; calculating a parallax d_N_R at N_R and a parallax d_F_R at F_R; acquiring a distance N_V between a virtual single camera and a virtual near clipping plane, and a distance F_V between the virtual single camera and a virtual far clipping plane; calculating a distance E_V between a left virtual camera and a right virtual camera, and asymmetric perspective projection parameters of the left and right virtual cameras; performing a perspective projection transformation on scene content of the virtual single camera; and displaying the virtual stereoscopic scene.
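The real-screen parallax step can be illustrated with a standard stereoscopic-geometry result; the patent's exact formulas may differ, so this is a hedged sketch, not the claimed method. By similar triangles, a point popping out by N_R in front of a screen viewed from distance Z_R produces crossed (negative) parallax, and a point sunk F_R behind the screen produces uncrossed (positive) parallax:

```python
# Common similar-triangles formulation of screen parallax (illustrative;
# not necessarily the formulation used in the patent).

def screen_parallax(E_R, Z_R, N_R, F_R):
    """Parallax on the real screen for the maximum pop-out (convex)
    and maximum behind-screen (concave) displaying distances."""
    d_N_R = -E_R * N_R / (Z_R - N_R)   # convex: crossed parallax, negative
    d_F_R =  E_R * F_R / (Z_R + F_R)   # concave: uncrossed parallax, positive
    return d_N_R, d_F_R

# Example: 65 mm eye separation, 600 mm viewing distance,
# 100 mm maximum pop-out, 200 mm maximum depth behind the screen.
d_near, d_far = screen_parallax(65.0, 600.0, 100.0, 200.0)
# d_near = -13.0 mm, d_far = 16.25 mm
```

These two parallax values bound the comfortable depth budget; the method then solves for the virtual camera separation E_V and asymmetric frustum parameters that reproduce the same parallax range for content between the virtual near and far clipping planes.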
INTEGRATING POINT SOURCE FOR TEXTURE PROJECTING BULB
A texture projecting light bulb includes an extended light source located within an integrator. The integrator includes at least one aperture configured to allow light to travel out of the interior of the integrator. In various embodiments, the interior of the integrator may be a diffusely reflective surface and the integrator may be configured to produce a uniform light distribution at the aperture to approximate a point source. The integrator may be surrounded by a light bulb enclosure. In various embodiments, the light bulb enclosure may include transparent and opaque regions configured to project a structured pattern of visible and/or infrared light.
PASSIVE OPTICAL AND INERTIAL TRACKING IN SLIM FORM-FACTOR
Apparatus and systems directed to a wireless hand-held inertial controller with passive optical and inertial tracking in a slim form-factor, for use with a head mounted virtual or augmented reality display device (HMD), that operates with six degrees of freedom by fusing (i) data related to the position of the controller derived from a forward-facing optical sensor located in the HMD with (ii) data relating to the orientation of the controller derived from an inertial measurement unit located in the controller.
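The fusion described above pairs two independent estimates: a 3-D position from the HMD's forward-facing optical sensor and an orientation from the controller's inertial measurement unit. A minimal sketch of assembling the six-degree-of-freedom pose (all names illustrative, not from the patent):

```python
# Illustrative sketch: a 6-DoF pose assembled from an optical position
# estimate and an inertial orientation estimate.

from dataclasses import dataclass

@dataclass
class Pose6DoF:
    position: tuple      # (x, y, z) in the HMD's tracking frame, metres
    orientation: tuple   # unit quaternion (w, x, y, z) from the IMU

def fuse(optical_position, imu_quaternion):
    """Combine the camera-derived position with the IMU-derived
    orientation into a single six-degree-of-freedom pose."""
    return Pose6DoF(position=optical_position, orientation=imu_quaternion)

pose = fuse((0.1, -0.2, 0.5), (1.0, 0.0, 0.0, 0.0))
```

In practice each source would be filtered and time-aligned before fusion; the point here is only that neither sensor alone provides all six degrees of freedom, so the two streams are complementary.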
MODULAR EXTENSION OF INERTIAL CONTROLLER FOR SIX DOF MIXED REALITY INPUT
A modular holding fixture for selectively coupling to a wireless hand-held inertial controller to provide passive optical and inertial tracking in a slim form-factor for use with a head mounted display that operates with six degrees of freedom by fusing (i) data related to the position of the controller derived from a forward-facing depth camera located in the head mounted display with (ii) data relating to the orientation of the controller derived from an inertial measurement unit located in the controller.
HEAD MOUNTED DISPLAY AND OPERATING METHOD THEREOF
A head mounted display includes: a display configured to display an image; a shutter configured to block light incident on an eye; a controller configured to control the display to display a left eye image and a right eye image using half or more of a region of the display in a horizontal direction and to control the shutter based on the image displayed on the display; and a lens configured to focus light output from the display such that the left eye image and the right eye image displayed on the display are viewed by a left eye and a right eye respectively.
Multi-View Interactive Digital Media Representation Lock Screen
Various embodiments describe systems and processes for capturing and generating multi-view interactive digital media representations for display on a user device. In one aspect, a mobile device is provided which comprises a display, one or more processors, memory, and one or more programs stored in the memory. The one or more programs comprise instructions for locking the mobile device, and providing a lock screen on the display in a lock mode upon receiving user input for accessing the mobile device. The lock screen may display a multi-view interactive digital media representation (MIDMR) which provides an interactive three-dimensional representation of an object that is responsive to user interaction with the mobile device. The MIDMR may respond to spatial and movement sensors in the mobile device. The mobile device may be unlocked for use upon receiving user identification input, which may include maneuvering the MIDMR in a predetermined pattern.