Patent classifications
G02B2027/014
HEAD MOUNTED DISPLAY, DISPLAY SYSTEM, CONTROL METHOD OF HEAD MOUNTED DISPLAY, AND COMPUTER PROGRAM
A transmission-type head mounted display includes a detection unit that detects a first target from outside scenery, an image display unit capable of transmitting the outside scenery and displaying an image, and a display image setting unit that causes the image display unit to display a first moving image, which is a moving image associated with the detected first target.
METHOD FOR PROVIDING USER INTERFACE THROUGH HEAD MOUNTED DISPLAY USING EYE RECOGNITION AND BIO-SIGNAL, APPARATUS USING SAME, AND COMPUTER READABLE RECORDING MEDIUM
A method for providing a user interface through a head mounted display using eye recognition and bio-signals comprises the steps of: (a) when the user gazes at a particular location on an output screen, moving a cursor to that location by referencing eye information obtained through a camera module from a first eyeball of the user; and (b) when an entity exists at that location, providing detailed selection items corresponding to the entity by referencing movement information obtained through a bio-signal acquisition module from the eyelid of the user's second eyeball.
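The two-step interaction above (gaze from one eye drives the cursor, eyelid movement of the other eye confirms selection) can be sketched as follows. This is an illustrative reading of the abstract, not the patented implementation; all names and thresholds (`map_gaze_to_screen`, `EYELID_CLOSED_THRESHOLD`, the entity table) are assumptions.

```python
# Hypothetical sketch: (a) map normalized gaze coordinates from the first
# eye to a cursor position, then (b) when the second eye's lid closes,
# return the selection entity (if any) located at the gazed position.

SCREEN_W, SCREEN_H = 1920, 1080
EYELID_CLOSED_THRESHOLD = 0.2  # assumed normalized eyelid aperture (0 = closed)

def map_gaze_to_screen(gaze_x, gaze_y):
    """Map normalized gaze coordinates in [0, 1] to pixel coordinates."""
    return (int(gaze_x * SCREEN_W), int(gaze_y * SCREEN_H))

def select_entity(cursor, entities, eyelid_aperture):
    """Return the entity under the cursor when the second eye's lid closes."""
    if eyelid_aperture > EYELID_CLOSED_THRESHOLD:
        return None  # eye open: no selection gesture detected
    for name, (x, y, w, h) in entities.items():
        if x <= cursor[0] < x + w and y <= cursor[1] < y + h:
            return name  # an entity exists at the gazed position
    return None
```

In this sketch the gaze continuously positions the cursor, while the bio-signal channel only gates selection, matching the split of roles between the two eyes in the abstract.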
IMAGE PROJECTION MEDIUM AND DISPLAY PROJECTION SYSTEM USING SAME
A heads-up display system includes a display medium in combination with a dock or support configured to receive a portable electronic device. The display of the portable electronic device is projected by the display medium, such as in a forward direction. The vehicle dashboard or a surrounding area can have an integral dock, which can be at the top of a gauge visor, at the bottom of a gauge visor, or displaced below the vehicle gauges. In each case, an aperture may be provided to create a path from the display of the electronic device to the display medium. In some embodiments, the heads-up display system may include a dedicated display that mirrors the display of the portable electronic device via a wired or wireless connection.
ELECTRONIC APPARATUS AND METHOD FOR DISPLAYING VIRTUAL ENVIRONMENT IMAGE
An electronic apparatus includes a controller, a display device coupled to the controller, and a detector coupled to the controller. The display device displays virtual environment images. The detector detects a spatial parameter of the local space in which the electronic apparatus is located. The controller receives the spatial parameter and controls the display device based on it.
Augmenting a Moveable Entity with a Hologram
- Daniel Joseph McCulloch
- Nicholas Gervase Fajt
- Adam G. Poulos
- Christopher Douglas Edmonds
- Lev Cherkashin
- Brent Charles Allen
- Constantin Dulu
- Muhammad Jabir Kapasi
- Michael Grabner
- Michael Edward Samples
- Cecilia Bong
- Miguel Angel Susffalich
- Varun Ramesh Mani
- Anthony James Ambrus
- Arthur C. Tomlin
- James Gerard Dack
- Jeffrey Alan Kohler
- Eric S. Rehmeyer
- Edward D. Parker
In embodiments of augmenting a moveable entity with a hologram, an alternate reality device includes a tracking system that can recognize an entity in an environment and track movement of the entity in the environment. The alternate reality device can also include a detection algorithm implemented to identify the entity recognized by the tracking system based on identifiable characteristics of the entity. A hologram positioning application is implemented to receive motion data from the tracking system, receive entity characteristic data from the detection algorithm, and determine a position and an orientation of the entity in the environment based on the motion data and the entity characteristic data. The hologram positioning application can then generate a hologram that appears associated with the entity as the entity moves in the environment.
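The core step the abstract describes, combining tracked motion data with entity characteristics to keep a hologram attached to a moving entity, can be sketched minimally as a per-frame pose update. The dataclass fields and the fixed attachment offset are illustrative assumptions; a real system would fuse full 3D orientation and confidence data.

```python
# Minimal sketch: each frame, take the tracked entity pose (from the
# tracking system) and place the hologram at a fixed offset in the
# entity's frame, so it appears associated with the entity as it moves.

from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw: float  # orientation about the vertical axis, in degrees

def position_hologram(entity_pose: Pose, attach_offset=(0.0, 0.3, 0.0)) -> Pose:
    """Place the hologram at a fixed offset above the tracked entity,
    copying the entity's orientation so the hologram appears attached."""
    return Pose(entity_pose.x + attach_offset[0],
                entity_pose.y + attach_offset[1],
                entity_pose.z + attach_offset[2],
                entity_pose.yaw)
```

Calling `position_hologram` with each new tracked pose yields a hologram that follows the entity through the environment, which is the behavior the abstract attributes to the hologram positioning application.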
ADAPTIVE SMOOTHING BASED ON USER FOCUS ON A TARGET OBJECT
Techniques described herein dynamically adapt an amount of smoothing that is applied to signals of a device (e.g., positions and/or orientations of an input mechanism, positions and/or orientations of an output mechanism) based on a determined distance between an object and the device, or based on a determined distance between the object and another device (e.g., a head-mounted device). The object can comprise one of a virtual object presented on a display of the head-mounted device or a real-world object within a view of the user. The object can be considered a “target” object based on a determination that a user is focusing on, or targeting, the object. For example, the head-mounted device or other devices can sense data associated with an eye gaze of a user and can determine, based on the sensed data, that the user is looking at the target object.
Display Screen Front Panel of HMD for Viewing by Users Viewing the HMD Player
A method for providing an image of an HMD user to a non-HMD user includes receiving a first image of a user, including the user's facial features, captured by an external camera when the user is not wearing a head mounted display (HMD). A second image, capturing a portion of the user's facial features while the user is wearing the HMD, is received. Image overlay data is generated by mapping contours of facial features captured in the second image to contours of corresponding facial features captured in the first image. The image overlay data is forwarded to the HMD for rendering on a second display screen mounted on the front face of the HMD.
Using HMD Camera Touch Button to Render Images of a User Captured During Game Play
Methods and systems for presenting an image of a user interacting with a video game include providing images of a virtual reality (VR) scene of the video game for rendering on a display screen of a head mounted display (HMD). The images of the VR scene are generated as part of game play of the video game. An input received at a user interface on the HMD during game play is used to initiate a signal to pause the video game and to generate an activation signal that activates an image capturing device. The activation signal causes the image capturing device to capture an image of the user interacting in a physical space. The image of the user captured during game play is associated with the portion of the video game that corresponds to the time when the image was captured. The association causes the image of the user to be transmitted to the HMD for rendering on its display screen.
Dynamic Entering and Leaving of Virtual-Reality Environments Navigated by Different HMD Users
Systems and methods for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes are provided. A computer-implemented method includes providing a first perspective of a VR scene to a first HMD of a first user and receiving an indication that a second user is requesting to join the VR scene provided to the first HMD. The method further includes obtaining real-world position and orientation data of the second HMD relative to the first HMD and then providing, based on said data, a second perspective of the VR scene. The method also provides that the first and second perspectives are each controlled by respective position and orientation changes while viewing the VR scene.
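The step of deriving the second perspective from real-world position and orientation data relative to the first HMD amounts to composing poses: the second user's pose in the VR scene is the first user's scene pose combined with the measured relative pose. The planar (x, y, heading) math below is a simplification for illustration; real systems use full 3D transforms.

```python
# Simplified 2D sketch: apply a relative (dx, dy, dtheta) pose, measured
# in the first HMD's frame, to the first HMD's pose in the VR scene to
# obtain the second user's perspective in that scene.

import math

def compose_pose(base, rel):
    """Compose a base pose (x, y, theta) with a relative pose expressed
    in the base pose's coordinate frame."""
    bx, by, bt = base
    dx, dy, dt = rel
    # Rotate the relative offset into the base frame, then translate.
    wx = bx + dx * math.cos(bt) - dy * math.sin(bt)
    wy = by + dx * math.sin(bt) + dy * math.cos(bt)
    return (wx, wy, bt + dt)
```

Because each perspective is then driven by that user's own position and orientation changes, the two users see the shared VR scene from viewpoints consistent with their real-world arrangement.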
NEAR-TO-EYE DISPLAY DEVICE WITH SPATIAL LIGHT MODULATOR AND PUPIL TRACKER
A near-to-eye display device includes a spatial light modulator, a rotatable reflective optical element and a pupil-tracking device. The pupil-tracking device tracks the eye pupil position of the user. Based on the data provided by the pupil-tracking device, the reflective optical element is rotated such that the light modulated by the spatial light modulator is directed towards the user's eye pupil.
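The steering step above can be sketched with the law of reflection: rotating a mirror by an angle delta rotates the reflected beam by 2*delta, so the mirror turns through half the angular pupil offset reported by the pupil tracker. The flat geometry and parameter names here are assumptions for illustration, not the device's actual optics.

```python
# Illustrative sketch: compute the mirror rotation (radians) that steers
# the modulated beam toward a pupil displaced laterally from the optical
# axis, assuming a flat mirror at a fixed eye-relief distance.

import math

def mirror_angle(pupil_offset_mm, eye_relief_mm=25.0):
    """Half of the angular pupil offset as seen from the mirror, because
    a mirror rotation of delta deflects the reflected beam by 2*delta."""
    beam_angle = math.atan2(pupil_offset_mm, eye_relief_mm)
    return beam_angle / 2.0
```

In a closed loop, the pupil tracker would feed `pupil_offset_mm` continuously and the rotatable reflective element would servo to `mirror_angle`, keeping the modulated light centered on the moving pupil.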