Patent classifications
G06F3/0308
Object and environment tracking via shared sensor
One disclosed example provides a head-mounted device configured to control a plurality of light sources of a handheld object and acquire image data comprising a sequence of environmental tracking exposures, in which the plurality of light sources are controlled to have a lower integrated intensity, and handheld object tracking exposures, in which the plurality of light sources are controlled to have a higher integrated intensity. The device is further configured to detect, via an environmental tracking exposure, one or more features of the surrounding environment, determine a pose of the head-mounted device based upon the one or more features detected, detect, via a handheld object tracking exposure, the plurality of light sources of the handheld object, determine a pose of the handheld object relative to the head-mounted device based upon the plurality of light sources detected, and output the pose of the handheld object.
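The alternating-exposure scheme described above can be pictured with a minimal sketch. Everything here is assumed for illustration only (the Controller and Camera classes, the intensity and exposure values, and the detect_* placeholders are not from the patent); it only shows one shared camera time-slicing between scene-feature frames with dimmed LEDs and controller frames with bright LEDs.

```python
# A minimal, self-contained sketch (not the patent's implementation) of
# interleaving environment-tracking and controller-tracking exposures on one
# shared camera. All classes, values and detections are placeholder stand-ins.

class Controller:
    def set_led_intensity(self, level):
        self.level = level  # relative integrated intensity, 0.0..1.0

class Camera:
    def capture(self, exposure_ms):
        # Stand-in for reading a frame from the shared image sensor.
        return {"exposure_ms": exposure_ms}

def detect_environment_features(frame):
    return ["corner_a", "corner_b"]            # placeholder feature list

def detect_controller_leds(frame):
    return [(120, 80), (130, 82), (140, 79)]   # placeholder LED pixel positions

def tracking_loop(camera, controller, n_frames=4):
    poses = []
    for i in range(n_frames):
        if i % 2 == 0:
            controller.set_led_intensity(0.1)        # dim LEDs: scene dominates
            frame = camera.capture(exposure_ms=8.0)  # longer exposure for features
            feats = detect_environment_features(frame)
            poses.append(("hmd_pose_from", feats))
        else:
            controller.set_led_intensity(1.0)        # bright LEDs
            frame = camera.capture(exposure_ms=0.5)  # short exposure isolates LEDs
            leds = detect_controller_leds(frame)
            poses.append(("controller_pose_from", leds))
    return poses

print(tracking_loop(Camera(), Controller()))
```

In practice each branch would feed a pose estimator (scene features for the head-mounted device pose, LED detections for the handheld object pose relative to it); the placeholders above simply return canned detections.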
SYSTEMS AND METHODS FOR PROVIDING A MODULAR INTERACTIVE PLATFORM WITH SATELLITE ACCESSORIES
An interactive visual display platform including low-power wearable satellite accessories for identifying and interfacing with users. A visual display sensing surface generates graphical images according to objects detected proximate to the surface. Three-dimensional gaming is enabled upon a two-dimensional surface by the creation of environments according to a user's physical movements.
Mouse device
A mouse device includes a casing, a light-transmissible element and at least one light-emitting element. The casing has an opening. The light-transmissible element is disposed within the casing. A portion of the light-transmissible element is exposed outside the opening of the casing. The light-transmissible element includes plural scattering patterns. The at least one light-emitting element is disposed within the casing and located beside the light-transmissible element. The at least one light-emitting element emits a light beam into an internal portion of the light-transmissible element. The light beam is scattered by the plural scattering patterns, and the scattered light beam is transmitted through the opening of the casing and output.
Active marker device for performance capture
A performance capture system is provided to detect one or more active marker units in a live action scene. Active marker units emanate at least one wavelength of light that is captured by the performance capture system and used to detect the active markers in the scene. The system detects the presence of the light as a light patch in captured frames and determines whether the light patch represents light from an active marker unit. In some implementations, various active markers in a scene may emanate different wavelengths of light. For example, wavelengths of light from multi-emitting active marker units may be changed due to various conditions in the scene.
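One plausible way (purely an assumption; the abstract does not specify a method) to decide whether a detected light patch comes from an active marker is to compare its measured wavelength against the wavelengths the marker units are known to emit. The wavelength values and the classify_patch helper below are hypothetical.

```python
# Hypothetical sketch: classify a detected light patch by comparing its
# wavelength against the (assumed) emission wavelengths of active markers.

ACTIVE_MARKER_WAVELENGTHS_NM = {850, 905, 940}   # placeholder values

def classify_patch(patch_wavelength_nm, tolerance_nm=10):
    for target in ACTIVE_MARKER_WAVELENGTHS_NM:
        if abs(patch_wavelength_nm - target) <= tolerance_nm:
            return f"active marker ({target} nm)"
    return "ambient light / not a marker"

print(classify_patch(848))   # active marker (850 nm)
print(classify_patch(610))   # ambient light / not a marker
```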
LIGHT-EMITTING USER INPUT DEVICE FOR CALIBRATION OR PAIRING
A light-emitting user input device can include a touch-sensitive portion configured to accept user input (e.g., from a user's thumb) and a light-emitting portion configured to output a light pattern. The light pattern can be used to assist the user in interacting with the user input device. Examples include emulating a multi-degree-of-freedom controller, indicating scrolling or swiping actions, indicating the presence of objects near the device, indicating receipt of notifications, assisting in pairing the user input device with another device, or assisting in calibrating the user input device. The light-emitting user input device can be used to provide user input to a wearable device, such as a head-mounted display device.
Physical-virtual game board and content delivery system
A physical-virtual gaming system comprising a game board including a multiplicity of physical locations, a game overlay including a multiplicity of logical locations, and game pieces for moving throughout the logical locations. The system tracks the physical locations of the game pieces and translates these physical locations into the logical locations using pre-known physical-to-logical location mapping information. Game boards, game overlays and game pieces can be individualized with their own unique identifiers. As game pieces are moved, a game database comprising at least the current game piece locations and the game state is maintained. Based at least in part upon the current game piece locations and game state, the system provides actions including outputting virtual content and information on a shared or private computing device running a game app. Actions also include causing changes to game devices including wearables.
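The physical-to-logical translation step can be sketched as a plain dictionary lookup plus a small game-state record. The cell coordinates, location names, and move_piece helper below are hypothetical illustrations, not the patent's data model.

```python
# Illustrative sketch (assumptions, not the patent's data model): translate a
# tracked physical board cell into a logical game location via a pre-known
# mapping, then update a simple game-state record.

PHYSICAL_TO_LOGICAL = {          # pre-known physical-to-logical mapping
    (0, 0): "start",
    (0, 1): "forest",
    (1, 1): "castle",
}

game_state = {"turn": 1, "piece_locations": {}}

def move_piece(piece_id, physical_cell):
    logical = PHYSICAL_TO_LOGICAL.get(physical_cell)
    if logical is None:
        raise ValueError(f"cell {physical_cell} is not on the overlay")
    game_state["piece_locations"][piece_id] = logical
    return logical

move_piece("knight_1", (0, 1))
print(game_state)   # {'turn': 1, 'piece_locations': {'knight_1': 'forest'}}
```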
PROXIMITY DETECTION DEVICE
Implementations of a proximity detection device according to the present disclosure include a plurality of light emitting elements and a plurality of light receiving elements arranged in a lower portion of a touch panel display. The proximity detection device detects an object that is in proximity by reflected light received by the light receiving elements when irradiation light from the light emitting elements is reflected by the object. Implementations of a proximity detection device include a drive circuit sequentially driving the plurality of light emitting elements, a measurement circuit measuring detection levels of the light receiving elements when the plurality of light emitting elements sequentially emit light, and a control unit having a function to estimate a position in the horizontal direction from a plurality of measurement results and correct the estimated position.
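One simple estimator consistent with this description (though not stated in the abstract) is a weighted centroid of the emitter positions, weighted by the detection level measured while each emitter is driven in turn. The emitter spacing, the clamp-style correction, and the estimate_position helper below are assumptions.

```python
# Hypothetical sketch: estimate a horizontal position from the detection
# levels measured while each light-emitting element is driven one at a time.
# A weighted centroid is one simple estimator; the patent does not specify it.

emitter_x = [0.0, 10.0, 20.0, 30.0]   # assumed emitter positions in mm

def estimate_position(levels, emitter_x=emitter_x):
    total = sum(levels)
    if total == 0:
        return None                    # nothing detected
    centroid = sum(x * l for x, l in zip(emitter_x, levels)) / total
    # Placeholder "correction": clamp the estimate to the panel extent.
    return min(max(centroid, emitter_x[0]), emitter_x[-1])

print(estimate_position([0.1, 0.8, 0.6, 0.05]))  # object near x ≈ 13.9 mm
```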
Systems and methods for augmented reality
Disclosed is a method of localizing a user operating a plurality of sensing components, preferably in an augmented or mixed reality environment. The method comprises transmitting pose data from a fixed control and processing module and receiving the pose data at a first sensing component; the pose data is then transformed into a first-component-relative pose in a coordinate frame based on the control and processing module. A display unit in communication with the first sensing component is updated with the transformed first-component-relative pose to render virtual content with improved environmental awareness.
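The transformation of module-broadcast pose data into a component-relative pose is, at bottom, a change of coordinate frames. The sketch below uses standard 4x4 homogeneous transforms; the specific frames and numbers are assumed for illustration and are not taken from the patent.

```python
# A minimal sketch (assumed math, not the patent's method) of re-expressing a
# pose broadcast by a fixed control/processing module in the coordinate frame
# of a sensing component, using 4x4 homogeneous transforms.

import numpy as np

def make_pose(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Poses of the control module and the sensing component in a common world
# frame (placeholder values).
T_world_module = make_pose(np.eye(3), [1.0, 0.0, 0.0])
T_world_component = make_pose(np.eye(3), [0.0, 2.0, 0.0])

# Pose data transmitted by the module, expressed in the module's own frame.
T_module_content = make_pose(np.eye(3), [0.5, 0.5, 1.0])

# Re-express the content pose relative to the sensing component.
T_component_content = np.linalg.inv(T_world_component) @ T_world_module @ T_module_content
print(T_component_content[:3, 3])   # content position in the component frame
```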
Controlling light source intensities on optically trackable object
Examples are disclosed that relate to dynamically controlling light sources on an optically trackable peripheral device. One disclosed example provides a near-eye display device comprising an image sensor, a communications subsystem, a logic subsystem, and a storage subsystem. The storage subsystem stores instructions executable by the logic subsystem to control a peripheral device comprising a plurality of light sources by receiving image data from the image sensor, identifying in the image data a constellation of light sources formed by a subset of light sources of the peripheral device, and, based upon the constellation of light sources identified, sending to the peripheral device via the communications subsystem constellation information related to the constellation of light sources identified.
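The control flow reads as: detect which subset of the peripheral's light sources (a constellation) is visible in the image data, then message the peripheral about it. The sketch below assumes hypothetical constellation names, LED identifiers, and helper functions; the matching rule (largest overlap) is an illustrative stand-in, not the patent's method.

```python
# Hedged sketch of the reported control flow (all names are placeholders):
# match detected LED identifiers against known constellations and report the
# identified constellation back to the peripheral.

KNOWN_CONSTELLATIONS = {
    "ring_front": {0, 1, 2, 3},
    "ring_top":   {4, 5, 6, 7},
}

def identify_constellation(visible_led_ids):
    best, best_overlap = None, 0
    for name, members in KNOWN_CONSTELLATIONS.items():
        overlap = len(members & visible_led_ids)
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best

def send_constellation_info(peripheral, name):
    # Stand-in for the communications-subsystem message described above.
    print(f"-> {peripheral}: constellation '{name}' currently tracked")

send_constellation_info("controller", identify_constellation({1, 2, 3, 6}))
```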
OPTIMIZING HEAD MOUNTED DISPLAYS FOR AUGMENTED REALITY
While many augmented reality systems provide “see-through” transparent or translucent displays upon which to project virtual objects, many virtual reality systems instead employ opaque, enclosed screens. Indeed, eliminating the user's perception of the real world may be integral to some successful virtual reality experiences. Thus, head mounted displays designed exclusively for virtual reality experiences may not be easily repurposed to capture significant portions of the augmented reality market. Various of the disclosed embodiments facilitate the repurposing of a virtual reality device for augmented reality use. Particularly, by anticipating user head motion, embodiments may facilitate scene renderings better aligned with user expectations than naïve renderings generated within the enclosed field of view. In some embodiments, the system may use procedural mapping methods to generate a virtual model of the environment. The system may then use this model to supplement the anticipatory rendering.
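Anticipating head motion can be as simple as extrapolating the current angular velocity forward by the display latency. The constant-velocity model, the yaw-only state, and the predict_yaw helper below are assumptions chosen for brevity, not the embodiments' actual predictor.

```python
# Illustrative sketch (an assumption, not the embodiments' algorithm) of
# anticipating head motion with a constant-angular-velocity extrapolation, so
# a frame can be rendered for the pose expected at display time.

def predict_yaw(yaw_now_deg, yaw_prev_deg, dt_s, latency_s):
    angular_velocity = (yaw_now_deg - yaw_prev_deg) / dt_s   # deg/s
    return yaw_now_deg + angular_velocity * latency_s        # extrapolate ahead

# Head turned from 10° to 12° over the last 11 ms frame; predict the yaw
# ~20 ms from now, when the rendered frame will actually be displayed.
print(predict_yaw(12.0, 10.0, dt_s=0.011, latency_s=0.020))  # ≈ 15.6°
```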