Patent classifications
G06F3/013
ELECTRONIC DEVICE AND OPERATION METHOD THEREOF
An electronic device includes sensors, a display, and a processor electrically connected to the sensors and the display. The electronic device operates in either a first running mode, in which compass information is provided continuously, or a second running mode, in which compass information is provided in response to a request from a user. When set to the first running mode, the processor sets a performance mode of a digital compass to a first performance mode, determines first performance mode-based compass information using the sensors, and displays the determined first performance mode-based compass information on the display. When set to the second running mode, the processor sets the performance mode of the digital compass to a second performance mode, determines second performance mode-based compass information using the sensors at the request of the user, and displays the determined second performance mode-based compass information on the display.
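The two-running-mode arrangement can be sketched as follows. This is an illustrative assumption, not the patented implementation: the mode names, the heading computation, and the magnetometer interface are all invented for demonstration.

```python
import math

def heading_from_magnetometer(mx: float, my: float) -> float:
    """Derive a compass heading (degrees clockwise from north) from raw x/y field values."""
    return math.degrees(math.atan2(my, mx)) % 360.0

class DigitalCompass:
    # Hypothetical performance modes; the abstract does not name them.
    FIRST_PERFORMANCE = "high_rate"    # compass info shown continuously
    SECOND_PERFORMANCE = "on_demand"   # compass info shown only on user request

    def __init__(self, running_mode: str):
        # Selecting the running mode fixes the performance mode, mirroring the abstract.
        self.performance_mode = (
            self.FIRST_PERFORMANCE if running_mode == "first" else self.SECOND_PERFORMANCE
        )

    def compass_info(self, mx: float, my: float) -> dict:
        """Determine performance mode-based compass information from sensor readings."""
        return {
            "performance_mode": self.performance_mode,
            "heading_deg": heading_from_magnetometer(mx, my),
        }
```

In this sketch, the running mode is chosen once at setup and every subsequent reading is tagged with the performance mode it was produced under.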
Prism based light redirection system for eye tracking systems
A head-mounted device (HMD) contains a display, an optics block, a redirection structure, and an eye tracking system. The display is configured to emit image light and provide it to an eye of a user. The optics block is configured to direct the emitted image light to the eye. The eye tracking system contains a camera, an illumination source, and a controller. The illumination source is configured to illuminate the eye with infrared light for eye tracking measurements. The camera is configured to capture image data using infrared light reflected from the eye. The controller is configured to determine eye tracking information from the captured image data. The redirection structure is configured to direct infrared light reflected from the eye to the eye tracking system. In various embodiments, the redirection structure may comprise a prism array, a lens, a liquid crystal layer, or a grating structure.
Ring motion capture and message composition system
Systems, devices, media, and methods are presented for composing and sharing a message based on the motion of a handheld electronic device such as a ring. The methods in some implementations include presenting a keyboard on a display, collecting course data associated with a course traveled by the ring, and overlaying a trace onto the keyboard, such that the trace is correlated in near real-time with the course traveled by the ring. In some implementations the display element is part of a portable device, such as the lens of an electronic eyewear device. Based on the course data relative to the key locations on the keyboard, the system identifies and presents candidate words to be included in a message.
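The course-data-to-candidate-word step might look like the following minimal sketch. The key layout, the nearest-key metric, and the tiny vocabulary are assumptions for illustration; a real system would use the full keyboard geometry and a language model.

```python
import math

# Hypothetical (x, y) centers for a few keys of an on-screen keyboard.
KEY_POSITIONS = {
    "c": (3.0, 2.0), "a": (0.0, 1.0), "t": (4.0, 0.0), "r": (3.0, 0.0),
}

def nearest_key(point):
    """Map one course sample to the closest key center."""
    return min(KEY_POSITIONS, key=lambda k: math.dist(point, KEY_POSITIONS[k]))

def trace_to_letters(course):
    """Collapse consecutive samples that land on the same key into one letter."""
    letters = []
    for point in course:
        key = nearest_key(point)
        if not letters or letters[-1] != key:
            letters.append(key)
    return "".join(letters)

def candidate_words(course, vocabulary=("cat", "car", "art")):
    """Propose words whose letter sequence matches the traced course."""
    letters = trace_to_letters(course)
    return [w for w in vocabulary if w == letters]
```

The trace overlaid on the keyboard is here reduced to its sequence of nearest keys; the candidate list is whatever vocabulary entries that sequence spells.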
Virtual and augmented reality signatures
A method implemented on a visual computing device to authenticate one or more users includes receiving a first three-dimensional pattern from a user. The first three-dimensional pattern is sent to a server computer. At a time of user authentication, a second three-dimensional pattern is received from the user. The second three-dimensional pattern is sent to the server computer. An indication is received from the server computer as to whether the first three-dimensional pattern matches the second three-dimensional pattern within a margin of error. When the first three-dimensional pattern matches the second three-dimensional pattern within the margin of error, the user is authenticated at the server computer. When the first three-dimensional pattern does not match the second three-dimensional pattern within the margin of error, the user is prevented from being authenticated at the server computer.
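The server-side comparison can be sketched as a pointwise distance check. Representing each 3D pattern as an equal-length sequence of sampled points, and using Euclidean distance as the metric, are assumptions; the abstract specifies only that the match is judged within a margin of error.

```python
import math

def patterns_match(first, second, margin=0.1):
    """Return True when two 3D patterns (point sequences) agree within the margin of error.

    Each pattern is a sequence of (x, y, z) points; corresponding points must
    lie within `margin` of each other for the patterns to match.
    """
    if len(first) != len(second):
        return False
    return all(math.dist(p, q) <= margin for p, q in zip(first, second))
```

A match authenticates the user; any point drifting outside the margin, or a pattern of different length, rejects the attempt.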
System and method for an augmented reality goal assistant
A method for an augmented reality goal assistant is described. The method includes detecting an object associated with a behavioral goal of a user. The method also includes altering an appearance of the detected object based on the behavioral goal of the user. The method further includes displaying the altered appearance of the detected object on an augmented reality headset, such that the altered appearance reflects the behavioral goal of the user.
Systems and methods for driving a display
An image system dynamically updates its drive sequences, the image display settings or display driving characteristics with which a display is operated. The image system may determine the drive sequence at least partially based on input from one or more sensors. For example, the image system may include sensors such as an inertial measurement unit, a light sensor, a camera, a temperature sensor, or other sensors from which sensor data may be collected. The image system may analyze the sensor data to calculate drive sequence settings or to select a drive sequence from a number of predetermined drive sequences. Displaying image content on a display includes providing the display with image data and operating the display with the various drive sequences.
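Selecting a drive sequence from predetermined options based on sensor data could be sketched as below. The sequence names, their settings, and the thresholds are invented for illustration; the abstract leaves these open.

```python
# Hypothetical predetermined drive sequences, each a bundle of display driving
# characteristics (refresh rate, brightness).
DRIVE_SEQUENCES = {
    "low_power":   {"refresh_hz": 30,  "brightness": 0.4},
    "standard":    {"refresh_hz": 60,  "brightness": 0.7},
    "high_motion": {"refresh_hz": 120, "brightness": 0.7},
}

def select_drive_sequence(angular_rate_dps: float, ambient_lux: float) -> str:
    """Pick a predetermined drive sequence from IMU and light-sensor readings."""
    if angular_rate_dps > 90.0:   # fast head motion: favor refresh rate
        return "high_motion"
    if ambient_lux < 10.0:        # dark environment: favor power savings
        return "low_power"
    return "standard"
```

In this sketch the IMU reading dominates the decision; calculating settings continuously instead of selecting among fixed sequences, as the abstract also contemplates, would replace the lookup with a formula.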
System and method for PIN entry on mobile devices
A system for entering a secure Personal Identification Number (PIN) into a mobile computing device includes a mobile computing device and a peripheral device connected via a data communication link. The mobile computing device includes a display and a mobile application that runs on the mobile computing device and displays a grid on the display. The peripheral device includes a display and an encryption engine, and the peripheral device display shows a grid corresponding to the grid displayed on the mobile computing device display. Positional inputs on the mobile computing device grid are sent to the peripheral device, which decodes the positional inputs into PIN digits, generates an encrypted PIN, and sends the encrypted PIN back to the mobile computing device.
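The peripheral's decode step can be sketched as a grid lookup. The 3x4 layout is an assumption, and the XOR keystream below is only a stand-in for the peripheral's encryption engine; a real device would use proper authenticated encryption under a shared key.

```python
# Hypothetical digit layout mirrored on both displays; the mobile side only
# ever sees (row, col) positions, never the digits themselves.
GRID = [["1", "2", "3"],
        ["4", "5", "6"],
        ["7", "8", "9"],
        ["",  "0", ""]]

def decode_positions(positions):
    """Translate (row, col) touch positions from the mobile grid into PIN digits."""
    return "".join(GRID[r][c] for r, c in positions)

def encrypt_pin(pin: str, key: bytes) -> bytes:
    # Placeholder XOR cipher standing in for the encryption engine.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(pin.encode()))
```

Because the mapping from positions to digits lives only on the peripheral, the mobile device handles nothing but opaque positions and ciphertext.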
Method and system for determining a current gaze direction
A method for determining a current gaze direction of a user in relation to a three-dimensional (“3D”) scene, the 3D scene being sampled by a rendering function to produce a two-dimensional (“2D”) projection image of the 3D scene, the sampling being performed based on a virtual camera that is in turn associated with a camera position and a camera direction in the 3D scene. The method includes determining, by a gaze direction detection means, a first gaze direction of the user in relation to the 3D scene at a first gaze time point. The method includes determining a time-dependent virtual camera 3D transformation representing a change of the virtual camera position and/or virtual camera direction between the first gaze time point and a second sampling time point. The method includes determining the current gaze direction as a modified gaze direction calculated based on the first gaze direction and an inverse of the time-dependent virtual camera 3D transformation.
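The compensation step can be illustrated for the simplified case where the camera transformation is a pure rotation, so its inverse is its transpose. Treating the transformation as rotation-only and the gaze as a 3D direction vector are simplifying assumptions for this sketch.

```python
import math

def rotation_z(theta):
    """3x3 rotation about the z axis (the virtual camera turning in yaw by theta radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def transpose(m):
    return [list(row) for row in zip(*m)]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def current_gaze(first_gaze, camera_rotation):
    """Modify the first gaze direction by the inverse of the camera's rotation.

    For a rotation matrix the inverse is the transpose, so undoing the
    camera's motion between the two time points is one transpose-and-multiply.
    """
    return apply(transpose(camera_rotation), first_gaze)
```

If the camera yaws 90 degrees between the first gaze time point and the second sampling, the stale gaze vector is rotated back by 90 degrees so it stays anchored to the same scene content.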
System for authorizing rendering of objects in three-dimensional spaces
Systems and methods for authorizing rendering of objects in three-dimensional spaces are described. The system may include a first system defining a virtual three-dimensional space including the placement of a plurality of objects in the three-dimensional space, a second system including a plurality of rules associated with portions of the three-dimensional space, and a device coupled to the first system and the second system. The device may receive a request to render a volume of three-dimensional space, retrieve the objects for the volume of three-dimensional space, retrieve the rules associated with the three-dimensional space, and apply the rules for the three-dimensional space to the objects.
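The device's request path can be sketched as two filters: keep the objects placed inside the requested volume, then keep only those that every applicable rule authorizes. The data shapes (axis-aligned box volumes, rules as predicates over objects) are assumptions for illustration.

```python
def objects_in_volume(objects, volume):
    """Retrieve the objects whose position lies inside an axis-aligned box volume."""
    (x0, y0, z0), (x1, y1, z1) = volume
    return [o for o in objects
            if x0 <= o["pos"][0] <= x1
            and y0 <= o["pos"][1] <= y1
            and z0 <= o["pos"][2] <= z1]

def apply_rules(objects, rules):
    """Authorize rendering of an object only when every rule permits it."""
    return [o for o in objects if all(rule(o) for rule in rules)]
```

A request to render a volume then composes the two: `apply_rules(objects_in_volume(objects, volume), rules)` yields the objects the device is authorized to render.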
Whole-body human-computer interface
A human-computer interface system having an exoskeleton including a plurality of structural members coupled to one another by at least one articulation configured to apply a force to a body segment of a user, the exoskeleton comprising a body-borne portion and a point-of-use portion; the body-borne portion configured to be operatively coupled to the point-of-use portion; and at least one locomotor module including at least one actuator configured to actuate the at least one articulation, the at least one actuator being in operative communication with the exoskeleton.