Patent classifications
G06F3/012
RENDERING INFORMATION IN A GAZE TRACKING DEVICE ON CONTROLLABLE DEVICES IN A FIELD OF VIEW TO REMOTELY CONTROL
Provided are a computer program product, system, and method for rendering information in a gaze tracking device on controllable devices in a field of view to remotely control. A determination is made of a field of view from the gaze tracking device of a user based on a user position. Devices in the field of view that the user is capable of remotely controlling are determined for rendering in the gaze tracking device. An augmented reality representation of information on the determined devices is rendered in a view of the gaze tracking device. User controls are received to remotely control a target device comprising one of the determined devices for which information is rendered in the gaze tracking device. The received user controls are transmitted to the target device to control the target device.
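The core step above — deciding which controllable devices fall inside the user's field of view — can be sketched as an angular test between the gaze direction and the direction to each device. This is a minimal 2-D illustration, not the patented method; the device schema (`name`, `pos`, `controllable`) and the 90° field of view are assumptions for the example.

```python
import math

def devices_in_field_of_view(user_pos, gaze_dir, devices, fov_deg=90.0):
    """Return names of controllable devices inside the user's field of view.

    user_pos: (x, y) user position; gaze_dir: (dx, dy) gaze direction.
    devices: list of dicts with 'name', 'pos', 'controllable' (hypothetical schema).
    """
    half_fov = math.radians(fov_deg) / 2.0
    gx, gy = gaze_dir
    gnorm = math.hypot(gx, gy)
    visible = []
    for dev in devices:
        if not dev["controllable"]:
            continue
        dx = dev["pos"][0] - user_pos[0]
        dy = dev["pos"][1] - user_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0:
            continue
        # Angle between the gaze direction and the direction to the device.
        cos_angle = (gx * dx + gy * dy) / (gnorm * dist)
        angle = math.acos(max(-1.0, min(1.0, cos_angle)))
        if angle <= half_fov:
            visible.append(dev["name"])
    return visible

devices = [
    {"name": "lamp", "pos": (0.0, 5.0), "controllable": True},
    {"name": "tv", "pos": (5.0, 0.0), "controllable": True},
    {"name": "thermostat", "pos": (0.0, -5.0), "controllable": True},
]
# Looking straight along +y, only the lamp is inside a 90-degree cone.
targets = devices_in_field_of_view((0.0, 0.0), (0.0, 1.0), devices)
```

Once a target is selected from `targets`, received user controls would be forwarded to that device over whatever remote-control channel the system uses.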
METHOD AND SYSTEM FOR GAZE-BASED CONTROL OF MIXED REALITY CONTENT
Systems and methods are presented for discovering and positioning content into augmented reality space. A method includes forming a three-dimensional (3D) map of surroundings of a user of an augmented reality (AR) head mounted display (HMD); determining a depth-wise location of a gaze point of the user based on eye gaze direction and eye vergence; determining a visual guidance line pathway in the 3D map; guiding an action of the user along the visual guidance line pathway at one or more identified focal points; and rendering a mixed reality (MR) object along the visual guidance line pathway at a location corresponding to a direction of the user's gaze.
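The "depth-wise location of a gaze point … based on eye vergence" step rests on simple triangulation: if both eyes rotate inward symmetrically toward the gaze point, the depth follows from the interpupillary distance and the vergence angle. A minimal sketch of that geometry, assuming symmetric convergence (the full method would also use gaze direction and the 3D map):

```python
import math

def depth_from_vergence(ipd_m, vergence_deg):
    """Estimate gaze depth from the eye vergence angle.

    Assumes symmetric convergence: each eye rotates vergence/2 toward
    the gaze point, so depth = (ipd / 2) / tan(vergence / 2).
    """
    half = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half)

# With a 64 mm interpupillary distance and a ~3.67 degree vergence angle,
# the gaze point lies roughly one metre away.
depth = depth_from_vergence(0.064, 3.67)
```

Note that vergence-based depth grows very sensitive at distance: beyond a few metres the vergence angle changes by fractions of a degree, which is why such systems typically fuse it with the 3D map rather than using it alone.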
COORDINATING ALIGNMENT OF COORDINATE SYSTEMS USED FOR A COMPUTER GENERATED REALITY DEVICE AND A HAPTIC DEVICE
A first electronic device controls a second electronic device to measure a position of the first electronic device. The first electronic device includes a motion sensor, a network interface circuit, a processor, and a memory. The motion sensor senses motion of the first electronic device. The network interface circuit communicates with the second electronic device. The memory stores program code that is executed by the processor to perform operations that include, responsive to determining that the first electronic device has a level of motion that satisfies a defined rule, transmitting a request for the second electronic device to measure a position of the first electronic device. The position of the first electronic device is sensed and then stored in the memory. An acknowledgement is received from the second electronic device indicating that it has stored sensor data that can be used to measure the position of the first electronic device.
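The handshake described above — motion gating, a measurement request, local storage, and an acknowledgement — can be sketched as a small protocol between two objects. All class and method names here are hypothetical stand-ins, and the "defined rule" is assumed to be a simple stillness threshold:

```python
class TrackedDevice:
    """Sketch of the motion-gated measurement handshake (names hypothetical)."""

    def __init__(self, motion_threshold=0.1):
        self.motion_threshold = motion_threshold
        self.stored_position = None
        self.acknowledged = False

    def on_motion_sample(self, motion_level, measuring_device):
        # Defined rule (assumed): request a measurement only when nearly still,
        # so the position fix is not blurred by movement.
        if motion_level < self.motion_threshold:
            position = measuring_device.measure()   # second device measures us
            self.stored_position = position          # store the position in memory
            self.acknowledged = measuring_device.ack()
        return self.stored_position

class MeasuringDevice:
    """Stand-in for the second electronic device."""

    def measure(self):
        return (1.0, 2.0, 0.5)  # stand-in position fix

    def ack(self):
        return True  # confirms it has stored sensor data for the measurement

dev = TrackedDevice()
dev.on_motion_sample(0.5, MeasuringDevice())    # too much motion: no request sent
still = dev.on_motion_sample(0.02, MeasuringDevice())  # still enough: measured
```

The point of the gate is that a haptic or CGR device can only align coordinate systems reliably from measurements taken while it is effectively stationary.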
VIRTUAL CONTENT EXPERIENCE SYSTEM AND CONTROL METHOD FOR SAME
Disclosed is a virtual content experience system. In the virtual content experience system, a central server for driving the system includes: a content conversion unit which converts two-dimensional image content, received by means of a data transmission and reception unit or input by a user, into a stereoscopic image; a motion information generation unit which recognizes text information extracted from the two-dimensional image content and converts the text information into motion information; a content playback control unit which is provided to transmit the motion information to a motion information management unit provided in a virtual reality experience chair, or to receive start information and end information about the motion information from the motion information management unit so as to generate and change control information for controlling whether to provide new two-dimensional image content; and a display unit for displaying the output of the content conversion unit, and the motion information or control information.
METHOD AND DEVICE FOR LATENCY REDUCTION OF AN IMAGE PROCESSING PIPELINE
In some implementations, a method includes: determining a complexity value for first image data associated with a physical environment that corresponds to a first time period; determining an estimated composite setup time based on the complexity value for the first image data and virtual content for compositing with the first image data; in accordance with a determination that the estimated composite setup time exceeds a threshold time: forgoing rendering the virtual content from the perspective that corresponds to the camera pose of the device relative to the physical environment during the first time period; and compositing a previous render of the virtual content for a previous time period with the first image data to generate the graphical environment for the first time period.
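The decision in this abstract is a simple budget test: estimate how long compositing will take, and if it would blow the frame budget, reuse the previous render instead of rendering fresh virtual content. A minimal sketch, with an assumed linear cost model and an assumed ~11 ms threshold (roughly one 90 Hz frame); none of these constants come from the patent:

```python
def choose_render_path(complexity, virtual_content_cost, threshold_ms=11.0):
    """Decide whether to render fresh virtual content or reuse the previous render.

    The estimated composite setup time is modeled (hypothetically) as a
    linear function of scene complexity plus the virtual-content cost.
    Exceeding the threshold means reusing the previous render, composited
    with the newly captured image, to keep pipeline latency bounded.
    """
    estimated_setup_ms = 0.5 * complexity + virtual_content_cost
    if estimated_setup_ms > threshold_ms:
        return "reuse_previous_render"
    return "render_fresh"

simple = choose_render_path(complexity=4.0, virtual_content_cost=3.0)
heavy = choose_render_path(complexity=20.0, virtual_content_cost=5.0)
```

The trade-off is staleness for latency: the reused render is one frame behind the camera image, but the composite still ships within the time budget.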
Hands-Free Crowd Sourced Indoor Navigation System and Method for Guiding Blind and Visually Impaired Persons
The present invention discloses an indoor Electronic Traveling Aid (ETA) system for blind and visually impaired (BVI) people. The system comprises a headband, an intuitive tactile display with electromyographic (EMG) feedback, a controller, and server-based methods corresponding to three operation modalities. In the first modality, sighted users mark routes, map navigational directions, and create semantic comments for BVIs. This route information is continuously collected and refined on the ETA servers. In the second modality, BVIs choose routes from the servers and are thereby supplied with real-time navigational guidance. An EMG interface is also used, enabling the user's facial muscles to send commands to the ETA system. In the third modality, BVIs receive real-time audio guidance in complex or unforeseen situations: the ETA provides a crowd-assisted interface and real-time sensory (e.g., video) data, through which crowd assistants analyze the situation and help the BVI to navigate.
Ring motion capture and message composition system
Systems, devices, media, and methods are presented for composing and sharing a message based on the motion of a handheld electronic device such as a ring. The methods in some implementations include presenting a keyboard on a display, collecting course data associated with a course traveled by the ring, and overlaying a trace onto the keyboard, such that the trace is correlated in near real-time with the course traveled by the ring. In some implementations the display element is part of a portable device, such as the lens of an electronic eyewear device. Based on the course data relative to the key locations on the keyboard, the system identifies and presents candidate words to be included in a message.
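The final step — identifying candidate words from course data relative to key locations — resembles gesture-keyboard decoding: score each vocabulary word by how close its letters' keys lie to the traced course. This is a deliberately naive sketch (nearest-trace-point distance, averaged per letter); the key grid, vocabulary, and scoring are all assumptions for illustration:

```python
import math

# Hypothetical key centres on a unit grid (only the keys this example needs).
KEYS = {"h": (5.5, 1.0), "e": (2.0, 0.0), "l": (8.5, 1.0), "o": (8.0, 0.0),
        "a": (0.5, 1.0), "t": (4.0, 0.0)}

def word_cost(word, trace):
    """Average distance from each letter's key centre to the nearest trace point."""
    total = 0.0
    for ch in word:
        kx, ky = KEYS[ch]
        total += min(math.hypot(kx - tx, ky - ty) for tx, ty in trace)
    return total / len(word)

def candidate_words(trace, vocabulary, k=2):
    """Rank vocabulary words by how well they match the traced course."""
    return sorted(vocabulary, key=lambda w: word_cost(w, trace))[:k]

# A course sweeping roughly over h -> e -> l -> l -> o.
trace = [(5.5, 1.0), (4.0, 0.5), (2.0, 0.0), (5.0, 0.5), (8.5, 1.0), (8.0, 0.0)]
best = candidate_words(trace, ["hello", "hat", "tea"], k=1)
```

A production decoder would also weight by letter order along the trace and by a language model, but the distance-to-key-locations idea is the same.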
Visual-inertial tracking using rolling shutter cameras
Visual-inertial tracking of an eyewear device using one or more rolling shutter cameras. The eyewear device includes a position determining system. Visual-inertial tracking is implemented by sensing motion of the eyewear device. An initial pose is obtained for a rolling shutter camera and an image of an environment is captured, the image including feature points captured at particular capture times. A number of poses for the rolling shutter camera is computed based on the initial pose and the sensed movement of the device; the number of computed poses is responsive to the sensed movement. A computed pose is selected for each feature point in the image by matching the capture time for the feature point to the computed time for the computed pose. The position of the eyewear device within the environment is then determined using the feature points and their selected computed poses.
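The key idea above is that a rolling shutter exposes image rows at different times, so each feature point should be associated with the camera pose at its own row's capture time. A minimal sketch, with pose simplified to a 1-D translation and constant velocity from the inertial sensor (both simplifying assumptions; the real system computes full 6-DoF poses):

```python
def interpolate_poses(initial_pose, velocity, readout_time, n_poses):
    """Compute n_poses camera poses spread across the rolling-shutter readout.

    Pose is simplified to a 1-D translation; velocity stands in for the
    sensed inertial motion. More sensed motion would warrant more poses.
    """
    times = [i * readout_time / (n_poses - 1) for i in range(n_poses)]
    return [(t, initial_pose + velocity * t) for t in times]

def pose_for_feature(feature_capture_time, computed_poses):
    """Select the computed pose whose time best matches the feature's capture time."""
    return min(computed_poses, key=lambda tp: abs(tp[0] - feature_capture_time))[1]

# 30 ms readout, device moving at 2 units/s, four interpolated poses.
poses = interpolate_poses(initial_pose=0.0, velocity=2.0, readout_time=0.03, n_poses=4)
early = pose_for_feature(0.0, poses)   # feature in the first rows
late = pose_for_feature(0.03, poses)   # feature in the last rows sees a shifted pose
```

Without this per-row correction, fast motion smears all features onto a single pose and the triangulated position drifts.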
Whole-body human-computer interface
A human-computer interface system having an exoskeleton including a plurality of structural members coupled to one another by at least one articulation configured to apply a force to a body segment of a user, the exoskeleton comprising a body-borne portion and a point-of-use portion; the body-borne portion configured to be operatively coupled to the point-of-use portion; and at least one locomotor module including at least one actuator configured to actuate the at least one articulation, the at least one actuator being in operative communication with the exoskeleton.
Radio frequency sensing in a television environment
Techniques are provided for performing radio frequency (RF) sensing to determine the viewing status of a television user. This can be used to determine user behavior during the playback of content (e.g., whether a user is watching the content), which can be used as a data point for determining the user's level of interest in the content. Using the status of the television user, embodiments can provide additional or alternative functionality, such as powering down and/or powering up the television. Furthermore, RF sensing may be performed by existing television hardware, such as a Wi-Fi transceiver, and may therefore provide RF sensing functionality to a television with little or no added cost.
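The power-up/power-down behavior described above reduces to mapping an RF-sensed presence reading, plus some hysteresis, to a power action. A minimal sketch with an assumed 30-minute idle limit (the threshold and the three-way action set are illustration choices, not claimed values):

```python
def tv_power_action(presence_detected, currently_on, idle_minutes, idle_limit=30):
    """Map an RF-sensed presence reading to a power action (thresholds assumed).

    Power down when nobody has been sensed for longer than the idle limit;
    power up when presence returns while the set is off; otherwise no change.
    """
    if currently_on and not presence_detected and idle_minutes >= idle_limit:
        return "power_down"
    if not currently_on and presence_detected:
        return "power_up"
    return "no_change"

actions = [
    tv_power_action(presence_detected=False, currently_on=True, idle_minutes=45),
    tv_power_action(presence_detected=True, currently_on=False, idle_minutes=0),
    tv_power_action(presence_detected=True, currently_on=True, idle_minutes=0),
]
```

The idle limit provides the hysteresis that keeps a briefly absent viewer (or a momentary RF sensing miss) from toggling the television off.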