Patent classifications
G06F3/017
RENDERING INFORMATION IN A GAZE TRACKING DEVICE ON CONTROLLABLE DEVICES IN A FIELD OF VIEW TO REMOTELY CONTROL
Provided are a computer program product, system, and method for rendering information in a gaze tracking device on controllable devices in a field of view to remotely control. A determination is made of a field of view from the gaze tracking device of a user based on a user position. Devices in the field of view that the user is capable of remotely controlling are determined, to be rendered in the gaze tracking device. An augmented reality representation of information on the determined devices is rendered in a view of the gaze tracking device. User controls are received to remotely control a target device comprising one of the determined devices for which information is rendered in the gaze tracking device. The received user controls are transmitted to the target device to control the target device.
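The flow this abstract describes can be sketched in a few lines: filter registered devices by the wearer's gaze-centered field of view, then forward received user controls to a chosen target. This is a minimal illustration, not the patented implementation; the `Device` class, bearing-based visibility test, and `send_controls` stand-in are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    bearing_deg: float  # direction from the user to the device, in degrees


def devices_in_view(devices, gaze_deg, fov_deg=90.0):
    """Return devices whose bearing lies within the gaze-centered field of view."""
    half = fov_deg / 2.0
    # wrap the angular difference into [-180, 180) before comparing
    return [d for d in devices
            if abs(((d.bearing_deg - gaze_deg + 180) % 360) - 180) <= half]


def send_controls(target, controls):
    """Stand-in for transmitting controls; a real system would use a network link."""
    return {"target": target.name, "controls": controls}


devices = [Device("lamp", 10.0), Device("tv", 100.0), Device("fan", 350.0)]
visible = devices_in_view(devices, gaze_deg=0.0)
# lamp (10°) and fan (350°, i.e. -10°) fall inside a 90° cone centered on 0°
result = send_controls(visible[0], {"power": "on"})
```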
DIGITAL ASSISTANT REFERENCE RESOLUTION
Systems and processes for operating a digital assistant are provided. An example process for performing a task includes, at an electronic device having one or more processors and memory, receiving a spoken input including a request, receiving an image input including a plurality of objects, selecting a reference resolution module of a plurality of reference resolution modules based on the request and the image input, determining, with the selected reference resolution module, whether the request references a first object of the plurality of objects based on at least the spoken input, and in accordance with a determination that the request references the first object of the plurality of objects, determining a response to the request including information about the first object.
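The selection step — picking one reference resolution module from several based on the request and the image input — can be illustrated with a toy dispatcher. The module names, the spatial-cue heuristic, and the left-to-right object ordering are invented for this sketch; the patent does not specify them.

```python
def select_module(request: str, objects: list) -> str:
    """Pick a resolver: spatial for positional phrasing, categorical otherwise."""
    spatial_cues = ("left", "right", "behind", "front")
    if any(cue in request.lower() for cue in spatial_cues):
        return "spatial_resolver"
    return "category_resolver"


def resolve(request: str, objects: list):
    """Resolve which object in the image the spoken request refers to."""
    module = select_module(request, objects)
    if module == "spatial_resolver":
        # assumption: objects are listed left-to-right as they appear in the image
        return objects[0] if "left" in request.lower() else objects[-1]
    # categorical: match an object label mentioned in the request
    for obj in objects:
        if obj in request.lower():
            return obj
    return None
```

A response to the request would then be built from information about the resolved object.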
WEARABLE ELECTRONIC DEVICE AND METHOD FOR PROVIDING INFORMATION OF BRUSHING TEETH IN WEARABLE ELECTRONIC DEVICE
According to an embodiment, a wearable electronic device may include a motion sensor, an audio sensor, a display, a memory, and a processor electrically connected to the motion sensor, the audio sensor, and the memory. The processor may be configured to obtain motion sensing information via the motion sensor, obtain an audio signal corresponding to the motion sensing information via the audio sensor, identify a tooth-brushing hand motion type corresponding to the motion sensing information, identify an audio signal pattern corresponding to the tooth-brushing hand motion type, and identify, based on the tooth-brushing hand motion type and the audio signal pattern, a tooth-brushing hand motion. Other embodiments may also be possible.
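The identification logic amounts to requiring agreement between two modalities: a hand-motion type inferred from the motion sensor and an audio pattern from the audio sensor. A minimal sketch, with an invented lookup table of which audio pattern each motion type predicts:

```python
# Hypothetical mapping from tooth-brushing hand-motion type to the audio
# pattern that motion is expected to produce; the entries are illustrative.
EXPECTED_AUDIO = {
    "horizontal_scrub": "broadband_swish",
    "circular": "periodic_swish",
}


def identify_brushing(motion_type: str, audio_pattern: str):
    """Confirm a tooth-brushing hand motion only when motion and audio agree."""
    if EXPECTED_AUDIO.get(motion_type) == audio_pattern:
        return motion_type
    return None
```

Requiring both modalities to agree suppresses false positives from motions that merely resemble brushing.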
GRAPHICAL MENU STRUCTURE
A human interface including the steps of presenting an image and receiving a gesture from the user. The image is analyzed to identify its elements, which are compared to known images, and then either an input is solicited from the user or a menu is displayed to the user. Comparing the image and/or graphical image elements may be effectuated using a trained artificial intelligence engine or, in some embodiments, with a structured data source, said data source including predetermined images and menu options. If the input from the user is known, then a predetermined menu is presented. If the image is not known, then an image or other menu options are presented, and the desired options are solicited from the user. Once the user selects an option, the resulting selection may be used to further train the AI system or added to the structured data source for future reference.
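The structured-data-source branch of this lookup can be sketched directly: a known image label maps to its predetermined menu, and an unknown label triggers solicitation, with the result stored for future reference. The dictionary layout and the `solicit` callback are assumptions for illustration.

```python
# Hypothetical structured data source: image label -> predetermined menu options.
MENU_SOURCE = {"thermostat": ["set temperature", "schedule", "mode"]}


def menu_for(image_label, solicit=lambda: ["rename", "details"]):
    """Known image -> predetermined menu; unknown -> solicit options and remember them."""
    if image_label in MENU_SOURCE:
        return MENU_SOURCE[image_label]
    options = solicit()
    MENU_SOURCE[image_label] = options  # added to the data source for future reference
    return options
```

A second encounter with the same unknown image then takes the fast path, mirroring how the abstract's user selections grow the data source over time.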
METHOD AND SYSTEM FOR GAZE-BASED CONTROL OF MIXED REALITY CONTENT
Systems and methods are presented for discovering and positioning content into augmented reality space. A method includes forming a three-dimensional (3D) map of surroundings of a user of an augmented reality (AR) head mounted display (HMD); determining a depth-wise location of a gaze point of a user based on eye gaze direction and eye vergence; determining a visual guidance line pathway in the 3D map; guiding an action of the user along the visual guidance line pathway at one or more identified focal points; and rendering a mixed reality (MR) object along the visual guidance line pathway at a location corresponding to a direction of the user’s gaze.
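The depth-wise gaze point from eye vergence rests on simple triangulation: with interpupillary distance `ipd` and vergence angle `v` (the angle between the two eyes' gaze rays at the fixation point), the fixation depth is approximately `(ipd / 2) / tan(v / 2)`. This is standard geometry, not the patent's specific method:

```python
import math


def gaze_depth(ipd_m: float, vergence_rad: float) -> float:
    """Approximate fixation depth (meters) from interpupillary distance and vergence.

    The two gaze rays and the inter-eye baseline form an isosceles triangle;
    the fixation point sits at height (ipd / 2) / tan(vergence / 2).
    """
    return (ipd_m / 2.0) / math.tan(vergence_rad / 2.0)
```

Smaller vergence angles correspond to more distant fixation, which is why vergence-based depth estimates degrade beyond a few meters.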
AIR TRANSPORTATION SYSTEMS AND METHODS
Systems and methods are disclosed for transporting people using air vehicles.
REDUCED SIZE USER INTERFACE
Techniques and user interfaces for accessing, navigating, and displaying photos on an electronic device.
METHOD AND WEARABLE DEVICE FOR DETECTING AND VERBALIZING NONVERBAL COMMUNICATION
A triboelectric sensor device with a substantially cylindrical nonconductive core, and a conductive fiber substantially helically disposed around the nonconductive core and in an axial direction thereof. Example implementations also include a method of extracting communication from body position, by transforming one or more training body position inputs by a principal component analysis, generating training input to a support vector machine (SVM) based on a target body position, and generating one or more SVM classification outputs associated with the target body position.
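The classification pipeline named here (PCA transform, then supervised classification) can be sketched in miniature. To stay self-contained, this toy version computes a 2D first principal axis by power iteration and projects the body-position inputs onto it; a real implementation would feed the projections to the SVM the abstract names, which is omitted here.

```python
def pca_1d(points, iters=50):
    """Project 2D points onto their first principal axis (power iteration)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # 2x2 covariance matrix entries
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    v = (1.0, 1.0)
    for _ in range(iters):  # power iteration toward the dominant eigenvector
        v = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (v[0] ** 2 + v[1] ** 2) ** 0.5
        v = (v[0] / norm, v[1] / norm)
    return [x * v[0] + y * v[1] for x, y in centered], v


# invented body-position samples, roughly along the diagonal
points = [(0, 0), (1, 1.1), (2, 1.9), (3, 3.05)]
scores, axis = pca_1d(points)
```

The 1D `scores` are what would be handed to the SVM as training input for each target body position.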
INFORMATION DISPLAY SYSTEM AND INFORMATION DISPLAY METHOD
An information display system according to the present disclosure includes: a display device provided in a mask, the display device being configured to display a screen viewed by a person wearing the mask; and a display control unit for switching a display mode of the screen.
Gesture-Based Skill Search
A map regarding one or more virtual actions may be stored in memory. The virtual actions may be associated with a set of data associated with performing a corresponding action in a real-world environment. Data regarding an action by a user in the real-world environment may be captured. A current progress level of the user within the virtual environment is identified. The identified current progress level is associated with one or more available virtual actions. The captured data may correspond to an identified one of the available virtual actions based on a match between the captured data and the set of data associated with the identified virtual action as indicated by the map. A search of one or more databases for instructions corresponding to the identified virtual action is initiated.
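The steps above reduce to a map lookup gated by progress level: match the captured data's signature against the virtual actions available at the user's level, then issue a search for the matched action. The map layout, signature strings, and level thresholds below are all assumptions for illustration.

```python
# Hypothetical map: virtual action -> real-world capture signature and the
# progress level at which the action becomes available.
ACTION_MAP = {
    "wall_jump": {"signature": "crouch_then_jump", "min_level": 3},
    "roll": {"signature": "tuck", "min_level": 1},
}


def match_action(captured_signature: str, progress_level: int):
    """Match captured data against actions available at the current progress level."""
    for name, entry in ACTION_MAP.items():
        if progress_level >= entry["min_level"] and entry["signature"] == captured_signature:
            return name
    return None


def search_instructions(action: str) -> str:
    """Stand-in for initiating the database search for the identified action."""
    return f"instructions for {action}"
```

Gating the match on progress level keeps the search from returning instructions for actions the user cannot yet perform.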