Patent classifications
G06F3/01
RENDERING INFORMATION IN A GAZE TRACKING DEVICE ON CONTROLLABLE DEVICES IN A FIELD OF VIEW TO REMOTELY CONTROL
Provided are a computer program product, system, and method for rendering information in a gaze tracking device on controllable devices in a field of view to remotely control. A determination is made of a field of view from the gaze tracking device of a user based on a user position. Devices are determined in the field of view the user is capable of remotely controlling to render in the gaze tracking device. An augmented reality representation of information on the determined devices is rendered in a view of the gaze tracking device. User controls are received to remotely control a target device comprising one of the determined devices for which information is rendered in the gaze tracking device. The received user controls are transmitted to the target device to control the target device.
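The core step this abstract describes, determining which controllable devices fall within the user's field of view from the gaze tracking device, can be sketched as a simple angular cone test. This is an illustrative 2D model, not the patent's implementation; all function and parameter names are assumptions.

```python
import math

def devices_in_field_of_view(user_pos, gaze_dir, devices, fov_degrees=60.0):
    """Return names of controllable devices whose position falls within
    the user's angular field of view (hypothetical cone test in 2D)."""
    half_fov = math.radians(fov_degrees) / 2.0
    # Normalize the gaze direction vector.
    norm = math.hypot(*gaze_dir)
    gx, gy = gaze_dir[0] / norm, gaze_dir[1] / norm
    visible = []
    for name, (dx, dy) in devices.items():
        vx, vy = dx - user_pos[0], dy - user_pos[1]
        dist = math.hypot(vx, vy)
        if dist == 0:
            continue
        # Angle between the gaze direction and the direction to the device.
        cos_angle = (vx * gx + vy * gy) / dist
        if math.acos(max(-1.0, min(1.0, cos_angle))) <= half_fov:
            visible.append(name)
    return visible
```

The devices returned by such a test would then be rendered as augmented reality representations, with user controls forwarded to whichever of them is selected as the target.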
DIGITAL ASSISTANT REFERENCE RESOLUTION
Systems and processes for operating a digital assistant are provided. An example process for performing a task includes, at an electronic device having one or more processors and memory, receiving a spoken input including a request, receiving an image input including a plurality of objects, selecting a reference resolution module of a plurality of reference resolution modules based on the request and the image input, determining, with the selected reference resolution module, whether the request references a first object of the plurality of objects based on at least the spoken input, and in accordance with a determination that the request references the first object of the plurality of objects, determining a response to the request including information about the first object.
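The selection step described above, choosing one of several reference resolution modules based on the request and the detected objects, can be sketched as follows. The two strategies (deictic fallback to the most salient object, versus matching an object label named in the request) and all names are illustrative assumptions, not the process claimed here.

```python
def resolve_by_name(request, objects):
    # The request references an object if its label appears among the words.
    for obj in objects:
        if obj["label"] in request.lower().split():
            return obj
    return None

def resolve_deictic(request, objects):
    # A deictic request ("what is that?") falls back to the most salient
    # object, here approximated by detection confidence.
    return max(objects, key=lambda o: o["confidence"], default=None)

def select_resolution_module(request, objects):
    """Pick one of several hypothetical resolution strategies based on
    the spoken request and the image input."""
    if any(w in request.lower().split() for w in ("this", "that", "it")):
        return resolve_deictic
    return resolve_by_name

def answer(request, objects):
    module = select_resolution_module(request, objects)
    obj = module(request, objects)
    if obj is None:
        return "I couldn't find the object you mean."
    return f"That is a {obj['label']}."
```

In this sketch, a determination that the request references a particular object yields a response containing information about that object, mirroring the flow in the abstract.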
GENERATING ACTIONABLE INSIGHT INTERFACES DERIVED FROM BUSINESS DATA SETS
Systems and methods are described for deriving actionable insight interfaces from received data sets using performance indicators and stored insight templates. A server may tag data columns of the data set, which may be then mapped to a plurality of performance indicator inputs to determine a plurality of performance indicators. A selected insight template may then be retrieved from a template database based on the determined performance indicators matching input requirements of the selected insight template. Each insight template stored within the template database may be stored as a data object that includes a plurality of rules and narrative text that provides a text recommendation based on the rule outputs. After the rules for the selected insight template have been executed, the narrative text and the rule outputs may be transmitted to a display device for display on an insight graphic interface.
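The template-matching and rule-execution flow described above can be sketched as follows: a template is selected when the computed performance indicators satisfy its input requirements, its rules are executed, and the rule outputs are interpolated into the narrative text. The template structure and the revenue example are illustrative assumptions only.

```python
def select_insight_template(kpis, template_db):
    """Return the first stored template whose required performance
    indicator inputs are all present in the computed indicators."""
    for template in template_db:
        if set(template["required_inputs"]) <= set(kpis):
            return template
    return None

def run_insight(kpis, template):
    # Execute each rule against the indicators and fill the narrative
    # text with the rule outputs.
    outputs = {name: rule(kpis) for name, rule in template["rules"].items()}
    return template["narrative"].format(**outputs)

# Hypothetical stored template: report a period-over-period revenue change.
TEMPLATES = [{
    "required_inputs": ["revenue", "prior_revenue"],
    "rules": {
        "change_pct": lambda k: round(
            100 * (k["revenue"] - k["prior_revenue"]) / k["prior_revenue"], 1),
    },
    "narrative": "Revenue changed by {change_pct}% versus the prior period.",
}]
```

The resulting narrative and rule outputs would then be transmitted for display on the insight graphic interface.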
GRAPHICAL MENU STRUCTURE
A human interface including steps of presenting an image and then receiving a gesture from the user. The image is analyzed to identify its elements, which are compared to known images, after which an input is solicited from the user or a menu is displayed. Comparing the image and/or graphical image elements may be effectuated using a trained artificial intelligence engine or, in some embodiments, with a structured data source, said data source including predetermined images and menu options. If the input from the user is known, a predetermined menu is presented. If the image is not known, an image or other menu options are presented and the desired options are solicited from the user. Once the user selects an option, the resulting selection may be used to further train the AI system or added to the structured data source for future reference.
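The recognize-or-solicit loop described above can be sketched with the structured-data-source embodiment (a lookup table of predetermined images and menu options) rather than the trained AI engine; both are named as alternatives in the abstract, and all names here are illustrative.

```python
class MenuSystem:
    """Sketch of the described flow using a structured data source of
    known images mapped to predetermined menus (illustrative names)."""

    def __init__(self):
        # Predetermined images mapped to their menu options.
        self.known = {"thermostat": ["Set temperature", "Schedule", "Off"]}

    def handle_image(self, label):
        if label in self.known:
            return self.known[label]      # present the predetermined menu
        return ["Describe item", "Skip"]  # solicit an input from the user

    def learn(self, label, chosen_options):
        # The user's selection is added to the structured data source
        # for future reference.
        self.known[label] = chosen_options
```

An AI-engine embodiment would replace the dictionary lookup with a classifier and use the `learn` step as additional training data.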
METHOD AND SYSTEM FOR GAZE-BASED CONTROL OF MIXED REALITY CONTENT
Systems and methods are presented for discovering and positioning content into augmented reality space. A method includes forming a three-dimensional (3D) map of surroundings of a user of an augmented reality (AR) head mounted display (HMD); determining a depth-wise location of a gaze point of a user based on eye gaze direction and eye vergence; determining a visual guidance line pathway in the 3D map; guiding an action of the user along the visual guidance line pathway at one or more identified focal points; and rendering a mixed reality (MR) object along the visual guidance line pathway at a location corresponding to a direction of the user’s gaze.
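The depth-wise gaze localization step, determining how far away the gaze point lies from eye vergence, can be sketched with a simple symmetric-fixation model: the two lines of sight converge at a depth determined by the interpupillary distance and the vergence angle. This geometry is a common approximation and an assumption here, not the method claimed in the abstract.

```python
import math

def gaze_depth(ipd_m, vergence_deg):
    """Estimate the depth of the gaze point (in meters) from the
    interpupillary distance and the vergence angle between the two
    eyes' lines of sight, assuming symmetric fixation straight ahead."""
    half = math.radians(vergence_deg) / 2.0
    # Each eye is offset ipd/2 from the midline; the fixation depth is
    # where its line of sight, rotated inward by half the vergence
    # angle, crosses the midline.
    return (ipd_m / 2.0) / math.tan(half)
```

Combined with the eye gaze direction, such a depth estimate locates the 3D gaze point in the mapped surroundings, where the MR object can then be rendered.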
AIR TRANSPORTATION SYSTEMS AND METHODS
Systems and methods are disclosed for transporting people using air vehicles.
GENERATING AUGMENTED REALITY IMAGES FOR DISPLAY ON A MOBILE DEVICE BASED ON GROUND TRUTH IMAGE RENDERING
Systems and methods are disclosed herein for monitoring a location of a client device associated with a transportation service and generating augmented reality images for display on the client device. The systems and methods use sensor data from the client device and a device localization process to monitor the location of the client device by comparing renderings of images captured by the client device to renderings of the vicinity of the pickup location. The systems and methods determine navigation instructions from the user's current location to the pickup location and select one or more augmented reality elements associated with the navigation instructions and/or landmarks along the route to the pickup location. The systems and methods instruct the client device to overlay the selected augmented reality elements on a video feed of the client device.
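The element-selection step described above, choosing an augmented reality element associated with the navigation instruction toward the pickup location, can be sketched with a planar bearing calculation. A production system would derive heading from the localization process; the coordinates, thresholds, and element names below are illustrative assumptions.

```python
import math

def navigation_instruction(current, pickup):
    """Return a hypothetical (instruction, AR element) pair guiding a
    rider from the current location to the pickup point, based on the
    compass bearing in a simple planar (east, north) coordinate frame."""
    dx, dy = pickup[0] - current[0], pickup[1] - current[1]
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # 0 degrees = north
    if bearing < 45 or bearing >= 315:
        return ("Head north", "arrow_up")
    if bearing < 135:
        return ("Head east", "arrow_right")
    if bearing < 225:
        return ("Head south", "arrow_down")
    return ("Head west", "arrow_left")
```

The selected element would then be overlaid on the client device's video feed at a screen position consistent with the localized camera pose.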
PAIN MEDICATION MANAGEMENT SYSTEM
A pain management system for treating pain and/or detecting potential drug abuse in a patient suffering from pain, the system comprising at least one human machine interface (HMI), operable to acquire data generated by a patient responsive to pain that the patient experiences and data responsive to the patient's intake of a drug for controlling the pain; at least one processor operable to process the pain and drug intake data to generate a pain control regimen; and at least one communication interface operable to support communications between an attending medical professional and the at least one HMI and/or the processor to enable the attending medical professional to access the pain and drug intake data and the pain control regimen.
COORDINATING ALIGNMENT OF COORDINATE SYSTEMS USED FOR A COMPUTER GENERATED REALITY DEVICE AND A HAPTIC DEVICE
A first electronic device controls a second electronic device to measure a position of the first electronic device. The first electronic device includes a motion sensor, a network interface circuit, a processor, and a memory. The motion sensor senses motion of the first electronic device. The network interface circuit communicates with the second electronic device. The memory stores program code that is executed by the processor to perform operations that include, responsive to determining that the first electronic device has a level of motion that satisfies a defined rule, transmitting a request for the second electronic device to measure a position of the first electronic device. The position of the first electronic device is sensed and then stored in the memory. An acknowledgement is received from the second electronic device indicating that it has stored sensor data that can be used to measure the position of the first electronic device.
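The handshake described above, where the first device requests a position measurement only when its motion satisfies a defined rule and then stores the result and the acknowledgement, can be sketched as a small state machine. The stillness rule, threshold, and message fields are illustrative assumptions, not the claimed protocol.

```python
class PositionCoordinator:
    """Sketch of the first device's side of the described handshake:
    when a motion sample satisfies the defined rule (here, speed below
    a stillness threshold), request a position measurement from the
    second device and store the position and acknowledgement."""

    def __init__(self, stillness_threshold=0.05):
        self.threshold = stillness_threshold
        self.stored_position = None
        self.acknowledged = False

    def on_motion_sample(self, speed, peer):
        # Defined rule: only request a measurement while nearly still,
        # so the measured position is stable enough to align with.
        if speed <= self.threshold:
            reply = peer.measure_position()
            self.stored_position = reply["position"]
            self.acknowledged = reply["ack"]
        return self.acknowledged

class FakePeer:
    """Stand-in for the second electronic device."""
    def measure_position(self):
        return {"position": (1.0, 2.0, 0.5), "ack": True}
```

Positions captured this way on both devices provide corresponding points from which the computer generated reality device's and the haptic device's coordinate systems can be aligned.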