Patent classifications
G06F2203/0381
Contextual assistant using mouse pointing or touch cues
A method for a contextual assistant to use mouse pointing or touch cues includes receiving audio data corresponding to a query spoken by a user, receiving, in a graphical user interface displayed on a screen, a user input indication indicating a spatial input applied at a first location on the screen, and processing the audio data to determine a transcription of the query. The method also includes performing query interpretation on the transcription to determine that the query is referring to an object displayed on the screen without uniquely identifying the object, and requesting information about the object. The method further includes disambiguating, using the user input indication indicating the spatial input applied at the first location on the screen, the query to uniquely identify the object that the query is referring to, obtaining the information about the object requested by the query, and providing a response to the query.
VOICE COMMAND-DRIVEN DATABASE
A voice command-driven system and computer-implemented method are disclosed for selecting a data item in a list of text-based data items stored in a database using a simple affirmative voice command input without utilizing a connection to a network. The text-based data items in the list are converted to speech using an embedded text-to-speech engine and an audio output of a first converted data item is provided. A listening state is entered into for a predefined pause time to await receipt of the simple affirmative voice command input. If the simple affirmative voice command input is received during the predefined pause time, the first converted data item is selected for processing. If the simple affirmative voice command input is not received during the predefined pause time, an audio output of a next converted data item in the list is provided.
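The listen-confirm loop described above can be sketched as follows. This is a schematic illustration, not the disclosed implementation: `speak` stands in for the embedded text-to-speech engine and `listen_for_yes` for the offline recognizer that listens during the predefined pause time.

```python
def select_from_list(items, speak, listen_for_yes, pause_seconds=3.0):
    """Iterate over text-based data items via TTS; return the item confirmed
    by a simple affirmative voice command, or None if the list is exhausted."""
    for item in items:
        speak(item)                        # audio output of the converted item
        if listen_for_yes(pause_seconds):  # listening state for the pause time
            return item                    # affirmative received: select item
    return None                            # no confirmation for any item
```

A caller would supply real audio I/O callbacks; in a test, stubs that record what was spoken and script the user's replies suffice.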
REDUNDANT TRACKING SYSTEM
A redundant tracking system comprising multiple redundant tracking sub-systems, enabling seamless transitions between those sub-systems, solves the problem of tracking interruptions by merging multiple tracking approaches into a single tracking system. The system is able to track objects with six degrees of freedom (6DoF) and three degrees of freedom (3DoF) by combining and transitioning between multiple tracking systems based on the availability of the tracking indicia tracked by each. Thus, as the indicia tracked by any one tracking system become unavailable, the redundant tracking system seamlessly switches between tracking in 6DoF and 3DoF, thereby providing the user with an uninterrupted experience.
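The fallback behavior can be sketched as below. The sub-system classes (`VisionTracker`, `GyroTracker`) are hypothetical stubs, assuming a vision-based 6DoF tracker that needs visible indicia and an inertial 3DoF tracker that always reports orientation:

```python
class GyroTracker:
    """3DoF stub: orientation is always available from inertial sensors."""
    def __init__(self):
        self.orientation_value = (0.0, 0.0, 0.0)  # roll, pitch, yaw

    def orientation(self):
        return self.orientation_value

class VisionTracker:
    """6DoF stub: pose is available only while visual indicia are in view."""
    def __init__(self):
        self.visible = True
        self.pose_value = ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))  # position, rotation

    def indicia_available(self):
        return self.visible

    def pose(self):
        return self.pose_value

class RedundantTracker:
    """Merge two tracking sub-systems: fall back from 6DoF to 3DoF whenever
    the indicia required for full pose tracking become unavailable."""
    def __init__(self, pose_tracker, rotation_tracker):
        self.pose_tracker = pose_tracker          # 6DoF (position + rotation)
        self.rotation_tracker = rotation_tracker  # 3DoF (rotation only)
        self.mode = None

    def update(self):
        if self.pose_tracker.indicia_available():
            self.mode = "6DoF"
            return self.pose_tracker.pose()
        self.mode = "3DoF"                        # seamless degradation
        return self.rotation_tracker.orientation()
```

Each frame, `update()` prefers the 6DoF source and degrades to 3DoF without interrupting output, mirroring the seamless switching the abstract describes.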
METHODS AND SYSTEMS FOR SUGGESTING AN ENHANCED MULTIMODAL INTERACTION
Provided are methods and systems for suggesting an enhanced multimodal interaction. The method for suggesting at least one modality of interaction, includes: identifying, by an electronic device, initiation of an interaction by a user with a first device using a first modality; detecting, by the electronic device, an intent of the user and a state of the user based on the identified initiated interaction; determining, by the electronic device, at least one of a second modality and at least one second device, to continue the initiated interaction, based on the detected intent of the user and the detected state of the user; and providing, by the electronic device, a suggestion to the user to continue the interaction with the first device using the determined second modality, by indicating the second modality on the first device or the at least one second device.
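One way to picture the determination step is a lookup from the detected intent and user state to a suggested second modality and second device. The rule table and its entries here are entirely hypothetical examples, not the disclosed method:

```python
# Hypothetical rules mapping (user intent, user state) to a suggested
# (second modality, second device) for continuing the interaction.
SUGGESTION_RULES = {
    ("dictate_message", "hands_busy"): ("voice", "smart_speaker"),
    ("read_article", "driving"): ("audio", "car_head_unit"),
    ("browse_photos", "watching_tv"): ("gesture", "smart_tv"),
}

def suggest(intent, state, default=("touch", None)):
    """Return (second_modality, second_device); fall back to the current
    modality on the first device when no rule matches."""
    return SUGGESTION_RULES.get((intent, state), default)
```

The electronic device would then surface the suggestion on the first device or the named second device.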
Executing gestures with active stylus
In one embodiment, a stylus includes one or more electrodes, one or more sensors for detecting movement of the stylus, and one or more computer-readable non-transitory storage media embodying logic for transmitting signals wirelessly to a device through a touch sensor of the device.
HUMAN INTERFACE SYSTEM
A human interface system comprises a physical controller configured to receive input from a user and a brain-computer interface in which visual stimuli are presented such that the intention of the user can be validated. The input data from the physical controller is combined with input data from the brain-computer interface to provide hybrid input, which may be used to control one or more external real or computer-generated objects. A method of operating said human interface system is also disclosed.
METHOD OF CONTROLLING PROJECTOR AND PROJECTOR
A method of controlling a projector includes: projecting, on a projection surface, an achromatic area image representing a drawing area in which the projector accepts drawing using a pointer, at a first luminance whose ratio to a maximum luminance at which the projector is capable of projecting an image on the projection surface is smaller than or equal to a specific ratio; detecting a position on the projection surface pointed at by the pointer while projecting the area image; determining whether or not the position is included in the drawing area; and displaying, by the projector, an area including at least a part of an outline of the drawing area at a luminance higher than the first luminance when it is determined that the position is included in the drawing area.
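The containment check and luminance decision can be sketched in a few lines. The function names and the doubled-luminance policy are assumptions for illustration; the abstract only requires the outline luminance to exceed the first luminance:

```python
def pointer_in_drawing_area(pointer, area):
    """area = (x, y, width, height) of the drawing area on the surface."""
    px, py = pointer
    x, y, w, h = area
    return x <= px <= x + w and y <= py <= y + h

def outline_luminance(pointer, area, first_luminance, max_luminance):
    """Display the drawing-area outline brighter than the achromatic area
    image while the pointer is inside the area (doubling is an assumed
    policy, capped at the projector's maximum luminance)."""
    if pointer_in_drawing_area(pointer, area):
        return min(2 * first_luminance, max_luminance)
    return first_luminance
```

For instance, with a dim area image at 30% of maximum, the outline would brighten to 60% while the pointer is inside the drawing area.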
Techniques for interacting with handheld devices
In one embodiment of the present invention, a method for multiple device interaction includes detecting an orientation of a first device relative to a second device. The method also includes detecting a first gesture performed with either the first device or the second device, wherein the first gesture causes a first action that is based at least in part on the orientation of the first device relative to the second device.
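A toy Python sketch of an action that depends on relative orientation. The gesture names, the bearing representation, and the 45° window are all hypothetical choices, not the embodiment itself:

```python
def action_for_gesture(gesture, relative_bearing_deg):
    """Map a gesture to an action that depends on the first device's
    orientation relative to the second device. Here a flick transfers
    content only when the second device lies roughly in the flick
    direction (within an assumed 45-degree window)."""
    if gesture == "flick":
        if abs(relative_bearing_deg) <= 45:  # second device roughly ahead
            return "transfer_content"
        return "no_action"
    return "no_action"
```

The same flick thus produces different actions as the devices' relative orientation changes, which is the core idea of the claim.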
Information processing apparatus and control method for controller apparatus
The information processing apparatus is connected to a controller apparatus provided with a push button that moves from a first position to a second position when pushed by a user's finger. The information processing apparatus acquires the push-in amount of the push button of the controller apparatus, determines whether or not the push-in amount is in a range that excludes the first position and the second position and is bounded by two threshold values set between the first position and the second position, and performs predetermined processing on the basis of the result of the determination.
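The range determination reduces to a simple comparison. Normalizing the push-in amount so that the first position is 0.0 and the second is 1.0 is an assumption for illustration:

```python
def in_intermediate_range(amount, low, high, first=0.0, second=1.0):
    """True when the push-in amount lies strictly between two thresholds
    set between the first (released) and second (fully pressed) positions,
    so the endpoints themselves are excluded."""
    assert first < low < high < second, "thresholds must sit between the positions"
    return low < amount < high
```

A half-press then triggers the predetermined processing, while the released and fully pressed endpoints do not.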
MULTI-MODAL SENSOR BASED PROCESS TRACKING AND GUIDANCE
Examples are disclosed that relate to computer-based tracking of a process performed by a user. In one example, multi-modal sensor information is received via a plurality of sensors. A world state of a real-world physical environment and a user state in the real-world physical environment are tracked based on the multi-modal sensor information. A process being performed by the user within a working domain is recognized based on the world state and the user state. A current step in the process is detected based on the world state and the user state. Domain-specific instructions directing the user how to perform an expected action are presented via a user interface device. A user action is detected based on the world state and the user state. Based on the user action differing from the expected action, domain-specific guidance to perform the expected action is presented via the user interface device.
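The guidance decision in the last two sentences can be sketched as a small function. The dictionaries and the recipe example are hypothetical placeholders for the domain-specific content:

```python
def guide_step(expected_action, detected_action, instructions, guidance):
    """Present the instruction for the current step; when the detected user
    action differs from the expected action, append domain-specific
    corrective guidance for the user interface device to present."""
    messages = [instructions[expected_action]]
    if detected_action != expected_action:
        messages.append(guidance[expected_action])
    return messages
```

In practice the expected and detected actions would come from the tracked world state and user state; here they are passed in directly.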