Patent classifications
G06F3/04892
Automated teller device having accessibility configurations
An automated teller device having accessibility configurations is disclosed. In one aspect, in response to a setting that enables the accessibility keypad mode, the automated teller device operates a session in the accessibility keypad mode. In the accessibility keypad mode, a second set of actions is mapped to the keys of the keypad. The second set of actions is different from a first set of actions mapped to the keys of the keypad in a standard keypad mode. The second set of actions comprises one or more of: actions for navigation and input selection of the graphical user interface, actions for control of audio being reproduced, actions for control of the volume of the audio being reproduced, or actions for control of the rate of reproduction of the audio being reproduced.
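A minimal Python sketch of the dual key mapping this abstract describes; the key labels, action names, and layout are illustrative assumptions, not taken from the patent:

# Hypothetical dual keypad mappings: the same physical keys resolve to
# different actions depending on whether accessibility mode is enabled.
STANDARD_ACTIONS = {
    "1": "digit_1", "2": "digit_2", "3": "digit_3",
    "ENTER": "confirm", "CANCEL": "cancel",
}

ACCESSIBILITY_ACTIONS = {
    "2": "navigate_up", "8": "navigate_down",
    "4": "navigate_left", "6": "navigate_right",
    "5": "select_focused_item",
    "1": "volume_down", "3": "volume_up",          # audio volume control
    "7": "audio_rate_slower", "9": "audio_rate_faster",  # playback rate
    "ENTER": "confirm", "CANCEL": "repeat_audio",
}

def handle_key(key: str, accessibility_mode: bool) -> str:
    """Resolve a key press to an action under the active keypad mode."""
    table = ACCESSIBILITY_ACTIONS if accessibility_mode else STANDARD_ACTIONS
    return table.get(key, "no_op")

# The same key produces a different action per mode.
assert handle_key("2", accessibility_mode=False) == "digit_2"
assert handle_key("2", accessibility_mode=True) == "navigate_up"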
METHOD AND ELECTRONIC DEVICE FOR NAVIGATING APPLICATION SCREEN
Provided are an electronic device for navigating an application screen, and an operating method thereof. The method may include receiving a user input; determining, based on the user input, a user intent for controlling the electronic device; determining a command for performing a control operation corresponding to the user intent as a goal; identifying elements of a user interface on the screen of the application; determining, based on the user intent and the elements of the user interface, at least one sub-goal for executing the command; and executing the command by performing at least one task corresponding to the at least one sub-goal, wherein the at least one sub-goal is changeable based on a validation of an operation of navigating the application for executing the command, and the at least one task includes units of action for navigating the application.
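A sketch of the goal / sub-goal loop described above, assuming sub-goals are derived by matching intent words against visible UI element labels; the function names and the trivial planner and validator are stand-ins, not the patent's method:

from dataclasses import dataclass

@dataclass
class SubGoal:
    target: str
    tasks: tuple = ("scroll_to", "tap")  # units of action for navigating

def plan_sub_goals(intent: str, ui_elements: list[str]) -> list[SubGoal]:
    """Derive sub-goals from the user intent and the identified UI elements."""
    words = intent.lower().split()
    return [SubGoal(e) for e in ui_elements
            if any(w in e.lower() for w in words)]

def validate(sub_goal: SubGoal) -> bool:
    """Check that navigation actually reached the sub-goal's target."""
    return True  # stub: a real device would inspect the new screen state

def execute_command(intent: str, ui_elements: list[str]) -> None:
    pending = plan_sub_goals(intent, ui_elements)
    while pending:
        goal = pending.pop(0)
        for task in goal.tasks:
            print(f"{task} -> {goal.target}")
        if not validate(goal):
            # Sub-goals are changeable: replan against the current screen.
            pending = plan_sub_goals(intent, ui_elements)

execute_command("open display settings", ["Bluetooth", "Display", "Sound"])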
Simulated Input Mechanisms for Small Form Factor Devices
A wearable computing device includes a display, a motion sensor, and a controller that: defines a pose of a simulated input object with selectable input elements; using the motion sensor, determines current poses of the display, and for each pose: (i) based on the pose and the pose of the input object, selects a portion of the input object, including a subset of the input elements, and (ii) renders the portion of the input object on the display; for at least one of the current poses, detects a simulated key press associated with one of the subset of input elements, and generates input data corresponding to the one of the subset of input elements. The device includes a housing containing the display, the motion sensor, and the controller; and a mounting component, coupled to the housing and configured to removably affix the housing to a forearm of an operator.
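A minimal sketch of the pose-to-portion mapping, assuming the simulated input object is a single row of keys and the device pose is reduced to one roll angle; the windowing rule and all names are invented for illustration:

# The display shows only a WINDOW-sized subset of the simulated input
# object; which subset depends on the display's current pose.
KEYS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
WINDOW = 5  # input elements that fit on the small display at once

def visible_portion(roll_deg: float) -> list[str]:
    """Select the subset of input elements for the current pose."""
    span = len(KEYS) - WINDOW
    start = round((roll_deg + 90.0) / 180.0 * span)  # map -90..+90 deg
    start = max(0, min(span, start))
    return KEYS[start:start + WINDOW]

def on_key_press(roll_deg: float, pressed_index: int) -> str:
    """Generate input data for a simulated key press on the visible subset."""
    return visible_portion(roll_deg)[pressed_index]

print(visible_portion(0.0))  # portion rendered when the forearm is level
print(on_key_press(0.0, 2))  # input produced by pressing its third key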
Aircraft system and method to move cursor to default position with hardware button
A system may include a display installed in an aircraft, a hardware button installed in the aircraft, and a processor installed in the aircraft, wherein the processor may be communicatively coupled to the display and to the hardware button. The processor may be configured to: output at least one view to the display; receive a user input from the hardware button; and based at least on the user input, cause a cursor to move to a default position on one of the at least one view.
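A hypothetical sketch of the button handler: on the hardware button's signal, the cursor snaps to the default position of the currently displayed view. The view names and coordinates are invented for illustration:

DEFAULT_POSITIONS = {
    "primary_flight_display": (512, 384),
    "navigation_display": (400, 300),
}

cursor = {"view": "navigation_display", "x": 0, "y": 0}

def on_hardware_button_press() -> None:
    """Move the cursor to the default position on the active view."""
    x, y = DEFAULT_POSITIONS[cursor["view"]]
    cursor["x"], cursor["y"] = x, y

on_hardware_button_press()
print(cursor)  # cursor moved to the view's default position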
Systems and methods for controlling cursor behavior
Systems, methods, and non-transitory computer readable media containing instructions for causing at least one processor to perform operations to enable cursor control in an extended reality space are provided. In one implementation, the processor is configured to perform operations comprising receiving from an image sensor first image data reflecting a first region of focus of a user of a wearable extended reality appliance; causing a first presentation of a virtual cursor in the first region of focus; receiving from the image sensor second image data reflecting a second region of focus of the user outside the initial field of view in the extended reality space; receiving input data indicative of a desire of the user to interact with the virtual cursor; and causing a second presentation of the virtual cursor in the second region of focus in response to the input data.
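A minimal sketch, assuming the image-derived regions of focus arrive as 2-D points and an explicit user input triggers relocation of the cursor; the names are assumptions, not the patent's API:

from dataclasses import dataclass

@dataclass
class CursorState:
    x: float = 0.0
    y: float = 0.0

def update_cursor(cursor: CursorState,
                  focus_region: tuple[float, float],
                  wants_interaction: bool) -> None:
    """Re-present the cursor in the user's current region of focus,
    but only when input indicates a desire to interact with it."""
    if wants_interaction:
        cursor.x, cursor.y = focus_region

c = CursorState()
update_cursor(c, focus_region=(3.2, -1.1), wants_interaction=False)
print(c)  # unchanged: the user merely looked elsewhere
update_cursor(c, focus_region=(3.2, -1.1), wants_interaction=True)
print(c)  # cursor re-presented in the second region of focus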
Underwater Camera Operations
Camera operations are controlled by motion patterns determined from the outputs of an internal motion sensor. These methods remove the need for a knob, button, touch screen, or other mechanical control device with movable components, which effectively eliminates common water-leakage weak points in electronic devices with cameras. Motion patterns determined from the sensor outputs are also used to adjust camera operation parameters such as the brightness of a supporting light source, shutter speed, aperture opening, and contrast. These methods are also applicable to land operations.
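An illustrative sketch of motion-pattern control: a short window of motion-sensor samples is classified into a pattern, and patterns map to camera commands. The thresholds, pattern names, and commands are all invented, not the patent's:

PATTERN_COMMANDS = {
    "double_shake": "trigger_shutter",
    "tilt_up": "increase_light_brightness",
    "tilt_down": "decrease_light_brightness",
}

def classify_motion(samples: list[float]) -> str | None:
    """Crude pattern detector over vertical-axis acceleration (in g)."""
    spikes = sum(1 for s in samples if abs(s) > 1.5)
    if spikes >= 2:
        return "double_shake"
    mean = sum(samples) / len(samples)
    if mean > 0.3:
        return "tilt_up"
    if mean < -0.3:
        return "tilt_down"
    return None

def on_motion_window(samples: list[float]) -> str | None:
    """Map a detected motion pattern to a camera operation, if any."""
    return PATTERN_COMMANDS.get(classify_motion(samples))

print(on_motion_window([0.1, 1.8, 0.0, -1.7, 0.1]))  # -> trigger_shutter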
Devices, systems, and methods for controlling computing devices via neuromuscular signals of users
The disclosed human computer interface (HCI) system may include (1) at least one processor, (2) a plurality of sensors that detect one or more neuromuscular signals from a forearm or wrist of a user, and (3) memory that stores (A) one or more trained inferential models that determine an amount of force associated with the one or more neuromuscular signals and (B) computer-executable instructions that, when executed by the at least one processor, cause the at least one processor to (I) identify the amount of force determined by the one or more trained inferential models, (II) determine that the amount of force satisfies a threshold force value, and in accordance with the determination that the amount of force satisfies the threshold force value, (III) generate a first input command for the HCI system. Various other devices, systems, and methods are also disclosed.
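A minimal sketch of the thresholding step: an inferential model estimates force from wrist neuromuscular signals, and satisfying a threshold generates an input command. The linear "model" below is a stand-in assumption, not a trained inferential model:

FORCE_THRESHOLD = 0.6  # normalized force value that triggers a command

def infer_force(emg_channels: list[float]) -> float:
    """Stand-in for a trained inferential model mapping signals to force."""
    return sum(abs(v) for v in emg_channels) / len(emg_channels)

def process_sample(emg_channels: list[float]) -> str | None:
    """Generate a first input command when the force satisfies the threshold."""
    force = infer_force(emg_channels)
    if force >= FORCE_THRESHOLD:
        return "first_input_command"  # e.g., a click or pinch event
    return None

print(process_sample([0.2, 0.1, 0.3]))  # below threshold -> None
print(process_sample([0.9, 0.7, 0.8]))  # threshold satisfied -> command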
System and methods for device interaction using a pointing device and attention sensing device
A system and methods are provided to manage gestures and positional data from a pointing device, considering an attention sensing device with known accuracy characteristics. The method uses the state of the user's attention and the pointing device data as input, mapping them against predefined regions on the device's screen(s). It then uses both the mapping results and raw inputs to affect the device, such as sending instructions or moving the pointing cursor.
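A hypothetical sketch of the fusion step: the gaze point (with its known accuracy), the pointer position, and predefined screen regions are combined, and the result either warps the cursor or passes raw pointer input through. The region layout and decision rule are invented for illustration:

REGIONS = {
    "toolbar": (0, 0, 1920, 100),
    "canvas":  (0, 100, 1920, 1080),
}

def region_of(x: float, y: float) -> str | None:
    """Map a screen position to one of the predefined regions."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def fuse(gaze_xy, gaze_accuracy_px, pointer_xy):
    """Warp the cursor toward attention only when gaze and pointer
    disagree by more than the attention sensor's known accuracy."""
    gx, gy = gaze_xy
    px, py = pointer_xy
    if ((gx - px) ** 2 + (gy - py) ** 2) ** 0.5 > gaze_accuracy_px:
        return ("warp_cursor", region_of(gx, gy), (gx, gy))
    return ("raw_pointer", region_of(px, py), (px, py))

print(fuse((960, 540), 80.0, (50, 40)))    # attention far away: warp
print(fuse((960, 540), 80.0, (950, 530)))  # agreement: raw pointer input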