Patent classifications
G06F3/0236
Systems and methods for context-based optical character recognition
Methods, systems, and apparatus, including computer programs stored on computer-readable media, for displaying contextually relevant information, comprising receiving cursor data comprising a location of a cursor on an electronic display, and determining a screenshot of at least a portion of the electronic display. One or more proximate alphanumeric characters may be determined in at least a portion of the screenshot based on the location of the cursor, and at least one of the proximate alphanumeric characters may be matched with one or more terms from a predetermined list of terms. An information card may be caused to be displayed on the electronic display based on the location of the cursor, the information card corresponding to the one or more terms from the predetermined list of terms.
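A minimal sketch of the flow described above, assuming Pillow for screen capture and pytesseract for recognition (the patent names no libraries); the capture radius, term list, and card identifiers are likewise illustrative:

```python
# Grab the screen region around the cursor, OCR it, match recognized
# words against a predetermined term list, and report which information
# card to display. Library choices and constants are assumptions.
from PIL import ImageGrab
import pytesseract

TERMS = {"acme corp": "card:acme-overview", "q3 revenue": "card:q3-metrics"}
RADIUS = 120  # pixels captured around the cursor; arbitrary choice

def card_for_cursor(cursor_x: int, cursor_y: int) -> str | None:
    # Screenshot of at least a portion of the display, centered on the cursor.
    region = ImageGrab.grab(bbox=(cursor_x - RADIUS, cursor_y - RADIUS,
                                  cursor_x + RADIUS, cursor_y + RADIUS))
    # Proximate alphanumeric characters near the cursor location.
    text = pytesseract.image_to_string(region).lower()
    # Match recognized text against the predetermined list of terms.
    for term, card_id in TERMS.items():
        if term in text:
            return card_id  # caller displays this card at the cursor
    return None
```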
SYSTEMS TO ENHANCE DATA ENTRY IN MOBILE AND FIXED ENVIRONMENT
A mobile phone device includes a housing having a substantially rectangular shape wherein its height dimension substantially corresponds to a distance between an ear and a mouth of a user and wherein its width dimension is less than its height dimension. A display unit is integrated within the front surface of the mobile phone device. The display unit substantially entirely covers the front surface of the mobile phone device. The mobile phone device does not include a physical key on the front surface.
Inputting images to electronic devices
A computing device is described which has a memory storing text input by a user. The computing device has a processor which is configured to send the text to a prediction engine that has been trained to predict images from text. The processor is configured to receive from the prediction engine, in response to the sent text, a plurality of predictions, each prediction comprising an image predicted as being relevant to the text. The processor is configured to insert a plurality of the images into the text on the basis of criteria comprising one or more of: ranks of the predictions, categories of the images, rules associated with one or more of the images, user input, or a trigger word. The processor is configured to insert the plurality of images into the text sequentially, in an order corresponding to the ranks of the predictions.
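As an illustration of rank-ordered insertion under assumed data shapes (the field names `rank`, `category`, and `image`, the category filter, and the `top_n` cap are not from the abstract):

```python
# Illustrative sketch of rank-ordered image insertion. The prediction
# engine is mocked; field names and filtering criteria are assumptions.
from dataclasses import dataclass

@dataclass
class Prediction:
    image: str      # e.g., a filename or emoji returned by the engine
    rank: int       # lower rank = more relevant
    category: str

def insert_images(text: str, predictions: list[Prediction],
                  allowed_categories: set[str], top_n: int = 3) -> str:
    # Filter by one of the named criteria (here: category), then insert
    # sequentially in an order corresponding to prediction rank.
    chosen = sorted(
        (p for p in predictions if p.category in allowed_categories),
        key=lambda p: p.rank)[:top_n]
    return text + "".join(" " + p.image for p in chosen)

preds = [Prediction("🎂", 2, "emoji"), Prediction("🎉", 1, "emoji"),
         Prediction("cat.png", 3, "photo")]
print(insert_images("happy birthday", preds, {"emoji"}))
# -> "happy birthday 🎉 🎂"
```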
Systems, methods, and interfaces for performing inputs based on neuromuscular control
The disclosed computer-implemented method may include presenting, via a user interface, a sensory cue, and receiving, from neuromuscular sensors of a wearable device, various neuromuscular signals generated by a user wearing the wearable device, where the user generates the neuromuscular signals in response to the sensory cue being presented to the user via the user interface. The method may also include interpreting the received neuromuscular signals as input commands with respect to the sensory cue provided by the user interface, such that the input commands initiate performance of specified tasks within the user interface. The method may also include performing the specified tasks within the user interface according to the interpreted input commands. Various other methods, systems, and computer-readable media are also disclosed.
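A toy sketch of the cue-to-command loop; the sensor read and the gesture classifier are stand-ins, and the cue-to-command mapping is invented for illustration:

```python
# Present a cue, read neuromuscular signals, interpret them as an input
# command, and perform the corresponding task. Everything below the
# control flow itself is a placeholder.
import random

CUE_COMMANDS = {"pinch": "select", "wrist_flex": "scroll_down"}

def read_neuromuscular_signals() -> list[float]:
    # Placeholder for samples from the wearable's neuromuscular sensors.
    return [random.random() for _ in range(16)]

def interpret(signals: list[float]) -> str:
    # Stand-in classifier mapping signals to a gesture label.
    return "pinch" if sum(signals) > 8 else "wrist_flex"

def run_cue(cue: str) -> None:
    print(f"UI presents sensory cue: {cue}")
    gesture = interpret(read_neuromuscular_signals())
    command = CUE_COMMANDS[gesture]   # interpret signals as input command
    print(f"performing task for command: {command}")

run_cue("highlight the OK button")
```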
Method and system for ranking candidates in input method
A method and a system for ranking candidates in an input method are provided. The method comprises: receiving an initial key code string input by a user using an input method; and, for each character in the initial key code string, obtaining a weight of the character and weights of the characters surrounding it, and establishing a key code string weight list with a corresponding hierarchy according to the character input order. The method further comprises: when character combinations are obtained from the input method dictionary, determining weights of the character combinations using the key code string weight list, according to a correspondence relationship between a hierarchy in the dictionary and the hierarchy in the key code string weight list; and ranking the candidates corresponding to the character combinations based on the weights of the character combinations.
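One way to read this, sketched under assumed data shapes: each position in the key code string carries a weight map over the typed character and its keyboard neighbors, and a candidate combination is scored position by position against that list. The neighbor table and weight values are illustrative:

```python
# Build a per-position weight list from the key code string, then score
# candidate character combinations against it and rank them.
def build_weight_list(key_codes: str) -> list[dict[str, float]]:
    # Each typed character gets full weight; "surrounding" (adjacent-key)
    # characters get a lower weight. The neighbor table is illustrative.
    neighbors = {"q": "wa", "w": "qes", "e": "wrd"}
    weights = []
    for ch in key_codes:
        w = {ch: 1.0}
        for n in neighbors.get(ch, ""):
            w[n] = 0.3
        weights.append(w)
    return weights

def score(candidate: str, weight_list: list[dict[str, float]]) -> float:
    # Positions in the candidate correspond to levels of the weight list
    # (the hierarchy correspondence named in the abstract).
    s = 1.0
    for ch, w in zip(candidate, weight_list):
        s *= w.get(ch, 0.0)
    return s

wl = build_weight_list("qe")
candidates = ["qe", "we", "qw"]
print(sorted(candidates, key=lambda c: score(c, wl), reverse=True))
# -> ['qe', 'we', 'qw']
```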
Eye-tracking communication methods and systems
Provided is a control system that interfaces with an individual by tracking the eyes and/or other physiological signals generated by the individual. The system is configured to classify the captured eye images into gestures that emulate joystick-like control of a computer. These gestures permit the user to operate, for instance, a computer or a system with menu items.
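A minimal sketch of gestures driving joystick-like menu control; the classifier is a placeholder and the gesture labels are assumptions:

```python
# Map classified eye gestures to up/down moves and activation over a
# simple menu. The image classifier is a stand-in.
MENU = ["Open", "Save", "Close"]

def classify_eye_image(image) -> str:
    # Placeholder for the gesture classifier described in the abstract.
    return "look_down"

def handle(gesture: str, index: int) -> int:
    moves = {"look_up": -1, "look_down": 1}
    if gesture == "blink_long":
        print(f"activate menu item: {MENU[index]}")
        return index
    return max(0, min(len(MENU) - 1, index + moves.get(gesture, 0)))

idx = 0
idx = handle(classify_eye_image(None), idx)
print(f"highlighted: {MENU[idx]}")  # -> "highlighted: Save"
```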
Detecting and using body tissue electrical signals
Bio-potentials are sensed on the skin of a subject. The bio-potentials include muscle bio-potentials and nerve bio-potentials. Skin sensors are positioned to enable the sensing circuitry to emphasize the nerve bio-potentials and deemphasize the muscle bio-potentials in the processed bio-potential signals generated by the sensing circuitry. A machine learning component identifies control sequences or tracks body motions based on the processed bio-potential signals.
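The abstract states the goal but not the mechanism. One plausible mechanism is frequency-domain filtering, since nerve potentials carry more high-frequency content than surface EMG; the Butterworth high-pass, cutoff, and sampling rate below are assumptions:

```python
# Emphasize nerve over muscle bio-potentials by high-pass filtering
# above typical surface-EMG energy. Cutoff and filter order are
# illustrative, not from the patent.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 4000  # sampling rate in Hz (assumed)

def emphasize_nerve(raw: np.ndarray) -> np.ndarray:
    sos = butter(4, 500, btype="highpass", fs=FS, output="sos")
    return sosfiltfilt(sos, raw)

raw = np.random.randn(FS)            # one second of skin-sensor data
features = emphasize_nerve(raw)
# A machine learning component would consume `features` to identify
# control sequences or track body motions.
```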
Wearable device with a bezel to sense a touch input
An electronic device is provided. The electronic device includes a bezel as a first metallic component, a dial as a second metallic component that forms a capacitor with the bezel, an inner ring as a dielectric disposed between the bezel and the dial, and at least one processor configured to obtain a capacitance value generated by a touch input on the bezel.
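A small sketch of turning the obtained capacitance value into a touch event; the baseline, threshold, and sensor read are all stand-ins:

```python
# Detect a bezel touch from a change in the bezel/dial capacitance.
BASELINE_PF = 12.0   # untouched bezel capacitance (assumed value)
TOUCH_DELTA = 1.5    # minimum change treated as a touch (assumed)

def read_bezel_capacitance() -> float:
    # Placeholder for reading the bezel/dial capacitor through the
    # dielectric inner ring, e.g. via a capacitance-to-digital converter.
    return 14.1

def bezel_touched() -> bool:
    return read_bezel_capacitance() - BASELINE_PF >= TOUCH_DELTA

if bezel_touched():
    print("bezel touch input detected")
```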
Detecting and Using Body Tissue Electrical Signals
Systems and methods for gesture control are disclosed. In some embodiments, a system may include a plurality of electrode pairs, a motion sensor, a controller, and a classifier. The system may be configured to: enter a monitoring state in which the system is configured to receive data; receive a first set of data; determine that the first set of data does not satisfy one or more action criteria; return to the monitoring state without transmitting the first set of data to the classifier; receive a second set of data; determine that the second set of data satisfies the one or more action criteria; transmit the second set of data to the classifier; using the classifier, analyze the second set of data to generate an interpreted output indicating a gesture performed by a person; and, based on the interpreted output, generate a machine instruction.
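A compact sketch of the gating described above, with an invented signal-energy criterion standing in for the unspecified action criteria and a stub in place of the trained classifier:

```python
# Stay in a monitoring state and only forward data windows to the
# (expensive) classifier when the action criteria are met.
def meets_action_criteria(window: list[float]) -> bool:
    return sum(x * x for x in window) > 0.5  # illustrative threshold

def classify(window: list[float]) -> str:
    return "fist_clench"  # stand-in for the trained gesture classifier

def monitor(stream) -> None:
    for window in stream:            # monitoring state: receive data
        if not meets_action_criteria(window):
            continue                 # return to monitoring; nothing sent
        gesture = classify(window)   # only qualifying data reaches here
        print(f"machine instruction for gesture: {gesture}")

monitor([[0.01] * 50, [0.9, -0.8, 0.7] * 20])
```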
INTERFACE DISPLAY METHOD AND APPARATUS OF APPLICATION, DEVICE, AND MEDIUM
This application discloses a method of displaying information in a program interface of an application, performed by a computer device. The method includes: displaying a virtual keyboard control and an extension bar control in the program interface; in response to an input operation in the virtual keyboard control, displaying at least one character string in the extension bar control, the at least one character string being determined according to the input operation; and in response to a selection operation on a target string among the at least one character string in the extension bar control, displaying a function interface for applying a target function to the target string. This allows a user to quickly switch between function interfaces when using an application, reducing the user's operation steps and improving human-computer interaction efficiency.
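A toy sketch of the interaction flow, with invented function names and candidate-string logic:

```python
# Keyboard input populates the extension bar; selecting a string there
# opens a function interface for it. The "translate" function and the
# candidate-generation rule are illustrative.
extension_bar: list[str] = []

def on_keyboard_input(text: str) -> None:
    # Determine candidate strings from the input operation.
    extension_bar.clear()
    extension_bar.extend([text, text.upper()])
    print(f"extension bar shows: {extension_bar}")

def on_select(index: int, function: str) -> None:
    target = extension_bar[index]
    print(f"open {function} interface for: {target!r}")

on_keyboard_input("hello")
on_select(0, "translate")   # applies the target function to the string
```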