
TOUCH CONTROL SYSTEM AND SENSING METHOD THEREOF AND ACTIVE PEN
20230004233 · 2023-01-05

A touch control system includes: a touch panel; an active pen having a plurality of functions, the functions being used for controlling the active pen or the touch panel and initiated only by at least one voice signal, the active pen including: a voice receiving module configured to receive the at least one voice signal; a voice analyzing module configured to analyze the at least one voice signal to generate a controlling command; and a control module configured to determine whether the controlling command is configured to control the active pen or the touch panel; and a touch controller electrically connected to the touch panel and configured to receive the controlling command in response to the controlling command being configured to control the touch panel.
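
The claimed flow is: a voice signal is received, analyzed into a controlling command, and the command is then routed either to the pen itself or to the touch controller. A minimal Python sketch of that routing step; the command names and the keyword-based analyzer are hypothetical, since the abstract does not specify them:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Target(Enum):
    PEN = auto()          # command controls the active pen itself
    TOUCH_PANEL = auto()  # command is forwarded to the touch controller

@dataclass
class ControllingCommand:
    name: str
    target: Target

# Hypothetical phrase table; the abstract does not enumerate commands.
_COMMANDS = {
    "change color": ControllingCommand("set_ink_color", Target.PEN),
    "scroll down": ControllingCommand("scroll_down", Target.TOUCH_PANEL),
}

def analyze_voice(signal_text: str) -> ControllingCommand | None:
    """Voice analyzing module: map a recognized phrase to a command."""
    return _COMMANDS.get(signal_text.lower().strip())

def dispatch(command: ControllingCommand) -> str:
    """Control module: decide whether the command stays on the pen
    or is sent to the touch controller."""
    if command.target is Target.TOUCH_PANEL:
        return f"touch controller <- {command.name}"
    return f"active pen executes {command.name}"

if __name__ == "__main__":
    cmd = analyze_voice("Scroll down")
    if cmd:
        print(dispatch(cmd))  # touch controller <- scroll_down
```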

Intent detection with a computing device

A method includes capturing an image, determining an environment in which a user is operating a computing device, detecting a hand gesture based on an object in the image, determining, using a machine-learned model, an intent of the user based on the hand gesture and the environment, and executing a task based at least on the determined intent.
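
The method is a straight pipeline: image → environment + gesture → model-predicted intent → task. A compact sketch of that flow with stand-in stubs for each stage (all function names below are hypothetical):

```python
from typing import Any, Callable

def detect_intent(
    capture_image: Callable[[], Any],
    classify_environment: Callable[[Any], str],
    detect_hand_gesture: Callable[[Any], str],
    intent_model: Callable[[str, str], str],
    tasks: dict[str, Callable[[], None]],
) -> None:
    """End-to-end flow from the abstract: capture an image, derive
    environment and gesture, predict intent, execute the task."""
    image = capture_image()
    environment = classify_environment(image)    # e.g. "desk", "driving"
    gesture = detect_hand_gesture(image)         # e.g. "thumbs_up"
    intent = intent_model(gesture, environment)  # machine-learned model
    task = tasks.get(intent)
    if task:
        task()

# Minimal stubs to exercise the flow:
if __name__ == "__main__":
    detect_intent(
        capture_image=lambda: "frame-0",
        classify_environment=lambda img: "desk",
        detect_hand_gesture=lambda img: "thumbs_up",
        intent_model=lambda g, e: "confirm" if g == "thumbs_up" else "ignore",
        tasks={"confirm": lambda: print("task executed: confirm")},
    )
```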

VIRTUAL AND AUGMENTED REALITY INSTRUCTION SYSTEM
20220415197 · 2022-12-29

A virtual and augmented reality instruction system may include a complete format and a portable format. The complete format may include a board system to capture all movement (including writing and erasing) on the board's surface, and a tracking system to capture all physical movements. The portable format may include a touch-enabled device or digital pen and a microphone, and is designed to capture a subset of the data captured by the complete format. In one embodiment of the complete format, the board system and the tracking system can communicate with each other through a network, and control devices (such as a laptop, desktop, mobile phone, and tablet) can be used to control the board system and tracking system through the network. In further embodiments of the complete format, augmented reality can be achieved within the tracking system through the combination of 3D sensors and see-through augmented reality glasses.
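
Both formats reduce to emitting capture events from whichever devices are present, with the portable format simply registering fewer sources. A rough sketch of that shared event model; the `CaptureEvent`/`SessionRecorder` structure is an assumption, not specified in the abstract:

```python
from dataclasses import dataclass, field
import time

@dataclass
class CaptureEvent:
    """One unit of captured data; both formats emit these."""
    source: str      # "board", "tracker", "pen", "microphone"
    kind: str        # "write", "erase", "movement", "audio"
    payload: dict
    timestamp: float = field(default_factory=time.time)

class SessionRecorder:
    """Collects events from the registered capture sources, so the
    portable format differs only in which sources it registers."""
    def __init__(self, sources: set[str]):
        self.sources = sources
        self.events: list[CaptureEvent] = []

    def record(self, event: CaptureEvent) -> None:
        if event.source in self.sources:
            self.events.append(event)

# Complete format: board + tracking; portable: pen + microphone.
complete = SessionRecorder({"board", "tracker"})
portable = SessionRecorder({"pen", "microphone"})

portable.record(CaptureEvent("pen", "write", {"stroke": [(0, 0), (1, 1)]}))
portable.record(CaptureEvent("board", "erase", {}))  # ignored: no board source
print(len(portable.events))  # 1
```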

Deep neural network training for application program generation
11537871 · 2022-12-27

A computer architecture may comprise a processor, a memory, and a differential memory subsystem (DMS). A learning engine is stored on the memory and configured to present data to an expert user, to receive user sensory input measuring reactions related to the presented data, and to create an attention map based thereon. The attention map is indicative of portions of the presented data on which the expert user focused. The learning engine is configured to annotate the attention map with natural language input labels and to train a neural network based on the user sensory input. The learning engine is configured to create a model based on the trained neural network, to provide an application program for an output target, and to instruct the output target via the application program to detect and remedy anomalous activity. The DMS is physically separate and configured for experimental data processing functions.
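
One plausible reading of the attention-map step is accumulating the expert's gaze fixations over the presented data and then attaching the language labels to focused regions. A sketch under that assumption; the grid size, sample format, and labels are all hypothetical:

```python
import numpy as np

def build_attention_map(
    gaze_samples: np.ndarray,  # (N, 2) normalized (x, y) fixations
    grid: tuple[int, int] = (8, 8),
) -> np.ndarray:
    """Accumulate gaze fixations into a coarse attention map that
    indicates which portions of the presented data drew focus."""
    h, w = grid
    attn = np.zeros(grid)
    for x, y in gaze_samples:
        attn[min(int(y * h), h - 1), min(int(x * w), w - 1)] += 1
    total = attn.sum()
    return attn / total if total else attn

def annotate(attn: np.ndarray, labels: dict[tuple[int, int], str]) -> dict:
    """Attach natural-language labels to attention-map cells."""
    return {cell: {"weight": float(attn[cell]), "label": text}
            for cell, text in labels.items()}

samples = np.array([[0.10, 0.10], [0.12, 0.08], [0.90, 0.90]])
attn = build_attention_map(samples)
print(annotate(attn, {(0, 0): "header region the expert focused on"}))
```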

Developer and runtime environments supporting multi-input modalities

Developer and runtime environments supporting multi-modal input for computing systems are disclosed. The developer environment includes a gesture library of human body gestures (e.g., hand gestures) that a previously-trained, system-level gesture recognition machine is configured to recognize. The developer environment further includes a user interface for linking a gesture of the gesture library with a semantic descriptor that is assigned to a function of the application program. The application program is executable to implement the function responsive to receiving an indication of the gesture recognized by the gesture recognition machine within image data captured by a camera. The semantic descriptor may be additionally linked to a different input modality than the gesture, such as a natural language input.
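
The linkage described, gesture ↔ semantic descriptor ↔ application function, with a second modality bound to the same descriptor, can be sketched as a small routing table. All class and method names below are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SemanticDescriptor:
    """Binds a named application function to one or more input
    modalities (a gesture from the library, a spoken phrase, ...)."""
    name: str
    function: Callable[[], None]
    gestures: set[str] = field(default_factory=set)
    phrases: set[str] = field(default_factory=set)

class InputRouter:
    def __init__(self) -> None:
        self._by_gesture: dict[str, SemanticDescriptor] = {}
        self._by_phrase: dict[str, SemanticDescriptor] = {}

    def link(self, desc: SemanticDescriptor) -> None:
        for g in desc.gestures:
            self._by_gesture[g] = desc
        for p in desc.phrases:
            self._by_phrase[p] = desc

    def on_gesture(self, gesture: str) -> None:
        # Called when the system-level recognizer reports a gesture
        # it found in the camera's image data.
        if desc := self._by_gesture.get(gesture):
            desc.function()

router = InputRouter()
router.link(SemanticDescriptor(
    name="next_slide",
    function=lambda: print("advancing slide"),
    gestures={"swipe_left"},   # gesture from the library
    phrases={"next slide"},    # alternate natural-language modality
))
router.on_gesture("swipe_left")  # advancing slide
```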

Wireless input system

A wireless input system includes a first computer, a second computer, a first-mode wireless connection device, a first input device and a second input device. When the first-mode wireless connection between the first input device and the first-mode wireless connection device is established and the first-mode wireless connection between the second input device and the first-mode wireless connection device is established, the first input device and the second input device can be operated to control the first computer. When the second-mode wireless connection between the first input device and the second computer is established, the second input device can follow the first input device to perform a connection switching operation. Consequently, the second-mode wireless connection between the second input device and the second computer is established.
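
The switching behavior amounts to: a follower device repeats whatever connection switch the lead device performs. A toy Python sketch of that behavior; the device classes and mode labels are invented for illustration:

```python
class InputDevice:
    """Toy model of an input device that can link either through a
    shared receiver (first mode) or directly to a computer (second
    mode), with follower devices repeating its switches."""

    def __init__(self, name: str):
        self.name = name
        self.link: tuple[str, str] | None = None
        self.followers: list["InputDevice"] = []

    def connect_mode1(self, receiver: str) -> None:
        # First-mode link via the shared wireless connection device.
        self.link = ("mode1", receiver)

    def connect_mode2(self, computer: str) -> None:
        # Second-mode link directly to a computer; follower devices
        # perform the same connection switching operation.
        self.link = ("mode2", computer)
        for dev in self.followers:
            dev.connect_mode2(computer)

mouse = InputDevice("mouse")
keyboard = InputDevice("keyboard")
mouse.followers.append(keyboard)    # keyboard follows the mouse

mouse.connect_mode1("receiver-A")   # both usable on the first computer
keyboard.connect_mode1("receiver-A")
mouse.connect_mode2("computer-2")   # keyboard switches along with it
print(keyboard.link)                # ('mode2', 'computer-2')
```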

METHOD, APPARATUS, AND COMPUTER PROGRAM FOR TOUCH STABILIZATION
20220397975 · 2022-12-15

Embodiments relate to a method, apparatus, and computer program for stabilizing a user's interaction with a touchscreen in a vehicle. The method comprises populating an interface of the touchscreen display with a plurality of elements. Each element of the plurality comprises an active area for registering a touch interaction by the user. The method further comprises determining a focus area of the user on the interface and comparing the focus area with the active areas of the plurality of elements to determine a focused set comprising at least one element that exceeds a likely selection threshold. The method continues by adjusting the active areas of the plurality of elements to reduce the likely selection threshold of at least one element in the focused set.
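
A plausible concrete form of the compare-and-adjust step: score each element's active area against the focus area, then enlarge the elements above the threshold. The circular focus model and the scoring function below are assumptions, not taken from the application:

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    x: float
    y: float
    w: float
    h: float

def focus_score(el: Element, fx: float, fy: float, fr: float) -> float:
    """Crude score for how much the user's focus area (a circle at
    (fx, fy) with radius fr) covers the element's active area."""
    cx, cy = el.x + el.w / 2, el.y + el.h / 2
    d = ((cx - fx) ** 2 + (cy - fy) ** 2) ** 0.5
    return max(0.0, 1.0 - d / fr)

def stabilize(elements: list[Element], fx: float, fy: float,
              fr: float = 0.2, threshold: float = 0.5,
              grow: float = 1.3) -> list[Element]:
    """Enlarge the active areas of the focused set, i.e. elements
    the user is likely to select, making them easier to hit."""
    focused = [e for e in elements if focus_score(e, fx, fy, fr) > threshold]
    for e in focused:
        e.w *= grow
        e.h *= grow
    return focused

buttons = [Element("volume", 0.1, 0.1, 0.1, 0.1),
           Element("nav", 0.7, 0.7, 0.1, 0.1)]
print([e.name for e in stabilize(buttons, fx=0.16, fy=0.16)])  # ['volume']
```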

System and method for adapting graphical user interfaces to real-time user metrics

The invention concerns a software-based system for computer-aided design (CAD) that includes user interface tailoring and methods for continuously evaluating the learning progress of the user and increasing work productivity by searching for patterns in the user input to predict the goal of user actions and propose the next action to reach that goal in an optimal way. Components of the presented invention relate to collecting different user inputs, including at least eye tracking and features related to user focus and attention; continuously analyzing the user's behavior to evaluate learning progress and work productivity related to the computer-aided design tool; monitoring which user interface components the user uses; searching for patterns in user behavior; and tailoring user interface controls to maximize work productivity while increasing the user's qualification profile. The core of the invention comprises gaze tracking as an input component for better tracking of user activity and performance, a component for feature extraction and fusion of different types of user input, a continuously monitored user qualification profile, and two classifiers that decide on the user interface complexity level and the set of most relevant graphical user interface controls for the next user action.
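
The two-classifier decision at the core can be sketched as a function that takes the fused features and returns a complexity level plus a control set. The stub classifiers below stand in for the trained models and are purely illustrative:

```python
from typing import Callable, Sequence

def adapt_interface(
    features: dict[str, float],  # fused gaze + input features
    complexity_clf: Callable[[dict], str],
    controls_clf: Callable[[dict], Sequence[str]],
) -> dict:
    """Two-classifier decision: one picks the UI complexity level,
    the other the most relevant controls for the next user action."""
    return {
        "complexity": complexity_clf(features),
        "controls": list(controls_clf(features)),
    }

# Stub classifiers standing in for the trained models:
def by_qualification(f: dict) -> str:
    return "simple" if f.get("qualification", 0.0) < 0.5 else "expert"

def likely_next_controls(f: dict) -> list[str]:
    return ["extrude", "fillet"] if f.get("sketch_done") else ["sketch"]

print(adapt_interface(
    {"qualification": 0.3, "sketch_done": 1.0},
    complexity_clf=by_qualification,
    controls_clf=likely_next_controls,
))
# -> {'complexity': 'simple', 'controls': ['extrude', 'fillet']}
```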

METHOD AND SYSTEM FOR VIRTUAL ASSISTANT DECISION MAPPING AND PRIORITIZATION

Aspects of the subject disclosure may include, for example, obtaining information relating to a context associated with a user, monitoring, based on the obtained information, a behavior of the user relative to the context, determining to provide assistance to the user based on the monitoring of the behavior, responsive to the determining to provide assistance, identifying an assistive action, wherein the identifying of the assistive action is based on a predefined threshold, and, based on the identified assistive action, performing the assistive action for the user. Other embodiments are disclosed.
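
The decision logic reduces to: monitor behavior, decide whether to assist, then pick an action against a predefined threshold. A minimal sketch with an invented relevance score and threshold semantics (the abstract defines neither):

```python
from dataclasses import dataclass

@dataclass
class AssistiveAction:
    name: str
    relevance: float  # scored against the monitored behavior

def choose_action(behavior_score: float,
                  candidates: list[AssistiveAction],
                  threshold: float = 0.7) -> AssistiveAction | None:
    """Decide whether to assist at all (based on the monitored
    behavior), then pick the highest-relevance action that clears
    the predefined threshold."""
    if behavior_score < threshold:
        return None  # user appears to be coping; do not interrupt
    eligible = [a for a in candidates if a.relevance >= threshold]
    return max(eligible, key=lambda a: a.relevance, default=None)

actions = [AssistiveAction("suggest_route", 0.9),
           AssistiveAction("read_message", 0.6)]
print(choose_action(behavior_score=0.8, candidates=actions))
# AssistiveAction(name='suggest_route', relevance=0.9)
```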

Apparatus, method and recording medium for controlling user interface using input image

A method of controlling a user interface using an input image is provided. The method includes storing operation-executing information for each of one or more gesture forms according to each of a plurality of functions, detecting a gesture form from the input image, and identifying the operation-executing information mapped to the detected gesture form in order to execute an operation according to the function that is currently operating.
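
The stored mapping is naturally a per-function lookup table from gesture form to operation-executing information. A short sketch assuming a hypothetical table and a stubbed gesture detector:

```python
# Hypothetical operation-executing tables, keyed first by the
# currently operated function, then by gesture form.
OPERATION_TABLE = {
    "photo_viewer": {"pinch": "zoom_out", "spread": "zoom_in"},
    "music_player": {"swipe_left": "next_track"},
}

def detect_gesture_form(input_image) -> str:
    """Stand-in for the image-based gesture-form detector."""
    return "pinch"

def handle_input(current_function: str, input_image) -> str | None:
    """Look up the operation mapped to the detected gesture form
    under the function that is currently operating, and execute it."""
    gesture = detect_gesture_form(input_image)
    operation = OPERATION_TABLE.get(current_function, {}).get(gesture)
    if operation:
        print(f"executing {operation} in {current_function}")
    return operation

handle_input("photo_viewer", input_image=None)  # executing zoom_out ...
```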