G06F3/00

Cognitive composition of multi-dimensional icons

Systems and methods for cognitive composition of multi-dimensional icons and interactions are disclosed. In embodiments, a computer-implemented method comprises: generating, by a computing device, interaction logs based on received user context data; identifying, by the computing device, one or more target applications and associated scripts; automatically generating, by the computing device, a multi-dimensional icon for the one or more target applications based on the interaction logs, wherein the multi-dimensional icon comprises a geometric structure including content cells; and allocating, by the computing device, the scripts to respective content cells of the multi-dimensional icon.

Neuromuscular text entry, writing and drawing in augmented reality systems

Methods and systems for providing input to an augmented reality system or an extended reality system based, at least in part, on neuromuscular signals. The methods and systems comprise detecting, using one or more neuromuscular sensors arranged on one or more wearable devices, neuromuscular signals from a user; determining that a computerized system is in a mode configured to provide input including text to the augmented reality system; identifying based, at least in part, on the neuromuscular signals and/or information based on the neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and providing the identified input to the augmented reality system.

Deep causal learning for data storage and processing power management

Method for active data storage management to optimize use of an electronic memory. The method includes providing signal injections for data storage. The signal injections can include various types of data and sizes of data files. Response signals corresponding with the signal injections are received, and a utility of those signals is measured. Based upon the utility of the response signals, parameters relating to storage of the data are modified to optimize use of long-term high-latency passive data storage and short-term low-latency active data storage.
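The inject-measure-adapt loop can be sketched as follows. Everything here is assumed for illustration: the two tiers, their simulated latencies, and the use of negative latency as the utility signal are stand-ins for whatever utility the method actually measures.

```python
import random

# Simulated access latency (seconds) per tier -- assumed values.
TIERS = {"active": 0.001, "passive": 0.050}

def inject_and_measure(tier: str, rng: random.Random) -> float:
    """Inject a probe write into a tier and return its utility
    (here: negative observed latency, so lower latency = higher utility)."""
    latency = TIERS[tier] * rng.uniform(0.9, 1.1)
    return -latency

def choose_tier(trials: int = 20, seed: int = 0) -> str:
    """Steer future writes toward the tier with the better mean utility
    measured from the injected signals."""
    rng = random.Random(seed)
    utilities = {
        t: sum(inject_and_measure(t, rng) for _ in range(trials)) / trials
        for t in TIERS
    }
    return max(utilities, key=utilities.get)
```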

Complex system and data transfer method

In a complex system including: one or more storage systems, each including a cache and a storage controller; and one or more storage boxes including a storage medium, the storage box generates redundant data from write data received from a server and writes the write data and the redundant data to the storage medium. The storage box transmits the write data to the storage system when it is difficult to generate the redundant data or to write the write data and the redundant data to the storage medium. The storage system stores the received write data in the cache.
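The write path with its fallback can be sketched minimally. The XOR parity is a toy stand-in for real redundancy (e.g. erasure coding), and the class interfaces are assumptions, not the patent's actual components.

```python
def xor_parity(data: bytes) -> bytes:
    """Toy redundancy: one XOR parity byte over the write data."""
    p = 0
    for b in data:
        p ^= b
    return bytes([p])

class StorageSystem:
    def __init__(self):
        self.cache: list[bytes] = []
    def store_in_cache(self, data: bytes) -> None:
        self.cache.append(data)

class StorageBox:
    def __init__(self, system: StorageSystem, medium_ok: bool = True):
        self.system = system
        self.medium_ok = medium_ok      # False models a failed medium/parity path
        self.medium: list[bytes] = []
    def write(self, data: bytes) -> str:
        if self.medium_ok:
            # Normal path: write data plus generated redundant data locally.
            self.medium.extend([data, xor_parity(data)])
            return "medium"
        # Fallback: forward the write data to the storage system's cache.
        self.system.store_in_cache(data)
        return "cache"
```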

System and method for providing interlinked user interfaces corresponding to safety logic of a process control system

A system and method enable access to information included in a safety requirement specification (SRS) for a process plant controlled by a process control system. The method includes displaying, in a user interface, a cause and effect matrix (CEM) having a set of elements including a set of causes and a set of effects, wherein each of the set of causes represents a condition within the process plant and each of the set of effects represents an effect to be performed within the process plant, and wherein at least some of the causes and effects are related as cause-effect pairs whereby the corresponding effect activates in response to an occurrence of the corresponding condition. The system and method further include receiving, via the user interface, a selection of an element of the set of elements, and, in response to receiving the selection: (i) accessing, from the SRS, a set of information associated with the element, and (ii) displaying the set of information in the user interface.
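The CEM-to-SRS linkage can be sketched as a small data model with a selection handler. The cause/effect names, the SRS text, and the tag "FV-101" are hypothetical examples, not contents of any real specification.

```python
# Cause-effect pairs: True where the effect activates in response to the cause.
cem = {
    ("high_pressure", "close_valve"): True,
    ("low_level", "start_pump"): True,
}

# SRS entries keyed by CEM element (hypothetical example content).
srs = {
    "high_pressure": "Cause: reactor pressure above limit (SIL 2)",
    "close_valve": "Effect: close feed valve FV-101 within 2 s",
}

def on_select(element: str) -> str:
    """Handle a user selecting a CEM element: access the associated
    SRS information and return it for display."""
    return srs.get(element, "No SRS entry for this element")
```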

Electronic device and control method thereof

Disclosed is an electronic device. The electronic device comprises: a microphone comprising circuitry; a speaker comprising circuitry; and a processor electrically connected to the microphone and speaker, wherein the processor, when a first user's voice is input through the microphone, identifies a user who uttered the first user's voice and provides a first response sound, which is obtained by inputting the first user's voice to an artificial intelligence model learned through an artificial intelligence algorithm, through the speaker, and when a second user's voice is input through the microphone, identifies a user who uttered the second user's voice, and if the user who uttered the first user's voice is the same as the user who uttered the second user's voice, provides a second response sound, which is obtained by inputting the second user's voice and utterance history information to the artificial intelligence model, through the speaker. In particular, at least some of the methods of providing a response sound to a user's voice may use an artificial intelligence model learned in accordance with at least one of a machine learning, neural network, or deep learning algorithm.

Enhanced input using recognized gestures

A representation of a user can move with respect to a graphical user interface based on input of a user. The graphical user interface comprises a central region and interaction elements disposed outside of the central region. The interaction elements are not shown until the representation of the user is aligned with the central region. A gesture of the user is recognized, and, based on the recognized gesture, the display of the graphical user interface is altered and an application control is outputted.

Method for controlling an application employing identification of a displayed image
11561608 · 2023-01-24

An application control system and method is adapted for use with an entertainment system of a type including a display such as a monitor or TV and having display functions. A control device may be conveniently held by a user and employs an imager. The control system and method images the screen of the TV or other display to detect distinctive markers displayed on the screen. This information is transmitted to the entertainment system for control of an application or is used by the control device to control an application.

Vehicular vision system
11560092 · 2023-01-24

A vehicular vision system includes a camera disposed at a vehicle, at least one non-vision sensor disposed at the vehicle, and a display system of the vehicle that displays video images for viewing by the driver of the vehicle. Image data captured by the camera and sensor data sensed by the non-vision sensor are provided to a control of the vehicle. Responsive at least in part to processing at the control of image data captured by the camera, video images are displayed by a video display screen of the display system. The vehicular vision system determines an augmented reality overlay and the video display screen also displays the augmented reality overlay. The displayed augmented reality overlay pertains to at least one accessory of the equipped vehicle and/or is responsive at least in part to a driving condition of the equipped vehicle.