
Information processing device and information processing method
11513768 · 2022-11-29 · ·

An information processing device is provided that includes a specifying unit configured to specify, based on a user's speech, a selected spot intended by the user within displayed visual information, wherein the specifying unit is configured to specify the selected spot based on a non-verbal action and a verbal action of the user. Also provided is an information processing method that includes specifying, by a processor and based on a user's speech, a selected spot intended by the user within displayed visual information, wherein the specifying includes specifying the selected spot based on a non-verbal action and a verbal action of the user.
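
The combination of a non-verbal action (e.g., a pointing coordinate) with a verbal action (spoken keywords) could be sketched as below. The item labels, the distance normalization, and the keyword bonus are illustrative assumptions, not details from the abstract.

```python
import math

# Hypothetical sketch: score each displayed item by proximity to a pointing
# coordinate (non-verbal action) plus a bonus when its label matches a
# spoken keyword (verbal action), then return the best-scoring item.
def specify_selected_spot(items, point_xy, spoken_keywords,
                          max_distance=100.0, keyword_bonus=0.5):
    best_item, best_score = None, float("-inf")
    for item in items:
        distance = math.hypot(item["x"] - point_xy[0], item["y"] - point_xy[1])
        # Closer items score higher; normalize distance into [0, 1].
        score = max(0.0, 1.0 - distance / max_distance)
        # Verbal channel: boost items whose label contains a spoken keyword.
        if any(kw in item["label"] for kw in spoken_keywords):
            score += keyword_bonus
        if score > best_score:
            best_item, best_score = item, score
    return best_item

items = [
    {"label": "red button", "x": 10.0, "y": 10.0},
    {"label": "blue button", "x": 90.0, "y": 10.0},
]
picked = specify_selected_spot(items, (80.0, 12.0), ["blue"])
```

Either channel alone can be ambiguous (two items near the pointing spot, or two items matching the keyword); summing the two scores is one simple way to let each disambiguate the other.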

Control system using in-vehicle gesture input
11514687 · 2022-11-29 · ·

Provided is a control system using an in-vehicle gesture input, and more particularly, a system that receives a vehicle occupant's gesture and controls the execution of vehicle functions. The control system using an in-vehicle gesture input includes an input unit configured to receive a user's gesture, a memory configured to store a control program using an in-vehicle gesture input, and a processor configured to execute the control program. The processor performs information display control for areas layered in a windshield screen according to the user's gesture.

CONTEXTUAL VISUAL AND VOICE SEARCH FROM ELECTRONIC EYEWEAR DEVICE

Augmented reality features are selected for presentation to a display of an electronic eyewear device by using a camera of the electronic eyewear device to capture a scan image and processing the scan image to extract contextual signals. Simultaneously, voice data from the user is captured by a microphone of the electronic eyewear device and voice-to-text conversion of the captured voice data is performed to identify keywords in the voice data. The extracted contextual signals and the identified keywords are then used to select at least one augmented reality feature that matches the extracted contextual signals and the identified keywords, and the selected augmented reality feature is presented to the display for user selection. The contextual information thus refines the search results to provide the augmented reality feature best suited for the context of the scan image captured by the electronic eyewear device.
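
The matching step could be sketched as a ranking over candidate features, scored by overlap with both the contextual signals and the recognized keywords. The feature names, tag model, and additive scoring are assumptions for demonstration only.

```python
# Illustrative sketch: rank candidate AR features by how many of their tags
# overlap with (a) contextual signals extracted from the scan image and
# (b) keywords from voice-to-text conversion of the captured voice data.
def select_ar_features(features, context_signals, keywords, top_k=1):
    context_set, keyword_set = set(context_signals), set(keywords)

    def score(feature):
        tags = set(feature["tags"])
        # Features matching both channels accumulate a higher total score.
        return len(tags & context_set) + len(tags & keyword_set)

    ranked = sorted(features, key=score, reverse=True)
    return ranked[:top_k]

features = [
    {"name": "beach_filter", "tags": ["beach", "ocean", "sun"]},
    {"name": "city_filter", "tags": ["building", "street"]},
]
best = select_ar_features(features, ["beach", "sun"], ["ocean"])
```

Scoring the two channels jointly is what "refines the search results" here: a keyword match that contradicts the visual context loses to a feature supported by both.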

ELECTRONIC BILLBOARD AND CONTROLLING METHOD THEREOF
20220374188 · 2022-11-24 · ·

An electronic billboard and a controlling method thereof are disclosed, wherein the electronic billboard includes plural display areas, a user interaction tracker, and a data flow manager. Each of the display areas forms a telecommunication connection with one of plural video delivery devices and displays the image data provided by that video delivery device. The user interaction tracker calculates interaction indexes between at least one user and each of the display areas. The data flow manager regulates the flows of the image data delivered to the display areas according to the interaction indexes.
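
One simple way the data flow manager could regulate flows is to split a total bandwidth budget among display areas in proportion to their interaction indexes. The proportional-allocation rule is an assumed illustration, not the patent's stated method.

```python
# Hypothetical data flow manager: areas that users interact with more
# receive a proportionally larger share of the image-data bandwidth.
def allocate_flows(interaction_indexes, total_bandwidth):
    total = sum(interaction_indexes)
    if total == 0:
        # No interaction anywhere: fall back to an even split.
        share = total_bandwidth / len(interaction_indexes)
        return [share] * len(interaction_indexes)
    return [total_bandwidth * idx / total for idx in interaction_indexes]

# Three display areas with interaction indexes 3, 1, and 0.
flows = allocate_flows([3.0, 1.0, 0.0], 100.0)
```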

MULTITOUCH DATA FUSION
20230055434 · 2023-02-23 ·

A method for performing multi-touch (MT) data fusion is disclosed in which multiple touch inputs occurring at about the same time are received to generate first touch data. Secondary sense data can then be combined with the first touch data to perform operations on an electronic device. The first touch data and the secondary sense data can be time-aligned and interpreted in a time-coherent manner. The first touch data can be refined in accordance with the secondary sense data, or alternatively, the secondary sense data can be interpreted in accordance with the first touch data. Additionally, the first touch data and the secondary sense data can be combined to create a new command.
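
The time-alignment and refinement steps could be sketched as below, using an accelerometer as the secondary sensor: each touch sample is paired with the nearest-in-time motion reading, and touches recorded during heavy device motion are discarded. The sensor choice, alignment tolerance, and motion threshold are all assumptions.

```python
# Illustrative sketch of time-aligned fusion: refine first touch data by
# dropping touch points that coincide with large device motion (secondary
# sense data), interpreting the two streams in a time-coherent manner.
def fuse_touch_with_motion(touch_samples, motion_samples,
                           tolerance=0.05, motion_threshold=1.0):
    fused = []
    for t, point in touch_samples:
        # Pair this touch with the motion reading closest in time.
        nearest_t, magnitude = min(motion_samples, key=lambda m: abs(m[0] - t))
        # Keep only touches that are time-coherent with low device motion.
        if abs(nearest_t - t) <= tolerance and magnitude < motion_threshold:
            fused.append((t, point))
    return fused

touches = [(0.0, (5, 5)), (0.1, (6, 5)), (0.2, (7, 5))]   # (time, (x, y))
motion = [(0.0, 0.2), (0.1, 2.5), (0.2, 0.1)]             # (time, |accel|)
kept = fuse_touch_with_motion(touches, motion)
```

The inverse refinement the abstract mentions would swap the roles: touch data could gate or reinterpret the secondary stream instead.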

INVOKING AUTOMATED ASSISTANT FUNCTION(S) BASED ON DETECTED GESTURE AND GAZE
20230053873 · 2023-02-23 ·

One or more previously dormant functions of an automated assistant are invoked in response to detecting, based on processing of vision data from one or more vision components: (1) a particular gesture (e.g., one of one or more “invocation gestures”) of a user; and/or (2) that a gaze of the user is directed at an assistant device that provides an automated assistant interface (graphical and/or audible) of the automated assistant. For example, the previously dormant function(s) can be invoked in response to detecting the particular gesture, detecting that the user's gaze is directed at the assistant device for at least a threshold amount of time, and optionally that the particular gesture and the directed gaze co-occur or occur within a threshold temporal proximity of one another.
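
The invocation condition in the example above could be sketched as a predicate over event timestamps: a minimum gaze duration, plus co-occurrence or temporal proximity of gesture and gaze. The specific threshold values are assumptions.

```python
# Minimal sketch of the invocation check: wake the assistant only when the
# gaze lasted at least min_gaze_duration AND the invocation gesture either
# co-occurred with the gaze or fell within a temporal proximity window.
def should_invoke(gesture_time, gaze_start, gaze_end,
                  min_gaze_duration=0.5, proximity_window=1.0):
    if gaze_end - gaze_start < min_gaze_duration:
        return False  # gaze did not meet the threshold amount of time
    if gaze_start <= gesture_time <= gaze_end:
        return True   # gesture and directed gaze co-occur
    # Otherwise, require the gesture within the proximity window of the gaze.
    gap = min(abs(gesture_time - gaze_start), abs(gesture_time - gaze_end))
    return gap <= proximity_window

# Gesture 0.2 s after a 0.8 s gaze: within the proximity window, so invoke.
ok = should_invoke(gesture_time=2.0, gaze_start=1.0, gaze_end=1.8)
```

Requiring both signals (rather than either alone) is what lets the assistant stay dormant through incidental gestures or glances.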

Vehicle Remote Control Method and Vehicle Remote Control Device
20220365527 · 2022-11-17 · ·

When a subject vehicle having an autonomous travel control function is remotely operated with a remote operation device, detected coordinate information indicating a temporal transition in the coordinates of a gesture detected by a touch panel of the remote operation device is acquired, and the amount of a physical change occurring on the remote operation device is detected to acquire operation device transition information indicating a temporal transition in that change amount. The frequency characteristics of the detected coordinate information are then compared with the frequency characteristics of the operation device transition information to determine whether there is a correlation, and when there is, the subject vehicle is controlled to execute autonomous travel control.
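
The frequency-characteristic comparison could be sketched as below: take the magnitude spectrum of each trace and check how strongly the spectra correlate. The DFT-plus-Pearson approach and the 0.9 threshold are assumptions, not the patent's stated procedure.

```python
import cmath

# Illustrative sketch: compare the frequency content of the touch-coordinate
# trace against the device-motion trace, and allow autonomous travel control
# only when their magnitude spectra correlate strongly.
def magnitude_spectrum(samples):
    n = len(samples)
    return [abs(sum(samples[k] * cmath.exp(-2j * cmath.pi * f * k / n)
                    for k in range(n)))
            for f in range(n // 2)]  # keep the non-redundant half

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def gestures_correlate(coord_trace, motion_trace, threshold=0.9):
    return pearson(magnitude_spectrum(coord_trace),
                   magnitude_spectrum(motion_trace)) >= threshold

# A trace and a scaled copy share frequency content, so they correlate.
trace = [0.0, 1.0, 0.0, -1.0] * 4
ok = gestures_correlate(trace, [2.0 * x for x in trace])
```

The point of comparing in the frequency domain is that a genuine hand gesture shakes the device at the same rhythm as the touch trace, whereas a spoofed or replayed input would not.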

AI solution selection for an automated robotic process

A method for selecting an AI solution for an automated robotic process includes receiving at least one functional media including information indicative of brain activity of a human engaged in a task of interest; analyzing the functional media; identifying an activity level in at least one brain region; identifying a brain region parameter and an activity parameter; identifying an action parameter based in part on the brain region parameter or the activity parameter; and selecting a component of the AI solution based in part on the brain region parameter, the activity parameter, or the action parameter.
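
The final selection step could be sketched as a lookup from the identified parameters to a solution component. The region-to-component table below is an invented illustration, not taken from the patent.

```python
# Speculative sketch: map a brain region parameter and activity parameter
# to a component of the AI solution. The table entries are hypothetical.
REGION_TO_COMPONENT = {
    ("visual_cortex", "high"): "vision_model",
    ("motor_cortex", "high"): "motion_planner",
}

def select_component(brain_region, activity_level):
    # Derive the selection from the two parameters (here, a simple lookup).
    return REGION_TO_COMPONENT.get((brain_region, activity_level),
                                   "default_component")

component = select_component("motor_cortex", "high")
```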

CONTROL SYSTEM AND METHOD USING IN-VEHICLE GESTURE INPUT

Provided are a control system and method using an in-vehicle gesture input, and more particularly, a system that receives an occupant's gesture and controls the execution of vehicle functions. The control system using an in-vehicle gesture input includes an input unit configured to receive a user's gesture, a memory configured to store a control program using an in-vehicle gesture input, and a processor configured to execute the control program. The processor transmits a command for executing the function that corresponds to a gesture according to a usage pattern.
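
Resolving a gesture to a function "according to a usage pattern" could be sketched as below: when a gesture maps to several candidate functions, pick the one the occupant has executed most often. The function names and frequency-count model are illustrative assumptions.

```python
# Hypothetical sketch: disambiguate a gesture using the occupant's usage
# pattern, modeled here as per-function historical execution counts.
def resolve_command(gesture, gesture_map, usage_counts):
    candidates = gesture_map.get(gesture, [])
    if not candidates:
        return None
    # Prefer the candidate function with the highest usage count.
    return max(candidates, key=lambda fn: usage_counts.get(fn, 0))

gesture_map = {"swipe_left": ["next_track", "decline_call"]}
usage = {"next_track": 42, "decline_call": 3}
command = resolve_command("swipe_left", gesture_map, usage)
```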

CONTROL SYSTEM USING IN-VEHICLE GESTURE INPUT
20230045996 · 2023-02-16 · ·

Provided is a control system using an in-vehicle gesture input, and more particularly, a system that receives a vehicle occupant's gesture and controls the execution of vehicle functions. The control system using an in-vehicle gesture input includes an input unit configured to receive a user's gesture, a memory configured to store a control program using an in-vehicle gesture input, and a processor configured to execute the control program. The processor performs information display control for areas layered in a windshield screen according to the user's gesture.
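
The layered display control could be sketched as a small focus model: a push gesture moves focus deeper into the windshield layer stack, a pull gesture moves it back, and the focused layer is the one displayed. The layer names and gesture labels are assumptions for demonstration.

```python
# Illustrative sketch of information display control for layered areas in a
# windshield screen, driven by push/pull gestures that shift layer focus.
class WindshieldDisplay:
    def __init__(self, layers):
        self.layers = layers   # ordered front-to-back
        self.focus = 0         # index of the currently displayed layer

    def handle_gesture(self, gesture):
        if gesture == "push" and self.focus < len(self.layers) - 1:
            self.focus += 1    # move deeper into the layer stack
        elif gesture == "pull" and self.focus > 0:
            self.focus -= 1    # move back toward the front layer
        return self.layers[self.focus]

display = WindshieldDisplay(["navigation", "media", "vehicle_status"])
active = display.handle_gesture("push")   # focus moves to the second layer
```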