G06F16/63

In-vehicle device and method for managing user interfaces

A method for managing user interfaces includes displaying related information of an in-vehicle device using a first user interface. Authorization to access related information of a handheld device is acquired when the handheld device is in communication with the in-vehicle device. Once the related information of the handheld device is obtained, the related information of the in-vehicle device and the related information of the handheld device are displayed using a second user interface.
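As a rough illustration, the interface-switching behavior described above might be sketched in Python as follows; all class, method, and field names here are illustrative, not taken from the patent:

```python
from typing import Optional

class InVehicleDevice:
    """Minimal sketch of the described interface switching (names illustrative)."""

    def __init__(self, vehicle_info: dict):
        self.vehicle_info = vehicle_info
        self.handheld_info: Optional[dict] = None

    def connect_handheld(self, info: dict, access_granted: bool) -> None:
        # Related information is obtained only after access authorization succeeds.
        if access_granted:
            self.handheld_info = info

    def current_interface(self) -> dict:
        if self.handheld_info is None:
            # First user interface: in-vehicle information only.
            return {"interface": "first", "items": dict(self.vehicle_info)}
        # Second user interface: in-vehicle and handheld information together.
        merged = {**self.vehicle_info, **self.handheld_info}
        return {"interface": "second", "items": merged}

device = InVehicleDevice({"radio": "FM 101.1"})
print(device.current_interface()["interface"])  # first
device.connect_handheld({"contacts": 42}, access_granted=True)
print(device.current_interface()["interface"])  # second
```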

Configuring a playlist or sequence of compositions or stream of compositions
11334619 · 2022-05-17

A method, apparatus, and system that enable a user to find and act upon a sound-containing composition in a group of compositions. One or more sound-segments, intended to prompt a user's memory, may be associated with each composition in a group of compositions. A recognition sound-segment may include a portion of its associated composition that is more recognizable to users than the beginning of that composition. A recognition-segment may contain one or more highly recognizable portions of a composition. When the user is trying to locate or select a particular composition, the recognition-segments are navigated and played back to the user based upon a user-device context or mode. When a user recognizes the desired composition from its recognition-segment, the user may initiate a control action to play back, arrange, and/or otherwise act upon the composition associated with the currently playing recognition-segment.
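The navigate-and-select flow could be sketched roughly as below; the data layout and method names are assumptions made for illustration:

```python
class RecognitionBrowser:
    """Sketch of navigating recognition-segments (structure is illustrative)."""

    def __init__(self, compositions):
        # Each entry pairs a composition title with a short segment chosen for
        # recognizability, which need not be the beginning of the track.
        self.compositions = compositions
        self.index = 0

    def play_current_segment(self) -> str:
        return self.compositions[self.index]["segment"]

    def advance(self) -> str:
        # Navigate to the next recognition-segment (wrapping around).
        self.index = (self.index + 1) % len(self.compositions)
        return self.play_current_segment()

    def select(self) -> str:
        # User recognized the segment: act upon the associated composition.
        return self.compositions[self.index]["title"]

browser = RecognitionBrowser([
    {"title": "Song A", "segment": "chorus of Song A"},
    {"title": "Song B", "segment": "hook of Song B"},
])
browser.advance()
print(browser.select())  # Song B
```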

Integrating relational database temporal tables with a distributed programming environment

Certain aspects of the present disclosure provide techniques for identifying temporal data in data streams to create a temporal database for a stream(s) application to query for temporal data. An example technique includes receiving streams of data at a streams engine and processing the streams of data according to a priority order. The streams engine identifies whether the database is a temporal database and identifies temporal data in each stream of data based on frame analysis, natural language processing techniques, metadata, and optical character recognition. Further, the streams engine generates context data corresponding to the temporal data. The streams engine generates a temporal data record based on the temporal data and context data, and the streams engine generates a reliability factor. The temporal data record and reliability factor are stored in the temporal database for a stream application to query regarding temporal information at a later point in time.
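A toy version of the record-and-query flow might look like the following; the reliability formula (fraction of extraction sources agreeing) is an assumption for illustration, not the patent's method:

```python
from dataclasses import dataclass

@dataclass
class TemporalRecord:
    value: str
    event_time: float     # stream timestamp
    context: dict
    reliability: float    # 0.0-1.0

class TemporalDB:
    """Toy temporal store; names and reliability formula are illustrative."""

    def __init__(self):
        self.records = []

    def add(self, value, event_time, context, sources_agreeing, sources_total):
        # One simple possible reliability factor: the fraction of extraction
        # sources (frame analysis, NLP, metadata, OCR) that agreed on the value.
        rec = TemporalRecord(value, event_time, context,
                             sources_agreeing / sources_total)
        self.records.append(rec)
        return rec

    def query(self, start, end, min_reliability=0.0):
        # Later lookup by a streams application: time window plus a
        # minimum-reliability threshold.
        return [r for r in self.records
                if start <= r.event_time <= end
                and r.reliability >= min_reliability]

db = TemporalDB()
db.add("door opened", 100.0, {"stream": "cam1"}, 3, 4)
db.add("door closed", 200.0, {"stream": "cam1"}, 1, 4)
hits = db.query(50.0, 150.0, min_reliability=0.5)
print([h.value for h in hits])  # ['door opened']
```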

Methods and systems for voice recognition in autonomous flight of an electric aircraft
11335203 · 2022-05-17

A system for voice recognition in autonomous flight of an electric aircraft includes a computing device communicatively connected to the electric aircraft and configured to: receive at least a voice datum from a remote device, wherein the voice datum includes at least an expression datum; generate, using a first machine-learning process, a transcription datum as a function of the at least a voice datum; extract at least a query as a function of the transcription datum; generate, using a second machine-learning process, a communication output as a function of the at least a query; and adjust a flight plan as a function of the communication output.
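The staged pipeline (transcription, query extraction, response generation, flight-plan adjustment) could be sketched like this, with the two machine-learning processes stubbed out as callables; every name and stub here is an illustrative assumption:

```python
def run_voice_pipeline(voice_datum, transcribe, extract_queries, respond,
                       flight_plan, apply_adjustment):
    """Sketch of the staged flow; the two ML processes are passed as callables."""
    transcription = transcribe(voice_datum)          # first ML process
    queries = extract_queries(transcription)
    for query in queries:
        output = respond(query)                      # second ML process
        flight_plan = apply_adjustment(flight_plan, output)
    return flight_plan

# Stub stages standing in for trained models (illustrative only).
plan = run_voice_pipeline(
    voice_datum=b"...audio...",
    transcribe=lambda audio: "divert to waypoint bravo",
    extract_queries=lambda text: [text],
    respond=lambda q: {"waypoint": "bravo"},
    flight_plan={"waypoints": ["alpha"]},
    apply_adjustment=lambda fp, out: {"waypoints": fp["waypoints"] + [out["waypoint"]]},
)
print(plan)  # {'waypoints': ['alpha', 'bravo']}
```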

Playback device queue access levels
11727134 · 2023-08-15

Based on a credential, an access level of a playback queue for a first control interface and a first subset and second subset of media items in the playback queue may be determined. Media items in the playback queue that were added via a second control interface may be included in the first subset. Media items that were added via a control interface different from the second control interface may be included in a second subset. Information may be provided which identifies the first subset of the media items in the playback queue and the second subset of the media items in the playback queue.
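The credential-gated subset split might look like the following sketch; the credential check and all identifiers are assumptions made for illustration:

```python
def queue_subsets(queue, credential, allowed_credentials, second_interface_id):
    """Sketch: derive an access level from a credential, then split the queue."""
    if credential not in allowed_credentials:
        raise PermissionError("credential does not grant queue access")
    # First subset: items added via the second control interface.
    first = [m for m in queue if m["added_via"] == second_interface_id]
    # Second subset: items added via any other control interface.
    second = [m for m in queue if m["added_via"] != second_interface_id]
    return first, second

queue = [
    {"title": "Track 1", "added_via": "app-B"},
    {"title": "Track 2", "added_via": "app-A"},
]
first, second = queue_subsets(queue, "token-1", {"token-1"}, "app-B")
print([m["title"] for m in first])   # ['Track 1']
print([m["title"] for m in second])  # ['Track 2']
```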

System and method for providing voice assistant service regarding text including anaphora
20220138427 · 2022-05-05

A system and method for providing a voice assistant service for text including an anaphor are provided. A method, performed by an electronic device, of providing a voice assistant service includes: obtaining first text generated from a first input, detecting a target word within the first text and generating common information related to the detected target word, using a first natural language understanding (NLU) model, obtaining second text generated from a second input, inputting the common information and the second text to a second NLU model, detecting an anaphor included in the second text and outputting an intent and a parameter, based on common information corresponding to the detected anaphor, using the second NLU model, and generating response information related to the intent and the parameter.
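The two-stage NLU flow could be sketched with deliberately naive stand-ins for the models; the target-word heuristic and intent logic below are toy assumptions, not the patent's NLU models:

```python
import re

def first_nlu(text):
    # First NLU stage: detect a target word in the first utterance and emit
    # "common information" about it. Toy heuristic (illustrative): take the
    # last capitalized word as the target and map the anaphor "it" to it.
    words = re.findall(r"[A-Z][a-z]+", text)
    target = words[-1] if words else None
    return {"it": target}  # anaphor -> referent map

def second_nlu(text, common_info):
    # Second NLU stage: resolve an anaphor in the second utterance using the
    # shared common information, then emit an intent and a parameter.
    for anaphor, referent in common_info.items():
        if referent and anaphor in text.split():
            text = text.replace(anaphor, referent)
    intent = "play" if "play" in text.lower() else "unknown"
    param = text.split()[-1]
    return intent, param

common = first_nlu("Find songs by Adele")
print(second_nlu("play it", common))  # ('play', 'Adele')
```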

Gesture-Based and Video Feedback Machine
20230251721 · 2023-08-10

A system and method for providing gesture-based and video-based query feedback received from a user utilize a system having a video display device, a microphone, a memory having instructions stored thereon, and a processor configured to execute the instructions to perform a method. The executed instructions cause the system to select a first set of gestures for use when interacting with the user and to determine whether the user understands the first set of gestures. When the user understands the first set of gestures, additional instructions further cause the system to: output one or more feedback queries to the user as query audio or video data; capture one or more input gestures as video data in response to the one or more feedback queries; identify the one or more input gestures within the video data; and, when the one or more gestures identified within the video data are recognized as corresponding to one or more gestures from the first set of gestures, record a query response corresponding to the recognized one or more gestures as a feedback response to the one or more feedback queries.
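The query/capture/recognize/record loop might be sketched as follows, with the camera and recognizer stubbed out as callables; all names and stubs are illustrative assumptions:

```python
def collect_feedback(gesture_set, user_understands, feedback_queries,
                     capture_video, recognize_gesture):
    """Sketch of the feedback loop; capture and recognition are callables."""
    if not user_understands:
        # The user does not understand the gesture set: no responses recorded.
        return []
    responses = []
    for query in feedback_queries:
        frames = capture_video(query)            # video of the user's response
        gesture = recognize_gesture(frames)
        if gesture in gesture_set:
            # Record the recognized gesture as the response to this query.
            responses.append({"query": query, "response": gesture})
    return responses

responses = collect_feedback(
    gesture_set={"thumbs_up", "thumbs_down"},
    user_understands=True,
    feedback_queries=["Was this helpful?"],
    capture_video=lambda q: ["frame1", "frame2"],   # stub camera
    recognize_gesture=lambda frames: "thumbs_up",   # stub recognizer
)
print(responses)  # [{'query': 'Was this helpful?', 'response': 'thumbs_up'}]
```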