Patent classifications
G10L15/10
IDENTIFICATION OF ANOMALIES IN AIR TRAFFIC CONTROL COMMUNICATIONS
A processor may identify an anomaly in one or more communications. The processor may monitor the one or more communications for an utterance, perform natural language processing (NLP) on the utterance, and generate an understanding of the utterance using natural language understanding (NLU). The processor may detect the anomaly from the understanding of the utterance and, responsive to detecting the anomaly, execute a response.
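The monitor → NLP/NLU → detect → respond loop in the abstract might be sketched as follows. This is a toy illustration, not the patented implementation: the phraseology table, the keyword-based "NLU", and all function names are invented for this example.

```python
# Hypothetical sketch of monitoring utterances, producing an "understanding",
# detecting an anomaly, and executing a response. A real system would use a
# trained NLP/NLU pipeline rather than substring matching.

KNOWN_PHRASEOLOGY = {"cleared for takeoff", "hold short", "line up and wait"}

def parse_utterance(text):
    """Toy NLU stand-in: normalize the utterance and tag whether it
    matches any known ATC phraseology."""
    text = text.lower().strip()
    return {"text": text,
            "matches_phraseology": any(p in text for p in KNOWN_PHRASEOLOGY)}

def detect_anomaly(understanding):
    """Flag any utterance that matches no known phraseology."""
    return not understanding["matches_phraseology"]

def execute_response(understanding):
    return f"ALERT: anomalous transmission: {understanding['text']!r}"

def monitor(utterances):
    for u in utterances:
        understanding = parse_utterance(u)
        if detect_anomaly(understanding):
            yield execute_response(understanding)

alerts = list(monitor(["Cleared for takeoff runway 27",
                       "uh say again something broke"]))
```

Only the second utterance fails the phraseology check, so a single alert is produced.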
Discovering windows in temporal predicates
A method and system are provided. The method includes separating a predicate that specifies a set of events into a temporal part and a non-temporal part. The method further includes comparing the temporal part of the predicate against a predicate of a known window type. The method also includes determining whether the temporal part of the predicate matches the predicate of the known window type. The method additionally includes replacing (i) the non-temporal part of the predicate by a filter, and (ii) the temporal part of the predicate by an instance of the known window type, responsive to the temporal part of the predicate matching the predicate of the known window type. The instance is parameterized with substitutions used to match the temporal part of the predicate to the predicate of the known window type.
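The rewrite step the abstract describes (split the predicate, match the temporal part against a known window template, and on success replace it with a parameterized window instance plus a filter) could be sketched like this. The window template, the regex matcher, and every name here are hypothetical stand-ins, not the patent's representation.

```python
import re
from dataclasses import dataclass

@dataclass
class SlidingWindow:
    """A "known window type"; the matched substitution parameterizes it."""
    size_seconds: int

def match_window(temporal_part):
    """Match e.g. 'event.time > now - 60' against a sliding-window
    template; the captured number is the substitution used to
    parameterize the window instance."""
    m = re.fullmatch(r"event\.time > now - (\d+)", temporal_part.strip())
    return SlidingWindow(size_seconds=int(m.group(1))) if m else None

def rewrite(predicate):
    """predicate = (temporal_part, non_temporal_part). On a match, the
    temporal part becomes a window instance and the non-temporal part
    becomes a filter; otherwise the predicate is left unchanged."""
    temporal_part, non_temporal_part = predicate
    window = match_window(temporal_part)
    if window is not None:
        return window, ("filter", non_temporal_part)
    return None, predicate

window, flt = rewrite(("event.time > now - 60", "event.kind == 'trade'"))
```

Here the temporal part matches the sliding-window template with substitution 60, so the result is a `SlidingWindow(60)` plus a filter on the non-temporal part.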
MITIGATING VOICE FREQUENCY LOSS
Computer-implemented methods, computer program products, and computer systems for mitigating frequency loss may include one or more processors configured for receiving first audio data corresponding to unobstructed user utterances, receiving second audio data corresponding to first obstructed user utterances, generating a frequency loss (FL) model representing frequency loss between the first audio data and the second audio data, receiving third audio data corresponding to one or more second obstructed user utterances, processing the third audio data using the FL model to generate fourth audio data corresponding to a frequency-loss-mitigated version of the second obstructed user utterances, and transmitting the fourth audio data to a recipient computing device. The first obstructed user utterances are obstructed by a facemask, and the one or more second obstructed user utterances are obstructed by the facemask. The FL model may be executed as an audio plugin in a web conferencing program.
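One simple way to picture the FL model is as per-band attenuation estimated from paired unobstructed/obstructed recordings, then inverted on new obstructed audio. The toy band magnitudes and the plain per-band gain below are illustrative assumptions; a real system would operate on spectral frames of the audio.

```python
# Hedged sketch: fit a per-band gain from (unobstructed, obstructed) spectra,
# then apply it to a new obstructed utterance to mitigate the frequency loss.

def fit_fl_model(unobstructed, obstructed):
    """Per-band gain mapping obstructed magnitudes back toward the
    unobstructed reference (identity gain where a band is silent)."""
    return [u / o if o else 1.0 for u, o in zip(unobstructed, obstructed)]

def mitigate(fl_model, obstructed):
    """Apply the learned gains band by band."""
    return [g * o for g, o in zip(fl_model, obstructed)]

first  = [1.0, 1.0, 1.0, 1.0]   # unobstructed reference spectrum
second = [0.9, 0.7, 0.4, 0.2]   # facemask attenuates high bands more
model  = fit_fl_model(first, second)

third  = [0.45, 0.35, 0.2, 0.1]  # new obstructed utterance (half volume)
fourth = mitigate(model, third)  # each band restored to about 0.5
```

Because the mask's attenuation profile is the same for both obstructed utterances, applying the fitted gains flattens the new spectrum back to a uniform level.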
Automatically executing operations sequences
Method, system and product for automatic execution of operations sequences. An operations sequence, which includes a first operation immediately followed by a second operation, is obtained. The operations sequence, or a portion thereof, is automatically executed, at least by performing: in response to a determination that a first element required for performing the first operation is available for user interaction in a first state of the computing device, mimicking a user interaction with the first element to perform the first operation, thereby causing a current state of the computing device to change from the first state to a second state; and in response to a determination that a second element required for performing the second operation is available for user interaction in the second state, mimicking a user interaction with the second element to perform the second operation.
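The execution loop above can be sketched as a small state machine: each operation is performed only when its required UI element is available in the current device state, and performing it may move the device to a new state. The state table, element names, and transitions are invented purely for illustration.

```python
# Hypothetical sketch of executing an operations sequence by mimicking
# user interactions, gated on element availability in the current state.

STATES = {
    "home":   {"elements": {"compose_button"}},
    "editor": {"elements": {"send_button"}},
}
TRANSITIONS = {
    ("home", "compose_button"): "editor",   # tapping compose opens editor
    ("editor", "send_button"):  "home",     # tapping send returns home
}

def execute_sequence(sequence, state="home"):
    performed = []
    for element in sequence:
        # Only act when the required element is available in this state.
        if element not in STATES[state]["elements"]:
            raise RuntimeError(f"{element!r} not available in state {state!r}")
        performed.append(element)                       # mimic the user tap
        state = TRANSITIONS.get((state, element), state)
    return performed, state

performed, final_state = execute_sequence(["compose_button", "send_button"])
```

The first interaction changes the state from "home" to "editor", which is exactly what makes the second element available, mirroring the two-step flow in the abstract.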
Dynamic adjustment of story time special effects based on contextual data
The disclosure provides technology for enabling a computing device to provide context-sensitive special effects that supplement a text source as it is read aloud. An example method includes receiving, by a processing device, audio data comprising a spoken word of a user; analyzing contextual data associated with the user; determining a match between the audio data and data of a text source; and initiating a physical effect in response to determining the match, wherein the physical effect corresponds to the text source and is based on the contextual data.
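The match-and-trigger flow might look like the sketch below, where a spoken word (already transcribed here) is matched against the text source and the chosen effect is adjusted using contextual data. The effect table and the bedtime rule are invented assumptions, not the disclosed design.

```python
# Illustrative sketch: match a spoken word against the text source, then
# initiate a physical effect whose parameters depend on contextual data.

EFFECTS = {"thunder": {"effect": "rumble", "loud": True}}

def on_spoken_word(word, text_source_words, context):
    """Return the effect to initiate, or None if the word does not match
    the text source or has no associated effect."""
    if word not in text_source_words or word not in EFFECTS:
        return None
    effect = dict(EFFECTS[word])
    if context.get("bedtime"):        # contextual adjustment of the effect
        effect["loud"] = False
    return effect

story_words = {"the", "thunder", "rolled"}
effect = on_spoken_word("thunder", story_words, {"bedtime": True})
```

With bedtime context, the rumble effect is triggered but softened; a word absent from the text source triggers nothing.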
Onboard device, traveling state estimation method, server device, information processing method, and traveling state estimation system
An onboard device estimates a traveling state of a vehicle that may be influenced by the psychological state of a driver, based on an utterance of the driver and without the use of additional sensors. The device includes: a voice collection unit for collecting the driver's voice; a traveling state collection unit for collecting traveling state information representing a traveling state of the vehicle; a database generation unit for generating a database by associating voice information corresponding to the collected voice with the collected traveling state information; a learning unit for learning an estimation model, using as learning data pairs of the voice information and the traveling state information recorded in the generated database; and an estimation unit for estimating, based on an utterance of the driver and by using the estimation model, the traveling state of the vehicle that may be influenced by the psychological state of the driver.
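The collect → pair → learn → estimate pipeline could be sketched minimally as below. The scalar voice feature (think mean pitch) and the 1-nearest-neighbour "model" are stand-ins chosen for brevity; the actual voice information and estimation model in the device are not specified at this level of detail.

```python
# Hedged sketch: build a database of (voice feature, traveling state) pairs,
# "learn" an estimation model from it, and estimate the state of a new
# utterance. A real learning unit would fit a proper statistical model.

def build_database(voice_features, states):
    """Database generation unit: associate voice info with traveling state."""
    return list(zip(voice_features, states))

def learn_model(database):
    """Learning unit: here, a 1-nearest-neighbour lookup over the pairs."""
    def estimate(feature):
        return min(database, key=lambda pair: abs(pair[0] - feature))[1]
    return estimate

# Toy training data: mean pitch (Hz) of past utterances vs. recorded state.
db = build_database([120.0, 180.0, 240.0], ["calm", "normal", "agitated"])
estimate = learn_model(db)

state = estimate(230.0)   # a high-pitch utterance
```

The new 230 Hz utterance sits closest to the 240 Hz training pair, so the estimated traveling state is "agitated".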