Facilitating inferential sound recognition based on patterns of sound primitives
09749762 · 2017-08-29
CPC classification
G10H2250/315; G08B21/0423; G10H2210/031; G08B17/10; G08B21/182; G10H2210/301 (all under section G, PHYSICS)
International classification
G08B17/10 (PHYSICS)
Abstract
The disclosed embodiments provide a system that performs a sound-recognition operation. During operation, the system recognizes a sequence of sound primitives in an audio stream, wherein a sound primitive is associated with a semantic label comprising one or more words that describe a sound characterized by the sound primitive. Next, the system feeds the sequence of sound primitives into a finite-state automaton that recognizes events associated with sequences of sound primitives. Finally, the system feeds the recognized events into an output system that generates an output associated with the recognized events to be displayed to a user.
Claims
1. A method for performing a sound-recognition operation, comprising:
recognizing a sequence of sound primitives in an audio stream, wherein a sound primitive is associated with a semantic label comprising one or more words that describe a sound characterized by the sound primitive, wherein recognizing the sequence of sound primitives comprises:
performing a feature-detection operation on a sequence of sound samples from the audio stream to detect a set of sound features, wherein each sound feature comprises a measurable characteristic for a time window of consecutive sound samples, and wherein detecting the sound feature involves generating a coefficient indicating a likelihood that the sound feature is present in the time window;
creating a set of feature vectors from coefficients generated by the feature-detection operation, wherein each feature vector comprises a set of coefficients for sound features in the set of sound features; and
identifying the sequence of sound primitives from the sequence of feature vectors;
feeding the sequence of sound primitives into a finite-state automaton that recognizes events associated with sequences of sound primitives; and
feeding the recognized events into an output system that generates an output associated with the recognized events to be displayed to a user.
2. The method of claim 1, wherein the finite-state automaton is a non-deterministic finite-state automaton that can exist in multiple states at the same time; and wherein the non-deterministic finite-state automaton maintains a probability value for each of the multiple states that the finite-state automaton can exist in.
3. The method of claim 1, wherein feeding the sequence of sound primitives into the finite-state automaton comprises: feeding the sequence of sound primitives into a first-level finite-state automaton that recognizes first-level events from the sequence of sound primitives to generate a sequence of first-level events; feeding the sequence of first-level events into a second-level finite-state automaton that recognizes second-level events from the sequence of first-level events to generate a sequence of second-level events; and repeating the process for zero or more additional levels of finite-state automatons to generate the recognized events.
4. The method of claim 2, wherein if a probability value for a state in the non-deterministic finite-state automaton does not meet an activation-potential-related threshold value after a state-transition operation, the probability value for the state is set to zero.
5. The method of claim 3, wherein the finite-state automaton performs state-transition operations by performing computations involving one or more sequence matrices containing coefficients that define state transitions.
6. The method of claim 1, wherein the output system triggers an alert when a probability that a tracked event is occurring exceeds a threshold value.
7. A non-transitory computer-readable storage medium storing instructions that, when executed by a computer, cause the computer to perform a method for performing a sound-recognition operation, the method comprising:
recognizing a sequence of sound primitives in an audio stream, wherein a sound primitive is associated with a semantic label comprising one or more words that describe a sound characterized by the sound primitive, wherein recognizing the sequence of sound primitives comprises:
performing a feature-detection operation on a sequence of sound samples from the audio stream to detect a set of sound features, wherein each sound feature comprises a measurable characteristic for a time window of consecutive sound samples, and wherein detecting the sound feature involves generating a coefficient indicating a likelihood that the sound feature is present in the time window;
creating a set of feature vectors from coefficients generated by the feature-detection operation, wherein each feature vector comprises a set of coefficients for sound features in the set of sound features; and
identifying the sequence of sound primitives from the sequence of feature vectors;
feeding the sequence of sound primitives into a finite-state automaton that recognizes events associated with sequences of sound primitives; and
feeding the recognized events into an output system that generates an output associated with the recognized events to be displayed to a user.
8. The non-transitory computer-readable storage medium of claim 7, wherein the finite-state automaton is a non-deterministic finite-state automaton that can exist in multiple states at the same time; and wherein the non-deterministic finite-state automaton maintains a probability value for each of the multiple states that the finite-state automaton can exist in.
9. The non-transitory computer-readable storage medium of claim 7, wherein feeding the sequence of sound primitives into the finite-state automaton comprises: feeding the sequence of sound primitives into a first-level finite-state automaton that recognizes first-level events from the sequence of sound primitives to generate a sequence of first-level events; feeding the sequence of first-level events into a second-level finite-state automaton that recognizes second-level events from the sequence of first-level events to generate a sequence of second-level events; and repeating the process for zero or more additional levels of finite-state automatons to generate the recognized events.
10. The non-transitory computer-readable storage medium of claim 8, wherein if a probability value for a state in the non-deterministic finite-state automaton does not meet an activation-potential-related threshold value after a state-transition operation, the probability value for the state is set to zero.
11. The non-transitory computer-readable storage medium of claim 9, wherein the finite-state automaton performs state-transition operations by performing computations involving one or more sequence matrices containing coefficients that define state transitions.
12. The non-transitory computer-readable storage medium of claim 7, wherein the output system triggers an alert when a probability that a tracked event is occurring exceeds a threshold value.
13. A system that performs a sound-recognition operation, comprising:
at least one processor and at least one associated memory; and
a sound-recognition system that executes on the at least one processor, wherein during operation, the sound-recognition system:
recognizes a sequence of sound primitives in an audio stream, wherein a sound primitive is associated with a semantic label comprising one or more words that describe a sound characterized by the sound primitive, wherein while recognizing the sequence of sound primitives, the sound-recognition system:
performs a feature-detection operation on a sequence of sound samples from the audio stream to detect a set of sound features, wherein each sound feature comprises a measurable characteristic for a time window of consecutive sound samples, and wherein detecting the sound feature involves generating a coefficient indicating a likelihood that the sound feature is present in the time window;
creates a set of feature vectors from coefficients generated by the feature-detection operation, wherein each feature vector comprises a set of coefficients for sound features in the set of sound features; and
identifies the sequence of sound primitives from the sequence of feature vectors;
feeds the sequence of sound primitives into a finite-state automaton that recognizes events associated with sequences of sound primitives; and
feeds the recognized events into an output system that generates an output associated with the recognized events to be displayed to a user.
14. The system of claim 13, wherein the finite-state automaton is a non-deterministic finite-state automaton that can exist in multiple states at the same time; and wherein the non-deterministic finite-state automaton maintains a probability value for each of the multiple states that the finite-state automaton can exist in.
15. The system of claim 14, wherein if a probability value for a state in the non-deterministic finite-state automaton does not meet an activation-potential-related threshold value after a state-transition operation, the probability value for the state is set to zero.
16. The system of claim 15, wherein the finite-state automaton performs state-transition operations by performing computations involving one or more sequence matrices containing coefficients that define state transitions.
17. The system of claim 13, wherein while feeding the sequence of sound primitives into the finite-state automaton, the sound-recognition system: feeds the sequence of sound primitives into a first-level finite-state automaton that recognizes first-level events from the sequence of sound primitives to generate a sequence of first-level events; feeds the sequence of first-level events into a second-level finite-state automaton that recognizes second-level events from the sequence of first-level events to generate a sequence of second-level events; and repeats the process for zero or more additional levels of finite-state automatons to generate the recognized events.
18. The system of claim 13, wherein the output system triggers an alert when a probability that a tracked event is occurring exceeds a threshold value.
Description
DETAILED DESCRIPTION
(11) The following description is presented to enable any person skilled in the art to make and use the present embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present embodiments. Thus, the present embodiments are not limited to the embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein.
(12) The data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. The computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing code and/or data now known or later developed.
(13) The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium. Furthermore, the methods and processes described below can be included in hardware modules. For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), and other programmable-logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the methods and processes included within the hardware modules.
(14) Overview
(15) The objective of sound-recognition systems is to provide humans with relevant information extracted from sounds. People recognize sounds as belonging to specific categories, such as sounds associated with a car, sounds associated with a baby crying, or sounds associated with shattering glass. However, a car can produce a wide variety of sounds that a person can recognize as falling into the car category. This is because a person typically has experienced sounds related to cars for many years, and all of these sounds have been incorporated into a semantic category associated with the concept of a car.
(16) At present, a sound category such as “car” does not make sense to a computer system. This is because a category for the concept of “car” is not actually a category associated with lower-level sound characteristics, but is in fact a “semantic category” that is associated with the activity of operating a car. In this example, the sound-recognition process is actually the process of identifying an “activity” associated with one or more sounds.
(17) When a computer system processes an audio signal, the computer system can group similar sounds into categories based on patterns contained in the audio signal, such as patterns related to frequencies and amplitudes of various components of the audio signal. Note that such sound categories may not make sense to people. However, the computer system can form such categories easily, and we refer to them as "sound primitives." (Note that the term "sound primitive" can refer to both machine-generated sound categories, and human-defined categories matching machine-generated sound categories.) We refer to the discrepancy between human-recognized sound categories and machine-recognized sound categories as the "human-machine semantic gap."
(18) We now describe a system that monitors an audio stream to recognize sound-related activities based on patterns of sound primitives contained in the audio stream. Note that these patterns of sound primitives can include sequences of sound primitives and also overlapping sound primitives.
(19) Computing Environment
(21) Fat edge device 120 also includes a real-time audio acquisition unit 122, which can acquire and digitize an audio signal. However, in contrast to skinny edge device 110, fat edge device 120 possesses more internal computing power, so the audio signals can be processed locally in a local meaning-extraction module 124.
(22) The output from both local meaning-extraction module 124 and cloud-based meaning-extraction module 132 feeds into an output post-processing module 134, which is also located inside cloud-based virtual device 130. This output post-processing module 134 provides an Application-Programming Interface (API) 136, which can be used to communicate results produced by the sound-recognition process to a customer platform 140.
(23) Referring to the model-creation system 200 illustrated in
(24) Model-Building Process
(25) During the model-building process, the system can use an unsupervised learning technique to generate a model to recognize a set of sound primitives as is illustrated in the flow chart that appears in
(26) For example, a sound feature can be computed over a 5-second sliding time window comprising a set of audio samples acquired at 46-millisecond intervals from an audio stream. In general, the set of sound features can include: (1) an average value for a parameter of a sound signal over the time window; (2) a spectral-content-related parameter for a sound signal over the time window; and (3) a shape-related metric for a sound signal over the time window. More specifically, the set of sound features can include: (1) a "pulse" that comprises a peak in intensity of the highest-energy component of the sound signal, which can be compared against a delta function, and wherein parameters for the pulse can include a total energy, a duration, and a peak energy; (2) a "shock ratio," which relates to a local variation in amplitude of the sound wave; (3) a "wave-linear length," which measures a total length of the sound wave over the time window; (4) a "spectral composition of a peak" over the time window; (5) a "trajectory of the leading spectrum component" in the sound signal over the time window; for example, the trajectory can be ascending, descending or V-shaped; (6) a "leading spectral component" (or a set of leading spectral components) at each moment in the time window; (7) an "attack strength," which reflects the most abrupt variation in sound intensity over the time window; and (8) a "high-peak number," which specifies the number of peaks that are within 80% of the peak amplitude in the time window.
(27) Note that it is advantageous to use a sound feature that can be computed using simple incremental computations instead of more-complicated computational operations. For example, the system can compute the “wave-linear length” instead of the more computationally expensive signal-to-noise ratio (SNR).
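To make this concrete, the following Python sketch computes a wave-linear length and a shock ratio for one window of samples. The patent does not give formulas for these features, so the formulations below (and the function names) are assumptions; the point is only that both reduce to a single incremental pass over sample-to-sample differences.

```python
import numpy as np

def wave_linear_length(window: np.ndarray) -> float:
    # Total "length" of the waveform traced over the window: the sum
    # of absolute sample-to-sample differences. A cheap, incremental
    # alternative to more expensive measures such as SNR.
    return float(np.sum(np.abs(np.diff(window))))

def shock_ratio(window: np.ndarray, eps: float = 1e-12) -> float:
    # One plausible formulation of local amplitude variation:
    # the mean step size relative to the mean absolute amplitude.
    return float(np.mean(np.abs(np.diff(window))) / (np.mean(np.abs(window)) + eps))

# Score a 5-second window of 16 kHz audio (synthetic samples here).
rng = np.random.default_rng(0)
window = rng.standard_normal(5 * 16000)
print(wave_linear_length(window), shock_ratio(window))
```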
(28) Next, the system creates a set of feature vectors from coefficients generated by the feature-detection operation, wherein each feature vector comprises a set of coefficients for sound features in the set of sound features (step 304). The system then performs a clustering operation on the set of feature vectors to produce a set of feature clusters, wherein each feature cluster comprises a set of feature vectors that are proximate to each other in a vector space that contains the set of feature vectors (step 306). This clustering operation can involve any known clustering technique, such as the “k-means clustering technique,” which is commonly used in data mining systems. This clustering operation also makes use of a distance metric, such as the “normalized Google distance,” to form the clusters of proximate feature vectors.
(29) The system then defines the set of sound primitives, wherein each sound primitive is defined to be associated with a feature cluster in the set of feature clusters (step 308). Finally, the system associates semantic labels with sound primitives in the set of sound primitives, wherein a semantic label for a sound primitive comprises one or more words that describe a sound characterized by the sound primitive (step 310).
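As a minimal sketch of steps 304 through 310, the snippet below clusters feature vectors with scikit-learn's k-means and attaches labels to the resulting clusters. The patent names k-means but does not fix a library, a cluster count, or the final distance metric (scikit-learn's k-means uses Euclidean distance), so all of those choices, and the labels, are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one feature vector of per-window coefficients
# (pulse energy, shock ratio, wave-linear length, ...).
rng = np.random.default_rng(1)
feature_vectors = rng.random((500, 8))

# Group proximate feature vectors; each resulting cluster becomes
# one candidate sound primitive (step 308). k = 12 is arbitrary.
kmeans = KMeans(n_clusters=12, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(feature_vectors)

# Semantic labels are then associated with the primitives (step 310);
# the labels here are purely illustrative.
semantic_labels = {0: "glass clinking", 3: "running water"}
primitives = [semantic_labels.get(c, f"primitive-{c}") for c in cluster_ids]
```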
(30) Referring to the flow chart in
(31) After the model for recognizing the set of sound primitives has been generated, the system generates a model that recognizes “events” from patterns of lower-level sound primitives. Like sound primitives, events are associated with concepts that have a semantic meaning, and are also associated with corresponding semantic labels. Moreover, each event is associated with a pattern of one or more sound primitives, wherein the pattern for a particular event can include one or more sequences of sound primitives, wherein the sound primitives can potentially overlap in the sequences. For example, an event associated with the concept of “wind” can be associated with sound primitives for “rustling” and “blowing.” In another example, an event associated with the concept of “washing dishes” can be associated with a sequence of sound primitives, which include “metal clanging,” “glass clinking” and “running water.”
(32) Note that the model that recognizes events can be created based on input obtained from a human expert. During this process, the human expert defines each event in terms of a pattern of lower-level sound primitives. Moreover, the human expert can also define higher-level events based on patterns of lower-level events. For example, the higher-level event “storm” can be defined as a combination of the lower-level events “wind,” “rain” and “thunder.” Instead of (or in addition to) receiving input from a human expert to define events, the system can also use a machine-learning technique to make associations between lower-level events and higher-level events based on feedback from a human expert as is described in more detail below. Once these associations are determined, the system converts the associations into a grammar that is used by a non-deterministic finite-state automaton to recognize events as is described in more detail below.
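Before such associations are compiled into an automaton, one plausible intermediate representation is a plain mapping from each event to the pattern of constituents that implies it. Everything below (the names and the naive fixpoint matcher) is an illustrative sketch rather than the patent's implementation; in particular, real matching would be probabilistic and order-aware:

```python
# Each event maps to the lower-level primitives or events whose
# presence implies it; the two levels mirror the "wind"/"storm"
# examples in the text.
EVENT_GRAMMAR = {
    "wind": {"rustling", "blowing"},
    "washing dishes": {"metal clanging", "glass clinking", "running water"},
    "storm": {"wind", "rain", "thunder"},
}

def recognized_events(observed):
    # Naive bottom-up matching: an event fires once all of its
    # constituents are present; repeat until no new event fires,
    # so higher-level events can build on lower-level ones.
    events = set(observed)
    changed = True
    while changed:
        changed = False
        for name, parts in EVENT_GRAMMAR.items():
            if name not in events and parts <= events:
                events.add(name)
                changed = True
    return events - set(observed)

print(recognized_events({"rustling", "blowing", "rain", "thunder"}))
# -> {'wind', 'storm'}
```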
(33) Note that a sound primitive can be more clearly defined by examining other temporally proximate sound primitives. For example, the sound of an explosion can be more clearly defined as a gunshot if it is followed by more explosions, the sound of people screaming, and the sound of a police siren. In another example, a sound that could be either a laugh or a bark can be more clearly defined as a laugh if it is followed by the sound of people talking.
(34) Sound-Recognition Process
(36) Next, the system feeds the sequence of sound primitives into a finite-state automaton that recognizes events associated with sequences of sound primitives. This finite-state automaton can be a non-deterministic finite-state automaton that can exist in multiple states at the same time, wherein the non-deterministic finite-state automaton maintains a probability value for each of the multiple states that the finite-state automaton can exist in (step 508). Finally, the system feeds the recognized events into an output system that triggers an alert when a probability that a tracked event is occurring exceeds a threshold value (step 510).
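The sketch below shows one way such a non-deterministic automaton could occupy several states at once, maintain a probability value per state, and raise an alert when a tracked event's probability exceeds a threshold (steps 508-510). The states, transition weights, and threshold are invented for illustration:

```python
from collections import defaultdict

# transitions[state][primitive] -> list of (next_state, weight)
TRANSITIONS = {
    "idle": {"metal clanging": [("maybe-dishes", 0.6)]},
    "maybe-dishes": {"glass clinking": [("maybe-dishes", 0.8)],
                     "running water": [("washing-dishes", 0.9)]},
}
ALERT_THRESHOLD = 0.4

def step(state_probs, primitive):
    # Advance every active state at once (the automaton is
    # non-deterministic), accumulating probability in successors.
    nxt = defaultdict(float)
    for state, p in state_probs.items():
        for succ, w in TRANSITIONS.get(state, {}).get(primitive, []):
            nxt[succ] += p * w
    nxt["idle"] = max(nxt["idle"], 0.1)  # the idle state stays reachable
    return dict(nxt)

probs = {"idle": 1.0}
for prim in ["metal clanging", "glass clinking", "running water"]:
    probs = step(probs, prim)
    if probs.get("washing-dishes", 0.0) > ALERT_THRESHOLD:
        print(f"alert: washing dishes (p={probs['washing-dishes']:.2f})")
```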
(39) Non-Deterministic Finite-State Automaton
(40) As mentioned above, the system can recognize events based on other events (or on sound primitives) through the use of a non-deterministic finite-state automaton. An exemplary state-transition process 800 for such a non-deterministic finite-state automaton is illustrated in
(41) Matrix Operations
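As described in claims 4 and 5, state transitions can be computed with sequence matrices whose coefficients define the transitions, with any state whose probability falls below an activation-potential-related threshold set to zero. A minimal sketch under those assumptions follows; the matrix values, the threshold, and the renormalization choice are ours, not the patent's:

```python
import numpy as np

ACTIVATION_THRESHOLD = 0.05

def transition(state, M):
    # M[i, j] weights the transition from state j to state i for
    # the primitive that selected the sequence matrix M.
    nxt = M @ state
    nxt[nxt < ACTIVATION_THRESHOLD] = 0.0    # prune improbable states
    total = nxt.sum()
    return nxt / total if total > 0 else nxt  # renormalize (a choice)

# Three states and one illustrative sequence matrix.
M_clang = np.array([[0.2, 0.0, 0.0],
                    [0.7, 0.8, 0.0],
                    [0.0, 0.1, 0.9]])
state = np.array([1.0, 0.0, 0.0])
print(transition(state, M_clang))  # pruned, renormalized probabilities
```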
(43) In some embodiments, the system receives feedback from a human who reviews the highest-level feature vector 916 and also listens to the associated audio stream, and then provides feedback about whether the highest-level feature vector 916 is consistent with the audio stream. This feedback can be used to modify the lower-level matrices through a machine-learning process to more accurately produce higher-level feature vectors. Note that this system can use any one of a variety of well-known machine-learning techniques to modify these lower-level matrices.
(44) Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
(45) The foregoing descriptions of embodiments have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present description to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present description. The scope of the present description is defined by the appended claims.