Patent classifications
G10H2250/315
Facilitating inferential sound recognition based on patterns of sound primitives
The disclosed embodiments provide a system that performs a sound-recognition operation. During operation, the system recognizes a sequence of sound primitives in an audio stream, wherein a sound primitive is associated with a semantic label comprising one or more words that describe a sound characterized by the sound primitive. Next, the system feeds the sequence of sound primitives into a finite-state automaton that recognizes events associated with sequences of sound primitives. Finally, the system feeds the recognized events into an output system that generates an output associated with the recognized events to be displayed to a user.
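As a rough Python sketch of the pipeline described above, the automaton below is just an index into a registered pattern of primitives; the primitive labels, event names, and the subsequence-matching simplification (unmatched primitives leave the state unchanged) are hypothetical illustrations, not details from the disclosure.

```python
# Hypothetical event patterns over sound-primitive labels; each pattern
# defines a small finite-state automaton whose state is the index of the
# next primitive it expects.
EVENTS = {
    ("door_knock", "door_open", "footstep"): "person_entered",
    ("glass_break", "alarm"): "break_in",
}

def recognize_events(primitives):
    """Scan the primitive sequence; emit an event whenever a registered
    pattern completes. Unmatched primitives leave the state unchanged
    (a subsequence-matching simplification)."""
    events = []
    for pattern, event in EVENTS.items():
        state = 0  # automaton state = index into the pattern
        for p in primitives:
            if p == pattern[state]:
                state += 1
                if state == len(pattern):
                    events.append(event)
                    state = 0  # reset and keep scanning
    return events

print(recognize_events(
    ["door_knock", "silence", "door_open", "footstep"]))
# -> ['person_entered']
```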
ENVIRONMENTAL SOUND GENERATING APPARATUS, ENVIRONMENTAL SOUND GENERATING SYSTEM USING THE APPARATUS, ENVIRONMENTAL SOUND GENERATING PROGRAM, SOUND ENVIRONMENT FORMING METHOD AND STORAGE MEDIUM
An environmental sound generating apparatus generates an environmental sound signal representing an environmental sound that forms a sound environment when emitted. The environmental sound has at least one chain of phonemes made up of individual phonemes whose sound-emission start timings, as one of their attributes, are sequentially shifted. A plurality of subgroups are prepared, each formed by combining several pitches from a primary pitch group, i.e., a group of pitches musically treated as consonances if sounded simultaneously. A subgroup selected at random from the plurality of subgroups is assigned to each section of the chain of phonemes, and each individual phoneme in each section is set to a pitch selected at random from the pitches constituting that section's subgroup, so as to attain hypersonic effects.
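The pitch-assignment scheme lends itself to a short sketch. Assuming a hypothetical primary pitch group and subgroups expressed as MIDI note numbers (the patent does not specify any particular pitches), each section of the chain draws a random subgroup, and each phoneme draws a random pitch from it with a sequentially shifted onset:

```python
import random

# Illustrative primary pitch group (MIDI notes that are consonant when
# sounded together) and subgroups combining several of its pitches.
PRIMARY_GROUP = [60, 64, 67, 72, 76, 79]
SUBGROUPS = [[60, 64, 67], [64, 67, 72], [67, 72, 76], [72, 76, 79]]

def make_phoneme_chain(n_sections=4, phonemes_per_section=5, shift=0.25):
    """Build one chain of phonemes: a random subgroup per section, a
    random pitch per phoneme, and sequentially shifted start timings."""
    chain, t = [], 0.0
    for _ in range(n_sections):
        subgroup = random.choice(SUBGROUPS)
        for _ in range(phonemes_per_section):
            chain.append({"start": round(t, 2),
                          "pitch": random.choice(subgroup)})
            t += shift  # shift each phoneme's sound-emission start timing
    return chain

for phoneme in make_phoneme_chain(n_sections=2, phonemes_per_section=3):
    print(phoneme)
```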
Sound effect synthesis
Disclosed herein is a sound synthesis system for generating a user defined synthesised sound effect, the system comprising: a receiver of user defined inputs for defining a sound effect; a generator of control parameters in dependence on the received user defined inputs; a plurality of sound effect objects, wherein each sound effect object is arranged to generate a different class of sound and each sound effect object comprises a sound synthesis model arranged to generate a sound in dependence on one or more of the control parameters; a plurality of audio effect objects, wherein each audio effect object is arranged to receive a sound from one or more sound effect objects and/or one or more other audio effect objects, process the received sound in dependence on one or more of the control parameters and output the processed sound; a scene creation function arranged to receive sound output from one or more sound effect objects and/or audio effect objects and to generate a synthesised sound effect in dependence on the received sound; and an audio routing function arranged to determine the arrangement of audio effect objects, sound effect objects and scene creation function such that one or more sounds received by the scene creation function are dependent on the audio routing function; wherein the determined arrangement of audio effect objects, sound effect objects and the scene creation function by the audio routing function is dependent on the user defined inputs.
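The object arrangement can be sketched as a small signal graph. Everything concrete below (the wind/rain generators, the gain effect, the averaging mixer) is a hypothetical stand-in; the abstract only fixes the roles of sound effect objects, audio effect objects, routing, and scene creation:

```python
import math
import random

class SoundEffectObject:
    """Generates one class of sound from control parameters."""
    def __init__(self, synth):
        self.synth = synth
    def render(self, params, n=100):
        return [self.synth(i, params) for i in range(n)]

class AudioEffectObject:
    """Processes received samples in dependence on control parameters."""
    def __init__(self, process):
        self.process = process
    def render(self, samples, params):
        return [self.process(s, params) for s in samples]

def scene_creation(*streams):
    """Mix every received stream into one synthesised sound effect."""
    return [sum(vals) / len(vals) for vals in zip(*streams)]

# Routing determined (here, hard-coded) from user-defined inputs:
params = {"freq": 0.05, "gain": 0.5}                   # control parameters
wind = SoundEffectObject(lambda i, p: math.sin(p["freq"] * i))
rain = SoundEffectObject(lambda i, p: random.uniform(-1.0, 1.0))
attenuate = AudioEffectObject(lambda s, p: p["gain"] * s)

effect = scene_creation(attenuate.render(wind.render(params), params),
                        rain.render(params))
print(effect[:5])
```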
Delivery sound masking and sound emission
An unmanned aerial vehicle (UAV) may emit masking sounds during operation of the UAV to mask other sounds generated by the UAV during operation. The UAV may be used to deliver items to a residence or other location associated with a customer. The UAV may emit sounds that mask the conventional sounds generated by the propellers and/or motors to cause the UAV to emit sounds that are pleasing to bystanders or do not annoy the bystanders. The UAV may emit sounds using speakers or other sound generating devices, such as fins, reeds, whistles, or other devices which may cause sound to be emitted from the UAV. Noise canceling algorithms may be used to cancel at least some of the conventional noise generated by operation of the UAV using inverted sounds, while additional sound may be emitted by the UAV, which may not be subject to noise cancelation.
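The cancel-and-mask idea reduces to simple signal arithmetic: emit an inverted copy of the propeller noise (cancellation is never perfect) plus a separate masking sound that is deliberately left uncancelled. The tones below are hypothetical stand-ins for real propeller noise and a "pleasing" masking sound:

```python
import numpy as np

fs = 8000                                   # sample rate (Hz), illustrative
t = np.arange(fs) / fs                      # one second of audio

propeller_noise = 0.8 * np.sin(2 * np.pi * 180 * t)  # hypothetical prop hum
masking_tone    = 0.3 * np.sin(2 * np.pi * 523 * t)  # hypothetical masking sound

# Inverted sound cancels part of the conventional noise; the masking
# tone is emitted in addition and is not subject to cancellation.
anti_noise = -0.9 * propeller_noise         # imperfect inversion
at_bystander = propeller_noise + anti_noise + masking_tone

residual = propeller_noise + anti_noise
print("residual noise RMS:", np.sqrt(np.mean(residual ** 2)))
```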
ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC KEYBOARD MUSICAL INSTRUMENT, AND METHOD OF GENERATING MUSICAL SOUND
An electronic musical instrument according to one embodiment includes playing operators specifying respective pitches, and at least one processor which is configured to execute processing of acquiring string sound data including a fundamental sound component and a harmonic tone component corresponding to a specified pitch, acquiring stroke sound waveform data that does not include the fundamental sound component and the harmonic tone component corresponding to the specified pitch but includes components other than the fundamental sound component and the harmonic tone component, and synthesizing the string sound data and stroke sound data corresponding to the stroke sound waveform data at a set ratio.
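One plausible reading of the final step is a linear mix at the set ratio, with the string component carrying the fundamental and harmonics of the specified pitch and the stroke component carrying everything else (attack noise). The stand-in signals below are illustrative, not the patent's actual waveform data:

```python
import numpy as np

fs = 44100
t = np.arange(int(0.1 * fs)) / fs           # 100 ms

pitch = 440.0  # specified pitch (Hz)
# String sound: fundamental plus one harmonic of the specified pitch.
string_sound = (np.sin(2 * np.pi * pitch * t)
                + 0.5 * np.sin(2 * np.pi * 2 * pitch * t))
# Stroke sound: decaying noise with no fundamental/harmonic content.
rng = np.random.default_rng(0)
stroke_sound = rng.normal(0.0, 0.2, t.size) * np.exp(-60.0 * t)

ratio = 0.8  # the "set ratio" between string and stroke components
mixed = ratio * string_sound + (1.0 - ratio) * stroke_sound
print(mixed[:5])
```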
Computationally efficient language based user interface event sound selection
A computer user interface (UI) is capable of generating a sound when a predetermined event occurs. The sound generated when the predetermined event occurs may possess at least some characteristics of a predominant natural language used by a user and/or a location of a computer implementing the UI. This enables the user to quickly assimilate the sound generated when the predetermined event occurs. Because the user quickly assimilates the sound generated when the predetermined event occurs, the user is able to rapidly respond to the predetermined event, at times using the computer UI, which reduces undesirable memory use, processor use and/or battery drain associated with a computing device that implements the computer UI.
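A minimal sketch of the selection step, assuming the predominant language is already known and sounds are chosen from a lookup table; the table entries and file names are hypothetical, since the abstract does not describe how sounds are stored or keyed:

```python
# Hypothetical mapping from predominant language to UI event sounds.
EVENT_SOUNDS = {
    "en": {"new_mail": "chime_en.wav", "error": "buzz_en.wav"},
    "ja": {"new_mail": "chime_ja.wav", "error": "buzz_ja.wav"},
}
DEFAULT_SOUNDS = {"new_mail": "chime.wav", "error": "buzz.wav"}

def select_event_sound(event, user_language="en"):
    """Pick the sound for a predetermined event that matches the user's
    predominant natural language, falling back to a neutral default."""
    table = EVENT_SOUNDS.get(user_language, DEFAULT_SOUNDS)
    return table.get(event, DEFAULT_SOUNDS[event])

print(select_event_sound("new_mail", user_language="ja"))  # chime_ja.wav
```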
Real time audification of neonatal electroencephalogram (EEG) signals
The present invention discloses a method and system for providing real-time audification of neonatal EEG signals. The method comprises the steps of: receiving preprocessed neonatal EEG signals; changing a characteristic of the preprocessed signals in a phase vocoder; resampling the output signals from the vocoder to a predetermined audio frequency range; converting the resampled signals into stereo signals; and selecting a plurality of channels from the stereo signals as the output audio signals.
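The listed steps map onto a short pipeline. As a sketch under stated assumptions: synthetic noise stands in for preprocessed neonatal EEG, and the phase-vocoder stage is replaced by plain resampling into the audible range (scipy has no built-in phase vocoder), so only the resample/stereo/channel-selection steps are faithful to the abstract:

```python
import numpy as np
from scipy.signal import resample

fs_eeg, fs_audio = 256, 8000       # EEG and target audio rates, illustrative

# Hypothetical preprocessed EEG: 8 channels, 10 seconds.
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, (8, fs_eeg * 10))

def audify(channel):
    """Resample one EEG channel to the audio rate, shifting its content
    into the audible range (stand-in for the phase-vocoder stage)."""
    n_out = int(channel.size * fs_audio / fs_eeg)
    return resample(channel, n_out)

# Select two channels and pair them as left/right stereo output.
left, right = audify(eeg[0]), audify(eeg[1])
stereo = np.stack([left, right], axis=1)
print(stereo.shape)                 # (80000, 2)
```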