Patent classifications
G06F16/683
DETECTING MEDIA WATERMARKS IN MAGNETIC FIELD DATA
Methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to detect media watermarks in magnetic field data are disclosed herein. Example media monitors disclosed herein include a magnetic field estimator to determine first magnetic field data, the magnetic field estimator in communication with a magnetometer. Disclosed example media monitors also include a correlator to correlate the first magnetic field data with a reference sequence to determine second magnetic field data. Disclosed example media monitors further include a watermark decoder to process the second magnetic field data to detect an audio watermark encoded in an audio signal.
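The correlation step in the abstract above can be sketched as a normalized cross-correlation of the estimated field samples against a known reference sequence, with a peak above some threshold suggesting the watermark is present. This is only an illustrative sketch: the function name, the mean-removal, and the 0.5 threshold are our own assumptions, not details from the patent.

```python
import numpy as np

def correlate_field(field_samples, reference, threshold=0.5):
    """Correlate magnetometer magnitude samples with a reference
    sequence. A normalized correlation peak at or above `threshold`
    is taken as evidence that the watermark symbol carried by the
    reference is present. Sketch only; threshold is an assumption."""
    field = np.asarray(field_samples, dtype=float) 
    ref = np.asarray(reference, dtype=float)
    field = field - field.mean()   # remove any static field offset
    ref = ref - ref.mean()
    corr = np.correlate(field, ref, mode="valid")  # slide ref over field
    denom = np.linalg.norm(field) * np.linalg.norm(ref)
    score = corr.max() / denom if denom else 0.0
    return score >= threshold, score
```

A zero-mean reference embedded in an otherwise quiet field scores 1.0; a constant (watermark-free) field scores 0.0.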
MULTI-TRACK AUDIO IN A SECURITY SYSTEM
A method, system, server and device are disclosed. According to one or more embodiments, a server is provided. A first audio track is received which includes first audio originating from a premises client at a premises location. A second audio track is received which includes second audio originating from a remote client. A first pan angle is determined for the first audio track and a second pan angle is determined for the second audio track. The second pan angle is different from the first pan angle. A stereo composite track is generated based on the first pan angle and the second pan angle, where the stereo composite track includes the first audio track and the second audio track.
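The pan-angle mixing described above can be sketched with a constant-power pan law: each mono track is given left/right gains determined by its angle, and the gained tracks are summed into one stereo composite. The pan law, angle range (-45 deg full left to +45 deg full right), and function names are our own illustrative choices; the abstract does not specify them.

```python
import math

def pan_gains(angle_deg):
    # Constant-power pan law: -45 deg = full left, +45 deg = full right.
    theta = math.radians(angle_deg + 45.0)  # maps [-45, 45] to [0, pi/2]
    return math.cos(theta), math.sin(theta)

def stereo_composite(track_a, angle_a, track_b, angle_b):
    """Mix two mono tracks into one stereo track, placing each at its
    own pan angle so a listener can distinguish, e.g., premises audio
    from remote-client audio. Sketch; names are assumptions."""
    la, ra = pan_gains(angle_a)
    lb, rb = pan_gains(angle_b)
    n = max(len(track_a), len(track_b))
    a = list(track_a) + [0.0] * (n - len(track_a))  # zero-pad shorter track
    b = list(track_b) + [0.0] * (n - len(track_b))
    return [(la * x + lb * y, ra * x + rb * y) for x, y in zip(a, b)]
```

Panning the first track hard left and the second hard right keeps each source audible in its own channel.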
Creating a Printed Publication, an E-Book, and an Audio Book from a Single File
As an example, a server may receive, from a computing device, a submission created by an author. The submission includes book data associated with a book and author data associated with the author. The author data includes incarceration data indicating whether the author was incarcerated. The server may determine, based on the author data and the book data, that the submission is publishable. The server may create, based on the book data, a printable book, an e-book, and an audio book and make one or more of the printable book, the e-book, and the audio book available for acquisition.
Audio playout report for ride-sharing session
In one aspect, an example method to be performed by a computing device includes (a) determining that a ride-sharing session is active; (b) in response to determining the ride-sharing session is active, using a microphone of the computing device to capture audio content; (c) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (d) determining that the ride-sharing session is inactive; and (e) outputting an indication of the identified reference audio content.
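Step (c) above, matching captured audio against reference content with a similarity threshold, can be sketched as fingerprint-set comparison. Jaccard similarity over hashed fingerprint features is one common stand-in; the abstract does not specify the matching method, so the similarity measure, threshold, and names here are all assumptions.

```python
def identify_reference(captured_fp, references, threshold=0.8):
    """Return the names of reference clips whose fingerprint features
    overlap the captured fingerprint by at least `threshold` (Jaccard
    similarity). `references` maps name -> iterable of feature hashes.
    Illustrative sketch only."""
    captured = set(captured_fp)
    matches = []
    for name, ref_fp in references.items():
        ref = set(ref_fp)
        union = captured | ref
        sim = len(captured & ref) / len(union) if union else 0.0
        if sim >= threshold:
            matches.append(name)
    return matches
```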
Automated clinical documentation system and method
A method, computer program product, and computing system for proactive encounter scanning are executed on a computing device and include obtaining encounter information of a patient encounter. The encounter information is proactively processed to determine whether the encounter information is indicative of one or more medical conditions and to generate one or more result sets. The one or more result sets are provided to a user.
Background audio identification for speech disambiguation
Implementations relate to techniques for providing context-dependent search results. A computer-implemented method includes receiving an audio stream at a computing device during a time interval, the audio stream comprising user speech data and background audio, separating the audio stream into a first substream that includes the user speech data and a second substream that includes the background audio, identifying concepts related to the background audio, generating a set of terms related to the identified concepts, influencing a speech recognizer based on at least one of the terms related to the background audio, and obtaining a recognized version of the user speech data using the speech recognizer.
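The "influencing a speech recognizer" step above can be sketched as re-ranking: hypotheses from the recognizer get a score boost for each word that matches a term derived from the background audio, so background-consistent transcriptions win. This toy re-ranker, its scores, and the boost value are our own illustration, not the patented mechanism.

```python
def bias_hypotheses(hypotheses, context_terms, boost=0.2):
    """Pick the best speech hypothesis after boosting scores of
    hypotheses containing background-derived context terms.
    `hypotheses` is a list of (text, score) pairs. Sketch only;
    the boost value is an assumption."""
    terms = {t.lower() for t in context_terms}
    rescored = []
    for text, score in hypotheses:
        bonus = boost * sum(word.lower() in terms for word in text.split())
        rescored.append((text, score + bonus))
    return max(rescored, key=lambda pair: pair[1])[0]
```

For example, if jazz is playing in the background, "play some jazz" can overtake an acoustically higher-scoring but contextually wrong hypothesis.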
Sound recognition model training method and system and non-transitory computer-readable medium
A sound recognition model training method comprises determining a relationship between a sound event and first parameter and deciding a second parameter in response to the relationship, performing sampling on the sound event using the first parameter and the second parameter to generate training audio files, and inputting at least part of the training audio files to a sound recognition model for training the sound recognition model, wherein a length of each of the training audio files is associated with the first parameter, a time difference between every two of the training audio files is associated with the second parameter, and the sound recognition model is used for determining a sound classification.
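The sampling step above, where clip length follows the first parameter and the time difference between clips follows the second, can be sketched as sliding-window extraction over the recorded sound event. Treating the first parameter as a clip length in samples and the second as a hop size is our own reading of the abstract; the names below are illustrative.

```python
def sample_clips(event_samples, clip_len, hop):
    """Cut a recorded sound event into training clips of length
    `clip_len` (the 'first parameter'), with consecutive clip starts
    spaced `hop` samples apart (the 'second parameter'). Sketch of
    the sampling step only, not the full training pipeline."""
    clips = []
    for start in range(0, len(event_samples) - clip_len + 1, hop):
        clips.append(event_samples[start:start + clip_len])
    return clips
```

A ten-sample event with a clip length of 4 and a hop of 3 yields three overlapping training clips.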