Multi-party conversation analyzer and logger

09800721 · 2017-10-24

Assignee

Inventors

CPC classification

International classification

Abstract

In one aspect, the present invention facilitates the investigation of networks of criminals by gathering associations between phone numbers, the names of persons reached at those phone numbers, and voice print data. In another aspect, the invention automatically detects phone calls from a prison where the voiceprint of the person called matches the voiceprint of a past inmate. In another aspect, the invention detects identity scams in prisons by monitoring for known voice characteristics of likely imposters on phone calls made by prisoners. In another aspect, the invention automatically performs speech-to-text conversion of phone numbers spoken within a predetermined time of detecting data indicative of a three-way call event while monitoring a phone call from a prison inmate. In another aspect, the invention automatically thwarts attempts of prison inmates to use re-dialing services. In another aspect, the invention automatically tags audio data retrieved from a database by steganographically encoding into the audio data the identity of the official retrieving the audio data.

Claims

1. A prison phone monitoring system comprising: an audio database containing digitized audio signals storing a plurality of voice models and a plurality of recorded telephone calls; a call monitoring system for interfacing with a call network for monitoring a first telephone call, wherein the call monitoring system is configured to: detect a 3-way call event, wherein the 3-way call event indicates a new voice joining the monitored first telephone call; search the audio database for a voice model matching the new voice; determine a phone number for an individual associated with the matching voice model; and utilize a call network query to determine a busy status of the phone number during the first telephone call.

2. The prison phone monitoring system of claim 1, wherein the monitoring of the first telephone call further comprises recording the first telephone call and saving the recorded first telephone call to the audio database.

3. The prison phone monitoring system of claim 1, wherein the call monitoring system is further configured to: create a new voice model corresponding to the new voice, if no voice model matching the new voice is found in the audio database.

4. The prison phone monitoring system of claim 1, wherein the determined phone number is associated with the matching voice model based on prior monitored communications by the individual using the phone number.

5. The prison phone monitoring system of claim 4, wherein the busy status of the phone number is determined using the call network query, wherein the call network query cannot be detected by the individual.

6. The prison phone monitoring system of claim 1, wherein the busy status is ascertained using automated dialing means.

7. The prison phone monitoring system of claim 1, wherein the call network query comprises a SIP (Session Initiation Protocol) request issued to the determined phone number.

8. A method for monitoring communications within a prison phone system comprising: storing digitized audio signals corresponding to a plurality of voice models and a plurality of recorded telephone calls in an audio database; monitoring a first telephone call using a call monitoring system configured to interface with a call network; detecting a 3-way call event in the monitored first telephone call, wherein the 3-way call event indicates a new voice joining the monitored first telephone call; searching the audio database for a voice model matching the new voice; determining a phone number for an individual associated with the matching voice model; and utilizing a call network query to determine a busy status of the phone number during the first telephone call.

9. The method of claim 8, wherein the monitoring of the first telephone call further comprises recording the first telephone call and saving the recorded first telephone call to the audio database.

10. The method of claim 8, further comprising: creating a new voice model corresponding to the new voice, if no voice model is found in the audio database that matches the new voice.

11. The method of claim 8, wherein the determined phone number is associated with the matching voice model based on prior monitored communications by the individual using the phone number.

12. The method of claim 11, wherein the busy status of the phone number is determined using the call network query, wherein the call network query cannot be detected by the individual.

13. The method of claim 12, wherein the busy status is ascertained using automated dialing means.

14. The method of claim 12, wherein the call network query comprises a SIP (Session Initiation Protocol) request issued to the determined phone number.
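The method of claims 1 and 8 can be sketched in code for illustration. Everything below is an assumption of this sketch, not part of the claims: the data layout, the toy feature-similarity measure, the matching threshold, and the injected `query_busy` callable standing in for a real call network (e.g., SIP) query.

```python
# Illustrative sketch of the claimed monitoring flow: on a 3-way call
# event, match the new voice against stored voice models, look up the
# associated phone number, and query its busy status on the network.

def similarity(a, b):
    """Toy similarity: inverse of mean absolute feature difference
    (a real system would compare acoustic voice models)."""
    diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 / (1.0 + diff)

def match_voice_model(new_voice_features, audio_database, threshold=0.8):
    """Search the audio database for a voice model matching the new voice."""
    best_id, best_score = None, 0.0
    for model_id, model in audio_database["voice_models"].items():
        score = similarity(new_voice_features, model["features"])
        if score > best_score:
            best_id, best_score = model_id, score
    return best_id if best_score >= threshold else None

def handle_three_way_event(new_voice_features, audio_database, query_busy):
    """Match the new voice, find its associated number, and check busy
    status during the monitored call."""
    model_id = match_voice_model(new_voice_features, audio_database)
    if model_id is None:
        return {"matched": False}
    phone = audio_database["voice_models"][model_id]["phone_number"]
    return {"matched": True, "phone_number": phone, "busy": query_busy(phone)}
```

If the called party's line reports busy while the matched voice is audible on the monitored call, that is consistent with the new voice having joined via a 3-way bridge from that number.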

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a screen shot of a graphical user interface according to the present invention, before any data is selected to be displayed.

(2) FIG. 2 is a screen shot of a graphical user interface according to the present invention, showing an example menu of possible ways of selecting a span of time over which to analyze phone calls.

(3) FIG. 3 is a screen shot of a graphical user interface according to the present invention, showing a pop-up calendar which may be used as an aid to selecting a range of time over which to analyze calls.

(4) FIG. 4 is a screen shot of a graphical user interface according to the present invention, showing how the position of a second menu within the GUI may change depending on what is selected on a first menu.

(5) FIG. 5 is a screen shot of a graphical user interface according to the present invention, showing how a second menu is used to select the type of data to be displayed from the selected span of time.

(6) FIG. 6 is a screen shot of a graphical user interface according to the present invention, showing a table which was dynamically populated based on criteria specified by menu-selected time range and menu-selected type of data.

(7) FIG. 7 is a screen shot of a graphical user interface according to the present invention, showing a pop-over graphical menu allowing an investigator to mark a call after having listened to all or part of that call.

(8) FIG. 8 is a screen shot of a graphical user interface according to the present invention, showing a pop-over note pad which allows investigators to enter notes concerning a given call.

(9) FIG. 9 is a screen shot of a graphical user interface according to the present invention, showing how a second table is dynamically populated with data about likely imposters when a given suspect call is selected.

(10) FIG. 10 is a screen shot of a graphical user interface according to the present invention, showing the suspected imposter table replaced by a table of notes on a given suspected imposter.

(11) FIG. 11 is a screen shot of a graphical user interface according to the present invention, showing three dynamically generated tables (suspicious call report table, table of likely imposters concerning a selected call, and table of calls made by a selected inmate).

(12) FIG. 12 is a screen shot of a graphical user interface according to the present invention, showing three dynamically generated tables, where the elevator bars to the side of the first two tables have been slid down some.

(13) FIG. 13 depicts speech from two participants in a conversation displayed separately within a graphical user interface according to the present invention.

(14) FIG. 14 depicts a speech segment from one participant in a conversation having been manually selected within a graphical user interface according to the present invention.

(15) FIG. 15 is a graphical user interface for reviewing recorded conversations according to one aspect of the present invention.

DETAILED DESCRIPTIONS OF SOME PREFERRED EMBODIMENTS

(16) Within this document, the terms “voice print”, “voice signature”, “voice print data”, “voice signature data”, and “voice model” may all be used interchangeably to refer to data derived from processing speech of a given person, where the derived data may be considered indicative of characteristics of the vocal tract of the person speaking. The terms “speaker identification” and “voice identification” may be used interchangeably in this document to refer to the process of identifying which person out of a number of people a particular speech segment comes from. The terms “voice verification” and “speaker verification” are used interchangeably in this document to refer to the process of processing a speech segment and determining the likelihood that that speech segment was spoken by a particular person. The terms “voice recognition” and “speaker recognition” may be used interchangeably within this document to refer to either voice identification or voice verification.

(17) Within this document, speech mannerisms will be deemed to include use of particular combinations of words (such as colloquialisms), frequency of use of various words (such as swear words, both as modifiers for other words, and not as modifiers), frequency and context of use of words and sounds some people use as conversational filler (such as the sound “aah”, or the word “like”), phrases which may be habitually used to start speech segments (such as the phrase “OK”, or the word “so”), regional accents, elongation of pronunciation of words (for instance where the elongation may indicate a state of mind such as contemplation), etc.

(18) Within this document, the term “enrolled participant” will refer to a participant whose voice or other biometric identifying data (such as fingerprint data, retina scan data, or photographic data) is enrolled in the system, and who has been recognized by the system as enrolled. In applications of the present invention concerned with identifying conversation participants or controlling in some way who participates in a conversation, the terms “allowed” and “authorized”, when used to describe a conversation participant, will refer either to a conversation participant whose voice or other biometric identifying data is recognized as enrolled in the system (and who is authorized to participate in a given portion of a conversation without an automated action being taken), or a conversation participant who is identified as part of a class of participants (such as women, or men, or persons whose voice or other biometric identifying data is not recognized), and who, based on his or her individual identity or class identity, is permitted to participate in a given portion of a conversation without an automated action being taken (such as an alert being generated and/or an automated message being played and/or a call being disconnected).

(19) Within this document, the terms “unauthorized” and “unallowed”, when used to describe a conversation participant, will refer to persons who have not been identified as “allowed” or “authorized”. Within this document, the term “disallowed”, when used to describe a conversation participant, will refer either to a conversation participant whose voice or other biometric identifying data is recognized as enrolled in the system, or a conversation participant who is identified as part of a class of participants (such as women, or men, or persons whose voice or other biometric identifying data is not recognized), and who, based on his or her individual identity or class identity, is prohibited from participating in a given portion of a conversation.

(20) Within this document, the term “inter-prosodic” shall refer to all non-vocabulary-based characterization of the quality of conversation between two or more persons. For instance, a conversation between two people might be characterized as confrontational, friendly, teaching, order giving/taking, cooperative, etc. The term “trans-prosodic” within this document shall refer to non-vocabulary-based patterns of the quality of a given speaker's speech across a substantial portion of a conversation or lengthy speech segment (for instance, across a portion of a conversation or speech segment long enough for an emotional pattern to be distinguishable).

(21) Within this document, the term “key-phrase” shall be used to denote any word, phrase, or phonetic utterance which one might wish to detect or search for in a conversation. Within this document, the term “call control official” shall be used to refer to any person who has the authority to interrupt a phone call which is in progress. Such interruption could be by joining the call, temporarily disconnecting one or more parties from the call, disconnecting the entire call, or disabling one or more parties' ability to be heard on the call.

(22) In a preferred embodiment, when a user of the graphical user interface of the present invention first sits down to use the system, he sees a screen such as the one shown in FIG. 1, which may usefully be thought of as consisting of five regions as follows: Call Report Selection Area 101 allows the user to select a span of time and a type of calls to display over that span of time. Call Report Area 102 contains a note that this area will be used to display the call report on the data selected in Call Report Selection Area 101. Call Analyzer Area 103 contains a note that this area will be used to display a table of information relevant to whatever call is selected on the call report table which will be generated in Call Report Area 102. Call Records Selection Area 104 allows the user to select a set of call records via menu criteria such as phone number, inmate ID, etc. Call Records Area 105 contains a note explaining that a table of call records will be generated in that area given criteria specified in Call Records Selection Area 104.

(23) FIG. 2 shows how time-span menu 200 expands (showing selectable options) when time-span menu drop-down arrow 201 is clicked. In a preferred embodiment, time-span menu 200 contains selections “Yesterday and Today”, “Today”, “Last X days”, “Since . . . ”, “Past Week”, and “Past Month”. Depending on which selection is chosen, dynamically generated additional time-span data box 202 will appear. For instance, if “Today” is selected, box 202 will not appear, whereas if “Last X days” is selected, data box 202 will appear. If “Since . . . ” is selected, dynamically generated pop-over calendar 300 will appear, as shown in FIG. 3. Pop-over calendar 300 may also include time field 301, into which a user may type a time of day, if desired. In an alternate embodiment, time-span menu 200 may include a “Between” selection, and a time range may be selected in pop-over calendar 300.

(24) Depending on what menu selection is selected from time-span menu 200, time-span data box 202 may change size, and the position of other objects within the display area may change to accommodate this change. For instance, FIG. 4 shows that call-type menu 400 changes position relative to FIG. 1 or 2 when “Since . . . ” is selected on time-span menu 200, which makes time-span data box 202 increase in size to accommodate a date or date range.

(25) FIG. 5 shows the selections available on call-type menu 500, which drops down when call-type drop-down arrow 501 is clicked. In a preferred embodiment, selections on call-type menu 500 include “Suspicious Calls”, “Unrated Suspicious Calls”, “Officer Rated Calls”, “A Specific CSN”, “Calls Using This Inmate ID # . . . ”, and “Calls to This Phone # . . . ”. If any of the selections containing “ . . . ” are selected, then an additional call-type data box appears, allowing the user to type additional call type data (such as a phone number, Call Sequence Number (CSN), or inmate ID number).

(26) Once a user has entered time-span and call-type data, Recorded Call Search Button 600 can be clicked to dynamically generate report table 601, as shown in FIG. 6. Column 1 (the left-most column) in report table 601 is both numerically coded and color coded to indicate level of severity. For instance, in the Suspicious Call Report shown in FIG. 6, the table is sorted (from top to bottom) in order of how likely it is that the voice on the call is the voice of an imposter. The color red is used to represent a highly suspicious call, while the color green is used to represent a very-low-suspicion call.

(27) The second column in table 601 contains the call sequence number (CSN). A unique CSN is assigned to each call made. The second column also has space for a note-denoting icon such as icon 602, which will be present if an official has entered any notes concerning the call. The third column in table 601 contains the date of the call. The fourth column in table 601 contains the ID number used to place the call. If the call is suspected to be made by an imposter, it is suspected that this number does not belong to the person whose voice is actually heard on the call. The fifth column of table 601 contains the name that goes with the ID number in column 4, plus a small icon of a man with sun glasses if it has been determined that the call was actually placed by someone else.

(28) Column 6 of table 601 contains icons indicating the rating status of each call. In a preferred embodiment, if a call has not been rated by an official, a question mark icon will be present. When an official listens to a call, he may right-click on the icon in column 6 of table 601, and pop-over window 700 will appear (as shown in FIG. 7), allowing the user to select a rating icon of his choice. In a preferred embodiment, a sun-glasses icon represents the rating “imposter”, a halo icon represents the rating “innocent” (not imposter), and a balance icon represents the rating “unsure” (of whether the person actually speaking on the call is the person whose ID number was used to place the call). If a user wishes to attach a note to a call record displayed in table 601, he may do so by right-clicking in column 2 of the call record, and a note window 800 will appear as a pop-over, allowing the user to type a note. In an alternate embodiment, notes may also be entered as voice annotation.

(29) In a preferred embodiment, when a row of table 601 is selected (for instance, by left-clicking anywhere in the row), call analyzer table 900 is dynamically generated, and the row of table 601 that generated table 900 is shaded as row 901 is shaded in FIG. 9. In a preferred embodiment, each row of table 900 represents a possible imposter, with the table sorted such that the most likely imposter is at the top, and successively less likely imposters appear on the rows below. Column 1 (the left-most column) of table 900 contains the possible imposter's name. Column 2 contains the possible imposter's ID number. Column 3 is a color-coded and numerically coded column containing the automatically generated Suspicious Call Finder (SCF) rating. In a preferred embodiment, the SCF rating is generated based on two factors. The first factor is the similarity of the possible imposter's voice to the voice heard on the call. The second factor is the frequency with which the possible imposter has been known to dial the number that was dialed (either under his own ID or on calls where he has previously been verified to be identifying himself as someone else).

(30) Column 4 of table 900 contains the number of calls the suspected imposter has made to the dialed phone number in the past. Column 5 of table 900 contains the amount of money the suspected imposter has spent calling the dialed number in the past. Column 6 contains the percentage of all calls made to the dialed number which were made by the suspected imposter. Column 7 contains an icon which indicates whether the suspected imposter's voice has been verified by an official to be on the call. Note that it is possible for more than one imposter to participate in a call, so some embodiments may allow more than one “verified” icon in column 7. In a preferred embodiment, the checkmark icon, the question mark icon, and the halo icon in column 7 represent “verified imposter”, “unknown”, and “innocent” respectively.
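The two-factor SCF rating described above can be sketched as follows. The patent does not specify how the factors are combined; the 70/30 weighting, the 0-100 scale, and the use of dial frequency as a simple ratio are all assumptions of this sketch.

```python
# Hedged sketch of the two-factor Suspicious Call Finder (SCF) rating:
# voice similarity to the voice heard on the call, combined with how
# often the possible imposter has dialed the number in question.

def scf_rating(voice_similarity, calls_to_number, total_calls_by_inmate):
    """voice_similarity is in 0..1; dial frequency is the fraction of
    the inmate's past calls that went to the dialed number.  Returns an
    integer rating on an assumed 0-100 scale."""
    if total_calls_by_inmate:
        dial_frequency = calls_to_number / total_calls_by_inmate
    else:
        dial_frequency = 0.0
    return round(100 * (0.7 * voice_similarity + 0.3 * dial_frequency))
```

Sorting table 900 by this value descending would place the most likely imposter at the top, as the paragraph above describes.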

(31) In a preferred embodiment, the inmates whose voices and call histories were analyzed to generate possible imposter table 900 are chosen automatically based on system records indicating who has access to the phone from which the suspicious call was placed. Sometimes records may not be updated quickly when inmates are moved, so button 901 is provided so that officials can manually add inmates to the list of possible imposters. When inmates are manually added, on-the-fly computation will be done to rate the added inmate's voice and call history to rank that inmate among the possible imposters. Notes previously entered for the call selected in table 601 may be viewed by clicking button 902. Clicking button 902 replaces possible imposter table 900 with notes table 1000 (shown in FIG. 10). The user may switch back to viewing possible imposter table 900 by clicking button 1001.

(32) In a preferred embodiment, a third dynamically generated table 1100 may be generated by selecting a menu item from Call Records Search Menu 1101, and filling in associated Call Record Search Data Box 1102. For instance, in FIG. 11, call records table 1100 represents all the calls made by inmate number 165388, and in FIG. 12, call records table 1200 represents all the calls made to the phone number filled into box 1102. In a preferred embodiment of the present invention, when dynamically generated tables (such as 601, 900, or 1200) do not fit in their allotted space either horizontally or vertically or both, vertical and/or horizontal elevators such as 1201, 1202, 1203 allow vertical and/or horizontal scrolling of the table which does not fit in its allotted space. Likewise, the entire display window may be scrolled by web browser elevator 1204. In a preferred embodiment, the graphical user interface of the present invention is implemented using asynchronous JavaScript and XML (AJAX) programming standards which have evolved for websites and the like.

(33) In a preferred embodiment for use in correctional institutions, a graphical user interface such as shown in FIG. 13 may display inmate conversation waveform 1300 and called-party conversation waveform 1301 separately.

(34) In a preferred embodiment for use in correctional institutions, a graphical user interface such as shown in FIG. 14 allows call review personnel to highlight a portion of a conversation participant's speech (such as inmate speech segment 1400), and build a speech model, voiceprint, voice signature, or the like from that segment by clicking Voice Modeling button 1401. In a preferred embodiment, this function leads to a dialog box which allows the user to build a voice model or voiceprint, and store any voice model or voiceprint produced for further use. A preferred embodiment of the present invention allows automated searching of past conversations to detect voices of conversation participants whose voice characteristics fit a given voice model or voiceprint.
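The build-and-store step above can be sketched minimally. The "model" here is just the mean of toy per-frame feature vectors; a real system would derive an acoustic model (GMM, i-vector/x-vector, or similar) from the highlighted segment. All names below are hypothetical.

```python
# Minimal sketch of building a voice model from a user-selected speech
# segment and storing it for later search of past or future calls.

def build_voice_model(segment_frames):
    """Average per-frame feature vectors into a single model vector.
    segment_frames: list of equal-length feature lists."""
    n = len(segment_frames)
    dims = len(segment_frames[0])
    return [sum(frame[d] for frame in segment_frames) / n for d in range(dims)]

# Store keyed by an investigator-chosen name, so the model can be used
# to search archived calls or flag future ones.
voice_model_store = {}

def store_voice_model(name, segment_frames):
    voice_model_store[name] = build_voice_model(segment_frames)
```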

(35) A preferred embodiment of the present invention for use in correctional institutions allows officials to automatically flag future calls on which a voice matching a given voiceprint or fitting a given voice model is detected.

(36) Thus a preferred embodiment of the present invention allows correctional officers to quickly specify a segment of speech within a graphical interface, and search a set of past phone calls and/or flag future phone calls when the voice of the person speaking in the specified speech segment is detected on such calls. This allows easy searching for patterns of identity theft and the like, once an identity theft perpetrator or suspect has been identified by voice segment.

(37) A preferred embodiment of the present invention for use in financial institutions similarly facilitates the building of a voice model or voiceprint from a speech segment of a call on which a credit card fraud perpetrator or suspect's voice segment has been identified, and similarly allows the automated searching of past recorded conversations for occurrences of that person's voice. Such preferred embodiment also allows automated real-time flagging of calls where such perpetrator's voice is detected, allowing improved fraud prevention, and facilitating automated call tracing to aid law enforcement officials in apprehending credit card fraud suspects.

(38) For example, if a customer named Fred had a credit card, and a credit card fraud perpetrator were to obtain Fred's personal identifying information (such as social security number and mother's maiden name), call Fred's credit card company, pretend to be Fred, claim to have moved, and have a new credit card sent to a different address, once the fraud was detected, the credit card company could (using the present invention) find the archived phone conversation, make a model of the perpetrator's voice, automatically detect instances of the perpetrator's voice on the phone in the future, and automatically block future fraud attempts of the perpetrator, while simultaneously in real time alerting law enforcement officials of the phone number the perpetrator calls in from.

(39) In a preferred embodiment, call review officials can use a graphical interface such as shown in FIG. 14 to mark a speech segment which is exemplary of a given emotional state of a given speaker, and a model of that emotion in that person's voice can be stored. Subsequently, past conversations in which that speaker participated can be searched for instances of similar emotional states. Likewise, future conversations may be monitored in real time to detect the occurrence of a similar emotional state, and such detection can be configured to flag such conversations for later review, or alert personnel in real time when such conversations are taking place.

(40) In a preferred embodiment, call review officials can use a graphical interface such as shown in FIG. 14 to mark a speech segment which is exemplary of a given speech mannerism or phonetic pattern, and store a model of that speech mannerism or phonetic pattern. Subsequently, past conversations can be searched for instances of a similar speech mannerism or phonetic pattern. Likewise, future conversations may be monitored in real time to detect the occurrence of a similar speech mannerism or phonetic pattern, and such detection can be configured to flag such conversations for later review, or alert personnel in real time when such conversations are taking place.

(41) In a preferred embodiment of the present invention for use in financial institutions, certain conversation participants are automatically determined to be “genuine” (either in real time or by subsequent evidence), and speech models or voiceprints are automatically built and saved from such verified genuine recordings to aid in fraud prevention on future calls. For example, if within a given conversation a customer arranges to make a particular payment, and such arranged payment is subsequently made, then, once that payment has been made, the voice in that original recorded conversation can be assumed to be the voice of the customer.

(42) Key points for Financial Services fraudulent transaction detector.

(43) I. Classify Customer Service to Customer phone conversations as potential or non-potential fraud transactions.
  A. Potential fraudulent transactions (examples):
    i. Change of address
    ii. New credit card request
    iii. Change in password
    iv. Credit limit increase requests
  B. No-fraud-potential transactions (examples):
    i. Clarification of a charge
    ii. Arrangements for direct payment
II. Build voice models by credit card number or corresponding customer ID.
  A. Build voice models for no-fraud-potential transactions:
    i. If significantly different than the previous voice model, build a new model.
    ii. If similar to the previous voice model, add it to the previous voice model and make a new model.
  B. Build voice models for fraudulent transactions:
    i. When a confirmed impostor is detected, build a voice model for the transaction and add the voice model to the impostor models.
III. Fraud detection.
  A. When calls come into the Call Center:
    i. Check transaction type.
    ii. If a potential fraudulent transaction, use JLG Technologies software to look for impostor voice model matches.
    iii. Provide real-time alerts on high-potential-loss calls.
    iv. Use lie detection software on high-potential-fraud calls.
  B. Post-call processing:
    i. If a call is considered fraudulent after the fact, add the perpetrator's voice to the impostor group and use this as part of the fraudulent voice detection group.
IV. Reporting.
  A. The system generates reports to flag potential fraudulent activity:
    i. Low-scoring voice model customer calls that have high-scoring impostor scores.
    ii. Use lie detection on high-potential-fraud calls.
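The triage step in the outline above (classify the transaction, then decide whether to run impostor-model matching and whether to alert in real time) can be sketched as follows. The transaction names follow the outline; the set membership test, the alert threshold, and the return values are assumptions of this sketch.

```python
# Sketch of the fraud-triage step: classify an incoming Call Center call
# by transaction type and decide the action to take.

POTENTIAL_FRAUD = {"change_of_address", "new_card_request",
                   "password_change", "credit_limit_increase"}
NO_FRAUD = {"charge_clarification", "direct_payment_arrangement"}

def triage_call(transaction_type, impostor_match_score, alert_threshold=0.8):
    """impostor_match_score: best match (0..1) of the caller's voice
    against the stored impostor voice models."""
    if transaction_type in NO_FRAUD:
        # Per the outline, these calls also feed genuine voice models.
        return "skip_impostor_check"
    if transaction_type in POTENTIAL_FRAUD:
        if impostor_match_score >= alert_threshold:
            return "real_time_alert"      # high-potential-loss call
        return "flag_for_review"
    return "unknown_transaction"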

(44) In a preferred embodiment, the voice model used to ongoingly verify the identity of a conversation participant is automatically updated due to circumstances detected in the conversation. For instance, if it is determined that the conversation participant Fred has switched from speaking to an adult to speaking to a child, a voice model previously derived during a speech segment when Fred was speaking to a child would be substituted for the model used when Fred is speaking to an adult. In a preferred embodiment, the boundaries of the speech segments when Fred is speaking to an adult and when Fred is speaking to a child would be determined by speech-to-text conversion, and detection of phrases spoken by Fred such as “put <name> on the phone”, or “let me speak to <name>”, or phrases spoken by another conversation participant such as “I'll put <name> on the phone”, or “here she is”, or “here's <name>”. In a preferred embodiment, a second detected circumstance which would cause a switch in the voice model used to ongoingly identify Fred would be the detection of a new conversation participant's voice just before the speech segment from Fred for which the second voice model is used.

(45) In the above example, in a preferred embodiment, a third detected circumstance that will cause the system to switch to using a different voice model for ongoing verification of Fred's identity would be the detection of speech from Fred's side of the line (determined, for instance, by telephonic directional separation in an institutional phone call) which is no longer easily verifiable as coming from Fred using the voice model used to verify Fred's previous conversation segment. In a preferred embodiment, when the system switches to using a different voice model for Fred, the system will keep using that voice model until that voice model is no longer a good match, or until circumstances are detected that indicate the system should switch to a different model. As above, such circumstances may include detected words spoken by Fred or another conversation participant or both, and such circumstances may include a change in the detected voice of the person with whom Fred is conversing.
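The phrase-triggered model switch described in the two paragraphs above can be sketched simply: scan speech-to-text output for hand-off phrases and swap the verification model when one is detected. The phrase list is drawn from the examples in the text; the substring matching and the two-model scheme are simplifying assumptions.

```python
# Sketch of phrase-triggered voice-model switching: when a hand-off
# phrase is detected in the transcript, the system switches to the
# alternate voice model for ongoing verification of the same speaker.

HANDOFF_PHRASES = ["on the phone", "let me speak to", "here she is", "here's"]

def detect_handoff(transcript_line):
    """True if the line suggests the far-end conversation partner is
    changing (e.g., from an adult to a child)."""
    text = transcript_line.lower()
    return any(phrase in text for phrase in HANDOFF_PHRASES)

def select_model(current_model, transcript_line, alternate_model):
    """Return the voice model to use for the next speech segment."""
    if detect_handoff(transcript_line):
        return alternate_model
    return current_model
```

A fuller implementation would also trigger the switch on the other circumstances named above: a new far-end voice appearing, or the current model's verification score dropping below threshold.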

(46) In a preferred embodiment for use in applications where it is desirable to perform ongoing identity verification via voice, when the calculated probability that an imposter is speaking rises above a certain level (or the calculated probability that the correct speaker is speaking falls below a certain level), the system performs an automated search for a best match of the speaker's voice with voice models of suspected imposters. In a preferred embodiment, this imposter search is done in an efficient manner. In one preferred embodiment, voices of persons who could be imposters are divided into a plurality of groupings based on certain measured characteristics of the voice, and only voice models within the grouping into which the voice being checked falls are checked for a match to the possible imposter speaking.
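The grouping scheme above can be sketched as follows. The choice of grouping characteristic (a toy "mean pitch" band here) and the band width are assumptions; the point is only that a suspect voice is compared in detail against models in its own group rather than against the full population.

```python
# Sketch of cohort-style grouping for efficient imposter search: bucket
# voice models by a coarse measured characteristic, then restrict the
# detailed match to models in the suspect voice's own bucket.

def pitch_band(mean_pitch_hz, band_width=25):
    """Coarse bucket index for a voice's assumed mean pitch."""
    return int(mean_pitch_hz // band_width)

def build_cohorts(models):
    """models: {name: mean_pitch_hz} -> {band_index: [names]}"""
    cohorts = {}
    for name, pitch in models.items():
        cohorts.setdefault(pitch_band(pitch), []).append(name)
    return cohorts

def candidate_imposters(suspect_pitch_hz, cohorts):
    """Only models in the suspect's band proceed to detailed matching."""
    return cohorts.get(pitch_band(suspect_pitch_hz), [])
```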

(47) In the literature, persons whose voices get classified into one of a plurality of classifications are sometimes referred to as “cohort speakers”. The concept of cohort speakers can be thought of as a way to save time searching for a speaker's identity, by comparing a small region of one speaker's speech-feature space with a like region of another speaker's speech-feature space, rather than doing an exhaustive comparison of the entire feature space. This is analogous to comparing a portion of a fingerprint, and only going further in the comparison if that portion appears to be a match.

(48) In a preferred embodiment, another way in which the search is done in an efficient manner is to limit the set of potential imposters to the set of persons known to have access to the phone on which the potential imposter is speaking, at the time the potential imposter is speaking. This limitation feature would, for instance, be useful in limiting the set of potential imposters who might impersonate a given prison inmate, but it would not be as useful in applications such as ongoing voice verification of customers during telephone banking and other financial transactions which might be carried out over the phone. In such financial applications, potential imposter voice models to be searched could, for instance, start with voice models derived from voices previously recorded in transactions which were later determined to be fraudulent.

(49) In a preferred embodiment for use in prisons, rather than simply beginning with voice verification (based on an assumed identity of an inmate from a PIN provided or the like), the system starts by performing voice identification, based on a maximum-likelihood search within a set of voice models believed to be representative of all possible persons who could be speaking on a given telephone. Once the most likely identity is derived, that identity is compared to the claimed identity of the inmate speaking. If the claimed identity and the maximum-likelihood-search-derived identity are identical, then the system switches into ongoing voice verification mode.
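The identify-then-verify flow above can be sketched as follows. This is a hedged illustration only: the feature vectors stand in for real voice models, the distance-based likelihood is an assumption, and the function and model names are invented for the example.

```python
import math

# Illustrative sketch of identification followed by verification: a
# maximum-likelihood (here, minimum-distance) search over the closed set of
# possible speakers, then comparison with the claimed (PIN-derived) identity.
MODELS = {"fred": [1.0, 2.0], "al": [4.0, 1.0], "ted": [4.5, 1.2]}

def most_likely_speaker(features, models):
    """Return the closed-set identity whose model best matches the features."""
    return min(models, key=lambda name: math.dist(models[name], features))

def start_monitoring(claimed_id, features, models):
    """Switch to ongoing verification only if identification confirms the claim."""
    found = most_likely_speaker(features, models)
    if found == claimed_id:
        return "verification-mode"
    return f"identity-mismatch:{found}"

print(start_monitoring("al", [4.1, 1.0], MODELS))  # prints "verification-mode"
```

If the PIN-claimed identity and the maximum-likelihood identity disagree, the sketch reports a mismatch instead of entering verification mode, mirroring the paragraph's logic.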

(50) In a preferred embodiment of the present invention for use in applications where the set of possible conversation participants is known and closed (for instance in a correctional institution, where the set of all possible persons who might use a given telephone is known), the present invention first identifies which voice model or voiceprint most closely matches the voice characteristics of a conversation participant being identified. In such an embodiment, the present invention then switches to ongoing-voice-verification mode. In such a preferred embodiment, the threshold used by the present invention to decide whether the identity of the conversation participant being monitored has changed is chosen based upon the difference between how far the parameters of the voice of the person speaking are from the parameters of the voice model with which the speaker has been identified, and how far those parameters are from the parameters of the voice model of the next most likely person the speaker might have been identified as.

(51) An example is provided here for added clarity. Suppose the set of possible conversation participants within a given prison cellblock consists of Fred, Bob, Joe, Ted, and Al. One of these men gets on the phone and begins speaking. The present invention compares the voice parameters of the man who is speaking with the previously stored voice parameters of Fred, Bob, Joe, Ted, and Al. The closest match is found to be Al. The next closest match is found to be Ted. After the conversation participant has been identified as Al, the system switches into voice verification mode. If the voice parameters of the person speaking closely match the stored voice parameters of Al, and the voice parameters of the person speaking are very different from the stored voice parameters of Ted, a “large” tolerance will be allowed in the ongoing identity verification of Al as the conversation progresses. On the other hand, if the voice parameters of the person speaking closely match the stored voice parameters of Al, and the voice parameters of the person speaking are also fairly close to the stored voice parameters of Ted (though not as close as they are to the voice parameters of Al), a “small” tolerance will be allowed in the ongoing identity verification of Al as the conversation progresses. Also, if the voice parameters of the person speaking are not very close to the stored voice parameters of Al but they're a bit closer to the stored voice parameters of Al than they are to the stored voice parameters of Ted, then again a “small” tolerance will be allowed in the ongoing voice verification of Al.
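The tolerance rule in this example can be sketched numerically. This is a hypothetical sketch, assuming two-dimensional feature vectors, Euclidean distance, and an arbitrary 0.5 scaling factor on the margin between the best match and the runner-up; none of these specifics come from the patent text.

```python
import math

# Illustrative sketch: the drift tolerance for ongoing verification shrinks
# when the runner-up voice model is close to the best-matching model.
def identify_with_tolerance(features, models):
    """Return (best-match name, drift tolerance) for the measured features."""
    ranked = sorted(models, key=lambda n: math.dist(models[n], features))
    best, runner_up = ranked[0], ranked[1]
    d_best = math.dist(models[best], features)
    d_next = math.dist(models[runner_up], features)
    # Assumed rule: allow drift up to half the best/runner-up margin.
    tolerance = 0.5 * (d_next - d_best)
    return best, tolerance

models = {"al": [0.0, 0.0], "ted": [3.0, 4.0], "fred": [10.0, 0.0]}
name, tol = identify_with_tolerance([0.1, 0.0], models)
print(name, round(tol, 2))  # prints: al 2.42
```

Moving Ted's model closer to Al's would shrink the computed tolerance, matching the “small tolerance when another voice is similar” behavior described above.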

(52) Thus, in applications where there is little tolerance for errors in voice identification and/or verification, the present invention allows larger variations in a given person's voice before identifying such person as an imposter in closed-population situations where no other persons have similar voices than it allows in closed-population situations where one or more other persons have voices similar to the voice of the person speaking. Thus, the present invention may be thought of in such applications as having an “imposter detection” threshold which is representative of the amount by which the voice parameters of a given conversation participant are allowed to vary during a conversation before that participant is considered an imposter, and the imposter detection threshold for different conversation participants within a given closed set of possible conversation participants will differ from one conversation participant to another, depending on whether there are other persons within the possible set of conversation participants whose voices are similar to the voice of the conversation participant being identified and/or ongoingly verified within a conversation.

(53) Within this document, the variation tolerance allowed for the measured parameters of a given speaker's voice during ongoing voice verification within a given conversation will be referred to as that speaker's voice identity tolerance within that conversation. Voice identity tolerance may be defined one-dimensionally or multi-dimensionally, depending on the nature of the voice parameter measurements made in a given embodiment of the present invention.

(54) In a preferred embodiment of the present invention incorporating initial voice identification followed by subsequent ongoing voice verification, should the ongoingly monitored parameters of the speaker's voice vary by more than the allowed voice identity tolerance (or imposter detection tolerance), the system switches back from voice verification mode into voice identification mode, and once again the most likely identity of the speaker is chosen from within a known closed possible set of speakers, based upon a maximum likelihood search.

(55) In a preferred embodiment of the present invention employing a combination of voice identification and voice verification techniques, statistics indicative of the certainty to which a given conversation participant has been identified in a given conversation are stored along with the voice data and/or text data derived from that conversation through speech-to-text conversion.

(56) In a preferred embodiment, more than one voice model or voiceprint may be stored for a given conversation participant. For instance, if it is found that the voice parameters of a given conversation participant are distinctly and predictably different when he or she speaks to a child on the phone versus when he or she speaks to an adult on the phone, it may be more advantageous to store two separate voice models or voiceprints than to try to average the voice parameters of that conversation participant between those two different modes of speaking.

(57) In a preferred embodiment of the present invention employing ongoing voice identity verification within a known closed population, and employing voice characterization of an n-dimensional nature, the n-dimensional distance from the voice model of the assumed “correct” conversation participant to the voice model of the “most likely imposter” within the closed population is determined, and ongoing voice verification is done in such a way that if the ongoingly measured voice parameters of the person speaking drift away from their ideal position in voice parameter space by more than half the distance to the closest possible imposter, then a “likely imposter” alarm condition is generated.
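The half-distance alarm rule can be sketched as below. This is a minimal illustration assuming a two-dimensional feature space and Euclidean distance; the real embodiment is n-dimensional and the model values here are invented.

```python
import math

# Illustrative sketch of the half-distance rule: raise a "likely imposter"
# condition when the measured voice parameters drift from the speaker's own
# model by more than half the distance to the nearest other model.
def likely_imposter(measured, speaker, models):
    """Return True if drift exceeds half the distance to the nearest imposter."""
    home = models[speaker]
    nearest_other = min(
        math.dist(home, m) for n, m in models.items() if n != speaker
    )
    drift = math.dist(home, measured)
    return drift > 0.5 * nearest_other

models = {"al": [0.0, 0.0], "ted": [4.0, 0.0]}
print(likely_imposter([1.0, 0.0], "al", models))  # False: drift 1.0 <= 2.0
print(likely_imposter([2.5, 0.0], "al", models))  # True: drift 2.5 > 2.0
```

As paragraph (58) notes, this symmetric radius can be unnecessarily tight when the drift points away from the nearest imposter, which is why a direction-dependent allowance is preferred there.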

(58) It can be seen that within a multidimensional model space, while such a “maximum allowed drift distance” determination is optimal if the speaker's voice parameters drift in the exact direction of the n-dimensional position of the most likely imposter's voice, this alarm condition's tolerance may be unnecessarily tight if the ongoingly measured voice parameters of the speaker drift within the n-dimensional space in a direction other than the direction toward the most likely imposter, and the next most likely imposter is significantly further away in the n-dimensional space than the first most likely imposter. Thus, in a preferred embodiment, the allowed drift distance (within n-dimensional voice model space) of the ongoingly measured voice parameters of the conversation participant differs depending on the direction (within n-dimensional parameter space) of the drift detected.

(59) During the process of ongoing voice identity verification, the shorter the speech segment used to derive voice model parameters is, the less accurate the derived voice model parameters can be assumed to be. However, if the length of a speech segment used to verify identity of a conversation participant is inadvertently made so long that part of it may contain speech from more than one conversation participant, then the identity verification accuracy available from the speech segment is actually less than the identity verification accuracy available from a shorter speech segment which only contains speech from one conversation participant.

(60) In a preferred embodiment of the present invention for use in correctional institutions and the like, the length of speech segments used for ongoing identity verification is chosen based on the boundary conditions of those speech segments. Since it is much more likely that an imposter will enter a conversation immediately following a pause in speech on the side of the line on which the imposter enters the conversation, normally such pauses are used to determine the boundaries of speech segments used in ongoing identity verification. However, it is also possible that inmates in a correctional institution will learn of such an algorithm and will attempt to have an imposter join a conversation by overlapping the imposter's speech with the speech of the “correctly” verified conversation participant. A preferred embodiment of the present invention therefore employs a second speech segment boundary determining mechanism which operates by detecting the simultaneous presence of two voices. Such a detection may be done, for instance, through phase locked frequency domain measurements, or time domain measurements which detect the presence of the periodic pulse trains of more than one set of vocal cords at the same time.

(61) In addition to the use of voiceprints and voice models (such as Gaussian mixture models) known in the art, in a preferred embodiment, the present invention additionally utilizes detection of and classification of speech mannerisms of conversation participants as part of the evidence used to ongoingly verify the identity of conversation participants. In a preferred embodiment, measured speech mannerisms include statistical probabilities of use of words, accents, timing, and the like, derived from previous known samples of a given conversation participant's speech.

(62) Real-world applications often make computational efficiency an important benchmark of system performance. Within the present invention, one of the methods used to increase computational efficiency without sacrificing too much in the way of false “imposter” detection is not to analyze all of the speech within a given speech segment. In a preferred embodiment, speech segments which are considered to be “long”, and which are considered to be highly unlikely to contain speech from more than one individual, only need to have a fraction of their speech content analyzed for identity verification purposes. As long as “enough” of the speech within the segment is analyzed, the certainty to which identity verification is carried out can be fairly high. One way in which the present invention is able to select a subset of speech to be analyzed is to analyze every Nth 10 ms of speech, where the number N is chosen to be small enough to allow reliable voice identity verification. It should be noted that this style of data reduction may not be usable in situations where speech mannerisms, including temporal patterns in speech, are being analyzed as part of the identity verification process.
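The every-Nth-frame scheme above can be sketched as follows. The 8 kHz sample rate and the choice N = 4 are illustrative assumptions (8 kHz is typical of telephone audio, but the patent does not specify a rate).

```python
# Illustrative sketch of the data-reduction scheme: split a long speech
# segment into 10 ms frames and pass only every Nth frame to the analyzer.
SAMPLE_RATE = 8000           # assumed 8 kHz telephone audio
FRAME = SAMPLE_RATE // 100   # 80 samples per 10 ms frame

def frames_to_analyze(samples, every_nth):
    """Return every Nth 10 ms frame of a speech segment."""
    frames = [samples[i:i + FRAME]
              for i in range(0, len(samples) - FRAME + 1, FRAME)]
    return frames[::every_nth]

segment = list(range(SAMPLE_RATE))      # one second of dummy samples
subset = frames_to_analyze(segment, 4)  # analyze every 4th frame
print(len(subset))  # prints 25 (of the 100 frames in one second)
```

With N = 4, only a quarter of the segment is analyzed, which is the efficiency/certainty trade-off the paragraph describes.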

(63) When a speech feature space is mathematically defined such that a certain area of that speech feature space is occupied only by features of the speech of a known subset of possible speakers, the mapping of such known subsets of possible speakers to areas of such a speech feature space is known as a “codebook”. It is mathematically possible to define a codebook such that the known subsets of possible speakers referred to by the codebook are each individual speakers. In a case where the codebook is defined such that the known subsets of possible speakers referred to by the codebook each contain a group of possible speakers, the individuals in such a group of possible speakers are referred to as “cohort speakers”.
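The codebook concept above can be sketched as a mapping from feature-space regions to speaker subsets. This is a deliberately simplified one-dimensional illustration; real codebooks partition a high-dimensional feature space, and all boundaries and names here are invented.

```python
# Illustrative sketch of a codebook: regions of a one-dimensional feature
# space mapped to subsets of possible speakers (cohorts). A region may map
# to a group of cohort speakers or to a single individual speaker.
CODEBOOK = [
    ((0.0, 1.0), {"fred", "bob"}),   # cohort of two speakers
    ((1.0, 2.0), {"joe", "ted"}),    # cohort of two speakers
    ((2.0, 3.0), {"al"}),            # region mapped to an individual
]

def cohort_for(feature):
    """Return the speaker subset whose region contains the feature value."""
    for (lo, hi), speakers in CODEBOOK:
        if lo <= feature < hi:
            return speakers
    return set()

print(sorted(cohort_for(1.4)))  # prints ['joe', 'ted']
```

When every region maps to a single speaker, the codebook performs identification directly; when regions map to groups, it narrows the search to cohort speakers, as the paragraph defines.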

(64) In a preferred embodiment, digital signal processing involved in derivation of voice prints, voice models, phonetic sequences, and text from audio is carried out on DSP processors on board commercially available graphics cards such as those used for high-end computer games. In a preferred embodiment, one or more such graphics boards are installed in a personal computer (PC) to implement the computational hardware of the present invention. Carrying out digital signal processing on such cards facilitates far more economical implementation of the features of the present invention than implementing all the computation of the present invention on typical personal computer CPUs.

(65) A call monitor/review interface according to a preferred embodiment of the present invention is shown in FIG. 15. Graphical face symbol/name pairs such as 1506 are added to the display as they are detected in a live monitoring application, or all appear when a call to be reviewed is accessed. In a preferred embodiment, during call monitoring using a simple telephone interface, pressing a particular DTMF digit on the monitoring phone will cause the present invention to enunciate identifying information about the individual speaking.

(66) Within this document, means for prompting shall be construed to include recorded voice prompt played under computer control, synthetic voice prompts played under computer control, text prompting on a visual computer-controlled display, prompting by displaying detected condition information stored in a database in response to a database query, and any other means for prompting known in the art. Within this document, means for deriving a voice print or voice model shall include means such as referenced directly or indirectly in U.S. Pat. No. 7,379,868, and any speaker identification and voice identification publications known in the art.

(67) Within this document, means for recording voice signals from telephone conversations shall be construed to include means disclosed in documents such as U.S. Pat. Nos. 7,844,252 and 7,889,847 (which are herein included by reference), and hardware such as available from Adtran for converting analog telephone signals to digital form, and compressing digital speech streams by known compression algorithms such as G.729, and hardware which implements a SIP stack and communicates compressed or uncompressed digital audio in packet form over the Internet.

(68) Within this document, the term loudspeaker shall be construed to include ear-mounted audio speakers such as headphones, earbuds and earphones as well as conventional free-standing loudspeakers.

(69) Within this document, means for monitoring a telephone line for a suspected 3-way call event includes an analog-to-digital converter and may include one or more means known in the art for detecting a 3-way call event, such as silence detection means, click and pop detection means, keyword detection means, special information tone detection means, means for detecting a change in voice characteristics, line impedance change detection means, volume level change detection means, means for distinguishing yelling, or any combination of these means.

(70) Within this document, each of the following is considered to be a suspicious call criterion: identity scamming detected; 3-way call detected; spoken phone number detected; added called party; high-interest-group inmate; high-interest-group called party; newly identified link matching historical data; prior confirmed suspicious calling pattern; new called-party voice heard on call; previously identified high-interest called-party voice detected; whispering detected; prior inmate voice detected.

(71) For the purposes of this patent application, the above list shall constitute the complete list of suspicious call criteria.

(72) Within this document, the term “voice characteristic data” shall be used to refer to voice model data or voice print data derived from a sample of spoken utterances from an individual. Within this document, the phrase “alerting a call control official” shall be construed to mean either: placing a phone call to a call control official and playing an audio message, or sounding an audio alarm that can be heard by a call control official, or displaying a visual indicator that can be seen by a call control official, or displaying detected condition information stored in a database in response to a database query, or actuating a vibrating alarm that can be felt by a call control official, or some combination of these. In this document, the term “call-restricting action” shall be construed to mean either alerting a call-control official as to the nature of the call that may merit being cut off, or simply automatically cutting off the call.

(73) Within this document, the term “directional microphone” shall include not only any microphone which is inherently acoustically directional, but also any array of microphones whose signals are processed in either the analog or digital domain to produce a single output signal such that the array behaves like a directional microphone. If all the signals from an array of microphones are recorded in a time-synchronized manner, the recorded signals may be post-processed in different ways after the fact to selectively listen in different directions.

(74) The foregoing discussion should be understood as illustrative and should not be considered to be limiting in any sense. While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the claims.