Personal Voice-Based Information Retrieval System
20180007201 · 2018-01-04
Inventors
CPC classification
G10L15/02
PHYSICS
G10L17/24
PHYSICS
H04L67/02
ELECTRICITY
G10L13/08
PHYSICS
G10L15/06
PHYSICS
G06F3/167
PHYSICS
G10L15/22
PHYSICS
H04M3/4938
ELECTRICITY
International classification
H04M3/493
ELECTRICITY
G10L13/08
PHYSICS
G10L15/22
PHYSICS
Abstract
The present invention relates to a system for retrieving information from a network such as the Internet. A user creates a user-defined record in a database that identifies an information source, such as a web site, containing information of interest to the user. This record identifies the location of the information source and also contains a recognition grammar based upon a speech command assigned by the user. Upon receiving the speech command from the user that is described within the recognition grammar, a network interface system accesses the information source and retrieves the information requested by the user.
Claims
1. A method, comprising: (a) receiving a speech command from a voice-enabled device, over a network, by a speech-recognition engine coupled to a media server by an interactive voice response application including a user-defined search, the speech-recognition engine adapted to convert the speech command into a data message, the media server adapted to identify and access at least one or more websites containing information of interest to a particular user, the speech-recognition engine adapted to select particular speech-recognition grammar describing the speech command received and assigned to fetching content relating to the data message converted from the speech command and assigned to the user-defined search including a web request, along with a uniform resource locator of an identified web site from the one or more websites containing information of interest to the particular user and responsive to the web request; (b) selecting, by the media server, at least one information-source-retrieval instruction stored for the particular speech-recognition grammar in a database coupled to the media server and adapted to retrieve information from the at least one or more websites; (c) accessing, by a web-browsing server, a portion of the information source to retrieve information relating to the speech command, by using a processor of the web-browsing server, which processor (i) performs an instruction that requests information from an identified web page, (ii) utilizes a command to execute a content extractor within the web-browsing server to separate a portion of the information that is relevant from other information on the web page using a name of a named object including the information, the information derived from only a portion of the web page containing information pertinent to the speech command, the content extractor adapted to use a content-descriptor file containing a description of the portion of information and the content-descriptor file adapted to 
indicate a location of the portion of the information within the information source; (d) selecting by the web-browsing server, the information relating to the speech command from the information source and retrieving only the portion of the information requested by the speech command according to the at least one information-source-retrieval instruction; (e) converting the information retrieved from the information source into an audio message by a speech-synthesis engine, the speech-synthesis engine coupled to the media server; and (f) transmitting the audio message by the voice-enabled device to the particular user.
2. The method of claim 1, wherein the speech command is received by at least one of a landline telephone, a wireless telephone, and an Internet Protocol telephone and the media server is operatively connected to at least one of a local-area network, a wide-area network, and the Internet.
3. The method of claim 2, wherein the media server functions as a user-interface system adapted to provide access to a voice-browsing system.
4. The method of claim 2, further comprising: a clipping engine adapted to initially generate the content-descriptor file that indicates the location of the portion of the information within the identified website.
5. A voice-browsing system for retrieving information from an information source that is periodically updated with current information, by speech commands received from a particular user provided via a voice-enabled device after establishing a connection between the voice-enabled device and a media server of the voice-browsing system, said voice-browsing system comprising: (a) a speech-recognition engine including a processor and coupled to the media server, the media server initiating a voice-response application once the connection between the voice-enabled device and the voice-browsing system is established, the speech-recognition engine adapted to receive a speech command from a particular user via the voice-enabled device, the media server configured to identify and access the information source via a network, the speech-recognition engine adapted to convert the speech command into a data message by selecting speech-recognition grammar established to correspond to the speech command received from the particular user and assigned to perform searches; (b) the media server further configured to select at least one information-source-retrieval instruction corresponding to the speech-recognition grammar established for the speech command, the at least one information-source-retrieval instruction stored in a database associated with the media server and adapted to retrieve information; (d) a web-browsing server coupled to the media server and adapted to access at least a portion of the information source to retrieve information indicated by the speech command, by using a processor of the web-browsing server, which processor (i) performs an instruction that requests information from an identified web page, and (ii) utilizes a command to execute a content extractor within the web-browsing server to separate a portion of the information from other information, the information derived from only a portion of a web page containing information relevant to the speech 
command, wherein the content extractor uses a content-descriptor file containing a description of the portion of information and wherein the content-descriptor file indicates a location of the portion of the information within the information source, and selecting, by the web-browsing server, the information relevant from the information source and retrieving only the portion of the information that is relevant according to the at least one information-source-retrieval instruction; and (e) a speech-synthesis engine including a processor and coupled to the media server, the speech-synthesis engine adapted to convert the information retrieved from the information source into audio and convey the audio by the voice-enabled device.
6. The voice-browsing system of claim 5, further comprising: an interface to an associated website by the network to locate requested information.
7. The voice-browsing system of claim 5, wherein the voice-enabled device accesses the voice-browsing system by at least one of a landline telephone, a wireless telephone, and an Internet Protocol telephonic connection and wherein the media server operatively connects to the network, by at least one of a local-area network, a wide-area network, and the Internet.
8. The voice-browsing system of claim 5, wherein the media server functions as a user-interface system adapted to provide access to a voice-browsing system.
9. The voice-browsing system of claim 5, further comprising: a clipping engine adapted to generate the content-descriptor file, by which, an instruction is used by the web-browsing server to request information from the identified website and the information is displayed on the voice-enabled device, wherein the information is only the portion of the web page containing information relevant to the speech command.
10. A method of selectively retrieving information in response to spoken commands received by a voice-browsing system, the method comprising: (a) identifying, as one of a plurality of speech commands of a speech-recognition lexicon, audio data indicative of words spoken into a microphone of an electronic-communication device of a user; (b) using the identified speech command to access a corresponding descriptor file from a plurality of descriptor files stored in a database associated with the voice-browsing system, and using the corresponding descriptor file to identify (i) a web-accessible information source, and (ii) request information; (c) using the request information to fetch, from the information source identified by an accessed descriptor file, response data including a named object including content; (d) using the named object to extract the content from the response data; (e) generating audio response data containing indicia of a message for the user, which message is responsive to the identified speech command, and which message is based on the extracted content; and (f) directing a command to play the audio response data using the electronic-communication device of the user.
11. The method of claim 10, wherein the content is located in the response data using the named object regardless of the location of the named object within the response data.
12. The method of claim 11, wherein the fetching occurs on a web browsing server, and wherein the web browsing server receives the identified speech command from a different server.
13. The method of claim 12, further comprising: using Internet Protocol to communicate with the electronic-communication device of the user.
14. The method of claim 12, further comprising: using a telecommunication network to communicate with the electronic-communication device of the user.
15. The method of claim 12, wherein the electronic-communication device of the user is a voice-enabled wireless unit that is not a telephone.
16. The method of claim 12, wherein the corresponding descriptor file identifies the web-accessible information source and information used to generate proper requests to the information source with a specific URL format including search parameters.
17. The method of claim 12, wherein using the request information to fetch comprises fetching the response data from a database stored on a Local Area Network (LAN) or a Wide Area Network (WAN).
18. The method of claim 12, further comprising: using the named object to determine the beginning and end of the content within the response data.
19. An apparatus with a capability of selectively retrieving information in response to spoken commands, the apparatus comprising: (a) a transceiver coupled to a network and capable of sending to and receiving information via the network from an electronic-communication device of a user, which device has a microphone; (b) a database containing a plurality of descriptor files, each of the descriptor files identifying (i) a web-accessible information source, and (ii) request information; (c) a speech-recognition engine, coupled to the transceiver and having access to the database, programmed to automatically identify, as one of a plurality of speech commands of a speech-recognition lexicon, audio data indicative of words spoken into the microphone of the electronic-communication device of a user; (d) a media server, coupled to the speech-recognition engine and having access to the database, programmed to access a descriptor file from the plurality of descriptor files in the database based on the identified speech command; (e) a web browsing server, coupled to the media server and programmed: (i) to retrieve, from the information source identified by the accessed descriptor file, responsive data specified by the request information identified by the accessed descriptor file, wherein the response data includes a named object including content; and (ii) to use the name of the named object to extract the content from the response data; and (f) a synthesizer coupled to the web browsing server and programmed to generate audio response data containing indicia of a message for the user, which message is responsive to the identified speech command, and which message is based on the extracted content; (g) the apparatus is programmed to direct a command to play an audio response data using the electronic-communication device of the user.
20. The apparatus of claim 19, wherein the web browsing server is further programmed to use the accessed descriptor file to format a request for a content fetcher.
21. The apparatus of claim 20, wherein the content fetcher is executed in response to a command included in the accessed descriptor file that is executed on the web browsing server.
22. The apparatus of claim 19, wherein the speech-recognition engine is within the media server.
23. The apparatus of claim 19, wherein the web browsing server is further programmed to use the named object to determine the beginning and end of the content within the responsive data.
24. A method of executing improved functionality of a voice-responsive system to allow selective retrieval of different kinds of information in response to commands spoken via an electronic communication device of a user in communication with the voice-responsive system, the method comprising: (a) storing, in a storage device accessible by the voice-responsive system, a speech recognition grammar that is associated with an executable function; and (b) storing, in the storage device, for the executable function, an executable function definition configured to be executed by a web browsing server of the voice-responsive system upon recognizing that a command, spoken by a user of an electronic-communication device, corresponds to the speech recognition grammar; (c) wherein the executable function definition identifies: (i) information used to generate requests to an information source that includes a URL to identify the information source and to extract content from a named object within response data obtainable from the information source accessible by the URL; and (ii) information used to format an audible message from extracted content, so that a command to synthesize an audio response message will generate a coherent sentence that responds to the command spoken by a user, wherein the audio response message is adapted to be played on a speaker of the electronic-communication device of the user.
25. The method of claim 24, further comprising: storing, in the storage device, a pronounceable name associated with an improved executable functionality.
26. The method of claim 24, further comprising: using a web page to input the speech recognition grammar associated with the executable function into the storage device of the voice-responsive system.
27. The method of claim 24, further comprising: using a web page to input the executable function into the storage device of the voice-responsive system.
28. The method of claim 24, further comprising: using a web page to input the information used to format a responsive message into the storage device of the voice-responsive system.
29. The method of claim 24, wherein the executable function includes instructions specifying information to be retrieved when a request is made.
30. The method of claim 24, wherein the executable function definition contains instructions for generating the URL in a form that depends on a word of the first speech recognition grammar.
31. An apparatus having a capability of selectively retrieving information in response to spoken commands, comprising: (a) a microphone; and (b) a speaker coupled to the microphone; and (c) wherein the electronic-communication device is in communication with a remote computer system via a network to initiate user-defined searches; and (d) wherein the remote computer system comprises: (i) a speech-recognition engine, coupled to a transceiver and having access to a database, programmed to identify, as one of a plurality of speech commands of a speech-recognition lexicon, audio data indicative of words spoken into the microphone of the electronic-communication device of a user; (ii) a media server, coupled to the speech-recognition engine and having access to a database containing a plurality of descriptor files, programmed to use the identified speech command to access the corresponding descriptor file from the plurality of descriptor files, wherein the corresponding descriptor file is used to identify (i) a web-accessible information source, and (ii) request information; (iii) a web browsing server programmed: (A) to use the request information to fetch, from the information source identified by the accessed descriptor file, response data including a named object including particular content; and (B) to use a name associated with the named object to extract the content from the response data; (iv) a speech-synthesizer coupled to the web browsing server and programmed to generate audio response data containing indicia of a message for the user, which message is responsive to the identified speech command, and which message is based on the extracted content; and (v) wherein the remote computer system is programmed to direct a command to play the audio response data on the speaker.
32. The apparatus of claim 31, wherein the network is the Internet.
33. The apparatus of claim 31, wherein the network is a telecommunication network.
34. The apparatus of claim 31, wherein the electronic-communication device is a voice-enabled wireless unit that is not a telephone.
35. The apparatus of claim 31, wherein the web browsing server is further programmed to use the named object to determine the beginning and end of the content within the responsive data.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0018] The present invention uses various forms of signal and data transmission to allow a user to retrieve customized information from a network using speech communication. In the preferred embodiment of the present invention, a user associates information of interest found on a specific information source, such as a web site, with a pronounceable name or identification word. This pronounceable name/identification word forms a recognition grammar in the preferred embodiment. When the user wishes to retrieve the selected information, he may use a telephone or other voice enabled device to access a voice browser system. The user then speaks a command described in the recognition grammar associated with the desired information. The voice browsing system then accesses the associated information source and returns to the user, using a voice synthesizer, the requested information.
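The association described above can be sketched as a simple lookup: a recognition grammar maps to a user-defined record holding the information-source URL and the command that launches the content extraction agent. The following is an illustrative Python sketch only; the patent's own implementation is Perl (Tables 2 and 3 below), and the names `GrammarRecord`, `GRAMMAR_DB`, and `resolve_command`, as well as the sample record, are assumptions for illustration, not identifiers from the patent.

```python
# Minimal sketch of the user-defined record lookup described above.
# All identifiers here are illustrative assumptions, not the patent's.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GrammarRecord:
    phrase: str          # pronounceable name forming the recognition grammar
    url: str             # location of the information source
    extractor_cmd: str   # command that launches the content extraction agent

GRAMMAR_DB = {
    "chicago weather": GrammarRecord(
        phrase="chicago weather",
        url="http://cgi.cnn.com/cgi-bin/weather/redirect?zip=60605",
        extractor_cmd="webget.pl weather_cnn",
    ),
}

def resolve_command(spoken_text: str) -> Optional[GrammarRecord]:
    """Return the user-defined record whose grammar matches the command."""
    return GRAMMAR_DB.get(spoken_text.strip().lower())
```

A matched command yields the record the voice browsing system needs to fetch and extract the requested content; an unrecognized phrase yields no record.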
[0019] Referring to
[0020] The clipping client 110 allows a user 100 to create a set of instructions for use by the voice browsing system 108 in order to report personalized information back to the user upon request. The instruction set is created by “clipping” information from the identified web site. A user 100 may be interested in weather for a specific city, such as Chicago. The user 100 identifies a web site from which he would like to obtain the latest Chicago weather information. The clipping client 110 is then activated by the user 100.
[0021] The clipping client 110 displays the selected web site in the same manner as a conventional web browser such as Microsoft's® Internet Explorer.
[0022] Table 1 below is an example of a content descriptor file created by the clipping client of the preferred embodiment. This content descriptor file relates to obtaining weather information from the web site www.cnn.com.
TABLE-US-00001 TABLE 1
table name: portalServices
column: service
content: weather
column: config
content:
[cnn]
Input=_zip
URL=http://cgi.cnn.com/cgi-bin/weather/redirect?zip=zip
Pre-filter="\n" " "
Pre-filter="<[^<>]+>" " "
Pre-filter=/\s+/ " "
Pre-filter="[\(\)\|]" "!"
Output=_location
Output=first_day_name
Output=first_day_weather
Output=first_day_high_F
Output=first_day_high_C
Output=first_day_low_F
Output=first_day_low_C
Output=second_day_name
Output=second_day_weather
Output=second_day_high_F
Output=second_day_high_C
Output=second_day_low_F
Output=second_day_low_C
Output=third_day_name
Output=third_day_weather
Output=third_day_high_F
Output=third_day_high_C
Output=third_day_low_F
Output=third_day_low_C
Output=fourth_day_name
Output=fourth_day_weather
Output=fourth_day_high_F
Output=fourth_day_high_C
Output=fourth_day_low_F
Output=fourth_day_low_C
Output=undef
Output=_current_time
Output=_current_month
Output=_current_day
Output=_current_weather
Output=_current_temperature_F
Output=_current_temperature_C
Output=_humidity
Output=_wind
Output=_pressure
Output=_sunrise
Output=_sunset
Regular_expression=WEB SERVICES: (.+) Forecast FOUR-DAY FORECAST (\S+) (\S+) HIGH (\S+) F (\S+) C LOW (\S+) F (\S+) C (\S+) (\S+) HIGH (\S+) F (\S+) C LOW (\S+) F (\S+) C (\S+) (\S+) HIGH (\S+) F (\S+) C LOW (\S+) F (\S+) C (\S+) (\S+) HIGH (\S+) F (\S+) C LOW (\S+) F (\S+) C WEATHER MAPS RADAR (.+) Forecast CURRENT CONDITIONS (.+) !local!, (\S+) (\S+) (.+) Temp: (\S+) F, (\S+) C Rel. Humidity: (\S+) Wind: (.+) Pressure: (.+) Sunrise: (.+) Sunset: (.+)
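The descriptor format above pairs each capture group of the Regular_expression with the Output name in the same position. The following is a minimal Python sketch of that positional pairing; the helper `extract_outputs` and the shortened sample descriptor are illustrative assumptions, not the patent's Perl implementation.

```python
# Illustrative sketch: bind each regex capture group to the Output name
# in the same position, as the descriptor format above implies.
import re

def extract_outputs(descriptor_lines, page_text):
    outputs, pattern = [], None
    for line in descriptor_lines:
        key, _, value = line.partition("=")
        if key == "Output":
            outputs.append(value)
        elif key == "Regular_expression":
            pattern = value
    match = re.search(pattern, page_text)
    return dict(zip(outputs, match.groups())) if match else {}

# Shortened, hypothetical descriptor for demonstration only.
descriptor = [
    "Output=_location",
    "Output=_current_temperature_F",
    r"Regular_expression=(\S+) Temp: (\S+) F",
]
fields = extract_outputs(descriptor, "Chicago Temp: 41 F")
```

Each captured value lands under the Output name declared at the matching position, which is how the system turns a flat page of text into named fields it can speak back to the user.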
[0023] Finally, the clipping client 110 prompts the user to enter an identification word or phrase that will be associated with the identified web site and information. For example, the user could associate the phrase “Chicago weather” with the selected URL 202 and related weather information 204. The identification word or phrase is stored as a personal recognition grammar that can now be recognized by a speech recognition engine of the voice browsing system 108 which will be discussed below. The personal recognition grammar, URL address 202, and a command for executing a content extraction agent are stored within a database used by the voice browser system 108 which will be discussed below.
[0024] The voice browsing system 108 used with the preferred embodiment will now be described in relation to
[0025] The database 300 may also contain a listing of pre-recorded audio files used to create concatenated phrases and sentences. Further, database 300 may contain customer profile information, system activity reports, and any other data or software servers necessary for the testing or administration of the voice browsing system 108.
[0026] The operation of the media servers 304 will now be discussed in relation to
[0027] The speech recognition function is performed by a speech recognition engine 500 that converts voice commands received from the user's voice enabled device 10 (i.e., any type of wire line or wireless telephone, Internet Protocol (IP) phones, or other special wireless units) into data messages. In the preferred embodiment, voice commands and audio messages are transmitted using the PSTN 308 and data is transmitted using the TCP/IP communications protocol. However, one skilled in the art would recognize that other transmission protocols may be used, including SIP/VoIP (Session Initiation Protocol/Voice over IP), Asynchronous Transfer Mode (ATM), and Frame Relay. A preferred speech recognition engine is developed by Nuance Communications of 1380 Willow Road, Menlo Park, Calif. 94025 (www.nuance.com). The Nuance engine capacity is measured in recognition units based on CPU type as defined in the vendor specification. The natural speech recognition grammars (i.e., what a user can say that will be recognized by the speech recognition engine) were developed by Webley Systems.
[0028] In the preferred embodiment, when a user accesses the voice browsing system 108, he will be asked whether he would like to use his “user-defined searches.” If the user answers affirmatively, the media servers 304 will retrieve from the database 300 the personal recognition grammars 402 defined by the user while using the clipping client 110.
[0029] The media servers 304 also contain a speech synthesis engine 502 that converts the data retrieved by the web browsing servers 302 into audio messages that are transmitted to the user's voice enabled device 306. A preferred speech synthesis engine is developed by Lernout and Hauspie Speech Products, 52 Third Avenue, Burlington, Mass. 01803 (www.lhsl.com).
[0030] A further description of the web browsing server 302 will be provided in relation to
[0031] Upon receiving a user-defined web site record 400 from the database 300 in response to a user request, the web browsing server 302 invokes the “content extraction agent” command 406 contained in the record 400. The content extraction agent 600 retrieves the content descriptor file 604 associated with the user-defined record 400. As mentioned, the content descriptor file 604 directs the extraction agent where to extract data from the accessed web page and how to format a response to the user utilizing that data. For example, the content descriptor file 604 for a web page providing weather information would indicate where to insert the “city” name or ZIP code in order to retrieve Chicago weather information. Additionally, the content descriptor file 604 for each supported URL indicates the location on the web page where the response information is provided. The extraction agent 600 uses this information to properly extract from the web page the information requested by the user.
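The ZIP-code insertion step described above can be sketched as simple placeholder substitution. This is an illustrative Python sketch assuming a convention in which each Input name appears verbatim as a placeholder in the stored URL; `build_request_url` and the `_zip` placeholder syntax are assumptions for demonstration, and Table 1's actual placeholder syntax differs slightly.

```python
# Illustrative sketch: substitute user-supplied values for the descriptor's
# Input placeholders in the stored URL. The "_zip" placeholder convention
# here is an assumption, not the exact Table 1 syntax.
def build_request_url(url_template: str, params: dict) -> str:
    for name, value in params.items():
        url_template = url_template.replace(name, value)
    return url_template

url = build_request_url(
    "http://cgi.cnn.com/cgi-bin/weather/redirect?zip=_zip",
    {"_zip": "60605"},
)
```

The filled-in URL is what the extraction agent then fetches to obtain, for example, the Chicago weather page.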
[0032] The content extraction agent 600 can also parse the content of a web page in which the user-desired information has changed location or format. This is possible because most hypertext documents include named objects, such as tables, buttons, and forms, that contain the textual content of interest to a user. When a web page changes, a named object may be moved within the document, but it still exists. The content extraction agent 600 therefore simply searches for the name of the desired object. In this way, the information requested by the user may still be found and reported regardless of the changes that have occurred.
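The name-based lookup described above can be sketched as follows: find an element by its name or id attribute wherever it sits in the document, then collect its text. This is an illustrative Python sketch using the standard-library HTML parser; the patent does not specify a parsing library, and the class and function names here are assumptions.

```python
# Illustrative sketch: extract the text of a named object (e.g. a table)
# regardless of where it has moved within the document.
from html.parser import HTMLParser

class NamedObjectExtractor(HTMLParser):
    def __init__(self, target_name):
        super().__init__()
        self.target = target_name
        self.depth = 0      # > 0 while inside the named element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1
        else:
            a = dict(attrs)
            if a.get("name") == self.target or a.get("id") == self.target:
                self.depth = 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data.strip())

def extract_named_object(html: str, name: str) -> str:
    parser = NamedObjectExtractor(name)
    parser.feed(html)
    return " ".join(c for c in parser.chunks if c)
```

Because the lookup keys on the object's name rather than its position, a page redesign that merely relocates the named table leaves the extraction unaffected, which is the robustness property the paragraph above claims.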
[0033] Table 2 below contains source code for a content extraction agent 600 used by the preferred embodiment.
TABLE-US-00002 TABLE 2
#!/usr/local/www/bin/sybperl5
# $Header: /usr/local/cvsroot/webley/agents/service/web_dispatch.pl,v 1.6
# Dispatches all web requests
# http://wcorp.itn.net/cgi/flstat?carrier=ua&flight_no=155&mcn_abbr=jul&date=6&stamp=ChLN~PdbuuE*itn/ord,itn/cb/sprint_hd
# http://cig.cnnfn.com/flightview/rlm?airline=amt&number=300
require "config_tmp.pl";
# check parameters
die "Usage: $0 service [params]\n" if $#ARGV < 1;
# get parameters
my ( $service, @param ) = @ARGV;
# check service
my %Services = (
    weather_cnn          => 'webget.pl weather_cnn',
    weather_lycos        => 'webget.pl weather_lycos',
    weather_weather      => 'webget.pl weather_weather',
    weather_snap         => 'webget.pl weather_snap',
    weather_infospace    => 'webget.pl weather_infospace',
    stockQuote_yahoo     => 'webget.pl stock',
    flightStatus_itn     => 'webget.pl flight_delay',
    yellowPages_yahoo    => 'yp_data.pl',
    newsHeaders_newsreal => 'news.pl',
    newsArticle_newsreal => 'news.pl',
);
# test parameters
my $date = `date`;
chop( $date );
my ( $short_date ) = $date =~ /\s+(\w{3}\s+\d{1,2})\s+/;
my %Test = (
    weather_cnn          => '60053',
    weather_lycos        => '60053',
    weather_weather      => '60053',
    weather_snap         => '60053',
    weather_infospace    => '60053',
    stockQuote_yahoo     => 'msft',
    flightStatus_itn     => 'ua 155 ' . $short_date,
    yellowPages_yahoo    => 'tires 60015',
    newsHeaders_newsreal => '1',
    newsArticle_newsreal => '1 1',
);
die "$date: $0: error: no such service: $service (check this script)\n"
    unless $Services{ $service };
# prepare absolute path to run other scripts
my ( $path, $script ) = $0 =~ m|^(.*/)([^/]*)|;
# store the service to compare against datatable
my $service_stored = $service;
# run service
while ( !( $response = `$path$Services{ $service } @param` ) ) {
    # response failed; check with test parameters
    $response = `$path$Services{ $service } $Test{ $service }`;
    if ( $response ) {
        $service = &switch_service( $service );
    } else {
        # change priority and notify
        $service = &increase_attempt( $service );
    }
}
# output the response
print $response;

sub increase_attempt {
    my ( $service ) = @_;
    my ( $service_name ) = split( /_/, $service );
    print STDERR "$date: $0: attn: changing priority for service: $service\n";
    # update priority
    &db_query( "update mcServiceRoute "
        . "set priority = ( select max( priority ) from mcServiceRoute "
        . "where service = '$service_name' ) + 1, "
        . "date = getdate(), "
        . "attempt = attempt + 1 "
        . "where route = '$script $service' " );
    # find new route
    my $route = @{ &db_query( "select route from mcServiceRoute "
        . "where service = '$service_name' "
        . "and attempt < 5 "
        . "order by priority " ) }->[0]{ route };
    &db_query( "update mcServiceRoute "
        . "set attempt = 0 "
        . "where route = '$script $service' " )
        if ( $route eq "$script $service_stored" );
    ( $service_name, $service ) = split( /\s+/, $route );
    die "$date: $0: error: no route for the service: $service (add more)\n"
        unless $service;
    return $service;
}

sub switch_service {
    my ( $service ) = @_;
    my ( $service_name ) = split( /_/, $service );
    print STDERR "$date: $0: attn: changing priority for service: $service\n";
    # update priority
    &db_query( "update mcServiceRoute "
        . "set priority = ( select max( priority ) from mcServiceRoute "
        . "where service = '$service_name' ) + 1, "
        . "date = getdate() "
        . "where route = '$script $service' " );
    # find new route
    my $route = @{ &db_query( "select route from mcServiceRoute "
        . "where service = '$service_name' "
        . "and attempt < 5 "
        . "order by priority " ) }->[0]{ route };
    die "$date: $0: error: there is the only service: $route (add more)\n"
        if ( $route eq "$script $service" or $route eq "$script $service_stored" );
    ( $service_name, $service ) = split( /\s+/, $route );
    die "$date: $0: error: no route for the service: $service (add more)\n"
        unless $service;
    return $service;
}
Table 3 below contains source code of the content fetcher 602 used with the content extraction agent 600 to retrieve information from a web site.
TABLE-US-00003
TABLE 3

#!/usr/local/www/bin/sybperl5
# -T
# -w
# $Header: /usr/local/cvsroot/webley/agents/service/webget.pl,v 1.4
# Agent to get info from the web.
# Parameters: service_name [service_parameters], i.e. stock msft or weather 60645
# Configuration stored in files service_name.ini
# If this file is absent, the configuration is read from the mcServices table.
# This script updates the datatable if the .ini file is newer.

$debug = 1;

use URI::URL;
use LWP::UserAgent;
use HTTP::Request::Common;
use Vail::VarList;
use Sybase::CTlib;
use HTTP::Cookies;

#print "Sybase::CTlib $DB_USR, $DB_PWD, $DB_SRV;";
open( STDERR, ">>$0.log" ) if $debug;
#open( STDERR, ">&STDOUT" );
$log = `date`;
#$response = `./url.pl "http://cgi.cnn.com/cgi-bin/weather/redirect?zip=60605"`;
#$response = `pwd`;
#print STDERR "pwd = $response\n";
#$response = `ls`;
#print STDERR "ls = $response\n";
chop( $log );
$log .= " pwd=" . `pwd`;
chop( $log );
#$debug2 = 1;
my $service = shift;
$log .= " $service: " . join( ':', @ARGV ) . "\n";
print STDERR $log if $debug;
my @ini = &read_ini( $service );
chop( @ini );
my $section = "";
do { $section = &process_section( $section ) } while $section;
exit;

#######################################################
sub read_ini {
    my ( $service ) = @_;
    my @ini = ( );
    # first, try to read the file
    $0 =~ m|^(.*/)[^/]+|;
    $service = $1 . $service;
    if ( open( INI, "$service.ini" ) ) {
        @ini = ( <INI> );
        return @ini unless ( $DB_SRV );
        # update the datatable
        my $file_time = time - int( ( -M "$service.ini" ) * 24 * 3600 );
        # print "time $file_time\n";
        my $dbh = new Sybase::CTlib $DB_USR, $DB_PWD, $DB_SRV;
        unless ( $dbh ) {
            print STDERR "webget.pl: Cannot connect to dataserver $DB_SRV:$DB_USR:$DB_PWD\n";
            return @ini;
        }
        my @row_refs = $dbh->ct_sql( "select lastUpdate from mcServices where service = '$service'", undef, 1 );
        if ( $dbh->{ RC } == CS_FAIL ) {
            print STDERR "webget.pl: DB select from mcServices failed\n";
            return @ini;
        }
        unless ( defined @row_refs ) {
            # have to insert
            my ( @ini_escaped ) = map { ( my $x = $_ ) =~ s/'/''/g; $x; } @ini;
            $dbh->ct_sql( "insert mcServices values ( '$service', '@ini_escaped', $file_time )" );
            if ( $dbh->{ RC } == CS_FAIL ) {
                print STDERR "webget.pl: DB insert to mcServices failed\n";
            }
            return @ini;
        }
        # print "time $file_time: " . $row_refs[ 0 ]->{ 'lastUpdate' } . "\n";
        if ( $file_time > $row_refs[ 0 ]->{ 'lastUpdate' } ) {
            # have to update
            my ( @ini_escaped ) = map { ( my $x = $_ ) =~ s/'/''/g; $x; } @ini;
            $dbh->ct_sql( "update mcServices set config = '@ini_escaped', lastUpdate = $file_time where service = '$service'" );
            if ( $dbh->{ RC } == CS_FAIL ) {
                print STDERR "webget.pl: DB update to mcServices failed\n";
            }
        }
        return @ini;
    }
    else {
        print STDERR "$0: WARNING: $service.ini n/a in " . `pwd` . " Try to read DB\n";
    }
    # then try to read the datatable
    die "webget.pl: Unable to find service $service\n" unless ( $DB_SRV );
    my $dbh = new Sybase::CTlib $DB_USR, $DB_PWD, $DB_SRV;
    die "webget.pl: Cannot connect to dataserver $DB_SRV:$DB_USR:$DB_PWD\n" unless ( $dbh );
    my @row_refs = $dbh->ct_sql( "select config from mcServices where service = '$service'", undef, 1 );
    die "webget.pl: DB select from mcServices failed\n" if $dbh->{ RC } == CS_FAIL;
    die "webget.pl: Unable to find service $service\n" unless ( defined @row_refs );
    $row_refs[ 0 ]->{ 'config' } =~ s/\n/\n\r/g;
    @ini = split( /\r/, $row_refs[ 0 ]->{ 'config' } );
    return @ini;
}

###################################################################
sub process_section {
    my ( $prev_section ) = @_;
    my ( $section, $output, $content );
    my %Param;
    my %Content;
    # print "################################\n";
    foreach ( @ini ) {
        # print;
        s/\s+$//;
        # get section name
        if ( /^\[(.*)\]/ ) {
            # print "$_: $section:$prev_section\n";
            last if $section;
            next if $1 eq "print";
            # next if $prev_section ne "" and $prev_section ne $1;
            if ( $prev_section eq $1 ) {
                $prev_section = "";
                next;
            }
            $section = $1;
        }
        # get parameters
        push( @{ $Param{ $1 } }, $2 ) if $section and /([^=]+)=(.*)/;
    }
    # print "++++++++++++++++++++++++++++++++++\n";
    return 0 unless $section;
    # print "section $section\n";
    # substitute parameters with values
    map { $Param{ URL }->[ 0 ] =~ s/$Param{ Input }->[ $_ ]/$ARGV[ $_ ]/g } 0 .. $#{ $Param{ Input } };
    # get page content
    ( $Content{ 'TIME' }, $content ) = get_url_content( ${ $Param{ URL } }[ 0 ] );
    # filter it
    map {
        if ( /\"([^\"]+)\"([^\"]*)\"/ or /\/([^\/]+)\/([^\/]*)\// ) {
            my $out = $2;
            $content =~ s/$1/$out/g;
        }
    } @{ $Param{ "Pre-filter" } };
    # print STDERR $content;
    # do main regular expression
    unless ( @values = $content =~ /${ $Param{ Regular_expression } }[ 0 ]/ ) {
        &die_hard( ${ $Param{ Regular_expression } }[ 0 ], $content );
        return $section;
    }
    %Content = map { ( $Param{ Output }->[ $_ ], $values[ $_ ] ) } 0 .. $#{ $Param{ Output } };
    # filter it
    map {
        if ( /([^\"]+)\"([^\"]+)\"([^\"]*)\"/ or /([^\/]+)\/([^\/]+)\/([^\/]*)\// ) {
            my $out = $3;
            $Content{ $1 } =~ s/$2/$out/g;
        }
    } @{ $Param{ "Post-filter" } };
    # calculate it
    map {
        if ( /([^=]+)=(.*)/ ) {
            my $eval = $2;
            map { $eval =~ s/$_/$Content{ $_ }/g } keys %Content;
            $Content{ $1 } = eval( $eval );
        }
    } @{ $Param{ Calculate } };
    # read section [print]
    foreach $i ( 0 .. $#ini ) {
        next unless $ini[ $i ] =~ /^\[print\]/;
        foreach ( $i + 1 .. $#ini ) {
            last if $ini[ $_ ] =~ /^\[.+\]/;
            $output .= $ini[ $_ ] . "\n";
        }
        last;
    }
    # prepare output
    map { $output =~ s/$_/$Content{ $_ }/g } keys %Content;
    print $output;
    return 0;
}

###########################################################################
sub get_url_content {
    my ( $url ) = @_;
    print STDERR $url if $debug;
    my $time = time;
    $response = `./url.pl '$url'`;
    return ( time - $time, $response );
    # NOTE: the LWP-based code below is unreachable in this listing; the
    # external url.pl helper above is used instead.
    my $ua = LWP::UserAgent->new;
    $ua->agent( 'Mozilla/4.0 [en] (X11; I; FreeBSD 2.2.8-STABLE i386)' );
    # $ua->proxy( [ 'http', 'https' ], 'http://proxy.webley:3128/' );
    # $ua->no_proxy( 'webley', 'vail' );
    my $cookie = HTTP::Cookies->new;
    $ua->cookie_jar( $cookie );
    $url = url $url;
    print "$url\n" if $debug2;
    $time = time;
    my $res = $ua->request( GET $url );
    print "Response: " . ( time - $time ) . " sec\n" if $debug2;
    return ( time - $time, $res->content );
}

###########################################################################
sub die_hard {
    my ( $re, $content ) = @_;
    my ( $re_end, $pattern );
    while ( $content !~ /$re/ ) {
        if ( $re =~ s/(\([^\(\)]+\)[^\(\)]*$)// ) {
            $re_end = $1 . $re_end;
        }
        else {
            $re_end = $re;
            last;
        }
    }
    $content =~ /$re/;
    print STDERR "$re\n Possible misuse: $re_end:\n Matched: $&\n Mismatched: $'\n" if $debug;
    if ( $debug ) {
        print STDERR "Content:\n $content\n" unless $';
    }
}
##########################################################################
[0034] Once the web browsing server 302 accesses the web site specified in the URL 404 and retrieves the requested information, that information is forwarded to the media server 304. The media server uses the speech synthesis engine 502 to create an audio message that is then transmitted to the user's voice enabled device 306. In the preferred embodiment, each web browsing server is based upon Intel's dual Pentium III 730 MHz microprocessor system.
[0035] Referring to
[0036] The media server 304 then accesses the database 300 and retrieves the personal recognition grammars 402. Using the speech synthesis engine 502, the media server 304 then asks the user, "Which of the following user-defined searches would you like to perform?" and reads to the user the identification name, provided by the recognition grammar 402, of each user-defined search. The user selects the desired search by speaking the appropriate speech command or pronounceable name described within the recognition grammar 402. These speech recognition grammars 402 define the speech commands or pronounceable names spoken by a user in order to perform a user-defined search. If the user has a multitude of user-defined searches, he may speak the command or pronounceable name described in the recognition grammar 402 associated with the desired search at any time, without waiting for the media server 304 to list all available user-defined searches. This feature is commonly referred to as a "barge-in" feature. The media server 304 uses the speech recognition engine 500 to interpret the speech commands received from the user. Based upon these commands, the media server 304 retrieves the appropriate user-defined web site record 400 from the database 300. This record is then transmitted to a web browsing server 302. A firewall 310 may be provided that separates the web browsing server 302 from the database 300 and media server 304. The firewall protects the media server and database by preventing unauthorized access in the event the firewall 312 for the web browsing server fails or is compromised. Any type of firewall protection technique commonly known to one skilled in the art could be used, including packet filter, proxy server, application gateway, or circuit-level gateway techniques.
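The command-to-record selection described above can be sketched in a few lines of Perl, the language of the Table 3 listing. This is a hypothetical illustration only: the grammar phrases, record fields, and `select_record` helper are invented for this sketch and do not appear in the patent.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-ins for user-defined web site records 400: each
# recognition-grammar phrase maps to a URL 404 and a content-descriptor
# file. All names here are invented for illustration.
my %records = (
    'my stock quote' => { url => 'http://example.com/quote',   descriptor => 'stock.cd' },
    'local weather'  => { url => 'http://example.com/weather', descriptor => 'weather.cd' },
);

# Select the record whose grammar phrase matches the recognized utterance;
# because matching needs no menu position, the user can "barge in" with a
# command at any time.
sub select_record {
    my ( $utterance ) = @_;
    foreach my $command ( keys %records ) {
        return $records{ $command } if lc( $utterance ) eq $command;
    }
    return undef;    # no user-defined search matched
}

my $record = select_record( 'Local Weather' );
print $record->{ url }, "\n" if $record;
```

A real system would match against full recognition grammars rather than exact strings; the hash lookup above only illustrates the record-selection step.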
[0037] The web browsing server 302 accesses the web site 106 specified by the URL 404 in the user-defined web site record 400 and retrieves the user-defined information from that site, using the content extraction agent and the content descriptor file specified in the content extraction agent command 406. Since the web browsing server 302 uses the URL and retrieves new information from the Internet each time a request is made, the requested information is always current.
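The always-current property follows from fetching the live URL on every request rather than consulting a cache. The following minimal Perl sketch illustrates that design choice; the `fetch_url` and `handle_request` helpers are invented stand-ins (no network access), not the patent's actual code, which uses LWP::UserAgent as shown in Table 3.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $fetch_count = 0;

# Stand-in for a live retrieval: every call represents a fresh hit on the
# web site identified by the record's URL.
sub fetch_url {
    my ( $url ) = @_;
    $fetch_count++;    # each request goes to the live site
    return "live content of $url (request $fetch_count)";
}

# The request handler performs no cache lookup; it always re-fetches,
# so the returned information is always current.
sub handle_request {
    my ( $record ) = @_;
    return fetch_url( $record->{ url } );
}

my $record = { url => 'http://example.com/news' };
print handle_request( $record ), "\n" for 1 .. 2;
```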
[0038] The content information received from the responding web site 106 is then processed by the web browsing server 302 according to the associated content descriptor file. This processed response is then transmitted to the media server 304 for conversion into audio messages, using either the speech synthesis engine 502 or a selection from the prerecorded voice responses contained within the database 300.
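The core of this processing step is applying a pattern from the content descriptor file to isolate only the portion of the page relevant to the user's command. The Perl sketch below shows the idea; the sample page and the regular expression are invented examples standing in for a real page and a descriptor file's [Regular expression] section.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Invented sample of raw page content returned by a web site.
my $page = '<html><body>Temperature: <b>72 F</b> Humidity: 40%</body></html>';

# Stand-in for the descriptor's pattern: it captures only the fragment
# pertinent to the speech command (here, the temperature).
my $descriptor_re = qr/Temperature:\s*<b>([^<]+)<\/b>/;

if ( my ( $temperature ) = $page =~ $descriptor_re ) {
    # The extracted text would be passed to the media server 304 for
    # conversion into an audio message.
    print "It is currently $temperature\n";
}
```

A real descriptor file, as suggested by Table 3, can also carry pre-filters, post-filters, and calculated fields around this central extraction pattern.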
[0039] It should be noted that the web sites accessible by the personal information retrieval system and voice browser of the preferred embodiment may use any type of mark-up language, including Extensible Markup Language (XML), Wireless Markup Language (WML), Handheld Device Markup Language (HDML), Hyper Text Markup Language (HTML), or any variation of these languages.
[0040] The descriptions of the preferred embodiments described above are set forth for illustrative purposes and are not intended to limit the present invention in any manner. Equivalent approaches are intended to be included within the scope of the present invention. While the present invention has been described with reference to the particular embodiments illustrated, those skilled in the art will recognize that many changes and variations may be made thereto without departing from the spirit and scope of the present invention. These embodiments and obvious variations thereof are contemplated as falling within the scope and spirit of the claimed invention.