Synchronization among plural browsers using a state manager
RE048126 · 2020-07-28
Assignee
Inventors
CPC classification
G06F16/957
PHYSICS
International classification
Abstract
A technique for synchronizing a visual browser and a voice browser. A visual browser is used to navigate through visual content, such as WML pages. During the navigation, the visual browser creates a historical record of events that have occurred during the navigation. The voice browser uses this historical record to navigate the content in the same manner as occurred on the visual browser, thereby synchronizing to a state equivalent to that of the visual browser. The creation of the historical record may be performed by using a script to trap events, where the script contains code that records the trapped events. The synchronization technique may be used with a multi-modal application that permits the mode of input/output (I/O) to be changed between visual and voice browsers. When the mode is changed from visual to voice, the record of events captured by the visual browser is provided to the voice browser, thereby allowing the I/O mode to change seamlessly from visual to voice. Likewise, the voice browser captures events which may be provided to the visual browser when the I/O mode is changed from voice to visual.
Claims
1. A system for using a visual browser for navigation through a collection of data and synchronization of the state of a voice browser with the state of said visual browser, said system comprising: a first wireless device further comprising: a first memory to store a first set of data and a first set of instructions; a first processor to execute said first instruction set and manipulate said first data set; a visual display for displaying visual output; an input device for accepting user data from a user; an audio speaker to render audio output; a microphone to accept audio input; and said visual browser stored in said first memory and operating in said first wireless device for interacting with a content page using one or more of said user data and said first data set, wherein said visual browser stores said visual browser state indicating a state of interaction with said content page, and wherein said visual browser displays said interaction through said visual display; a state manager for receiving said visual browser state and providing said received visual browser state to said voice browser to recreate the visual browser state at said voice browser at one of a plurality of granularity levels specified by said user, wherein each of said granularity levels .[.represent the.]. .Iadd.represents a .Iaddend.precision .Iadd.of browser inputs and outputs from a set of available precisions including a coarse level of precision and at least one finer level of precision .Iaddend.with which the voice browser recreates the visual browser state, and wherein said granularity levels comprise .[.one of.]. 
a page level granularity .Iadd.associated with the coarse level of precision and at least one finer level of granularity associated with the at least one finer level of precision.Iaddend., .Iadd.the at least one finer level of granularity comprising at least one of .Iaddend.a card level granularity, a field level granularity, and a cursor level granularity; and a second device further comprising: a second memory to store a second set of data and a second set of instructions; a second processor to execute said second instruction set and manipulate said second data set; said voice browser storable in said second memory and executable on said second processor, wherein the voice browser enables interaction of said user with said content page using said audio speaker and said microphone, and receives said visual browser state from said state manager; wherein said first wireless device is in communication with said second device and said state manager through a network; whereby the state of the voice browser is synchronized with the state of the visual browser from the information received from said state manager.
2. The system of claim 1, further comprising a data store for storing said visual browser state received by said state manager.
3. The system of claim 1, wherein said visual browser state comprises a historical record of events occurring during said navigation.
4. The system of claim 1, wherein said state manager allows said user to seamlessly switch between said visual browser and said voice browser when said voice browser state is synchronized with said visual browser state.
5. The system of claim 1, wherein said page level granularity comprises synchronization of said voice browser to point to said content page being browsed by said visual browser.
6. The system of claim 1, wherein said card level granularity comprises .[.synchronization of said voice browser to point to.]. a card within said content page.[., wherein said visual browser is browsing within said card of said content page.]..
7. The system of claim 1, wherein said field level granularity comprises .[.synchronization of said voice browser to point to.]. a field within a card in said content page.[., wherein said visual browser is browsing within said field in said card of said content page.]..
8. The system of claim 1, wherein said cursor level granularity comprises synchronization of said voice browser to point to the cursor position in the visual browser.
9. A system for using a voice browser for navigation through a collection of data and synchronization of the state of a visual browser with state of said voice browser, said system comprising: a first wireless device further comprising: a first memory to store a first set of data and a first set of instructions; a first processor to execute said first instruction set and manipulate said first data set; a visual display for displaying visual output; an input device for accepting user data from a user; an audio speaker to render audio output; a microphone to accept audio input; said visual browser stored in said first memory and operating in said first wireless device; a second device further comprising: a second memory to store a second set of data and a second set of instructions; a second processor to execute said second instruction set and manipulate said second data set; said voice browser storable in said second memory and executable on said second processor, wherein the voice browser enables interaction with a content page using said audio speaker and said microphone, and wherein said voice browser sends said voice browser state indicating a state of interaction with said content page to a state manager; wherein said first wireless device is in communication with said second device and said state manager via a network; and said state manager for receiving said voice browser state and providing said received voice browser state to said visual browser for allowing said visual browser to recreate the voice browser state at one of a plurality of granularity levels specified by said user, wherein each of said granularity levels .[.represent the.]. .Iadd.represents a .Iaddend.precision .Iadd.of browser inputs and outputs from a set of available precisions including a coarse level of precision and at least one finer level of precision .Iaddend.with which the visual browser recreates the voice browser state, and wherein said granularity levels comprise .[.one of.]. 
a page level granularity .Iadd.associated with the coarse level of precision and at least one finer level of granularity associated with the at least one finer level of precision.Iaddend., .Iadd.the at least one finer level of granularity comprising at least one of .Iaddend.a card level granularity, a field level granularity, and a cursor level granularity; and whereby the voice browser state is synchronized with the visual browser state from the information received from said state manager.
10. The system of claim 9, further comprising a data store for storing said voice browser state received by said state manager.
11. The system of claim 9, wherein said voice browser state comprises a historical record of events occurring during said navigation.
12. The system of claim 9, wherein said state manager allows said user to seamlessly switch between said voice browser and said visual browser, when said visual browser state is synchronized with said voice browser state.
13. The system of claim 9, wherein said page level granularity comprises synchronization of said visual browser to point to said content page being browsed by said voice browser.
14. The system of claim 9, wherein said card level granularity comprises .[.sycnchronization.]. .Iadd.synchronization .Iaddend.of said visual browser to point to a card within said content page.[., wherein said voice browser is browsing within said card of said content page.]..
15. The system of claim 9, wherein said field level granularity comprises .[.synchroniztion of said visual browser to point to.]. a field within a card in said content page.[., wherein said voice browser is browsing within said field card of said content page.]..
16. The system of claim 9, wherein said cursor level granularity comprises synchronization of said visual browser to point to cursor position pointed by said voice browser.
17. A system .[.of.]. .Iadd.for .Iaddend.using a visual browser for navigation through a collection of data, resulting in the generation of events, and the synchronization of the state of a voice browser with the state of said visual browser, comprising: a first device further comprising: a first memory to store a first set of data and a first set of instructions; a first processor to execute said first instruction set and manipulate said first data set; a visual display for displaying visual output; an input device for accepting user data from a user; an audio speaker to render audio output; a microphone to accept audio input; said visual browser stored in said first memory and operating in said first device for interacting with a content page using one or more of said user data and said first data set, wherein said visual browser displays said interaction through said visual display; a script engine for recording the generated events .Iadd.according to a user-specified one of a plurality of granularity levels .Iaddend.by executing instructions in a script, .Iadd.wherein each of said granularity levels represents a precision of browser inputs and outputs from a set of available precisions including a coarse level of precision and at least one finer level of precision, .Iaddend.wherein said script comprises a plurality of sets of instructions interpretable by the script engine, and wherein each of said sets of instructions being adapted to create a record of a particular one of the events, said script engine being adapted to invoke a particular one of said sets of instructions according to which said events is signaled by said visual browser; and a second device further comprising: a second memory to store second set of data and second set of instructions; a second processor to execute said second instruction set and manipulate said second data set; said voice browser storable in said second memory and executable on said second processor for enabling interaction 
with said content page using said audio speaker and said microphone; wherein said first wireless device is in communication with said second device and a state manager via a network; and wherein said state manager receives the record of events created by said script engine and provides the record of events to the voice browser; whereby the voice browser recreates the state of the visual browser.
18. A system .[.of.]. .Iadd.for .Iaddend.using a voice browser for navigation through a collection of data, resulting in the generation of events, and the synchronization of the state of a visual browser with the state of said voice browser, comprising: a first device further comprising: a first memory to store a first set of data and a first set of instructions; a first processor to execute said first instruction set and manipulate said first data set; a visual display for displaying visual output; an input device for accepting user data from a user; an audio speaker to render audio output; a microphone to accept audio input; said visual browser storable in said first memory and operating in said first device for interacting with a content page using one or more of said user data and said first data set, wherein said visual browser displays said interaction through said visual display; a script engine for recording the generated events .Iadd.according to a user-specified one of a plurality of granularity levels .Iaddend.by executing instructions in a script, .Iadd.wherein each of said granularity levels represents a precision of browser inputs and outputs from a set of available precisions including a coarse level of precision and at least one finer level of precision, .Iaddend.wherein said script comprises a plurality of sets of instructions interpretable by the script engine, and wherein each of said sets of instructions being adapted to create a record of a particular one of the events, said script engine being adapted to invoke particular one of said sets of instructions according to which of said events is signaled by said voice browser; and a second device further comprising: a second memory to store a second set of data and a second set of instructions; a second processor to execute said second instruction set and manipulate said second data set; said voice browser storable in said second memory and executable on said second processor for enabling 
interaction with said content page using said audio speaker and said microphone; wherein said first wireless device is in communication with said second device and a state manager via a network; and wherein said state manager receives the record of events created by said script engine, and provides the record of events to the visual browser; whereby the visual browser recreates the state of the voice browser.
.Iadd.19. A state manager for synchronizing browsers, the state manager comprising a processor and memory and configured to: receive a visual browser state from a first wireless device executing a visual browser operable to interact with a content page, the visual browser state indicative of a state of interaction with the content page, the visual browser stored and operating in the first wireless device for interacting with the content page using one or more of user data and a first data set, wherein the visual browser stores the visual browser state indicating the state of interaction with the content page, and wherein the visual browser displays the interaction through a visual display; and transmit the received visual browser state to a second wireless device including an audio speaker and microphone and executing a voice browser stored in a memory of the second wireless device, the voice browser configured to allow interaction of a user with the content page using an audio speaker and microphone, the second wireless device configured to receive the visual browser state and recreate the visual browser state at the voice browser at one of a plurality of granularity levels, wherein each of the granularity levels represents a precision of browser inputs and outputs from a set of available precisions including a coarse level of precision and at least one finer level of precision for recreating the visual browser state, wherein the granularity levels comprise a page level granularity associated with the coarse level of precision and at least one finer level of granularity associated with the at least one finer level of precision, the at least one finer level of granularity comprising at least one of a card level granularity, a field level granularity, and a cursor level granularity, and wherein the first wireless device is communicatively coupled to the second wireless device and a state of the voice browser is synchronized with the visual browser state..Iaddend.
.Iadd.20. The state manager of claim 19, wherein the visual browser state comprises a historical record of events associated during navigation with the visual browser..Iaddend.
.Iadd.21. The state manager of claim 19, wherein the second wireless device is configured to use the state of the voice browser, which is synchronized with the visual browser state, to navigate to the content page..Iaddend.
.Iadd.22. The state manager of claim 19, wherein the second wireless device is configured to use the card level granularity to point to a card within the content page..Iaddend.
.Iadd.23. The state manager of claim 19, wherein the second wireless device is configured to use the field level granularity to point to a field within a card in the content page..Iaddend.
.Iadd.24. The state manager of claim 19, wherein the second wireless device is configured to use the cursor level granularity to point to a cursor position..Iaddend.
.Iadd.25. The state manager of claim 19, wherein the voice browser is configured to allow interaction of a user with the content page using an audio speaker and microphone..Iaddend.
.Iadd.26. A state manager for synchronizing browsers, the state manager comprising a processor and memory and configured to: receive a voice browser state from a first wireless computing device executing a voice browser interacting with a content page, the voice browser storable in a memory of the first wireless computing device and executable on a processor of the first wireless computing device, wherein the voice browser enables interaction with the content page using an audio speaker and a microphone, and wherein the voice browser sends the voice browser state indicating a state of interaction with the content page to the state manager; in response to receipt of the voice browser state, synchronize the voice browser state with a visual browser state to recreate the voice browser state at one of a plurality of granularity levels, the voice browser state being indicative of a state of interaction with the content page; and transmit the visual browser state to a second wireless computing device executing a visual browser stored in a memory of the second wireless computing device for interacting with the content page, wherein the granularity levels are representative of a precision of browser inputs and outputs from a set of available precisions including a coarse level of precision and at least one finer level of precision with which the voice browser state is recreated, and wherein the granularity levels comprise a page level granularity associated with the coarse level of precision and at least one finer level of granularity associated with the at least one finer level of precision, the at least one finer level of granularity comprising at least one of a card level granularity, a field level granularity, and a cursor level granularity..Iaddend.
.Iadd.27. The state manager of claim 26, wherein the voice browser state comprises a historical record of events occurring during the interaction..Iaddend.
.Iadd.28. The state manager of claim 26, wherein the second wireless computing device is configured to use the visual browser state, which is synchronized with the voice browser state, to navigate to the content page..Iaddend.
.Iadd.29. The state manager of claim 26, wherein the second wireless computing device is configured to use the card level granularity to point to a card within the content page..Iaddend.
.Iadd.30. The state manager of claim 26, wherein the second wireless computing device is configured to use the field level granularity to point to a field within a card in the content page..Iaddend.
.Iadd.31. The state manager of claim 26, wherein the second wireless computing device is configured to use the cursor level granularity to point to a cursor position..Iaddend.
.Iadd.32. A wireless computing device comprising a first memory and a first processor, the wireless computing device configured to: execute a visual browser interacting with a content page, the visual browser stored in the memory and operating in the wireless computing device for interacting with the content page using one or more of user data and a first data set, wherein the visual browser displays the interaction through a visual display; and execute, in a script engine, a script operable to record events generated during the interacting according to a user-specified one of a plurality of granularity levels, wherein each of said granularity levels represents a precision of browser inputs and outputs from a set of available precisions including a coarse level of precision and at least one finer level of precision, the script comprising a plurality of sets of instructions, the sets of instructions operable to create a record of a particular one of the events based on an event indicated by the visual browser, wherein the recorded events are operable to allow a second wireless computing device executing a voice browser to recreate a state of the visual browser, wherein the second wireless computing device comprises a second memory and a second processor, and wherein the voice browser is storable in the second memory and executable on the second processor for enabling interaction with the content page using an audio speaker and a microphone, wherein a state manager receives a record of events created by the script engine and provides the record of events to the voice browser, and wherein the state of the voice browser is synchronized with the state of the visual browser from the information received from the state manager..Iaddend.
.Iadd.33. A wireless computing device comprising a first memory and a first processor, the wireless computing device configured to: execute a voice browser interacting with a content page, wherein the voice browser is storable in the first memory and executable on the first processor for enabling interaction with the content page using an audio speaker and a microphone; and execute, in a script engine, a script comprising a plurality of sets of instructions executable by the script engine, the script engine being configured to invoke a particular one of the sets of instructions based on an event indicated by the voice browser, wherein the particular one of the sets of instructions is invoked to create a record of a particular one of the events according to a user-specified one of a plurality of granularity levels, wherein each of said granularity levels represents a precision of browser inputs and outputs from a set of available precisions including a coarse level of precision and at least one finer level of precision, wherein the recorded event is transmitted via a state manager to a second wireless computing device executing a visual browser and recreating a state of the voice browser based on the recorded event, wherein the second wireless computing device comprises a second memory and a second processor, and wherein the visual browser is storable in the second memory and executable on the second processor for enabling interaction with the content page using one or more of user data and a first data set, wherein the visual browser displays the interaction through a visual display, and wherein the state of the voice browser is synchronized with the state of the visual browser from the information received from the state manager..Iaddend.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The foregoing summary, as well as the following detailed description of preferred embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there is shown in the drawings exemplary constructions of the invention; however, the invention is not limited to the specific methods and instrumentalities disclosed. In the drawings:
DETAILED DESCRIPTION OF THE INVENTION
(11) Overview
(12) Wireless devices have traditionally been able to operate in only one input/output (I/O) mode at a time, i.e., either in an audio mode or a visual mode. For example, a traditional wireless telephone sends and receives audio (voice). Some newer wireless telephones have a small display through which the user can view the wireless web. However, a user can use such a telephone in only one mode at a time, as the voice and visual features cannot generally be used in concert.
(13) One way to support the use of voice and visual I/O modes in concert is for the content with which the user interacts to be provided in two similar forms: a visual markup language (such as Wireless Markup Language (WML)), and a voice markup language (such as Voice Extensible Markup Language (VXML)). Supporting the concurrent use of voice and visual I/O modes in this manner generally requires that two browsers be running at the same time: one browser that generates visual images from the version of the content that is in the visual markup language, and another browser that renders audio based on the version of the content that is in the voice markup language. In order to support relatively seamless switching between visual and voice I/O modes, it may be necessary to synchronize the visual and voice browsers so that both browsers are at the same navigational point, regardless of which browser the user has been using to interact with the content. The present invention provides a technique for performing this synchronization.
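The dual-format approach described above can be sketched as a single content model rendered into two parallel markup strings, one WML-like page for the visual browser and one VXML-like page for the voice browser. This is a minimal illustrative sketch only; the element names and helper functions below are assumptions, not taken from the disclosure.

```python
# Hypothetical sketch: one underlying content model (a list of form
# fields) rendered into two parallel markup forms, one visual
# (WML-like) and one voice (VXML-like).

def render_visual(fields):
    """Render the content as a minimal WML-style card of input fields."""
    lines = ["<wml><card id='form'>"]
    for name in fields:
        lines.append(f"  <input name='{name}'/>")
    lines.append("</card></wml>")
    return "\n".join(lines)

def render_voice(fields):
    """Render the same content as a minimal VXML-style form of prompts."""
    lines = ["<vxml><form id='form'>"]
    for name in fields:
        lines.append(f"  <field name='{name}'><prompt>Enter {name}</prompt></field>")
    lines.append("</form></vxml>")
    return "\n".join(lines)

content = ["name", "address"]          # the single underlying content
visual_page = render_visual(content)   # served to the visual browser
voice_page = render_voice(content)     # served to the voice browser
```

Because both pages are generated from the same field list, each navigational position in one page has a natural counterpart in the other, which is what makes the synchronization described below possible.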
(14) Exemplary Architecture for Browser Synchronization
(16) Page 104 is provided to computing device 108. Computing device 108 may be any type of device that is capable of performing computation. As is known in the art, such a device typically has a memory that stores data and instructions; a processor adapted to execute the instructions and manipulate the data stored in the memory; means for input (e.g., keypad, touch screen, microphone, etc.); and means for output (e.g., liquid crystal display (LCD), cathode ray tube (CRT), audio speaker, etc.). A computing device may also have means for communicating with other computing devices over a network, e.g., an Ethernet port, a modem, or a wireless transmitter/receiver for communicating in a wireless communications network. Such a device may take the form of a personal computer (PC), laptop computer, or palm-sized computer. It will also be appreciated that many devices that are not traditionally labeled "computers" do, in fact, have computing capability. Wireless telephones, pagers, and wireless e-mail devices are examples of such devices, and thus the generic term "computing device" applies to any such device, whether or not such device is traditionally described as a computer. In a preferred embodiment of the invention, computing device 108 is a wireless handset adapted to communicate in a wireless telephone network, although such an embodiment of computing device 108 is not limiting of the invention.
(17) Visual browser 110 is a software application which is stored on computing device 108 and which executes thereon. Visual browser 110 is adapted to receive content in the form of a visual markup language page and to render that content on a visual display 116 associated with computing device 108. As one example, visual browser 110 may be a WML browser that renders WML content on the LCD display of a wireless telephone that is adapted to allow its user to interact with the wireless web. Visual browser 110 may also be adapted to receive user data input from input device 120 associated with computing device 108. For example, input device 120 may be the keypad of a wireless telephone, and the user may use the keypad to enter data in order to interact with content that is being rendered on visual display 116 by visual browser 110. (E.g., the user may use the keypad to enter his or her name into the name field of page 104.)
(18) Page 106 is also provided to computing device 112. Like computing device 108, computing device 112 may be any type of computing device. Voice browser 114 is a software application which is stored on computing device 112 and which executes thereon. Voice browser 114 is adapted to receive content in the form of a voice markup language page and to render that content on audio speaker 118. Voice browser 114 may also be adapted to receive audio user input from microphone 122 or other audio input device. For example, the user may use microphone 122 to enter data into an audio form that is being rendered by voice browser 114. (E.g., the user may speak his name in response to the "enter name" voice prompt that voice browser 114 renders based on voice markup language page 106.)
(19) While computing device 112 may be any type of computing device, in a preferred embodiment computing device 112 is a relatively powerful server machine that renders voice markup pages for a large network. As discussed more particularly in connection with
(20) Visual browser 110 may be at some state with respect to the user's navigation through visual markup page 104. Likewise, voice browser 114 may be at some state with respect to the user's navigation through voice markup page 106. Since pages 104 and 106 represent the same underlying content 102, albeit in slightly different formats (e.g., WML vs. VXML), it is possible to synchronize the respective states of visual browser 110 and voice browser 114 with respect to the navigation. For example, using visual browser 110, the user may point a cursor to the address field of page 104. Thus, a description of the state of navigation through page 104 is that the cursor is presently pointed at the address field of page 104 and the browser is waiting for input in that field. An equivalent state of navigation through page 106 may be voice browser 114's rendering of an "enter address" audio prompt and waiting for audio input. Thus, in this example, if voice browser 114 is synchronized with visual browser 110, the appropriate action for voice browser 114 may be to render the "enter address" audio prompt.
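The state equivalence described in this paragraph can be illustrated with a minimal mapping from the field at which the visual browser's cursor rests to the audio prompt a synchronized voice browser would render. The dictionary keys and function name here are hypothetical, chosen only to mirror the "enter address" example above.

```python
# Illustrative mapping from a visual-browser navigation state to the
# equivalent voice-browser action. The state keys ("page", "field")
# are assumptions for this sketch.

def equivalent_voice_action(visual_state):
    """Given the field the visual cursor points at, return the audio
    prompt a synchronized voice browser should render."""
    field = visual_state["field"]
    return f"enter {field}"

# The visual browser's cursor is in the address field of page 104:
state = {"page": 104, "field": "address"}
prompt = equivalent_voice_action(state)  # "enter address"
```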
(21) In accordance with the present invention, visual browser 110 and voice browser 114 may be synchronized by exchanging information as to their state. When the user is navigating through content 102 using visual browser 110, visual browser 110 may provide state information to state manager 124, which may store this state information in state database 126. At an appropriate time, state manager 124 may provide this state information to voice browser 114, whereby voice browser 114 may re-create the state of visual browser 110. This process may also happen in the other direction. That is, while the user is navigating through content 102 using voice browser 114, voice browser 114 may provide state information to state manager 124 for storage in state database 126. At an appropriate time, state manager 124 may provide this state information to visual browser 110, whereby visual browser 110 may re-create the state of voice browser 114. What constitutes an appropriate time to transfer this state information depends on the application in which the browsers are being synchronized; for example, state information may be transferred continuously, periodically, or every time the I/O mode in which the user is performing the navigation switches between visual and voice. The manner and format in which state information is recorded, stored, and transmitted is more particularly discussed below in connection with
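The store-and-provide exchange through state manager 124 and state database 126 can be sketched as follows: whichever browser the user is currently navigating pushes its state, and the other browser pulls that state when the I/O mode switches. The class and method names are assumptions for illustration only; an in-memory dictionary stands in for state database 126.

```python
# Minimal sketch of the bidirectional state exchange described above.
# A plain dictionary stands in for state database 126; in practice the
# state manager would be a networked service.

class StateManager:
    def __init__(self):
        self._db = {}  # stand-in for state database 126

    def save_state(self, session_id, state):
        """Called by whichever browser the user is currently navigating."""
        self._db[session_id] = dict(state)

    def load_state(self, session_id):
        """Called by the other browser when the I/O mode switches, so it
        can re-create the saved state."""
        return dict(self._db.get(session_id, {}))

mgr = StateManager()
# The visual browser reports its state during navigation:
mgr.save_state("user42", {"page": 104, "field": "address"})
# Later, on a visual-to-voice mode switch, the voice browser pulls it:
restored = mgr.load_state("user42")
```

The same object serves the reverse direction unchanged: the voice browser calls `save_state` while the user navigates by voice, and the visual browser calls `load_state` on a voice-to-visual switch.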
(23) Exemplary page 104 comprises a plurality of cards 202-210. The relationship among cards 202-210 is shown by arrows. For example, card 202 displays a question to be answered by the user; the user navigates either to card 204 or card 206, depending upon which of the answer choices he or she selects at card 202. Similarly, cards 204 and 206 lead the user to different places depending upon the user's answer to a question. Navigation paths may converge; cards 204 and 206 may both lead to card 210.
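The card relationships described above can be modeled as a small directed graph in which each card maps the user's answer to the next card, with paths converging on card 210. The card numbers follow the description; the specific answer labels and the exact edges are hypothetical, since the figure itself is not reproduced here.

```python
# Illustrative card graph for page 104: each card maps an answer choice
# to the next card. The answer labels ("yes", "no", "a", "b") are
# assumptions; the description states only that 202 branches to 204 or
# 206 and that paths may converge on 210.

CARD_GRAPH = {
    202: {"yes": 204, "no": 206},
    204: {"a": 208, "b": 210},
    206: {"a": 208, "b": 210},
    208: {"next": 210},
}

def navigate(start, answers):
    """Follow a sequence of answer choices through the card graph and
    return the card the user ends on."""
    card = start
    for answer in answers:
        card = CARD_GRAPH[card][answer]
    return card

final_card = navigate(202, ["yes", "a"])  # 202 -> 204 -> 208
```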
(24) The state of navigation may be defined as the place at which the user is currently performing I/O, as identified from among the entire universe of content available to the user. The location of this I/O may be identified at varying levels of precision, and this precision may be referred to as the granularity of the state. For example, at the coarse end of the granularity scale, the state of the user's navigation may be defined as the particular page that the user is viewing. Thus, in the example of
(25) As an example of a slightly finer granularity, the state may be defined by the particular card the user is viewing. For example, the state may be defined as card 208 of page 104. At an even finer granularity, the state may be defined as the particular field of a card in which the user is entering input, e.g., the address field of card 208, as indicated by box 212. At a still finer granularity, the state may be defined as the position of the cursor on the card, as indicated by box 214.
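The four granularity levels named above (page, card, field, cursor) can be illustrated as progressively larger slices of a navigation state. This is a hypothetical sketch; the key names and the `state_at_granularity` helper are illustrative assumptions.

```python
# Each granularity level keeps the components of the state up to and
# including its own level of precision.
GRANULARITY_KEYS = {
    "page":   ("page",),
    "card":   ("page", "card"),
    "field":  ("page", "card", "field"),
    "cursor": ("page", "card", "field", "cursor"),
}

def state_at_granularity(full_state: dict, granularity: str) -> dict:
    """Truncate a full navigation state to the chosen granularity."""
    keys = GRANULARITY_KEYS[granularity]
    return {k: full_state[k] for k in keys if k in full_state}

full = {"page": 104, "card": 208, "field": "address", "cursor": 6}
state_at_granularity(full, "page")   # only the page is known
state_at_granularity(full, "field")  # page, card, and field are known
```

Synchronizing at a coarser level simply means transferring the truncated state, which is why the second browser lands at an arbitrary point within the finest known component.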
(26) The effect of using the various granularities is readily apparent when one envisions performing a synchronization between two browsers at the various granularities. Suppose the user is using a first browser, and the user's cursor is positioned at box 214. At the page level of granularity, the relevant state information is that the user is navigating somewhere in page 104, and thus an attempt to synchronize the first browser with a second browser will result in the second browser being pointed to some arbitrary point on page 104 (e.g., at the beginning of the first card). At the card level of granularity, it is known not only that the user is on page 104, but also that the user is somewhere within card 208. Thus, upon synchronization, the second browser will be pointed to an arbitrary point in card 208 (e.g., the beginning of the card), but not necessarily to the place where the user's cursor was pointed in the first browser. At the field level of granularity, it is known that the user is in the address field of card 208, and thus synchronization results in the second browser being pointed to the address field, but not necessarily to any point within the address field. At the cursor level of granularity, however, it is known that the user is not only in the address field but is in the middle of entering data in the field. Thus, the second browser can be synchronized to a state in which a partially filled-out address is placed in the address field displayed on the second browser, and the cursor is in such a position that the user can continue where he or she left off.
(27) As noted above, a particularly useful application for browser synchronization is where one browser is a visual browser and the other is a voice browser. While voice browsers do not have cursors per se, the notion of what it means for a cursor to be located at a particular position in a voice dialogue can be given meaning. For example, if the user had begun to enter the address "123 Elm Street," but has only entered as far as "123 El . . ." in the visual browser prior to switching to voice, the voice browser could emulate the position of a cursor by prompting the user: "You have entered: 123 El. Please continue speaking from that point."
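The cursor-emulation idea in the example above can be sketched as a prompt generator. This is a minimal illustration; the function name and prompt wording follow the example in the text, but the API itself is an assumption.

```python
def resume_prompt(field_name: str, partial_value: str) -> str:
    """Emulate a visual cursor position in a voice dialogue: read back any
    partial input and invite the user to continue from that point."""
    if partial_value:
        return (f"You have entered: {partial_value}. "
                f"Please continue speaking from that point.")
    # No partial input: prompt for the field from the beginning.
    return f"Please say the {field_name}."

# After switching from visual to voice mid-entry:
resume_prompt("address", "123 El")
```

The empty-input branch corresponds to synchronizing at field granularity only, where the voice browser knows which field is active but not what has been typed so far.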
(28) Events
(29)
(30) State Capturing and State Synchronization
(31) Turning to
(32) Browser 110 is adapted to signal events in such a way that specific actions can be taken in response to the occurrence of events. As one example, browser 110 may be coupled to script engine 402. Script engine 402 interprets scripts written in a scripting language such as JAVA, and causes computing device 108 to perform actions based on such scripts. (While script engine 402 is shown as being external to, and communicatively coupled with, browser 110, it should be noted that this structure is merely exemplary; in the alternative, browser 110 may include script engine 402.) An example of such a script that may be executed by script engine 402 is event-recording script 404. Event-recording script 404 contains interpretable code that is invoked upon each event generated in browser 110, where this code performs the function of recording the generated event and memorializing the event in event record 406. For example, one of the events generated by browser 110 may be a navigation from card 202 to card 204 (e.g., event 302, shown in
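The event-trapping behavior of event-recording script 404 can be sketched as a callback that the browser invokes on every event and that appends each event to a record. This is an illustrative sketch; the `EventRecorder` class and the dictionary event format are assumptions, not the patent's actual script.

```python
class EventRecorder:
    """Traps browser events and memorializes them (cf. event record 406)."""

    def __init__(self):
        self.event_record = []

    def on_event(self, event: dict) -> None:
        # Invoked by the script engine on each event generated in the
        # browser; the handler simply appends the event for later replay.
        self.event_record.append(dict(event))

recorder = EventRecorder()
# A navigation event (e.g., card 202 to card 204) and an input event:
recorder.on_event({"type": "navigate", "from": "card202", "to": "card204"})
recorder.on_event({"type": "input", "field": "address", "value": "123 El"})
```

Because the record preserves event order, a second browser can later step through the same sequence to reach an equivalent state.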
(33) Event record 406 may be used to synchronize browser 114 (shown in
(34) Moreover, although browser 110 may send event record 406, or the information derived therefrom, directly to browser 114, in an alternative embodiment browser 110 sends event record 406 or the derived information to state manager 124 (shown in
(35) Moreover, it should be understood that, while
(36)
(37) It should be noted that, while pages 104 and 106 represent the same content 102, their representations of that content are not necessarily identical. Returning briefly to
(38)
(39) A process of recording state information, and of using the recorded information to synchronize a second browser to the state of a first browser, is shown in
(40) At step 704, it is determined whether a triggering event has occurred. The triggering event detected at step 704 is an event that causes state information to be transmitted by the device on which the first browser executes to another device. The following is a non-exhaustive list of triggering events: expiration of a timer, a demand for synchronization to take place, or a mode change that results in browsing being switched from the first browser to the second browser (e.g., from a visual browser to a voice browser). However, it should be understood that the foregoing list is non-exhaustive, and that any triggering event may be detected at step 704 without departing from the spirit and scope of the invention. If it is determined at step 704 that no triggering event has occurred, then the process returns to step 702 to capture more state information.
(41) If it is determined at step 704 that a triggering event has occurred, then the captured state information is sent from the first browser to the second browser. The sending of state information from the first browser to the second browser may, optionally, include sending the state information to a state manager 124 (step 706), whereby state manager 124 stores the state information in data store 126 (step 708) for forwarding to the second browser at an appropriate time. However, the use of state manager 124 is not limiting of the invention, and the first browser may forward captured state information directly to the second browser.
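The capture-then-forward logic of steps 702 through 708 can be sketched as a trigger check. The trigger names and the `maybe_forward_state` helper below are hypothetical, drawn from the non-exhaustive list of triggering events given above.

```python
# Recognized triggering events (non-exhaustive, per the text): a timer
# expiring, an explicit demand for synchronization, or a mode change.
TRIGGERS = {"timer_expired", "sync_demanded", "mode_change"}

def maybe_forward_state(trigger: str, state: dict, forward) -> bool:
    """If a triggering event occurred, forward the captured state
    (directly to the second browser, or via the state manager)."""
    if trigger in TRIGGERS:
        forward(state)
        return True
    # No triggering event: return to capturing more state (step 702).
    return False

sent = []
maybe_forward_state("mode_change", {"page": 104}, sent.append)  # forwards
maybe_forward_state("keystroke", {"page": 104}, sent.append)    # keeps capturing
```

The `forward` callable abstracts over the two delivery paths the text describes: direct transmission to the second browser, or handing off to state manager 124 for later forwarding.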
(42) At step 710, the second browser receives state information that was captured by the first browser. As noted above, this state information may be received directly from the first browser, or, alternatively, may be received indirectly through a state manager that performs functions including the collecting of state information to be forwarded at an appropriate time.
(43) At step 712, it is determined whether an event occurs that triggers the second browser to synchronize its state to that of the first browser. Events that may trigger synchronization are non-exhaustively listed above in connection with step 704. If no triggering event has occurred, the process returns to step 710, wherein the second browser continues to receive state information captured by the first browser, and waits for a triggering event to occur. On the other hand, if a triggering event is detected at step 712, then the second browser adjusts its state to reflect the received state information (step 714). As discussed above, one way that this state adjustment can take place is if the state information includes a historical record of events that have occurred on the first browser, in which case the second browser may step through that same sequence of events (where the events may have, optionally, undergone a transformation to account for the fact that the first and second browser may be rendering the same content in slightly different formats (e.g., in different markup languages)).
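The replay-with-optional-transformation step described above (step 714) can be sketched as follows. All names here are illustrative; in particular, the sample transformation mapping card identifiers to form identifiers merely stands in for whatever conversion a real system would need between markup formats.

```python
def synchronize_second_browser(event_record, apply_event, transform=None):
    """Step through the first browser's recorded events on the second
    browser, optionally transforming each event to account for the second
    browser rendering the same content in a different format."""
    for event in event_record:
        if transform is not None:
            event = transform(event)  # e.g., map a WML card to a voice form
        apply_event(event)

# Hypothetical example: the voice browser replays navigation events,
# renaming "card" targets to "form" targets for its own representation.
replayed = []
events = [{"goto": "card204"}, {"goto": "card210"}]
synchronize_second_browser(
    events,
    replayed.append,
    transform=lambda e: {"goto": e["goto"].replace("card", "form")},
)
```

When the two browsers render the content in the same format, the `transform` argument is simply omitted and the events are replayed verbatim.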
(44) Following the adjustment of the second browser to reflect the state of the first, the two browsers continue with the process of capturing state data, and each browser's adjusting its state to reflect the state data captured by the other. It should be noted that this process of capturing and adjusting is a mutual process that proceeds in both directions. That is, each browser is capable of capturing state data (which is generally done when the browser is being used by a user to perform navigation), and is also capable of synchronizing to a given state based on the state data provided by the other browser. Thus, while
(45) Exemplary Environment for Synchronization of Visual and Voice Browsers
(46) With reference to
(47)
(48) The exemplary computing device 108 shown in
(49) Because visual browser 110 executes on computing device 108, application server 910 provides content in the form of visual markup language directly to computing device 108. That is, when the application 912 is operating in visual mode, application server 910 provides visual markup language content to switch 908, so that such content can be sent out to computing device 108. Computing device 108 then uses visual browser 110 to interact with a user of computing device 108 on the basis of the visual markup content. However, computing device 108, in this example, does not run a voice browser; rather, computing device 108 merely accepts audio input for voice browser 114 and renders audio output generated by voice browser 114. Voice browser 114 runs on computing device 112 which, in the example of
(50) Because visual browser 110 and voice browser 114 are located separately from each other in the example of
(51) It is noted that the foregoing examples have been provided merely for the purpose of explanation and are in no way to be construed as limiting of the present invention. While the invention has been described with reference to various embodiments, it is understood that the words which have been used herein are words of description and illustration, rather than words of limitations. Further, although the invention has been described herein with reference to particular means, materials and embodiments, the invention is not intended to be limited to the particulars disclosed herein; rather, the invention extends to all functionally equivalent structures, methods and uses, such as are within the scope of the appended claims. Those skilled in the art, having the benefit of the teachings of this specification, may effect numerous modifications thereto and changes may be made without departing from the scope and spirit of the invention in its aspects.