TRANSLATION METHOD AND APPARATUS, AND TRANSLATION SYSTEM
20220374613 · 2022-11-24
Assignee
Inventors
Cpc classification
G06F40/47
PHYSICS
International classification
G06F40/47
PHYSICS
Abstract
A translation method includes: selecting a source word from a source sentence; generating mapping information including location information of the selected source word mapped to the selected source word in the source sentence; and correcting a target word, which is generated by translating the source sentence, based on location information of a feature value of the target word and the mapping information.
Claims
1. A translation apparatus comprising: a memory configured to store mapping information, wherein, in the mapping information, at least one source word in a source sentence is mapped to location information of the at least one source word; and a processor configured to: encode the source sentence to generate vectors; determine feature values based on the generated vectors; determine a target word by performing decoding based on the determined feature values; determine whether location information of a maximum feature value, from among the determined feature values, is included in the mapping information; and replace the determined target word with a source word mapped to the location information of the maximum feature value in response to the location information of the maximum feature value being included in the mapping information.
2. The translation apparatus of claim 1, wherein the processor is configured to generate a context vector using the determined feature values and the generated vectors, and determine the target word by performing decoding on the generated context vector and a previous target word.
3. The translation apparatus of claim 1, wherein the processor is configured to determine the feature values using an attention mechanism.
4. The translation apparatus of claim 2, wherein the at least one source word corresponds to any one of a proper noun, a numeral, a word including a numeral and a character, a word expressed by a target language, a word not registered in a dictionary, and a phrase including any one or any combination of any two or more of a proper noun, a numeral, a word including a numeral and a character, a word expressed by a target language, and a word not registered in a dictionary.
5. A processor-implemented machine translation method, the translation method comprising: generating mapping information, wherein, in the mapping information, at least one source word in a source sentence is mapped to location information of the at least one source word; encoding the source sentence to generate vectors; determining feature values based on the generated vectors; determining a target word by performing decoding based on the determined feature values; determining whether location information of a maximum feature value, from among the determined feature values, is included in the mapping information; and replacing the determined target word with a source word mapped to the location information of the maximum feature value in response to the location information of the maximum feature value being included in the mapping information.
6. The translation method of claim 5, further comprising: generating a context vector using the determined feature values and the generated vectors; and determining the target word by performing decoding on the generated context vector and a previous target word.
7. The translation method of claim 5, wherein the determining of the feature values comprises determining the feature values using an attention mechanism.
8. The translation method of claim 5, wherein the at least one source word corresponds to any one of a proper noun, a numeral, a word including a numeral and a character, a word expressed by a target language, a word not registered in a dictionary, and a phrase including any one or any combination of any two or more of a proper noun, a numeral, a word including a numeral and a character, a word expressed by a target language, and a word not registered in a dictionary.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0038]
[0039]
[0040]
[0041]
[0042]
[0043]
[0044]
[0045]
[0046]
[0047]
[0048] Throughout the drawings and the detailed description, unless otherwise described or provided, the same drawing reference numerals will be understood to refer to the same elements, features, and structures. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience.
DETAILED DESCRIPTION
[0049] The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known in the art may be omitted for increased clarity and conciseness.
[0050] The features described herein may be embodied in different forms, and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application.
[0051] Although terms such as “first,” “second,” and “third” may be used herein to describe various members, components, regions, layers, or sections, these members, components, regions, layers, or sections are not to be limited by these terms. Rather, these terms are only used to distinguish one member, component, region, layer, or section from another member, component, region, layer, or section. Thus, a first member, component, region, layer, or section referred to in examples described herein may also be referred to as a second member, component, region, layer, or section without departing from the teachings of the examples.
[0052] Throughout the specification, when a component is described as being “connected to,” or “coupled to” another component, it may be directly “connected to,” or “coupled to” the other component, or there may be one or more other components intervening therebetween. In contrast, when an element is described as being “directly connected to,” or “directly coupled to” another element, there can be no other elements intervening therebetween. Likewise, expressions, for example, “between” and “immediately between,” and “adjacent to” and “immediately adjacent to” may also be construed as described in the foregoing.
[0053] As used herein, the term “and/or” includes any one and any combination of any two or more of the associated listed items.
[0054] The terminology used herein is for describing various examples only, and is not to be used to limit the disclosure. The articles “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “includes,” and “has” specify the presence of stated features, numbers, operations, members, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, members, elements, and/or combinations thereof.
[0055] Unless otherwise defined, all terms, including technical and scientific terms, used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains based on an understanding of the present disclosure. Terms, such as those defined in commonly used dictionaries, are to be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0056]
[0057] Referring to
[0058] The sentence analyzer 110 analyzes a source sentence. For example, the sentence analyzer 110 performs a morpheme analysis and a syntax analysis on the source sentence. The sentence analyzer 110 generates a copy list by analyzing the source sentence. The copy list includes at least one source word to be copied into a target sentence and location information (or position information) of each of the source words. A detailed operation of the sentence analyzer 110 will be described hereinafter with reference to
[0059] The translator 120 completes the target sentence including target words by performing machine translation on the source sentence. For example, the translator 120 encodes the source sentence and sequentially determines the target words through decoding to complete the target sentence. A detailed operation of the translator 120 will be described hereinafter with reference to
[0060] The corrector 130 operates at each decoding time, for example, when each target word is determined, or operates when the target sentence is completed, for example, when the entire decoding is completed. In an example, when a target word at a current decoding time t is determined, the corrector 130 determines whether to correct the target word. The determination may be based on a location, or a position, of a source word associated with one or more feature values of the determined target word at the current decoding time t (e.g., a maximum feature value), and whether that location or position is on the copy list. Such an operation of the corrector 130 will be described with reference to
[0061] Through the operation of the corrector 130 described in the foregoing, the translation system 100 generates a corrected target sentence. Thus, a translation error is minimized and a translation accuracy is improved accordingly.
[0062]
[0063] In the example of 1990
4,868,520
, 2000
4,019,991
,
3,829,998
. [0065] Second source sentence: Hutton,
,
Lee Jihyun
1000
.
[0066] Table 1 illustrates an example of location information of source words in the first source sentence, and Table 2 illustrates an example of location information of source words in the second source sentence.
TABLE-US-00001 TABLE 1 Location information Source word 1 2
3
4
5
6 . 7 1990
8 4,868,520 9
10 , 11 2000
12 4,019,991 13
14 , 15
16
17 3,829,998 18
19
TABLE-US-00002 TABLE 2 Location information Source word 1 Hutton 2 , 3 4 , 5
6 Lee 7 Jihyun
8
9
10
11
12
13
14 1000
15
16
17
[0067] Referring to
[0068] For example, the sentence analyzer 200 selects “1990,” “4,868,520,” “2000,” “4,019,991,” and “3,829,998” corresponding to a numeral from the first source sentence. Additionally, the sentence analyzer 200 selects “Hutton,” “,” “Lee,” and “Jihyun” corresponding to a proper noun from the second source sentence. The sentence analyzer 200 also selects “1000
” corresponding to a numeral from the second source sentence.
[0069] In stage 220, the sentence analyzer 200 preprocesses the selected source word. The sentence analyzer 200 changes a transcription of the selected source word. For example, the sentence analyzer 200 romanizes the proper noun “” to be “Seok MiYeon,” and changes “1000
” to “10 million” or “10,000,000.” In addition, the sentence analyzer 200 processes selected source words as a single source word. For example, “Lee” and “Jihyun” are source words adjacent to each other without a comma, and thus the sentence analyzer 200 processes “Lee” and “Jihyun” as a single source word, “Lee Jihyun.”
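As an illustration only (not part of the claimed apparatus), the merging step of the preprocessing in [0069] can be sketched as follows. Detecting proper nouns by capitalization is an assumption made here for brevity; the disclosure does not fix a detection method.

```python
def merge_adjacent_proper_nouns(tokens):
    """Illustrative sketch: adjacent proper-noun tokens with no comma
    between them are merged into a single source word, as with
    "Lee" + "Jihyun" -> "Lee Jihyun" in [0069]. Proper nouns are
    detected here by a leading uppercase letter (an assumption)."""
    merged = []
    for tok in tokens:
        if merged and tok[0].isupper() and merged[-1][0].isupper():
            merged[-1] = merged[-1] + " " + tok  # merge into one source word
        else:
            merged.append(tok)
    return merged

print(merge_adjacent_proper_nouns(["Hutton", ",", "Lee", "Jihyun", "visited"]))
```

The comma after "Hutton" blocks merging, so only "Lee" and "Jihyun" are combined, consistent with the example in the text.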
[0070] In stage 230, the sentence analyzer 200 generates a copy list including the selected source word and location information of the selected source word. In other words, the sentence analyzer 200 generates mapping information including the selected source word and the location information of the selected source word that are mapped to each other. When the selected source word is preprocessed, the sentence analyzer 200 maps the location information of the selected source word and a result of the preprocessing. For example, the sentence analyzer 200 maps “Seok MiYeon,” which is a result of preprocessing “,” and location information 3 of “
.” Also, the sentence analyzer 200 maps “10 million,” which is a result of preprocessing “1000
,” and location information 14 of “1000
.” Similarly, the sentence analyzer 200 maps “Lee Jihyun,” which is a result of preprocessing “Lee” and “Jihyun,” and location information 6 of “Lee” and location information 7 of “Jihyun.”
[0071] Table 3, below, illustrates an example of a copy list associated with the first source sentence, and Table 4, below, illustrates an example of a copy list associated with the second source sentence.
TABLE-US-00003 TABLE 3 Location information Source word 7 1990 8 4,868,520 11 2000 12 4,019,991 17 3,829,998
TABLE-US-00004 TABLE 4 Location information Source word 1 Hutton 3 Seok MiYeon (preprocessing result) 6 Lee Jihyun (preprocessing result) 7 Lee Jihyun (preprocessing result) 14 10 million
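For illustration, the copy-list generation of stages 210 through 230 can be sketched as below for the numeral case of Table 3. The regex-based numeral detection is an assumption; the disclosure describes selecting source words of preset types without prescribing how they are recognized.

```python
import re

def build_copy_list(tokens):
    """Illustrative sketch of stages 210-230: select source words of a
    preset type (here only numerals, detected by a simple regex -- an
    assumption) and map each selected word to its 1-based location in
    the source sentence, as in Tables 1 and 3."""
    copy_list = {}  # location information -> source word
    for loc, tok in enumerate(tokens, start=1):  # locations are 1-based
        if re.fullmatch(r"[\d,]*\d", tok):       # numeral, possibly with separators
            copy_list[loc] = tok
    return copy_list

print(build_copy_list(["In", "1990", ",", "4,868,520"]))
```

A preprocessing result (e.g. a romanized proper noun) would simply be stored as the value in place of the raw token, as in Table 4.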
[0072] A corrector (not shown), an example of which will be described later, uses the copy list generated by the sentence analyzer 200.
[0073]
[0074] Referring to
[0075] An attention mechanism is applied to the translator of the NMT model to align a source word and a target word corresponding to the source word. Referring to
[0076] Referring to
[0077] When the source words are encoded, the translator determines target words in sequential order through decoding. In the example of
[0078] The translator calculates feature values a.sub.t,1, a.sub.t,2, . . . , a.sub.t,n. The translator calculates the feature values a.sub.t,1, a.sub.t,2, . . . , a.sub.t,n based on RNN hidden state information s.sub.t-1 associated with a target word y.sub.t-1 at a previous decoding time t-1 and the vectors h.sub.1, h.sub.2, . . . , h.sub.n. For example, the translator calculates a.sub.t,1 based on s.sub.t-1 and h.sub.1. The translator calculates the remaining feature values a.sub.t,2, . . . , a.sub.t,n through a similar method used to calculate the feature value a.sub.t,1.
[0079] A feature value indicates how much a vector or a source word contributes to determining the target word y.sub.t. For example, a.sub.t,1 indicates a degree of a contribution of h.sub.1, or a source word corresponding to location information 1, to the determination of the target word y.sub.t. Similarly, a.sub.t,n indicates a degree of a contribution of h.sub.n, or a source word corresponding to location information n, to the determination of the target word y.sub.t. Such a feature value is also referred to as an attention value.
[0080] The translator calculates a context vector c.sub.t using the feature values and the vectors. For example, the translator calculates c.sub.t=a.sub.t,1×h.sub.1+a.sub.t,2×h.sub.2+ . . . +a.sub.t,n×h.sub.n.
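The feature-value and context-vector computation of [0078] through [0080] can be sketched as one decoding step. A dot-product score followed by a softmax is an assumption here; the disclosure does not fix the scoring function of the attention mechanism.

```python
import numpy as np

def attention_step(H, s_prev):
    """Illustrative sketch of [0078]-[0080]: H is an (n, d) matrix of
    encoder vectors h_1..h_n; s_prev is the decoder hidden state
    s_{t-1}. Scoring by dot product is an assumption."""
    scores = H @ s_prev                 # one score per source location
    a = np.exp(scores - scores.max())
    a = a / a.sum()                     # feature (attention) values a_{t,1..n}
    c = a @ H                           # context vector c_t = sum_i a_{t,i} * h_i
    return a, c

H = np.eye(3)                           # toy encoder vectors
a, c = attention_step(H, np.array([0.0, 5.0, 0.0]))
print(int(a.argmax()) + 1)              # 1-based location of the maximum feature value
```

The argmax over the feature values yields the location information of the maximum feature value used later by the corrector.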
[0081] The translator determines RNN hidden state information s.sub.t at the current decoding time t based on the RNN hidden state information s.sub.t-1 associated with the target word y.sub.t-1 at the previous decoding time t-1 and on the target word y.sub.t-1 at the previous decoding time t-1. The translator determines the target word y.sub.t by performing decoding based on the RNN hidden state information s.sub.t at the current decoding time t.
[0082] In the example of
[0083] Referring to ,” to the determination of the target word “figure” is greatest, and thus a.sub.5,3 corresponds to a maximum feature value among a.sub.5,1, a.sub.5,2, . . . , a.sub.5,19.
[0084] The translator determines a target word at each subsequent decoding time, and completes a target sentence including determined target words.
[0085]
[0086]
[0087] As illustrated in
[0088] In stage 740, the translator sequentially determines target words at decoding times 6 through 9, and determines a target word “486,820” at decoding time 10. The corrector determines a maximum feature value among feature values a.sub.10,1, a.sub.10,2, . . . , a.sub.10,19 of the target word “486,820.” The translator focuses most on a source word “4,868,520” corresponding to location information 8 to determine the target word “486,820” because a.sub.10,8 corresponds to the maximum feature value among a.sub.10,1, a.sub.10,2, . . . , a.sub.10,19. The corrector determines whether location information 8 of a.sub.10,8 is included in the copy list 720. Since location information 8 is included in the copy list 720, the corrector replaces the target word “486,820” with “4,868,520” mapped to location information 8 in the copy list 720.
[0089] The translator determines a target word at each subsequent decoding time, and the corrector corrects or does not correct determined target words at subsequent decoding times.
[0090] A corrected target sentence 750 includes corrected target words. In the corrected target sentence 750, the target word “486,820” is replaced with the corrected target word “4,868,520” by the corrector, and thus it may be determined that a translation error is reduced and a translation accuracy is improved.
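The correction step of [0087] through [0090] reduces to a lookup: find the 1-based location of the maximum feature value and, if that location is on the copy list, substitute the mapped source word. A minimal sketch, with illustrative names:

```python
def correct_target_word(target_word, feature_values, copy_list):
    """Illustrative sketch of the corrector in [0087]-[0090]: locate the
    maximum feature value (1-based, matching the location information in
    the copy list) and replace the target word when the location is on
    the copy list; otherwise keep the target word unchanged."""
    max_loc = max(range(len(feature_values)), key=feature_values.__getitem__) + 1
    if max_loc in copy_list:
        return copy_list[max_loc]
    return target_word

vals = [0.01] * 19
vals[7] = 0.8                # a_{10,8} is the maximum feature value
print(correct_target_word("486,820", vals, {8: "4,868,520"}))
```

With the copy list of Table 3, the mistranslated numeral "486,820" is replaced by the source numeral "4,868,520", as in the corrected target sentence 750.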
[0091]
[0092]
[0093] As illustrated in ” corresponding to location information 1 contributes greatest to determining “Sukmyun,” and thus the maximum feature value of “Sukmyun” is a.sub.1,1.
[0094] The corrector selects, from the target sentence 930, a target word corresponding to a preset type, for example, a proper noun, a numeral, a word including a numeral and a character, and a word that is not registered in a dictionary. The corrector selects, from the target sentence 930, “Sukmyun” corresponding to a word not registered in a dictionary or a proper noun, and “100” and “million” corresponding to a numeral.
[0095] The corrector verifies location information of each of the maximum feature value a.sub.1,1 of “Sukmyun,” a maximum feature value a.sub.8,8 of “100,” and a maximum feature value a.sub.9,8 of “million.” The corrector verifies the location information of a.sub.1,1 to be location information 1, and the location information of a.sub.8,8 and a.sub.9,8 to be location information 8.
[0096] The corrector determines whether location information 1 is included in a copy list 940. Since location information 1 is included in the copy list 940, the corrector replaces “Sukmyun” with “Seok MiYeon” mapped to location information 1.
[0097] The maximum feature values of the selected target words “100” and “million” have the same location information. As illustrated in
[0098] The target sentence 930 is corrected to be a corrected target sentence 950. In a case in which a proper noun in the source sentence 910 is either processed as an unknown word or not correctly translated, the corrected target sentence 950 includes the proper noun, or a preprocessed proper noun, of the source sentence 910. For example, when a proper noun “” is translated as “Sukmyun” and not processed as an unknown word, a preprocessed proper noun “Seok MiYeon” replaces it in the corrected target sentence 950. Thus, a translation accuracy may be improved and a translation error may be reduced.
[0099] <Translation of a Subword-Unit Source Sentence: a Subword-Level Machine Translation>
[0100] According to one example, a translation system may translate a word-unit source sentence as described with reference to
[0101]
[0102] A translation system may convert an original word-unit source sentence to a subword-unit source sentence. In the example of Hutton
,
. . . ” is converted to a source sentence 1010 “
H@@ u@@ tt@@ on
,
. . . .”
[0103] The sub-source words “H@@,” “u@@,” and “tt@@” include a tag @@ indicating a subword, and the sub-source word “on” does not include the tag. The sub-source word “on” is a last sub-source word of the original source word “Hutton,” and thus the sub-source word “on” does not include the tag.
[0104] A sentence analyzer 1020 converts the subword-unit source sentence 1010 to a word-unit sentence through preprocessing. That is, the sentence analyzer 1020 generates a single source word by combining the sub-source words through the preprocessing. For example, the sentence analyzer 1020 generates a single source word “Hutton” by combining the sub-source words “H@@,” “u@@,” “tt@@,” and “on.”
[0105] The sentence analyzer 1020 determines whether the single source word corresponds to a preset type. In response to the single source word corresponding to the preset type, the sentence analyzer 1020 maps location information of each of the sub-source words to the single source word. For example, the sentence analyzer 1020 maps, to
[0106] “Hutton,” location information 2 of “H@@,” location information 3 of “u@@,” location information 4 of “tt@@,” and location information 5 of “on.” The sentence analyzer 1020 generates a copy list including the mapped location information of each of the sub-source words and the single source word. Table 5 illustrates an example of the copy list associated with the source sentence 1010.
TABLE-US-00005 TABLE 5 Location information Source word 2 Hutton 3 Hutton 4 Hutton 5 Hutton
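The subword handling of [0104] through [0106] can be sketched as follows: sub-source words carrying the @@ tag are joined into the original word, and every sub-token location is mapped to that word, as in Table 5. The preset-type predicate is passed in as an assumption, since the disclosure lists several possible types (proper nouns, numerals, unregistered words, and so on).

```python
def subword_copy_list(sub_tokens, is_preset_type):
    """Illustrative sketch of [0104]-[0106]: rebuild original words from
    @@-tagged sub-source words and map each sub-token's 1-based location
    to the whole word. `is_preset_type` is an assumed predicate."""
    copy_list, piece_locs, pieces = {}, [], []
    for loc, tok in enumerate(sub_tokens, start=1):
        if tok.endswith("@@"):            # tagged: the word continues
            pieces.append(tok[:-2])
            piece_locs.append(loc)
        else:                             # untagged: last piece of the word
            word = "".join(pieces) + tok
            if is_preset_type(word):
                for l in piece_locs + [loc]:
                    copy_list[l] = word
            pieces, piece_locs = [], []
    return copy_list

subs = ["to", "H@@", "u@@", "tt@@", "on", ","]
print(subword_copy_list(subs, lambda w: w[0].isupper()))
```

With a capitalization predicate (an assumption), "Hutton" is mapped to locations 2 through 5, matching Table 5.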
[0107] A translator encodes the source sentence 1010 including the sub-source words. The translator may be, for example, a subword-level NMT model.
[0108] The translator calculates feature values a.sub.2,1, a.sub.2,2, . . . , a.sub.2,n to determine a second target word after a first target word “Dear” is determined. Since the sub-source word “H@@” includes the tag, the second target word includes the tag. In other words, the second target word corresponds to a sub-target word of a subword unit. The second target word, which is a first determined sub-target word sub.sub.1, is “H@@.” Similarly, the translator determines sub-target words sub.sub.2 through sub.sub.4 in sequential order. Here, sub.sub.2 is “u@@,” sub.sub.3 is “tch@@,” and sub.sub.4 is “et.”
[0109] In this example, a.sub.2,2 is a maximum feature value among feature values of “H@@,” for example, a.sub.2,1, a.sub.2,2, . . . , a.sub.2,n, and a.sub.3,3 is a maximum feature value among feature values of the sub-target word “u@@,” for example, a.sub.3,1, a.sub.3,2, . . . , a.sub.3,n. Also, a.sub.4,4 is a maximum feature value among feature values of the sub-target word “tch@@,” for example, a.sub.4,1, a.sub.4,2, . . . , a.sub.4,n, and a.sub.5,5 is a maximum feature value among feature values of the sub-target word “et,” for example, a.sub.5,1, a.sub.5,2, . . . , a.sub.5,n. Determining a maximum feature value among feature values is described above, and thus a more detailed and repeated description of the determining of the maximum feature value is omitted here for brevity.
[0110] The translator determines a target word based on the determined sub-target words. In the example of
[0111] A corrector 1030 operates when a target word is determined based on sub-target words, or operates when a target sentence is completed. When the target word is determined based on the sub-target words, the corrector 1030 operates as follows.
[0112] In one example, when the target word “Hutchet” is determined, the corrector 1030 corrects the target word “Hutchet” based on whether location information of a maximum feature value of each of the sub-target words is included in the copy list. In other words, according to this example, the corrector 1030 may correct the target word “Hutchet” immediately after the target word “Hutchet” is determined, prior to the target sentence being completed. Referring to the copy list illustrated in
[0113] According to another example, the corrector 1030 may determine a representative value of respective maximum feature values of the sub-target words. For example, the corrector 1030 may select any one of the maximum feature values of the sub-target words. The corrector 1030 may determine whether location information of the representative value is included in the copy list. In response to the location information of the representative value being included in the copy list, the corrector 1030 may replace the target word with a source word mapped to the location information of the representative value. In the example of
[0114] Thus, the original source word “Hutton” is included in the target sentence, and thus a translation error may be reduced.
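The representative-value variant of [0113] can be sketched as follows. Taking the largest of the sub-target words' maximum feature values as the representative value is one of the options the text mentions; the pair-list representation of the maxima is an illustrative assumption.

```python
def correct_from_subwords(sub_words, maxima, copy_list):
    """Illustrative sketch of [0112]-[0113]: assemble a word from
    sub-target words (stripping the @@ tag), pick a representative value
    among the sub-words' maximum feature values (here: the largest), and
    replace the word when the representative value's location is on the
    copy list. `maxima` is a list of (location, max feature value)."""
    word = "".join(s[:-2] if s.endswith("@@") else s for s in sub_words)
    loc, _ = max(maxima, key=lambda lv: lv[1])  # location of representative value
    return copy_list.get(loc, word)             # fall back to the assembled word

subs = ["H@@", "u@@", "tch@@", "et"]
maxima = [(2, 0.9), (3, 0.8), (4, 0.7), (5, 0.6)]
print(correct_from_subwords(subs, maxima, {2: "Hutton", 3: "Hutton", 4: "Hutton", 5: "Hutton"}))
```

With the copy list of Table 5, the mistranslated "Hutchet" is replaced by the original source word "Hutton".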
[0115] When the target word is determined based on the sub-target words, the corrector 1030 operates as described above. When the target sentence is completed, the corrector 1030 operates as follows. In one example, the translator or the corrector 1030 operates on the target sentence “Dear H@@ u@@ tch@@ et, your order . . . ” and converts the subword-based target sentence to a word-based sentence. For example, the translator or the corrector 1030 determines “Hutchet” based on “H@@,” “u@@,” “tch@@,” and “et,” and converts the target sentence “Dear H@@ u@@ tch@@ et, your order . . . ” to “Dear Hutchet, your order . . . .”
[0116] The corrector 1030 selects a target word corresponding to a preset type from the target sentence obtained through the converting of the subword-based target sentence. For example, the corrector 1030 selects “Hutchet” corresponding to a word not registered in a dictionary or corresponding to a proper noun from the target sentence obtained through the converting. For subsequent operations of the corrector 1030, reference may be made to the description of the operations of the corrector 1030 performed when a target word is determined based on sub-target words. Thus, a more detailed and repeated description of such operations is omitted here for brevity.
[0117] <Translation of a Character-Unit Source Sentence: a Character-Level Machine Translation>
[0118] According to still another example, the translation system may translate a character-unit source sentence. The translation system may process each of characters in a character-unit source sentence using a method similar to a subword processing method described above with reference to
[0119] For example, when the translation system receives an original source sentence “ Hutton
,
. . . ” as an input, the translation system inputs a tag @ to a location of a word spacing in the original source sentence to convert the original source sentence to a character-unit source sentence “
”. Here, the translation system considers @ to be a single character, and _ is used as an indicator to distinguish each character in the character-unit source sentence.
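The character-unit conversion of [0119] can be sketched as below: each space is replaced by the tag @ (treated as a single character), and the characters are joined with _ as the separator.

```python
def to_char_units(sentence):
    """Illustrative sketch of [0119]: convert a word-unit sentence to a
    character-unit sentence by replacing each word spacing with the tag
    '@' and separating every character with '_'."""
    chars = [("@" if ch == " " else ch) for ch in sentence]
    return "_".join(chars)

print(to_char_units("Dear Hutton"))
```

The resulting string has the same form as the character-unit sentences shown in the text; character locations in it can then be mapped to original source words as in Table 6.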
[0120] A sentence analyzer generates a copy list as illustrated in Table 6 below by mapping, to a source word “Hutton” in the original source sentence, location information 6 of a character “H” and location information 11 of a character “n” in the character-unit source sentence “.” Here, the mapping of location information of each sub-source word and a source word may be applicable to the mapping of location information of each character and a source word, and thus more detailed and repeated descriptions will be omitted for brevity.
TABLE-US-00006 TABLE 6 Location information Source word 6 Hutton 7 Hutton 8 Hutton 9 Hutton 10 Hutton 11 Hutton
[0121] A translator encodes the character-unit source sentence, and performs decoding based on a result of the encoding. The translator may be, for example, a character-level NMT model. The translator determines a target character each time the translator performs the decoding. Here, the translator may determine the target character using a method similar to a target word determining method performed by the translator described with reference to
[0122] When the translator determines target characters “H_u_t_c_h_e_t” by performing the decoding, the translator determines a target word “Hutchet” based on the determined target characters. Here, a corrector replaces the target word “Hutchet” with a source word “Hutton” included in the copy list of Table 6. In other words, the corrector replaces the target word “Hutchet” with the source word “Hutton” before a translation is completed. Alternatively, when a translation of the character-unit source sentence into a character-unit target sentence is completed and the character-unit target sentence is converted to a word-unit target sentence, the corrector may correct the word-unit target sentence. For example, when a character-unit target sentence is “D_e_a_r_@_H_u_t_c_h_e_t_,@_y_o_u_r_@_o_r_d_e_r . . . ”, the translator converts the character-unit target sentence to a word-unit target sentence “Dear Hutchet, your order . . . .” Here, the corrector replaces a target word “Hutchet” in the word-unit target sentence with a source word “Hutton” included in the copy list of Table 6.
[0123] The operations of the corrector described above with reference to
[0124]
[0125] Referring to
[0126] The memory 1120 includes one or more instructions executable by the controller 1110.
[0127] When the instructions are executed by the controller 1110, the controller 1110 selects a source word from a source sentence. The controller 1110 generates mapping information including location information of the selected source word mapped to the selected source word. In addition, the controller 1110 corrects a target word based on the mapping information and location information of one or more feature values of the target word.
[0128] The descriptions provided with reference to
[0129]
[0130] A translation method to be described hereinafter with reference to
[0131] Referring to
[0132] In operation 1220, the translation apparatus or the translation system generates mapping information including location information of the selected source word mapped to the selected source word.
[0133] In operation 1230, the translation apparatus or the translation system corrects a target word based on location information associated with one or more feature values of the target word and the mapping information.
[0134] The descriptions provided with reference to
[0135]
[0136] A translation method to be described hereinafter with reference to
[0137] Referring to
[0138] In operation 1320, the translation apparatus or the translation system generates mapping information including location information of the selected source word mapped to the selected source word.
[0139] In operation 1330, the translation apparatus or the translation system determines a target word through a translator.
[0140] In operation 1340, the translation apparatus or the translation system corrects the target word based on whether location information associated with one or more feature values of the target word is included in the mapping information.
[0141] The descriptions provided with reference to
[0142]
[0143] A translation method to be described hereinafter with reference to
[0144] Referring to
[0145] In operation 1420, the translation apparatus or the translation system generates mapping information including location information of the selected source word mapped to the selected source word.
[0146] In operation 1430, the translation apparatus or the translation system completes a target sentence through a translator.
[0147] In operation 1440, the translation apparatus or the translation system corrects a target word selected from the target sentence based on whether location information associated with one or more feature values of the selected target word is included in the mapping information.
[0148] The descriptions provided with reference to
[0149] The sentence analyzer 110, the translator 120, and the corrector 130 in
[0150] The methods illustrated in
[0151] Instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above may be written as computer programs, code segments, instructions or any combination thereof, for individually or collectively instructing or configuring the one or more processors or computers to operate as a machine or special-purpose computer to perform the operations that are performed by the hardware components and the methods as described above. In one example, the instructions or software include machine code that is directly executed by the one or more processors or computers, such as machine code produced by a compiler. In another example, the instructions or software include higher-level code that is executed by the one or more processors or computers using an interpreter. The instructions or software may be written using any programming language based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions in the specification, which disclose algorithms for performing the operations that are performed by the hardware components and the methods as described above.
[0152] The instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, may be recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), random-access memory (RAM), flash memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to one or more processors or computers so that the one or more processors or computers can execute the instructions. In one example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the one or more processors or computers.
[0153] While this disclosure includes specific examples, it will be apparent to one of ordinary skill in the art that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.