SEMICONDUCTOR DEVICE, SEARCH SYSTEM AND SEARCH METHOD
20170277431 · 2017-09-28
Inventors
CPC classification
G11C15/00
PHYSICS
G06F2212/65
PHYSICS
G06F16/90339
PHYSICS
International classification
Abstract
The present invention relates to a semiconductor device capable of providing extensibility in the entry direction and in the bit direction of a search table. The semiconductor device includes: a plurality of block search circuits, each of which searches one of a plurality of block tables into which a search table, configured in the entry direction and in the bit direction, is divided in the entry direction and in the bit direction; a plurality of combining circuits that combine the search results of the block search circuits in the bit direction; and a control circuit that inputs a search key as well as the outputs of the combining circuits, and outputs hit information.
Claims
1. A semiconductor device comprising: a plurality of block search circuits corresponding to a plurality of block tables into which a search table, which is configured in the entry direction and the bit direction, is divided in the entry direction and in the bit direction; a plurality of combining circuits that combine the search results of the block search circuits in the bit direction; and a control circuit that inputs a search key as well as the outputs of the combining circuits, and outputs hit information.
2. The semiconductor device according to claim 1, wherein the block search circuit comprises: a hash calculating circuit that calculates the hash values of block search keys into which the search key is divided; a search device that searches the stored data by using the block search key and the hash value, and outputs a reference address; a reference table that converts the reference address into a map address; and a map table that is accessed by the map address and outputs the search result.
3. The semiconductor device according to claim 2, wherein the search device comprises: a memory that stores search target data as well as information indicating validity or invalidity of the search target data; a comparator that compares the block search key with the search target data read from the memory by using the hash value; and an address generator that generates the reference address based on the hash value.
4. The semiconductor device according to claim 3, wherein the memory is configured with RAM.
5. The semiconductor device according to claim 2, wherein the map table comprises a memory having a number of storage locations equal to the square of the number of entries in the block search circuit, in which a plurality of pieces of match information can be stored.
6. The semiconductor device according to claim 2, wherein the map table comprises: a memory that encodes match information and stores the encoded match information; and a decoder that decodes the match information read from the memory.
7. The semiconductor device according to claim 6, wherein the upper limit of the number of occurrences of Multi Match is set to a value smaller than the number of entries in the block search circuit divided by the binary logarithm of the number of entries.
8. The semiconductor device according to claim 1, wherein the block search circuit comprises: a hash calculating circuit that calculates the hash values of the block search keys into which the search key is divided; a search device that searches the stored data by using the block search key and the hash value, and outputs the search result and a reference address; and a reference table that converts the reference address into a map address.
9. The semiconductor device according to claim 8, wherein the combining circuit comprises a map table that is accessed by each of the map addresses of the block search circuits in the bit direction, and wherein the combining circuit combines the search results read from the map table by taking the logical product or logical sum.
10. The semiconductor device according to claim 1, wherein the combining circuit combines the search results of the block search circuits in the bit direction by taking the logical product or logical sum.
11. A search system comprising: a plurality of block search circuits corresponding to a plurality of block tables into which a search table, which is configured in the entry direction and in the bit direction, is divided in the entry direction and in the bit direction; a plurality of combining circuits that combine the search results of the block search circuits in the bit direction; a control circuit that inputs the combined search result; and a CPU that inputs a search key to the control circuit and inputs hit information from the control circuit.
12. The search system according to claim 11, wherein the block search circuit comprises a hash calculating circuit that calculates hash values of block search keys into which the search key is divided; a search device that searches the stored data by using the block search key and the hash value, and outputs a reference address; a reference table that converts the reference address into a map address; and a map table that is accessed by the map address and outputs the search result.
13. The search system according to claim 12, wherein the search device comprises: a memory that stores search target data as well as information indicating validity or invalidity of the search target data; a comparator that compares the block search key with the search target data read from the memory by using the hash value; and an address generator that generates the reference address based on the hash value.
14. The search system according to claim 12, wherein the combining circuit combines the search results of the block search circuits in the bit direction by taking the logical product or logical sum.
15. A search method comprising the steps of: dividing a search key into a plurality of block search keys; searching each of a plurality of block search circuits corresponding to a plurality of block tables into which a search table, which is configured in the entry direction and in the bit direction, is divided in the entry direction and in the bit direction; and combining the search results of the block search circuits in the bit direction.
16. The search method according to claim 15, further comprising the steps of: calculating a hash value of each block search key; searching the stored data by using the block search key and the hash value and outputting a reference address; converting the reference address into a map address; and accessing by using the map address and outputting the search result.
17. The search method according to claim 16, further comprising the steps of: storing search target data as well as information indicating validity or invalidity of the search target data; comparing the block search key with the search target data read by the hash value; and generating the reference address based on the hash value.
18. The search method according to claim 16, wherein the search results of the block search circuits in the bit direction are combined by taking the logical product or logical sum.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0031] Hereinafter, an embodiment, an example, and variations will be described with reference to the accompanying drawings. In the following description, like components are denoted by like reference numerals and their description will not be repeated.
[0033] According to the embodiment, it is possible to handle a situation in which multiple hits occur in a block search circuit, while flexibly extending or shortening the table in the bit direction. With this configuration, it is possible to handle a search key of variable length.
Example
[0034] A system according to an example will be described with reference to
[0035] The system has command, address, and word as the input interface from the CPU 20 to the ASE 10, and has address and word as the output interface from the ASE 10 to the CPU 20. As shown in
[0036] In the ASE, RAM is assigned as a storage medium to a search device. By using RAM as the storage medium that configures the search device, it is possible to reduce power consumption compared with the case of using TCAM.
[0037] The search table will be described with reference to
[0038] The configuration and search operation of the ASE 10 will be described with reference to
[0039] (1) The controller 13 receives a search command and a search key [S] of S bit length, and equally divides the search key by the number of blocks. Then, the controller 13 transmits the block search keys, each of Sb bit length, into which the search key is divided, to the respective ASE blocks 11.
[0040] (2) Each ASE block 11 performs a matching comparison between the input block search key and the divided search table (sub table) stored in the ASE block 11, and outputs the result. A table search may hit a plurality of entries with one search key; this is defined as Multi Match. Thus, each ASE block 11 outputs the Multi Match result within that ASE block. The handling of Multi Match in the bit direction is described below in (3), and the handling of Multi Match in the entry direction is described below in (4).
[0041] (3) The merge circuit 12 combines the results output from the respective ASE blocks 11 for each of the entries in the bit direction. The details are described below.
[0042] (4) The controller 13 combines the outputs of the merge circuits 12 in the entry direction by an encoder (not shown). At this time, when Multi Match produces a plurality of search-table addresses as output results, the encoder selects one result according to the priority. The controller 13 outputs an address of A bit length (Address [A]) as well as hit information (Hit/Miss).
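The four steps above can be sketched in software as follows. This is a minimal model, not the patented circuit: the table contents, sizes, and the lowest-address priority rule are illustrative assumptions.

```python
class BlockTable:
    """One ASE block's sub table: Eb entries, each holding an Sb-bit key fragment."""
    def __init__(self, fragments):
        self.fragments = fragments  # entry index -> stored key fragment

    def match(self, key_fragment):
        # (2) match bitmap over this block's entries; Multi Match is possible
        return [int(f == key_fragment) for f in self.fragments]


def search(block_keys, blocks):
    """block_keys: the block search keys from step (1), one per ASE block."""
    # (3) merge in the bit direction: an entry hits only if every block hits it
    merged = blocks[0].match(block_keys[0])
    for blk, frag in zip(blocks[1:], block_keys[1:]):
        merged = [m & b for m, b in zip(merged, blk.match(frag))]
    # (4) priority encoder in the entry direction: lowest entry address wins
    for entry, hit in enumerate(merged):
        if hit:
            return True, entry  # Hit and Address[A]
    return False, None          # Miss
```

For example, a key whose fragments match every block only at entry 2 returns `(True, 2)`, while a key matching no complete entry returns `(False, None)`.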
[0043] The controller 13 performs the timing control and input/output control of the whole device. Upon input, the controller 13 properly processes the input data according to the command input from the outside. Upon searching, the controller 13 performs a merge process and an encoder process.
[0044] Each ASE block 11 is provided with RAM, which enables a search process to be completed within a single ASE block.
[0045] The search process of the ASE block 11 will be described with reference to
[0046] Note that the search results of the ASE blocks 11 arranged in the entry direction are combined by the encoder as described above. The encoder functions as a priority encoder and outputs one search result according to the priority upon occurrence of Multi Match.
[0047] Next, the merge circuit (combining circuit) will be described with reference to
[0048] First, the multiplexer 121 of the merge circuit 12 selects the Eb-bit search result of the ASE block #1, and the AND circuit 122 obtains the logical product of the selected result and the value of the register 123. At this time, the register 123 is initialized such that all Eb bits are set to “High”. The output of the AND circuit 122 (the search result of the ASE block #1) is input to the register 123.
[0049] Next, the multiplexer 121 of the merge circuit 12 selects the Eb-bit search result of the ASE block #2, and the AND circuit 122 obtains the logical product of the selected result and the value of the register 123. At this time, the register 123 stores the search result of the ASE block #1. The output of the AND circuit 122 (the logical product of the search results of the ASE blocks #1 and #2) is input to the register 123.
[0050] Then, as the search results of the ASE blocks #3 and #4 are input in turn, the logical product of the search results of the ASE blocks #1 through #4 is stored in the register 123.
[0051] In this way, the search results of the ASE blocks are combined in the bit direction by the merge circuit 12. Because the merge circuit 12 activates each ASE block sequentially in time, it can also contribute to reduced power consumption.
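The register-and-AND sequence in [0048] through [0050] can be modeled as a short sketch; representing each search result as an Eb-bit list, and the function name, are assumptions for illustration.

```python
def merge(block_results, eb):
    """Cycle-by-cycle model of the merge circuit 12 (names are illustrative)."""
    register = [1] * eb                 # register 123 initialized to all "High"
    for result in block_results:        # multiplexer 121 selects one block per cycle
        # AND circuit 122: logical product of the selected result and the register
        register = [r & s for r, s in zip(register, result)]
    return register                     # combined result in the bit direction
```

With no block results at all, the register simply retains its all-“High” initial value.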
[0052] Next, the search operation of the respective functions included in the ASE block 11 will be described with reference to
[0053] The hash calculating circuit 111 calculates a hash value (h) from a block search key [Sb], which is input data, by a hash function. The hash value (h) is used to determine the word line to be read from a memory array 1121 that configures the subsequent search device 112. In other words, the hash value (h) represents the line address of the memory array 1121. The hash calculating circuit 111 is provided to prevent variation in data stored in the search device 112.
[0054] The search device 112 compares the input data (block search keys) with the values of the tables (divided search tables) that are all read at the same time from the memory array 1121. Then, the search device 112 outputs the search result (result, reference address). The details are described below.
[0055] The reference table 113 is the table that has the map address referred to in the map table 114.
[0056] The map table 114 is the table that holds Multi Match information of the sub tables stored in the ASE block 11.
[0057] Next, the search device 112 will be described with reference to
[0058] In the case of storing a plurality of entries, the hash values calculated from a plurality of search keys are sometimes the same. This is called a hash collision. When a hash collision occurs, the value is added to the memory array 1121 on the same word line. The number of search keys that can be stored in the memory array 1121 on one word line is six in
[0059] Further, as described with reference to
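The hash-and-compare behavior of [0053] and [0058] might be sketched as below. The modulo hash is a toy stand-in (the actual hash function is not specified here), and only the six-keys-per-word-line figure comes from the text.

```python
WAYS = 6  # search keys storable on one word line, per the example in the text

def hash_value(key, num_lines):
    # toy stand-in for the hash calculating circuit 111: selects the word line
    return key % num_lines

class SearchDevice:
    """Sketch of the search device 112 backed by a RAM-like array of word lines."""
    def __init__(self, num_lines):
        self.lines = [[] for _ in range(num_lines)]  # memory array 1121

    def store(self, key):
        line = self.lines[hash_value(key, len(self.lines))]
        if len(line) >= WAYS:
            raise OverflowError("word line full: too many hash collisions")
        line.append(key)

    def search(self, key):
        # all keys on the selected word line are read and compared at once
        line = self.lines[hash_value(key, len(self.lines))]
        return [stored == key for stored in line]
```

Note how two keys with equal hash values land on the same word line and are distinguished only by the parallel comparison.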
[0060] Next, the output of the Multi Match information will be described with reference to
[0061] The reference table 113 is a table that stores the address information (map table address (9 bits)) referred to in the map table 114. The reference table 113 is configured with RAM (memory). The map table 114 stores the Multi Match result within the ASE block 11. The map table 114 includes a memory array 1141 configured with RAM, a row decoder (RD) 1142, and a sense amplifier (S/A) 1143. In the map table 114, a bit storing “1 (High)” indicates that the corresponding entry matches, and Multi Match occurs when a plurality of bits are marked “High”. The positions of the bits storing “High” are the addresses (entries) of the Multi Match. For example, in the top row of the map table 114, both the fifth and seventh bits from the left store “High”, so that Multi Match occurs in the fifth and seventh entries. Because the map table 114 is read in units of rows, the match information of all the entries is read at the same time.
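Reading the Multi Match addresses from one map-table row, as in the example above, amounts to collecting the positions of the “High” bits. A sketch (here entry indices are 0-based, whereas the text counts bits from one):

```python
def multi_match_entries(row_bits):
    """One map-table row -> list of matching entry addresses (0-indexed)."""
    return [entry for entry, bit in enumerate(row_bits) if bit == 1]
```

A row with two “High” bits yields two addresses, i.e. a Multi Match; an all-“Low” row yields none.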
[0062] The merge circuit 12 is provided to handle the situation in which a plurality of hits (Multi Match) occurs in the ASE block 11. As a result, it is possible to extend the bit length, and thus to flexibly use a search key of variable length. In this way, it is possible to expand the range of applications to deep packet inspection (DPI), which aims at monitoring all the layers L1 to L7 of each packet flowing on the network.
First Variation
[0063] The first variation is an improvement of the map table 114 implemented in the embodiment. A map table according to the first variation will be described with reference to
[0064] As shown in
[0065] For example, if the data stored in the ASE block 11 is the same in all entries, the Multi Match results concentrate on only one entry in the map table 114 and “High” is stored in all the bits of that entry. At this time, the number of bits is equal to the number of entries of the ASE block. In this case, the capacity in the entry direction is almost entirely wasted.
[0066] On the other hand, when no Multi Match occurs, “High” is stored in one bit in each entry. At this time, “High” is not stored in the same bit in all entries.
[0067] Consequently, when a match is set to “High” and a mismatch is set to “Low” in the map table 114, the number of “High” values in the entire map table is just equal to the number of entries. Thus, a memory capacity of (the number of entries in the entry direction)×(the number of entries in the bit direction) is largely wasted.
[0068] In the map table 114A of the first variation, as a method for compressing the memory capacity, the Multi Match information is stored as encoded data in order to reduce the wasted memory capacity. In the embodiment, the Multi Match information is expressed in such a way that a number of bits equal to the number of entries is prepared and “High” is stored at the matching entries. In the first variation, the matching entry information is encoded, so that the memory capacity can be compressed when the relationship “log₂(the number of entries)×the number of occurrences of Multi Match<the number of entries” is satisfied as shown in
[0069] For example, when the number of entries is “1024”, the number of entries “1024”/log₂(the number of entries “1024”)=102.4. Thus, the memory capacity can be reduced by setting the number of occurrences of Multi Match to 100 or less. Further, when the number of entries is 1024 and the number of occurrences of Multi Match is limited to 64, the memory capacity can be reduced to about 60% of the original (more precisely, to 64×10/1024=0.625). In this case, the number of decoders required is equal to the number of occurrences of Multi Match. Note that in the embodiment, the memory capacity is not compressed in the bit direction even if the number of occurrences of Multi Match is limited.
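The arithmetic in paragraphs [0068] and [0069] can be checked directly. This sketch only reproduces the bit counts; the function and variable names are illustrative.

```python
import math

def encoded_bits_per_entry(entries, max_multi_match):
    # first-variation format: max_multi_match addresses of log2(entries) bits each
    return math.ceil(math.log2(entries)) * max_multi_match

entries = 1024
bitmap_bits = entries                               # embodiment: one bit per entry
encoded_bits = encoded_bits_per_entry(entries, 64)  # 64 addresses * 10 bits = 640
ratio = encoded_bits / bitmap_bits                  # 0.625, as in the text
```

Setting the Multi Match limit to 100 still keeps the encoded row (1000 bits) under the 1024-bit bitmap, consistent with the 102.4 threshold above.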
Second Variation
[0070] A map table according to a second variation will be described with reference to
[0071] For example, when the lower limit of the number of occurrences of Multi Match stored in the map table 114B is set to “4”, the number of rows in the entry direction can be reduced to one fourth. Further, the number of rows in the entry direction can be reduced to one half merely by setting the lower limit of the number of occurrences of Multi Match to “2”. As described above, the proposed method, which can compress the map table with practical limits, is very useful.
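The row-count arithmetic of the second variation can be sketched as follows, relying on the observation in [0067] that the total number of “High” bits equals the number of entries; the function name is an assumption.

```python
def rows_needed(entries, min_multi_match):
    # each stored row packs at least min_multi_match "High" bits, and the map
    # table holds exactly `entries` "High" bits in total ([0067]), so at most
    # entries // min_multi_match rows are ever needed
    return entries // min_multi_match
```

For 1024 entries, a lower limit of 4 needs at most 256 rows, and a lower limit of 2 needs at most 512, matching the one-fourth and one-half reductions stated above.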
[0072] In the embodiment, supporting all cases results in wasted capacity; the first and second variations improve on this by setting some limits (limitations in line with actual use) to allow compression of the memory capacity.
Third Variation
[0073] A map table according to a third variation will be described with reference to
[0074] When the number of occurrences of Multi Match is 4 or less, each address is encoded and stored in the map table 114C similarly to the first variation. When the number of occurrences of Multi Match is 5 or more, “High” is stored in the specified location of the map table 114C similarly to the embodiment.
[0075] The operation performed in the entry direction is the same as the operation of the embodiment. The operation performed in the bit direction is different from the operation of the embodiment.
[0076] Data is stored in the map table 114C by first storing the encoded information. When the number of entries is “1024”, the encoded information is stored in units of 10 bits. Next, when the search operation is performed, all the data stored in the entry are read and the encoded information is decoded by a decoder 1144C. Further, all the decoded data are logically summed by an OR circuit 1145C to combine the Multi Match information into a single result. For this reason, the number of decoders required is equal to the number of occurrences of Multi Match. The same applies to the first variation.
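The read path of [0076] (decode each stored address, then OR the one-hot results together) might look like this sketch. The 16-entry table size is an illustrative assumption; the text's example uses 1024 entries with 10-bit codes.

```python
ENTRIES = 16  # illustrative; the text's example uses 1024 entries (10-bit codes)

def decode_row(encoded_addresses):
    """Decoder 1144C plus OR circuit 1145C: encoded addresses -> match bitmap."""
    bitmap = [0] * ENTRIES
    for address in encoded_addresses:  # one decoder per Multi Match occurrence
        bitmap[address] = 1            # OR the decoded one-hot vectors together
    return bitmap
```

Decoding a row that stores the addresses 2 and 5 yields a bitmap with “High” at exactly those two entry positions.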
Fourth Variation
[0077] The fourth variation is an improvement of the map table by a method different from that of the first variation. A map table according to the fourth variation will be described with reference to
[0078] As described in the embodiment, after the search process is performed in each ASE block 11, it is necessary to combine the Multi Match information by using the merge circuit 12. If each of the ASE blocks 11 inputs Multi Match information into the multiplexer 121 of the merge circuit 12, the number of required lines increases. Thus, for example, the multiplexer of the merge circuit is divided and placed in each ASE block, in which Multi Match information input from the left adjacent ASE block is multiplexed with the Multi Match information of the particular ASE block and is transmitted to the right adjacent ASE block. In this way, as shown in
[0079] Thus, in the fourth variation, the map table is treated as a shared memory, which eliminates the need for each ASE block to connect input and output lines equal in number to the number of entries, and a merge circuit 12D is formed by combining the map table and the merge circuit. The merge circuit 12D therefore includes a map table 114D, an AND circuit 122D, and a register 123D.
[0080] The ASE block 11D according to the fourth variation includes the hash calculating circuit 111, the search device 112, and the reference table 113. Although the reference table and the map table are mounted on each ASE block in the embodiment, this connection relationship is separated in the fourth variation. The search process in each ASE block 11D is completed at the reference table 113, and the output data of the reference table 113, which is the search result of each ASE block 11D, is transferred to the map table 114D of the merge circuit 12D. At this time, a bus of log₂(the number of entries) lines connects each ASE block 11D to the map table 114D. In this way, the number of connected lines can be reduced.
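The wiring saving described above reduces each block's connection from one line per entry to a bus of log₂(entries) lines; a quick arithmetic sketch, with assumed names:

```python
import math

def lines_per_block(entries, shared_map_table):
    # embodiment: one line per entry; fourth variation: a log2(entries)-bit bus
    return math.ceil(math.log2(entries)) if shared_map_table else entries
```

For 1024 entries, the 1024 lines per block shrink to a 10-line bus.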
[0081] The operation in the fourth variation needs to be controlled so that the ASE blocks 11D do not access the map table 114D, which is the shared memory, at the same time. For example, the ASE blocks 11D access the map table 114D sequentially, starting from the ASE block 11D on the left side. In this case, each ASE block receives a signal from the left adjacent ASE block notifying it of completion of the access, and then accesses the map table 114D.
[0082] (1) A search key is given to the ASE block #1. At this time, the ASE block 11D performs a search process and transfers the output result of the reference table 113 to the area #1 of the map table 114D. The map table 114D outputs Multi Match information according to the input value (address). Then, the Multi Match information is stored in the register 123D.
[0083] (2) A search key is given to the ASE block #2. At this time, the ASE block 11D performs a search process and transfers the output result of the reference table 113 to the area #2 of the map table 114D. The map table 114D outputs Multi Match information according to the input value (address). At the same time, the value is read from the register 123D to obtain the logical product (“AND”) of the Multi Match information and the value of the register by using the AND circuit 122D. Then, the result is stored in the register 123D.
[0084] (3) The same process as the process described in (2) is performed until the ASE block #n.
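Steps (1) through (3) can be sketched as a shared map table plus an accumulating register, with an in-order access check standing in for the left-to-right completion signal; the class layout and names are assumptions.

```python
class SharedMergeCircuit:
    """Sketch of merge circuit 12D: shared map table 114D, AND 122D, register 123D."""
    def __init__(self, num_blocks, entries):
        self.num_blocks = num_blocks
        self.register = [1] * entries   # register 123D, initialized to all "High"
        self.turn = 0                   # models the left-to-right completion signal

    def access(self, block_id, multi_match_bits):
        # blocks must not touch the shared map table at the same time
        assert block_id == self.turn, "out-of-order access to shared map table"
        # AND circuit 122D: accumulate this block's Multi Match information
        self.register = [r & m for r, m in zip(self.register, multi_match_bits)]
        self.turn += 1
        return self.register
```

Each block in turn writes its area of the map table and ANDs the resulting Multi Match information into the register, exactly as in steps (1) to (3).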
[0085] The mounting area can be reduced by sharing the memory and reading the match information through one line of large bit width (the number of entries). The reduction in the mounting area can also contribute to a reduction in power consumption.
Fifth Variation
[0086] When ASE is provided as IP to the field programmable gate array (FPGA) or other similar device, the usable memory capacity is limited but the use scene varies depending on the user. Further, when ASE is implemented in FPGA, limitations in the entry direction and the bit direction arise due to the finite resources available.
[0087] Thus, in the fifth variation, the same ASE block is flexibly used for either expansion in the entry direction or expansion in the bit direction, depending on the application. The ASE according to the fifth variation is used appropriately either as ASE-d extending in the entry direction or as ASE-c extending in the bit direction.
[0088] The ASE-d according to the fifth variation will be described with reference to
[0089] (1) As shown in
[0090] (2) As shown in
[0091] (3) As shown in
[0092] (4) As shown in
[0093] (5) As shown in
[0094] Next, ASE-c according to the fifth variation will be described with reference to
[0095] As shown in
[0096] By using this method, the memory capacity can be flexibly allocated according to the use scene.
[0097] The ASEs according to the embodiment, the example, and the first to fifth variations can be applied, for example, to a network search engine and to a search system in which such a search engine is used.
[0098] The invention made by the present inventors has been specifically described based on the embodiment, examples, and variations. However, it is needless to say that the present invention is not limited to the foregoing embodiment, examples, and variations, and various modifications and alterations can be made within the scope of the present invention.
[0099] In the description of the embodiment and the example, match information is defined as “High” and combined by using the AND circuit in the merge circuit. However, it is also possible to define the match information as “Low” and combine it by using the OR circuit in the merge circuit.