METHOD AND SYSTEM FOR MANAGING A CACHE MEMORY

20230176982 · 2023-06-08

Abstract

A management system for managing a cache memory including a randomization module configured for generating a random value for each process of accessing the cache memory, and for transforming addresses of the cache memory with said random value into randomized addresses, a history table configured to store therein on each line an identification pair associating a random value corresponding to an access process, with an identifier of the corresponding access process, so forming identification pairs that are operative to dynamically partition the cache memory while registering the access to the cache memory, and a state machine configured to manage each process of accessing the cache memory according to the identification pairs stored in the history table.

Claims

1. A method for managing a cache memory provided to equip an electronic device comprising a processor and a main memory, wherein managing the cache memory comprises the following steps: generating a random value (R) for each process of accessing the cache memory, transforming the addresses (A) of the cache memory with said random value into corresponding addresses, referred to as randomized addresses (Ar), configured for indexing the cache memory, associating each random value corresponding to an access process with an identifier (PID) of said access process, so forming identification pairs that are operative to dynamically partition the cache memory while registering the accesses to said cache memory, storing each identification pair composed of a random value and of a corresponding identifier in a history table, and managing each process of accessing the cache memory according to said identification pairs stored in said history table.

2. The method according to claim 1, further comprising the following steps: receiving a current request comprising the current identifier (PIDc) of a current access process and a current access address (Ac) to the cache memory, going through the lines of the history table to verify whether an identifier (PIDi) is present in the history table that matches the current identifier (PIDc) of the current access process, in the positive, calculating the randomized address (Ar) of said current access address using the random value associated with the identifier PIDi found in the history table corresponding to the current identifier PIDc, and in the negative, triggering a cache fault giving rise to the generation of a current random value configured to be associated with the identifier of said current access process to form a current identification pair, and storing said current identification pair on an available line of the history table.

3. The method according to claim 1, further comprising the step of storing, in a field of the cache memory, referred to as permission field (PP), a permission vector (VP) of dimension equal to the number of lines of the history table, each component of said permission vector corresponding to one and only one line of said history table and whose value indicates legitimacy or non-legitimacy of the access process referenced by its identifier stored in said line.

4. The method according to claim 3, further comprising the following steps: going through the lines of the history table sequentially to calculate, at each current line, the randomized address indexing the cache memory based on the random value (R.sub.j) stored in said current line and on the address (Ac) indicated in the request, and verifying the validity of the current line of the cache memory defined by the corresponding randomized address (A.sub.r).

5. The method according to claim 4, further comprising the step of triggering a cache fault if the end of the history table is reached while no line of the cache memory has been validated.

6. The method according to claim 5, further comprising the following steps: verifying the matching between the current identifier (PIDc) and the identifier (PIDj) belonging to the pair (R.sub.j, PID.sub.j) relative to the current line, if the line of the cache memory is validated, verifying whether the current identifier (PIDc) is present in the history table in case of non-match between the current identifier (PIDc) and said identifier (PIDj) relative to said current line, noting the position (k) of an identifier (PIDk) stored in the history table if that identifier (PIDk) is equal to the current identifier (PIDc), triggering a cache hit if the legitimacy bit of the permission vector VP at a position (k) corresponding to said noted position (k) of said identifier (PID.sub.k) stored in the history table, is valid, simulating the triggering of a cache fault if the legitimacy bit at said position (k) of the permission vector (VP) is not valid.

7. The method according to claim 6, wherein if a cache fault is triggered, said method comprises the step of withdrawing the legitimacy of a process of accessing a shared data item, without deleting said shared data item from the cache memory if said access process requires eviction of said shared data item from the cache memory or if it leaves the history table.

8. The method according to claim 6, further comprising the step of writing back said shared data item in the main memory if it has been modified and a last access process having legitimacy to said shared data item evicts it from the cache memory or leaves the history table.

9. A system for managing a cache memory configured to equip an electronic device comprising a processor and a main memory, wherein said system comprises: a randomization module configured for generating a random value (R) for each process of accessing the cache memory, and for transforming addresses of the cache memory with said random value into corresponding addresses, referred to as randomized addresses (Ar) configured for indexing the cache memory, a history table composed of a determined number of lines configured to store therein on each line an identification pair associating a random value corresponding to an access process, with an identifier of said corresponding access process, so forming identification pairs that are operative to dynamically partition the cache memory while registering the access to said cache memory, and a state machine configured for accessing the history table and for managing each process of accessing the cache memory according to said identification pairs stored in said history table.

10. A system according to claim 9, further comprising: a set of registers configured for receiving a current request comprising the current identifier of a current access process and a current access address to the cache memory, and a state machine configured for: going through the lines of the history table to verify whether an identifier (PIDi) is present in the history table that matches the current identifier (PIDc) of the current access process, in the positive, calculating the randomized address (Ar) of said current access address using the random value associated with the identifier (PIDi) found in the history table corresponding to the current identifier (PIDc), and in the negative, triggering a cache fault giving rise to the generation of a current random value configured to be associated with the identifier of said current access process to form a current identification pair, and storing said current identification pair on an available line of the history table.

11. The system according to claim 10, wherein the cache memory is subdivided into several lines each of which comprises a permission field (PP) of the access process configured to store therein a permission vector (VP) of dimension equal to the number of lines of the history table, each component of said permission vector corresponding to one and only one line of said history table and whose value indicates legitimacy or non-legitimacy of the access process referenced by its identifier stored in said line.

12. The system according to claim 11, wherein the state machine is configured for: going through the lines of the history table sequentially to calculate, at each current line, the randomized address indexing the cache memory based on the random value (R.sub.j) stored in said current line and on the address (Ac) indicated in the request, and verifying the validity of the current line of the cache memory (3) defined by the corresponding randomized address (A.sub.r).

13. The system according to claim 12, wherein the state machine is configured for: verifying the matching between the current identifier (PIDc) and the identifier (PIDj) belonging to the pair (R.sub.j, PID.sub.j) relative to the current line (j), if the line of the cache memory is validated, verifying whether the current identifier (PIDc) is present in the history table in case of non-match between the current identifier (PIDc) and said identifier (PIDj) relative to said current line, noting the position (k) of an identifier (PIDk) stored in the history table if that identifier (PIDk) is equal to the current identifier (PIDc), triggering a cache hit if the legitimacy bit of the permission vector (VP) at a position (k) corresponding to said noted position (k) of said identifier (PID.sub.k) stored in the history table, is valid, simulating the triggering of a cache fault if the legitimacy bit at said position (k) of the permission vector (VP) is not valid.

14. An electronic device comprising a processor, a main memory, and a cache memory further comprising a management system for managing the cache memory according to claim 9.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0075] FIG. 1 is a diagram of a system for managing a cache memory, according to an embodiment of the invention;

[0076] FIG. 2 is a diagram of an implementation of a system for managing a cache memory, according to an embodiment of the invention;

[0077] FIG. 3 is a diagram of a method for managing a cache memory, according to the system implemented in FIG. 2;

[0078] FIG. 4 is a diagram of an implementation of a system for managing a cache memory, according to a preferred embodiment of the invention; and

[0079] FIG. 5 is a diagram of a method for managing a cache memory, according to the system implemented in FIG. 4.

DESCRIPTION OF THE EMBODIMENTS

[0080] FIG. 1 is a diagram of a system for managing a cache memory, according to an embodiment of the invention. This figure also diagrammatically illustrates a method for managing a cache memory, according to an embodiment of the invention.

[0081] The system 1 for managing a cache memory 3 is configured to equip an electronic device comprising a processor and a main memory. This management system 1 comprises a randomization module 5, a history table 7 and a state machine 9.

[0082] The randomization module 5 is configured for generating a random value R for each process of accessing the cache memory 3. Furthermore, it is configured for transforming addresses of the cache memory 3 by means of the random value R into corresponding addresses, referred to as randomized (or transformed) addresses A.sub.r which are provided to index the cache memory 3. By way of example, the randomization is carried out by means of a non-reversible operation F between the initial address A and the random value R associated with the access process.
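
By way of illustration, the address transformation described above can be sketched as follows. This is a minimal sketch only: the function name, the 8-bit set-index width, and the choice of an XOR-type operation F (the operation given as an example later in the description) are assumptions for the example, not part of the claimed invention.

```python
def randomize_address(set_address: int, random_value: int, index_bits: int = 8) -> int:
    """Return the randomized set index A_r = F(A, R), here with F as XOR.

    A given process, keeping the same random value R, always obtains the
    same randomized index for a given address; a process holding a
    different R obtains a different, unpredictable index.
    """
    mask = (1 << index_bits) - 1          # keep only the set-index bits
    return (set_address ^ random_value) & mask
```

A process that is assigned a fresh random value after a context change can thus no longer correlate its own cache indices with those of a victim process.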

[0083] Thus, the addressing of the cache memory 3 prior to the randomization is different from that after that randomization. This makes it difficult to implement side channel attacks which attempt to obtain secret information based on the observation of the accesses by a process to that cache memory 3.

[0084] The history table 7 is composed of a determined number N of indexed lines L.sub.1, . . . ,L.sub.N 71 and is configured for storing the random values R.sub.1, . . . , R.sub.N corresponding to the access processes as well as identifiers PID.sub.1, . . . , PID.sub.N of those access processes. More particularly, each indexed line L.sub.i comprises an identification pair (R.sub.i, PID.sub.i) associating a random value R.sub.i with a corresponding identifier PID.sub.i. The identification pairs stored in the history table are operative to dynamically partition the cache memory while registering the accesses to the cache memory.
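
The history table structure described above can be sketched as a small data structure; the class names, the sequential-scan lookup, and the exception raised when the table is full are illustrative assumptions (the description does not specify an eviction policy for a full table).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HistoryEntry:
    random_value: int   # R_i associated with the access process
    pid: int            # identifier PID_i of the access process

class HistoryTable:
    """N indexed lines, each holding an identification pair (R_i, PID_i)."""

    def __init__(self, size: int):
        self.lines: list[Optional[HistoryEntry]] = [None] * size

    def find(self, pid: int) -> Optional[HistoryEntry]:
        # Sequential scan, as performed by the state machine
        for entry in self.lines:
            if entry is not None and entry.pid == pid:
                return entry
        return None

    def insert(self, pid: int, random_value: int) -> None:
        # Store the pair on the first available line
        for i, entry in enumerate(self.lines):
            if entry is None:
                self.lines[i] = HistoryEntry(random_value, pid)
                return
        raise RuntimeError("history table full; an eviction policy would be needed")
```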

[0085] As a matter of fact, the storage of the random values R.sub.i in the history table 7 makes it possible to preserve the memory accesses of an access process across a change of context. This optimizes performance, since it is not necessary to empty the cache memory 3 at the time of a change of context.

[0086] Furthermore, the storage of the identifiers PID.sub.i in the history table 7 in association with the corresponding random values R.sub.i, enables dynamic partitioning of the cache memory while minimizing the hardware cost of that storage and by speeding up the verification time of the access process.

[0087] It will be noted that the association of the random value R.sub.i with a corresponding identifier PID.sub.i adds a second layer of dynamic isolation. That being the case, when a victim process is run, it will always have the same random value for its accesses to the cache memory 3. On the other hand, the random value of an attacking process will be changed, and the attacker will therefore be unable to track the accesses of the victim process. This makes it possible to counter cache timing attacks of the ‘Flush+Reload’ type.

[0088] Furthermore, this makes it possible not to abandon any line of the cache memory 3 without there being prior write back into the main memory thus re-establishing the consistency between the cache memory 3 and the main memory.

[0089] The management of the access processes is carried out by the state machine 9. As a matter of fact, the state machine 9 is configured for accessing the history table 7 and managing each process of accessing the cache memory according to the identification pairs (i.e. identifiers PID.sub.i and corresponding random values R.sub.i) stored in said history table. Thus, for the future accesses of a process to the cache memory 3, the state machine 9 will go through the history table 7 to see whether the identifier of that process is already present in the table 7, in order to determine whether a new random value must be created.

[0090] This management system 1 thus makes it possible to strengthen the security of the cache memory 3 while maintaining very good performance.

[0091] FIG. 2 is a diagram of an implementation of a system for managing a cache memory, according to an embodiment of the invention.

[0092] The system 1 for managing the cache memory 3 comprises a set of registers 13, a randomization module 5, a history table 7 and a state machine 9. This management system 1 is configured to equip an electronic device 15 comprising a processor 17 and a main memory 19.

[0093] The cache memory 3 may for example be of associative, or set-associative, or direct mapping, or other type. In general terms, the cache memory 3 comprises an information recording medium subdivided into several lines L.sub.11, . . . , L.sub.i1, . . . , L.sub.iw, . . . , L.sub.nw of fixed length. By way of example, and without being exhaustive, each line L.sub.ij comprises a data field D.sub.ij, a line tag T.sub.ij called ‘Tag’, a valid bit V.sub.ij, and a modification bit M.sub.ij called ‘dirty bit’.

[0094] For example, in the case of an associative cache memory 3, the lines are grouped together into distinct sets referred to as ‘Sets’ Si. Each Set Si contains a same number W of lines L.sub.ij. The index ‘i’ identifies a Set Si among the other Sets of lines which the cache memory 3 comprises and the index ‘j’ identifies a particular line j of the Set Si. The position of each Set Si in the cache memory 3 is indicated by an address A=A(Si) called ‘line set address’ or ‘Set address’.

[0095] The data field D.sub.ij is divided into a determined number d of words (e.g. d=4) of fixed length. It will be noted that the lengths of a word, of a field and of a line are expressed by the number of bits that compose them. For example, if the length of a word is equal to 32 bits, a data field of 4 words is 128 bits long.

[0096] The Tag T.sub.ij contains a value which makes it possible to select the line L.sub.ij which contains the word sought from among the W lines L.sub.ij of the Set Si.

[0097] The valid bit V.sub.ij of a line is an information bit which makes it possible to mark that line L.sub.ij as being valid “1” or invalid “0”. A line marked as invalid must be treated as if it did not contain any word. Thus, a line L.sub.ij marked as invalid is to be erased and replaced by another line loaded from the main memory 19. The modification bit (i.e. dirty bit) M.sub.ij relative to a line L.sub.ij is an information bit which makes it possible to mark that line L.sub.ij as having been modified. When a line L.sub.ij is marked as modified, and according to the type of the cache memory 3, the field D.sub.ij that it contains is taken into account before, for example, that line is marked as invalid. More particularly, for a cache memory 3 of ‘write back’ type, the data item is re-written into the main memory solely when that line of the cache memory 3 is overloaded or evicted. On the other hand, for a cache memory 3 of ‘write-through’ type, the data item is immediately written in the main memory.
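
The two write policies contrasted above can be sketched as follows; the function names, the dictionary-based line representation, and the address-keyed main memory are simplifying assumptions for the example.

```python
def write_word(line: dict, new_value: int, main_memory: dict, address: int,
               write_through: bool) -> None:
    """Hypothetical write path illustrating the two policies described above."""
    line["data"] = new_value
    if write_through:
        main_memory[address] = new_value   # write-through: propagate immediately
    else:
        line["dirty"] = True               # write-back: defer until eviction

def evict(line: dict, main_memory: dict, address: int) -> None:
    """On a write-back cache, a modified (dirty) line is flushed on eviction."""
    if line.get("dirty"):
        main_memory[address] = line["data"]
        line["dirty"] = False
    line["valid"] = False                  # line is then marked invalid
```

Under the write-back policy, main memory is temporarily stale between the write and the eviction, which is why the dirty bit M.sub.ij must be consulted before a line is discarded.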

[0098] The set of registers 13 is configured to receive, from the processor 17, a current request 21 comprising the current identifier PID.sub.c of a current access process and a current access address A.sub.c to the cache memory 3.

[0099] More particularly, the current access address A.sub.c comprises an address A(Si) of a Set Si, an index d.sub.r, and a Tag T.sub.r. The address A(Si) of a Set Si of the cache memory 3 is the address that may contain the word to read. The index d.sub.r makes it possible to identify the position of the word to read in the field D.sub.ij of the W lines L.sub.i1, . . . L.sub.iw of the Set Si identified by the address A(Si). The Tag T.sub.r makes it possible to select, from among the Set of the W lines L.sub.i1, . . . , L.sub.iw of the Set Si corresponding to the address A(Si), the line L.sub.ik which contains the word to read if that line exists.

[0100] It will be noted that a request to write a word in the cache memory 3 is practically identical to the read request but in addition it comprises a numerical value Vr containing the new value of the word to record in the cache memory 3.

[0101] Thus, the set of registers 13 comprises registers in which are recorded the various data items contained in the write or read request 21.

[0102] The randomization module 5 comprises a random (more accurately, pseudo-random) value generator 51, a multiplexer module 53 and a randomization operator 55.

[0103] The generator 51 is configured to generate pseudo-random values R (referred to as random values), such that a current random value R.sub.c is associated with each current access process.

[0104] The multiplexer module 53 is configured to select either the use of a random value already stored in the history table or a new current random value R.sub.c generated by the generator 51.

[0105] The randomization operator 55 is configured to perform a randomization operation F between the current address A.sub.c and the current random value R.sub.c associated with the access process. Thus, the current address A.sub.c of the cache memory 3 is transformed into a corresponding randomized address Ar configured to index the cache memory: A.sub.r=F(A.sub.c, R.sub.c). By way of example, the randomization operation F is a logic operation of XOR type.

[0106] This randomization makes it possible to mix the lines of the cache memory 3 such that an attacker cannot know which line has been accessed and therefore cannot track the memory access of its victim.

[0107] Furthermore, in order to preserve the memory accesses of an access process across a change of context, these random values are stored in the history table 7. As indicated previously, the history table 7 is composed of a determined number N of lines which are sequentially indexed. Each line L.sub.i comprises an identification pair composed of a random value R.sub.i and of the identifier PID.sub.i of a corresponding access process. It will be noted that a line L.sub.i of the history table 7 is referenced by a single index (here, the index ‘i’) designating the line number, while a line L.sub.ij of the cache memory 3 is referenced by two indexes, the first ‘i’ identifying the Set Si and the second ‘j’ identifying the number of the line in that Set Si.

[0108] The association of the random value R.sub.i with a corresponding identifier PID.sub.i has several advantageous functions. A first is the fact that a legitimate process will always have the same random value for its accesses to the cache memory 3 while an attacking process will be changed and will thus be unable to track the accesses by the victim process. A second function is the re-establishment of consistency between the cache memory 3 and the main memory 19 since no line of the cache memory 3 will have been abandoned prior to it being written back into the main memory 19. A third function is dynamic randomization of the cache memory at lower equipment and temporal costs.

[0109] The state machine 9 is connected to the randomization module 5, to the cache memory 3, and to the history table 7. For example, the coupling between the state machine 9 and the cache memory 3 is achieved by means of several signals or links: a first link Z1 which goes in both directions between the data field D.sub.ij and the state machine 9, a second signal Z2 which goes directly from the modification bit M.sub.ij to the state machine 9, and two validity/invalidity signals Z3 and Z4 which go indirectly, via an ‘AND’ logic gate 25, from the valid bit V.sub.ij and the tag T.sub.ij respectively to the state machine 9. Furthermore, the logic gate 25 also comprises another input Z5 corresponding to a validity/invalidity signal of the identifier PIDi stored in the history table 7.

[0110] The state machine 9 is configured to manage the history table 7 in relation with the cache memory 3. More particularly, the state machine 9 is configured to go through the lines of the history table 7 to verify whether the current identifier PID.sub.c of the current access process is already present in that history table 7. That is to say, the state machine 9 verifies whether there is an identifier PIDi present in the history table 7 which matches the current identifier PIDc of the current access process.

[0111] If the current identifier PID.sub.c is already present in the history table 7 (i.e. if an identifier PIDi is found in the history table matching the current identifier PIDc) and if the valid and tag bits are valid, a cache hit is triggered by the state machine 9. Thus, the random value R associated with that identifier PIDi found in the history table is used to calculate the randomized address Ar of the current access address A.sub.c. The resulting randomized address Ar then enables the processor 17 to access the cache memory 3. The triggering of a cache hit is carried out solely if the three inputs Z3, Z4, Z5 of the logic gate 25 are valid.

[0112] In the negative, i.e. if at least one of the three inputs Z3, Z4, Z5 of the logic gate 25 is invalid, a cache fault (cache miss) is triggered by the state machine 9. This cache fault gives rise to the generation of a current random value R.sub.c provided to be associated with the identifier of the current access process to form a current identification pair. This current identification pair is stored on a free indexing line of the history table. Furthermore, the current random value R.sub.c generated is used to randomize the current access address A.sub.c.
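
The lookup-or-create behavior of paragraphs [0110] to [0112] — reuse the stored random value on a match, generate and register a fresh one on a miss — can be sketched as follows. The function name, the list-of-dicts table representation, and the 16-bit random-value width are assumptions for the example.

```python
import secrets

def random_value_for(history: list, pid: int, value_bits: int = 16) -> tuple:
    """Return (R, hit): reuse the stored R if PID is already in the
    history table; otherwise generate a fresh pseudo-random value and
    register the new (R, PID) identification pair."""
    for entry in history:          # go through the history table lines
        if entry["pid"] == pid:
            return entry["R"], True
    r = secrets.randbits(value_bits)   # cache fault: new random value
    history.append({"pid": pid, "R": r})
    return r, False
```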

[0113] FIG. 3 is a diagram of a method of managing a cache memory, according to the system implemented in FIG. 2.

[0114] Assume that a given current process X is executed in the processor 17 and wishes to access the main memory 19, for example to read a data item.

[0115] At step E1, the processor 17 produces a current request 21 relative to the current access process X to interrogate the cache memory 3. The request 21 comprises the current identifier PID.sub.c of the current access process, an address A(Si) of a Set Si, an index d.sub.r, and a Tag T.sub.r.

[0116] At steps E2-E6, and after reception of the current request 21 by the management system 1, the state machine 9 goes through the history table 7 sequentially to verify whether the current identifier PID.sub.c of the current access process X is already present in the history table 7.

[0117] At steps E2-E4, the state machine 9 compares the current identifier PID.sub.c iteratively with the identifiers PIDi stored in the history table 7.

[0118] More particularly, at step E2, for a current indexed line L.sub.i of the history table 7, the state machine 9 compares the current identifier PID.sub.c with the identifier PID.sub.i stored in that line L.sub.i. If there is a match, step E6 is proceeded to, and otherwise, step E3 is proceeded to.

[0119] Step E3 is a test to verify whether the current line L.sub.i of the history table 7 is the last line L.sub.N in that history table 7. If yes, step E5 is proceeded to. Otherwise, step E4 is proceeded to, in which the line index is incremented to pass to the following line L.sub.i+1 and step E2 is looped back to.

[0120] Step E5 concerns the case in which the current line L.sub.i verified at step E3 is the last line L.sub.N of the history table 7. In other words, the end of the history table 7 is reached without any match being found between the current identifier PID.sub.c and the identifiers PID stored in the history table 7. The state machine 9 then directly triggers a cache fault (i.e. a cache miss).

[0121] It will be noted that further to the triggering of the cache fault, the processor 17 reloads the data item into the cache memory 3 from the main memory 19. Furthermore, the generator 51 generates a current random value Rc to be associated with the current identifier PID.sub.c of the current access process. The identification pair formed by the current random value Rc and the current identifier PID.sub.c is recorded in the history table 7.

[0122] At step E6, the state machine deduces that the current access process X has already written in the cache memory 3. The state machine 9 then retrieves the pair (R.sub.i, PID.sub.i) composed of the identifier PID.sub.i and of a corresponding random value R.sub.i, stored in the line L.sub.i of the history table 7, and calculates the randomized address A.sub.r indexing the cache memory 3.

[0123] Steps E7 to E10 concern the verification of the validity of the current line of the cache memory 3 defined by the corresponding randomized address A.sub.r.

[0124] As a matter of fact, at step E7, the state machine 9 verifies the valid bit V of the current indexed line of the cache memory 3 defined by the randomized address A.sub.r. If the bit is not valid (for example, V=0), that line L of the cache memory 3 is considered as not yet initialized and thus step E8 is proceeded to in which the state machine 9 directly triggers a cache fault (cache miss). Otherwise, step E9 is proceeded to.

[0125] Step E9 concerns the case in which the validity bit V verified at step E7 is valid (i.e. V=1). In that case, the state machine 9 verifies the tag Tr of the address tag T stored in the line L of the cache memory 3 which is indexed by the randomized address A.sub.r. In case of invalidity of the tag, it is deduced that the data item sought is not the right one and step E10 is proceeded to in which the state machine 9 directly triggers a cache fault (cache miss). Otherwise, i.e. if the verification of the tag is valid, step E11 is proceeded to.

[0126] At step E11 the state machine 9 triggers a cache hit since the following three conditions are met: equality between the current identifier PIDc and an identifier PIDi stored in the table (step E2); validity of the bit V (step E7); and validity of the tag T (step E9).
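
The flow of steps E2 to E11 can be sketched end to end as follows; the XOR randomization, the modulo used to bound the index, and the list-of-dicts representations of the cache and history table are illustrative assumptions only.

```python
def lookup(cache: list, history: list, pid: int, set_addr: int, tag: int) -> str:
    """Sketch of steps E2-E11: scan the history table for PID (E2-E4),
    miss at table end (E5), randomize the address (E6), then verify the
    valid bit (E7/E8) and the tag (E9/E10) before declaring a hit (E11)."""
    for entry in history:                            # E2-E4: iterate PIDs
        if entry["pid"] == pid:
            a_r = (set_addr ^ entry["R"]) % len(cache)   # E6: randomized index
            line = cache[a_r]
            if not line["valid"]:                    # E7 fails -> E8: miss
                return "miss"
            if line["tag"] != tag:                   # E9 fails -> E10: miss
                return "miss"
            return "hit"                             # E11: all checks valid
    return "miss"                                    # E5: PID not in table
```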

[0127] FIG. 4 is a diagram of an implementation of a system for managing a cache memory, according to a preferred embodiment of the invention.

[0128] This management system 1 has the same constituents as that of FIG. 2, except for the fact that the cache memory 3 advantageously comprises an additional field, referred to as the permission field PP. This permission field PP is configured to store a permission vector VP of dimension equal to the determined number N of lines of the history table 7. Each component VP.sub.k of the permission vector VP corresponds to one and only one line L.sub.k of the history table 7, and its value indicates legitimacy (i.e. permission) or non-legitimacy (i.e. non-permission) for the access process referenced by its identifier PID.sub.k stored in that line L.sub.k. The value of a component VP.sub.k of the permission vector VP thus indicates whether a process has legitimate access to a data item present in the cache memory 3. If the value of the component VP.sub.k is 1 (i.e. valid), the process associated with the identifier PID.sub.k has legitimate access to the data item, but if the value of the component VP.sub.k is 0 (i.e. invalid), the process associated with the identifier PID.sub.k does not have access to the corresponding data item.
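
The permission vector described above behaves as an N-bit bitmap, one bit per history-table line. A minimal sketch, representing VP as a Python integer (the function names are assumptions for the example):

```python
def has_permission(vp: int, k: int) -> bool:
    """Test the legitimacy bit VP_k of the permission vector VP."""
    return (vp >> k) & 1 == 1

def grant(vp: int, k: int) -> int:
    """Set VP_k: the process at history line k gains legitimate access."""
    return vp | (1 << k)

def revoke(vp: int, k: int) -> int:
    """Clear VP_k: legitimacy is withdrawn without deleting the
    shared data item itself from the cache line."""
    return vp & ~(1 << k)
```

The `revoke` operation models the behavior of paragraph [0135]: a process losing access to a shared line only has its bit cleared, while the line and the other processes' bits are untouched.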

[0129] According to this embodiment, the state machine 9 goes through the lines of the history table 7 sequentially in order to calculate, at each current line, the randomized address which will index the cache memory 3. The randomized address is calculated from the random value R.sub.j stored in the current line and the address A(Sr) indicated in the request. Furthermore, the state machine 9 retrieves at each current line (i.e. each iteration j), the identification pair (R.sub.j, PID.sub.j) composed of the random value and of the corresponding identifier PID.sub.j.

[0130] The state machine 9 also verifies the validity of the current line of the cache memory 3 defined by the randomized address Ar. More particularly, the state machine 9 verifies the valid bit and the tag TAG of the line of the cache memory 3 defined by the randomized address A.sub.r. When the end of the history table is reached without there being conclusive verification of the validity bit and of the tag TAG, a cache miss is declared.

[0131] On the other hand, if the validity bit and the tag TAG are valid, the state machine 9 verifies the match between the current identifier PIDc and the identifier PIDj belonging to the pair (R.sub.j, PID.sub.j) relative to the iteration j, retrieved by the state machine 9. When there is equality (i.e. PID.sub.j=PID.sub.c), the state machine 9 verifies the validity of the component VP.sub.j of the permission vector VP. In case of validity, signaled by the input Z6 going directly into the state machine 9, the latter declares a cache hit; otherwise, the state machine 9 triggers a cache miss.

[0132] In case of non-match between the current identifier PIDc and the identifier PIDj, the state machine concludes that the data item indexed by the randomized address is a shared data item and in that case, verifies whether the current identifier PIDc is present in the history table. If an identifier PIDk is present in the history table equal to the current identifier PIDc, the state machine 9 notes the position k of the stored identifier PID.sub.k. The state machine 9 next verifies the validity of the legitimacy bit at the position k (i.e. VP.sub.k) of the permission vector VP. If yes, a cache hit is triggered enabling the processor 17 to legitimately access the data item stored in the cache memory 3 knowing that it is a data item already shared. Otherwise, the state machine 9 simulates the triggering of a cache fault (cache miss).
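
The permission check of paragraphs [0131] and [0132], including the shared-data path taken on a PID mismatch, can be sketched as follows; the function name, the string return values and the list-of-dicts history representation are assumptions for the example.

```python
def check_access(history: list, vp: int, pid_c: int, j: int) -> str:
    """Permission check after a valid line/tag match at history line j.

    Direct match: test VP_j. Mismatch: the data item is shared, so look
    for PID_c at another line k and test VP_k; an invalid legitimacy bit
    leads to a simulated cache fault."""
    if history[j]["pid"] == pid_c:
        return "hit" if (vp >> j) & 1 else "miss"
    for k, entry in enumerate(history):       # shared data item: find PID_c
        if entry["pid"] == pid_c:
            return "hit" if (vp >> k) & 1 else "simulated miss"
    return "miss"                             # PID_c unknown to the table
```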

[0133] Thus, the logic gate 25 according to this embodiment comprises three inputs Z3, Z4, and Z6. The links Z3, Z4 are the same as those of the embodiment of FIG. 2. On the other hand, the input Z6 comes from the permission field PP. A cache hit is triggered if the three inputs Z3, Z4, Z6 of the logic gate 25 are valid. In contrast, a cache miss is triggered if at least one of these three inputs Z3, Z4, Z6 of the logic gate 25 is invalid. In particular, if the access process accesses a data item present in the cache memory 3 but to which it has no legitimacy, the state machine 9 triggers a cache fault, ‘cache miss’. In other words, if the legitimacy bit VP.sub.k (i.e. the position k of the permission vector VP) associated with the identifier PID.sub.k is not valid, the link Z6 signals a value of non-legitimacy and, therefore, a cache fault is triggered by the state machine 9.
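A minimal sketch of the three-input decision just described; the AND combination of the inputs is the assumed behavior of logic gate 25:

```python
def gate_25(z3_valid_bit, z4_tag_match, z6_legitimacy):
    # A cache hit requires all three inputs to be valid; a single
    # invalid input (e.g. a cleared legitimacy bit VP_k on Z6)
    # forces a cache miss.
    return z3_valid_bit and z4_tag_match and z6_legitimacy
```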

[0134] This makes it possible to prohibit the access tracking that an attacking process can carry out on a victim process. Furthermore, this optimizes the performance of the cache memory 3 by enabling sharing of the same data item by several processes that are already registered in the history table 7 by their respective identifiers and random values. In other words, this avoids duplicating a same shared data item in several places of the cache memory 3.

[0135] Advantageously, if an access process requires eviction of a shared data item from the cache memory 3 or if it leaves the history table 7, the legitimacy of the process of accessing that shared data item is withdrawn without deleting it from the cache memory 3. This makes it possible to keep the shared data item in the cache memory 3 such that the other access process (or processes) sharing the same data item can still access that data item.

[0136] Furthermore, the shared data item is written back into the main memory 19 if it has been modified and a last access process having legitimacy to that shared data item evicts it from the cache memory 3 or leaves the history table 7.
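The eviction and write-back rules of the two paragraphs above can be sketched as follows. The permission vector is modeled here as a list of booleans, and the dirty bit, address, and data fields are illustrative assumptions:

```python
def withdraw_legitimacy(line, k, main_memory):
    # Withdraw process k's legitimacy without deleting the shared
    # data item from the cache line.
    line["vp"][k] = False
    # Only when the last legitimate process has left is the line
    # released, writing the data back to main memory if modified.
    if not any(line["vp"]):
        if line["dirty"]:
            main_memory[line["addr"]] = line["data"]
        line["valid"] = False
```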

[0137] FIG. 5 is a diagram of a method of managing a cache memory, according to the system implemented in FIG. 4.

[0138] Assuming that a given current process X is executed in the processor 17 and wishes to perform an access to the main memory 19, for example to read a data item.

[0139] At step E21, the processor 17 produces a current request 21 relative to the current access process X to interrogate the cache memory 3. The request 21 comprises the current identifier PID.sub.c of the current access process, an address A(Sr) of a Set Sr, an index d.sub.r, and a Tag T.sub.r.

[0140] At steps E22-E25, after reception of the current request 21 by the management system 1, the state machine 9 goes through the history table 7 sequentially to calculate the randomized address which will index the cache memory 3 and, at each iteration j, retrieves the identification pair (R.sub.j, PID.sub.j) composed of the random value R.sub.j and of the corresponding identifier PID.sub.j.

[0141] More particularly, at step E22, at the iteration j (i.e. for a current indexed line L.sub.j) of the history table 7, the state machine 9 retrieves the identification pair (R.sub.j, PID.sub.j) composed of the random value R.sub.j and of the corresponding identifier PID.sub.j. Furthermore, it calculates the randomized address A.sub.r indexing the cache memory 3 based on the random value R.sub.j and the address A(Sr) indicated in the request.

[0142] At step E23, the state machine 9 verifies the valid bit V of the line of the cache memory 3 defined by the randomized address A.sub.r. If this bit is not valid (for example, V=0), this line L of the cache memory 3 is considered as not yet being initialized and step E25 is then proceeded to. Otherwise, step E24 is proceeded to.

[0143] Step E24 concerns the case in which the validity bit V verified at step E23 is valid (i.e. V=1). In that case, the state machine 9 compares the tag Tr included in the request with the value of the tag T stored in the line L of the cache memory 3 which is indexed by the randomized address A.sub.r. In case of invalidity of the tag, it is deduced that the data item sought is not the right one and step E25 is then proceeded to. Otherwise, i.e. if the verification of the tag is valid, step E27 is proceeded to.

[0144] Step E25 is a test to verify whether the current line L.sub.j of the history table 7 is the last L.sub.N in that history table 7. If yes, step E26 is proceeded to. Otherwise, the indexing of the line is incremented to pass to the following line L.sub.j+1 and step E22 is looped back to.

[0145] Step E26 concerns the case in which the current line L.sub.j verified at step E25 is the last L.sub.N of the history table 7. In other words, the end of the history table 7 is reached without conclusive verification of the valid bit V (step E23) or of the tag T (step E24), so the state machine 9 triggers a cache fault (i.e. a cache miss). Here, the cache fault triggers a request to the main memory to retrieve the data item which will be written in the cache memory. The generator will generate a random value which will be associated with the current identifier PIDc, thus forming a new identification pair which will be recorded in the first free line of the history table 7. Assuming that this free line is L.sub.m, this identification pair will be indexed (R.sub.m, PID.sub.m).
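The table-miss handling of step E26 can be sketched as follows. Free lines are modeled here as `None` entries and the 16-bit width of the random value is an assumption; the description does not specify either detail, nor a replacement policy for a full table:

```python
import random

def handle_table_miss(history_table, pid_c, rng=random):
    # Generate a random value for the current process and record the
    # new pair (R_m, PID_m) in the first free line of the table.
    r_c = rng.getrandbits(16)      # width of R is an assumption
    for m, entry in enumerate(history_table):
        if entry is None:          # first available (free) line
            history_table[m] = (r_c, pid_c)
            return m
    raise RuntimeError("history table full")  # replacement not modeled
```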

[0146] At step E27, the valid bit V and the tag T that were verified at steps E23 and E24 are valid, so the state machine 9 verifies the match between the current identifier PIDc and the identifier PIDj belonging to the identification pair (R.sub.j, PID.sub.j) relative to the indexing line Lj (i.e. iteration j), already retrieved by the state machine 9. In case of equality (i.e. PID.sub.j=PID.sub.c) this means that the data item has been loaded by that current process and thus step E28 is proceeded to. Otherwise, step E29 is proceeded to.

[0147] At step E28, the state machine 9 verifies the validity of the component VP.sub.j (i.e. at the location j) of the permission vector VP. In case of validity, the state machine 9 declares a cache hit and the processor 17 has legitimacy of access to the data item stored in the cache memory 3. Otherwise, the state machine 9 triggers a cache miss. This verification makes it possible to satisfy the concepts of memory consistency and sharing.

[0148] Step E29 concerns the case in which the current identifier PIDc is not equal to the identifier PIDj. In this case, the state machine 9 concludes that the data item indexed by the randomized address is a data item shared between the current process and at least one other process. The state machine 9 then verifies whether the current identifier PIDc is present in the history table. If there is an identifier PIDk in the history table equal to the current identifier PIDc, that means that the process has already made accesses to the cache memory and step E30 is proceeded to. Otherwise, step E32 is proceeded to.

[0149] Step E30 concerns a first scenario in which the process has already made accesses to the cache memory 3. At this step the state machine 9 then notes the position k of the identifier PIDk stored in the history table 7 and returns to the line of the cache memory 3 indexed by the randomized address to verify in the permission field PP whether the legitimacy bit at the position k (i.e. VP.sub.k) of the permission vector VP is valid. In the positive, a cache hit is triggered enabling the processor 17 to legitimately access the data item stored in the cache memory 3 knowing that it is a data item already shared. Otherwise, (i.e. when the legitimacy bit VP.sub.k is not valid), the state machine 9 simulates the triggering of a cache fault (cache miss) and step E31 is then proceeded to.
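Steps E29-E31 can be sketched as the following decision, under the same illustrative data layout as before (permission vector as a list of booleans, history table as a list of pairs):

```python
def shared_data_access(history_table, line, pid_c):
    # The data was loaded under another PID: it is a shared data item.
    for k, (r_k, pid_k) in enumerate(history_table):
        if pid_k == pid_c:
            # Process already registered at position k: check VP_k.
            if line["vp"][k]:
                return "hit"           # legitimate shared access
            line["vp"][k] = True       # step E31: grant future access
            return "simulated_miss"
    return "unknown_process"           # handled by steps E32-E34
```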

[0150] At step E31, the value 1 is attributed to the legitimacy bit VP.sub.k of position k to authorize future access to that shared data item by the access process referenced by that identifier PID, as if it were the process that had loaded it. This makes it possible to prevent access tracking by an attacker on a victim.

[0151] Step E32 concerns a second scenario in which the current access process makes accesses to the cache memory 3 for the first time or its accesses have already been evicted. This means that its identifier PID is not stored in the history table 7. The state machine 9 then simulates a cache fault (cache miss) even if that data item is present in the cache memory in order to prevent access tracking by an attacking process and step E33 is then proceeded to.

[0152] At step E33, the generator 51 generates a random value Rc to be associated with the current identifier PID.sub.c of that current access process. The identification pair (R.sub.c, PID.sub.c) composed of that random value R.sub.c and the corresponding identifier PID.sub.c is stored on the first available line (for example, the position i) in the history table 7. For example, if the first free line is L.sub.i, the identification pair is referenced (R.sub.i, PID.sub.i). Next, step E34 is then proceeded to.

[0153] At step E34, the state machine 9 validates the legitimacy bit VP.sub.i at the position i in the permission field PP at the line of the cache memory 3 indexed by the randomized address to authorize that process to access that shared data item in the future.
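Steps E33-E34 can be sketched together as follows; as above, free lines are modeled as `None` and the random-value width is an assumption:

```python
import random

def register_and_authorize(history_table, line, pid_c, rng=random):
    # Step E33: store the new pair (Rc, PIDc) on the first available
    # line i of the history table.
    r_c = rng.getrandbits(16)          # random-value width assumed
    i = history_table.index(None)      # first free line
    history_table[i] = (r_c, pid_c)
    # Step E34: validate the legitimacy bit VP_i on the cache line
    # indexed by the randomized address.
    line["vp"][i] = True
    return i
```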

[0154] The simulation of a cache miss consists of making the data item available to the processor in a number of cycles equivalent to a conventional miss, given that the data item is already present in the cache but the current process has no legitimacy of access to that data item. For this, it is possible to send a request to the main memory and reply to the processor only when the main memory has finished the task, so as to avoid choosing a fixed number of cycles which does not reflect reality.
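The timing principle of the paragraph above can be illustrated as follows; the function names are hypothetical, and the point is only that a real main-memory round trip supplies the delay while its result is discarded:

```python
def serve_simulated_miss(fetch_from_main_memory, cached_data):
    # Perform a real main-memory request but discard the result: only
    # its latency matters, so the response time to the processor is
    # indistinguishable from a genuine cache miss.
    _ = fetch_from_main_memory()
    return cached_data
```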

[0155] It will be noted that the present invention can apply to all cache memories. Thus, the management system according to the present invention can be implemented in any electronic device (for example a computer) comprising a processor, a main memory, and a cache memory.