SYSTEMS AND METHODS USING HYBRID BOOLEAN NETWORKS AS PHYSICALLY UNCLONABLE FUNCTIONS
20220318437 · 2022-10-06
Inventors
- Andrew Joseph POMERANCE (Alexandria, VA, US)
- Daniel GAUTHIER (Hilliard, OH, US)
- Daniel CANADAY (Columbus, OH, US)
- Noeloikeau CHARLOT (Kailua, HI, US)
CPC classification
G09C1/00
PHYSICS
G06F21/76
PHYSICS
G06F7/588
PHYSICS
International classification
Abstract
Systems, devices, and methods for generating a unique fingerprint are described herein. For example, an example integrated circuit (IC) chip includes a physically unclonable function (PUF) and an auxiliary circuit. The PUF is a hybrid Boolean network. Additionally, the auxiliary circuit is configured to receive a transient response enable signal.
Claims
1. An integrated circuit (IC) chip, comprising: a physically unclonable function (PUF) comprising a hybrid Boolean network; and an auxiliary circuit, wherein the auxiliary circuit is configured to receive a transient response enable signal.
2. The IC chip of claim 1, wherein the auxiliary circuit is configured to introduce a time delay.
3. The IC chip of claim 2, wherein a duration of the time delay is related to a characteristic time scale of the hybrid Boolean network.
4. The IC chip of claim 1, wherein the auxiliary circuit comprises a plurality of electronic devices, each electronic device being configured to implement a Boolean operation.
5. The IC chip of claim 4, wherein the auxiliary circuit comprises a plurality of pairs of series-connected inverter gates.
6. The IC chip of claim 1, wherein the auxiliary circuit comprises a plurality of electronic devices, each electronic device being configured to implement a copy operation.
7. The IC chip of claim 1, wherein the hybrid Boolean network comprises a plurality of electronic devices, each electronic device being configured to implement a Boolean operation.
8. The IC chip of claim 7, wherein the hybrid Boolean network comprises clocked and un-clocked electronic devices.
9. The IC chip of claim 7, wherein the hybrid Boolean network is configured as a modified random number generator.
10. The IC chip of claim 1, further comprising a substrate, wherein the hybrid Boolean network and the auxiliary circuit are disposed on the substrate.
11. The IC chip of claim 10, wherein the hybrid Boolean network and the auxiliary circuit are located in close physical proximity to each other on the substrate.
12. The IC chip of claim 10, wherein the hybrid Boolean network and the auxiliary circuit are located adjacent to one another on the substrate.
13. The IC chip of claim 1, further comprising a plurality of PUFs, each PUF comprising a respective hybrid Boolean network.
14. The IC chip of claim 13, further comprising a combiner circuit configured to combine respective outputs of each of the PUFs.
15. The IC chip of claim 14, wherein the combiner circuit comprises a PUF.
16. The IC chip of claim 1, wherein the IC chip is a field-programmable gate array (FPGA).
17. The IC chip of claim 1, wherein the IC chip is an application-specific IC (ASIC) chip.
18. The IC chip of claim 1, further comprising a register, wherein the register is configured to receive the transient response enable signal via the auxiliary circuit.
19. The IC chip of claim 18, wherein the register is configured to capture a response of the PUF.
20-62. (canceled)
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0032] The components in the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding parts throughout the several views.
DETAILED DESCRIPTION
[0046] Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. Methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present disclosure. As used in the specification, and in the appended claims, the singular forms “a,” “an,” “the” include plural referents unless the context clearly dictates otherwise. The term “comprising” and variations thereof as used herein are used synonymously with the term “including” and variations thereof, and both are open, non-limiting terms. The terms “optional” or “optionally” used herein mean that the subsequently described feature, event or circumstance may or may not occur, and that the description includes instances where said feature, event or circumstance occurs and instances where it does not. Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. When such a range is expressed, an aspect includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent “about,” it will be understood that the particular value forms another aspect. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint. As used herein, the terms “about” or “approximately”, when used in reference to a measurement of time (e.g., duration) or physical dimension, mean within plus or minus 10 percent of the referenced measurement.
[0047] As described above, a physically unclonable function (PUF) is a hardware cybersecurity primitive. A PUF produces a unique, unpredictable response when queried with a challenge. A PUF therefore provides a unique fingerprint (e.g., a “silicon fingerprint”), which is the result of entropy derived from manufacturing variances. PUFs can be used for cybersecurity applications including, but not limited to, secure key generation, memoryless key storage, device authentication, anti-counterfeiting, and intellectual property protection. Using a PUF requires the user to present a “challenge” set of information (such as a set of binary bits), and the PUF generates a “response” set of information, which is then checked against a challenge-response pair (CRP) database. Conventional PUF devices tend to be slow (e.g., a long time between challenge and response) and/or produce a response bit sequence that is much smaller than the challenge bit sequence, thus limiting the security of the PUF. Also, conventional PUFs can be “learned,” that is, the set of challenge-response pairs can be deduced using various attack strategies such as machine learning. In contrast, described herein is a PUF based on the transient, likely chaotic, dynamics of a hybrid Boolean network realized on a field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC). Slight manufacturing differences in the FPGA or ASIC, such as logic element rise and fall times, logic element threshold differences, and slight differences in delay of signals propagating on the chip, cause different transient behaviors of the Boolean network (different responses) to different challenge bit sequences, which are used as initial conditions for the network.
[0048] Referring now to
[0049] Optionally, and as shown in
[0050] As shown in
[0051] As described below, the physical device 102 is configured to input a challenge bit string into the PUF, where the challenge bit string sets an initial state of the circuit, and then release the PUF from the initial state. The physical device 102 is therefore configured to set the challenge and trigger release of the PUF. The physical device 102 is further configured to capture a transient response bit string from the PUF. As described herein, the physical device 102 can generate an enable signal, which triggers release of the PUF from the challenge state and capture of the transient response bit string from the PUF. For example, the physical device 102 can store the transient response bit string in memory. The transient response bit string is used to provide cybersecurity as described herein.
[0052] After the physical device 102 is manufactured, challenge-response pairs (CRPs) are generated and stored in memory of a computing device, e.g., in a database (also referred to herein as a “challenge-response pair database” or “CRP database”). This process is known as the enrollment phase. This disclosure contemplates performing enrollment with the verifier device 104. In other words, the verifier device 104 is configured to input one or more challenge bit strings into the physical device 102 which then inputs the challenge bit string into the PUF, releases the PUF from its initial state, and captures the respective one or more response bit strings from the PUF. The verifier device 104 is configured to associate respective challenge-response pairs (i.e., associate respective challenge and response bit strings) by maintaining the database.
[0053] In this implementation, the verifier device 104 sends a challenge bit string to physical device 102 and requests the corresponding response bit string. The physical device 102 receives the challenge bit string from the verifier device 104. The physical device 102 inputs the challenge bit string received from the verifier device 104 into the PUF, releases the PUF from its initial state, and captures a transient response bit string. The physical device 102 then transmits the captured transient response bit string to the verifier device 104, which queries the CRP database to determine whether the transient response bit string is associated with the challenge bit string. The verifier device 104 then transmits a result of the CRP database query to the physical device 102. The PUF is expected to produce a unique, unpredictable response when queried with a challenge. Thus, a particular transient response bit string should be received in response to a particular challenge bit string. This disclosure contemplates that the challenge and response bit strings are communicated between the physical device 102 and the verifier device 104 via the networks 110.
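The enrollment-then-verification flow described above can be sketched in ordinary Python. Everything here is illustrative: `toy_puf`, the challenge width, and the 10% noise tolerance are assumptions made for the sketch, not part of the design itself, and the real PUF is a physical circuit rather than a hash function.

```python
import hashlib
import secrets

def toy_puf(challenge):
    """Hypothetical stand-in for querying the physical PUF: a hash keeps
    the demo deterministic, unlike real silicon entropy."""
    digest = hashlib.sha256(challenge.encode()).hexdigest()
    return format(int(digest, 16) & 0xFFFF, "016b")

def enroll(puf, n_challenges=8, n_bits=16):
    """Enrollment phase: build the challenge-response-pair (CRP) database."""
    db = {}
    while len(db) < n_challenges:
        challenge = format(secrets.randbits(n_bits), f"0{n_bits}b")
        db[challenge] = puf(challenge)
    return db

def authenticate(puf, crp_db, challenge, max_fraction_differing=0.1):
    """Verification: re-query the device and compare against the stored
    response, tolerating a small fraction of noisy bits."""
    expected = crp_db[challenge]
    observed = puf(challenge)
    differing = sum(a != b for a, b in zip(expected, observed))
    return differing / len(expected) <= max_fraction_differing
```

The noise tolerance reflects the nonzero intra-device variation reported in the Examples: a legitimate device need not reproduce every response bit exactly.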
[0054] Referring now to
[0055] The hybrid Boolean network can be implemented with an FPGA, for example, by coding the design into a hardware programming language and compiling the code. Alternatively, the hybrid Boolean network can be implemented on an ASIC. Tiny manufacturing variations in signal pathways and input impedance to nodes of the hybrid Boolean network, whether implemented with an FPGA or an ASIC, are sufficient to give rise to different chaotic transient behaviors. As described herein, the hybrid Boolean network includes a plurality of electronic devices, where each electronic device (also referred to herein as “logical element”) is configured to implement a Boolean operation.
[0056] The IC chip 200 includes a substrate (not shown in
[0057] It should be understood that the characteristics of the PUF 220 change with temperature and/or supply voltage. In the following it should be understood that where temperature is referred to, similar statements about supply voltage apply. Additionally, it is desirable for the PUF 220 to function over relatively large ranges of temperature and supply voltage. For example, the PUF 220 is a component of an electronic device, which may be subjected to various temperatures. Alternatively or additionally, the PUF 220 may be powered by a battery that provides less voltage as the battery discharges. As noted above, the PUF's characteristics change with temperature and/or supply voltage. Typical clock signals (e.g., the transient response enable signal 210 shown in
[0058] Similar to the PUF 220, the auxiliary circuit 230 includes a plurality of electronic devices (also referred to herein as “logical elements”). The auxiliary circuit 230 therefore includes the same type of electronic devices included in the PUF 220. In other words, the temperature characteristics of the component devices of the PUF 220 and auxiliary circuit 230 are the same. Additionally, the auxiliary circuit 230 can be implemented with an FPGA or an ASIC (i.e., in the same manner as the PUF 220). As discussed above, the auxiliary circuit 230 is designed to introduce a time delay. In some implementations, each electronic device is configured to implement a Boolean operation. For example, the auxiliary circuit 230 can include a plurality of pairs of series-connected inverter gates. In other implementations, each electronic device is configured to implement a copy operation. It should be understood that the number of electronic devices in the auxiliary circuit 230 is directly related to the duration of the time delay. For example, a greater number of electronic devices through which the transient response enable signal 210 is fed before being input into the register 240 results in a longer time delay. Accordingly, the number of electronic devices in the auxiliary circuit 230 can be selected based on the characteristic time scale of the PUF 220. As an example, the delay line of the auxiliary circuit 230 can be configured so that the duration of the time delay is about 10 characteristic time scales. It should be understood that 10 characteristic time scales is provided only as an example. This disclosure contemplates using a time delay more or less than 10 characteristic time scales.
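The relationship between delay-line length and delay duration reduces to simple arithmetic. The sketch below assumes, purely for illustration, that each series-connected inverter pair contributes roughly 0.5 ns (the τ figure quoted later in the Examples); actual per-gate delays vary with process and layout.

```python
import math

def inverter_pairs_for_delay(target_delay_ns, pair_delay_ns=0.5):
    """Number of series-connected inverter pairs needed so that the
    transient response enable signal is delayed by at least
    target_delay_ns; pair_delay_ns is an assumed nominal per-pair delay."""
    return math.ceil(target_delay_ns / pair_delay_ns)

# e.g. a delay of about 10 characteristic time scales of ~0.5 ns each:
pairs_needed = inverter_pairs_for_delay(10 * 0.5)  # -> 10 pairs
```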
[0059] Optionally, in some implementations, the IC chip 200 further includes a plurality of PUFs, where each PUF includes a respective hybrid Boolean network. For example, a plurality of PUFs are illustrated in
[0060] Referring now to
[0061] At step 406, a transient response bit string is captured from the PUF. This can be accomplished, for example, at the output of the flip-flop shown in
[0062] In some implementations, the step of capturing a transient response bit string from the PUF optionally includes capturing a plurality of response bit strings from the PUF. Each of the response bit strings is captured at a different time (e.g., periodically) during the transient period. In this way, multiple responses are collected within the transient state. The transient response bit string is then obtained from the response bit strings. For example, the transient response bit string can include one or more bits selected from each of the response bit strings. In some implementations, the one or more bits selected from each of the response bit strings are determined using a cryptographic key, which can optionally be generated using another PUF. Alternatively, in other implementations, the one or more bits selected from each of the response bit strings are determined using a predetermined key, which can optionally be assigned at the time of manufacture.
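The bit-selection step above can be modeled compactly. The representation below is a sketch under assumed conventions: snapshots are equal-length '0'/'1' strings captured at successive times, and the key is a list of bit indices (one per snapshot), whether derived from another PUF or assigned at manufacture.

```python
def assemble_response(snapshots, key_bit_indices):
    """Form the transient response bit string by taking, from each
    snapshot captured during the transient, the bit position named by a
    per-device key. One selected bit per snapshot is assumed here for
    simplicity; the text above permits one or more bits per snapshot."""
    return "".join(snap[idx] for snap, idx in zip(snapshots, key_bit_indices))

# three snapshots captured at successive times; the key selects one bit of each:
response = assemble_response(["0110", "1011", "0001"], [0, 2, 3])  # -> "011"
```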
[0063] At step 408, the transient response bit string is used to provide cybersecurity. In some implementations, the transient response bit string is used to authenticate a device. Alternatively, in other implementations, the transient response bit string is used as a cryptographic key. It should be understood that authentication and secure key generation are provided only as example applications. This disclosure contemplates using the PUFs described herein for other applications including, but not limited to, memoryless key storage, anti-counterfeiting, tamper-proofing, secure communications, and intellectual property protection. As described herein, the PUF is expected to produce a unique, unpredictable response (e.g., a fingerprint) when queried with a challenge. There is an expectation that a particular transient response bit string should be received in response to a particular challenge bit string. Such correspondences (i.e., CRPs) can be stored in a database as described herein. Thus, for authentication, if the transient response bit string received at step 408 is a match for the challenge bit string input at step 402, then a device (e.g., physical device 102 shown in
[0064] Referring now to
[0065] It should be appreciated that the logical operations described herein with respect to the various figures may be implemented (1) as a sequence of computer implemented acts or program modules (i.e., software) running on a computing device (e.g., the computing device described in
[0066] Example Computing Device
[0067] Referring to
[0068] In its most basic configuration, computing device 500 typically includes at least one processing unit 506 and system memory 504. Depending on the exact configuration and type of computing device, system memory 504 may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two. This most basic configuration is illustrated in
[0069] Computing device 500 may have additional features/functionality. For example, computing device 500 may include additional storage such as removable storage 508 and non-removable storage 510 including, but not limited to, magnetic or optical disks or tapes. Computing device 500 may also contain network connection(s) 516 that allow the device to communicate with other devices. Computing device 500 may also have input device(s) 514 such as a keyboard, mouse, touch screen, etc. Output device(s) 512 such as a display, speakers, printer, etc. may also be included. The additional devices may be connected to the bus in order to facilitate communication of data among the components of the computing device 500. All these devices are well known in the art and need not be discussed at length here.
[0070] The processing unit 506 may be configured to execute program code encoded in tangible, computer-readable media. Tangible, computer-readable media refers to any media that is capable of providing data that causes the computing device 500 (i.e., a machine) to operate in a particular fashion. Various computer-readable media may be utilized to provide instructions to the processing unit 506 for execution. Example tangible, computer-readable media may include, but is not limited to, volatile media, non-volatile media, removable media and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. System memory 504, removable storage 508, and non-removable storage 510 are all examples of tangible, computer storage media. Example tangible, computer-readable recording media include, but are not limited to, an integrated circuit (e.g., field-programmable gate array or application-specific IC), a hard disk, an optical disk, a magneto-optical disk, a floppy disk, a magnetic tape, a holographic storage medium, a solid-state device, RAM, ROM, electrically erasable program read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices.
[0071] In an example implementation, the processing unit 506 may execute program code stored in the system memory 504. For example, the bus may carry data to the system memory 504, from which the processing unit 506 receives and executes instructions. The data received by the system memory 504 may optionally be stored on the removable storage 508 or the non-removable storage 510 before or after execution by the processing unit 506.
[0072] It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination thereof. Thus, the methods and apparatuses of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computing device, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs may implement or utilize the processes described in connection with the presently disclosed subject matter, e.g., through the use of an application programming interface (API), reusable controls, or the like. Such programs may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language and it may be combined with hardware implementations.
Examples
[0073] Physically unclonable functions (PUFs) are devices that exploit small variations in a manufacturing process to create unique and stable identifying characteristics, with applications ranging from intellectual property protection and device authentication to secret key exchange. Presented below is a PUF design including a chaotic Boolean network implemented on a field-programmable gate array, which is capable of generating challenge-response pairs in as little as 10 nanoseconds (ns). In contrast to other designs, multiple response bits per challenge are collected. This demonstrates an exponential scaling of entropy with network size. A high degree of uniqueness and reliability is found for the PUF design, respectively characterized by μ.sub.inter=0.41±0.02 and μ.sub.intra=0.02±0.01 for a 256-node network. It is further shown that the Boolean network is chaotic and resistant to a third-party machine learning attack, while exhibiting moderate temperature variation, which facilitates commercial use.
[0074] The circuit design described below is the only known strong PUF with multiple response bits built on commercially available off-the-shelf hardware. The PUF is a highly compact chaotic circuit with initial conditions set by the challenge bit string. The response bit string is generated by reading out the state of the circuit during its initial transient, typically within 10 ns. Specifically, the circuit design is a hybrid Boolean network (HBN) implemented on a field-programmable gate array (FPGA). PUF characteristics arise from tiny FPGA manufacturing variations in the wiring and logical elements, which alter the HBN dynamics and hence its challenge-response behavior as a PUF when compared across different FPGAs. Furthermore, the combination of nonlinear and chaotic dynamics with an exponential scaling of entropy with network size appears to result in resilience to machine-learning attacks. Lastly, this disclosure contemplates that the circuit design can double as a true hardware random number generator (HRNG) by letting the circuit continue to evolve well into the chaotic regime after the transient behavior.
[0075] PUF Design
[0076] Described below are definitions of different kinds of networks and previous work on a similarly designed system used for random number generation.
[0077] Hybrid Boolean Networks and Random Number Generation
[0078] Boolean networks are collections of connected nodes each in the state 0 or 1. The state of each node is determined by a Boolean function, which takes as inputs the states of all nodes connected to it, and outputs the new state of that node. An autonomous Boolean network (ABN) is a Boolean network whose functions update without regard to an external clock: their dynamics occur as fast as the physical substrate allows. ABN dynamics are highly sensitive to variations in propagation speed along the links of the network and changes in the rise and fall time of each node, making them attractive candidates as components of a PUF. This is in contrast to a clocked Boolean network, such as one implemented in software, which updates all node states synchronously using a global clock. Hybrid Boolean networks (HBNs) contain both clocked and unclocked components.
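The distinction between clocked and autonomous updating can be made concrete with a toy simulator. The sketch below (an illustration, not the patented network) shows only the clocked case, where all nodes switch synchronously on a global clock; an ABN has no such global step and updates as fast as each gate physically allows.

```python
def step_clocked(state, inputs, functions):
    """One synchronous update of a clocked Boolean network: every node's
    new state is computed from the *current* states of its input nodes,
    and all nodes switch together on the clock edge."""
    return [functions[i](*(state[j] for j in inputs[i]))
            for i in range(len(state))]

# 3-node ring, each node XORing its two neighbours:
inputs = [(1, 2), (0, 2), (0, 1)]
xor2 = lambda a, b: a ^ b
next_state = step_clocked([1, 0, 0], inputs, [xor2] * 3)  # -> [0, 1, 1]
```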
[0079] As studied by Rosin [See D. Rosin. Ultra-fast physical generation of random numbers using hybrid Boolean networks. In Dynamics of Complex Autonomous Boolean Networks, pages 57-79. Springer, 2015; D. Rosin et al. Ultrafast physical generation of random numbers using hybrid Boolean networks. Physical Review E, 87(4):040902, 2013], HBNs implemented on field-programmable gate arrays (FPGAs) can be used to produce extremely high random bit rates when used for random number generation. This is useful for many secure communication protocols, including the popular Rivest-Shamir-Adleman cryptosystem [See J. Jonsson and B. Kaliski. Public-key cryptography standards (PKCS) #1: RSA cryptography specifications version 2.1. Technical report, 2003], which rely on the generation of random numbers for encrypting secure data. Generating random numbers as quickly as possible offers a security advantage by increasing the rate at which data can be encrypted and decreasing the time that cryptographic keys must be stored.
[0080] Rosin's construction, which is referred to herein as an HBN-RNG, was designed to create a chaotic physical system on an FPGA whose dynamics rapidly approach the maximum frequency allowed by the hardware. The HBN-RNG is shown in
[0081] As shown in
[0082] When implemented on an Altera Cyclone IV FPGA, a transition to chaos in the HBN-RNG occurs at N=5, above which the network becomes exponentially sensitive to initial conditions and LE parameter details. An efficient RNG can be realized with 128 copies of N=16 networks running in parallel, resulting in a 12.8 Gbit/s random bit rate.
[0083] HBN-PUF
[0084] The physical random number generator described above is “PUF-like” in a number of ways. First, tiny manufacturing variations in signal pathways and input impedance to nodes are sufficient to give rise to different chaotic transient behaviors, suggesting the PUF's uniqueness property. Second, the HBN-RNG shown in
[0085] With these considerations in mind, the HBN-RNG scheme can be modified to act as a HBN-PUF, as shown in
[0086] Replace each node with an XOR LE 608 and a multiplexer 610 that sets the initial state of the ABN to a particular N-bit string (the challenge), as shown in
[0087] Capture the time series of the network using N-bit register 612 at a rate comparable to its dynamics, then read out the time series using a global clock 614 and select (in a manner defined below) an N-bit response from the transient, as shown in
[0088] The first change is to make the network challengeable and to prevent self-excitation from the all-0 state by removing the XNOR node. A challenge C is defined to be the N-bit binary string (also referred to herein as “challenge bit string”) setting the initial state of the ABN according to some arbitrary but fixed labeling of the nodes. Mathematically, shown by Eqn. (1):
C=x(t=0), (1)
for an N-bit state x(t) of the ABN at time t=0. By defining challenges to be the initial states of the ABN, an exponentially scaling challenge space is obtained in which the number of possible challenges grows as 2.sup.N. Specifically, the number of valid challenges N.sub.vc is defined to be all possible bit strings of length N that are not steady-states of the ABN. This means the all-0 and all-1 states are excluded for all N, as the asynchronous 3-input XOR remains static in either case. Similarly, for even N the states with alternating 0's and 1's are excluded. Thus, the number of valid challenges is given by Eqn. (2):
N.sub.vc=2.sup.N−2 for odd N, and N.sub.vc=2.sup.N−4 for even N. (2)
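The exclusions above can be checked by brute force. The sketch below assumes a ring topology in which each node computes the 3-input XOR of itself and its two neighbours (an assumption consistent with the steady states named above); enumerating fixed points of the synchronous update map recovers exactly 2 steady states for odd N and 4 for even N.

```python
from itertools import product

def xor3_ring_fixed_points(n):
    """Enumerate states of an N-node ring (each node: XOR of itself and
    its two ring neighbours) that are unchanged by an update, i.e. the
    steady states that must be excluded from the challenge space."""
    fixed = []
    for state in product((0, 1), repeat=n):
        nxt = tuple(state[(i - 1) % n] ^ state[i] ^ state[(i + 1) % n]
                    for i in range(n))
        if nxt == state:
            fixed.append(state)
    return fixed

def n_valid_challenges(n):
    """Valid challenges: all 2^N states minus the steady states
    (2 for odd N, 4 for even N)."""
    return 2**n - (4 if n % 2 == 0 else 2)
```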
[0089] The second change is to capture the transient behavior of the ABN where it is simultaneously most reliable and unique. This is the point in time at which the FPGA manufacturing variations have decorrelated the network from its initial state sufficiently to act as a “fingerprint” for the circuit. Formally, the HBN-PUF is challenged by setting the initial state of the ABN to C and then allowing it to evolve for a short time while the behavior is still in the transient phase. The N-bit response R of the HBN-PUF to the challenge C is then selected from among the ABN time series by evaluating its bitwise Boolean derivative, defined as Eqn. (3):
R=XOR[x(t),x(t−τ)]|.sub.t=t.sub.opt, (3)
where XOR[.,.] is the bitwise XOR function and | is used to denote evaluation at a particular value. The time t≥τ is the registered time at which the state of the ABN is stored after applying the challenge, as described below. The optimal time t.sub.opt is the time maximizing uniqueness and reliability from among the time series of Boolean derivative states in the transient, as chosen through an optimization routine described below. The choice to use the Boolean derivative is inspired by the XOR procedure for reducing bias in the output bit stream of the HBN-RNG described above. Finally, the number of bits read per challenge is N, and therefore the extractable bits from the design may potentially scale as N·2.sup.N, resulting in a strong PUF.
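A software model of the response selection of Eqn. (3) is straightforward. This sketch assumes states registered at successive delay intervals of τ and compares each state with the previous one; the value of t_opt itself would come from the separate optimization routine mentioned above.

```python
def boolean_derivative(x_now, x_prev):
    """Bitwise Boolean derivative: XOR of the network state with its
    state one delay interval (tau) earlier."""
    return [a ^ b for a, b in zip(x_now, x_prev)]

def select_response(time_series, t_opt):
    """N-bit response: the Boolean derivative evaluated at the optimal
    transient index t_opt (t_opt >= 1 into the registered series)."""
    return boolean_derivative(time_series[t_opt], time_series[t_opt - 1])

# states registered at tau intervals; response taken at t_opt = 2:
series = [[0, 0, 1], [0, 1, 1], [1, 1, 0]]
r = select_response(series, 2)  # -> [1, 0, 1]
```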
[0090] The time series of the ABN evolution is collected as follows. The ABN is first set to C at t=0, and subsequently a RESET bit is flipped to 0, allowing the ABN to evolve. The dynamics of the ABN are then registered autonomously in τ≈0.5 ns intervals by passing the RESET signal down a delay line. The delay line consists of sequential pairs of inverter gates, each pair roughly delaying the RESET signal by τ. After each delay, the states of all of the nodes in the network at that time are placed in registers, then later pushed to memory using a global clock. This process ensures the dynamics of the ABN are captured at a timescale comparable to their evolution, as the inverter gate pairs used in the delay line and the LEs of the nodes in the ABN both have delays close to τ, though each varies slightly due to manufacturing differences.
[0091] Experimental Procedure
[0092] The HBN-PUF is created by coding the design into a hardware programming language (e.g., Verilog hardware description language (HDL)) and using a compiler (e.g., QUARTUS II computer aided design (CAD) software from INTEL CORP. of Santa Clara, Calif.) to compile the code with placement and routing chosen automatically by its optimization procedure. N.sub.chips=10 chips are then separately programmed with the same .SOF file. Each chip is a DE10-Nano system on a chip (SOC) from TERASIC, INC. of Hsinchu, Taiwan, hosting CYCLONE V 5CSEBA6U23I7 FPGAs from INTEL CORP. of Santa Clara, Calif. This ensures each FPGA instantiates an identical copy of the HBN-PUF described herein (e.g., as shown in
[0093] Using custom Python scripts, N.sub.distinct unique and randomly selected valid challenges are loaded onto each chip's on-board memory and used to set the initial state of the HBN. The network then evolves for a short time during the transient chaotic phase, the time series is saved to memory, and the PUF is reset to the next challenge.
[0094] The entire process is repeated N.sub.query times, so that the total number of applied challenges per chip is equal to N.sub.distinct×N.sub.query. As described below, a majority vote is performed, in which case the response to a given challenge is taken to be the most frequently observed bits from among N.sub.votes=25 responses to the same challenge. In this way the number of times a challenge is applied is N.sub.query=N.sub.votes×N.sub.repeat and the number of responses to this challenge following the vote is N.sub.repeat, so that the total number of CRPs is N.sub.distinct×N.sub.repeat. The data of the time series are then read out and used in the analysis described below.
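The per-bit majority vote can be sketched as follows; with an odd N.sub.votes (such as the 25 used here) no ties can occur. The list-of-lists representation is an assumption made for the sketch.

```python
def majority_vote(responses):
    """Take the most frequently observed value of each bit position
    across repeated responses to the same challenge, suppressing
    occasional noise-induced bit flips. With an even number of
    responses, a tie resolves to 0."""
    n = len(responses)
    return [int(sum(column) * 2 > n) for column in zip(*responses)]

# three noisy reads of a 4-bit response:
voted = majority_vote([[1, 0, 1, 0],
                       [1, 1, 1, 0],
                       [1, 0, 0, 0]])  # -> [1, 0, 1, 0]
```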
[0095] Device Statistics
[0096] Standard measures of uniqueness and reliability for the PUF design across multiple chips and for different network sizes are defined and evaluated below. Consistent performance comparable to other state-of-the-art PUFs is found. Results showing that the HBN-PUF can double as a hardware random number generator are also presented.
BACKGROUND
[0097] Let P∈P be a particular PUF instance P belonging to the set of all PUF instances P following the design described above. The response R is a random variable R: S.sub.P.fwdarw.{0,1}.sup.N mapping from the set of all possible physical states S.sub.P of PUF instance P to the set of all binary strings of length N, denoted {0, 1}.sup.N.
[0098] Specifically, the response takes as input a particular state S.sub.P,C∈S.sub.P of PUF instance P resulting from challenge C. Expressed element wise, this mapping is S.sub.P,C.fwdarw.R(S.sub.P,C). To simplify the notation, the response R(P, C) is written as a function of the PUF instance P and the challenge applied to it C, with the tacit understanding that the formal definitions given above hold.
[0099] The reliability and uniqueness of P are characterized by studying the distributions of R for various P and C; in other words, how the design performs as a PUF is studied by comparing responses from individual and different instances on a per-challenge basis. To that end, the following standard measures are defined.
[0100] Intra-Device and Inter-Device Definitions
[0101] Consider two different responses from the same challenge string C.sub.i. These responses may result from applying the same challenge string to the same PUF instance two different times C.sub.i,j and C.sub.i,k, or they may result from applying the challenge exactly once to two different PUF instances P.sub.l and P.sub.m. The first case will be used to gauge reliability: a single PUF instance should ideally produce identical responses when presented with the same challenge. The second case will be used to gauge uniqueness: two different PUF instances should give responses to the same challenge which, when compared, appear random and uncorrelated. For clarity these indices are summarized:
[0102] i∈[0, N.sub.distinct]: Distinct challenge;
[0103] j, k∈[0, N.sub.repeat]: Separate applications of distinct challenge;
[0104] l, m∈[0, N.sub.chips]: Separate PUF instances.
[0105] If each response is taken to be an N-bit string, then the fraction of dissimilar bits between the two responses is denoted as shown by Eqns. (4) and (5):
r.sub.ijk;l=D[R(P.sub.l,C.sub.i,j),R(P.sub.l,C.sub.i,k)]÷N (4)
u.sub.ilm;j=D[R(P.sub.l,C.sub.i,j),R(P.sub.m,C.sub.i,j)]÷N, (5)
where D[.,.] is the Hamming distance (number of differing bits between two N-bit binary strings), r.sub.ijk;l (mnemonic ‘reliability’) is the within-instance (intra-device) fractional Hamming distance between responses of the fixed PUF instance P.sub.l resulting from applications j and k of challenge i. Likewise, u.sub.ilm;j (mnemonic ‘uniqueness’) is the between-instance (inter-device) fractional Hamming distance between responses of PUF instances P.sub.l and P.sub.m resulting from the fixed application j of challenge i.
[0106] To obtain distributions of these distances on a per-challenge basis, the pairwise combinations used to construct them are averaged over, and then the remaining indices are further averaged over to obtain mean measures of reliability μ.sub.intra and uniqueness μ.sub.inter. Specifically, if <.>.sub.(a,b),c indicates the average of a quantity over pairwise combinations (a, b) and remaining indices c, then:
μ.sub.intra=<r>.sub.(j,k),l,i, (6)
μ.sub.inter=<u>.sub.(l,m),j,i. (7)
[0107] To gauge the reliability of an individual chip, one does not average over the instances P.sub.l, so that the mean reliability on a per-chip basis is μ.sub.intra;l=<r>.sub.(j,k),i. Note that a time series of N-bit strings representing the time evolution of the network is recorded, so that the above measures exist at every point in time. Ideally, μ.sub.intra=0 and μ.sub.inter=0.5 for all time. In practice this is not the case, and the response is chosen as the point in time t.sub.opt that maximizes Δμ(t):=μ.sub.inter(t)−μ.sub.intra(t), i.e., the point in the transient that is simultaneously most reliable and unique.
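The per-challenge statistics of Eqns. (4)-(7) can be sketched in Python as follows; this is a minimal illustration assuming responses are stored as a 0/1 array indexed by (chip l, challenge i, application j, bit n), with hypothetical function names:

```python
import numpy as np
from itertools import combinations

def frac_hamming(a, b):
    """Fractional Hamming distance between two N-bit responses (Eqns. 4-5)."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a != b)

def mu_intra(resp):
    """Mean reliability (Eqn. 6): average r over pairwise application
    combinations (j, k), then over chips l and challenges i.
    resp: array of shape (N_chips, N_distinct, N_repeat, N)."""
    chips, ch, rep, _ = resp.shape
    d = [frac_hamming(resp[l, i, j], resp[l, i, k])
         for l in range(chips) for i in range(ch)
         for j, k in combinations(range(rep), 2)]
    return float(np.mean(d))

def mu_inter(resp):
    """Mean uniqueness (Eqn. 7): average u over pairwise chip
    combinations (l, m), then over applications j and challenges i."""
    chips, ch, rep, _ = resp.shape
    d = [frac_hamming(resp[l, i, j], resp[m, i, j])
         for i in range(ch) for j in range(rep)
         for l, m in combinations(range(chips), 2)]
    return float(np.mean(d))
```

An ideal PUF would give `mu_intra(resp)` near 0 and `mu_inter(resp)` near 0.5; evaluating both at every recorded time step locates t.sub.opt as described above.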
[0109] Experimental Intra-Device and Inter-Device Statistics
[0109] Here we present results for N.sub.distinct=100 valid challenges repeated N.sub.repeat=10 times each for N=16 and N=256 node networks. Plotted on the lefthand side of
[0110] It can be seen from
[0111] Furthermore, we see that μ.sub.inter and μ.sub.intra are at most 9% and 2% away from their ideal values of 0.5 and 0, respectively. These errors are further correctable through standard means such as: error correction algorithms, tabulating the least unique and reliable bits during enrollment and removing them from the response, or simply requiring more responses until the probability of a false identification is near zero. Each of these is practical for the HBN-PUF described herein, as multiple response bits per challenge are collected very quickly, making authentication tasks simpler and more secure than with single-bit response PUFs. This is because the probability of an adversary correctly guessing, e.g., an N=256 bit response is negligible in comparison to guessing a single bit, in which case a very large number of challenges would be required for authentication. Moreover, distributions very similar to those above are obtained using only a small number of challenges, e.g., N.sub.distinct˜10.
[0112] Random Number Generation
[0113] It is shown below that the average bit value of the HBN-PUF responses exhibits a tightly centered distribution about 0.5 at late times, suggesting a random quality. Consider the N=256 node network presented above, and let s.sub.ijln be the n.sup.th bit of the response string s from challenge i, application j, and instance l at a time t≥t.sub.opt.
[0114] From
[0115] Entropy Analysis
[0116] In the security analysis of PUFs, the extractable entropy is of central importance. This quantity is ultimately related to both reliability and uniqueness and provides an upper bound on the amount of information that can be securely exchanged with a PUF instance [See P. Tuyls et al. Information-theoretic security analysis of physical unclonable functions. In International Conference on Financial Cryptography and Data Security, pages 141-155. Springer, 2005]. The extractable entropy is difficult to estimate directly, as it is formed from probability distributions in exponentially high dimensional spaces. Described below are several ways to estimate entropy from limited data.
[0117] The process starts by assuming independence between bit pairs in the responses of the HBN-PUF described herein (e.g., as shown in
[0118] Minimum Entropy
[0119] The min-entropy of a random variable X is defined as Eqn. (8):
H.sub.min(X)=−log.sub.2(p.sub.max(X)), (8)
[0120] where p.sub.max(X) is the probability of the most likely outcome. If X=(x.sub.1, x.sub.2, . . . , x.sub.n) is a vector of n independent random variables, then the min-entropy is defined as Eqn. (9):
H.sub.min(X)=−Σ.sub.i=1.sup.n log.sub.2(p.sub.max(x.sub.i)). (9)
[0121] In the case of a strong PUF with multiple challenges and a large response space, an ordering of the response bits is needed to make sense of entropy calculations. A natural ordering is to define the response of the i-th node to the j-th challenge as x.sub.jN+i, where the challenges are ordered lexicographically. This is illustrated in Table 1 for the simple case of N=3. Here, there are only 6 challenges because the trivial all-0 and all-1 challenges are omitted. An illustration of response-bit ordering for N=3, where there are 3×6=18 total bits is shown in Table 1.
TABLE-US-00001
TABLE 1
Challenge  Node 1     Node 2     Node 3
001        x.sub.1    x.sub.2    x.sub.3
010        x.sub.4    x.sub.5    x.sub.6
011        x.sub.7    x.sub.8    x.sub.9
100        x.sub.10   x.sub.11   x.sub.12
101        x.sub.13   x.sub.14   x.sub.15
110        x.sub.16   x.sub.17   x.sub.18
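The lexicographic ordering of Table 1 can be sketched as below; a minimal Python illustration with hypothetical helper names, using 1-based challenge and node indices so that node i of the j-th valid challenge maps to x.sub.(j−1)N+i:

```python
from itertools import product

def valid_challenges(N):
    """All N-bit challenges in lexicographic order, omitting the trivial
    all-0 and all-1 strings (as in Table 1)."""
    return [c for c in product([0, 1], repeat=N)
            if any(c) and not all(c)]

def bit_index(j, i, N):
    """1-based index of the response bit x for node i (1..N) of the j-th
    valid challenge (1..N_vc), matching the ordering of Table 1."""
    return (j - 1) * N + i

# Reproduce a corner of Table 1 for N = 3: challenge '010' is the 2nd
# valid challenge, so its node 1 maps to x_4.
ch = valid_challenges(3)
print(len(ch))             # 6 valid challenges
print(bit_index(2, 1, 3))  # 4
```

The last valid challenge's last node, `bit_index(6, 3, 3)`, gives 18, the total bit count quoted above.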
[0122] Assuming independence of x.sub.i, the min-entropy for the HBN-PUF described herein can be readily calculated with Eqn. (9) from empirical estimates of p.sub.max(x.sub.i) [See D. Holcomb et al. Power-up sram state as an identifying fingerprint and source of true random numbers. IEEE Transactions on Computers, 58(9):1198-1210, 2009; P. Simons et al. Buskeeper PUFs, a promising alternative to d flip-flop PUFs. In 2012 IEEE International Symposium on Hardware-Oriented Security and Trust, pages 7-12. IEEE, 2012]. For each x.sub.i, the estimate of p.sub.max(x.sub.i) is simply the observed frequency of 0 or 1, whichever is larger. To put the entropy calculations into context, the calculations are presented as a fraction of the optimal case. If all of the x.sub.i were independent and completely unbiased, i.e., each x.sub.i were equally likely to be 0 or 1, then the min-entropy would be equal to N times the number of valid challenges N.sub.vc. The min-entropy density is therefore defined as shown by Eqn. (10):
ρ.sub.min=H.sub.min/(NN.sub.υc). (10)
[0123] Due to the exponential scaling of the challenge space N.sub.vc, these values cannot be measured using all of the possible valid challenges for N>8, though, as described below, the full challenge space for low N is studied. Thus, it is assumed that the randomly chosen challenges form a representative sample, and the result is scaled by the fraction of the unused space to obtain H.sub.min. Table 2 presents minimum entropy (H.sub.min) and minimum entropy densities (ρ.sub.min) for N=8, 16, 32, 64 with N.sub.chips=10, N.sub.distinct=100, and N.sub.repeat=100.
TABLE-US-00002
TABLE 2
N    H.sub.min        ρ.sub.min
8    1.1 × 10.sup.3   0.57
16   5.1 × 10.sup.5   0.48
32   5.7 × 10.sup.10  0.41
64   5.7 × 10.sup.20  0.48
[0124] It can be seen from Table 2 that the HBN-PUFs have min-entropy approximately 50% of full min-entropy. For comparison, various standard electronic PUFs have min-entropy between 51% and 99%—see, e.g., Ref. [See R. Maes. Physically unclonable functions. Springer, 2016] for a more complete comparison. The HBN-PUF therefore has min-entropy density comparable to state-of-the-art techniques. Another interpretation of the min-entropy is that it is equal to the number of bits one can securely exchange if an adversary only knows about the biases of the x.sub.i. From Table 2, one can exchange 5.7×10.sup.20 bits of information against a naïve adversary. This HBN-PUF uses only 3×64=192 LEs, which is extremely compact compared to other FPGA-based PUF designs, and hence it is possible to easily increase the entropy by increasing the size of the ring.
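The min-entropy estimate of Eqns. (8)-(10) can be sketched as follows; a minimal Python illustration in which p.sub.max is estimated per bit as the observed frequency of the more common outcome (the function name is hypothetical):

```python
import numpy as np

def min_entropy(bits):
    """Empirical min-entropy (Eqns. 8-10) assuming independent bits.

    bits: (N_obs, N_bits) 0/1 array of repeated observations of each
    response bit across chips/queries.
    Returns (H_min, rho_min), where rho_min is H_min per bit position.
    """
    bits = np.asarray(bits)
    p1 = bits.mean(axis=0)            # empirical P(bit = 1)
    p_max = np.maximum(p1, 1.0 - p1)  # most likely outcome per bit
    h = -np.log2(p_max)               # per-bit min-entropy, Eqn. (8)
    H_min = h.sum()                   # sum over bits, Eqn. (9)
    return H_min, H_min / bits.shape[1]

# Unbiased bits contribute 1 bit of min-entropy each; a constant
# (fully biased) bit contributes nothing.
obs = np.array([[0, 1, 1],
                [1, 0, 1],
                [0, 1, 1],
                [1, 0, 1]])
H, rho = min_entropy(obs)
print(H, rho)
```

In the full analysis, `bits` would hold one column per (challenge, node) pair under the ordering of Table 1, and the density would be normalized by N·N.sub.vc per Eqn. (10).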
[0125] Joint Entropy
[0126] As described above, it is assumed that the x.sub.i are independent, though this need not be the case. It is possible that some bits reveal information about others, reducing the entropy. These correlations between bit pairs are studied, first by calculating the mutual information defined by Eqn. (11):
I(x.sub.i;x.sub.j)=Σ.sub.a,b∈{0,1}p(x.sub.i=a,x.sub.j=b) log.sub.2 [p(x.sub.i=a,x.sub.j=b)/(p(x.sub.i=a)p(x.sub.j=b))], (11)
between all pairs of x.sub.i, x.sub.j. Unlike min-entropy, the mutual information is difficult to calculate for higher N, so attention is restricted to N=3-8 and the full valid challenge space is used. The mutual information for small N is calculated with N.sub.chips=10, N.sub.distinct=N.sub.vc, and N.sub.repeat=100. For N=7, regions with non-trivial mutual information (>0.05 bits) are shown in
[0127] From the pairwise mutual information, the joint entropy is estimated by penalizing the min-entropy, as shown by Eqn. (12):
H.sub.joint=H.sub.min−Σ.sub.iI(x.sub.σ(i);x.sub.σ(i+1)), (12)
where the ordering σ of the bits is such that the penalty is as large as possible. Calculating the ordering of the bits to maximize the joint information penalty is effectively a traveling salesman problem, which can be solved approximately with a 2-opt algorithm [See B. Chandra et al. New results on the old k-opt algorithm for the traveling salesman problem. SIAM Journal on Computing, 28(6):1998-2029, 1999].
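The pairwise mutual information of Eqn. (11) can be estimated empirically as sketched below; a minimal Python illustration with a hypothetical function name (the 2-opt ordering search used for the penalty term is omitted for brevity):

```python
import numpy as np

def mutual_information(x, y):
    """Empirical pairwise mutual information I(x; y) in bits (Eqn. 11),
    computed from aligned vectors of 0/1 observations of two response bits."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))   # joint probability
            p_a = np.mean(x == a)                 # marginals
            p_b = np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

# Perfectly correlated bits share 1 bit of information;
# independent bits share none.
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # -> 1.0
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # -> 0.0
```

Evaluating this over all bit pairs yields the mutual-information matrix referenced above, whose largest-penalty path ordering feeds the H.sub.joint estimate of Eqn. (12).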
[0128] Minimum entropy (H.sub.min), joint entropy (H.sub.joint), and joint entropy densities (ρ.sub.joint) for N=3-8 are shown in Table 3. Joint entropy density estimates are similar to many other FPGA-based PUF designs.
TABLE-US-00003
TABLE 3
N   H.sub.min   H.sub.joint   ρ.sub.joint
3   4.6         3.5           0.19
4   29.8        17.6          0.37
5   63.5        19.0          0.13
6   216.2       111.3         0.31
7   467.9       221.0         0.25
8   1140.6      514.8         0.25
[0129] The resulting entropy estimates are tabulated in Table 3, along with entropy density estimates defined analogously to Eqn. (10). The estimates of the joint-entropy density are, on average, 25% less than the estimates of the min-entropy density, a larger reduction than in other electronic PUF designs, where the joint-entropy estimate is between 2.9% and 8.24% less. See Reference [See R. Maes. Physically unclonable functions. Springer, 2016] for a detailed comparison.
[0130] Although the existence of non-zero mutual information lowers the amount of information that can be securely exchanged, calculating the mutual information directly is computationally expensive. Such estimates, and therefore such attacks, are difficult to calculate for large N. Three-bit correlations likely exist but are even more difficult to estimate, so it is unclear whether the true entropy is in practice much smaller than the joint-entropy estimates above, although a machine-learning attack may reveal such dependencies efficiently [See U. Rührmair et al. Modeling attacks on physical unclonable functions. In Proceedings of the 17th ACM conference on Computer and communications security, pages 237-249. ACM, 2010].
[0131] Context-Tree Weighting Test
[0132] The entropy is estimated through a string compression test below. The results here should be understood as an upper bound on the true entropy, especially for larger N. In particular, the context-tree weighting (CTW) algorithm [See F. Willems et al. The context-tree weighting method: basic properties. IEEE Transactions on Information Theory, 41(3):653-664, 1995] is considered.
[0133] The CTW algorithm takes a binary string called the context and forms an ensemble of models that predict subsequent bits in the string. It then losslessly compresses subsequent strings into a codeword using the prediction model. The size of the codeword is defined as the number of additional bits required to encode the PUF instance's challenge-response behavior. If the context contains information about a subsequent string, then the codeword will be of reduced size.
[0134] In the case of PUFs, the codeword length has been shown to approach the true entropy of the generating source in the limit of unbounded tree depth [See T. Ignatenko et al. Estimating the secrecy-rate of physical unclonable functions with the context-tree weighting method. In 2006 IEEE International Symposium on Information Theory, pages 499-503. IEEE, 2006]. However, the required memory scales exponentially with tree depth, so it is not computationally feasible to consider an arbitrarily deep tree in the CTW algorithm. Instead, the tree depth is varied up to D=20 to optimize the compression.
[0135] A CTW compression is performed as follows:
[0136] Step 1: Collect data for N=3-8 HBN-PUFs with N.sub.chips=10, N.sub.distinct=N.sub.vc, and N.sub.repeat=1.
[0137] Step 2: Concatenate the resulting measurements for all but one HBN-PUF instance into a one-dimensional (1D) string of length (N.sub.chips−1)N.sub.vcN to be used as context.
[0138] Step 3: Apply the CTW algorithm to compress the measurements from the last HBN-PUF with the context, using various tree depths to optimize the result.
[0139] Step 4: Repeat Steps 2-3, omitting measurements from a different HBN-PUF instance, until all HBN-PUFs have been compressed.
[0140] The results of this compression test are presented in Table 4. The final entropy estimate is the average codeword length from all of the compression tests described above. If the behavior of the N.sub.chips−1 PUF instances can be used to predict the behavior of the unseen instance, then the PUFs do not have full entropy.
[0141] Entropy (H.sub.CTW) and entropy density (ρ.sub.CTW), as estimated from the CTW compression test, are shown in Table 4. Note that these values are upper bounds of the true entropy due to the bounded tree depth.
TABLE-US-00004
TABLE 4
N   H.sub.CTW   ρ.sub.CTW
3   19.4        1.08
4   47.4        0.99
5   148.6       0.99
6   357.9       0.99
7   807.9       0.92
8   1952.2      0.97
[0142] Consistent with the expectation that this is an upper-bound estimate, the entropies are all larger than those calculated with the joint-entropy test described above. Most of the PUF data is resistant to compression, particularly those with higher N, although it is likely the case that higher N require a deeper tree to compress. These results are again similar to studies on other FPGA-based PUFs, which find CTW compression rates between 49% and 100% [See S. Katzenbeisser et al. PUFs: Myth, fact or busted? a security evaluation of physically unclonable functions (PUFs) cast in silicon. In International Workshop on Cryptographic Hardware and Embedded Systems, pages 283-301. Springer, 2012].
[0143] Entropy Summary
[0144] Three different statistical tests to estimate the entropy of the HBN-PUFs are described above. Two of the tests are computationally intensive and are only performed on HBN-PUFs of size N=3-8. One is more easily scalable and was evaluated for N up to 64. To better understand these estimates as a function of N and resource size, these three estimates are shown in
[0145] The H.sub.CTW estimate yields the most entropy, followed by H.sub.min and H.sub.joint. This is expected because H.sub.CTW is an upper-bound estimate, while H.sub.joint is equal to H.sub.min with a penalty term determined by mutual information. Nonetheless, all three estimates are reasonably close, particularly on the scale in
[0146] These results suggest that HBN-PUFs described herein (e.g., as shown in
[0147] Chaos and Resilience to Machine Learning
[0148] Chaotic systems are defined by their extreme sensitivity to initial conditions. Slight perturbations to a chaotic system will lead to wildly diverging long-term behavior. For this reason many machine learning platforms have difficulty predicting the behavior of chaotic systems past a characteristic timescale known as a Lyapunov time, a result which extends to machine learning attacks on PUFs [See L. Liu et al. Lorenz chaotic system-based carbon nanotube physical unclonable functions. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 37(7):1408-1421, 2018]. The Lyapunov time of the HBN-PUF described herein (e.g., as shown in
[0149] Lyapunov Exponent
[0150] The Lyapunov exponent of a system is a measure of the rate at which two nearby points in phase space diverge. Let z(t) be the separation of two trajectories as a function of time, and let z(0) be small. Then |z(t)|≈|z(0)|exp(λt), where λ is the Lyapunov exponent. A spectrum of λ's is obtained for differently oriented initial separations. However, the maximum λ will usually dominate with time, and for this reason it is used as an indicator of chaos: if the maximum λ is positive, the trajectories will diverge exponentially, and the system is usually said to be chaotic.
[0151] Maximum Lyapunov Exponent Calculation
[0152] The maximum Lyapunov exponent is calculated by extending the method of R. Zhang et al. Boolean chaos. Physical Review E, 80(4), October 2009 to N-bit responses. Here, the Boolean distance between the time series of two N-bit responses x(t) and y(t) to the same challenge is defined by Eqn. (13):
d(t)=(1/(NT))∫.sub.t.sup.t+TΣ.sub.i=1.sup.N|x.sub.i(t′)−y.sub.i(t′)|dt′, (13)
where T is a window of fixed length, and t.sub.0 is the first time at which d≠0, i.e., t.sub.0 is the first time at which the two time series differ by at least 1 bit within a window of length T. Note that, because d(t) is a Boolean metric for separations in phase space, the slope of its logarithm over time in the linear regime is λ.
[0153] The average logarithm of the Boolean distance of each time series segment is therefore computed over all pairwise combinations of repeated responses to a given challenge, and again averaged over all challenges, to obtain <ln(d(t))>.sub.(j,k),i following the index convention described above, or <ln d> for short. By fitting <ln d> versus t to a straight line, the estimate of the maximum Lyapunov exponent λ is obtained. This is done both experimentally and by simulating responses from the same challenges using a mathematical model.
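The straight-line fit of <ln d> versus t can be sketched as follows; a minimal Python illustration with a hypothetical function name, demonstrated on synthetic exponentially growing distances:

```python
import numpy as np

def lyapunov_from_distances(t, ln_d, fit_window):
    """Estimate the maximum Lyapunov exponent as the slope of a
    straight-line fit to <ln d(t)> versus t over the linear
    (exponential-growth) regime, as described above.

    t, ln_d: 1-D arrays; fit_window: (start, stop) indices bounding
    the linear regime of the transient.
    """
    s, e = fit_window
    slope, _intercept = np.polyfit(t[s:e], ln_d[s:e], 1)
    return slope

# Synthetic check: distances growing as d(t) = d0 * exp(0.8 t)
# should recover lambda close to 0.8.
t = np.linspace(0.0, 5.0, 50)
ln_d = np.log(1e-3) + 0.8 * t
print(lyapunov_from_distances(t, ln_d, (0, 50)))
```

In the actual analysis, `ln_d` would be the Boolean-distance average <ln d>.sub.(j,k),i over repeated responses and challenges, computed from either the measured or the simulated time series.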
[0154] Mathematical Model of the PUF
[0155] The PUF dynamics are modeled using a system of coupled first order differential equations given by Eqn. (14):
τ.sub.i{dot over (x)}.sub.i(t)=−x.sub.i(t)+f(N.sub.G(i)), (14)
where x.sub.i(t) is the continuous state of node i at time t taking values between 0 and 1, τ.sub.i is the characteristic rise/fall time of this node, f is the continuous version of the 3-input XOR function, and N.sub.G(i) is the list of all nodes connected to node i, i.e., its neighborhood. Here N.sub.G(i) is restricted to node i itself and its two neighbors in the ring, and f is defined by Eqn. (15):
f(x,y,z)=θ((1+tanh(a.sub.x(x−0.5))tanh(a.sub.y(y−0.5))tanh(a.sub.z(z−0.5)))/2), (15)
where θ(w) is a threshold function representing the transition of a continuous signal to a Boolean value. θ(w) is defined by Eqn. (16):
θ(w)=(1+tanh(a.sub.w(w−0.5)))/2, (16)
where the a.sub.i's are “squeezing” parameters, here all chosen to be a=20, and τ.sub.i=0.5 was chosen for all nodes. The initial states were set to the challenge values with a perturbation chosen between [0, 0.05] to prevent identical simulations. They were then integrated numerically and decimated and Booleanized to match their experimental counterparts. The Lyapunov exponent was then calculated for each as shown in
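Under the stated parameter choices (a=20, τ.sub.i=0.5 for all nodes), the model of Eqns. (14)-(16) can be integrated with a simple Euler scheme as sketched below; a minimal Python illustration with hypothetical function names, defaulting to a zero initial perturbation for determinism (the disclosure uses a random perturbation in [0, 0.05]):

```python
import numpy as np

def theta(w, a=20.0):
    """Threshold function of Eqn. (16)."""
    return (1.0 + np.tanh(a * (w - 0.5))) / 2.0

def f_xor(x, y, z, a=20.0):
    """Continuous 3-input XOR of Eqn. (15): the product of signed
    tanh terms is rescaled into [0, 1] before thresholding."""
    s = np.tanh(a * (x - 0.5)) * np.tanh(a * (y - 0.5)) * np.tanh(a * (z - 0.5))
    return theta((1.0 + s) / 2.0, a)

def simulate_hbn(challenge, t_max=20.0, dt=0.01, tau=0.5, eps=0.0, rng=None):
    """Euler integration of the ring-network model of Eqn. (14): each
    node is driven by the continuous XOR of itself and its two ring
    neighbors.  eps sets the size of the random initial perturbation."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(challenge, float) + eps * rng.uniform(0, 1, len(challenge))
    for _ in range(int(t_max / dt)):
        fx = f_xor(np.roll(x, 1), x, np.roll(x, -1))
        x = x + (dt / tau) * (-x + fx)   # Eqn. (14), Euler step
    return (x > 0.5).astype(int)         # Booleanize the final state
```

This sketch returns only the final Booleanized state; the analysis above additionally records, decimates, and Booleanizes the intermediate time series to mirror the experimental sampling.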
[0156] Maximum Lyapunov Exponent Results
[0157] As can be seen from
[0158] Machine Learning Attack with PUFmeter
[0159] PUFmeter [See F. Ganji et al. Pufmeter a property testing tool for assessing the robustness of physically unclonable functions to machine learning attacks. IEEE Access, 7:122513-122521, 2019] is a recently designed third-party machine learning platform used to assess the security of a PUF. It uses probably approximately correct learning and k-junta functions to attempt to learn the challenge-response behavior of a given PUF, and indicates if a PUF is theoretically susceptible to various types of attacks. Because PUFmeter searches the entire valid challenge space N.sub.vc, the testing was restricted here to an N=16 node network. Furthermore, the theory behind PUFmeter is based upon single-bit responses. For this reason, PUFmeter was used to test an individual bit of the responses, as well as the XOR of the entire response string. These results are presented in Table 5.
[0160] Table 5 shows N=16 node PUF machine-learning attack results using PUFmeter, with internal parameters δ=0.01 and ε=0.05 governing the probability thresholds for the analysis. The result κ=0 indicates a failure of PUFmeter to model the HBN-PUF described herein (e.g., as shown in
TABLE-US-00005
TABLE 5
Response Bit   Noise Upper Bound   Average Sensitivity   Noise Sensitivity   κ
XOR            0.47                0.26                  0.25                0
0th            0.47                0.38                  0.22                0
[0161] Here κ is the minimum number of Boolean variables usable by PUFmeter to predict the response to a given challenge; since κ=0, PUFmeter was unable to model the behavior of the HBN-PUF. The noise upper bound, average sensitivity, and noise sensitivity are used to gauge the theoretical bounds for which types of attacks are expected to be possible. From these, PUFmeter indicated that the HBN-PUF may be susceptible to a Fourier-based attack.
[0162] Taken together with the exponential entropy scaling and chaotic nonlinear dynamics, the failure of PUFmeter to model the HBN-PUF described herein suggests that the behavior of the HBN-PUF is likely to be resilient to machine learning attacks.
[0163] Temperature Variation
[0164] Temperature variation is an important practical concern when comparing PUFs in different environmental conditions or over long operating times [See S. Mathew et al. 16.2 a 0.19 pj/b pvt-variation-tolerant hybrid physically unclonable function circuit for 100% stable secure key generation in 22 nm CMOS. In 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), pages 278-279. IEEE, 2014]. The temperature variation of the HBN-PUF described herein is assessed for two network sizes, N=16 and N=256, by loading N.sub.chips=10 DE10-Nanos into an environmental test chamber facility capable of controlling humidity and temperature conditions over a wide range.
[0165] For these tests, the temperature is first increased to 55° C. and the humidity purged to <5% to remove excess moisture and prevent condensation at lower temperatures. Next, the temperature is lowered in 10° C. increments to a final temperature T=15° C. At each temperature, the chamber is allowed to reach equilibrium as indicated by a digital display, typically within 10 minutes. Then, the boards are queried with N.sub.distinct=50 challenges repeated N.sub.repeat=50 times.
[0166] The metric Δμ(t)=μ.sub.inter(t)−μ.sub.intra(t) described above is calculated at each temperature for both network sizes. This quantity demonstrates the performance of each PUF when compared to others at the same temperature. Additionally, at each temperature, the deviation of an HBN-PUF with respect to itself at 25° C. was calculated, a quantity denoted μ.sub.intra;25° C. This measure is equivalent to considering an individual chip as consisting of different instances—one for each temperature. It is calculated at each temperature by comparing responses to those generated at 25° C., then averaging over all challenges and over all chips (individual chips exhibited similar values). These plots are presented in
[0167] As can be seen from the
[0168] From
[0169] This disclosure contemplates using Muller gates, or C-gates, to improve temperature stabilization. It is known that Muller gates, or C-gates, are useful for temperature stabilization in asynchronous PUFs [See S. Gujja. Temperature Variation Effects on Asynchronous PUF Design Using FPGAs. University of Toledo, 2014]. Accordingly, the HBN-PUF described herein may be modified to include Muller gates serving to stabilize individual bit flips associated with thermal fluctuations. Other potential temperature stabilization techniques include optimizing the layout and synthesis of PUFs on the FPGA with respect to temperature, as well as post-processing error correction schemes described herein.
CONCLUSION
[0170] The results above show that HBN-PUFs exhibit strong measures of reliability and uniqueness, with inter-device and intra-device statistics that are close to ideal and have tight distributions. This suggests HBN-PUFs are useful for device authentication purposes. Additionally, by virtue of their N-bit responses, HBN-PUFs require fewer challenges for authentication compared to single-bit response PUFs. In combination with the exponentially growing size of the challenge-space with network size, this makes HBN-PUFs attractive for both authentication and security, as it would take longer than the lifetime of the universe to query every challenge for, e.g., an N=256 node network, even at nanosecond intervals.
[0171] The results above also show that various entropy estimates suggest HBN-PUF entropy scales exponentially with network size, yielding significantly more entropy and using less hardware than other PUF designs. This means HBN-PUFs constructed from on the order of hundreds of LEs can efficiently store trillions or more independent cryptographic keys in their physical structure using a commercially available FPGA, which has memory for even larger designs than those considered here—for example, an N=1024 node network is easily realizable within memory constraints.
[0172] Furthermore, HBN-PUFs appear to exhibit chaotic dynamics and a resilience to machine-learning, in contrast to similar PUFs such as ring oscillators.
[0173] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.