The adoption of a PUF for key storage and generation avoids some of the shortcomings of memory-based approaches. It is generally harder to read out, predict, or derive responses from a PUF than to gain access to a nonvolatile memory. However, PUF implementations may still be subject to attacks, which can exploit either the working model of the PUF circuit or its post-processing algorithms.

Model-Based Attack

A model-building attack is accomplished by collecting a large number of CRPs and extracting from them a mathematical model of the PUF under attack, such that the attacker is able to predict the responses to arbitrarily chosen challenges [3, 14, 24, 38]. Only PUFs with a large set of CRPs, namely strong PUFs [15], are prone to such an attack.

Modeling attacks may involve machine learning algorithms, which can extract enough knowledge from a subset of CRPs to generalize, i.e., predict, the mapping mechanism behind a PUF. The extracted model is a digital clone of the PUF and violates the mathematical unclonability property (see Eq. 10.2a). To be successful, however, the machine learning tool has to determine a polynomial-sized timing model that is as accurate as possible in predicting responses.

Without loss of generality, we can consider a delay PUF. Assuming that the delay of each element along a path adds up to the total path delay, an adversary can apply a sequence of inputs (challenges) to the PUF and obtain a system of equations that expresses the linear model behind the mapping mechanism. Whenever the system turns out to be linear, solving it is relatively simple, since it is a set of linear equations in the continuous domain, and support vector machines are consequently a good candidate machine learning tool. Otherwise, a more complex set of nonlinear equations has to be considered. Their complexity stems from the fact that the delay of each element is a nonlinear function of the delay of the previous element, which in turn strongly depends on the order of the transitions converging on it. Machine learning techniques can still be used; however, the training set is not linearly separable, hence other algorithms have to be adopted.
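As a sketch of the additive linear model, the following snippet (an illustrative abstraction, not a model of any real device; all names and parameters are ours) maps an n-bit challenge to the parity feature vector commonly used for arbiter-type delay PUFs, and computes the response as the sign of a linear combination of delay weights:

```python
import numpy as np

def parity_features(challenge):
    """Map an n-bit challenge to the (n+1)-dim feature vector of the
    additive delay model: phi_i = prod_{j>=i} (1 - 2*c_j), phi_n = 1."""
    s = 1 - 2 * np.asarray(challenge)      # bits {0,1} -> signs {+1,-1}
    phi = np.cumprod(s[::-1])[::-1]        # suffix products
    return np.append(phi, 1.0)

def arbiter_response(weights, challenge):
    """Response = sign of the total delay difference, which is linear
    in the feature vector under the additive assumption."""
    return int(np.dot(weights, parity_features(challenge)) > 0)

rng = np.random.default_rng(0)
w = rng.normal(size=65)                    # hypothetical 64-stage PUF
c = rng.integers(0, 2, size=64)            # one random challenge
print(arbiter_response(w, c))              # 0 or 1
```

Because the response is a deterministic linear function of the features, collecting many challenge/response pairs yields exactly the kind of linear system described above.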

As a matter of fact, Arbiter PUF implementations exhibit additive linear behavior, which makes them vulnerable to modeling attacks [17, 32, 38].
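A minimal sketch of such a modeling attack, under the assumption that the target behaves like the additive linear model (the PUF here is simulated, and logistic regression is used as a simple stand-in for an SVM-like linear learner; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_stages, n_crps = 32, 4000

# Simulated arbiter PUF: random delay weights (illustrative only).
w_true = rng.normal(size=n_stages + 1)

def features(challenges):
    """Parity feature transform of the additive delay model (batched)."""
    s = 1 - 2 * challenges                          # bits -> {+1, -1}
    phi = np.cumprod(s[:, ::-1], axis=1)[:, ::-1]   # suffix products
    return np.hstack([phi, np.ones((len(challenges), 1))])

C = rng.integers(0, 2, size=(n_crps, n_stages))     # observed challenges
X = features(C)
y = (X @ w_true > 0).astype(float)                  # collected responses

# Attacker side: fit a linear model with full-batch gradient descent
# on the logistic loss.
Xtr, ytr, Xte, yte = X[:3000], y[:3000], X[3000:], y[3000:]
w_model = np.zeros(n_stages + 1)
for _ in range(300):
    p = 1 / (1 + np.exp(-np.clip(Xtr @ w_model, -30, 30)))
    w_model -= 0.1 * Xtr.T @ (p - ytr) / len(ytr)

acc = float(np.mean((Xte @ w_model > 0) == yte))
print(f"held-out prediction accuracy: {acc:.2f}")
```

Since the simulated CRPs are linearly separable in the feature space, a modest number of them suffices for the learned model to predict held-out responses with high accuracy, which is precisely the digital-clone scenario described above.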

Side-Channel Attack

A side-channel attack exploits the non-primary outputs of an algorithm to gather information that is not available as a primary output [19, 20]. Such information is leaked during the execution of the algorithm and can be gathered by measuring power consumption, execution time, emitted electromagnetic field, temperature, and so on. Side-channel attacks can be performed to extract PUF responses when they are post-processed to remove noise and recover quality parameters. The authors of [33, 34] successfully performed and documented two types of side-channel attacks on an RO PUF by exploiting the fuzzy extractor implemented to post-process its responses. Similarly, the authors of [11] accomplished an attack against the error-correction code of a weak PUF (a PUF that is not strong).
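The execution-time channel can be illustrated with a toy example (hypothetical code, not tied to any specific PUF or fuzzy-extractor implementation): a non-constant-time comparison reveals, through its running time, how long a prefix of the secret a guess matches, allowing the secret to be recovered one position at a time.

```python
def leaky_compare(secret, guess):
    """Toy non-constant-time comparison: returns (equal, steps), where
    steps counts loop iterations before the first mismatch -- a stand-in
    for measurable execution time."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return secret == guess, steps

secret = [1, 0, 1, 1, 0, 1]
_, t_bad  = leaky_compare(secret, [0, 0, 0, 0, 0, 0])  # mismatch at step 1
_, t_good = leaky_compare(secret, [1, 0, 1, 0, 0, 0])  # mismatch at step 4
print(t_bad, t_good)
```

The longer running time of the second guess leaks that its first three positions are correct, even though both comparisons return the same primary output (False).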