A PUF [Pappu01] is a physical device that, when stimulated, almost magically produces an "unpredictable" output.
PUFs do not keep state, and do not have secrets to be protected (in contrast with tamper-proof hardware tokens, for example).
As such, they are naturally very appealing for cryptographic applications.
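To fix intuition, an ideal PUF can be thought of as a fixed random function from challenges to responses: the same challenge always yields the same response, but responses are unpredictable before the first query. The sketch below is a toy model under that idealization (the lookup table only simulates the fixed physical function; it is not state the device actually keeps, and real PUFs are additionally noisy):

```python
import os

class IdealPUF:
    """Toy model of an ideal PUF: a fixed random function from
    challenges to responses, sampled lazily on first query.

    Real PUFs derive this behavior from uncontrollable physical
    manufacturing variation; the internal table here merely simulates
    that fixed function and does not correspond to stored secrets.
    """

    def __init__(self, response_len: int = 32):
        self.response_len = response_len
        self._table = {}  # challenge -> response, sampled on demand

    def evaluate(self, challenge: bytes) -> bytes:
        # Same challenge, same response; fresh challenges get a
        # uniformly random response, so outputs are unpredictable
        # to anyone who has not queried the device.
        if challenge not in self._table:
            self._table[challenge] = os.urandom(self.response_len)
        return self._table[challenge]
```

This idealization is exactly what makes PUFs attractive cryptographically: the response to an unqueried challenge is information-theoretically hidden, with no stored key to extract.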
PUF-based solutions have already been proposed for authentication schemes, key storage and leakage-resilient encryption.
In such solutions it is typically assumed that the parties running the protocol are both honest and want to defend
against an external adversary tampering with the communication channel, or with the machines of the honest parties.
Very recently, PUFs have been proposed for securely realizing tasks in which the parties running the protocol
are mutually distrustful (i.e., secure computation). Namely, some of the parties might not follow the protocol specification honestly.
Such protocols exploit the properties of well-formed PUFs to obtain very fast and unconditionally secure protocols
which are known to be impossible to achieve in the plain model (i.e., without hardware assumptions).
However, although such protocols claim security in this "malicious setting" where parties are mutually distrustful, their security rests on the assumption that all the PUFs used in the protocol are well formed, i.e., generated following the honest generation procedure.
In this talk we argue that this assumption might be a bit too optimistic, and that a perhaps more natural approach in this setting is
to assume that only honest parties use well-formed PUFs, while malicious parties can play with arbitrarily malicious hardware (as long as it "looks like" a PUF).
We will introduce the "malicious PUF model" and show protocols which are secure (even unconditionally) in this model.
Joint work with Rafail Ostrovsky (UCLA), Ivan Visconti (University of Salerno), Akshay Wadia (UCLA).