CryptoDB
Loud and Clear: Human-Verifiable Authentication Based on Audio
Authors: Michael T. Goodrich, Michael Sirivianos, John Solis, Gene Tsudik, Ersin Uzun
Download: http://eprint.iacr.org/2005/428
Abstract: Secure pairing of electronic devices that lack any previous association is a challenging problem that has been considered in many contexts and in various flavors. In this paper, we investigate an alternative and complementary approach: the use of the audio channel for human-assisted authentication of previously unassociated devices. We develop and evaluate a system we call Loud-and-Clear (L&C) which places very little demand on the human user. L&C uses a text-to-speech (TTS) engine to vocalize a robust-sounding and syntactically correct (English-like) sentence derived from the hash of a device's public key. By coupling vocalization on one device with the display of the same information on another device, we demonstrate that L&C is suitable for secure device pairing (e.g., key exchange) and similar tasks. We also describe several common use cases, provide performance data for our prototype implementation, and discuss the security properties of L&C.
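The core idea in the abstract can be sketched in a few lines: both devices hash the public key and deterministically map hash bytes to words, so the sentence spoken by one device should match the sentence displayed on the other. This is only an illustrative sketch with a tiny made-up vocabulary and SHA-256; the actual L&C system uses a MadLib-style sentence generator whose grammar and word lists are described in the paper, not here.

```python
import hashlib

# Hypothetical word lists for illustration only; the real L&C
# vocabulary and grammar are much larger and are not reproduced here.
ADJECTIVES = ["loud", "clear", "quiet", "brave", "tiny", "happy", "green", "swift"]
NOUNS = ["river", "falcon", "piano", "garden", "castle", "engine", "lantern", "meadow"]
VERBS = ["praises", "follows", "ignores", "paints", "guards", "repairs", "admires", "chases"]


def sentence_from_key(public_key_bytes: bytes) -> str:
    """Derive a short English-like sentence from the hash of a public key.

    Both devices run the same derivation; the user verifies that the
    sentence vocalized by one device matches the one displayed on the other.
    """
    digest = hashlib.sha256(public_key_bytes).digest()
    # Use successive hash bytes to index into the word lists.
    adj1 = ADJECTIVES[digest[0] % len(ADJECTIVES)]
    noun1 = NOUNS[digest[1] % len(NOUNS)]
    verb = VERBS[digest[2] % len(VERBS)]
    adj2 = ADJECTIVES[digest[3] % len(ADJECTIVES)]
    noun2 = NOUNS[digest[4] % len(NOUNS)]
    return f"The {adj1} {noun1} {verb} the {adj2} {noun2}."
```

Because the derivation is deterministic, matching sentences imply (with high probability, given a sufficiently large vocabulary) that both devices hold the same public key, which is what defeats a man-in-the-middle substituting its own key.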
BibTeX
@misc{eprint-2005-12761,
  title = {Loud and Clear: Human-Verifiable Authentication Based on Audio},
  author = {Michael T. Goodrich and Michael Sirivianos and John Solis and Gene Tsudik and Ersin Uzun},
  year = {2005},
  howpublished = {IACR Cryptology ePrint Archive, Report 2005/428},
  url = {http://eprint.iacr.org/2005/428},
  keywords = {Human-assisted authentication, Man-in-the-middle attack, Audio, Text-to-speech, Public key, Key agreement, Personal device, Wireless networks},
  note = {ICDCS 2006. Received 23 Nov 2005, last revised 28 Jun 2006},
}