Audio CAPTCHA
Notice: this is a draft of an unpublished working paper by Takuya Nishimoto.
Part of this work has been published in Japanese.
References
- Luis von Ahn, Benjamin Maurer, Colin McMillen, David Abraham, and Manuel Blum: "reCAPTCHA: Human-Based Character Recognition via Web Security Measures," Science, Vol. 321, p. 1465, 12 September 2008.
- "Inaccessibility of CAPTCHA: Alternatives to Visual Turing Tests on the Web," W3C Working Group Note, 23 November 2005. http://www.w3.org/TR/turingtest/
- Jennifer Tam, Jiri Simsa, Sean Hyde, and Luis von Ahn: "Breaking Audio CAPTCHAs," Proc. of NIPS, 2008.
Introduction
CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) are popular security techniques on the World Wide Web that prevent automated programs from abusing online services.
A typical CAPTCHA is an image containing several distorted characters that appears in a Web form.
Users are asked to transcribe the distorted characters to show the system that they are human.
Image-based CAPTCHAs, however, prevent people with visual disabilities from using the Web. Audio CAPTCHAs were created to solve this accessibility issue.
In CAPTCHA tasks, the difference in recognition performance between humans and machines can be used as a criterion of safety.
Viewed from another angle, however, the mental workload imposed on humans when listening to audio CAPTCHAs has not been investigated very much.
This paper describes a design method for CAPTCHA tasks that takes both safety and ease-of-use into account.
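One rough way to make the safety criterion above concrete is sketched below; the notation (P_H, P_M, theta_H, theta_M) is ours for illustration and is not taken from the cited works.

\[
  \Delta = P_{\mathrm{H}} - P_{\mathrm{M}}, \qquad
  \text{accept a task design if } P_{\mathrm{H}} \ge \theta_{\mathrm{H}} \ \text{and}\ P_{\mathrm{M}} \le \theta_{\mathrm{M}},
\]

where \(P_{\mathrm{H}}\) and \(P_{\mathrm{M}}\) denote the pass rates of human users and automated solvers on the same task, \(\Delta\) is the resulting safety margin, and \(\theta_{\mathrm{H}}, \theta_{\mathrm{M}}\) are thresholds chosen for ease-of-use and safety, respectively.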