Smile in the face of adversity much? A print based spoofing attack

PDF of full paper: Smile in the face of adversity much? A print based spoofing attack
Full-size poster image: Smile in the face of adversity much? A print based spoofing attack

[This paper was presented on July 21, 2017 at The First International Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS 2017), in conjunction with the 2017 IEEE Conference on Computer Vision and Pattern Recognition.]

Vinay Uday Prabhu and John Whaley, UnifyID, San Francisco, CA 94107

Abstract

In this paper, we demonstrate a simple face spoof attack targeting the face recognition system of a widely available commercial smart-phone. The goal of this paper is not to proclaim a new spoof attack but rather to draw the attention of anti-spoofing researchers towards a very specific shortcoming shared by one-shot face recognition systems: enhanced vulnerability when a smiling reference image is used.

Introduction

One-shot face recognition (OSFR), or single sample per person (SSPP) face recognition, is a well-studied research topic in computer vision (CV) [8]. Solutions such as Local Binary Pattern (LBP) based detectors [1], Deep Lambertian Networks (DLN) [9], and Deep Supervised Autoencoders (DSA) [4] have been proposed in recent times to make OSFR systems more robust to the changes in illumination, pose, facial expression, and occlusion that they encounter when deployed in the wild. One very interesting application of face recognition that has gained traction lately is mobile device unlocking [6]. One of the highlights of Android 4.0 (Ice Cream Sandwich) was the Face Unlock screen-lock option, which allowed users to unlock their devices with their faces. It bears mentioning that this option is always presented to the user with a cautioning clause that typically reads: "Face recognition is less secure than pattern, PIN, or password."
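To give a concrete flavor of one such detector, below is a minimal sketch of a regional LBP face descriptor in the spirit of [1], assuming scikit-image and pre-aligned grayscale face crops; the grid size, radius, and distance measure are illustrative defaults, not the exact settings of [1].

```python
# Minimal sketch of a regional LBP face descriptor in the spirit of [1].
# Assumes pre-aligned grayscale face crops of equal size (illustrative only).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(region: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    """Uniform LBP codes of one region, pooled into a normalized histogram."""
    codes = local_binary_pattern(region, P, R, method="uniform")
    n_bins = P + 2  # P+1 uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def lbp_face_descriptor(face: np.ndarray, grid=(7, 7)) -> np.ndarray:
    """Concatenate per-region LBP histograms over a grid of face regions."""
    gh, gw = grid
    h, w = face.shape
    hists = [lbp_histogram(face[i * h // gh:(i + 1) * h // gh,
                                j * w // gw:(j + 1) * w // gw])
             for i in range(gh) for j in range(gw)]
    return np.concatenate(hists)

def chi2_distance(h1: np.ndarray, h2: np.ndarray, eps: float = 1e-10) -> float:
    """Chi-square distance; smaller means a more likely same-identity pair."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

A pair of descriptors can then be compared with the chi-square distance and thresholded to make a match/no-match decision.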

The reasoning behind this is that there exists a plethora of face spoof attacks, such as print attacks, malicious identical-twin attacks, sleeping-user attacks, replay attacks, and 3D mask attacks. These attacks are all fairly successful against most commercial off-the-shelf face recognizers [7]. The ease of mounting spoof attacks has also attracted the attention of CV researchers, leading to considerable effort in developing liveness-detection anti-spoofing frameworks such as Secure Face Unlock [6]. (See [3] for a survey.)

Recently, a large scale smart-phone manufacturer introduced a face recognition based phone unlocking feature. This announcement was promptly followed by media reports about users demonstrating several types of spoof attacks.

In this paper, we explore a simple print attack on this smart-phone. The goal of this paper is not to proclaim a new spoof attack but rather to draw the attention of the anti-spoofing community towards a very specific shortcoming of face recognition systems that we uncovered in this investigation.

Methodology and Results

Figure 1. Example of two neutral-expression faces that failed to spoof the smart-phone’s face recognition system.
Figure 2. Example of two smiling enrollment faces that successfully spoofed the smart-phone’s face recognition system.
The methodology we used entailed taking a low-quality printout of the target user’s face on plain white US Letter paper (8.5 by 11.0 inches) and then attempting to unlock the device by simply holding the printed page in front of the camera. Given the poor quality of the printed images, we observed that this simple print attack was reliably rejected by the recognition system as long as the subject maintained a neutral facial expression during the registration (enrollment) phase. However, when we repeated the attack after the subject had enrolled with an overtly smiling face, we were able to break in successfully with high regularity.

In Figure 1, we see two examples of neutral-expression faces that failed to spoof the smart-phone’s face recognition system when the registering image had a neutral facial expression. A video of the failed spoofing attempt with a neutral facial expression can be viewed here.

In Figure 2, we see the same two subjects’ images, which successfully spoofed the phone’s face recognition system when the registering (enrollment) image was overtly smiling. The face-training demo videos are available here. The video of the successful spoof can be viewed here.

Motivation for the Attack and Discussion

It has long been known in the computer vision community that faces displaying expressions, especially smiles, result in stronger recall and discrimination power [10]. In fact, the authors in [2] termed this the happy-face advantage and showcased the variation in detection performance across facial expressions. It was our experimentation with the specific one-shot classification scenario in which the enrollment face bears a strong smile that led to the discovery of this attack. As for defending against this attack, there are two straightforward recommendations. The first would be to simply display a message prompting the user to maintain a passport-style neutral facial expression during enrollment. The second would entail using a smile detector such as [5] as a pre-filter that admits only smile-free images as the reference image.
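As a rough illustration of the second recommendation, the sketch below uses OpenCV’s stock Haar cascades as a stand-in for the deep smile detector of [5]; the cascade files ship with opencv-python, but the thresholds and the single-face policy are our illustrative assumptions, not settings from the paper.

```python
# Minimal sketch of a smile pre-filter for enrollment images.
# Uses OpenCV's bundled Haar cascades; thresholds below are illustrative.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def is_acceptable_enrollment(image_path: str) -> bool:
    """Return True only if exactly one face is found and no smile is detected."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:
        return False  # require a single, unambiguous subject
    x, y, w, h = faces[0]
    mouth_region = gray[y + h // 2 : y + h, x : x + w]  # lower half of the face
    # A high minNeighbors keeps the smile detector conservative (few false alarms).
    smiles = smile_cascade.detectMultiScale(mouth_region, scaleFactor=1.7,
                                            minNeighbors=22)
    return len(smiles) == 0  # smile detected -> prompt the user to re-enroll

if __name__ == "__main__":
    print(is_acceptable_enrollment("enrollment.jpg"))  # hypothetical input file
```

In an actual enrollment flow, a rejected image would simply trigger the neutral-expression prompt from the first recommendation and ask the user to try again.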

References
[1] T. Ahonen, A. Hadid, and M. Pietikainen. Face description with local binary patterns: Application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):2037–2041, 2006.
[2] W. Chen, K. Lander, and C. H. Liu. Matching faces with emotional expressions. Frontiers in Psychology, 2:206, 2011.
[3] J. Galbally, S. Marcel, and J. Fierrez. Biometric antispoofing methods: A survey in face recognition. IEEE Access, 2:1530–1552, 2014.
[4] S. Gao, Y. Zhang, K. Jia, J. Lu, and Y. Zhang. Single sample face recognition via learning deep supervised autoencoders. IEEE Transactions on Information Forensics and Security, 10(10):2108–2118, 2015.
[5] P. O. Glauner. Deep convolutional neural networks for smile recognition. arXiv preprint arXiv:1508.06535, 2015.
[6] K. Patel, H. Han, and A. K. Jain. Secure face unlock: Spoof detection on smartphones. IEEE Transactions on Information Forensics and Security, 11(10):2268–2283, 2016.
[7] D. F. Smith, A. Wiliem, and B. C. Lovell. Face recognition on consumer devices: Reflections on replay attacks. IEEE Transactions on Information Forensics and Security, 10(4):736–745, 2015.
[8] X. Tan, S. Chen, Z.-H. Zhou, and F. Zhang. Face recognition from a single image per person: A survey. Pattern Recognition, 39(9):1725–1745, 2006.
[9] Y. Tang, R. Salakhutdinov, and G. Hinton. Deep Lambertian networks. arXiv preprint arXiv:1206.6445, 2012.
[10] Y. Yacoob and L. Davis. Smiling faces are better for face recognition. In Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pages 59–64. IEEE, 2002.

UnifyID Anoints 16 Distinguished Scientists for the AI Fellowship

Fast Growing Startup Uses Machine Learning to Solve Passwordless Authentication

Today, UnifyID, a service that can authenticate you based on unique factors like the way you walk, type, and sit, announced the final 16 fellows selected for its inaugural Artificial Intelligence Fellowship for Fall 2016. Each of the fellows has shown exemplary leadership and curiosity in making a meaningful difference in our society and clearly has an aptitude for making sweeping changes in this rapidly growing area of AI.

Speaking of the company’s recent launch and success at TechCrunch Disrupt, where it claimed SF Battlefield Runner-Up (2nd among 1,000 applicants worldwide), UnifyID CEO John Whaley said, “We were indeed overwhelmed by the amazing response to our first edition of the AI Fellowship and the sheer quality of applicants we received. We also take immense pride in the fact that more than 40% of our chosen cohort will be women, which further reinforces our commitment as one of the original 33 signees of the U.S. White House Tech Inclusion Pledge.”

The final 16 fellows hail from Israel, Paris, Kyoto, Bangalore, and cities across the U.S., with Ph.D., M.S., M.B.A., and B.S. degrees from MIT, Stanford, Berkeley, Harvard, Columbia, NYU-CIMS, UCLA, and Wharton, among other top institutions.

  • Aidan Clark, triple major in Math, Classical Languages, and CS at UC Berkeley
  • Anna Venancio-Marques, Data Scientist in Residence, PhD at École normale supérieure
  • Arik Sosman, Software Engineer at BitGo, 2x Apple WWDC scholar, CeBIT speaker
  • Baiyu Chen, Convolutional Neural Network Researcher, Masters in CS at UC Berkeley
  • Fuxiao Xin, Lead Machine Learning Scientist at GE Global Research, PhD in Bioinformatics
  • Kathy Sohrabi, VP Engineering, IoT and sensors, MBA at Wharton, PhD in EE at UCLA
  • Kazu Komoto, Chief Robotics Engineer, CNET Writer, Masters in ME at Kyoto University
  • Laura Florescu, co-author of Asymptopia, Mathematical Reviewer, PhD in CS at NYU
  • Lorraine Lin, Managing Director, MFE at Berkeley, PhD at Oxford, Masters in Design at Harvard
  • Morgan Lai, AI Scientist, MIT Media Lab, Co-founder/CTO, M.Eng. in CS at MIT
  • Pushpa Raghani, Postdoc Researcher at Stanford and IBM, PhD in Physics at JNCASR
  • Raul Puri, Machine Learning Development at Berkeley, BS in EE/CS/Bioeng at Berkeley
  • Sara Hooker, Data Scientist, Founder of a non-profit for educational access in rural Africa
  • Siraj Raval, Data Scientist, the Bill Nye of Computer Science on YouTube
  • Wentao Wang, Senior New Tech Integration Engineer at Tesla, PhD in ME at MIT
  • Will Grathwohl, Computer Vision Specialist, Founder/Chief Scientist, BS at MIT CSAIL


This highly selective, cross-disciplinary program covers the following areas:

  • Deep Learning
  • Signal Processing
  • Optimization Theory
  • Sensor Technology
  • Mobile Development
  • Statistical Machine Learning
  • Security and Identity
  • Human Behavior

Our UnifyID AI Fellows will choose from one of 16 well-defined projects in the broad area of applied artificial intelligence, in the context of solving the problem of seamless personal authentication. The Fellows will be led by our esteemed Fellowship Advisors, renowned experts in machine learning with PhDs from CMU, Stanford, and the University of Vienna, Austria.

Please welcome our incoming class! ✨


Read the original UnifyID AI Fellowship Announcement:

https://unify.id/2016/10/10/announcing-the-unifyid-ai-fellowship/


Initial Release:

http://www.prweb.com/releases/2016/unifyid/prweb13804371.htm#!

UnifyID @ HackMIT

I just got back from HackMIT, and what a crazy, intense experience it was. For those who don’t know, HackMIT is a 24-hour hackathon with over 1,000 students from all over the country and the world, all hacking on some very cool stuff. I served on the judging panel and also acted as a mentor, helping students debug issues across a wide variety of technologies: node.js/Express, CocoaPods and Swift 3, Ethereum smart contracts, Angular and JavaScript, 502 errors on HTTP requests, and a bunch of other things. A few students came up to me after they recognized UnifyID from our TechCrunch video and wanted to take photos together.

I met a lot of great students from all over the US, Europe, and South America. I also gave a tech talk where we demonstrated our implicit authentication technology in action with a volunteer from the audience. Since it was a technical crowd, I was able to dive deep into the technical aspects with some of the actual data in a Jupyter notebook. People were amazed by some of the unique aspects of human movement and how much information you can get from the accelerometer and gyroscope in your phone!

HackMIT had tons of free food, drinks, and activities. There were no soft drinks, since attendees were encouraged to avoid unhealthy drinks, yet there was plenty of Red Bull (?). And unlimited Soylent, too. Plus food and snacks at all hours of the day and night, like fresh smoothies at midnight and hot waffles with chocolate in the morning. And crazy activities like the 2am shakedown and the 7-minute workout outside in the wee hours of the morning.

Many, if not most, teams stayed up all night hacking. There was a wide variance in hacking ability, but the top teams were truly astonishing in what they were able to build in 24 hours, and it was hard to choose among the top ten.


The ultimate winner was “WindowShare”. They built an awesome cross-platform tool where you can drag any window between computers, and it seamlessly copies the program’s file and opens it on the other machine. For example, if you open a text file in TextEdit on a Mac, you can drag the window over and the contents appear in a Notepad window on the Windows machine. Likewise for images and Chrome windows/tabs. They also implemented a remote mouse, so you could move your cursor onto the other screen and control it without disturbing the original mouse. They implemented it in Java with JNI and socket communication.

The runner-up was a book-reading bot that used a phone, OCR, and text-to-speech to read physical books aloud. It also used a motorized mechanism, including a computer fan, to reliably turn pages.

We also added an honorable mention: “Fretless”, an MIT team that built a Guitar Hero-style contraption that hooks onto your violin. It takes a MIDI file and lights up where you are supposed to press your fingers so you can learn to play real songs.

All of the top ten projects were amazing and the teams got a ton done in 24 hours! To everyone who participated, I say “Hack on!”