SIMUni: Sampling Impostors from Misfit Universal Background Models in accelerometric gait biometric verification

In this paper, we would like to disseminate the surprisingly positive results we obtained with a framework for generating impostor features when training user-specific models for accelerometric gait biometric verification. We propose directly sampling from a poorly fit Universal Background Gaussian Mixture Model (UBM-GMM) to generate negative-class features (which, on the face of it, seems like an unreasonable proposal) and combining these with the positive-class user-enrollment features to train local, user-specific shallow classifiers. Through empirical analysis on a state-of-the-art dataset, we show that this simple approach outperforms the classical UBM-GMM approach with or without score normalization, a result that was rather unexpected.
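To make the proposal concrete, here is a minimal, illustrative sketch of the idea in Python. It uses a deliberately misfit one-component Gaussian as the background model and synthetic data throughout; the feature dimensions, sample sizes, and the shallow classifier are all my own stand-ins, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(Z, w, b):
    # Logistic "genuine user" score, with clipped logits for stability.
    return 1.0 / (1.0 + np.exp(-np.clip(Z @ w + b, -30, 30)))

# Synthetic stand-ins for gait feature vectors: a multimodal background
# population (three other "users") and the enrolling user's features.
background = np.vstack([rng.normal(loc=m, scale=1.0, size=(200, 4))
                        for m in ([-3, 0, 1, 2], [2, 2, -1, 0], [0, -2, 3, -1])])
user_enroll = rng.normal(loc=[1.5, -0.5, 0.5, 1.0], scale=0.5, size=(100, 4))

# A deliberately misfit background model: a single diagonal Gaussian fit to
# the multimodal background data (a one-component stand-in for the UBM-GMM).
mu, sigma = background.mean(axis=0), background.std(axis=0)

# Sample synthetic impostor features directly from the misfit model.
impostors = rng.normal(mu, sigma, size=(100, 4))

# Train a shallow, user-specific classifier (logistic regression by gradient
# descent) on enrollment features (label 1) vs. sampled impostors (label 0).
X = np.vstack([user_enroll, impostors])
y = np.concatenate([np.ones(100), np.zeros(100)])
w, b = np.zeros(4), 0.0
for _ in range(2000):
    p = score(X, w, b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

# Fresh genuine samples should now score higher than fresh impostor samples.
genuine_test = rng.normal(loc=[1.5, -0.5, 0.5, 1.0], scale=0.5, size=(50, 4))
impostor_test = rng.normal(mu, sigma, size=(50, 4))
print(score(genuine_test, w, b).mean(), score(impostor_test, w, b).mean())
```

The point of the sketch is only the counterintuitive structure of the recipe: the negative class is never collected, it is sampled from a background model that fits the background data badly.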


New Study Shows Consumers Desire a World Beyond Passwords and Biometrics

With the current password-based user authentication paradigm so loathed and cumbersome, a new study surveyed 1,000 consumers in the United States to better understand their perceptions of convenience, security and privacy around authentication.

Of those surveyed, nearly three-quarters of respondents said it was “difficult” to keep track of their passwords and 82 percent never again wanted to use passwords.

Other security solutions, such as facial identification, also have challenges, according to the survey. For instance, half of Millennials and over two-thirds of both Gen X and Baby Boomers are reluctant to use facial scans due to concerns about privacy. However, over 60 percent of those surveyed would use implicit authentication for personal identification, given its perceived convenience. Biometric identifiers such as facial scans or fingerprints are also easy to copy and extremely hard to change once compromised.

Users of iPhones are much more inclined to use biometrics, with 74 percent of those respondents using biometrics to unlock their smartphones. On the other hand, only 55 percent of Android users surveyed use biometrics to unlock their smartphones.

Surprisingly, almost half of all respondents use a handwritten piece of paper to keep track of all their passwords, with one-third of all respondents never changing their passwords unless prompted to.

Other interesting facts include:

  • Nearly half (46 percent) of all respondents use the same password for all of their logins
  • 60 percent of all respondents believe it is the app maker’s responsibility to keep their information safe on their smartphone
  • Just over one-third (34 percent) of all respondents’ accounts had, in the past, been hacked or had their passwords stolen
  • Almost 83 percent of Generation Z use biometric authentication to unlock their smartphone, whereas only 53 percent of Baby Boomers use biometrics
  • Over 91 percent of Generation Z stay logged into their social media accounts, citing convenience as the reason

My Best Internship Yet

On the Monday that I started interning, I remember being one of the first couple of people in the office, showing up at 9:30 in the morning. It was a very rare occurrence that would, unfortunately, never happen again. I had a lot of firsts that day. I was an intern for the first time (note the title) and shook a co-worker’s hand for the first time. I ate food from a San Francisco food truck for the first time. I drank Soylent for the first time. I promised myself to never drink Soylent again for the first time.

Despite being an intern, I had complete control over whatever work I wanted to do, since projects were free to choose from. Most of my time was spent helping develop the iOS SDK and demo app, but I also made contributions to the backend. I learned all those convenient terminal commands that I never bothered to learn in college classes. I learned about Docker and microservice architecture, and the pain of pulling images. I learned how to write clean, production-level Swift code that is well-tested and well-tested and really well-tested. I even learned, on multiple occasions, how to create retain cycles, which is not a good thing, since they crash the app fairly quickly.

For the most part, UnifyID gave me maximum creative freedom. There was pretty much just one annoying rule — never push changes directly to dmz, our staging branch — but I made sure to break that one a few times. The dmz branch is now push-protected. Special shoutout to Micah for failing to stop me at first.

Just another day in the office

UnifyID does some really cool stuff with machine learning like identifying who you are based on your gait. After being surrounded by smart machine learning engineers and data scientists, I got more into machine learning and worked on a small side project during my free time on weekends (and during a day or two in the office, see “lax and carefree environment”). Gonna have to shamelessly promote it real quick since it’s pretty cool, check it out here.

I’m now way more motivated to take data-oriented classes and pursue research opportunities, something I never seriously thought about before. If I hadn’t interned at UnifyID, where innovative machine learning algorithms are just one git pull away, I doubt I’d be as interested in machine learning as I am now.

I’ll have some awesome memories of my time coding in the office. Feeling like a boss as the CI tests pass with green check marks. Earning Yuliia’s approval as she cautiously merges my branch into hers. Shout out to Yuliia for asking me to help out on iOS work during my first week and trusting me with a bunch of responsibilities throughout the summer.

I’ll remember the funny and good moments outside of work too. The late-night dinner conversations with Andres and Pascal. Isaac mixing up Divyansh and Vinay. Chunyu and I throwing some solid insults at Lef in Chinese. (Lef threw some insults back at us in Greek, but I’m sure they weren’t as creative).

I am incredibly grateful for this opportunity to work on cool stuff over the summer, for all the help the engineers and product managers have given me, and for the tips and tricks they’ve taught me. I’m humbled and feel very lucky to have had the opportunity to work in such an intellectual and driven, yet fun-filled, environment.

I’m sad to be leaving, but I’ll make sure to advertise UnifyID loud and proud when I’m back at UCSD — by wearing the extra company t-shirts I’ve surreptitiously accumulated over the summer.

Ready…Set…Hack!

HackMIT: Hack to the Future

In the past couple of months, UnifyID has been busy attending university hackathons at MIT and UC Berkeley. What this means is hours and hours of non-stop hacking, but it also means unlimited snacks, mini midnight workouts, and lots of young, passionate coders working to create impactful projects.

John poses with a16z representative Nigel at HackMIT.
On September 16, John Whaley flew to Cambridge, Massachusetts to attend HackMIT: Hack to the Future, where he had the opportunity to meet more than 1500 students from all different universities. Representing UnifyID, John participated in a fireside chat where he covered a variety of topics, including what it’s like to work at a startup, choosing industry versus graduate school, and building a company on machine learning. He discussed the fundamentals of entrepreneurship, team-building, fundraising, and more, as students picked his brain about technical topics and career advice. Later, John spoke more in depth during his tech talk about UnifyID and identifying individuals based on gait. Students were deeply interested in the problem UnifyID is trying to solve, as well as the impact and intellectual aspects of UnifyID’s approach to the issue.

Aside from his fireside chat and tech talk, John had the opportunity to mentor hackers in their own projects. His favorite part was meeting and interacting with all of the students, seeing their ambition, passion, and genuine interest in the projects that they were working on. He also enjoyed the intense energy in the arena, choosing to stay and mentor hackers until 3am.

After 24 hours of hard work and non-stop hacking at MIT, many groups of students presented their projects, which covered a wide range of topics from virtual reality games to homework-help mobile applications. Even though John had been to plenty of hackathons in the past, he was still amazed by the caliber and innovation of the students’ work. The first place prize ended up going to a group of students who created Pixelator, “a simple product that sharpens blurry images without a lot of code.”

Cal Hacks 4.0

A few weeks later, on October 6, Andres Castaneda crossed the Bay to attend Cal Hacks 4.0 at the UC Berkeley Stadium. With nearly 1500 students listening, he gave a presentation about UnifyID’s Android SDK and API, receiving a positive response from students who believed it was a revolutionary idea. Like John at MIT, Andres also had the opportunity to mentor up-and-coming hackers. For 36 hours, he helped students solve technical challenges as they competed for over $100,000 in prizes, including UnifyID’s contribution: a $300 Amazon gift card and a Rick and Morty card game.

Based on the level of positive impact, innovation, and technical difficulty, the winning hack for UnifyID’s prize was Safescape, a mobile application that analyzes real-time news articles and alerts people in areas of “non-safe” events. It uses UnifyID’s Android SDK to validate individuals on the application. Inspired by the recent natural and terror crises occurring globally, Safescape also provides those in danger with potential escape routes, allows them to alert others around them, and contains a simple way to contact loved ones.

Andres’ favorite part about participating in Cal Hacks was “seeing people build a product from 0 to 1 in 36 hours.” He also found it hilarious that many students brought sleeping bags and threw them on the floor for intermittent opportunities to take naps.

Andres poses with mentees and previous UnifyID interns Aditya and Michael.

UnifyID is a strong supporter of hackathons because they provide great opportunities to connect with university students. Witnessing the high caliber of work accomplished at these events, UnifyID is inspired by young hackers who are truly passionate about making an impact in the world. These students represent a diversity of talent from different schools and backgrounds and demonstrate what students are interested in nowadays. Additionally, hackathons give UnifyID the chance to give back to the community: they are not only learning opportunities for up-and-coming hackers, but they also help UnifyID understand how to cater to students’ interests and needs. After two hackathons in the span of one month, UnifyID is channeling its focus back to the day-to-day for now; however, we cannot wait for the next one!

A Unique Experience – Interning at UnifyID

I get mixed up with my friend Eric a lot. In the picture above, I’m on the left and Eric is on the right. We have similar builds, wear glasses, and although Eric will tell you he’s incomparably more handsome than me, even our close friends will accidentally call me Eric and Eric Isaac on campus at UCSD. I thought the peak of our similarities was when we both accepted full-stack internships at UnifyID in San Francisco this summer, but I realized I was mistaken. On Day 1, Eric and I had picked out the exact same outfit for our internship debut. We had black t-shirts, tan chinos, blue shoes, and even opposite desks to really sell the mirror illusion. At a company built upon faith in each individual’s uniqueness, I initially could not have felt more out of place.

Despite our many similarities, Eric and I do have our differences, and they showed in how we dealt with our first-day jitters. I smiled politely and tried not to get in anyone’s way; Eric dropped the f-bomb before lunch. Having prior experience at a company where that sort of thing wouldn’t fly, I took it upon myself to pull him aside and tell him to rein it in. I thought that I had done him a favor until later that day when a full-time engineer casually slung a string of curses at his monitor with even more gusto than Eric had. It was then that I started to realize that working at UnifyID would be unlike anything I had experienced before.

Me, excelling.
Looking back, I shouldn’t have been surprised that UnifyID gives its employees the space to be themselves. Our mission is to identify people by what makes them unique; to squash those qualities would be sacrilege. As a result, the atmosphere is lighter and the conversations more genuine.

In the three months that I spent at UnifyID, I came to realize that it is this freedom that makes the team work as well as it does. I never felt like I had to put energy into filling the role of the intern I thought I should be. Instead, I could just go in every day as myself. Once I realized this and started to embrace it, my productivity and sense of fulfillment soared. I went on to make significant contributions to our Android SDK, from redesigning our service architecture to developing a full suite of end-to-end tests. Now, at the end of my internship, I find myself a far better engineer than when I entered, lost trying to find where the time has gone, and sad to say goodbye to the friends I’ve made.

It’s difficult to describe a summer of my experiences at UnifyID in a few short paragraphs. But in a word? I would say, authentic.

A load balancer that learns, WebTorch

In my previous blog post, “How I stopped worrying and embraced docker microservices,” I talked about why microservices are the bee’s knees for scaling Machine Learning in production. A fair amount of time has passed (almost a year, whoa), and it has become clear that building Deep Learning pipelines in production is a more complex, multi-faceted problem. Yes, microservices are an amazing tool for software reuse, distributed systems design, quick failure and recovery, yada yada. But what seems very obvious now is that Machine Learning services are very stateful, and statefulness is a problem for horizontal scaling.

Context switching latency

An easy way to deal with this issue is to recognize that ML models are large and thus should not be context switched. If a model is loaded on instance A, you should try to keep it on instance A as long as possible. Nginx Plus comes with support for sticky sessions, which means that requests from the same client can always be load balanced to the same upstream, a super useful feature. That was 30% of the message of my Nginxconf 2017 talk.
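For reference, a sticky-session upstream in Nginx Plus looks roughly like this (the server addresses, cookie name, and route are illustrative; the `sticky` directive is available in the commercial Nginx Plus build only):

```nginx
upstream ml_workers {
    server 10.0.0.1:8000;   # instance A, holds a model in memory
    server 10.0.0.2:8000;   # instance B

    # Nginx Plus only: pin each client to the upstream that first served it,
    # so a loaded model never has to be "context switched" across instances.
    sticky cookie srv_id expires=1h path=/;
}

server {
    listen 80;
    location /predict {
        proxy_pass http://ml_workers;
    }
}
```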

The other 70% of my message was urging people to move AWAY from microservices for Machine Learning. In an extreme example, we announced WebTorch, a full-on Deep Learning stack on top of an HTTP server, running as a single program. For your reference, a Deep Learning stack looks like this.

Pipeline required for Deep Learning in production.
What is this data, why is it so dirty, alright now it’s clean but my Neural net still doesn’t get it, finally it gets it!

Now consider the two extremes in implementing this pipeline:

  1. Every stage is a microservice.
  2. The whole thing is one service.

Both seem equally terrible, for different reasons, and here I will explain why designing an ML pipeline is a zero-sum problem.

Communication latency

If every stage of the pipeline is a microservice, this introduces a huge communication overhead, because the very large dataframes passed between services also need to be:

  1. Serialized
  2. Compressed (+ Encrypted)
  3. Queued
  4. Transferred
  5. Dequeued
  6. Decompressed (+ Decrypted)
  7. Deserialized

What a pain, what a terrible thing to spend cycles on. All of these actions need to be repeated every time a microservice boundary is crossed. The horror, the terrible end-to-end performance horror!
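To get a feel for how expensive this is, here is a toy round trip in Python covering just the serialize/compress/decompress/deserialize steps (the payload is a stand-in for a real inter-service dataframe; encryption and queueing are omitted):

```python
import pickle
import random
import time
import zlib

# Stand-in for a large dataframe about to cross a microservice boundary:
# 100,000 rows of 10 floats.
frame = [[random.random() for _ in range(10)] for _ in range(100_000)]

t0 = time.perf_counter()
payload = zlib.compress(pickle.dumps(frame))        # serialize + compress
restored = pickle.loads(zlib.decompress(payload))   # decompress + deserialize
elapsed = time.perf_counter() - t0

# The data comes back unchanged: every cycle spent was pure transport
# overhead, paid again at every service hop.
print(f"round trip: {elapsed * 1000:.1f} ms, wire size: {len(payload)} bytes")
```

Multiply that by the number of microservice boundaries in the pipeline, per request, and the end-to-end cost adds up quickly.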

In the opposite case, you’re writing a monolith that is hard to maintain; you’re probably stuck with uncomfortable semantics for either the HTTP server or the ML part; you can’t monitor the in-between stages; etc. Like I said, writing an ML pipeline for production is a zero-sum problem.

An extreme example; All-in-one deep learning

Venn diagram of torch, nginx
Torch and Nginx have one thing in common, the amazing LuaJIT
That’s right, you’ll need to look at your use case and decide where you draw the line: where does the HTTP server stop and where does the ML back-end start? If only there were a tool that made this decision easy and allowed you to go even to the extreme case of writing a monolith, without sacrificing either HTTP performance (and pretty HTTP server semantics) or ML performance and relevance in the rapidly growing Deep Learning market. Now such a tool is here (in alpha) and it’s called WebTorch.

WebTorch is the freak child of the fastest, most stable HTTP server, nginx, and the fastest, most relevant Deep Learning framework, Torch.

Now of course that doesn’t mean WebTorch is either the best-performing HTTP server or the best-performing Deep Learning framework, but it’s at least worth a look, right? So I ran some benchmarks, loading the XOR neural network found on the Torch training page. I used another popular Lua tool, wrk, to benchmark my server, sending serialized Torch 2D DoubleTensors to it via POST requests for training. Here are the results:

Huzzah! Over 1000 req/sec on my MacBook Air, with no CUDA support and 2 Intel cores!

So there, plug that into a CUDA machine and see how much performance you can squeeze out of that bad baby. I hope I have convinced you that sometimes, mixing two great things CAN lead to something great, and that WebTorch is an ambitious and interesting open source project! Check out the GitHub repo and give it a star if you like the idea.

https://github.com/UnifyID/WebTorch

And hopefully, in due time, it will become a fast, production-level server that makes it easy for data scientists to deploy their models in the cloud (do people still say cloud?) and for DevOps people to deploy and scale them.

Possible applications of such a tool include, but are not limited to:

  • Classification of streaming data
  • Adaptive load balancing
  • DDoS attack/intrusion detection
  • Detect and adapt to upstream failures
  • Train and serve NNs
  • Use cuDNN, cuNN and cuTorch inside NGINX
  • Write GPGPU code on NGINX
  • Machine learning NGINX plugins
  • Easily serve GPGPU code
  • Rapid prototyping Deep Learning solutions

Maybe your own?

63 Days of Summer at UnifyID

60 hours after I wrapped up my second year at UC Berkeley, I walked into the UnifyID office for my first day as a software engineering intern. I was not sure what to expect, but I definitely did not think that before I left that day, I would have already contributed to the codebase! The decision to work at UnifyID was an easy one. This team was working on technology that I believed was the future of security, using implicit authentication to determine what makes you unique, ultimately eliminating passwords.

Pushing an MR to master, Day 1 done!

Throughout the summer, I worked on various projects ranging from Android development to DevOps to backend server work. One project I particularly enjoyed was the continuous integration for our Android project. It was interesting to understand how the code we wrote was built, tested, and deployed through the pipeline, and how it all tied together with Docker and Amazon Web Services. I had never worked in any of these areas before arriving at UnifyID, but with guidance from my mentor, CEO John Whaley, and the incredible support of the other engineers, I was able to directly contribute to the product. I learned something new every day and noticed my growth as a software engineer as the summer progressed.

As a female engineer, I have always noticed the underrepresentation of women in engineering, and I constantly wonder what I can do to lessen this gap. From this experience, I have learned that as long as you are passionate about your work and genuinely care about what you are doing, not much can stand in your way. To all my aspiring engineering peers: be inquisitive, be supportive, and a caring community will form.

Impromptu team outing at a SoMa neighborhood cafe!

The team really makes the office feel like a comfortable and enjoyable space to be in. The whole team is so passionate about their work and willing to take time out of their day to share and explain their projects to me. Everyone comes from such different backgrounds and each person is so interesting to talk to and learn from.

As the summer comes to an end, I would like to thank the team at UnifyID for this wonderful learning experience. Nowhere else would I have been able to discuss ideas, designs, and implementations with such qualified people while working on a groundbreaking solution to such a pervasive problem.

Recapping our Summer 2017 Internship Program

This summer we ran our largest internship program yet at UnifyID. We hosted an immensely talented group of 16 interns who joined us for 3 months, and there was never a dull day! While bringing in interns for the summer creates an energetic cadence, their fresh viewpoints also challenge us to grow as a company. 12 weeks can feel like both a sprint and a marathon, but in start-up days, even an hour can be precious.

Almost all our interns mentioned a desire to contribute to the technology of the future when asked why they chose to work at UnifyID, and we think this is a testament to the quality of our internship program—interns are able to contribute their talents in a meaningful way, whether on our machine learning, software engineering, or product teams.

Our machine learning interns focused on research, under the guidance of Vinay Prabhu. Much of their work has been on figuring out how to integrate new factors into our algorithms or develop datasets of human activity for future use. Three of our paper submissions were accepted to ICML workshops to be held in Sydney this year. This brings the total number of peer reviewed research papers accepted or published by UnifyID in the last few weeks to seven! What is especially exciting is the fact that these were the first peer-reviewed papers for our undergraduate interns in what we hope will be long and fruitful research careers.

Our software engineering interns have been integral in supporting our product sprints, which have been centered around deploying initial versions of our technology to our partners quickly. As one of our interns, Joy, said: “From mobile development to server work to DevOps, I learned an insane amount from this incredible team.”

Our product interns were involved across teams and worked on projects varying from product backlog grooming and retrospectives to beta community management to content marketing to analyst relations to technical recruiting to team building efforts. Having worked across multiple facets of the business, they were able to wear many hats and learn a great deal about product development and operations.

Aside from work, there’s no shortage of events to attend in the Bay Area, from informal ones like Corgi Con or After Dark Thursday Nights at the Exploratorium, to events focused on professional development like Internpalooza or a Q&A with Ben Horowitz of a16z, who provided his advice on how to succeed in the tech world. Our interns were also able to take part in shaping our team culture: designing custom t-shirts, going on team picnics, and participating in interoffice competitions and hackathons.

A serendipitous meet up at Norcal Corgi Con!

Though we are sad to see them go, we know that they all have a bright future ahead of them and are so grateful for the time they were able to spend at our company this summer. Thank you to the Summer 2017 class of UnifyID interns!

  • Mohannad Abu Nassar, senior, MIT, Electrical Engineering and Computer Science
  • Divyansh Agarwal, junior, UC Berkeley, Computer Science and Statistics
  • Michael Chien, sophomore, UC Berkeley, Environmental Economics and Policy
  • Pascal Gendron, 4th year, Université de Sherbrooke, Electrical Engineering
  • Peter Griggs, junior, MIT, Computer Science
  • Aditya Kotak, sophomore, UC Berkeley, Computer Science and Economics
  • Francesca Ledesma, junior, UC Berkeley, Industrial Engineering and Operations Research
  • Nikhil Mehta, senior, Purdue, Computer Science
  • Edgar Minasyan, senior, MIT, Computer Science and Math
  • Vasilis Oikonomou, junior, UC Berkeley, Computer Science and Statistics
  • Joy Tang, junior, UC Berkeley, Computer Science
  • Isaac Wang, junior, UC San Diego, Computer Science
  • Eric Zhang, junior, UC San Diego, Computer Engineering
Bay Area feels

UnifyID™ Raises $20M Series A Funding from NEA to Fuel Next Gen Authentication

Company Uses Behavioral and Environmental Factors, Not Passwords, to Identify Users

SAN FRANCISCO, CA – August 1, 2017 – UnifyID is leading the development of an implicit authentication platform that requires zero conscious user actions. The Company announced today that it has closed $20 million in Series A financing led by NEA. Its General Partners Scott Sandell and Forest Baskett will be joining UnifyID’s Board. Investors Andreessen Horowitz, Stanford StartX, and Accomplice Ventures previously invested in the company’s Seed round, bringing the total invested to $23.4 million. This latest round of funding will be used to grow the team to expand enterprise trials, accelerate research and maintain the company’s position as the leader in implicit authentication and behavioral biometrics.

“Our goal is seamless security: you can be yourself and the devices and services you interact with will naturally recognize you based on what makes you unique,” said UnifyID founder John Whaley. Since 2015, UnifyID has been using a combination of signal processing, optimization theory, deep learning, statistical machine learning, and computer science to solve one of the oldest and most fundamental problems in organized society: How do I know you are who you say you are?

To date, the company has developed the first implicit authentication platform designed for online and physical world use. Named RSA’s Unanimous Winner for 2017, UnifyID utilizes sensor data from everyday devices and machine learning to authenticate you based on unique factors like the way you walk, type, and sit. The company has also partnered with global corporations to assess the generalizability of their software across industries.

The UnifyID solution combines over 100 different attributes to achieve 99.999% accuracy without users changing their behavior or needing specific training. The key is the proliferation of sensors combined with innovations in machine learning. UnifyID is the first product to develop neural networks to run locally on the phone to process sensor data in real-time.

“A large percentage of data breaches involve weak, default or stolen passwords, and we think passwords – as we know them – need an overhaul,” said Forest Baskett, NEA General Partner. “We are excited about the world-changing potential of UnifyID’s frictionless, universal authentication solution.”

In the past six months, UnifyID received national attention by winning security innovation competitions at TechCrunch Disrupt, RSA, and SXSW and continued to grow its engineering, machine learning, and enterprise deployment talent. For career and partnership inquiries, learn more at https://unify.id.

 

ABOUT UNIFYID
Headquartered in San Francisco, UnifyID is the first implicit authentication platform. Its proprietary approach uses behavioral and environmental factors to identify users. In February of 2017, the Company was recognized as the most innovative start-up at RSA. For career and partnership inquiries, learn more at https://unify.id.

ABOUT NEA
New Enterprise Associates, Inc. (NEA) is a global venture capital firm focused on helping entrepreneurs build transformational businesses across multiple stages, sectors and geographies. With over $19 billion in cumulative committed capital since the firm’s founding in 1977, NEA invests in technology and healthcare companies at all stages in a company’s lifecycle, from seed stage through IPO. The firm’s long track record of successful investing includes more than 210 portfolio company IPOs and more than 360 acquisitions. For additional information, visit www.nea.com.

 

Contacts
Grace Chang
grace [at] unify.id

Vulnerability of deep learning-based gait biometric recognition to adversarial perturbations

PDF of full paper: Vulnerability of deep learning-based gait biometric recognition to adversarial perturbations
Full-size poster image: Vulnerability of deep learning-based gait biometric recognition to adversarial perturbations

[This paper was presented on July 21, 2017 at The First International Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS 2017), in conjunction with the 2017 IEEE Conference on Computer Vision and Pattern Recognition.]

Vinay Uday Prabhu and John Whaley, UnifyID, San Francisco, CA 94107

Abstract

In this paper, we would like to draw attention to the vulnerability of motion sensor-based gait biometrics in deep learning-based implicit authentication solutions when attacked with adversarial perturbations obtained via the simple fast-gradient sign method. We also showcase the improvement expected from incorporating these synthetically generated adversarial samples into the training data.

Introduction

In recent times, password entry-based user-authentication methods have increasingly drawn the ire of the security community [1], especially given their prevalence in the world of mobile telephony. Researchers [1] recently showed that creating passwords on mobile devices not only takes significantly more time, but is also more error-prone and frustrating, and, worst of all, the created passwords are inherently weaker. One of the promising solutions that has emerged entails implicit authentication [2] of users based on behavioral patterns that are sensed without the active participation of the user. In this domain of implicit authentication, measurement of gait-cycle [3] signatures, mined using the on-phone Inertial Measurement Unit – MicroElectroMechanical Systems (IMU-MEMS) sensors, such as accelerometers and gyroscopes, has emerged as an extremely promising passive biometric [4, 5, 6]. As stated in [7, 5], gait patterns can not only be collected passively, at a distance, and unobtrusively (unlike iris, face, fingerprint, or palm veins), they are also extremely difficult to replicate due to their dynamic nature.

Inspired by the immense success that Deep Learning (DL) has enjoyed in recent times across disparate domains, such as speech recognition, visual object recognition, and object detection [8], researchers in the field of gait-based implicit authentication are increasingly embracing DL-based machine-learning solutions [4, 5, 6, 9], thus replacing the more traditional hand-crafted-feature-engineering-driven shallow machine-learning approaches [10]. Besides circumventing the oft-contentious process of hand-engineering the features, these DL-based approaches are also more robust to noise [8], which bodes well for the implicit-authentication solutions that will be deployed on mainstream commercial hardware. As evinced in [4, 5], these classifiers have already attained extremely high accuracy (∼96%) when trained under the k-class supervised classification framework (where k pertains to the number of individuals). While these impressive numbers give the impression that gait-based deep implicit authentication is ripe for immediate commercial implementation, we would like to draw the attention of the community towards a crucial shortcoming. In 2014, Szegedy et al. [11] discovered that, quite like shallow machine-learning models, state-of-the-art deep neural networks are vulnerable to adversarial examples: inputs synthetically generated by strategically introducing small perturbations, so that the resultant adversarial example is only slightly different from a correctly classified example drawn from the data distribution, yet produces a potentially controlled misclassification. To make things worse, a plethora of models with disparate architectures, trained on different subsets of the training data, have been found to misclassify the same adversarial example, uncovering the presence of fundamental blind spots in our DL frameworks.
Following this discovery, several works have emerged ([12, 13]), addressing both defences against adversarial examples and novel attacks. Recently, the cleverhans software library [13] was released. It provides standardized reference implementations of adversarial-example-construction techniques and of adversarial training, thereby facilitating the rapid development of machine-learning models robust to adversarial attacks, as well as providing standardized benchmarks of model performance in the adversarial setting described above. In this paper, we harness the simplest of all adversarial-attack methods, i.e., the fast gradient sign method (FGSM), to attack the IDNet deep convolutional neural network (DCNN)-based gait classifier introduced in [4]. Our main contributions are as follows. (1) This is, to the best of our knowledge, the first paper to introduce deep adversarial attacks into this non-computer-vision setting, specifically the gait-driven implicit-authentication domain. In doing so, we hope to draw the attention of the community to this crucial issue, in the hope that future publications will incorporate adversarial training as a default part of their training pipelines. (2) One of the enduring images widely circulated in the adversarial-training literature is the panda + nematode = gibbon adversarial-attack example on GoogLeNet in [14], which was instrumental in vividly showcasing the potency of the blind spot. In this paper, we do the same with accelerometric data, illustrating how a small and seemingly imperceptible perturbation to the original signal can cause the DCNN to make a completely wrong inference with high probability. (3) We empirically characterize the degradation of classification accuracy under an FGSM attack, and also highlight the improvement obtained by introducing adversarial training. (4) Lastly, we have open-sourced the code here.

Figure 1. Variation in the probability of correct classification (37 classes) with and without adversarial training for varying ε.
Figure 2. The true accelerometer amplitude signal and its adversarial counterpart for ε = 0.4.

2. Methodology and Results

In this paper, we focus on the DCNN-based IDNet [4] framework, which harnesses low-pass-filtered tri-axial accelerometer and gyroscope readings (plus the sensor-specific magnitude signals) to first extract a gait template of dimension 8 × 200, which is then used to train a DCNN in a supervised-classification setting. In the original paper, the model identified users in real time by using the DCNN as a deep-feature extractor and further training an outlier detector (a one-class support vector machine, SVM), whose individual gait-wise outputs were finally combined in a Wald's probability-ratio-test-based framework. Here, we focus on the trained IDNet-DCNN and characterize its performance in the adversarial-training regime. To this end, we harness the FGSM introduced in [14], in which the adversarial example x̃ for a given input sample x is generated by x̃ = x + ε sign(∇ₓJ(θ, x)), where θ represents the parameter vector of the DCNN, J(θ, x) is the cost function used to train the DCNN, and ∇ₓ(·) denotes the gradient with respect to the input.
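The FGSM step above applies to any differentiable classifier. As a minimal NumPy sketch, the following applies it to a linear softmax model standing in for the DCNN; the model, weights, and label are illustrative assumptions, not the actual IDNet architecture:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, eps):
    """x_adv = x + eps * sign(grad_x J), J being the cross-entropy loss.
    For a linear softmax model, grad_x J = W.T @ (p - one_hot(y))."""
    p = softmax(W @ x + b)
    one_hot = np.zeros_like(p)
    one_hot[y] = 1.0
    grad_x = W.T @ (p - one_hot)
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
n_classes, n_features = 37, 8 * 200            # 37 users, flattened 8 x 200 gait template
W = rng.normal(scale=0.01, size=(n_classes, n_features))
b = np.zeros(n_classes)
x = rng.normal(size=n_features)                # one hypothetical (flattened) gait template
x_adv = fgsm(x, y=3, W=W, b=b, eps=0.4)        # each dimension perturbed by at most 0.4
```

Because the perturbation is the sign of the gradient scaled by ε, its ℓ∞ norm is exactly ε, which is what makes the adversarial signal visually close to the original in Fig. 2.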

As seen, this method is parametrized by ε, which controls the magnitude of the inflicted perturbation. Fig. 2 showcases the true and adversarial gait-cycle signals for the accelerometer magnitude signal, amag(t) = √(a²x(t) + a²y(t) + a²z(t)), for ε = 0.4. Fig. 1 captures the drop in the probability of correct classification (37 classes) with increasing ε. First, we see that in the absence of any adversarial examples, we obtain about 96% accuracy on the 37-class classification problem, which is very close to the figure claimed in [4]. However, with even mild perturbations (ε = 0.4), we see a sharp decrease of nearly 40% in accuracy. Fig. 1 also captures the effect of including the synthetically generated adversarial examples in training. We see that, for ε = 0.4, we manage to achieve about 82% accuracy, a vast improvement of ∼25%.
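The orientation-invariant magnitude signal amag(t) used above is straightforward to compute from the tri-axial readings; a small sketch with synthetic samples (the waveforms are illustrative stand-ins, not real gait data):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)                 # one 200-sample gait-template window
ax = 0.5 * np.sin(2 * np.pi * t)               # synthetic tri-axial accelerometer axes
ay = 0.3 * np.cos(2 * np.pi * t)
az = 9.81 + 0.2 * np.sin(4 * np.pi * t)        # gravity-dominated vertical axis

# amag(t) = sqrt(ax^2 + ay^2 + az^2): invariant to how the phone is oriented.
amag = np.sqrt(ax**2 + ay**2 + az**2)
```

Working with amag rather than the raw axes removes the dependence on sensor placement, which is why it is the signal plotted against its adversarial counterpart in Fig. 2.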

3. Future Work

This brief paper is part of an ongoing research endeavor. We are currently extending this work to other adversarial-attack approaches, such as the Jacobian-based Saliency Map Approach (JSMA) and the Black-Box Attack (BBA) approach [15]. We are also investigating the effect of these attacks within the deep-feature-extraction + SVM approach of [4], and comparing other architectures, such as [6] and [5].

References
[1] W. Melicher, D. Kurilova, S. M. Segreti, P. Kalvani, R. Shay, B. Ur, L. Bauer, N. Christin, L. F. Cranor, and M. L. Mazurek, “Usability and security of text passwords on mobile devices,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 527–539, ACM, 2016.
[2] E. Shi, Y. Niu, M. Jakobsson, and R. Chow, “Implicit authentication through learning user behavior,” in International Conference on Information Security, pp. 99–113, Springer, 2010.
[3] J. Perry, J. R. Davids, et al., “Gait analysis: normal and pathological function,” Journal of Pediatric Orthopaedics, vol. 12, no. 6, p. 815, 1992.
[4] M. Gadaleta and M. Rossi, “IDNet: Smartphone-based gait recognition with convolutional neural networks,” arXiv preprint arXiv:1606.03238, 2016.
[5] Y. Zhao and S. Zhou, “Wearable device-based gait recognition using angle embedded gait dynamic images and a convolutional neural network,” Sensors, vol. 17, no. 3, p. 478, 2017.
[6] S. Yao, S. Hu, Y. Zhao, A. Zhang, and T. Abdelzaher, “DeepSense: A unified deep learning framework for time-series mobile sensing data processing,” arXiv preprint arXiv:1611.01942, 2016.
[7] S. Wang and J. Liu, Biometrics on mobile phone. INTECH Open Access Publisher, 2011.
[8] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[9] N. Neverova, C. Wolf, G. Lacey, L. Fridman, D. Chandra, B. Barbello, and G. Taylor, “Learning human identity from motion patterns,” IEEE Access, vol. 4, pp. 1810–1820, 2016.
[10] C. Nickel, C. Busch, S. Rangarajan, and M. Möbius, “Using hidden Markov models for accelerometer-based biometric gait recognition,” in Signal Processing and its Applications (CSPA), 2011 IEEE 7th International Colloquium on, pp. 58–63, IEEE, 2011.
[11] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.
[12] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
[13] N. Papernot, I. Goodfellow, R. Sheatsley, R. Feinman, and P. McDaniel, “cleverhans v1.0.0: an adversarial machine learning library,” arXiv preprint arXiv:1610.00768, 2016.
[14] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
[15] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against deep learning systems using adversarial examples,” arXiv preprint arXiv:1602.02697, 2016.