With the current password-based authentication paradigm widely regarded as both loathed and cumbersome, a new study surveyed 1,000 consumers in the United States to better understand their perceptions of convenience, security, and privacy around authentication.
Of those surveyed, nearly three-quarters of respondents said it was “difficult” to keep track of their passwords and 82 percent never again wanted to use passwords.
Other security solutions, such as facial identification, also have challenges, according to the survey. For instance, half of Millennials and over two-thirds of both Gen X and Baby Boomers are reluctant to use facial scans due to privacy concerns. However, over 60 percent of those surveyed would use implicit authentication for personal identification, given its perceived convenience. Biometric identifiers, such as faces or fingerprints, are also relatively easy to copy and extremely hard to change once compromised.
Users of iPhones are much more inclined to use biometrics, with 74 percent of those respondents using biometrics to unlock their smartphones. On the other hand, only 55 percent of Android users surveyed use biometrics to unlock their smartphones.
Surprisingly, almost half of all respondents use a handwritten piece of paper to keep track of all their passwords, with one-third of all respondents never changing their passwords unless prompted to.
Other interesting facts include:
Nearly half (46 percent) of all respondents use the same password for all of their logins
60 percent of all respondents believe it is the app maker’s responsibility to keep their information safe on their smartphone
Just over one-third (34 percent) of all respondents’ accounts had, in the past, been hacked or had their passwords stolen
Almost 83 percent of Generation Z use biometric authentication to unlock their smartphone, whereas only 53 percent of Baby Boomers use biometrics
Over 91 percent of Generation Z stay logged into their social media accounts, citing convenience as the reason
Vinay Uday Prabhu and John Whaley, UnifyID, San Francisco, CA 94107
In this paper, we draw attention to the vulnerability of the motion-sensor-based gait biometric in deep-learning-based implicit-authentication solutions when attacked with adversarial perturbations obtained via the simple fast gradient sign method. We also showcase the improvement expected from incorporating these synthetically generated adversarial samples into the training data.
In recent times, password-entry-based user-authentication methods have increasingly drawn the ire of the security community, especially when it comes to their prevalence in the world of mobile telephony. Researchers recently showed that creating passwords on mobile devices not only takes significantly more time, but is also more error-prone and frustrating, and, worst of all, the created passwords are inherently weaker. One promising solution that has emerged entails implicit authentication of users based on behavioral patterns that are sensed without the active participation of the user. In this domain of implicit authentication, measurement of gait-cycle signatures, mined using the on-phone Inertial Measurement Unit – MicroElectroMechanical Systems (IMU-MEMS) sensors, such as accelerometers and gyroscopes, has emerged as an extremely promising passive biometric [4, 5, 6]. As stated in [7, 5], gait patterns can not only be collected passively, at a distance, and unobtrusively (unlike iris, face, fingerprint, or palm veins); they are also extremely difficult to replicate due to their dynamic nature.
Inspired by the immense success that Deep Learning (DL) has enjoyed in recent times across disparate domains, such as speech recognition, visual object recognition, and object detection, researchers in the field of gait-based implicit authentication are increasingly embracing DL-based machine-learning solutions [4, 5, 6, 9], thus replacing the more traditional shallow machine-learning approaches driven by hand-crafted feature engineering. Besides circumventing the oft-contentious process of hand-engineering the features, these DL-based approaches are also more robust to noise, which bodes well for implicit-authentication solutions deployed on mainstream commercial hardware. As evinced in [4, 5], these classifiers have already attained extremely high accuracy (∼96%) when trained under the k-class supervised-classification framework (where k is the number of individuals). While these impressive numbers give the impression that gait-based deep implicit authentication is ripe for immediate commercial implementation, we would like to draw the attention of the community to a crucial shortcoming. In 2014, Szegedy et al. discovered that, quite like shallow machine-learning models, state-of-the-art deep neural networks are vulnerable to adversarial examples: inputs synthetically generated by strategically introducing small perturbations, so that the resulting adversarial example differs only slightly from a correctly classified example drawn from the data distribution, yet produces a potentially controlled misclassification. To make things worse, a plethora of models with disparate architectures, trained on different subsets of the training data, have been found to misclassify the same adversarial example, uncovering fundamental blind spots in our DL frameworks.
After this discovery, several works have emerged ([12, 13]) addressing both defences against adversarial examples and novel attacks. Recently, the cleverhans software library was released. It provides standardized reference implementations of adversarial example-construction techniques and adversarial training, thereby facilitating rapid development of machine-learning models robust to adversarial attacks, as well as providing standardized benchmarks of model performance in the adversarial setting described above. In this paper, we focus on harnessing the simplest of all adversarial attack methods, the fast gradient sign method (FGSM), to attack the IDNet deep convolutional neural network (DCNN)-based gait classifier introduced in . Our main contributions are as follows: 1: This is, to the best of our knowledge, the first paper to introduce deep adversarial attacks into a non-computer-vision setting, specifically the gait-driven implicit-authentication domain. In doing so, we hope to draw the attention of the community to this crucial issue, in the hope that future publications will incorporate adversarial training as a default part of their training pipelines. 2: One of the enduring images widely circulated in the adversarial-training literature is the panda + nematode = gibbon adversarial-attack example on GoogLeNet in , which vividly showcased the potency of the blind spot. In this paper, we do the same with accelerometric data, illustrating how a small and seemingly imperceptible perturbation to the original signal can cause the DCNN to make a completely wrong inference with high probability. 3: We empirically characterize the degradation of classification accuracy under an FGSM attack, and also highlight the improvement obtained by introducing adversarial training. 4: Lastly, we have open-sourced the code here.
2. Methodology and Results
In this paper, we focus on the DCNN-based IDNet framework, which entails harnessing low-pass-filtered tri-axial accelerometer and gyroscope readings (plus the sensor-specific magnitude signals) to, first, extract the gait template, of dimension 8 × 200, which is then used to train a DCNN in a supervised-classification setting. In the original paper, the model identified users in real time by using the DCNN as a deep-feature extractor and further training an outlier detector (a one-class support vector machine, SVM), whose individual gait-wise outputs were finally combined in a framework based on Wald's probability ratio test. Here, we focus on the trained IDNet-DCNN and characterize its performance in the adversarial-training regime. To this end, we harness the FGSM introduced in , where the adversarial example x̃ for a given input sample x is generated by: x̃ = x + ε sign(∇x J(θ, x)), where θ represents the parameter vector of the DCNN, J(θ, x) is the cost function used to train the DCNN, and ∇x(·) denotes the gradient with respect to the input.
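The FGSM update above can be sketched in a few lines. This is a minimal NumPy illustration only: the random template and gradient stand in for a real 8 × 200 gait template and for ∇xJ(θ, x), which in practice would come from the trained DCNN via an autodiff framework (e.g. via cleverhans).

```python
import numpy as np

def fgsm_perturb(x, grad_x, epsilon=0.4):
    """Fast gradient sign method: x_adv = x + eps * sign(grad_x).

    `grad_x` is the gradient of the training loss J(theta, x) with
    respect to the input x; here it is faked with random numbers,
    since obtaining it requires the trained network.
    """
    return x + epsilon * np.sign(grad_x)

# Toy stand-ins for an 8 x 200 gait template and its loss gradient:
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 200))
grad = rng.standard_normal((8, 200))
x_adv = fgsm_perturb(x, grad, epsilon=0.4)
```

Note that every element of the template is nudged by exactly ±ε, so ε directly bounds the per-sample distortion of the adversarial signal.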
As seen, this method is parametrized by ε, which controls the magnitude of the inflicted perturbation. Fig. 2 showcases the true and adversarial gait-cycle signals for the accelerometer magnitude signal (given by amag(t) = √(a²x(t) + a²y(t) + a²z(t))) for ε = 0.4. Fig. 1 captures the drop in the probability of correct classification (37 classes) with increasing ε. First, we see that in the absence of any adversarial examples, we were able to attain about 96% accuracy on a 37-class classification problem, which is very close to what is claimed in . However, with even mild perturbations (ε = 0.4), we see a sharp decrease of nearly 40% in accuracy. Fig. 1 also captures the effect of including the synthetically generated adversarial examples in the training data. We see that, for ε = 0.4, we manage to achieve about 82% accuracy, a vast improvement of ∼25%.
3. Future Work
This brief paper is part of an ongoing research endeavor. We are currently extending this work to other adversarial-attack approaches, such as the Jacobian-based Saliency Map Approach (JSMA) and the Black-Box Attack (BBA) approach . We are also investigating the effect of these attacks within the deep-feature-extraction + SVM approach of , and we are comparing other architectures, such as  and .
References

[1] W. Melicher, D. Kurilova, S. M. Segreti, P. Kalvani, R. Shay, B. Ur, L. Bauer, N. Christin, L. F. Cranor, and M. L. Mazurek, "Usability and security of text passwords on mobile devices," in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 527–539, ACM, 2016.
[2] E. Shi, Y. Niu, M. Jakobsson, and R. Chow, "Implicit authentication through learning user behavior," in International Conference on Information Security, pp. 99–113, Springer, 2010.
[3] J. Perry, J. R. Davids, et al., "Gait analysis: normal and pathological function," Journal of Pediatric Orthopaedics, vol. 12, no. 6, p. 815, 1992.
[4] M. Gadaleta and M. Rossi, "IDNet: Smartphone-based gait recognition with convolutional neural networks," arXiv preprint arXiv:1606.03238, 2016.
[5] Y. Zhao and S. Zhou, "Wearable device-based gait recognition using angle embedded gait dynamic images and a convolutional neural network," Sensors, vol. 17, no. 3, p. 478, 2017.
[6] S. Yao, S. Hu, Y. Zhao, A. Zhang, and T. Abdelzaher, "DeepSense: A unified deep learning framework for time-series mobile sensing data processing," arXiv preprint arXiv:1611.01942, 2016.
[7] S. Wang and J. Liu, Biometrics on Mobile Phone. INTECH Open Access Publisher, 2011.
[8] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[9] N. Neverova, C. Wolf, G. Lacey, L. Fridman, D. Chandra, B. Barbello, and G. Taylor, "Learning human identity from motion patterns," IEEE Access, vol. 4, pp. 1810–1820, 2016.
[10] C. Nickel, C. Busch, S. Rangarajan, and M. Möbius, "Using hidden Markov models for accelerometer-based biometric gait recognition," in 2011 IEEE 7th International Colloquium on Signal Processing and its Applications (CSPA), pp. 58–63, IEEE, 2011.
[11] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
[12] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
[13] N. Papernot, I. Goodfellow, R. Sheatsley, R. Feinman, and P. McDaniel, "cleverhans v1.0.0: an adversarial machine learning library," arXiv preprint arXiv:1610.00768, 2016.
[14] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
[15] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against deep learning systems using adversarial examples," arXiv preprint arXiv:1602.02697, 2016.
Vinay Uday Prabhu and John Whaley, UnifyID, San Francisco, CA 94107
In this paper, we demonstrate a simple face spoof attack targeting the face recognition system of a widely available commercial smartphone. The goal of this paper is not to proclaim a new spoof attack but rather to draw the attention of anti-spoofing researchers to a very specific shortcoming shared by one-shot face recognition systems: enhanced vulnerability when a smiling reference image is used.
One-shot face recognition (OSFR), or single sample per person (SSPP) face recognition, is a well-studied research topic in computer vision (CV) . Solutions such as Local Binary Pattern (LBP)-based detectors , Deep Lambertian Networks (DLN) , and Deep Supervised Autoencoders (DSA)  have been proposed in recent times to make OSFR systems more robust to the changes in illumination, pose, facial expression, and occlusion that they encounter when deployed in the wild. One very interesting application of face recognition that has gathered traction lately is mobile device unlocking . One of the highlights of Android 4.0 (Ice Cream Sandwich) was the Face Unlock screen-lock option, which allowed users to unlock their devices with their faces. We note that this option is always presented to the user with a cautioning clause that typically reads: "Face recognition is less secure than pattern, PIN, or password."
The reasoning behind this is that there exists a plethora of face spoof attacks, such as print attacks, malicious identical-twin attacks, sleeping-user attacks, replay attacks, and 3D mask attacks. These attacks are all fairly successful against most commercial off-the-shelf face recognizers . This ease of spoofing has also attracted the attention of CV researchers, leading to considerable effort in developing liveness-detection anti-spoofing frameworks such as Secure-face . (See  for a survey.)
Recently, a large scale smart-phone manufacturer introduced a face recognition based phone unlocking feature. This announcement was promptly followed by media reports about users demonstrating several types of spoof attacks.
In this paper, we explore a simple print attack on this smartphone. The goal of this paper is not to proclaim a new spoof attack but rather to draw the attention of the anti-spoofing community to a very specific shortcoming of face recognition systems that we uncovered in this investigation.
2. Methodology and Results

The methodology we used entailed taking a low-quality printout of the target user's face on plain white US letter paper (8.5 by 11.0 inches) and then unlocking the device by simply exposing this printed paper in front of the camera. Given the poor quality of the printed images, we observed that this simple print attack was duly repulsed by the detector system as long as the attacker sported a neutral facial expression during the registration phase. However, when we repeated the attack in such a way that the attacker had an overtly smiling face when (s)he registered, we were able to break in successfully with high regularity.
In Figure 1, we see two examples of neutral expression faces that failed to spoof the smart-phone’s face recognition system when the registering image had a neutral facial expression. A video containing the failed spoofing attempt with a neutral facial expression can be viewed here.
In Figure 2, we see the same two subjects’ images that successfully spoofed the phone’s face recognition system when the registering (enrollment) image was overtly smiling. The face training demo videos are available here. The video of the successful spoof can be viewed here.
2.1. Motivation for the attack and discussion
It has long been well known in the computer vision community that faces displaying expressions, especially smiles, result in stronger recall and discrimination power . In fact, the authors in  termed this the happy-face advantage and showcased the variation in detection performance across facial expressions. Through experimentation, we wanted to investigate the specific one-shot classification scenario in which the registering (enrollment) face has a strong smile, and this investigation resulted in the discovery of this attack. As for defense against this attack, there are two straightforward recommendations. The first would be to simply display a message urging the user to maintain a passport-style neutral facial expression. The second would entail having a smile detector such as  as a pre-filter that would only allow smile-free images as reference images.
References

[1] T. Ahonen, A. Hadid, and M. Pietikäinen, "Face description with local binary patterns: Application to face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037–2041, 2006.
[2] W. Chen, K. Lander, and C. H. Liu, "Matching faces with emotional expressions," Frontiers in Psychology, vol. 2, p. 206, 2011.
[3] J. Galbally, S. Marcel, and J. Fierrez, "Biometric antispoofing methods: A survey in face recognition," IEEE Access, vol. 2, pp. 1530–1552, 2014.
[4] S. Gao, Y. Zhang, K. Jia, J. Lu, and Y. Zhang, "Single sample face recognition via learning deep supervised autoencoders," IEEE Transactions on Information Forensics and Security, vol. 10, no. 10, pp. 2108–2118, 2015.
[5] P. O. Glauner, "Deep convolutional neural networks for smile recognition," arXiv preprint arXiv:1508.06535, 2015.
[6] K. Patel, H. Han, and A. K. Jain, "Secure face unlock: Spoof detection on smartphones," IEEE Transactions on Information Forensics and Security, vol. 11, no. 10, pp. 2268–2283, 2016.
[7] D. F. Smith, A. Wiliem, and B. C. Lovell, "Face recognition on consumer devices: Reflections on replay attacks," IEEE Transactions on Information Forensics and Security, vol. 10, no. 4, pp. 736–745, 2015.
[8] X. Tan, S. Chen, Z.-H. Zhou, and F. Zhang, "Face recognition from a single image per person: A survey," Pattern Recognition, vol. 39, no. 9, pp. 1725–1745, 2006.
[9] Y. Tang, R. Salakhutdinov, and G. Hinton, "Deep Lambertian networks," arXiv preprint arXiv:1206.6445, 2012.
[10] Y. Yacoob and L. Davis, "Smiling faces are better for face recognition," in Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 59–64, IEEE, 2002.
In this brave new world of emerging protectionism and continued globalization, privacy and security seem at odds. In our market survey of 730 individuals and a similar number of organizations, a startling 50% of those surveyed rated security concerns at 10, "very important" (on a scale where 0 means "not at all important").
UnifyID, a service that authenticates you based on unique factors like the way you walk, type, and sit, is a revolutionary new identity platform for seamless security. Understanding the need for data privacy and ownership, product ease of use, and multifactor security, UnifyID has crafted a solution to address the pain of remembering passwords for authorized access in online and offline use cases.
In a deeper dive across 70 organizations and in 40 hours of interviews, we discovered that people care a lot about easier access at work but also at home. “It would be great if you can take stuff off my plate: several cell phones for different countries, computers, iPads, smart software in my car and home that can all actually talk to each other so that I don’t use the same password or long passwords every time I do a software update–this would save me several days every year I take to manage the access to these independent tools,” says Marco, an enterprise software COO.
In another interview, John, an undergraduate aerospace engineer, said: “Personally, I’m just excited to make new technology a part of my life. UnifyID complements my life with easier access to all of the sites I visit, and it’s exciting to see the technology of the future as part of my life. Other than protecting my identity, it’s really cool to use this technology to make a big difference in people’s lives.”
These interviews were a special treat, a chance to meet people from different cultures, backgrounds, and walks of life, and an incredibly unique opportunity to hear what specifically about security matters most in our users’ day-to-day lives. “As a small company, you have the opportunity to touch more people than Coca-Cola! A guy living in Istanbul is really interested in what you’re doing right now, 10k kilometers away. I’m sure more than 100k people are very excited about what 6 people in San Francisco are doing,” remarked a manager at Coca-Cola at the end of our call.
Our challenge is unique in that we’re not just addressing large corporations but real people including our friends and family. Though the political tides may be changing, taking back your rights to security and privacy is a paramount task we don’t take lightly. If you’d like to join us in this journey to taking down passwords, please sign up for beta or feel free to drop us a note anytime.
Now, imagine seamless authentication everywhere. Software so powerful that, using the sensors you already have on your phone, wearables, and devices at home or the office, it knows it’s you. No more 6-digit PIN or string of upper- and lowercase letters and numbers to prove that it is really you making a purchase, logging in, or entering a key swipe. Anywhere online or offline where you need to identify yourself, UnifyID promises that, based on your everyday actions and factors like how you sit, walk, and type (i.e., passive factors, also known as implicit authentication in academia), your “you-ness” can be determined with 99.999% accuracy. At times when the machine learning algorithms are unsure, an active challenge will be triggered on your nearest phone or device (e.g., fingerprint verification, among a dozen others in development).
UnifyID has been called the holy grail of authentication because the degree of security and sophistication of its machine learning efforts are unparalleled and the convenience and focus on usability makes trying the product unbelievably easy.
For now, we’re in the stage of private beta, ensuring that the flows are easy and work as expected. UnifyID launched out of stealth at TechCrunch Disrupt in September. The initial sign-on, logging out, and logging back into sites has gone through more than 25 iterations in a few weeks (thanks to the onsite testers!). We’re ready to move forward to a remote private beta and test outside the bounds of our four walls.
Join us on this journey to disrupt passwords. While “The Oracle” (our machine learning algorithms) is still under development, we are moving full speed ahead on making sure that, at this stage, the UnifyID user flows are easy for everyone to use, many times a day, across all sites.
Sign up for the UnifyID Private Beta: https://unify.id, click “Apply for Private Beta,” enter “Imagination” and why you are interested in participating in the beta in the secret handshake field.
Fast Growing Startup Uses Machine Learning to Solve Passwordless Authentication
Today, UnifyID, a service that can authenticate you based on unique factors like the way you walk, type, and sit, announced the final 16 fellows selected for its inaugural Artificial Intelligence Fellowship for the Fall of 2016. Each of the fellows has shown exemplary leadership and curiosity in making a meaningful difference in our society, and each clearly has an aptitude for making sweeping changes in this rapidly growing area of AI.
Speaking after the company’s recent launch and success at TechCrunch Disrupt, where it claimed SF Battlefield Runner-Up (2nd among 1,000 applicants worldwide), UnifyID CEO John Whaley said, “We were indeed overwhelmed by the amazing response to our first edition of the AI Fellowship and the sheer quality of applicants we received. We also take immense pride in the fact that more than 40% of our chosen cohort will be women, which further reinforces our commitment as one of the original 33 signees of the U.S. White House Tech Inclusion Pledge.”
The final 16 fellows hail from Israel, Paris, Kyoto, Bangalore, and cities across the U.S. with Ph.D., M.S., M.B.A., and B.S. degrees from MIT, Stanford, Berkeley, Harvard, Columbia, NYU-CIMS, UCLA, Wharton, among other top institutions.
Aidan Clark triple major in Math, Classical Languages and CS at UC Berkeley
Anna Venancio-Marques Data Scientist in Residence, PhD École normale supérieure
Arik Sosman Software Engineer at BitGo, 2x Apple WWDC scholar, CeBIT speaker
Baiyu Chen Convolutional Neural Network Researcher, Masters in CS at UC Berkeley
Fuxiao Xin Lead Machine Learning Scientist at GE Global Research, PhD Bioinformatics
Kathy Sohrabi VP Engineering, IoT and sensors, MBA at Wharton, PhD EE at UCLA
Kazu Komoto Chief Robotics Engineer, CNET Writer, Masters in ME at Kyoto University
Laura Florescu Co-authored Asymptopia, Mathematical Reviewer, PhD CS at NYU
Morgan Lai AI Scientist, MIT Media Lab, Co-founder/CTO, M.Eng. CS at MIT
Pushpa Raghani Post Doc Researcher at Stanford and IBM, PhD Physics at JNCASR
Raul Puri Machine Learning Development at Berkeley, BS EE/CS/Bioeng at Berkeley
Sara Hooker Data Scientist, Founder non-profit, educational access in rural Africa
Siraj Raval Data Scientist, the Bill Nye of Computer Science on YouTube
Wentao Wang Senior New Tech Integration Engineer at Tesla, PhD ME at MIT
Will Grathwohl Computer Vision Specialist, Founder/Chief Scientist, BS CSAIL at MIT
This highly selective, cross-disciplinary program covers the following areas:
Statistical Machine Learning
Security and Identity
Our UnifyID AI Fellows will choose from one of 16 well-defined projects in the broad area of applied artificial intelligence, in the context of solving the problem of seamless personal authentication. The Fellows will be led by our esteemed Fellowship Advisors, renowned experts in machine learning with PhDs from CMU, Stanford, and the University of Vienna, Austria.
Please welcome our incoming class! ✨
Read the original UnifyID AI Fellowship Announcement:
After a year and a half of intense heads down work, we are very happy and proud to finally present UnifyID to the world.
Our goal at UnifyID is to solve one of the oldest and most fundamental problems in organized society: How do I know you are who you say you are?
The Status Quo
The traditional (digital) approach to authentication is to use a password. But when you think about it, the whole notion of passwords is pretty absurd. A password is this: I have a secret, and I tell you that secret, and that’s how you know it’s me. The problem is, I’m not very good at coming up with secrets and since I can’t keep track of very many secrets, I keep using the same ones over and over again. It’s frustratingly easy to get phished and tricked into sharing my secret, and don’t even get me started on using public records like my mother’s maiden name as a shared “secret” to authenticate someone!
In the meantime, some people say to use a “password manager” to help keep track of all your passwords. Password managers are a band-aid solution. They help you manage your ever-growing list of passwords and accounts, but they don’t solve the fundamental problem that someone can impersonate you just by knowing a secret. And they are a great honeypot: when your master password is keylogged, leaked, phished, or stolen, instead of giving up just one secret, you give up all your secrets.
Another approach is to use biometrics, like your fingerprint, to identify you. Fingerprints are convenient except for the fact that 1) you leave them everywhere you go, and 2) they are very, very difficult to change when they are compromised. Other biometrics are intrusive, annoying, and flaky, and often don’t add much security at all.
A third approach is to use a device to authenticate yourself. This technology has been around for a long time but has never taken off in a mainstream way, despite massive user education campaigns and huge, well-funded industry pushes. The main reason is it adds so much friction to the user experience. You now have something extra you need to carry around. You need to read off a code and type it in before a timer expires. If you forget your device, you are locked out.
Realizing people don’t want to carry extra things around, more recently vendors have moved to “soft tokens”, which are apps on your phone that provide similar functionality and trade off security for the convenience of not having to carry around an extra physical token. Or, services will send you a text message with a code you need to type in, which is not only annoying, but also doesn’t add much security.
The common thread among all of these approaches is that 1) they are annoying, and 2) they don’t add much security. These are the two problems we are solving at UnifyID.
A few years back, Kurt and I worked on a demo where we captured encrypted packet traces, and by simply looking at the timing between the packets, we could determine the timing of a user’s keystrokes, and ultimately, what the user had typed. People were impressed by the demo but ultimately the interesting and challenging part was the fact that each individual had his or her own unique way of typing. In fact, after we saw you type around four sentences of text, we could uniquely identify you.
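The core of that demo idea can be illustrated in a couple of lines (the timestamps below are invented for illustration, not taken from our traces): the inter-arrival gaps between encrypted packets directly expose the user's inter-keystroke timing, with no decryption required.

```python
# Hypothetical packet arrival times (seconds) captured from an
# encrypted interactive session -- one packet per keystroke.
arrivals = [0.000, 0.143, 0.298, 0.512, 0.655]

# The gaps between consecutive packets approximate the user's
# inter-keystroke intervals, which form a per-user timing signature.
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
```

A classifier trained on sequences of such gaps is what makes the typing rhythm, rather than the typed content, the identifying signal.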
We began to look at other aspects we could passively detect that were a) unique per individual and b) did not require any conscious action on the part of the user. We looked at the various sensor data you could get from phones, computers, and wearables. We used signal processing and machine learning to stitch together the various noisy signals from multiple devices. It took a lot of work, but what we discovered was both shocking and heartening: It turns out people are both very predictable and very unique in their behaviors, actions, and environments. In essence, there is only one you in the world, and it was possible to authenticate you based on the sensors already around you. UnifyID was born.
The Future is Implicit
This technology is called implicit authentication. The basic idea is to be yourself, and there is enough that is unique about you that it is possible to authenticate you implicitly; that is, without you having to make any explicit action.
Implicit authentication is not new. In fact, this is how authentication worked since the prehistoric era. People used how you looked, how you moved, how you talked, your possessions, the context in which they encountered you, and how you acted to figure out who you were. Our brains are trained to identify people based on these characteristics and to pick up on subtle clues when something is off. Much like what human beings can do naturally, we discovered it is possible to train a machine learning system to do the same.
The result is truly magical. It makes security much more seamless and natural. You can be yourself, and the devices and services you interact with will naturally recognize you based on your unique characteristics. No passwords to remember, no codes to read off your phone. You are not tied to one device, or have something extra to carry around. The future is implicit.
The applications of this technology are endless, but one key area is in authenticating transactions and preventing account takeover. With our implicit authentication system, we can identify the human behind the device and give a confidence level that they are who they say they are. UnifyID also does continuous authentication, which means we can detect when changes happen and automatically challenge or log out the user.
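The fallback logic described above can be sketched as a toy decision rule (the function name, threshold, and challenge type are illustrative assumptions, not UnifyID's actual policy): pass silently when the implicit confidence score is high, and trigger an active challenge otherwise.

```python
def authenticate(confidence, threshold=0.95):
    """Toy continuous-authentication decision rule.

    `confidence` is the implicit-authentication score in [0, 1];
    the 0.95 threshold is an arbitrary illustrative value.
    """
    if confidence >= threshold:
        return "authenticated"          # seamless, no user action
    return "challenge"                  # e.g. prompt fingerprint check
```

In a continuous-authentication setting this rule would run on every new batch of sensor data, so a sudden drop in confidence (say, a different person picking up the phone) immediately triggers a challenge or logout.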
Balancing Security and User Experience
There has always been a balance between security and user experience. For too long, security solutions have sacrificed user experience in the name of security. But you can’t look at security and user experience independently. Any security solution that does not take into account the user experience will not be successful in the real world. If you make security policies too annoying or add too much friction, people will either find ways around your security policies, or will just be miserable and unproductive.
UnifyID was designed with the user experience in mind. In fact, UnifyID is truly a subtraction from the user experience. Usernames? Passwords? Security questions? Passcodes? When enough signals match, these are completely eliminated from the user experience. In the cases where they don’t match, we issue you a challenge to prove your identity. But even the challenges are designed with the user experience in mind. You can use challenge factors like fingerprints and facial recognition, among others in active development. And the more you use the system, the more the machine learning algorithms adapt to your unique behaviors and environment. UnifyID is not only more convenient, it is also more secure.
UnifyID utilizes combinations of deep neural networks, decision trees, Bayesian networks, signal processing, and semi-supervised and unsupervised machine learning. Our system is able to discover what makes each individual unique and finds correlations between multiple factors that greatly boost the accuracy. “Machine learning” is not just a buzzword for us. We have a great team of machine learning and security experts from MIT, Stanford, Berkeley, and CMU, and are working with world-class advisors in both academia and industry. I’m very proud of the team we have built so far. (And if you want to work on the next revolution in authentication and have fun doing it, we are hiring!)
One example of an implicit factor we use is how you walk. It turns out that an individual’s gait is quite particular to them, and has a number of influences including unique physiology, length of femur, muscle memory, the culture you grew up in, and more. In fact, we can identify you with only four seconds of your walking data from your phone sitting in your pocket. And that is just one of over a hundred different attributes we use to authenticate you.
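As a rough sketch of how a four-second snippet might be carved out of a raw accelerometer stream (the 50 Hz sampling rate is an assumption for illustration; actual rates vary by device):

```python
import numpy as np

def window_gait(signal, rate_hz=50, seconds=4):
    """Split a 1-D accelerometer stream into fixed-length windows.

    At an assumed 50 Hz, a 4-second window is 200 samples. Trailing
    samples that don't fill a complete window are dropped.
    """
    n = rate_hz * seconds
    usable = (len(signal) // n) * n
    return signal[:usable].reshape(-1, n)

stream = np.arange(1000, dtype=float)   # 20 s of placeholder data at 50 Hz
windows = window_gait(stream)           # five 4-second windows
```

Each such window would then be fed to the gait model as one authentication attempt, which is what makes "four seconds of walking" enough to score a user.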
Experience the Future of Authentication
At UnifyID, we believe it is time for authentication to be about you. Humans have always been considered to be the “weak link” in security. At UnifyID, we turn that around and use what is unique about each individual to enhance security. The best way to authenticate yourself is to be yourself.
UnifyID is the first holistic implicit authentication platform available on the market. We are excited to announce a limited private beta for individuals to test ride the future of authentication in their Chrome browsers and iPhones today.
Embrace your uniqueness. After all, there is no one in the world more you than you.