Recapping our Summer 2017 Internship Program

This summer we ran our largest internship program yet at UnifyID. We hosted an immensely talented group of 16 interns who joined us for 3 months, and there was never a dull day! Bringing in interns for the summer creates an energetic cadence, and their fresh viewpoints challenge us to grow as a company. Twelve weeks can feel like both a sprint and a marathon, but in startup time, every hour is precious.

Almost all our interns mentioned a desire to contribute to the technology of the future when asked why they chose to work at UnifyID, and we think this is a testament to the quality of our internship program—interns are able to contribute their talents in a meaningful way, whether on our machine learning, software engineering, or product teams.

Our machine learning interns focused on research under the guidance of Vinay Prabhu. Much of their work centered on integrating new factors into our algorithms and developing datasets of human activity for future use. Three of our paper submissions were accepted to ICML workshops to be held in Sydney this year, bringing the total number of peer-reviewed research papers accepted or published by UnifyID in the last few weeks to seven! What is especially exciting is that these were the first peer-reviewed papers for our undergraduate interns, in what we hope will be long and fruitful research careers.

Our software engineering interns have been integral in supporting our product sprints, which have been centered around deploying initial versions of our technology to our partners quickly. As one of our interns, Joy, said: “From mobile development to server work to DevOps, I learned an insane amount from this incredible team.”

Our product interns were involved across teams and worked on projects ranging from product backlog grooming and retrospectives to beta community management, content marketing, analyst relations, technical recruiting, and team-building efforts. Having worked across multiple facets of the business, they wore many hats and learned a great deal about product development and operations.

Aside from work, there’s no shortage of events to attend in the Bay Area, from informal ones like Corgi Con or After Dark Thursday Nights at the Exploratorium, to events focused on professional development like Internpalooza or a Q&A with Ben Horowitz of a16z, who provided his advice on how to succeed in the tech world. Our interns were also able to take part in shaping our team culture: designing custom t-shirts, going on team picnics, and participating in interoffice competitions and hackathons.

A serendipitous meet up at Norcal Corgi Con!

Though we are sad to see them go, we know that they all have a bright future ahead of them and are so grateful for the time they were able to spend at our company this summer. Thank you to the Summer 2017 class of UnifyID interns!

  • Mohannad Abu Nassar, senior, MIT, Electrical Engineering and Computer Science
  • Divyansh Agarwal, junior, UC Berkeley, Computer Science and Statistics
  • Michael Chien, sophomore, UC Berkeley, Environmental Economics and Policy
  • Pascal Gendron, 4th year, Université de Sherbrooke, Electrical Engineering
  • Peter Griggs, junior, MIT, Computer Science
  • Aditya Kotak, sophomore, UC Berkeley, Computer Science and Economics
  • Francesca Ledesma, junior, UC Berkeley, Industrial Engineering and Operations Research
  • Nikhil Mehta, senior, Purdue, Computer Science
  • Edgar Minasyan, senior, MIT, Computer Science and Math
  • Vasilis Oikonomou, junior, UC Berkeley, Computer Science and Statistics
  • Joy Tang, junior, UC Berkeley, Computer Science
  • Issac Wang, junior, UC San Diego, Computer Science
  • Eric Zhang, junior, UC San Diego, Computer Engineering
Bay Area feels

UnifyID™ Raises $20M Series A Funding from NEA to Fuel Next Gen Authentication

Company Uses Behavioral and Environmental Factors, Not Passwords, to Identify Users

SAN FRANCISCO, CA – August 1, 2017 – UnifyID is leading the development of an implicit authentication platform that requires zero conscious user actions. The Company announced today that it has closed $20 million in Series A financing led by NEA, whose General Partners Scott Sandell and Forest Baskett will join UnifyID’s Board. Andreessen Horowitz, Stanford StartX, and Accomplice Ventures previously invested in the company’s Seed round, bringing the total raised to $23.4 million. This latest round of funding will be used to grow the team, expand enterprise trials, accelerate research, and maintain the company’s position as the leader in implicit authentication and behavioral biometrics.

“Our goal is seamless security: you can be yourself and the devices and services you interact with will naturally recognize you based on what makes you unique,” said UnifyID founder John Whaley. Since 2015, UnifyID has been using a combination of signal processing, optimization theory, deep learning, statistical machine learning, and computer science to solve one of the oldest and most fundamental problems in organized society: How do I know you are who you say you are?

To date, the company has developed the first implicit authentication platform designed for both online and physical-world use. Named the unanimous winner of RSA’s 2017 Innovation Sandbox, UnifyID uses sensor data from everyday devices and machine learning to authenticate you based on unique factors like the way you walk, type, and sit. The company has also partnered with global corporations to assess how well its software generalizes across industries.

The UnifyID solution combines over 100 different attributes to achieve 99.999% accuracy without requiring users to change their behavior or undergo specific training. The key is the proliferation of sensors combined with innovations in machine learning. UnifyID is the first product to run neural networks locally on the phone to process sensor data in real time.

“A large percentage of data breaches involve weak, default or stolen passwords, and we think passwords – as we know them – need an overhaul,” said Forest Baskett, NEA General Partner. “We are excited about the world-changing potential of UnifyID’s frictionless, universal authentication solution.”

In the past six months, UnifyID received national attention by winning security innovation competitions at TechCrunch Disrupt, RSA, and SXSW and continued to grow its engineering, machine learning, and enterprise deployment talent. For career and partnership inquiries, learn more at https://unify.id.


ABOUT UNIFYID
Headquartered in San Francisco, UnifyID is the first implicit authentication platform. Its proprietary approach uses behavioral and environmental factors to identify users. In February of 2017, the Company was recognized as the most innovative start-up at RSA. For career and partnership inquiries, learn more at https://unify.id.

ABOUT NEA
New Enterprise Associates, Inc. (NEA) is a global venture capital firm focused on helping entrepreneurs build transformational businesses across multiple stages, sectors and geographies. With over $19 billion in cumulative committed capital since the firm’s founding in 1977, NEA invests in technology and healthcare companies at all stages in a company’s lifecycle, from seed stage through IPO. The firm’s long track record of successful investing includes more than 210 portfolio company IPOs and more than 360 acquisitions. For additional information, visit www.nea.com.


Contacts
Grace Chang
grace [at] unify.id

Vulnerability of deep learning-based gait biometric recognition to adversarial perturbations

PDF of full paper: Vulnerability of deep learning-based gait biometric recognition to adversarial perturbations
Full-size poster image: Vulnerability of deep learning-based gait biometric recognition to adversarial perturbations

[This paper was presented on July 21, 2017 at The First International Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS 2017), in conjunction with the 2017 IEEE Conference on Computer Vision and Pattern Recognition.]

Vinay Uday Prabhu and John Whaley, UnifyID, San Francisco, CA 94107

Abstract

In this paper, we draw attention to the vulnerability of the motion sensor-based gait biometric in deep learning-based implicit authentication solutions when attacked with adversarial perturbations obtained via the simple fast-gradient sign method. We also showcase the improvement expected from incorporating these synthetically generated adversarial samples into the training data.

Introduction

In recent times, password entry-based user-authentication methods have increasingly drawn the ire of the security community [1], especially when it comes to their prevalence in the world of mobile telephony. Researchers [1] recently showed that creating passwords on mobile devices not only takes significantly more time but is also more error-prone and frustrating, and, worst of all, the created passwords are inherently weaker. One of the promising solutions that has emerged entails implicit authentication [2] of users based on behavioral patterns that are sensed without the active participation of the user. In this domain of implicit authentication, measurement of gait-cycle [3] signatures, mined using the on-phone Inertial Measurement Unit – MicroElectroMechanical Systems (IMU-MEMS) sensors, such as accelerometers and gyroscopes, has emerged as an extremely promising passive biometric [4, 5, 6]. As stated in [7, 5], not only can gait patterns be collected passively, at a distance, and unobtrusively (unlike iris, face, fingerprint, or palm veins), they are also extremely difficult to replicate due to their dynamic nature.

Inspired by the immense success that Deep Learning (DL) has enjoyed in recent times across disparate domains, such as speech recognition, visual object recognition, and object detection [8], researchers in the field of gait-based implicit authentication are increasingly embracing DL-based machine-learning solutions [4, 5, 6, 9], replacing the more traditional shallow machine-learning approaches driven by hand-crafted feature engineering [10]. Besides circumventing the oft-contentious process of hand-engineering features, these DL-based approaches are also more robust to noise [8], which bodes well for implicit-authentication solutions deployed on mainstream commercial hardware. As evinced in [4, 5], these classifiers have already attained extremely high accuracy (∼96%) when trained under the k-class supervised classification framework (where k is the number of individuals). While these impressive numbers give the impression that gait-based deep implicit authentication is ripe for immediate commercial implementation, we would like to draw the attention of the community towards a crucial shortcoming.

In 2014, Szegedy et al. [11] discovered that, quite like shallow machine-learning models, state-of-the-art deep neural networks are vulnerable to adversarial examples: inputs synthetically generated by strategically introducing small perturbations that leave the adversarial example only slightly different from a correctly classified example drawn from the data distribution, yet cause a potentially controlled misclassification. To make things worse, a plethora of models with disparate architectures, trained on different subsets of the training data, have been found to misclassify the same adversarial example, uncovering the presence of fundamental blind spots in our DL frameworks. Since this discovery, several works have emerged ([12, 13]) addressing both defenses against adversarial examples and novel attacks. Recently, the cleverhans software library [13] was released. It provides standardized reference implementations of adversarial example-construction techniques and adversarial training, thereby facilitating rapid development of machine-learning models that are robust to adversarial attacks, as well as providing standardized benchmarks of model performance in the adversarial setting described above. In this paper, we focus on harnessing the simplest of all adversarial attack methods, the fast gradient sign method (FGSM), to attack the IDNet deep convolutional neural network (DCNN)-based gait classifier introduced in [4]. Our main contributions are as follows:

1. This is, to the best of our knowledge, the first paper to introduce deep adversarial attacks into this non-computer-vision setting, specifically the gait-driven implicit-authentication domain. In doing so, we hope to draw the attention of the community to this crucial issue, in the hope that further publications will incorporate adversarial training as a default part of their training pipelines.
2. One of the enduring images widely circulated in the adversarial-training literature is the panda + nematode = gibbon adversarial-attack example on GoogLeNet in [14], which was instrumental in vividly showcasing the potency of the blind spot. In this paper, we do the same with accelerometric data, illustrating how a small and seemingly imperceptible perturbation to the original signal can cause the DCNN to make a completely wrong inference with high probability.
3. We empirically characterize the degradation of classification accuracy when subjected to an FGSM attack, and also highlight the improvement obtained upon introducing adversarial training.
4. Lastly, we have open-sourced the code here.

Figure 1. Variation in the probability of correct classification (37 classes) with and without adversarial training for varying ε.
Figure 2. The true accelerometer amplitude signal and its adversarial counterpart for ε = 0.4.

2. Methodology and Results

In this paper, we focus on the DCNN-based IDNet [4] framework, which harnesses low-pass-filtered tri-axial accelerometer and gyroscope readings (plus the sensor-specific magnitude signals) to first extract a gait template of dimension 8 × 200, which is then used to train a DCNN in a supervised-classification setting. In the original paper, the model identified users in real time by using the DCNN as a deep-feature extractor and further training an outlier detector (a one-class support vector machine, SVM), whose individual gait-wise outputs were finally combined in a framework based on Wald’s probability-ratio test. Here, we focus on the trained IDNet-DCNN and characterize its performance in the adversarial-training regime. To this end, we harness the FGSM introduced in [14], where the adversarial example x̃ for a given input sample x is generated by x̃ = x + ε · sign(∇x J(θ, x)), where θ represents the parameter vector of the DCNN, J(θ, x) is the cost function used to train the DCNN, and ∇x(·) denotes the gradient with respect to x.
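As a concrete illustration (not the authors’ released code), the FGSM step above could be sketched as follows in PyTorch; the model object, the 8 × 200 input shape, and the use of cross-entropy as the cost J(θ, x) are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """Return x_adv = x + epsilon * sign(grad_x J(theta, x)).

    Assumptions (not taken from [4]): `model` is a PyTorch module mapping a
    batch of 8 x 200 gait templates to class logits, and J is cross-entropy.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # J(theta, x)
    loss.backward()                           # fills x_adv.grad with grad_x J
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()

# Hypothetical usage on a batch of gait templates of shape (batch, 1, 8, 200):
# x_adv = fgsm_perturb(model, x_batch, y_batch, epsilon=0.4)
```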

As seen, this method is parametrized by ε, which controls the magnitude of the inflicted perturbation. Fig. 2 showcases the true and adversarial gait-cycle signals for the accelerometer magnitude signal, given by a_mag(t) = √(a_x²(t) + a_y²(t) + a_z²(t)), for ε = 0.4. Fig. 1 captures the drop in the probability of correct classification (37 classes) with increasing ε. First, we see that in the absence of any adversarial examples, we were able to obtain about 96% accuracy on a 37-class classification problem, which is very close to what is claimed in [4]. However, with even mild perturbations (ε = 0.4), we see a sharp decrease of nearly 40% in accuracy. Fig. 1 also captures the effect of including the synthetically generated adversarial examples in the training data. We see that, for ε = 0.4, we manage to achieve about 82% accuracy, an improvement of roughly 25 percentage points.
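For readers who want to reproduce the adversarial-training comparison in spirit, a minimal sketch is given below; it reuses the hypothetical fgsm_perturb helper above, and the even clean-versus-adversarial batch split and the NumPy magnitude helper are illustrative assumptions rather than details reported in the paper.

```python
import numpy as np
import torch
import torch.nn.functional as F

def accel_magnitude(ax, ay, az):
    """Compute a_mag(t) = sqrt(a_x(t)^2 + a_y(t)^2 + a_z(t)^2) from tri-axial samples."""
    return np.sqrt(ax ** 2 + ay ** 2 + az ** 2)

def adversarial_training_step(model, optimizer, x, y, epsilon=0.4):
    """One training step that augments each clean batch with its FGSM counterpart.

    Sketch only: the 50/50 clean/adversarial composition is an assumption,
    not a detail from the paper.
    """
    x_adv = fgsm_perturb(model, x, y, epsilon)   # synthetic adversarial samples
    x_mix = torch.cat([x, x_adv], dim=0)
    y_mix = torch.cat([y, y], dim=0)

    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_mix), y_mix)
    loss.backward()
    optimizer.step()
    return loss.item()
```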

3. Future Work

This brief paper is part of an ongoing research endeavor. We are currently extending this work to other adversarial-attack approaches, such as the Jacobian-based Saliency Map Approach (JSMA) and the Black-Box Attack (BBA) approach [15]. We are also investigating the effect of these attacks within the deep-feature-extraction + SVM approach of [4], and we are comparing other architectures, such as [6] and [5].

References
[1] W. Melicher, D. Kurilova, S. M. Segreti, P. Kalvani, R. Shay, B. Ur, L. Bauer, N. Christin, L. F. Cranor, and M. L. Mazurek, “Usability and security of text passwords on mobile devices,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 527–539, ACM, 2016.
[2] E. Shi, Y. Niu, M. Jakobsson, and R. Chow, “Implicit authentication through learning user behavior,” in International Conference on Information Security, pp. 99–113, Springer, 2010.
[3] J. Perry, J. R. Davids, et al., “Gait analysis: normal and pathological function,” Journal of Pediatric Orthopaedics, vol. 12, no. 6, p. 815, 1992.
[4] M. Gadaleta and M. Rossi, “IDNet: Smartphone-based gait recognition with convolutional neural networks,” arXiv preprint arXiv:1606.03238, 2016.
[5] Y. Zhao and S. Zhou, “Wearable device-based gait recognition using angle embedded gait dynamic images and a convolutional neural network,” Sensors, vol. 17, no. 3, p. 478, 2017.
[6] S. Yao, S. Hu, Y. Zhao, A. Zhang, and T. Abdelzaher, “DeepSense: A unified deep learning framework for time-series mobile sensing data processing,” arXiv preprint arXiv:1611.01942, 2016.
[7] S. Wang and J. Liu, Biometrics on Mobile Phone. INTECH Open Access Publisher, 2011.
[8] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[9] N. Neverova, C. Wolf, G. Lacey, L. Fridman, D. Chandra, B. Barbello, and G. Taylor, “Learning human identity from motion patterns,” IEEE Access, vol. 4, pp. 1810–1820, 2016.
[10] C. Nickel, C. Busch, S. Rangarajan, and M. Möbius, “Using hidden Markov models for accelerometer-based biometric gait recognition,” in Signal Processing and its Applications (CSPA), 2011 IEEE 7th International Colloquium on, pp. 58–63, IEEE, 2011.
[11] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.
[12] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
[13] N. Papernot, I. Goodfellow, R. Sheatsley, R. Feinman, and P. McDaniel, “cleverhans v1.0.0: an adversarial machine learning library,” arXiv preprint arXiv:1610.00768, 2016.
[14] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
[15] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against deep learning systems using adversarial examples,” arXiv preprint arXiv:1602.02697, 2016.

Smile in the face of adversity much? A print based spoofing attack

PDF of full paper: Smile in the face of adversity much? A print based spoofing attack
Full-size poster image: Smile in the face of adversity much? A print based spoofing attack

[This paper was presented on July 21, 2017 at The First International Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS 2017), in conjunction with the 2017 IEEE Conference on Computer Vision and Pattern Recognition.]

Vinay Uday Prabhu and John Whaley, UnifyID, San Francisco, CA 94107

Abstract

In this paper, we demonstrate a simple face spoof attack targeting the face recognition system of a widely available commercial smartphone. The goal of this paper is not to proclaim a new spoof attack but rather to draw the attention of anti-spoofing researchers to a very specific shortcoming shared by one-shot face recognition systems: enhanced vulnerability when a smiling reference image is used.

Introduction

One-shot face recognition (OSFR), or single sample per person (SSPP) face recognition, is a well-studied research topic in computer vision (CV) [8]. Solutions such as Local Binary Pattern (LBP) based detectors [1], Deep Lambertian Networks (DLN) [9], and Deep Supervised Autoencoders (DSA) [4] have been proposed in recent times to make OSFR systems more robust to the changes in illumination, pose, facial expression, and occlusion that they encounter when deployed in the wild. One very interesting application of face recognition that has gained traction lately is mobile device unlocking [6]. One of the highlights of Android 4.0 (Ice Cream Sandwich) was the Face Unlock screen-lock option, which allowed users to unlock their devices with their faces. It is worth emphasizing that this option is always presented to the user with a cautionary clause that typically reads: “Face recognition is less secure than pattern, PIN, or password.”

The reasoning behind this is that there exists a plethora of face spoof attacks, such as print attacks, malicious identical-twin attacks, sleeping-user attacks, replay attacks, and 3D mask attacks, all of which are fairly successful against most commercial off-the-shelf face recognizers [7]. The ease of these spoof attacks has also attracted the attention of CV researchers and has led to considerable effort in developing liveness-detection anti-spoofing frameworks such as Secure-face [6]. (See [3] for a survey.)

Recently, a large smartphone manufacturer introduced a face recognition-based phone unlocking feature. This announcement was promptly followed by media reports of users demonstrating several types of spoof attacks.

In this paper, we explore a simple print attack on this smartphone. The goal of this paper is not to proclaim a new spoof attack but rather to draw the attention of the anti-spoofing community to a very specific shortcoming shared by face recognition systems that we uncovered in this investigation.

2. Methodology and Results

Figure 1. Example of two neutral expression faces that failed to spoof the smart-phone’s face recognition system.
Figure 2. Example of 2 smiling registering faces that successfully spoofed the smart-phone’s face recognition system.
The methodology we used entailed taking a low-quality printout of the target user’s face on plain white US letter paper (8.5 by 11.0 inches) and then attempting to unlock the device by simply holding this printed paper in front of the camera. Given the poor quality of the printed images, we observed that this simple print attack was duly repulsed by the detection system as long as the attacker sported a neutral facial expression during the registration phase. However, when we repeated the attack such that the attacker had an overtly smiling face when (s)he registered, we were able to break in successfully with high regularity.

In Figure 1, we see two examples of neutral expression faces that failed to spoof the smart-phone’s face recognition system when the registering image had a neutral facial expression. A video containing the failed spoofing attempt with a neutral facial expression can be viewed here.

In Figure 2, we see the same two subjects’ images that successfully spoofed the phone’s face recognition system when the registering (enrollment) image was overtly smiling. The face training demo videos are available here. The video of the successful spoof can be viewed here.

2.1. Motivation for the attack and discussion

It has long been well known in the computer vision community that faces displaying expressions, especially smiles, result in stronger recall and discrimination power [10]. In fact, the authors in [2] termed this the happy-face advantage and showcased the variation in detection performance across facial expressions. It was our experimentation with the specific one-shot classification scenario in which the registering (enrollment) face bears a strong smile that led to the discovery of this attack. As for defense against this attack, there are two straightforward recommendations. The first would be to simply display a message prompting the user to maintain a passport-style neutral facial expression. The second would entail using a smile detector such as [5] as a pre-filter that only allows smile-free images to serve as the reference image.
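As an illustration of the second recommendation, the sketch below shows what an enrollment-time smile pre-filter might look like using OpenCV’s stock Haar cascades rather than a deep smile detector such as [5]; the cascade parameters and the single-face rejection rule are assumptions chosen for illustration, not a vetted anti-spoofing design.

```python
import cv2

# Stock Haar cascades shipped with OpenCV; a deep smile detector such as [5]
# could be substituted for better accuracy.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def is_acceptable_enrollment_image(image_bgr):
    """Accept the image only if exactly one face is found and no smile is detected in it."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:
        return False  # require a single, clearly visible face
    x, y, w, h = faces[0]
    face_roi = gray[y:y + h, x:x + w]
    # A high minNeighbors value is used because the smile cascade is prone to false positives.
    smiles = smile_cascade.detectMultiScale(face_roi, scaleFactor=1.7, minNeighbors=20)
    return len(smiles) == 0

# Hypothetical usage during enrollment:
# frame = cv2.imread("enrollment_capture.jpg")
# if not is_acceptable_enrollment_image(frame):
#     print("Please use a neutral, passport-style expression and try again.")
```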

References
[1] T. Ahonen, A. Hadid, and M. Pietikainen. Face description with local binary patterns: Application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):2037–2041, 2006.
[2] W. Chen, K. Lander, and C. H. Liu. Matching faces with emotional expressions. Frontiers in Psychology, 2:206, 2011.
[3] J. Galbally, S. Marcel, and J. Fierrez. Biometric antispoofing methods: A survey in face recognition. IEEE Access, 2:1530–1552, 2014.
[4] S. Gao, Y. Zhang, K. Jia, J. Lu, and Y. Zhang. Single sample face recognition via learning deep supervised autoencoders. IEEE Transactions on Information Forensics and Security, 10(10):2108–2118, 2015.
[5] P. O. Glauner. Deep convolutional neural networks for smile recognition. arXiv preprint arXiv:1508.06535, 2015.
[6] K. Patel, H. Han, and A. K. Jain. Secure face unlock: Spoof detection on smartphones. IEEE Transactions on Information Forensics and Security, 11(10):2268–2283, 2016.
[7] D. F. Smith, A. Wiliem, and B. C. Lovell. Face recognition on consumer devices: Reflections on replay attacks. IEEE Transactions on Information Forensics and Security, 10(4):736–745, 2015.
[8] X. Tan, S. Chen, Z.-H. Zhou, and F. Zhang. Face recognition from a single image per person: A survey. Pattern Recognition, 39(9):1725–1745, 2006.
[9] Y. Tang, R. Salakhutdinov, and G. Hinton. Deep Lambertian networks. arXiv preprint arXiv:1206.6445, 2012.
[10] Y. Yacoob and L. Davis. Smiling faces are better for face recognition. In Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on, pages 59–64. IEEE, 2002.

Our Pledge to Inclusion and Diversity: 1 Year Later

Lack of diversity in tech has been a long-standing problem, but in recent months it’s become increasingly apparent that inclusion is more than an aspirational goal. At UnifyID, diversity is the DNA that creates a robust, flourishing environment primed for tough conversations and progressive thinking.

Last June, UnifyID was one of 33 companies that signed the White House Tech Inclusion Pledge on the eve of President Obama’s Global Entrepreneurship Innovation Summit 2016 to ensure that our employees reflect the diverse nature of the American workforce.

Although UnifyID is a small startup, we still want to lead in all areas of our business—and diversity is no exception. As an inaugural signatory of this agreement, the first of its kind, we proudly reaffirm our commitment to being an industry leader in promoting inclusion for all.

Our team on a normal day in the office.

The pledge was three-part, with the central aim of increasing representation of underrepresented groups:

“Implement and publish company-specific goals to recruit, retain, and advance diverse technology talent, and operationalize concrete measures to create and sustain an inclusive culture.”

This is a task into which we have invested significant time and effort, particularly in our recruitment operations. Many job seekers and experts alike have criticized the inconsistent process around the technical interview, noting its irrelevance to the workplace and its unnecessary biases against women. Taking into account these guidelines from Code2040, a collaborating organization of the Tech Inclusion Pledge, we’ve created a low-stress, context-relevant, fun, and language-agnostic technical challenge to reduce bias in the screening stage of our recruiting process.

“Annually publish data and progress metrics on the diversity of our technology workforce across functional areas and seniority levels.”

It is important to us to be transparent about our gender, racial, and ethnic data because diversity and inclusion are a core part of our company mission to be authentic, be yourself. This report is our first attempt at doing so, and we hope to publish future updates more frequently.

On our team, 70 percent are people of color and 24 percent are women. Immigrants make up a significant part of the American workforce, and we are also proud to call UnifyID the workplace of immigrants who collectively represent 17 nationalities (including our interns). Paulo, one of our machine learning engineers, has quipped, “the office sometimes feels like a Model UN conference!” While our size makes us unable to release more detailed breakouts (we respect employee privacy), we will continue to release diversity data in a timely and transparent fashion.

“Invest in partnerships to build a diverse pipeline of technology talent to increase our ability to recognize, develop and support talent from all backgrounds.”

Here in the Bay Area, we are surrounded by terrific organizations that support underrepresented groups in tech, and we’ve been fortunate to be involved in these events. Some of these events include the Out for Undergrad (O4U) annual Tech Conference, which allowed us to connect with many high-achieving LGBTQ+ undergraduates from across the country, as well as the Y Combinator-hosted Female Founders Conference, or even SF Pride last month!

Our head of Product, Grace Chang, at last year’s Out for Undergrad (O4U) Tech Conference!

Diversity strengthens us as a company and as a country, so this remains one of our foremost priorities as we continue to grow (we’re hiring) and we hope to see improvement in our workplace and in the industry as a whole. We are thrilled that today, the number of companies that have signed the pledge has risen to 80.

We encourage more companies to sign this Tech Inclusion Pledge here.

UnifyID at a16z’s Battle of the Hacks

Last weekend, UnifyID was invited to attend Andreessen Horowitz’s 4th annual Battle of the Hacks at their headquarters in Menlo Park—an exclusive hackathon for the organizers of the 14 top university hackathons in North America, ultimately competing for a $25,000 sponsorship from a16z. Grace served as a judge alongside others from companies like Slack, Lyft, and GitHub, while Andres was a mentor for the event, advising teams on how best to complete their projects!

We’ve sent people to hackathons before (see our CEO John’s post from HackMIT here) and we continue to do so for a few reasons. First, we’re strong believers in supporting innovation, particularly through mentorship, because it’s the same thing we do at UnifyID. Second, we get to meet students and hackers working on incredible projects, which is not only inspiring but also shows us the depth and breadth of knowledge in the talent pool we hire from. Finally, no matter the hackathon, we have enjoyed ourselves without fail. In fact, Andres even stayed the night at the event (which students said they’d never seen a mentor do!).

The winner of the hackathon was HackMIT, which built a Chrome extension called Cubic that leverages NLP to provide a timeline (topic history across sites) and proper context (detailed or general views on related topics) for any news story. The judging panel was incredibly impressed by the difficulty of the project and by how it adds dimensionality to the content we consume every day.

Hack the North from UWaterloo was the runner-up: they made a creative visual system called Fable to augment live storytelling. By breaking down voice inputs and pairing them with relevant web images, they constructed a useful supplement to traditional stories.

In third place was the Bitcamp team from the University of Maryland, College Park. They created Alexagram, an interactive hologram using Alexa. One of the coolest demos by far, their project gave Alexa some personality as well as some visual interaction with the user.

You can check out all the submissions here!

All the projects were incredible and the teams were all very impressive. We’re already looking forward to the next one!

Housewarming Party at UnifyID!

At a startup like UnifyID, it’s amazing how much can change over a few weeks’ time. What’s even more incredible, though, is how unpredictable the catalyst for that change can be. It’s been almost four months since we were unanimously crowned winners of RSA’s Innovation Sandbox, and the positive response we’ve received since has been overwhelming.

Last Friday, we hosted a housewarming party at our new SoMa office celebrating all the good work we’ve done, including wrapping up the Spring AI Fellowship, winning other competitions, kicking off new partnerships, welcoming a new batch of summer interns (pictured below), and a special announcement from founders John Whaley and Kurt Somerville!

We’re so grateful to everyone who attended the Housewarming and all who continue to support the mission of our work.

Interested in learning more about this secret announcement? Join the team, lead the frontier in how people interact seamlessly with technology, and let’s change authentication forever.

Photo credits: Karina Furhman