The machine learning model was trained and tested on limited data
Analysis The idea of so-called "master faces," fake images generated by machine-learning algorithms to break into facial biometric systems by impersonating many different people, made splashy headlines last week. But a closer look at the research reveals clear weaknesses that make the attack unlikely to work in the real world.

"A master face is a face image that passes face-based identity-authentication for a large portion of the population," the paper, released on arXiv earlier this month, explained. "These faces can be used to impersonate, with a high probability of success, any user, without having access to any user-information."

The trio of academics from Tel Aviv University go on to say they built a model that generated nine master faces which, between them, could impersonate 40 per cent of the population against "three leading deep face recognition systems." At first glance that seems impressive, and if the claims held up, such faces would pose a clear security risk to any application that relies on facial identification.

Here's how the system works. First, the team employed Nvidia's StyleGAN to create realistic-looking images of made-up faces. Each fake face was compared with a single real photograph of each of the 5,749 people in the Labeled Faces in the Wild (LFW) dataset. A separate classifier then judged how similar each AI-generated face looked to the real ones; images that scored highly for similarity were kept, and the rest were discarded. Those similarity scores were used as the fitness signal for an evolutionary algorithm, which steered StyleGAN toward spoof faces resembling ever more of the people in the dataset. Over time, this search converged on a small set of master faces, each matching as many of the dataset's faces as possible.
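To make that search loop concrete, here is a minimal, hypothetical sketch in Python. It is not the researchers' code: the StyleGAN generator and the face-recognition matcher are replaced with random stand-ins (a fixed projection matrix and synthetic embeddings) so the script runs on its own, and the acceptance threshold is a toy value. The parts that mirror the approach described above are the coverage objective, i.e. the fraction of dataset identities a candidate face would match, and the simple evolutionary step that keeps the best-scoring candidate each generation.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 512   # StyleGAN-style latent size (assumption)
EMBED_DIM = 128    # face-embedding size (assumption)
THRESHOLD = 0.15   # toy acceptance threshold; a real matcher sets its own

# Stand-in "generator + matcher": projects a latent vector straight to a
# unit-norm "face embedding". In the real pipeline StyleGAN would render an
# image and the face-recognition model would embed that image.
PROJECTION = rng.normal(size=(LATENT_DIM, EMBED_DIM))

def embed_candidate(latent):
    v = latent @ PROJECTION
    return v / np.linalg.norm(v)

def make_dataset(n=5749):
    # Synthetic unit-norm embeddings standing in for the 5,749 LFW identities.
    e = rng.normal(size=(n, EMBED_DIM))
    return e / np.linalg.norm(e, axis=1, keepdims=True)

def coverage(latent, dataset):
    # Fraction of dataset identities this candidate face would "match".
    sims = dataset @ embed_candidate(latent)
    return float(np.mean(sims > THRESHOLD))

def evolve_master_face(dataset, generations=50, pop_size=32, sigma=0.1):
    # Simple (1+lambda) evolutionary search over the latent space: mutate the
    # current best latent, score each child by coverage, keep any improvement.
    best = rng.normal(size=LATENT_DIM)
    best_score = coverage(best, dataset)
    for _ in range(generations):
        children = best + sigma * rng.normal(size=(pop_size, LATENT_DIM))
        scores = [coverage(child, dataset) for child in children]
        i = int(np.argmax(scores))
        if scores[i] > best_score:
            best, best_score = children[i], scores[i]
    return best, best_score

if __name__ == "__main__":
    data = make_dataset()
    latent, score = evolve_master_face(data)
    print(f"best candidate covers {score:.1%} of the stand-in dataset")
```

Finding several master faces rather than one would repeat this search while removing already-covered identities from the dataset, so each new face targets the people the previous ones missed; that greedy set-cover framing is an assumption about the general approach, not a claim about the paper's exact procedure.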