An investigation into the Australian Federal Police has found it has taken no steps to improve its privacy practices since using Clearview AI's facial recognition tool without the backing of an appropriate legislative framework.
An investigation by the Office of the Australian Information Commissioner (OAIC) has found that the Australian Federal Police's (AFP) use of the Clearview AI platform interfered with the privacy of Australian citizens.

Clearview AI's facial recognition tool is known for breaching privacy laws on numerous fronts, indiscriminately scraping biometric information from the web and collecting data on at least 3 billion people, many of them Australians.

From November 2019 to January 2020, 10 members of the AFP's Australian Centre to Counter Child Exploitation (ACCCE) used the Clearview AI platform to conduct searches of certain individuals residing in Australia. ACCCE members used the platform to search for scraped images of possible persons of interest, an alleged offender, victims, members of the public, and members of the AFP, the OAIC said.

While the AFP used the Clearview AI platform only on a trial basis, Information and Privacy Commissioner Angelene Falk determined [PDF] that the federal police failed to undertake a privacy impact assessment of the platform, despite it being a high privacy risk project.