FTC says the company’s ‘reckless use’ of AI humiliated customers
Rite Aid has been banned from using facial recognition software for five years, after the Federal Trade Commission (FTC) found that the drugstore giant's "reckless use of facial surveillance systems" humiliated customers and put their sensitive information at risk.
The FTC's Order, which is subject to approval from the U.S. Bankruptcy Court because Rite Aid filed for Chapter 11 bankruptcy in October, requires the company to delete all images it collected through its facial recognition system, along with any products derived from those images. Rite Aid must also implement a robust data security program to safeguard any personal data it collects.
A 2020 Reuters report revealed that Rite Aid had secretly deployed facial recognition systems across roughly 200 U.S. stores over an eight-year period, using "largely lower-income, non-white neighborhoods" as the technology's testing ground.
In its complaint over Rite Aid's biometric surveillance practices, the FTC alleges that the company, working with two contractors, built a "watchlist database" of images of customers it claimed had engaged in criminal activity at its stores. These images, often of poor quality, were captured from CCTV footage or employees' mobile phone cameras.
When a customer entering a store supposedly matched an image in the database, employees received automatic alerts instructing them to take action. Most of these matches were false positives, leading to customers being wrongly accused, followed around the store, searched, ordered to leave, or reported to the police. The FTC says Rite Aid failed to inform customers that facial recognition technology was in use and instructed employees not to disclose it.
Facial recognition software remains one of the most controversial forms of AI-powered surveillance. Recent years have seen citywide bans on the technology, regulatory efforts to rein in police use, and legal action against companies like Clearview AI over large-scale privacy violations.
The FTC's investigation also points to racial bias in the system: Rite Aid's technology was more likely to generate false positives in stores located in plurality-Black and plurality-Asian communities than in plurality-White communities. The agency further faults Rite Aid for failing to test or measure the accuracy of its facial recognition system either before or after deploying it.
In response, Rite Aid said it was pleased to reach an agreement with the FTC but that it disagrees with the core allegations of the complaint. The company says the charges relate to a facial recognition pilot program run in a limited number of stores, which it stopped using more than three years ago, before the FTC's investigation began.