Facial Age-Estimation Tech Weighed by Feds as Children’s Privacy Protection Option

Staff Writer

Industry groups are asking federal officials to allow facial age-estimation technology to be used to secure parents’ consent before children get access to online sites and services. 

The Entertainment Software Rating Board, a nonprofit self-regulatory group that assigns age and content ratings to consumer video games, has requested approval from the Federal Trade Commission for the use of privacy-protective facial age-estimation technology as a means of complying with federal law. 

The technology analyzes the geometry of individuals’ faces to confirm that they are adults. 

Two companies — Yoti, a digital identity company, and SuperAwesome, which provides technology to help companies comply with parental verification requirements — have joined in requesting that the FTC approve the technology as a way to comply with the federal Children’s Online Privacy Protection Act, or COPPA.

What Is Facial Age Estimation? 

Federal requirements under COPPA mandate that online sites and services for children under 13 obtain parental consent before collecting or using personal information from a child. Interested parties can submit new verifiable parental consent methods to the FTC, an agency focused on consumer protection, for approval. 

Earlier this month, the FTC requested public comment about the proposed parental consent method using age-estimation technology.  

Specifically, the agency asked for public input on whether the technology meets federal requirements for parental consent, if the proposed method poses a risk to consumers’ personal information, and whether it is likely to be error-prone or biased against particular demographic groups.

Facial age-estimation technology uses computer vision and machine learning to estimate a person’s age based on analysis of patterns in an image of their face, as described in the application the software rating board submitted to the FTC. The system takes a facial image, converts it into numbers, and compares those numbers to patterns in its training dataset that are associated with known ages. 
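The convert-to-numbers-and-compare step described in the application can be caricatured as a nearest-neighbor lookup over numeric features. The sketch below is a deliberately simplified toy, not the Yoti or SuperAwesome system: the feature vectors, ages, the `estimate_age` helper, and the adult-threshold buffer are all invented for illustration, and real systems use deep neural networks over actual face images.

```python
import math

# Hypothetical illustration only: each "face" is a short list of made-up
# numeric features paired with a known age, standing in for the patterns a
# real system would learn from a large training dataset of face images.
TRAINING_DATA = [
    ([0.1, 0.9, 0.3], 8),
    ([0.2, 0.8, 0.4], 11),
    ([0.7, 0.2, 0.9], 34),
    ([0.8, 0.1, 0.8], 41),
]

def estimate_age(features, training=TRAINING_DATA, k=2):
    """Estimate age as the mean age of the k nearest training examples."""
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(training, key=lambda item: dist(features, item[0]))[:k]
    return sum(age for _, age in nearest) / k

def is_adult(features, threshold=25):
    # A threshold above 18 (here 25, an invented value) illustrates how a
    # deployment might buffer against estimation error before granting consent.
    return estimate_age(features, k=2) >= threshold
```

The point of the sketch is the shape of the pipeline — numeric features in, age estimate out, then a yes/no adult decision — rather than any particular accuracy claim.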

Users of the technology — in this case, parents seeking to grant consent for their children to use an online site or service — would take a real-time photo of themselves. This ensures that an authentic user’s image is being captured in the moment, rather than someone using an uploaded image.

Under the model of privacy-protective facial age estimation that the Entertainment Software Rating Board is proposing, images of users’ faces are immediately and permanently deleted after they’re used, according to the application. 

The application says that Yoti and SuperAwesome have already used the technology for legally required parental consent in territories outside the U.S., and since last year have delivered more than 4.8 million age estimations using it. 

Questions About Privacy and Bias 

Suzanne Bernstein, law fellow at the Electronic Privacy Information Center, which advocates for digital privacy, said she has questions about how the biometric information collected by facial age-estimation technology would be protected. 

Although the application states that images will be deleted, Bernstein said she’d like to see more transparency about how related data, such as IP addresses and location information, is collected and stored. 


There also need to be safeguards to ensure that the age-estimation technology is collecting information from adults who actually have the right to grant consent for children to visit a site, not just any adult, she said. 

Her organization wants the FTC to be “careful of what they approve with this type of technology, and what standards they might be setting,” she said. 

The application submitted to the FTC says that the technology’s backers are aware of potential racial and gender bias, and that they’re accounting for it in how the AI systems are trained. However, Bernstein said she’s concerned about the accuracy of the technology and its ability to determine whether it’s reviewing an actual face or a photo of one. 

“There have been documented issues with not being able to distinguish between faces of color or different kinds of characteristics, so that’s something to be aware of, generally, with this kind of biometric evaluation,” she said. 

“It’s important that the FTC is careful of potential bias in these technologies or in related technologies that may be present down the line.” 

Follow EdWeek Market Brief on Twitter @EdMarketBrief or connect with us on LinkedIn.

