More than just a number: The current policy debate around young Internet users and why it matters to startups

Feb 22, 2024 · 11 min read

By The Engine Policy Team

This is the first in a series of posts on the unintended consequences for startups of proposals aimed at enhancing Internet safety for young users. The second post, on the costs of determining users’ age, is available here.

Key Takeaways:

  • Policymakers are rightfully concerned about the safety and well-being of young Internet users, but many of the proposals they’re advancing would bring significant tradeoffs for online participation and expression for users of all ages, as well as for startups’ ability to compete.
  • There’s a wide range of proposed policy changes being considered, ranging from extending existing privacy protections to more Internet companies and users, to requiring platforms to block young users from seeing “harmful content,” to banning young users from parts of the Internet altogether.
  • At their core, the vast majority of the proposals would require Internet companies to proactively identify, estimate, or verify the age of their users, which carries direct and indirect costs that will fall disproportionately on startups.

Policymakers have a lot of ideas on the table that could make life harder — and more expensive — for startups that do (or even might) interact with young users. While the varying goals of protecting young users from things like harmful content, privacy invasions, and addictive technologies are all laudable, these proposals often carry significant tradeoffs, including on privacy, security, and expression, as well as creating costs and compliance burdens that fall disproportionately on startups.

How does the law work for startups now?

Many of the proposals would be massive shifts from the way the world currently works for startups. Currently, the landscape around how to deal with young users is relatively straightforward: if you operate a website or service that’s directed to users under the age of 13, or you have “actual knowledge” that a user is under the age of 13, you have to comply with rules created by the Federal Trade Commission (FTC) under the Children’s Online Privacy Protection Act (COPPA). That law was passed in 1998 to give parents more control over how their children’s personal information is collected and used online, especially as it relates to targeted advertising. After the first set of rules took effect in 2000, the FTC updated the rules in 2013, and the agency is currently in the process of updating them again. At a high level, the rules require companies in scope to obtain parental consent before collecting information from young users and to give parents the ability to review, delete, or prevent further use of their child’s information.

COPPA is the reason you have to check a box saying you’re 13 years old or older every time you sign up for a general audience website or service that will collect your data online. Checking that box tells the operator of the website or service that they’re not dealing with a user under the age of 13, so the operator doesn’t have “actual knowledge” that would trigger COPPA compliance. (And while it’s true that a younger user can check the box just as easily as someone older, the bright line created by the actual knowledge standard saves operators from the fraught process of having to figure out individual users’ ages, as discussed later in this series.)
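For a sense of how lightweight that bright line is in practice, here is a minimal sketch of a self-attestation age gate. The type and function names are hypothetical, and this is an illustration of the pattern rather than a compliance recommendation.

```typescript
// Minimal sketch of a self-attestation age gate (hypothetical names,
// not a compliance recommendation). Under the actual-knowledge standard,
// a general audience service relies on the user's own attestation rather
// than independently verifying age.
interface SignupForm {
  email: string;
  attestedThirteenOrOlder: boolean; // the "I am 13 or older" checkbox
}

function canCreateAccount(form: SignupForm): boolean {
  // Signups without the attestation are simply refused, so the operator
  // never knowingly collects data from a user under 13.
  return form.attestedThirteenOrOlder;
}

// Example: a signup with the box checked goes through.
console.log(canCreateAccount({ email: "user@example.com", attestedThirteenOrOlder: true })); // true
```

The whole check runs on a single self-reported field, which is exactly why it is cheap for startups to implement and why, as discussed below, replacing it with verification or estimation is such a large change.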

So what’s the problem?

Policymakers in state legislatures, Congress, and the administration have put forward a number of varying laws and rules that would dramatically upend that relatively straightforward framework in the name of protecting kids online. And while they all ostensibly share the same high-level goal, the proposals tend to tackle different facets of the issue:

  • “Companies are able to collect too much data about kids.” Some critics of the current landscape believe that, despite the protections and requirements created by COPPA, websites and online services are still collecting too much personal information about young users. Some say the problem is that COPPA’s protections end at age 13, leaving a large swath of “young users” between the ages of 13 and 17 open to data collection and targeted advertising. Others say the problem is that Internet companies that aren’t directed to children and don’t have actual knowledge that a user is a child should still know that they’re dealing with young users based on the context of what those users are doing online.
  • “Tech companies are building and marketing products that are addictive to kids.” Some critics say the products and services being offered by tech companies are intentionally designed to keep users engaged with the product or service, which disproportionately impacts young, impressionable users. Some of the complaints are about companies’ design decisions, like infinite scroll, where more content automatically loads when a user reaches the bottom of a page, while other complaints focus on fundamental aspects of a product or service, like using an algorithm to make personalized recommendations. Critics argue that companies should offer alternative versions of their products without these features and functions specifically for young users.
  • “Young users are seeing harmful content online.” For Internet companies of any size that host user-generated content — whether traditional social media platforms, messaging apps, photo and video sharing services, or any website where users can leave comments — content moderation is a critical, inherently fraught, time-consuming, and expensive undertaking. It’s not practical, as a platform scales, to review every piece of user content in real time to ensure it complies with the company’s acceptable use policies, meaning there’s no way to guarantee that one user won’t share something “harmful” that another user sees. Complicating things further, there’s no clear consensus on what’s “harmful” to young users. When critics of Internet companies talk about “harmful” content, they cite a wide range of things, including content about eating disorders, sexual health, and sexual orientation. As we pointed out in comments to the federal government, what’s harmful for one community of users might be helpful to another, and the platform working with its community of users is going to be best equipped to make that determination.
  • “Illegal and illicit activity online is harming kids offline.” Online activity can undoubtedly contribute to offline harm, and policymakers are rightly focused on ways to reduce illegal content that harms children in the real world, including child sexual abuse material (CSAM) and the sale of illegal drugs. Internet companies already spend significant time and money finding and removing that kind of content. But, as discussed above, content moderation is inherently fraught, and even in areas where the illegality of content is clearest, like CSAM, there are still inherent limits to the technologies used to detect that kind of content, as the sketch after this list illustrates.
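To make that detection limit concrete, here is a minimal sketch of matching uploads against a list of hashes of already-identified illegal images, assuming a simple cryptographic hash. Production systems generally rely on perceptual hashes and shared industry databases, but the core limitation is the same: matching can only flag content that has previously been identified.

```typescript
// Illustrative-only sketch of matching uploads against known hashes.
// The hash set and helper names are hypothetical.
import { createHash } from "node:crypto";

// Hashes of previously identified illegal images, e.g. supplied by a clearinghouse.
const knownIllegalHashes = new Set<string>();

function isKnownIllegal(fileBytes: Buffer): boolean {
  const digest = createHash("sha256").update(fileBytes).digest("hex");
  return knownIllegalHashes.has(digest);
}

// The inherent limit: newly created material, or even a slightly altered copy
// of known material, produces a different digest and is not flagged.
```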

How could the policy landscape change?

Depending on the problem policymakers are trying to solve, there’s a wide range of proposed changes on the table at the federal and state levels:

  • Expand existing children’s data protections to cover more companies and users. Many lawmakers are looking for ways to stretch existing protections for kids online to more users. One proposal — the Children and Teens’ Online Privacy Protection Act, which a key Senate committee amended and approved last year — would take existing federal protections for children’s online data and expand them in multiple ways, including raising the age for COPPA protections from under 13 to under 16 and allowing the FTC to effectively create a new knowledge standard for whether companies should know they’re dealing with young users. At the state level, Virginia and Connecticut have advanced legislation that would create more requirements and prohibitions around young users’ data. Connecticut’s law — which the state legislature passed last summer — limits the kinds of data a company can collect from young users and prohibits selling kids’ data or processing it for targeted advertising. Proposals in Virginia — which did not make it across the finish line before the end of the legislative term — would have extended existing state privacy protections for children to users between the ages of 13 and 17.
  • Require companies to get young users’ parental consent to create accounts. Several states have considered — and some have passed — legislation that would require Internet companies to get parental consent before users under the age of 18 can create accounts. The proposals often lay out mechanisms for obtaining and verifying parental consent with varying levels of specificity, including collecting parents’ government-issued IDs or creating a central phone line where parents can call and give their consent. A law passed in Arkansas would require Internet companies to use a third-party vendor to verify all of their users’ ages and obtain parental consent for minors using the service; that law was recently blocked by a federal court after being challenged by industry group NetChoice on First Amendment grounds. Recently passed laws in Utah require parental consent for users under the age of 18, require companies to allow parents to access their children’s accounts, and restrict minors’ access to social media between 10:30 p.m. and 6:30 a.m. by default. NetChoice has also sued to block the Utah laws from going into effect.
  • Prohibit companies from showing young users “harmful content.” Many policymakers are focused on the harm that Internet usage can cause to young people’s mental health. One proposal — the Kids Online Safety Act, which has already made it through a key Senate committee and has been modified several times — would, among other things, create a duty for Internet companies to take “reasonable care” to prevent users under 17 from seeing “harmful” content. The bill’s definition of harm includes anything that contributes to mental health disorders such as anxiety, depression, and eating disorders, as well as online bullying and harassment and anything that promotes tobacco, gambling, or alcohol. The bill would be enforced by the FTC, creating the opportunity for differing answers to the question “what online content can endanger a teenager’s mental health?” depending on which political party controls the agency. Other parts of the bill, including prohibitions on harmful “design features,” may be enforced by states’ attorneys general, creating disparate enforcement based on the politics of the state and what design features (and the content made available through them) it considers harmful. Civil liberties groups have warned about the impact the bill will have on kids, especially those who don’t otherwise have access to resources about things like eating disorder recovery or LGBTQ+ health. At the state level, several states — including Montana, Tennessee, and Pennsylvania — have considered legislation that would require device manufacturers to block minors’ access to harmful and obscene material, including pornography.
  • Require companies to estimate users’ ages and offer a different version of their product or service to young users. Often called “age-appropriate design codes” and modeled after requirements first created in the United Kingdom, several states have considered legislation that would require Internet companies to estimate their users’ ages, which then triggers several obligations, including different privacy settings, a privacy policy that is accessible to children, enforcement of community standards, and restrictions on product design that encourages young users to share their information. While many states have considered this type of “design code” — including Nevada, Maryland, and Minnesota — California passed the first such law in the U.S. in 2022. It has since been blocked in federal court after NetChoice challenged the law on First Amendment grounds.
  • Require companies to proactively monitor for illegal user activity that harms kids offline. Fueled by concerns about children being physically harmed in the offline world, including by CSAM and the sale of illegal drugs, some lawmakers are putting forward proposals that would effectively force companies to scan and remove certain types of user content. At the federal level, the Senate Judiciary Committee has repeatedly advanced proposals that would push companies to scan user content for CSAM (the EARN IT Act) and illegal drugs (the Cooper Davis Act). Technologists and civil liberties advocates have warned that these proposals carry significant privacy and security tradeoffs, in addition to threatening constitutionally protected speech that will get caught in the inherently imperfect filters tech companies would use to comply with the laws. In 2023, California passed its own CSAM measure, which creates legal liability for websites that “knowingly” facilitate child exploitation; many have warned it will push companies to stop or scale back the work they already do to proactively find and report CSAM in an attempt to avoid liability for any CSAM they might miss.
  • Ban minors from large swaths of the Internet. Some of the most extreme proposals prohibit young users from social media platforms entirely. Texas considered, but ultimately did not move, a bill during the last legislative session that would have required Internet companies to collect driver’s licenses to verify that no one under 18 is using their service. More recently, Florida policymakers advanced a bill that prohibits social media platforms from allowing users under the age of 16 to create accounts.

What would these changes mean for startups?

Any of these proposals would dramatically change the way startups interact with their users. One major change that many of the proposals share is putting the onus on Internet companies to figure out the age of their users. To identify which users are “young” (under 13, under 16, or under 18, depending on the proposal), a startup would have to determine the age of all of its users, which, as discussed in the second part of this series, typically requires purchasing and integrating third-party parental consent, age verification, or age estimation software.
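To give a rough sense of what that integration looks like in practice, here is a minimal sketch of a signup flow calling out to an age verification vendor. The endpoint URL, request fields, and response shape are hypothetical stand-ins rather than any real vendor’s API.

```typescript
// Hypothetical sketch of deferring an age check to a third-party vendor.
// The endpoint, request body, and response fields are invented for illustration.
interface AgeCheckResult {
  verified: boolean;     // did the vendor confirm the user's age?
  estimatedAge?: number; // or an estimate, for estimation-based services
}

async function checkUserAge(idDocumentImage: Blob): Promise<AgeCheckResult> {
  const body = new FormData();
  // Sensitive data (a government ID image) leaves the startup's systems here.
  body.append("document", idDocumentImage);

  const resp = await fetch("https://api.example-age-vendor.com/v1/verify", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.AGE_VENDOR_API_KEY}` },
    body,
  });
  if (!resp.ok) throw new Error(`Age verification failed: ${resp.status}`);
  return (await resp.json()) as AgeCheckResult;
}
```

Even in this stripped-down form, every new signup now depends on a paid external API call and involves handling a government ID, which is where the costs described below come in.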

But there are other costs to these proposals, especially around the additional data collection necessary for parental consent, age verification, age estimation, and, for the state-level proposals, geolocation. Any additional data a startup collects needs to be processed, stored, and, if necessary, shared securely, and a startup that collects, for instance, a dataset of its users’ government-issued IDs has to worry about being an attractive target for a data breach. There’s also the cost of asking users for that data, especially as a new and relatively unknown company. A startup that requires users to submit their driver’s licenses as part of signing up for a service has to worry about whether users feel comfortable handing that sensitive information over, or whether they’ll seek out an alternative offered by a larger, more established company.
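One way a startup might limit that breach exposure is to keep only the outcome of an age check rather than the underlying documents. A minimal sketch of that data-minimization pattern, with hypothetical field names, might look like this:

```typescript
// Illustrative data-minimization pattern: persist only the result of an age
// check, never the ID document itself. The record shape is hypothetical.
interface StoredAgeRecord {
  userId: string;
  isUnder18: boolean;       // the only fact most product logic needs
  verifiedAt: string;       // ISO timestamp, kept for audit purposes
  vendorReference?: string; // opaque vendor token, not the document itself
}

function recordAgeCheck(userId: string, isUnder18: boolean): StoredAgeRecord {
  // The uploaded ID image is discarded once the vendor responds; only this
  // small record is written to the startup's database.
  return { userId, isUnder18, verifiedAt: new Date().toISOString() };
}
```

Even with that approach, the startup still has to collect and transmit the sensitive document in the first place, so the trust problem described above does not go away.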

Depending on the proposal, startups would face additional significant compliance burdens once they determine users’ ages, ranging from proactively monitoring and filtering out “harmful” content before it reaches young users, to changing the way a company collects data about young users, to offering additional versions of existing products and services specifically for young users, to removing young users altogether. All of these would carry significant costs: the literal costs to operationalize them, but also costs to growth, user participation and expression, and opportunity costs.

All of these direct and indirect costs will make it harder for startups to compete. While much of the policy conversation about kids’ safety happening at every level of government is driven by concerns about large companies, policymakers need to remember that the rules they write will impact the entire ecosystem, including the startups that want to be good stewards of their users’ data and already have to be responsive to their users’ needs and concerns.

Engine is a non-profit technology policy, research, and advocacy organization that bridges the gap between policymakers and startups. Engine works with government and a community of thousands of high-technology, growth-oriented startups across the nation to support the development of technology entrepreneurship through economic research, policy analysis, and advocacy on local and national issues.
