YouTube’s Age Verification Policy and the First Amendment
Age-based content restrictions are not novel. Courts have long held that restrictions imposed to protect minors can be constitutional, as states have the power to shield children from inappropriate material. One well-known example is the Motion Picture Association’s age-based movie ratings (e.g., R or PG-13). The modern dilemma is how to address the harms that unfettered access to social media can cause children, including cyberbullying, trafficking, grooming, deteriorating mental health and social media addiction, while guarding against the countervailing risks of censorship, surveillance and data breaches. Some states have responded by passing age verification laws. As of December 31, 2025, 25 states have passed bills requiring certain websites to use an age verification service. Louisiana’s law, the first of its kind, took effect on January 1, 2023; the statute holds a publisher or distributor liable for failing to provide reasonable age verification for content that is harmful to minors. Utah, Mississippi, Virginia, Arkansas and Texas followed suit the same year.
Applicable First Amendment Law
Under Ginsberg v. New York, 390 U.S. 629 (1968), states have the authority to protect children from harmful content. But what happens when that authority conflicts with the constitutional right of adults to access the same material? This tension plays out on social media sites such as YouTube, X, Spotify, Reddit, Instagram and Facebook, all of which have age verification policies.
Under First Amendment law, content-based restrictions ordinarily trigger strict scrutiny: the government must demonstrate a compelling interest, and the regulation must be narrowly tailored to advance that interest without unnecessarily burdening protected speech. In the age verification context, the asserted compelling interest is protecting children from exposure to harmful material.
YouTube, for example, uses artificial intelligence to estimate users’ ages based on their activity and the length of time their channels have been active. If the AI determines that a user is likely underage, the account is restricted. Adults who wish to regain full access to age-restricted material must verify their age by submitting a government ID, a selfie or a credit card.
Although YouTube does not completely bar adults from accessing such content, the requirement to upload personal identification documents creates a chilling effect. Users may be unwilling to jeopardize their privacy to access the material, given the risks of being tracked across the sites they visit and of unauthorized parties gaining access to their personal identifying information.
This past summer, the Supreme Court ruled in Free Speech Coalition, Inc. v. Paxton, 606 U.S. 461 (2025), that a Texas statute, HB 1181, was constitutional. The Act requires websites whose content is more than one-third “sexual material harmful to minors” to verify the age of their users. A key issue in the case was the applicable level of scrutiny: the Fifth Circuit had reviewed the Act under rational basis review, while the challengers argued for strict scrutiny. The Supreme Court held that the Act triggers only intermediate scrutiny, because it burdens adults’ protected speech merely incidentally, and upheld the Act under that standard. Some legal scholars have criticized the Paxton decision, arguing that the regulation is vague, overbroad and ineffective, and that the ruling “does very little to advance the legitimate goal of keeping violent and exploitative pornography away from children, but it does much to advance the power of the government to keep many kinds of other speech away from everyone.”
In prior cases, the Supreme Court took a different approach. In Reno v. ACLU, 521 U.S. 844 (1997), the Court applied strict scrutiny and struck down provisions of the Communications Decency Act prohibiting the transmission of harmful content to minors, finding them vague and over-inclusive. The Act prohibited “patently offensive” material without defining what that term includes, so it was not narrowly tailored to the problem. The Court also noted the chilling effect the Act would have on free speech. Id. at 872. Paxton is distinguishable, however, in that an age verification requirement, unlike an outright prohibition, does not completely bar adults from accessing the material.
Data Privacy Concerns and Prior Age-Verification Methods
YouTube’s age verification requirement also raises data privacy questions, since most users do not know how their data is used. Collecting sensitive identification data poses privacy risks: large companies that amass such data are prime targets for hackers and have financial incentives to monetize it. In 2019, YouTube agreed to pay $170 million to settle allegations by the Federal Trade Commission and the New York Attorney General that it collected children’s personal data without parental consent and used it for targeted advertising, in violation of the Children’s Online Privacy Protection Act.
Prior to AI, platforms relied on other methods to limit children’s access to inappropriate online content, such as self-reporting systems, which children could easily bypass. Some platforms also allowed parents to restrict their children’s accounts, placing the onus on parents to limit and monitor their children’s online activity. This approach arguably led to uneven levels of protection across households, while still allowing children to bypass safeguards by hiding accounts or creating new ones.

For further reading from our website on the topics discussed here, see the following insights and IP Bits & Pieces®: USPTO Home Address Rule, Facial Recognition Technology and Its Application in Educational and Other Sensitive Settings, Deepfakes, Rights of Publicity and Proposed Legislation, Why You Should Care About Privacy Policies and our Artificial Intelligence FAQs.