Protecting Children and Democracy from Tech Giants
In a long-awaited turn of events, court cases may finally hold internet platforms accountable for their actions. After years of federal inaction, a wave of legal challenges has emerged to fill the void. These cases have the potential to protect one especially vulnerable group: children.
A History of Neglect
Since Russia’s interference in the U.S. presidential election through platforms like Facebook and Instagram, the federal government has done little to safeguard our democracy from similar assaults. Neither the executive nor the legislative branch has taken meaningful action, leaving internet platforms free to prioritize profit over the public interest. Meanwhile, the courts have consistently sided with the platforms rather than the people who use them.
The Influence of Big Tech
It comes as no surprise that federal politicians have favored Big Tech. The money and influence of Silicon Valley are hard to overlook, and voters have not held politicians accountable for neglecting their duty to protect the public interest. Family ties between politicians and Big Tech, along with staff positions funded by tech giants, have gone largely unexamined. While some state-level reforms have passed, industry lobbying has weakened their impact.
Exploiting Legal Protections
In court, internet platforms have used the shield of free speech and the protection of Section 230 of the Communications Decency Act of 1996 to avoid unfavorable judgments. While there have historically been limits on First Amendment protections for harmful speech, the courts have not applied these limits to the speech of internet platforms. Section 230, intended to enable platforms to moderate harmful speech, has been interpreted as blanket immunity, even in cases of negligence.
The Need for Accountability
Internet platforms should not be allowed to harm children or undermine democracy without consequences. The absence of consumer safety regulation in the tech industry is alarming: it allows platforms to prioritize growth and profitability over the well-being of users. Trusting platforms to self-regulate has proven ineffective, as they have been used to organize acts of terrorism, spread public health disinformation, and enable insurrection. Thankfully, a new wave of legal cases may finally change course.
Protecting Children Online
The focus of these cases is to challenge the design of internet platforms and its impact on children. Thirty-three state attorneys general, led by California and Colorado, have filed a case against Meta in federal court, accusing it of designing products to addict children. Nine other states have filed similar cases in their own courts. By targeting product design, these cases minimize conflict with the First Amendment and Section 230, as they address the harm caused by the design itself and the refusal to address it. With cases spread across multiple jurisdictions, the chances of a favorable outcome for the plaintiffs are higher.
Defending Privacy Rights
There is also an ongoing appeal in federal court related to California’s Age Appropriate Design Code, a law that aims to protect the privacy of minors online. This law, modeled after a successful consumer protection law in Britain, passed unanimously and was signed into law in September 2022. However, it was quickly challenged by NetChoice, a trade organization funded by tech giants. A federal district court judge granted a preliminary injunction, suggesting that the law potentially violates the First Amendment. This reasoning overlooks the fact that the law focuses on protecting privacy, not infringing on content or expression. The appeal filed by California’s Attorney General argues for the right to protect children online, stating that “childhood experiences are not for sale.”
Whistleblower Disclosures and Legislative Action
Coincidentally, recent whistleblower disclosures have shed light on the reckless practices of Meta, the parent company of Facebook and Instagram. Whistleblower testimony has revealed that Meta’s management was fully aware of problems like misogyny and unwanted advances toward teenagers on Instagram but chose not to act. Despite these revelations, Meta has so far avoided liability. It remains to be seen whether these testimonies will spur legislative action.
The Urgent Need for Legislative Action
While court cases provide a glimmer of hope, the best way to protect consumers online is for Congress to pass laws that hold tech companies accountable for their harmful products and data practices. Until then, the courts may be the only defense for our children and the public against the power of internet platforms.