Hailed as a hero by MAGA and a heel by the left, Mark Zuckerberg recently announced he was doing away with fact-checking at Meta, opting instead for a “community notes” system, similar to Elon Musk’s X (formerly Twitter). I’m not here to offer an opinion on the merits of fact-checking systems, or on the flaws of their replacement. But it would be unwise to let the conversation about Zuckerberg’s actions become just another overheated, 48-hour exchange, only to be discarded into the digital dustbin of the Wayback Machine.
Instead, we should be discussing the core of the problem: Social media’s domination by producers and purveyors of industrial-scale slop, often created by bots and foreign actors. Rather than joining the chorus of those complaining about fact-checking, I want to offer a solution, a common-sense reform that could dramatically improve social media and decrease the destructive role it plays in our political discussions.
America needs social media sovereignty. Free speech that can be debated—amplified, embraced, or challenged—underlies the greatness of our nation. But speech manufactured by non-humans and foreign actors for the sole purpose of polluting the political discourse offers no benefit to our country.
If we are to cast our ballots in an informed way, we need social media sovereignty and transparency, which require user verification on social media and disclosure of non-human posters.
Few would dispute that it takes far less time to manufacture nonsense than to clean it up, and the problem is magnified exponentially when the nonsense is produced at an industrial scale, aided by algorithms, AI, or bots. A machine that can mass-produce speech—a non-human nonsense factory—interferes with your right to speak and be heard as an American.
Under our system of free speech, individuals have both the right to state their view and the obligation to face the consequences. However, if your speech can be shouted down by non-humans and erased by algorithms, then your speech is not free. You might as well be whispering in a sawmill; you may take minor satisfaction from “having your say,” but the noise of the machines will drown you out.
And what’s worse, the shouting that silences your voice as an American might very well come from bots in a foreign country, bots that are then amplified by an algorithm in San Francisco—or worse yet, China.
By definition, a bot can neither have any beliefs nor face any consequences for manufacturing “speech.” The destructive potential has been obvious to anyone paying attention over the last ten years, though each of us might point to different examples. And even while the debate over control of the nation’s physical borders rages on, our digital borders remain undefended by even the most modest safeguards.
If you believe control over our borders is important, then it should concern you that foreign actors—whether from Mexico, Greenland, China, or, yes, Russia—can influence our elections. Secretive algorithms produced by corporations can turbocharge these foreign influence campaigns without your knowledge, leaving Americans with no ability to challenge these industrial-scale influence efforts.
When China floods our market with cheap goods, President Donald Trump demands that we put tariffs in place to stop them. So we should also embrace tools to prevent China from flooding our social media—our marketplace of ideas—with false information designed to divide Americans and inflame political differences.
Many Americans of both parties will be troubled by the reality that foreign actors could be influencing our elections with the assistance—intentional or otherwise—of profitable but hidden algorithms controlled by tech executives who show no loyalty to any ideology, party, or creed beyond their economic self-interest.
Rightly or wrongly, the Supreme Court has extended free speech protections to corporations, but it has not yet done so for bots, especially those operated from, or on behalf of, interests outside the United States.
So what can be done about this? It’s simple: Meta, X, and other dominant social media platforms should require user verification of any account before allowing it to be amplified by their algorithms. Users should be identified as humans and by the country in which their content originates. All non-human posters on social media platforms should be clearly identified as such.
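To make the mechanics concrete, here is a minimal sketch, in Python, of how a platform might gate algorithmic amplification on verification status and label non-human or unverified posters. The account fields and function names are hypothetical illustrations of the idea, not any platform’s actual systems.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical account record; field names are illustrative only.
@dataclass
class Account:
    handle: str
    verified_human: bool           # passed a human-verification check
    origin_country: Optional[str]  # country of verification, if known
    is_automated: bool             # declared or detected bot/AI account

def eligible_for_amplification(account: Account) -> bool:
    """Only verified human accounts qualify for algorithmic amplification."""
    return account.verified_human and not account.is_automated

def content_label(account: Account) -> str:
    """Label non-human or unverified posters so readers know the source."""
    if account.is_automated:
        return f"Automated account ({account.origin_country or 'origin unknown'})"
    if not account.verified_human:
        return "Unverified account"
    return f"Verified human ({account.origin_country})"
```

The point of the sketch is that the gate sits in front of amplification, not in front of speech itself: anyone can still post, but only accounts verified as human qualify for algorithmic reach, and everything else carries a label.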
Though he proposed it for a different purpose, Jonathan Haidt, a leading voice on the harm that can be caused by social media platforms, has argued for user verification requirements before an account is eligible for algorithmic amplification. In a 2022 piece, “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” Haidt put it this way:
“Perhaps the biggest single change that would reduce the toxicity of existing platforms would be user verification as a precondition for gaining the algorithmic amplification that social media offers.”
His proposal even addresses some of the most obvious objections. It still permits the use of pseudonyms—you just have to be “verified as a real human being, in a particular country.”
We know this process is possible because both Meta and X already require verification for political advertisers. And certainly, changes to the algorithm can have a far greater impact on the number of people who see or don’t see a post than most advertising budgets can.
As Haidt points out, banks already have “know your customer” requirements to prevent money laundering. Surely, the amplification of politically toxic speech—speech that may very well be serving the interests of foreign actors—is every bit as grave a threat as money laundering.
Under this proposed approach, all Americans are free to say whatever they wish, and if the users are verified as humans, Zuckerberg or Musk can let their algorithms amplify that speech in any way they choose. But the algorithms shouldn’t be able to amplify the speech of bots and foreign actors without at least labeling it as such.
We have the right to know if content is generated by one of our friends or neighbors or an AI propaganda machine. Silicon Valley shouldn’t be able to profit so handsomely off our use of social media without accepting this small burden of verification.
Some might wish to go further and restrict what foreign actors and foreign nationals can say to influence U.S. elections and politics; others might fear that any regulation could lead to excessive regulation. But verification and transparency are worthy steps, no matter your views on additional measures. When democracy is careening down an ice-covered hill on a greased toboggan, it seems untimely to engage in a debate on the risks of a slippery slope.
As AI becomes an even more dominant part of our lives, these problems will only be magnified, meaning we should require verification of humans and disclosure of non-human online agents sooner rather than later.
So let’s start with the most basic thing possible: let’s make social media social again by verifying that we are interacting with actual humans, and perhaps, over time, we’ll even find out we are friends again.