Sex Cult and Facebook: How the Communications Decency Act Amplifies Obscene Libel

Many people now know that social media providers suppress content at odds with certain political or ideological goals. What’s seldom discussed is that so long as content is irrelevant to those goals, social media will complacently allow it to destroy lives. Here’s a story about that.

A few years ago a middle-aged, unemployed, do-nothing internet maven took to Facebook to assert, outlandishly and falsely, that my clients, their six children, and a portion of their church had organized a sex cult and were grooming and preying upon minors. His post went viral, and, since he had provided my clients’ photographs and contact information, they endured several years of particularly vicious abuse, including death threats. As a consequence, the wife lost her successful wedding and family photography business, and they were forced to leave their church and to live more or less in hiding for two years. The strain on their children, most of whom were minors, was considerable. The poster seemed to enjoy his notoriety until we filed the defamation suit in which we prevailed. 

For the rest of this story to make sense, I have to talk a little law. The relevant bit of the Communications Decency Act, 47 U.S. Code § 230(c), reads:

“(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).”

Here “publisher” is a technical term, meaning to lawyers something like “somebody who is liable in a defamation lawsuit for false and harmful speech.” So the upshot of paragraph (c)(1) is that social media providers can’t be sued for defamation on account of user content. Early on, some courts found that providers could be required, by an injunction, to remove defamatory posts, but by around 2010, many courts had held that paragraph (c)(1) forbade even this injunctive relief.

Paragraph (c)(2) allows that, although providers are not to be regarded as publishers, they may in certain instances act like publishers by restricting access to objectionable or harassing material (such as, one would imagine, insane allegations of mass sexual predation). The naive hope was that this language would encourage providers to behave as “Good Samaritans,” promoting the interests of users like my clients rather than advancing providers’ own objectives.

To return to the story, immediately upon seeing the sex cult posts, my clients reported them to Facebook and requested their removal. They were ignored. I repeated their request and was likewise ignored.

When we filed suit we added Facebook as a party defendant. We knew that our claim against Facebook wouldn’t go anywhere, but I thought I might get a Facebook attorney on the phone to point out that Facebook had the right to remove such defamatory posts, that it couldn’t advantage Facebook in any way to keep them up, and that, well, removing them would be a kindness. I did promptly receive a call from one of their corporate attorneys, who cordially told me that they had no intention of honoring my clients’ request, and that Facebook had adequate resources to litigate my clients into the ground.

And so today, my clients have a $1.5 million judgment against a deadbeat from whom they’ll never collect, and reputations damaged by scurrilous internet libel that may never be effaced. The current discussions of the political implications of blocking, shadow banning, and the like are useful and necessary, but will be incomplete so long as social media can, with impunity, so amplify the croaks of such venomous toads. The remedy is simple enough: when a court of competent jurisdiction finds content to be defamatory, give it the power to compel social media to remove it.

Meanwhile, let’s change the name of the thing; considering its consequences, “Communications Decency Act” is just too perverse.
