SM Addiction Chronicles, Part II: Social Media Still Enjoys Immunity

Even as cigarette manufacturers can no longer escape liability for addiction-inducing practices, social media (SM) companies hide behind the First Amendment and the Communications Decency Act (CDA) to protect themselves. A recent case dents this shield. But the decision still protects some SM activities, even those allegedly engendering addiction in teens. I disagree with the Court’s reasoning. Here’s why.

Recently, 42 attorneys general banded together to sue Meta, its minions, and compatriots (the “Meta-monde”) for harming teens, including causing suicide, anorexia, and depression. To avoid the protections provided by Section 230 of the CDA and the First Amendment, the plaintiffs focused on SM activities independent of content and traditional publishing, such as failing to require age verification, parental consent, or parental notification; designing a defective product; and failing to warn of known dangers.

As I discussed last week, a federal judge just ruled against most of the Meta-monde’s motion to dismiss the complaint. But the Court also ruled that some SM activities remain protected. These include the timing, clustering, and content prioritization of notifications, precisely the activities associated with dopamine-related addictions. Even as the Court noted the addiction association, it ruled that the plaintiffs can’t sue because:

“[T]he timing and clustering of notifications of defendants’ content to increase addictive use … is entitled to First Amendment protection. … [T]he content of the notifications themselves, such as awards, are speech. …”

Even more than content, the quantity of time devoted to SM endangers teens, rendering them ripe targets for addiction. [1] “… [Of] nearly 750 13- to 17-year-olds [surveyed], 45% are online almost constantly.” A recent study published in JAMA indicated that “teens using social media more than three hours per day may be at heightened risk for mental health problems…” 

That’s apparently not enough for the Meta-monde. Additional SM force-feeding is mediated by algorithms fostering user interaction. “Notifications” include likes, comments, direct messages, and friend requests, relayed 24/7 and designed to grab and hold attention. They can be hard to ignore, as sounds or pop-ups often accompany them. Notification functions also pander to teens’ quest for popularity. Parents of teens who killed themselves have even attributed the suicides to the “befriending and unfriending” process. [2]

"Social media is designed to hook our brains, and teens are especially susceptible to its addictiveness.”  

- Nancy DeAngelis, Director of Behavioral Health

Dopamine: The Pleasure-Elixir

Notifications trigger the release of dopamine, the neurotransmitter that plays a key role in our brain’s reward system, reinforcing pleasurable behaviors and “keeping us hooked.” Because teenagers’ brains are still developing, teens are supposedly more susceptible to the dopamine-mediated gratification schemes the platforms provide, which engage the same pathways and responses implicated in addiction, according to a recent review study.

Intermittent Reinforcement and Prioritized Content

The plaintiffs also sued for damages caused by notifications generated by “intermittent reinforcement.” This, they claimed, facilitated addiction because the lack of predictability in response times renders the notifications even more exciting, further increasing dopamine release. In the plaintiffs’ words:

“By algorithmically serving content to young users according to variable reward schedules, Meta manipulates dopamine releases in its young users, inducing them to engage repeatedly with its Platforms—much like a gambler at a slot machine.”

While Judge Gonzalez noted this activity’s link to addiction, she barred this claim as well, along with claims challenging content prioritization and notifying users of preferred topics. The potential impact of this practice is disastrous. As one parent of a teen who committed suicide said:

“There is a world of websites filled with negative content. … If you frequently search for negative things on Instagram, then Instagram will show you more negative content. That not just makes it harder; it makes it look like that is all there is. Because that is the only thing that comes up.”

Experts theorize that repeated desensitization to suicide via descriptions and images enables “an acquired capability for suicide”: repeated exposure to painful and provocative life events habituates an individual’s emotional response. Some suggest that the visual stimuli may also affect acquired capability through neuropsychological processes that habituate fear.

The Court’s Approach: Blind First Amendment Obeisance

While applying product liability law in a nuanced fashion to sustain various claims, the Court displayed a rather blind obeisance to First Amendment law in protecting Meta-monde’s algorithmic behaviors and the timing and clustering of notifications:

“[T]he timing and clustering of notifications of defendants’ content to increase addictive use … is entitled to First Amendment protection. There is no dispute that the content of the notifications themselves, such as awards, are speech. Whether done by an algorithm or an editor, these are traditional editorial functions that are essential to publishing … ‘to maximize the duration and intensity of children’s usage’ of the platform.”

Respectfully, Judge Gonzalez, I disagree:

In this context, the content or algorithmic conduct was not utilized for purposes protected by the First Amendment. Its “functional equivalent” was not to inform, persuade, or enlist. This was not a contribution to the marketplace of ideas; it was designed to alter the conduct of children and third-party adults. Nor do I see the words “views,” “opinions,” or “ideas” surface in the opinion, words inextricably intertwined with First Amendment analysis.

Indeed, the notifications are used not to entice or convince but to control readership, no different from the bell Pavlov used to condition his dogs. This is mind control. Sadly, per the decision, it is allowable under the guise of the First Amendment.

And while the Judge states that:

“[t]he basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary when a new and different medium for communication appears”  

that is out of tune with science. It may be true when the medium is designed to spread knowledge or sway opinion, but when the words or practices are designed for the sole purpose of causing addiction (much as adding menthol or other flavorings to cigarettes, which is forbidden), this cannot be sanctioned.

Vegas, Here We Come

Like the perpetually semi-darkened Vegas gambling casinos that blur the boundary between day and night, SM notification practices bind the user to the interaction, beckoning them to engage 24/7, without regard to quotidian rhythms, via the dopamine drip. Deeming the failure to institute “[b]locks to use during certain times of day, such as during school hours or late at night” as protected from suit is akin to allowing children to enter gambling casinos.

As at the gaming tables, dopamine release is triggered by stroking, intermittent stoking, and proven Pavlovian stimuli like gold stars and sugar cubes; e-rewards function no differently. It is here that the plaintiffs will face their greatest challenge: even in the product liability matrix, the plaintiffs must prove causation. One mechanism could be the dopamine rush triggered by the intermittent notification system. Yet the Court found this conduct to be protected.

Protected Words and First Amendment Exceptions

In the First Amendment arena, Oliver Wendell Holmes articulated the famous rule that speech can be punished when words

“are used in such circumstances and are of such a nature as to create a clear and present danger…. It is a question of proximity and degree.”

Holmes addressed words that incite violence, making a determination based on “an individual or group’s protected status,” considering frequency, severity, and physical threat. How much more proximate and intense could words be than the 24/7 prodding and poking of SM nudges and notices? And what attack is more violent than self-harm and suicide?

Words as content-laden vehicles are not the concern in this instance. The timing of the words precipitates the response, targeting not the higher brain functions over which, presumably, we can exert control, but brain activity outside our volition.

Further, even under strict First Amendment analysis, exceptions apply; one is child pornography, which surfaces as a result of allowed SM conduct. As to what constitutes child porn, courts are famously reluctant to rule in advance, with Justice Potter Stewart memorably saying, “I know it when I see it.”

It’s hard to understand why the judge doesn’t see that the situation here should invoke these exceptions.

Judge Gonzalez begins her opinion by noting the CDA’s goals of child safety and well-being. Sadly, the court seems to elevate First Amendment form over substance, rendering a biologically invalid decision. Hopefully, some appeals court will redress the errors and catch the law up with the science.

 

[1] Meta internal documents reportedly refer to children as “herd animals.”

[2] These findings come from a study published in the British Medical Journal. The study relied on semi-structured interviews with parents and peers of teens who killed themselves and is more appropriately a series of observational case studies than a rigorous epidemiological study. Nevertheless, the anecdotal interviews are telling.
