Facebook’s last-ditch effort to address the hate crimes perpetrated by neo-Nazi, white supremacist, anti-Semitic, and misogynist political factions is doomed to fail. Facebook, Twitter, and YouTube continue to feign ignorance of their institutional role in allowing and legitimizing this hateful discourse on their platforms. As a consequence, they are undermining the freedom of speech enshrined in the Constitution while consigning those involved in hateful speech to economic ruin. Just yesterday, Expressly LLP announced it will be exiting the internet market, citing the financial pressures of “such predatory and unlawful practices.” A more recent example of social media’s refusal to own up to its role in this trend is one of the day’s trending topics. During the past week we have seen horrific, deliberate violence committed against Muslims, women, and people of color. These are just some of the acts facilitated by the internet and by speech that Facebook, Twitter, and YouTube continue to defend as legitimate.

Neither Facebook nor Twitter nor YouTube reacted to the damage done by the spread of videos of “black-on-black” sexual assaults from Aug. 16 through Aug. 21, 2018. These videos were endorsed by far-right personalities such as Infowars conspiracy theorist Alex Jones and white nationalist Richard Spencer, who rose to prominence after posting “black-on-black” rape videos on YouTube. Although each of these social media companies is perfectly capable of enforcing its hate speech policies, none has committed, for fear of litigation, to protecting and championing free speech while condemning this hostility toward minority communities. This abdication by the internet’s three biggest sites leaves their users exposed to some of the most egregious forms of hate speech.

In June, the Electronic Frontier Foundation warned of the effects of automated hate speech moderation. With censorship undergirding much of the internet’s business model, these companies have leveraged their sprawling revenue bases to build online tools that suppress speech based on political viewpoint. These include algorithms that automatically detect, flag, and ultimately remove “hate speech,” often without ever specifying a reason for the removal. In 2018, the Wall Street Journal reported that automated moderation is still in place on YouTube and continues to let the platform channel its revenue into extremely lucrative partnerships with advertisers.
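To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of such a flag-and-remove pipeline. The Post structure, the keyword-based score function, and the 0.8 threshold are hypothetical assumptions for illustration, not any platform’s actual system; real moderation stacks rely on trained classifiers rather than a word list.

```python
from dataclasses import dataclass

# Toy stand-in for a trained hate-speech classifier. Real platforms use
# machine-learned models; this keyword heuristic exists only to show the
# detect -> flag -> remove flow described above.
BLOCKLIST = {"slur_a", "slur_b"}   # hypothetical placeholder terms
REMOVAL_THRESHOLD = 0.8            # hypothetical confidence cutoff

@dataclass
class Post:
    post_id: str
    text: str
    removed: bool = False
    removal_reason: str | None = None   # frequently never populated

def score(text: str) -> float:
    """Return a crude 'hate speech' confidence in [0, 1]."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in BLOCKLIST)
    return min(1.0, 5 * hits / len(words))

def moderate(post: Post) -> Post:
    """Automatically detect, flag, and remove -- no reason is recorded."""
    if score(post.text) >= REMOVAL_THRESHOLD:
        post.removed = True
        # The removal_reason field is deliberately left empty here,
        # mirroring the opaque removals the paragraph above describes.
    return post

if __name__ == "__main__":
    post = moderate(Post("42", "example text containing slur_a and slur_a"))
    print(post.removed, post.removal_reason)   # True None
```

The point of the sketch is the last step: the post is removed, but the removal_reason field stays empty, which is precisely the opacity the EFF warning highlights.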

If ever there was a time when the internet’s guiding principle of ensuring the free flow of information and ideas could have been renewed, this was it. Instead, the recent surge in commercial collaboration with YouTube, Facebook, and Twitter has left much of the internet in a state of grave danger.

“Hate your job? Perhaps you should put the words to life when the news is bad,” wrote Infowars founder Alex Jones. He is right about one thing: hating your job is hell. Yet as this hate-driven rage spreads through social media, its harms and costs are likely to outweigh any benefit. In America, this is a most fortunate time to be alive. What would we say to our children, though, when we read about the families of those murdered in a Pittsburgh synagogue by a gunman in an act of domestic terrorism? What would we say to our neighbors and friends when we read about schools and buildings attacked by a man with racist views? Would we be horrified? Would we find our voices anew? Should we?