Last June, conservative YouTuber, and moron who thinks children dehydrating to death is funny, Steven Crowder, had his account demonetized by YouTube. This decision had some relationship with Crowder calling Vox journalist Carlos Maza a “lispy queer” and promoting shirts that said “socialism is for f*gs [sic]”. Some onlookers were confused when YouTube said that Crowder had not technically violated any of their rules.
YouTube’s community guidelines are written vaguely enough to have easily justified banning Crowder (the prohibition on humiliating and bullying others, for example), and Crowder was in fact penalized by way of demonetization, despite not having broken any rules in the opinion of YouTube itself. You may be surprised to learn that you do not need to violate YouTube’s rules to be demonetized, or to be penalized in other ways.
Alexandria Walden, a representative of Google, told the House Judiciary Committee last April:
“We found that content can sit in a gray area, that comes right up against the line. It may be offensive, but it does not violate YouTube’s policies against incitement to violence and hate speech. When this occurs, we build a policy to drastically reduce the video’s visibility by making it ineligible for ads, removing its comments, and excluding it from our recommendation system”.
Walden was not asked about this remark by a single Congressperson present that day, not least because Candace Owens somehow managed to get invited to the same hearing.
Google, which owns YouTube, has also stated that, “content that doesn’t violate our policies, but is close to the removal line and could be offensive to some viewers, may have some features disabled”.
In a blog post published the same day Crowder was demonetized, YouTube said, “[i]n addition to removing videos that violate our policies, we also want to reduce the spread of content that comes right up to the line”. They then boasted that they had reduced the views of hateful content by 80%.
These remarks give us two interesting insights into YouTube’s content moderation: One, some material should apparently have its exposure reduced, but not eliminated; and two, YouTube targets content they do not consider to be in violation of their own rules. Steven Crowder may not have violated YouTube’s rules, but he doesn’t have to.
If you are confused by the concept of “almost hate speech” then you are missing the point. YouTube’s community guidelines, and their “advertiser-friendly content guidelines”, are designed to protect their customers, not their temporary (and easily replaced) employees.
Seen this way, YouTube is not really an abusive monopoly, since they are actually quite responsive to the needs of their advertisers (which include banning and demonetizing creators that are embarrassing for them). If anything, they are a monopsony, meaning they are a dominant buyer. In this case, they are the dominant buyer of online videos, making the creators of those videos vulnerable to YouTube’s whims and incompetence.
YouTube’s Business Model
YouTube’s relationship with its creators is counter-intuitive to some. YouTube’s customers are its advertisers, and its creators are independent contractors (Vimeo’s terms of service say this explicitly, while for whatever reason YouTube’s do not). When YouTube demonetized Steven Crowder, they essentially stopped buying his services but kept on publishing his videos. Crowder has tacitly agreed to all this, and continues to make his videos on a volunteer basis.
Demonetizing Crowder is analogous to cutting the hours or pay of an employee who is somewhat annoying, but not quite annoying enough to fire.
Beginning in March 2017, YouTube began creating many of the “brand safety” policies that would lead to Crowder’s demonetization two years later. This policy change was in response to an advertiser boycott that included McDonald’s and the UK government, after their ads were found playing on videos with bigoted or otherwise brand-damaging content.
Advertiser boycotts continued from November of 2017 through the winter of 2018 in response to dubious videos involving minors, and included brands such as Adidas, Disney, and AT&T. These shocks to YouTube were the result of its ad-driven, crowd-sourced model, which makes it accountable to the needs of many different companies with public images to protect, and gives it less control over what content appears on the platform at any given moment.
In 2018, YouTube claimed to have made at least 30 policy updates related to content moderation. They of course framed this as a sign of healthy flexibility (a signal to advertisers that they are trying), rather than what their content moderation is becoming, and in many ways already is: an opportunity for political entrepreneurs, and a source of confusion.
YouTube’s business model forces them to constantly adapt their content moderation policies to keep up with their customers’ needs and avoid bad press. The combination of the incentives YouTube has to moderate content, along with their sheer size, provides a clear opportunity for political entrepreneurs to target their rivals.
The incentive for political operatives to get their foes banned from YouTube ought to be clear enough, but it is enhanced by their inability to get the state to do their censoring for them. Since it is difficult to get the government to censor on your behalf, YouTube censorship serves as a mediocre substitute.
YouTube’s scale means that a permanent ban can severely damage someone’s influence, and YouTube’s business model makes them vulnerable to negative press. YouTube scandals such as the Crowder-Maza debacle will continue for as long as YouTube’s market dominance and business model exist.
For all I know, Carlos Maza did feel harassed by Crowder (I reached out to him for comment, but received no answer). However, if Crowder had called Maza a “lispy queer” to his face, he likely would have faced no consequences. But Crowder said it on YouTube, and this gave Maza an opportunity to retaliate, and he seized it (during pride month, when it would be most likely to be seen and recognized).
But why couldn’t YouTube just ban Crowder? YouTube has another powerful incentive guiding its content moderation: pre-empting state regulation.
Behind every misstep YouTube takes lurks the possibility that the government may one day tell YouTube how it may and may not regulate the content that appears on its own website. Representatives of social media companies are frequently asked what they are doing to fight extremism, conspiracy theories, and misinformation, but it is rarely explained why YouTube has an obligation to “fight” these things. When legislators ask YouTube to censor content, it is because they too see YouTube censorship as a substitute for what they are not permitted to do themselves.
The implicit threat to regulate YouTube is often made explicit. During the same Judiciary Committee hearing last April, Congressman Cedric Richmond (D-La) told representatives of YouTube and Facebook, “I would just encourage you all to figure it out, because you don’t want us to figure it out for you”.
Freshman Senator Josh Hawley (R-Mo) has built a reputation for his mutterings about the need to regulate big tech, and the President’s eldest son has whined and moaned about his Instagram posts being removed and called for content-neutral regulation (the related Hill op-ed was re-tweeted by the President).
So while YouTube has to protect its advertisers, it also has to avoid provoking legislators with diverse motives and constituencies, and this includes conservative legislators with an interest in protecting (and being seen to protect) conservative YouTubers.
Since YouTube has to alter its behavior to avoid regulation, the threat of regulation effectively functions as real regulation.
The pressures put on YouTube all but guarantee that it continues to embarrass itself and frustrate its creators for the foreseeable future. But are there any serious alternatives to YouTube?
Ideally, a rival YouTube would be financed by user subscriptions rather than ads, making the platform accountable to content creators first. Those creators could then sell content to earn a living, and since the content must be purchased, the platform would bear less responsibility to protect viewers from content they find offensive (since, you know, they went through the effort of actually buying it).
At this point, you may be thinking, “wait… this moron just described Vimeo”.
Vimeo won’t save you (and neither will Twitch nor PornHub)
Vimeo has a model that would seem to shield it from the problems YouTube has. Its primary source of revenue is subscriptions from content creators, and it takes a small cut of the fees creators charge viewers. Vimeo also sells no ads, and so doesn’t have to worry about advertiser boycotts.
But Vimeo’s rules on harassment and hate speech are stricter than YouTube’s.
Twitch (a streaming platform primarily used by gamers which sells ads and takes a cut of creator revenue) has terms of service that are tighter than YouTube’s too, and bans “otherwise objectionable” content.
However, both Vimeo and Twitch are owned by larger companies, InterActiveCorp and Amazon respectively, and so are vulnerable to boycotts of their parent companies and the firms they own (how the Washington Post would report on a boycott of Amazon and Whole Foods over bigoted content that originated on Twitch sounds like the plot of a boring dystopian novel; I do not believe many people know that IAC owns Tinder, The Daily Beast, and a few other things, so maybe they are safe).
There has also been the occasional whisper of PornHub expanding its “safe for work” section to rival YouTube (it’s only safe for work if you work from home). But it seems unlikely that a brand associated with porn will be able to sell enough ad space.
The sort of model that would produce a better video sharing platform would make it accountable to creators and viewers, which means a subscriber-based model rather than an ad-based one. It would also have to be unassociated with a larger parent company. Its terms of service would be crystal clear, and would exclude phrases such as “otherwise objectionable” or “borderline content”.
I am not aware of any such platform, and am unconvinced that it would be much better than YouTube anyway.