
Gender Analysis videos blocked in YouTube’s restricted mode

In 2010, YouTube introduced the “restricted mode” feature, an option that users can enable to “help screen out potentially mature content that you may prefer not to see or don’t want others in your family to see”. At the time this feature was rolled out, the New York Times made note of certain shortcomings of the filter, which failed to block a wide variety of graphically violent and sexually explicit content. Nevertheless, numerous websites provide instructions for system administrators to force this restricted mode in environments such as schools.

Recently, increasing attention has been given to another failing of YouTube’s restricted mode: The setting incorrectly blocks many videos containing gay, lesbian, bisexual and transgender-related content – even videos which have no explicit or remotely inappropriate material. The restricted mode has erroneously excluded coming-out stories, biographies of LGBT individuals, and information about transitioning.

As of this week, YouTube has acknowledged that the restricted mode “isn’t working the way it should”, and stated:

We designed this feature to broadly restrict content across more mature topics, whether these are videos that contain profanity, those that depict images or descriptions of violence, or discussion of certain diseases like addictions and eating disorders.

While YouTube has apologized for miscategorizing LGBT content and promised to “do a better job”, their explanation raises even more questions. And while they work to correct this feature, most videos in the Gender Analysis series remain blocked in restricted mode, despite lacking explicit content. It’s crucial that we take a closer look at what this filter is doing, and its implications for young people who are trying to access information on these important topics.

 

Essential Gender Analysis content is restricted

Much of the Gender Analysis series focuses on correcting the inaccurate information about trans people and gender topics that circulates in the public sphere. We work to educate cis people as well as equip other trans people to fight harmful myths and misconceptions. When campaigns of misinformation are circulated without facing the necessary scrutiny in response, this does a disservice to millions of trans people around the world and the countless cis people who support us.

We’ve worked for years to refute some of the most persistent and damaging deceptions of transphobia – and now, YouTube’s restricted mode is blocking our work. At the time of this writing, out of 26 Gender Analysis videos that have been uploaded to YouTube, only 7 are visible in this mode:

The Gender Analysis channel when viewed in restricted mode.

The blocked videos do not feature inappropriate subject matter. Rather, they include some of our most important coverage, such as:

Together, these videos address some of the most widespread and pernicious myths that trans people will often be confronted with in their daily lives, and the choice of YouTube as a venue is crucial for our content to reach a transgender audience. Trans youth, in particular, frequently make use of YouTube as a valuable resource for learning more about trans topics, connecting with peers, sharing knowledge and experiences, and achieving basic self-recognition:

They aren’t just turning to social media for emotional support — but also for answers as they try to make more sense of what they might be going through.

“I’m not sure I would have coped as easily if I didn’t have social media,” Ela Hosp, a 19-year-old student who identifies as non-binary, told CBS News. “For my generation, I feel like it’s a good outlet to find people you can connect with and find things in common with who you can ask questions. ‘I feel this way, I don’t know why,’ and then someone can say ‘Oh, I felt that way too.’ And then you find out that there’s thousands of other people who feel the same way.”

“There is a real community being built on YouTube for transgender and gender non-conforming people,” said Phillip Picardi, digital editorial director at Teen Vogue. “Because there’s not a wealth of information, or easily accessible information, for people about anything from hormone therapy to gender confirmation surgery, even to coming out for this demographic, they’re going to go to YouTube first, which is really cool. Because I think it’s a way to go right to the source of people who should be telling these stories.”

These youth are especially vulnerable to specific items of transphobia that are wielded against them by unaccepting parents, relatives, and school administrators. Trans support forums like Reddit’s /r/asktransgender are full of threads from individuals who’ve been challenged with citations of Paul McHugh’s long-debunked claims or Walt Heyer’s misrepresentations of cases of detransition, and this is a large part of why we’ve chosen to address topics such as these.

YouTube’s youth-oriented content filter is now blocking vulnerable young trans people from accessing resources that would help them fight back against the misinformation that is directly being used to harm them.

 

Content restrictions are always value judgments

In recent discussions over YouTube’s restricted mode and its effects on LGBT-related content, some have made the straightforward point that if one does not wish to use this feature, one can simply choose not to and ignore it entirely. While this may seem like a simple answer, it disregards the actual impact of the feature and fails to examine its deeper implications, instead dismissing this with the infinitely flexible tautology of “it does exist therefore it should exist”. Users in environments such as schools may not have the option to disable the filter, and even if they could, the existence of a feature is not inherently morally neutral.

A platform’s choice to make certain features available isn’t a neutral act – it invites users to try those features. The options that are offered represent a decision by the platform about which uses they will facilitate and which uses they will not. If YouTube were to implement a “show only snuff films” option, far fewer people would defend this by retreating to the excuse that ‘you can just turn it off’.

The restricted mode itself fails to meet a number of desired use cases. For instance, suppose that as parents, Heather and I would prefer that our teenage son not learn disinformation about feminism from the angry, ignorant men’s rights activists who populate YouTube. Are we afforded this option? No. Yet a filter does exist which is blocking our own efforts to push back against inaccurate and hateful content.

YouTube’s response, referring to “mature topics” and “discussion of certain diseases like addictions” as criteria for determining whether a video is restricted, does not help to clarify the standards being applied. Some content pertaining to substance addiction may be excessively graphic or otherwise inappropriate for younger viewers, yet some may not be. Notably, there isn’t necessarily a mainstream consensus on whether substance abuse is an inappropriate topic for younger audiences at all. For decades, government programs have exposed elementary and middle school students to information about illicit drug use, with graphic descriptions of the purportedly dire consequences. Sex ed at these grade levels – a “mature topic” by some standards – may be similarly explicit: in 5th and 6th grade, my class was treated to graphic visual imagery of the effects of various STIs. Are non-explicit discussions of same-sex attraction and gender identity somehow more “mature” than this in a way that merits blocking them from younger viewers?

Put simply, it’s difficult to trust a categorization of “discussion of certain diseases” on a platform where our videos are regularly spammed with hateful comments like “transgender is a mental illness”. (The restricted mode has already become a rallying point for homophobes and transphobes, being celebrated under topics like “Sodomites upset because they can’t spread their mental illness to minors”.) These are the consequences of YouTube’s imposition of a restricted status – a feature which raises the question of what exactly is subject to restriction, and why. We deserve a full explanation of which specific criteria and judgments are cloaked behind the vagueness of “restricted”.

 

Addendum: Is a “restricted mode” possible?

(Status: Highly speculative.)

Offering a feature like restricted mode means taking on a difficult task: How do you draw a line separating YouTube into two parts? How do you decide, in the case of every single video, whether it belongs inside or outside of a particular box? There are simple criteria that one could choose for this: a video is either under 1 minute long, or it is not; its metadata contains the word “kitten”, or it does not; it’s been flagged at least twice, or it has not. The goals of restricted mode, however, require the use of criteria that are more complex. This can mean having to implement operational definitions that meaningfully answer very difficult questions like:

  • What is “maturity”?
  • What is appropriate for children?
  • What is a disease?

And so on. One could try to implement this by manually programming a lengthy, ever-growing list of “does it contain ‘kitten’”-style rules to be applied to each video. But on a dataset as large as “every video on YouTube”, no list of rules is likely to be long enough to cover the entire range of content, and rule-based systems fail to capture the nuances of context. A gay high school student may upload a video about being called a ‘faggot’ by a classmate, and a content filter may pick up on that word and restrict the video, even though it was intended to highlight the difficulties faced by LGBT students and offer a message of support and solidarity.
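To make that limitation concrete, here is a toy sketch of what such a rule-based filter might look like. Every rule, field name, and threshold below is invented for illustration – this is not YouTube’s implementation – but it shows how a keyword match fires regardless of whether the word appears in a hateful rant or a supportive anti-bullying video:

    # A toy rule-based restriction filter. All rules, field names, and
    # thresholds are invented for illustration; this is not YouTube's code.

    RESTRICTED_WORDS = {"faggot", "porn", "fetish"}   # hypothetical keyword blocklist

    def is_restricted(video: dict) -> bool:
        """Apply simple yes/no rules with no awareness of context."""
        text = (video["title"] + " " + video["description"]).lower()
        if any(word in text for word in RESTRICTED_WORDS):
            return True                  # fires on slurs quoted in supportive videos too
        if video["flag_count"] >= 2:
            return True                  # a "flagged at least twice"-style rule
        return False

    # A supportive video about surviving anti-gay bullying is caught by the
    # same keyword rule that catches genuinely hateful or explicit uploads:
    video = {"title": "A classmate called me a faggot - what I want LGBT students to know",
             "description": "A message of support and solidarity.",
             "flag_count": 0}
    print(is_restricted(video))   # True

No matter how many such rules are added, each one sees only the surface features of a video, never the intent behind them.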

Instead, a platform might opt to use a set of its own content or other similar content in order to train a system, such as a neural network, to distinguish appropriate and inappropriate material. For instance, Yahoo’s open_nsfw, an open source neural network trained to assign a score of “not-safe-for-work-ness” to a given image, was taught using a set of pornographic images and a set of non-pornographic images. (The algorithm was notably “reversed” in a DeepDream-like manner and used to generate a series of images which it considered maximally pornographic or maximally safe-for-work.)
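For a rough sense of what “training” such a system involves, the sketch below fine-tunes an off-the-shelf image classifier on two folders of labeled example frames. The folder layout, model choice, hyperparameters, and use of PyTorch are all assumptions made for this example; nothing here reflects how YouTube or Yahoo actually built their systems:

    # Minimal sketch of training a binary "safe vs. restricted" image classifier,
    # loosely in the spirit of open_nsfw. Dataset layout, model choice, and
    # hyperparameters are placeholders, not anyone's production system.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Expects two folders of labeled example frames: data/safe/ and data/restricted/
    dataset = datasets.ImageFolder("data", transform=transform)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)   # two outputs: safe / restricted

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

    # At inference time, the softmax probability of the "restricted" class plays
    # the same role as open_nsfw's NSFW score: a number between 0 and 1 that the
    # platform can threshold as aggressively as it chooses.

The crucial point is that such a system learns only whatever regularities distinguish the two piles of training examples – it has no concept of “maturity” beyond them.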

YouTube possibly makes reference to using such a system in their explanation on the Creator Blog, where they discuss “training” the restricted mode:

While the system will never be 100 percent perfect, as we said up top, we must and will do a better job. Thanks to your feedback, we’ve manually reviewed the example videos mentioned above and made sure they’re now available in Restricted Mode — we’ll also be using this input to better train our systems.

However, YouTube faces distinctly different challenges in classification. open_nsfw need only estimate the likelihood that the average passerby in a typical office setting would, at a glance, consider a particular image to feature inappropriate (pornographic, mainly genital-centered) nudity. But such content is already largely prohibited on YouTube. Instead, “inappropriate” content on YouTube takes far more diverse and even abstract forms, such as:

Whether or not such material can be termed “pornography”, it’s obvious that a significant number of people are likely using it as such. These seven videos alone define a very wide range of themes for any algorithm to recognize and designate as inappropriate. And a system trained to recognize such abstract content as being inappropriate may end up falsely ensnaring a great deal of harmless material as well – “adjacent” content which might include some similar features such as shoes or discussion of gender, but with contextual differences that are too subtle for the filter to pick up, even as a person might easily see the distinction.

While YouTube rightly notes that no filter will be 100% accurate, the sheer volume of material on YouTube means that even a 1% failure rate can result in hundreds of inappropriate videos coming up for nearly any search term, even in restricted mode. When the goal is to protect youth from such material, YouTube has every reason to cast a very wide and aggressive net – and this could mean boosting the sensitivity of the filter to the point that false positives are frequent and unavoidable. For these reasons, making this feature operate in a broadly useful and accurate fashion may not be possible on a platform such as YouTube.
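A back-of-the-envelope calculation illustrates the tradeoff. All of the numbers below are assumptions invented for this example, not YouTube statistics, but the arithmetic shows why driving the miss rate down inevitably drives the number of wrongly blocked videos up:

    # Hypothetical numbers only - chosen to illustrate the tradeoff, not measured.
    search_results = 100_000          # videos matching some popular search term
    inappropriate_share = 0.20        # assume 20% of them are genuinely inappropriate

    inappropriate = search_results * inappropriate_share     # 20,000 videos
    benign = search_results - inappropriate                  # 80,000 videos

    # Lenient threshold: misses 1% of inappropriate videos, wrongly flags 0.5% of benign ones.
    slipped_through = inappropriate * 0.01                   # 200 inappropriate videos shown
    wrongly_blocked = benign * 0.005                         # 400 harmless videos hidden

    # Aggressive threshold: misses only 0.1%, but wrongly flags 5% of benign videos.
    slipped_through_aggressive = inappropriate * 0.001       # 20 inappropriate videos shown
    wrongly_blocked_aggressive = benign * 0.05               # 4,000 harmless videos hidden

    print(slipped_through, wrongly_blocked)                        # 200.0 400.0
    print(slipped_through_aggressive, wrongly_blocked_aggressive)  # 20.0 4000.0

Under these made-up figures, cutting the number of inappropriate videos that slip through from 200 to 20 costs an extra 3,600 harmless videos wrongly hidden – exactly the false-positive pressure described above, and exactly the position that non-explicit LGBT content now finds itself in.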


Enjoy our work? Gender Analysis is supported by reader pledges on Patreon.

Zinnia Jones: My work focuses on insights to be found across transgender sociology, public health, psychiatry, history of medicine, cognitive science, the social processes of science, transgender feminism, and human rights, taking an analytic approach that intersects these many perspectives and is guided by the lived experiences of transgender people. I live in Orlando with my family, and work mainly in technical writing.