Content regulation emerged as a controversial topic earlier this year after right-wing personality and frequent conspiracy theorist Alex Jones had his Infowars podcast removed from most platforms, including Apple, Spotify, Stitcher, and RadioPublic. Amid a social media firestorm, platforms rushed to ban Jones, sometimes within hours of each other, and often without articulating how exactly Jones’ speech violated their terms. The incident drew attention to the ethical and logistical challenges podcasting platforms face in balancing safety, diversity, and respect for free speech principles when articulating what content they allow on their services, and the difficulties in implementing such policies consistently.
Recognizing the importance of a principled approach, the Cyberlaw Clinic is pleased to release a new memorandum on content regulation policy for the podcasting community drafted by current Clinic students Zach Glasser and Carol Lin with Assistant Director Jessica Fjeld. We gratefully acknowledge the assistance of podcasting platform RadioPublic, whose co-founder and CEO Jake Shapiro is a member of the Berkman Klein Fellows Advisory Board.
The memo emerged from the Clinic team’s discussions with RadioPublic following the Infowars controversy. It shares the results of our research into how the industry is presently dealing with hateful content, based on an analysis of major podcast platforms’ content regulation policies, and it lays out a concrete range of options platforms have to moderate offensive speech. It’s our hope that this memo helps tailor the wider conversation about content moderation, including the recent release of guidelines from the Change the Terms coalition, to the particular needs of podcasting platforms.
Current Content Policies in the Market
We found that podcast platforms varied widely in the language of their content policies. Some were short and broad—for example, Apple Podcasts’ terms merely say that they will prohibit content that may be “construed as racist, misogynist, or homophobic” or that depicts “hate themes.” But even within Apple there are significantly different approaches to content regulation. Apple’s App Store Guidelines, unlike their podcast equivalents, include seven paragraphs defining “offensive content.”
The type of language a policy uses has implications for how a platform regulates. Policies that rely on expansive terms like “hate themes” or “mean-spirited content” give the platform greater flexibility in deciding what content violates the policy. But they also provide little notice to podcasters and listeners about what content will be found to violate the policy, and may invite seemingly arbitrary application. Earlier this year, Spotify received criticism for its enforcement of a vague “hateful conduct” policy and was forced to revise the policy and issue a statement admitting its error. On the other hand, policies with narrower, more specific terms provide more notice to the community but may leave a platform with its hands tied when content comes along that doesn’t fit neatly into that language.
Options for Regulation
The memo presents five possible strategies that podcast platforms could take with respect to content regulation. They are tailored to platforms that aggregate podcast feeds, as opposed to those that selectively host a subset of chosen podcast content on their own servers. The strategies fall on a spectrum from less regulation (and a stronger stand on freedom of speech) to more regulation (and a stronger stand on safety and respectful community engagement).
Starting at the least regulatory end of the spectrum, one option platforms have is to do nothing. A platform could decide that it should not have a say in what content is permissible, and take a strong stance on allowing all voices to be heard, no matter how offensive. The next option, if the platform isn’t comfortable giving harmful speech a fully equal voice, would be to restrict affirmative promotion of offensive material while still allowing the podcasts to remain on the platform and appear in search results. Most platforms engage in some measure of affirmative promotion, whether by allowing podcasts to pay for prominent placement or by generating recommendations for listeners.
The next choice for moderation implicates the technical way that podcasts are fed through the platform. Platforms like RadioPublic provide access to podcasts’ RSS feeds; they do not actually store content themselves. They provide the additional service of listing the podcast in a catalog and search engine, but ultimately all that is required to listen to a podcast on the platform is the RSS feed’s URL – much as a web browser like Chrome makes it possible for users to access an HTML page hosted anywhere. A platform therefore has the option of delisting content from its catalog, ensuring it won’t turn up in user search results, while still allowing users who have the feed’s URL to add it manually. This approach does much more to limit the reach of harmful speech than merely stopping affirmative promotion, but it won’t stop users from posting a link to the offensive content that uses the platform’s domain and branding.
Finally, the platform can block access to a podcast’s RSS feed altogether. This is the inverse of the do-nothing approach: platforms have the option to take a strong stance that certain content is simply not allowed on the platform in any form.
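To make the differences between these tiers concrete, the sketch below models a hypothetical catalog with four moderation levels corresponding to the options above: allowed, promotion restricted, delisted, and blocked. The memo does not prescribe any implementation; the class names, data structures, and functions here are illustrative assumptions, not drawn from RadioPublic or any other platform.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional


class ModerationTier(Enum):
    """Hypothetical moderation tiers mirroring the options discussed above."""
    ALLOWED = auto()       # do nothing: promotion, search, and playback all available
    NO_PROMOTION = auto()  # excluded from recommendations and paid placement
    DELISTED = auto()      # hidden from catalog and search; direct URL still resolves
    BLOCKED = auto()       # feed URL is refused entirely


@dataclass
class Feed:
    title: str
    rss_url: str
    tier: ModerationTier = ModerationTier.ALLOWED


class Catalog:
    def __init__(self, feeds: List[Feed]):
        self.feeds = feeds

    def recommendations(self) -> List[Feed]:
        # Only fully allowed feeds receive affirmative promotion.
        return [f for f in self.feeds if f.tier is ModerationTier.ALLOWED]

    def search(self, query: str) -> List[Feed]:
        # Delisted and blocked feeds never appear in search results.
        visible = (ModerationTier.ALLOWED, ModerationTier.NO_PROMOTION)
        return [
            f for f in self.feeds
            if f.tier in visible and query.lower() in f.title.lower()
        ]

    def resolve(self, rss_url: str) -> Optional[Feed]:
        # A user who pastes the RSS URL directly can still reach a delisted feed,
        # but a blocked feed is refused even with the exact URL.
        for f in self.feeds:
            if f.rss_url == rss_url and f.tier is not ModerationTier.BLOCKED:
                return f
        return None
```

The key design distinction sits in the hypothetical resolve step: a delisted feed still plays for a listener who supplies its URL by hand, while a blocked feed is refused even then, which is what separates the third option from the fourth.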
A Balancing Act
The choices a podcast platform faces—both in terms of the language of a policy and its options in implementation—involve balancing weighty principles. Platforms can choose to reserve more flexibility for themselves, taking an “I-know-it-when-I-see-it” approach. Or they can opt for more notice and transparency, with an eye towards moderating in a neutral and non-arbitrary manner.
There is also the ultimate trade-off between freedom of speech and reducing harm. Accepting that certain content has the capacity to harm, there is value to limiting the reach of such harm. But there is also value—especially given the spirit of the open internet in which podcasting was born—to a free and open marketplace of ideas. This balance is not black and white; platforms can opt for a middle ground that reflects the value in both positions, as some of the options above do. But there are tradeoffs in each one.
One final consideration goes into the balance and cannot be overlooked: resources. Many podcasting platforms are still in the early stages of development, and may not have the staff to comb through everything that comes through their services in search of offensive content. And long-form spoken audio—published in an increasingly wide range of languages—poses particular challenges to both human and automated inspection. A less regulatory approach might therefore be easier to implement consistently. To what extent should resource constraints factor into an otherwise value-laden decision?
True, the consequences of any particular platform’s decision are limited to its own services, and no single platform can effectively eradicate content from the internet the way other players with broader reach might; nevertheless, each company’s actions should reflect its mission and the values of its employees.