5

Regulating the Networked
Broadcasting Frontier

In early February 2015, stories started appearing about a Twitch channel named SpectateFaker being shut down by a DMCA claim originating from a competing esports live streaming site, Azubu. The channel had allowed Twitch viewers to watch the famous professional Korean League of Legends player Lee “Faker” Sang-hyeok’s solo queue games whenever they were happening and without him initiating the stream (see figure 5.1). The channel wasn’t a rebroadcast of anything from Azubu but instead leveraged the website OP.gg’s ingenious use of the game’s internal spectator mode. Utilizing a creative chain of technology, the SpectateFaker channel automatically launched whenever Faker was playing these types of games and broadcast directly to viewers via Twitch.


FIGURE 5.1. Screenshot of the SpectateFaker channel in offline mode, 2015.

Azubu, which had entered into an exclusive broadcast agreement with the Korea Esports Association along with a number of Korean teams and players including Faker and his team SK Telecom T1 just six months prior, clearly got nervous. A clever fan utilizing a competing platform was now challenging that agreement, which Azubu had touted as “historic.”

The Twitch channel owner who administered the broadcasts, a user going by the name StarLordLucian, posted on the League of Legends subreddit that he felt Azubu had made a “false claim” and he was going to fight it. He also sought advice, asking what he should do and if “there [was] anyone out there with ‘Powers’ like Azubu who could clear this up and get the channel back?” He noted that he was not making any money off the Twitch stream and was “running the stream 100% for fun” (StarLordLucian 2015a). The thread reached over twelve hundred comments with people weighing in, including everything from asking for information about how the system worked to offering musings on intellectual property and ethics. That initial post was followed by nine updates from StarLordLucian as he began reaching out to Azubu, Riot, and Twitch to address the situation.

Over the course of the updates, you can see him trying to process the information that he is getting, especially the responses from Riot. The company initially emailed him to say, “If you are going to stream another player’s games, it makes sense to reach out to that player first (in this case Faker) and get their permission. It’s simply the right thing to do.” While he at first expressed satisfaction with Riot’s email answer, commenting that it was good to know its take on things, his next update reflected that he was unclear about what underlying principle was actually operating in its response. He felt that Riot didn’t address head-on the DMCA claim or the intellectual property issues that were at stake, and instead focused more on how the player might feel about being broadcast without realizing it (ibid.). In a fairly stunning move, amid it all, Marc “Tryndamere” Merrill (2015a), the president of Riot, posted a comment on the thread with a relatively biting reply:

You are rationalizing and trying to justify the fact that you have singled out a player against their will and broadcasting their games in a way that he can do nothing about. That reeks of harassment and bullying—Azubu vs Twitch is irrelevant in my view. If you can’t see how this potentially harms Faker and/or anyone else in this situation, then that is more reinforcement that we need to take the appropriate action to protect players from this type of unique situation.

Merrill’s response, strongly personal in tone and sidestepping a principled consideration of a complex intellectual property issue, fueled even more heated debate in the forum as well as coverage on other esports and gaming sites.

StarLordLucian (2015a), having reflected on the discussion and replies, wrote, “I’ve thought over Riot’s response and read some comments below and came to the conclusion their response really doesn’t make sense. If you really should need permission to broadcast someone’s game to 1000s of people, why don’t the pro streamers like Doubelift, Bjergsen and the others require it for team mates in their game?” Given how many people (pros and amateurs alike) do, in fact, stream games that include other players without their permission, it was an astute question. The case as a whole ventured into terrain that esports and live streaming had long neglected to face directly. It also followed a pattern we’ve seen multiple times over the years where larger corporate entities issue DMCA claims and simply expect the user to stop whatever they were doing without pushback.

For over a week, none of the major corporate parties involved issued any formal statements, though Merrill did continue to tweet about the issue, invoking notions like “e-stalking” and how Riot wanted to protect players. The only exception was Faker’s own team releasing a statement on its Facebook page:

Unfortunately, some of the fans have been re-broadcasting Faker’s (and other SKT T1 players’) games through the spectator mode, and this has negatively affected players’ streaming business. Faker, a member of the SKT T1, also expressed discomfort over the current situation where his summoner name and videos of his games are being broadcasted with no consent. SKT T1 team and its players truly appreciate the fans’ fantastic support and interest. However, we would like to politely request the re-broadcasting of our players’ games without our consent to be stopped. (SK Telecom T1, 2015)

As I mentioned earlier, it is certainly understandable that a pro esports streamer would be troubled by practice time being streamed without their consent. It is reasonable for players to be concerned about their ability to improve on strategies without revealing new tactics prematurely, or to not have weaknesses in play systematically identified. One might also certainly imagine that the average player might prefer to have privacy while playing, and not be subject to constant or unpredictable surveillance by having their games streamed against their will. Even SK’s team statement (despite its having organizational stakes in the matter) would suggest that it is indeed the player who has a final say over what happens to their games.

While Merrill and SK’s public comments on the matter suggest that this was their underlying principle, Riot’s actual legal guidelines (and likely SK’s contractual relationship with its players) speak to a much messier set of principles in operation. Riot’s terms of use and “Legal Jibber Jabber” page (as Riot called it at the time) clearly stated that it was the sole owner of everything from game assets to chat logs as well as “methods of operation and gameplay.” And within Korean esports, a fairly regulated industry, players regularly enter into contracts with teams and hand over a wide range of rights, including around broadcasting.

Even StarLordLucian himself didn’t fundamentally offer a challenge to the corporate ownership of game performance. Doubling down on his project, he announced that he would continue the stream in the hopes of forcing Riot to address the core issue. He made a pointed argument on the subreddit:

Faker does not have any rights over the game assets. I am streaming game assets—the spectator client, not anything Faker or Azubu owns. It’s really that simple. I know some people will disagree with this and bring up ethics, but I think this whole issue is about a lot more than Faker. It’s about Riot not enforcing their own legal terms of service. It’s about a co-owner of Riot Games being completely out of touch with esports and the spectator mode. It’s about a company (Azubu) issuing a false DMCA claim for content they didn’t even own. These are issues that will affect the future of the game and the spectator mode. All of this needs to be debated for the future of League of Legends and esports. Right now nothing my stream does is illegal or against the League of Legends terms of service. Riot can always change their terms. And Riot can DMCA my stream at anytime, as they have the power to put any League related IP or Project to an end. (StarLordLucian 2015b)

For the next week, esports news outlets covered the story, and people debated the issue in online forums and via Twitter. Finally, on February 27, Riot published a statement from Merrill (2015b) titled “SpectateFaker: What We Learned and What We’ll Do.” The post recounted the incident, clarified its evaluation, and avoided claims of ill intent on the part of StarLordLucian. Riot’s final judgment of the issue was that “we will intervene and shut down streams where we perceive that it’s causing harm to individual players” (ibid.).

Of particular note was how it parsed the issue of interests and rights to the stream. Indeed, this was the animating point for most of the hundreds of messages on various Reddit discussion threads and Twitter conversations. Early in the piece Merrill (ibid.) reiterated the game’s terms of use, clarifying that “players sign away rights of ownership to the gameplay content they create within the game,” that “Azubu doesn’t own the streaming content that Faker was producing,” and that Riot had communicated this to the company. From this foundational point, he then shifted rhetorical gears to say that Riot was mostly concerned about protecting players’ interests: “With any issue like this, our guiding philosophy is to protect the interests of players; in this case, things aren’t so simple. There are two distinct player interests that are in conflict: the interests of the individual player (in this case Faker) with the interests of the thousands of players who enjoyed watching the Twitch streams of him playing via SpectateFaker” (ibid.). He described how Riot wanted to protect Faker and shut down the Twitch channel, while at the same time allowing other similar projects (such as SaltyTeemo, which broadcasts newbie games) to continue when it deemed no harm was being done.

While Merrill and Riot in general tend to frame their actions in terms of “player interest,” it would be naive to think that they don’t also have their eye on ownership claims. Given Riot’s audience growth and the resulting deals it was making (with outfits like BAMTech), it is apparent that it understood the value of asserting control over the game. At the same time, it has also had to navigate, at times unsuccessfully, a series of fraught public disputes with team owners.1 While Riot emphasized player interest in its handling of the SpectateFaker case, it would be disingenuous not to situate it within a much larger media industries conversation.

One of the things that I find so compelling about this case is that it not only encapsulates so many of the most vexing aspects of regulation and live streaming right now but also shows the kind of vernacular legal wrangling that everyday users undertake. As they pick up technologies, often for the purposes of fandom, they can come head-to-head with thorny legal issues. It highlights how we are increasingly finding multiactor stakes at play in these spaces—from individuals to game developers/publishers to third-party organizations. We see the tricky line that developers frequently find themselves balancing on when trying to publicly navigate between wanting to be open to user innovation (or at least to seem so) and wanting to retain foundational rights.

Over the years I’ve noticed how often the metaphor of the “Wild West” or “wild frontier” has come up in conversations and articles about live streaming. Aside from the grim subtext of these two phrases, there is certainly something that rings true about them. Live streaming is a fast-moving space, full of change and iteration. Practices, aesthetics, and genres evolve at a pace few can actually keep up with. User action frequently outpaces existing technology and tools. Just a handful of years ago, I doubt most people could have predicted things like TPP, broadcasting cosplay creation, or groups playing Dungeons and Dragons to an audience of thousands. The energetic, experimental, and inventive aspects of live streaming are indisputable.

But as the SpectateFaker incident shows, it’s important to recognize that the work of culture, and cultural production, is entangled with forms of regulation. People are not unhindered actors freely exploring and developing; they confront and contend with various forms of ordering and control that tweak, push, pull, and inform their activities.2 Gillespie (2018, 9) argues that “the fantasy of a truly ‘open’ platform is powerful, resonating with deep, utopian notions of community and democracy—but it is just that, a fantasy. There is no platform that does not impose rules, to some degree. Not to do so would simply be untenable.” While live streaming has been energetically developed by not only solo broadcasters in their living rooms but also large organizations reaching audiences of millions worldwide, there remain critical issues around the regulation of this new form of networked broadcast. The tremendous work and creative energy examined in prior chapters contend with many intervening organizations, practices, and forms of governance and control.

Gillespie notes in his look at platforms that “in the context of these financial, cultural and regulatory demands, these firms are working not just politically but also discursively to frame their services and technologies.” He argues that they “position themselves both to pursue current and future profits, to strike a regulatory sweet spot between legislative protections that benefit them and obligations that do not, and to lay out a cultural imaginary within which their service makes sense” (Gillespie 2010, 348). This means that while there are always important technical challenges and developments that these companies are engaged with, they are always also situated within the specificities of content distribution, which itself is subject to a wide range of legal, policy, and cultural forms of regulation and governance. While the emergent production of users and organizations alike has created amazing culture and content, we need to simultaneously keep an eye on the ways that it is not outside forms of social order and control, whether from the platforms that host it, the communities it lives in, or broader legal and regulatory regimes.

In this chapter, I explore what I term the regulatory assemblage of networked broadcasting. Forms of governance and management operate at several layers, from the interpersonal to the algorithmic. This is not a unified system or one of shared values across all domains. Nodes often push and pull against others. The community, for instance, engages in its own forms of control, from the more positive inflections of user moderators in a channel to destructive forms such as DDoS attacks or outright harassment and hostility toward a broadcaster. Law and institutional regulation become deeply implicated in how this space adjudicates intellectual property disputes or sets policies around permissible subject matter. Much is still in flux around questions of ownership and appropriate content. And as with content on YouTube and other platforms, algorithmic regulation is on the rise with automated curation and monitoring. Technology is woven through all these domains, amplifying and extending the work of governance. Taken together, these varying actors and nodes undercut otherwise-popular claims about any inherent openness of new platforms, instead highlighting how emergent practices are always embedded in complex systems of governance and regulation.

Community Management

Multiuser spaces, which include Twitch streams with their synchronous chat components, pose unique challenges given that people are engaging with each other online as well as frequently being deeply invested in the life of the channel and broadcaster. Creating these spaces requires responsibility and accountability to the communities being formed. Despite being left too often as an afterthought, or situated organizationally off to the side within game and social media companies, online community management—the governance of the environment and behavior of networked spaces—is one of the most important aspects of these sites. There is a multidecade history of volunteers stepping up and doing serious work in managing online communities, and companies are increasingly hiring community managers whose job it is to do the everyday work of engaging with users and mediating problems. While the term is now typically used to describe formal sets of policies and practices used by companies (and their representatives) to govern and handle the behavior of a platform/game/services user base, I broaden it to include volunteers as well as informal behavior and interpersonal communication taken up by the community itself to self-govern. I will also discuss the more negative instantiations of group-driven regulation that occur. Threaded throughout all these are technologies that amplify and assist community management, governance, and policing.

MODERATORS

When thinking about how online communities are managed, moderators likely come to mind first. Their active, hands-on work has long been a key component of governance, social order, and control in network environments. Moderator teams tend to be volunteer organizations, though successful streamers have started to experiment with compensating high-level mods, and some of the larger esports organizations will have a couple of head mods who are paid. Equipped with special system privileges, these people are frontline monitors of behavior and speech. At the second TwitchCon, this theme was highlighted in a number of sessions where active moderation teams discussed the work that they were doing. Over and over again, they emphasized that chat is a reflection of a channel as well as a powerful part of the product, be it a variety stream or esports broadcast. Speakers encouraged broadcasters to begin thinking proactively about best practices for their mod teams and communities. They were also clear that good chats don’t just happen by magic but instead are cultivated.

The work of moderation teams is important to understand in laying out the landscape of governance in live streaming. Game scholar Claudia Lo has distinguished between reactive and proactive models of moderation. The reactive model is likely the one most familiar to average users and mostly focuses on directly responding to negative behavior. In contrast, the proactive model seeks to foster good behavior and takes a broader view of the job itself: “the technical work of developing, maintaining, and adapting both in-built and third-party tools for moderation would qualify as ‘moderation work,’ as would emotional and mental health work conducted by moderators for their communities and for each other” (Lo 2018, 11). The work of effective moderators and their teams is often much more expansive than normally thought.

They are tasked with monitoring the ongoing chat in a channel, and using a variety of manual and automated tools, they do things like answer questions, delete offending messages, and proactively build and sustain the culture of the channel. They may also at times assist the streamer in handling giveaways, donations, or other behind-the-scenes processes. Moderators are given an official system designation that allows them more control of the content in the chat, and beyond deleting messages, they can time people out (suspend them from chatting) or ban them from a channel altogether.3 Moderators generally also have access to managing technology to assist in chat governance.

While part of the work of moderators is reacting to issues, another critical component is modeling the behavior that they want to see within the community. Helping to set the tone, socializing chat participants into the values of the space, and redirecting bad behavior to more positive engagement are all part of the work that they do. The approach can vary from channel to channel, but it can involve anything from subtle jokes to explicit referencing of stream rules. Some moderators try to redirect negative conversation into chat games, such as cooperatively building a shape using ASCII characters together. Throughout the chat experience, from rules to emoticons to tone, socialization is a powerful component of moderating chats. Experienced streamers and moderators talk about how the ultimate success in their job is demonstrated when communities themselves take on the informal work of moderating. In those instances, community members speak up to correct bad behavior even before the official mods can.

Effective moderation teams are built and have intentionality. They aren’t just formed by a streamer randomly giving mod rights to anyone who volunteers or a regular on the channel. Instead, they are cultivated groups of people who are chosen to explicitly take on the work and values of the broadcaster. Excellent teams regularly integrate application processes, training, mentoring, and trial periods. Some broadcasters have developed written guidelines to help bring some uniformity to practices and standards. A head mod or smaller group usually manages a team, thus creating additional layers of work and socialization. Successful moderation teams typically have some sustained back channel to coordinate, often utilizing third-party software such as Skype, Discord, or Slack to facilitate ongoing conversations in order to troubleshoot, iterate practices, and provide feedback to the broadcaster. In the case of large esports productions, requests for volunteers to staff an event or “emergency” calls for help frequently go out through them. Back-channel spaces are also used to build a sense of community and cohesion within the moderator group itself. This tightening of bonds, within the larger community and then within the subset of moderators, is a powerful component of managing the space. Building recognition, familiarity, and accountability between members—especially in environments with pseudonyms and transient audience populations—is no small feat, and the work that effective online community management teams do in live streaming is impressive.

HARASSMENT AND TOXIC TECHNIQUES

While the term “community management” tends to be reserved for a more positive view of handling user populations along with enforcing rules and norms, paying attention to the disruptive and disturbing forms of social control is equally important. Policing and social order can also be modalities in which harassment arises and a space takes a toxic turn. While this form of social control can have chaotic properties, we should understand the work it does to order, constrain, and regulate participation and behavior.

As I previously discussed, harassment is a common problem in game live streaming, and affects both variety and esports streams in devastating, powerful ways. Early work by internet scholars like Michele White (2006) noted that online sites for spectatorship and performance enact forms of regulation and harassment as well, often around gender. Women, people of color, and LGBTQIA streamers—and at times even audience members—are especially subjected to a stream of cruelty that includes hate speech, incessant commentary on one’s looks or behavior, visual abuse via unwanted imagery, and practices that disrupt the channel. These are not merely random acts but also an important component of boundary policing that gets taken up to signal, frequently in devastating ways, “you are not welcome here.” Harassment can be deeply enmeshed with policing the boundaries of participation, forms of identity, and behavior. It is not simply something directed at an individual that incurs personal costs; it can also be a public act and form of socialization directed at witnesses and bystanders. It constructs values and seeks to set up norms for participation and speech. It signals what is permitted and even expected. Harassment is the flip side of the positive processes of community management.

Online harassment in game broadcasting can end up causing streamers to constrain or significantly alter their behaviors to mitigate risk and harm. This can include everything from not using a camera to building up large moderation teams to buffer attacks. It can also involve the substantial psychological work of “toughening up” or “growing a thick skin.” Constant harassment, even at a low level, has become a way of producing a particular kind of subjectivity online where the expectation is “you shouldn’t let it bother you.” But this is not cost free, desired, or even possible for everyone. Those who don’t find this to be a viable position will often leave. Others will feel bad that they can’t quite toughen up enough and still feel the effects of the harassment. As a form of boundary policing, it’s a devastatingly effective technique.

One of the most dangerous forms of harassment, swatting (named for police special weapons and tactics, or SWAT, teams), actively disrupts the offline/online boundary and puts the victim in potential physical harm. Swatting incidents involve someone contacting law enforcement and falsely reporting a crime (such as a shooting or hostage situation) at the target’s address. This leads police to show up at the victim’s house anticipating an armed confrontation. These are extremely dangerous situations, and a number of high-profile incidents involving both YouTube and Twitch have brought visibility to the issue. For example, Jordan “Kootra” Mathewson was live streaming in 2014 when he was swatted, and the entire incident ended up being broadcast until the police noticed the ongoing camera feed.

Game studies scholar Alexander Champlin discusses how these raids become “media objects” and how those who initiate them demonstrate “a dangerously blasé understanding of trends toward police militarization. These pranksters are participating in a game with stakes that appear game-like, but which have far more material consequences when we consider swatting in relation to broader tendencies in police deployment” (Champlin 2016, 4).4 As shown by the horrific 2017 incident in which a Kansas man was killed when police responding to a swatting call ended up at the house of a neighbor, not the targeted gamer, such scenarios pose serious dangers.

Live streamers are aware of the risk of swatting, and some will contact their local police departments before any incidents to explain who they are and the potential risk. While most of the people I spoke with did not constantly worry about swatting, they all took precautions to make sure their home address was kept private. The use of post office boxes and being vague in conversations with audience members about where they lived (often talking about a region rather than a city) were common. Given how much variety streamers in particular rely on connection with their audiences, the threat that governs this boundary line is notable.

These forms of harassment not only affect the streamer but work back on the audience as well. They profoundly shape the tone of a channel, frequently setting up a cycle of amplification where other viewers chime in with further assaults. Harassing and abusive chat behavior can also be a powerful signal to any viewers who pop onto a channel about who the imagined audience is. It can telegraph who is welcome in that space, and who should “keep quiet” or leave. Rhetoric that someone should just “hide chat” if they don’t like it is built on controlling boundaries of participation and inclusion.

SOCIOTECHNICAL ACTORS

Technologies are woven throughout both positive forms of community management as well as harassment. A variety of sociotechnical actors help govern (and at times disrupt) the space. For example, Twitch implemented an interesting system in which channels could elect to have users “agree” to rules that would pop up when they entered a channel. Its own internal research showed that the system did not meaningfully negatively impact participation, and indeed, there was a statistically significant reduction in time-outs and bans when a user agreed to the rules (Toner 2017). Paying attention to the nonhuman labor regulating live streaming is crucial to understanding how the system currently operates as a social milieu.

For example, bots, which are small bits of code that automate a variety of functions, have long been critical to managing online chat spaces.5 At its base, Twitch chat was built around a simple input/output communication system harking back to IRC, a text-based multiuser technology from 1988. Users type in their messages, which then appear in the chat window to everyone else in the channel. They can also issue basic commands to get information. And unlike the original IRC protocol, which was rooted in ASCII text, special emoticons can be used (many of which are channel specific).

Twitch was actually quite clever in leveraging the power of IRC for its chat given not only that system’s robustness but the amount of third-party development that had gone into it over the years. Special clients could be used to handle the text outside the Twitch user interface—an important function for moderators given the amount and speed of chat in large channels. IRC also had an extensive set of bots that could be used nearly right out of the gate. Bots not only can automate some processes but also “sit” in the channel and “listen in,” providing useful information to users if they query them and helping moderators to preemptively take action. They often “act” as users, appearing with a username and “speaking” on the channel. Just as important, bots are independent bits of code that act with a kind of quasi autonomy. The presence of the moderator or streamer is not required, and bots continue to operate on a channel even if no live broadcast is occurring.

Almost from the get-go of the platform, users worked on bots developed specifically for Twitch. With names such as Nightbot, Xanbot, and Moobot, these bits of code monitor chat, checking against prohibited speech, filtering, answering simple questions, and providing regular information (for instance, when the streamer is next scheduled to broadcast). They are nonhuman community managers and regular members of moderation teams. Bots operate with autonomy on channels, but are regularly tweaked and extended by human moderators to better function based on the specific context of a channel. Word lists that the bot uses to watch for prohibited speech will be extended and modified, often on the spot, to deal with emerging channel behavior. The fact that bot behavior is malleable and subject to moderator input highlights that while they may at times act autonomously, what they do is deeply tied to both developer and moderator notions of what should be fostered or prohibited in chat.
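The basic loop these bots perform can be sketched in a few lines of code. What follows is a minimal illustration, not the actual code of Nightbot, Xanbot, or Moobot: it parses the classic IRC-style message lines that Twitch chat delivers and checks each message against a channel word list. The word list, the function names, and the 600-second timeout are all assumptions introduced here for illustration; the timeout response mirrors how moderation commands were historically issued as ordinary chat messages.

```python
import re

# Hypothetical prohibited-word list; real channels tune and extend
# these lists constantly, often on the spot.
BANNED_WORDS = {"slur1", "slur2"}

# Twitch chat messages arrive in classic IRC form, e.g.:
#   :nick!nick@nick.tmi.twitch.tv PRIVMSG #channel :message text
PRIVMSG_RE = re.compile(
    r"^:(?P<nick>[^!]+)![^ ]+ PRIVMSG #(?P<chan>[^ ]+) :(?P<text>.*)$"
)

def parse_privmsg(line):
    """Return (nick, channel, text) for a chat message line, else None."""
    m = PRIVMSG_RE.match(line)
    if not m:
        return None
    return m.group("nick"), m.group("chan"), m.group("text")

def violates(text, banned=BANNED_WORDS):
    """True if any banned word appears as a whole word in the message."""
    words = {w.lower() for w in re.findall(r"[\w']+", text)}
    return not words.isdisjoint(banned)

def moderate(line):
    """Return a timeout command for an offending message, else None."""
    parsed = parse_privmsg(line)
    if parsed is None:
        return None
    nick, chan, text = parsed
    if violates(text):
        # Suspend the offender from chatting for ten minutes.
        return f"PRIVMSG #{chan} :/timeout {nick} 600"
    return None
```

Even this toy version shows why such code can "sit" in a channel with quasi autonomy: the loop needs no human present, while the word list remains a point where moderators inject their own judgments about what the channel should permit.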

Over the years, both Twitch software developers and third parties have continued to push tool development to better keep up with the practices of streamers and audience members. Lo (2018) offers a detailed account of the rich work that moderators do to create an assemblage of systems to facilitate their management of communities. From bots to tools like Logviewer, which allows mods to keep records of specific users’ chats across multiple channels, she highlights the ways backstage labor has been adept at leveraging technology to manage broadcasts, often working well beyond the given parameters of the platform. This is perhaps not surprising to anyone who follows the extensive ways that users reconfigure and modify game spaces. Moderators certainly bring that sensibility to their community management work.

One of the biggest challenges with these systems, however, is that they require the broadcaster to know that these tools exist, understand how they work and which ones to use, and install and administer them. While web pages (including at Twitch) and forums are filled with advice about how to do this, it is a hurdle not everyone can get over. One of the most important developments in how the platform has tackled the issue of technology and community management occurred in December 2016 with the release of AutoMod, a pretrained, off-the-shelf piece of machine learning software that was tweaked for Twitch. By simply going to your broadcast settings page you could configure different levels of protection akin to the work that third-party bots had been doing. If a streamer elected to use the tool, they could move a slider to set threshold levels for moderation. The system has been refined a bit since launch, and at the time of this writing breaks down into four categories: identity/language referring to “race, religion, gender orientation, disability, or similar”; sexually explicit language covering “sexual acts, sexual content, and body parts”; aggressive language dealing with “hostility towards other people, often associated with bullying”; and finally, profanity that includes “expletives, curse words, things you wouldn’t say to grandma.”

As the user slides the marker over, more category moderation kicks in. You cannot actually invoke the highest level of filtering for categories independently. For example, setting the slider to “more moderation” provides “more filtering” of the first three areas (indicated by three tiny shields) but none on profanity. The only way to achieve “most filtering” on aggressive language is to set the entire system to “most moderation” (which in turn triggers “more filtering” for the other categories). Aspects of the categorization and filter tiers are a bit unclear, and while experienced users of other bots might find the black box design of this system a bit limiting, it was certainly a crucial move in putting moderation tools into the hands of less experienced users.
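The coupling between the single slider and the four categories can be made concrete with a small sketch. To be clear, this is a hypothetical reconstruction in Python of the behavior described above, not Twitch’s actual configuration format; tier values that the description leaves unspecified are flagged as assumptions in the comments.

```python
# Hypothetical reconstruction of AutoMod's slider-to-category coupling, based
# only on Twitch's public description. Intermediate slider settings are
# omitted because the description does not specify them.

# Filtering tiers: 0 = no filtering, 2 = "more filtering", 3 = "most filtering".
SLIDER_TO_FILTERS = {
    "no moderation": {
        "identity": 0, "sexual": 0, "aggression": 0, "profanity": 0,
    },
    "more moderation": {
        # "More filtering" on the first three categories, none on profanity.
        "identity": 2, "sexual": 2, "aggression": 2, "profanity": 0,
    },
    "most moderation": {
        # Only here does aggressive language reach "most filtering"; the
        # other categories are triggered at "more filtering." Whether
        # profanity also reaches tier 2 here is an assumption.
        "identity": 2, "sexual": 2, "aggression": 3, "profanity": 2,
    },
}

def filtering_for(slider_position: str) -> dict:
    """Return the per-category filtering tier for a given slider setting."""
    return SLIDER_TO_FILTERS[slider_position]
```

The point of the sketch is structural: one slider drives all four categories together, so a broadcaster cannot, say, request maximum aggression filtering while leaving the other categories untouched.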

Two additional factors make the tool notable: the disclosure that it leverages machine learning and natural language processing algorithms, and the way it shapes the experience of chat. While details about how machine learning and natural language processing are being used within the system were not specified at the time of release, and it is unclear how things have developed on that side of the technology since launch, one article noted that it may be one place where Amazon’s purchase of Twitch is paying engineering dividends through the possible use of Amazon’s AWS machine learning platform along with techniques that are coming to fruition in devices like the Echo (Orland 2016). Given the flexibility of chat communities in their attempts to constantly try and foil current bot systems, and the ongoing work that moderators have to do to keep up, a system that can “learn” and adapt would be game changing. Ultimately it is still unclear how that component will fare. As we’ve seen with a number of high-profile machine learning missteps (for example, Microsoft’s artificial intelligence named Tay, which ended up spouting conspiracy, Nazi, and generally abusive speech it “learned” via online training), serious challenges remain to this approach.

A second aspect of AutoMod, and to my mind one of the most significant, is the way that the system will hold messages in a queue awaiting moderator action. Up until this tool, the state of moderated chat was one in which the sidebar of a stream could be a huge list of “<message deleted>” lines. Though the offending speech was removed (even quickly), its presence remained. These ghostly echoes of deleted text helped create a feeling that harassing things were going on in the channel even if you couldn’t see the specifics. A viewer would sense the vibe was lousy and that it wasn’t a great place to be.

The AutoMod system not only holds messages in a queue but also, if they are not approved, does not show them, or any indication that they existed at all, in the chat. This is an incisive way to disrupt the affective power of visibly deleted messages, the lingering sense of harassment produced when traces of abusive speech remain. As one streamer put it in a Kotaku article on the subject, “Don’t give the attention to the people that are causing these problems, and nobody else gets the idea to jump on that bandwagon” (D’Anastasio 2016). AutoMod addresses the way that speech has a social presence and the role an interface can play in shaping it. One of the least satisfying responses to harassment is telling a victim to simply ignore, block, or otherwise hide the speech. That approach completely misunderstands the act as both personal and social. Tools that remove harassing speech from the collective space and don’t allow its echoes to linger via “<message deleted>” notifications are much more attuned to the social impact of harassment.
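The difference between these two interface strategies, tombstoning deleted messages versus silently holding them for review, can be sketched in a few lines. This is an illustrative toy in Python, not Twitch’s implementation; the function names and data structures are my own.

```python
# Illustrative sketch (not Twitch's actual code) contrasting two ways a chat
# interface can surface messages flagged by a moderation filter.

from typing import Iterable, List, Set

def render_with_tombstones(messages: Iterable[str],
                           flagged: Set[str]) -> List[str]:
    """Older style: each flagged message is replaced by a visible placeholder,
    so traces of the removed speech still shape the feel of the channel."""
    return [m if m not in flagged else "<message deleted>" for m in messages]

def render_with_held_queue(messages: Iterable[str], flagged: Set[str],
                           approved: Set[str]) -> List[str]:
    """AutoMod style: flagged messages are held for moderator review and
    shown only if approved; unapproved messages leave no visible trace."""
    return [m for m in messages if m not in flagged or m in approved]

chat = ["gl hf", "abusive message", "nice play"]
flagged = {"abusive message"}

print(render_with_tombstones(chat, flagged))
# ['gl hf', '<message deleted>', 'nice play']
print(render_with_held_queue(chat, flagged, approved=set()))
# ['gl hf', 'nice play']
```

The second function captures what makes AutoMod’s queue significant: unless a moderator approves it, nothing about the held message ever reaches the channel, not even a ghostly placeholder.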

Compared to bots, DDOS attacks are a crude, simple form of technical intervention, but debilitating to a network when carried out at scale. By repeatedly “knocking at the door” of a target computer, flooding it with more requests than it can answer, they can disable a system. DDOSing has been deployed for a variety of purposes, including political protest and, within the framework of this discussion, harassment.6 DDOS vulnerabilities have come not only through Twitch itself but via the use of other programs, such as Skype, which exposes a user’s IP address to those on their friend list. There is an unfortunate irony in the fact that the use of third-party tools to facilitate connecting to others can get turned against users in this way. Once the IP address is distributed to multiple attackers, the target’s network connection can be overloaded, paralyzing it and shutting down the possibilities for communication and participation. It is not uncommon for popular streamers who regularly use Skype (and other IP-exposing programs) to pay for a proxy service to try to protect themselves. They may also limit their “Skype circle” to only the most trusted and relegate others to alternate communication methods. The possibility of attack is enough to alter streaming practices. DDOSing has become such a ubiquitous part of networked life that it is simply taken as an everyday nuisance to be accounted for and hopefully prevented.

As we can see through the above examples, the work of community management and policing (even if in the negative) is regularly delegated out to nonhuman actors, many of which can moderate, regulate, and discipline communities well beyond what any single person can do. They come to act alongside the human moderators to do tremendous work in the overall governance of streaming spaces: from formally handling chat to socializing participants via their speech and actions. While we often think of regulation only at the level of policy or law, which I will now turn to, it is important to keep in mind a broader definition that encompasses disciplining and socializing at the interpersonal as well as community level, and that is carried out through humans and technologies working together.

Policy

If community management speaks to the more micro level of regulation, policy is a middle layer between it and law. Though from its earliest days many invoked the rhetoric of an open and free network, the internet has in fact long had layers of governance, policy, and bureaucracy operating.7 Given the multitude of organizations and interests involved, it is hard to imagine that it could be otherwise. While communities do a huge amount of work to govern themselves, most companies are not content to leave it solely in their hands. Policies arise, and in the case of live streaming, these include those coming from game developers, teams or agencies, sponsors, and the platform itself. Like other forms of governance, there are often skirmishes, many of which morph and adjust in relation to community practices.

Nearly all online services as well as game companies typically sketch out the boundaries of appropriate use of their platforms or content via documents like terms of use/service/conduct and end user license agreements, which users typically, though not always, have to agree to before use. The oft-remarked irony of these is that people generally do not read them in detail given that they tend to be pages long and filled with legal jargon. These agreements also regularly outline terms that can go against normal use practices, such as account sharing.8

In the case of Twitch, a number of issues have played a central role in their terms of use: the company’s own intentions for the platform and its intended use, their need to maintain amicable relationships with game developers, the centrality of advertising to the financial model and subsequent reliance on audience, and navigating legal terrain around intellectual property. When I first began researching the site in 2012, Twitch was still very much in its early days of policy formulation. At that time, it hosted forums that were filled with discussions between streamers and a couple of official moderators. Those moderators fielded a wide range of questions from users, including what was permissible on the site. Posts with names like “How is racism handled?” “Emulators/ROMs, is it OK to stream?” and “What’s the policy on listening to music while you game?” filled the conversation. During this period, the official forums hosted at Twitch.tv were an important community space for streamers to figure out what exactly they were allowed to do on the platform.

One moderator, Russell “Horror” Laksh, who was the site’s lead administrator early on, did an enormous amount of work trying to explain policies to users. As he replied to a query asking if people were even allowed to discuss pirated games in a channel’s chat, “We don’t support piracy in any way, and I suggest you don’t encourage piracy on your stream. There is nothing cool about theft” (Laksh 2011a). A big part of the work that these forum moderators undertook was focused on educating people not simply about how to do live streaming but also about the boundaries of UGC.

This period of Twitch policy development, and the direct conversations between Laksh and users, highlights a distinct moment in the site’s history where those managing the service and those using it were still trying to figure out what, exactly, was appropriate. Although there were some clear lines in the sand (no streaming unreleased games or outright porn games, for example), much of the forum was filled with back-and-forth discussions. While formal moderators assumed a voice of authority, they encouraged users to be in ongoing conversation with them and ask about specific cases if they had any questions. Broadcasters themselves frequently weighed in, offering their thoughts on what was legitimate to pursue on the platform. Context and nuance were framed as central in navigating what was permissible, and pointed to a richer understanding of live streams as potentially complex visual and cultural products.

As the site grew, the official forum seemed to become a less tenable place to handle queries. Trying to officially keep up with fielding a mountain of specific content questions on the forums simply did not scale. Jared Rea, the community manager who was head of policy and moderation (including the volunteer mod team) during this early period, was instrumental not only in creating early terms of service that would provide guidelines for streamers but also in wording them so as to be accessible to average users. This approach set the tone that Twitch continued to take for many years, whereby it adopted a more informal communication style to convey policy alongside the typical legal language being utilized. It was an effective rhetorical move given how iterative the policies were, morphing and changing in relation to developments on the site, external relations, and user practices. Given that Twitch regularly positions the platform itself as a community, with a passionate “family” of users, it navigates a balancing act as that framework is juxtaposed to formal and often-legal policies.

ADULT CONTENT

Despite Twitch trying to ground its policies and language in a colloquial tone that’s in step with user practice, it has at times seemed to have to square a circle, and some decisions have been contentious. Aside from intellectual property issues (something I’ll look at in more detail in the next section), the issue of not only adult content but also overall scrutiny around embodiment on the platform has caused many debates over the years.

One of the first threads on the early forum that really caught my attention was about pornography. Laksh had authored a post clarifying that Twitch didn’t allow porn (including pornographic games) on the site and that it held a “ban first, ask questions later” approach. He went on to note, however, that “not all nudity is pornography. Specifically, I am talking about nudity within video games. If you are streaming a video game that is not a porn based game, as in, the fact that it has nudity [that] is not one of its main featured attractions, you will not be asked to stop, and you will not be punished.” He said the title should be marked as “mature,” and added, “If you ever have a question of wether [sic] or not you can stream a certain game, please ask first! It is better to be safe than sorry, as we cannot allow pornographic content on the site” (Laksh 2011b).

A poster then asked a follow-up question about if it was OK to stream Second Life, a sandbox virtual world fundamentally built around UGC that allowed people to create a wide range of customized avatars and spaces. Laksh (2011b) replied,

For second life [sic] I am going to say no. Second Life is not itself a porn game, but a vast majority of the games user created items are porn related, to an unavoidable degree. The game does indeed have areas intended for teenagers, or general non-adult related content, but these areas are the prime target for trolls and others alike to just post the worst of their porn related collections! There isn’t really a safe spot you could stream in that game without the risk of accidentally showing something we would not approve of. Because of all of this, I think the safest bet is to not allow this game in general, just to keep people out of trouble.

Second Life did certainly have areas where adult content and sexual behavior was the norm but it also hosted classroom spaces, gamelike zones, and even quasi-business hubs. Though reading Laksh’s post gave me a chuckle—the danger of Second Life!—it did point to the way that the platform tries to navigate encouraging broadcasters to dive in and create compelling content for others but within “PG” bounds. Twitch’s list of prohibited games, which includes titles that “violate our Community Guidelines as it applies to hate speech, sex, nudity, gratuitous gore, or extreme violence,” also includes those with an Entertainment Software Rating Board classification of “Adults Only.”

While many of the games listed certainly sound like they shouldn’t be on the platform (or any, to be frank, with titles like Battle Rape or The Maiden Rape Assault), others have sparked broader conversation not just around adult-themed games, even with sexuality, but also about what it means to have such a popular media platform block particular subject areas or new cultural products wholesale. At a moment when more and more designers are pushing games to speak to serious, meaningful, emotionally mature, or nuanced issues, it should not be surprising if creators turn to the platform with content that pushes its boundaries.

One of the more nuanced discussions on this topic has centered on the games of indie designer Robert Yang, who has had several titles banned from the platform due to “sexually explicit acts or conduct” and issues around nudity. Yang (2015), whose work has taken up gay identity and sexuality, has pointed out the strangeness of a policy that he characterizes as “as long as it’s not important, it’s OK.” He maintains that Twitch’s policies are opaque and unevenly applied, saying, “Their goal is to remain vague and hazy, so that they can randomly decide what ‘too much sex’ or the ‘wrong kind of sex’ is, while carving out special exceptions for large companies or business partners. I’m sure this is good for business, but it’s very bad for creative culture” (Yang 2016). Developers like Yang who are doing deeply original, creative work on themes not regularly tackled in the mainstream game industry are put at a serious disadvantage.

A significant part of Yang’s (2015) pushback on Twitch’s policy is that it doesn’t take into account context, or the ways that his games actually “focus heavily on ideas of consent, boundaries, bodies, and respect.” Though many mainstream games may indeed not have as much nudity or sexuality as his, we should pause to think about how sexual threat and violence are regularly, even mundanely, deployed in popular titles. Many have certainly pointed out over the years how US media is all too casual about the ways that it broadcasts violence of all sorts, while panicking about nonexploitative nudity or sexuality. In this regard, Twitch’s policies are not all that far off from its traditional broadcast television contemporaries, though we should keep in mind that cable at least provides an outlet for content not deemed suitable for a general viewer. On Twitch, there is as of yet no such bypass mechanism.

While the issue of adult content, including its definition and prohibition, is certainly a familiar one for many traditional media outlets, how it has played out on Twitch can’t be reduced to quite the same framework. There are deeper underlying issues on the platform about what sorts of images, content, and modes of presentation it sees as core to its identity as well as what it wants to foster. Though it has expanded to allow a range of shows from games to people making cosplay outfits, and often signals that it values a wide range of creators and interests, it does at times revert back to narrower formulations.

DRESS CODES AND “FAKE GAMER GIRLS”

One of the most debated policies has been around the regulation of streamer attire. In many ways it goes to the heart of tensions that exist both on Twitch and within game culture more broadly regarding gender and participation. It taps into how the platform frames what counts as legitimate content and presence, and how that model at times runs up against actual user practices and desires. Though Twitch has continued to expand what it allows beyond straightforward video gaming—you can now see people engage in “social eating,” music production, and any number of other creative endeavors—there remain boundaries that are articulated and enforced by the company, and policed by some of the community.

In October 2014, Twitch released a revised Rules of Conduct (eventually renamed Community Guidelines), setting off widespread coverage, heated discussion, and op-eds across a variety of sites. In it, Twitch specified the following guidelines regarding streamer attire on the platform:

DRESS . . . APPROPRIATELY

Nerds are sexy, and you’re all magnificent, beautiful creatures, but let’s try and keep this about the games, shall we? Wearing no clothing or sexually suggestive clothing—including lingerie, swimsuits, pasties, and undergarments—is prohibited, as well as any full nude torsos*, [sic] which applies to both male and female broadcasters. You may have a great six-pack, but that’s better shared on the beach during a 2-on-2 volleyball game blasting “Playing with the Boys.” We sell t-shirts, and those are always acceptable. #Kappa

* If it’s unbearably hot where you are, and you happen to have your shirt off (guys) or a bikini top (grills) [a misspelling of “girls” as a linguistic meme], then just crop the webcam to your face. Problem solved. (Twitch 2014)

That day, popular streamer Meg Turney tweeted out a message that she’d received from Twitch informing her that an image in the profile panel on the site was deemed inappropriate and “not suitable for Twitch in any capacity,” noting that it had to be removed within a week or the channel would be suspended. The stylized, professional-looking photo, which showed Turney dressed in lacy shorts and a bikini top, holding a game controller, and standing in what appears to be a living room decorated with a variety of game artifacts, struck many as a poor choice for Twitch to target. Numerous commentators found the policy at odds with other material that was regularly broadcast on the platform, while some saw it as dovetailing with an ongoing set of attacks directed at women in game culture. Although Turney’s original tweet expressed real outrage, she later commented to the Huffington Post that “it’s not really slut-shaming, it’s more like body policing. Or enforcing a stricter dress code. . . . I just think the whole situation is silly” (quoted in Beres 2014).

On the content front, many noted the irony of trying to regulate streamers’ attire amid games that clearly violated the standards being imposed. As noted above, while Twitch prohibited explicitly pornographic games, the platform was filled with titles that regularly showed women in not only revealing clothes but also in scenes of sexual violence and harm—motifs that some genres routinely traffic in. Mitchell (2014b) turned his attention to this dissonance, observing, “If Twitch is trying to make it to the big leagues and be taken seriously, then at some point it’s going to have to acknowledge the obvious contradiction built into its new policy: the games themselves display a lot more sexually suggestive themes than most streams.” He went on to argue that it was not only within game content that the limits of this kind of policy were apparent. Twitch’s own forays into supporting live music on the site had run aground when DJ and electronic dance music producer Borgore showcased a live event from his home but had to end the feed because it included women in bikinis hanging out poolside (Mitchell 2015).

Other critiques homed in on how the policy synched up all too well with broader battles around gender and sexism. Game critic Matt Albrecht, in a piece republished at the online popular culture fan site The Mary Sue, wrote about how the policy, while formally addressed to both men and women, was playing into larger panics about “fake gamer girls” and fears of women utilizing their sexuality for advantage within an entertainment context. He asserted that

when a woman barters her sexuality for a competitive viewership advantage with no promise of actual sexual favors or bearing of offspring, those who are oblivious to the patriarchal systems that even lead to this sexual bartering system to begin with raise up their pitchforks and cry foul. . . . Never mind the implied criticism that these women streamers might all be those dreaded “fake gamer girls”; the truth is that women merely having female bodies, regardless of how conservatively they dress, will be perceived as sexually inviting and exploitative. Merely owning boobs is considered enough provocation for conservative critics and for harassers to feel justified. For women, there is never a sweet spot for their sexuality. (Albrecht 2014)

Albrecht’s contention, harking back to Turney’s comment that the very subjectivity and embodiment of women was being policed, was insightful given the timing of it all. Twitch repeatedly tried to clarify that this wasn’t a change but instead merely a restatement and clarification of a long-existing policy. Yet it came just a few months after the launch, in August 2014, of a faction known as GamerGate. That timing provided a particular tone and context within which the statement got read.

Trying to pass itself off as a movement about “ethics in gaming” while in practice acting as a repudiation of feminism and the increasing heterogeneity of gamers within the culture, GamerGate became a black box term that contained a multitude of often vile and harmful impulses and practices.9 While GamerGate could be devastatingly and dangerously focused, as when it came to women like game developer Zoë Quinn or cultural critic Anita Sarkeesian, it also served as a larger cultural ethos whose attention turned to whatever might slip into view. Challenges, whether from academics or popular press authors, to hegemonic ideas about games and gamers, or to the costs of toxic masculinity, were met with virulent, frequently pinpointed attacks (typically coordinated in a handful of outlets like IRC, 4chan, and Reddit).

During 2014 and 2015, a number of people became targets for those upset that game culture was, they felt, being disrupted by participants who might hold different sets of values and approaches. “Social justice warriors” were seen as interjecting too many “political” or “feminist” issues into game content and culture. Perhaps almost more powerful was the way that identity itself became an uneasy variable for so many of these reactionary stances. Over and over again, GamerGate participants tried to argue that it wasn’t that they didn’t want women, people of color, or LGBTQIA folks in gaming but rather that those people shouldn’t “drag their identities” into it. Anyone was welcome as long as they could fit into the forms of identity, embodiment, and engagement that already easily occupied gaming.

There has long been a painful irony at work in game culture. It has a history as a space for outsiders or the marginalized, for geeky women or forms of masculinity that didn’t fit a hegemonic model. But it has also policed its boundaries in complex ways, and I am not alone in noting the dissonance of what was originally outsider culture becoming so intensely harsh a judge of other outsiders. The gauntlet for entry into game culture can be vicious, and its “rules” hard to pin down. Inhabiting a subjectivity that is permitted in it can seem like threading a needle.

One variable remarked on over the last several years, subject to heightened scrutiny, is femininity, whether embodied in men or women.10 As a woman who is older, is known by initials, and doesn’t dress in particularly feminine ways, I’ve long been struck by how little I’ve been targeted despite doing publicly feminist work. One of my longtime informants crystallized this for me one day when he said, trying to clarify his own frustration with “social justice warriors,” that he didn’t mind “people like me” who didn’t “push their gender on everyone.” As he commented, my name was gender neutral, and even my Twitter handle (“ybika”) wasn’t clearly gendered. Perhaps left unspoken was how my age also factored in. My own gendered identity performance was fine with him, and he said that if others were like that, he’d have no problems. It was the ones who make it “a thing” that cause problems. I hypothesized back that it was only because my gender performance didn’t upset his mental schema for who was a legitimate participant in game culture that he had no problems with it. Other women in the space have remarked on this, observing that as long as they dressed “like a tomboy,” or took up language conventions or other mannerisms of the men they gamed with, they had few problems.11

On Twitch, this has played out as tirades against what some see as “cleavage cams” and a strange fear that men are being manipulated by women’s bodies.12 Posts on the Twitch subreddit “alerting” the community to what they identify as a “cam girl” (sometimes “cam whore” or “titty streamer”) regularly appear. As one poster, ellis0896, wrote on November 30, 2014, “There’s a League Of Legends streamer right now who literally has the biggest breasts I’ve ever seen in my life but she has them hanging out of her top so is this still allowed? I don’t want [to] ruin her income and whatnot but it should be about the game, not her incredibly large breasts.” Another, HeartofTractors, cut to the chase more quickly on January 13, 2015, asking, “Why do so many of you put on make up and all this other beauty crap just to play games?” Over and over again, judgments and policing around femininity, sexuality, embodiment, and women’s presence have come up. And while many took pains to point out that the policy was formally addressed to men as well (no bare chests allowed), simply put, their bodies were not under constant scrutiny like women’s were. Rhetoric of evenhandedness completely sidestepped the reality and context in which the policy circulated.

A handful of women streamers jumped into the discussion, talking about the extent that they are harassed for just being who they are and articulating their frustration with the strange ways that the policy is out of step with everyday life. One, hmet11, responding on January 21, 2015, to a thread titled “When will Twitch take action against Female Streamers who clearly are using the streaming service as a platform to ask for money,” wrote,

Female streamer here. I’ll try and say my piece without sounding defensive, although I’m pretty offended by this post. I’ve been streaming for about 8 months now. I do it because I love the community, have been playing games my whole life, and overall love it. But you know why else? I do it to make money. I work my ass off to get donations, grow my numbers, and hopefully one day get partnered, because i’d [sic] so much rather be a full time streamer than work some shitty 9–5 job. I feel pressured to never even wear low cut shirts, shirts I would wear in public to the supermarket as they’re that acceptable, because punk asses like you come in my chat and automatically tell me I’m abusing the system to get money. So instead I feel the need to cover up.

There is, of course, an absurdity in accusing these women of using the service for financial benefit given how central that very ability has become for aspiring professional streamers. But over and over again, women—cis and trans, white and of color, gay and straight—have been targeted when their bodies, performances, or identities don’t correspond to an imagined ideal of what a streamer should look like or be doing. The policy unfortunately seemed to legitimize, and indeed deputize, people who were keen on calling out women for not using the platform “right.”

This occurred within a much larger trend both on the site and off to target anyone who wasn’t deemed a legitimate occupier of game culture. Leslie (2015b) highlighted this long-standing pattern in an article about “Forsen Army,” a group of trolls centered on the popular streamer Forsen that has been particularly vicious in finding women, LGBTQIA folks, and people of color who are streaming, and then “raiding” their channels to post hateful and harassing comments. The practice of channel raiding for harassment (versus surprising a small streamer with positive attention—another common practice) has a long history, and unfortunately the policy statement only ended up adding fuel to an already-burning fire within game culture.13

While Twitch as a whole certainly didn’t traffic in this approach or endorse any of the harassment, in the context of the particular moment when GamerGate was on the rise and becoming a powerful cultural force, it was hard to not read a statement that encouraged streamers to “keep this about the games” as syncing all too well with a broader regressive turn in the overall policing of entree into game culture with an often-vicious hand. It didn’t help that the statement even used the term “grills” instead of “girls.” Though intended as a familiar “joke,” the word has become persistently ugly shorthand that floods channels when a woman is on the stream, washing out any specificity that they have as a person and simply referring to them by their gender (much how the trihard emote gets spammed when a person of color is on-screen). Although several people inside the company confided in me that they were troubled by this “policy” and how it had all been handled, noting that internal discussions had at times been quite strong, the public face of the company remained unified.

In November 2015, bundled into a mix of many other updates, Twitch revised its policy again. This time the blog post announcing the changes was quite different in tone. No jokes, no mention of grills or sexy attire, but instead a simple bullet point amid others noting the update. The new policy succinctly stated (and continues to be so at the time of this writing), “Nudity and conduct involving overtly sexual behavior and/or attire are prohibited.” The change was little remarked by the community; indeed, it was only a post on the subreddit several months after the fact that even alerted me to it. As with many of the other topics, dealing with everything from intellectual property to scams, the informal, insider lingo had been removed, and in its place, fairly black box rules remain—ones that the community continues, albeit less heatedly, to debate and police.

Law

The earliest days of Twitch’s forums were filled with people asking not only about how to stream or what kinds of titles were allowed, but also fundamental questions about whether it was even legal to broadcast games at all without developer permission. Time and again, these questions went unanswered by official moderators, even when they replied to other queries. Other users would sometimes chime in to help, reassuring people that it was all covered by fair use “like on YouTube,” but also routinely stating that they were unclear how Twitch as a company was navigating this issue. In the earliest days of the forums, the absence of a well-articulated statement from Twitch on the very legality of streaming—especially amid so many other content guideline answers—was notable.

There is a provocative intersection at work in our culture. The tremendous growth of digital gaming among youths and adults alike exists alongside as well as within regulatory and governance regimes, from everyday practices to software and law. Play does not exist outside these systems but rather navigates and makes meaning within as well as around them; it lives within a DMCA world. I have long been drawn to exploring this relationship as it is negotiated in the area of intellectual property and terms of service/use policies. In those moments of conflict, compromise, and control, we are afforded the opportunity to peer a bit more closely at systems of meaning and practice that are otherwise naturalized or hidden.

One of the most powerful things that the qualitative study of digital gaming has afforded us is a deep look at how players encounter software systems and, rather than simply accepting them as given, take them and make something else. It is key to recognize this as a sociological account and not an individualistic one. While any single player may not tweak or alter their own individual play/game, the overall pattern is one of transformation. Game communities are avid, dynamic interlocutors with the systems that they engage. It could not be otherwise; this is fundamentally what culture does.

The work of culture also involves a constant dance around control and order. Regulation can take place at a variety of levels. It can operate top down, bottom up, or laterally across peers. It can be found in everything from code to social practices. Currently one key site for the governance of digital spaces is through the use of corporate policies (terms of service and end user license agreements), software, and intellectual property regimes. Underpinning this approach to regulation tends to be a basic assertion of ownership residing with developers and publishers; this is the frame that argues gamers make use of these digital artifacts essentially at their pleasure.

Yet it is unavoidable that cultural actors will always take up the objects and systems that they encounter and remake them for their own purposes. Games do not live outside culture but within it. They are objects of culture, and as such are accountable to it. They are at play within culture. This formulation, of course, is itself still not quite right because there is no single “culture”; there are many. They overlap, diverge, and exist within their own ordering, tensions, and struggles with each other. As individuals, we move through and inhabit a range of them. It is a beautiful mess that poses both methodological and analytic challenges. But one thing we can be certain of is that there is no system that is somehow magically immune from the work of culture. This provocative entanglement is the norm.

Our current moment, however, is not evenly weighted in terms of power. As legal scholar Rebecca Tushnet (2010, 892) remarks,

Copyright law’s expansion tends to restrict individual freedoms more than those of specific represented industries. Even when exceptions or limits are preserved, they are often complex to the point of near-unintelligibility, so that only a well-advised institutional player can confidently take advantage of them. This is a deeply unhealthy system, guaranteeing that citizens attempting to express themselves and participate in cultural and political dialogue can find themselves unexpectedly threatened or silenced by copyright claims.

Equitable pushes, pulls, and scuffles are not what we find in digital gaming. Instead, we frequently see players struggle to use games within systems that are not always adapting to emergent practice. And far too often when new uses are acknowledged and addressed, it is within a framework that continues to uphold a flawed understanding of ownership. Companies, even when they do "allow" unanticipated uses, never fundamentally reckon with the generative work that players do and the deep investments that they can develop.

In addition to the larger organizational skirmishes such as the SpectateFaker case that I recounted earlier, over the years I’ve seen players themselves struggle with this tension. On the one hand, they typically recognize, acknowledge, and value the work of developers, giving them tremendous praise and credit for games. At the same time, they can struggle with how to articulate their own sense that somehow something more is created through their interaction with systems that in turn make it also theirs. As one streamer insightfully put it,

So when you stream and you add any elements of customization beyond the game itself, when you start creating your own content, when you start adding humor, and you start doing different things, I think it takes it to a new level that is outside of the black or white of saying it’s owned by the game creator. It becomes something of your own and it’s part of the subculture of the internet as well. . . . The internet doesn’t like the concept of people holding, withholding, valuable information or valuable resources from the community at large, especially to make money. That’s not what we’re about. That’s not what the internet is about.

Legal scholar Julie Cohen contends that there has been a misguided understanding of creativity underpinning intellectual property regulation—one that has overly dichotomized author and reader/user. She writes that "what is needed is not a better definition of authorship, nor an airtight conception of usership that is distinct from authorship, but rather a good understanding of the complicated interrelationship between authorship and usership, and the ways in which that interrelationship plays out in the cultural environments where creative practice occurs" (Cohen 2012, 69). I develop this line one step further by taking the processes and words of live streamers to heart; I argue for conceptualizing play as transformative work that, as such, poses challenges to how we think about participation and ownership in a digital age.14

FAIR USE AND FAN PRODUCTION

Professionals and amateurs alike are constantly taking up materials produced by others and working with them. An important component of the US intellectual property regime is the designation of fair use (a component of the 1976 Copyright Act), which affords creators various kinds of protection when working with someone else’s intellectual property. The Organization for Transformative Works (2015) notes that “fair use is the right to make some use of copyrighted material without getting permission or paying. It is a basic limit on copyright law that protects free expression. ‘Fair use’ is an American phrase, although all copyright laws have some limits that keep copyright from being private censorship.” Generally speaking, there are a number of factors considered when a fair use claim is made:

the purpose and character of the use

the nature of the copyrighted work

the amount and substantiality of the portion taken

the effect of the use on the potential market15

These make up what is commonly referred to as the "four-factor" test, though they are not a test in any conventional sense; they are tied to juridical interpretation, and have caused tremendous confusion and frustration for professional and amateur creators alike.16 Of particular interest for the argument here are the components of the test relating to the purpose and character of a work along with its market effects.

The purpose and character of new creative work is critical in understanding its legal position. Fair use offers a protective foothold for creative endeavors that utilize someone else’s intellectual property and transform it through “adding new expression or meaning,” and producing value “by creating new information, new aesthetics, new insights, and understandings.” Transformative work generates a meaningfully new cultural artifact.

Amateur creators seeking fair use protection have done a great service in publicizing and animating conversations on the subject. Fan-driven sites like Fiction Alley and FanFiction.net as well as predecessor nodes on Usenet, mailing lists, and forums offered creators an opportunity not only to share their work but also to discuss the climate of production and legal challenges. Sites such as Lumen (formerly known as ChillingEffects.org) and the Organization for Transformative Works have not only drawn attention to legal issues around fan production but also worked to provide resources and information to help people navigate this fraught terrain.17

One of the central moves in educating fan producers about their legal footing has been in explaining fair use along with helping amateur creators utilize legal and rhetorical arguments to frame their activities. In practice, this has tended to mean that there has been an emphasis on noncommercial uses as well as situating fan activity as primarily driven by passion, love, and a kind of purity of intent free of monetary self-interest. The focus has emphasized a creative, community-oriented activity. This rhetorical strategy is particularly well captured in the Organization for Transformative Works’s (2013) membership drive:

Why do you participate in fandom? For many of us, the answer to that question is love—love of a favorite TV show, video game, or band; love of fannish communities and the friends we make there; or love of the creative process involved in transforming canon to create something new. Fans put in long hours making and consuming fanworks, traveling to conventions, moderating communities, and chatting about their latest fannish passions—not out of obligation, not for pay, but because it brings us joy.

This is an entirely understandable, even accurate representation for many amateur producers. It captures much of the pleasure, relationality, and commitment that develop for all creators. It speaks to a kind of serious leisure that helps us understand the level of commitment and investment that a fan might have (Stebbins 2004).

The problem, however, is that when this approach is framed as the dominant orientation, it leaves us critically and analytically unprepared to explore the commercial intent of amateur or fan producers. It can truncate our full understanding of how such endeavors can be forms of labor and work. It doesn't help us in navigating the skirmishes, battles, or tensions within emerging production models. While this was a compelling rhetorical shift to help fans reclaim some legal footing—and indeed perhaps a needed one at a particular historical moment—I am concerned that it closes off too much both critically and analytically.

Within gaming we have long been faced with a much messier picture of fan and user production that has involved commercial or professional aspirations, and complex assemblages of actors and intents where notions of work, grind, and even pain are woven in. The standard rhetoric about fair use and fandom does not help us fully get at the range of creative activities that we see. Gamers often push the line of "noncommercial love" well past the breaking point. In my previous work on massively multiplayer online games, I recounted the struggles between players and game developers/publishers around the ownership of digital goods (Taylor 2006b). Whether it was trying to sell your account on eBay or trading digital items for "real-world" currency, there has long been a tradition in game spaces whereby fans and players have attempted to make money from their play.18 There has also been a robust history in digital gaming of modding, add-on creation, and mapmaking by someone other than the formal game developer.19 Sometimes these initiatives are noncommercial in orientation, but we have also seen developers—at times a fan/player of the game, and in other instances a more professional outfit—seek financial support for their work.

The second critical component to specifically pay attention to in fair use arguments is how the new creation impacts the preexisting work. Courts are particularly attuned to whether work “deprives the copyright owner of income or undermines a new or potential market for the copyrighted work.” This is part of the reason that most people believe fair use is fundamentally about noncommercialism and that if they don’t make money on something, it is automatically protected under fair use. That isn’t actually the case, and the courts may rule you don’t have a fair use claim even if you are giving something away for free.

Perhaps surprising to many, though, is the fact that some works that would fall under fair use protection may indeed negatively impact the existing market. It has been noted regarding parody, for instance, that

it’s possible that a parody may diminish or even destroy the market value of the original work. That is, the parody may be so good that the public can never take the original work seriously again. Although this may cause a loss of income, it’s not the same type of loss as when an infringer merely appropriates the work. As one judge explained, “The economic effect of a parody with which we are concerned is not its potential to destroy or diminish the market for the original—any bad review can have that effect—but whether it fulfills the demand for the original.” (Fisher v. Dees, 794 F.2d 432 [9th Cir. 1986]; Stim 2016, 276)

The economic side of a fair use assertion, while often tilting toward valuing the noncommercial more highly, is messier than it first appears. It is not at all clear, based on empirical evidence, that live streams as a wholesale category fulfill any original demand that we might attribute to a game. Indeed, part of what has made them such a vibrant new media space is that they regularly transform private play into public entertainment; they are often entirely new products. Given that fair use is oriented to "protect[ing the] freedom of expression and the capacity of a culture to develop" along with, as I will describe below, the power of transformative works, we can fruitfully probe the issue of commercialism (Aufderheide and Jaszi 2011, 26).

The growth of video production and distribution centered around games has also led to an explosion of creative activity that while using games as a digital playing field, exceeds the bounds of “just” playing. From the earliest productions that utilized game engines for movie making to the YouTube content producers who built an innovative new media scene by providing game-focused entertainment for others, we can see a long tradition of players taking up game artifacts and making something more—something often with commercial aspirations.20 That many of these innovations and moves have involved the desire for monetization should not be simply dismissed. It speaks to a core issue that we would be remiss to overlook: the easy boundaries between commercial and noncommercial, amateur and paid, and fan and professional simply do not hold. The robust history of scholarship around participatory culture and media (including games) suggests that we need a fundamental reorientation of how we understand the work of play—one that explores its transformative nature.

PLAY AS TRANSFORMATIVE WORK

Over the course of researching gamers across multiple projects (from massively multiplayer online games to professional gaming to live streaming), I’ve come to see that they frequently hold much more nuanced approaches to understanding the productive and co-creative nature of their play. Game scholar Hanna Wirman (2009, section 2.3) argues that there are at least five forms of player productivity, ranging from the expressive to the instrumental, and they should “be understood as a precondition for the game as a cultural text.” Sal Humphreys (2005), in her work on massively multiplayer online games, contends that linear notions of authorship and subsequent understandings of copyright are disrupted when accounting for a notion of “productive players.” She and fellow game researcher John Banks have examined the power of users to reconfigure institutions and markets by their activities. They assert that this is most interestingly seen in the “hybrid configurations and the entities that emerge, which are an uneasy and at times messy mix of the commercial and non-commercial, markets and non-markets, the proprietary and the non-proprietary” (Humphreys and Banks 2008, 406).21 These early game studies findings continue to express themselves in the work of live streaming producers as they try to situate—culturally, structurally, and legally—their creative engagements.

A large part of what broadcasters themselves are contending with is that, as one expressed it, “technology moves at a million miles an hour, and laws move like the opposite direction.” One streamer I spoke with, thinking through the relationship between the game and his productions, said,

What is it that keeps people watching my cast? Is it me as a person, or is it just that I’m playing the games that they want to see? I definitely think it’s a mixture of both. I definitely have my core fan base of people who definitely watch my cast for me as a person, and those are the repeats. Those are the viewers who keep coming back, but there’s definitely a percentage of viewers every night who just sort of pop in because they see me playing a certain game. . . . I really do believe you can watch two different people broadcast the same game and have totally different experiences and totally different stories.

The sense that a person’s unique engagement with the system—the particular circuit between them and a game—is central to broadcasting animates many of the conversations that I find myself in with live streamers. There is typically a strong sense of the performative nature of gameplay: that the game provides a field on and through which individual play unfolds.22

The performative aspect and ownership stakes in this formulation were clearly articulated by one streamer I interviewed when he sought to find a good analogy to explain to me how he thought about his work. He likened what he does to a comedian or musician who, though using a club’s venue, still creates something that is unique. Even though they are using the space, “the person who’s up there performing, that’s their act. That’s theirs. So when I’m playing a game and I’m sitting there, I’m on stream, everything. And what is mine is anything, any content I create whenever I turn on my stream. That is my content. That is me. This is mine.”

Another sought to point out the distinctiveness of this form of media, saying, “I totally get the legality of not sharing or streaming music and movies or books because those art forms, those mediums, they are very much set. When you watch a film, it is the same film beginning to end every time. Yeah, you can copyright that. For me, the act of watching somebody play a game, you are not experiencing a game.”23 Instead, he argued, you are watching a specific entertainment product—one produced through the streamer’s unique actions assembled for a broadcast.

The live streamers I spoke with consistently drew out how their productions are transformative; that their work produced new forms of expression, aesthetics, and cultural products. It should perhaps not be surprising, then, when they also say, as one did, “If I could take my live stream and turn it into a brand that people want, and I can take that brand and turn it into a business, then that would be amazing.” Another framed how he approached monetization as connected with both his passion for the work and pragmatic concerns:

I want to make it clear that I make money so that I can stream. I don’t stream to make money. . . . Nobody’s just going live and play[ing] games and not think[ing] about providing for their kids or knowing what insurance you have, hospital bills, having money to pay for the car when it breaks down. It’s an aspect of this that is inevitable that you have to think about. It’s all hand in hand. It goes along with the territory. I’m going to approach the business side of this with the same intensity that I’m going to approach the gaming side of this. Because to me, it’s all synonymous. It’s all the same thing.

While much of what has been written around user-generated content (UGC) and gaming has focused on its noncommercial side, over and over again, the live streamers I spoke with had woven together their creative and commercial aspirations. They also felt themselves bumping up against legal structures and understandings of game artifacts as narrowly construed intellectual properties. Yet their transformative work was always in the foreground of their stories.

VERNACULAR LAW

This gap between how they experience their work and creative outputs, and the legal structures that in turn regulate them, is worth lingering on. Perhaps one of the most interesting threads within recent legal scholarship has been an increasing turn toward the empirical along with the role of “vernacular law.” Much in the same way that Burgess’s helpful concept of “vernacular creativity” (2006, 2007) captures the ways that “everyday creative practices” are important and can thrive outside high culture or commercialized paths, legal scholars have sought to understand how creative professionals actually think about their process and the meanings around ownership in their daily lives.

While there is a powerful myth surrounding the necessity of avidly protecting intellectual property to maintain “monetary incentives and wealth maximization,” as legal scholar Jessica Silbey (2015, 6) documents through her interviews with various kinds of creators, intellectual property holds “diverse functions and sporadic manifestations in the lives and work of artists, scientists and their business partners and managers.”24 Her story is one in which people who are commonly accorded intellectual property rights actually have a more nuanced understanding than the law typically does of its function and role in, and limits to, creative activity. Tushnet’s examination of the ability of specific creative communities to sensibly evaluate fair use claims also speaks to the thoughtfulness that producers bring to the issue. As she argues, “While copyright owners’ interests must not be ignored, and wholesale, commercial copying is extremely unlikely to constitute fair use, creative communities recognize these principles and are capable of respecting copyright’s legitimate scope while preserving space for transformation” (2008, 104).

This is resonant with the flip side claims that user producers (such as live streamers) make when reflecting on their formal legal versus experiential standing. While often stating that they have no meaningful legal protections or rights, they simultaneously talk about a profound feeling that they have real stakes as creative producers—ones that should be acknowledged and formally recognized. The broadcasters I’ve spoken with over the years actually understand that the rhetoric around intellectual property does not line up with everyday practices and does a disservice to the complexities of cultural production. A much broader range of actors, and frequently in much messier ways than contemporary regulatory regimes acknowledge, produce innovation, cultural activity, and transformative works.

Legal scholars Burns Weston and David Bollier (2013) maintain that vernacular law—the rules and forms of moral legitimacy as well as the authority that can arise socially within everyday life—can offer a powerful "corrective to formal, organized legal systems" that may be deemed unjust, unresponsive, or dysfunctional. Communications scholar Olivia Conti (2013, n.p.), in exploring the emergence of UGC, suggests that "YouTube and other UGC platforms represent a fraught layer of mediation between institutional and vernacular."

These everyday conversations along with the lay theorizing around property claims and moral rights, or the desire for monetization by user producers, can be found in comment threads, subreddits, and ethnographic fieldwork. They consistently point to a more complex understanding of cultural production than we typically find constituted in the law. While claims about fair use offer "the assertion of creator agency against unfair copyright law, vernacular discourse represents the assertion of a localised [sic] community within a world dominated by institutional discourses" (ibid.). The arguments that live streamers regularly make about their productions represent a powerful form of vernacular interventions on legal frameworks—ones that, at their heart, present a much more expansive rendering of creative action and production with commercial products. They highlight a deeply co-creative model of culture, echoing legal scholar Rosemary Coombe's (1998, 270) understanding that the "use of commercial media to make meaning is often a constitutive and transformative activity, not merely a referential or descriptive one."

As a company, Twitch certainly recognizes the protective power that a designation of transformative work holds for its broadcasters' content. The company's annual convention, TwitchCon, routinely hosts panels on the subject of intellectual property, offers partner- and affiliate-only discussions to directly answer general questions, and on a number of occasions, I've heard staff members encourage streamers to think about transformative aspects that they can add to their shows. Broadcasters are encouraged to become educated about what is legally permitted (no small feat given the overall legal limbo that much of this form of content creation lingers in). Plainly the company's interests are in broadcasters not running afoul of game developers or publishers, and it strives to have streamers engage in good faith practices.

That said, as a company it does not offer legal representation to its streamers, and situates them as independent producers who are encouraged to be educated about the issues and, ultimately, solely responsible for what they produce. As I was in the final stages of preparing this book, I learned that the company had, in partnership with the California Lawyers for the Arts and Legal.io (a legal services platform), launched a new site (at legal.io) to assist streamers with a variety of legal issues. It offers a number of guides, from licenses with Creative Commons to fair use and DMCA. Users can also find attorneys through the site, and get more info about creating limited liability companies or trademarks. On the one hand, it is great to see such resources being offered to broadcasters, who are frequently desperate for help and guidance. On the other, as labor scholar Jamie Woodcock more critically remarked to me, this type of setup has been a way that “gig economy” platforms have sidestepped meaningful accountability to their workers. Though these companies rest on the labor of nonemployees, they expect them to function as independent operators who bear the risk. Given how much the playing field is tipped against smaller content creators with our current intellectual property regimes, and how much precarity overall streamers face, I am concerned about the position this puts many of them in.

The desire of many live streamers to profit from their work, to live within what are admittedly turbulent commercial systems built on platforms that they don’t own, must be better reckoned with. Such desires cannot be written off as simply co-opted fandom or exploitation, or simply tolerated monetization at the discretion of the “real” intellectual property holders. The activities of players, which might otherwise be understood as simply enacting a game as given, can be a form of productive, creative engagement and transformative work, warranting both cultural recognition and legal protection.

AUTOMATED ENFORCEMENT

Though we can push to think more expansively about the transformative work streamers do, when technology gets enlisted to embody legal structures, things can be painfully reduced. Earlier in the chapter, I discussed the varying ways that sociotechnical actors were enlisted, from bots to DDoSing. Given the abundance of UGC hosted on platforms, many companies have taken up technical solutions to try to deal with everything from child pornography to intellectual property infringement. Though human review and the manual handling of data still play significant roles in content management, increasingly software is being deployed to catch and remove problematic content. Technical interventions have similarly been deployed to help govern policy on Twitch. While some policies are oriented toward enforcing a brand identity, others serve a role in legal protection for the service itself.

Content distribution platforms like YouTube and Twitch seek legal safety against copyright infringement claims via the safe harbor provision of the DMCA. As legal scholar Joshua Fairfield (2009, 1031) notes in his review of how the law originally sought to address the potential risks that online services faced, they "would be protected from claims of vicarious and contributory infringement if they exercised their ability to control on behalf of third-party owners of intellectual property. . . . These safe harbors permit ISPs to take action to limit infringers, while avoiding liability for acting to control the content, if certain standards are met."25 Under the safe harbor provision, a provider must "ac[t] expeditiously to remove, or disable access to, the material" that is the subject of a notice of infringement. It must also have a policy for terminating the accounts of repeat infringers. Safe harbor can be revoked when a provider has knowledge of specific infringing activity and does nothing, or when it has a "financial benefit directly attributable to" the infringement (17 U.S. Code § 512).26

In practice, this provision has led platform companies to put mechanisms in place for copyright holders to easily make claims against infringing content, which is then removed from the service. Some of these claims go through human mediators, but increasingly they are handled by automated systems. For example, YouTube’s (2013) ContentID works by having “rights holders deliver YouTube reference files (audio-only or video) of content they own, metadata describing that content, and policies on what they want YouTube to do when we find a match. We compare videos uploaded to YouTube against those reference files. Our technology automatically identifies your content and applies your preferred policy: monetize, track, or block.” Systems such as this (often called “digital fingerprinting”) are especially good at catching a wide variety of recorded video and audio.

In the case of live content, however, the challenges are significant. Patterns of identification may not be known in advance. Permissible use mashed together with creative content may confound the system. Indeed, early attempts at automatically catching and shutting down live streamed content resulted in a number of bungled efforts, including the wrongful flagging of Michelle Obama’s 2012 Democratic National Convention speech that was live streamed on YouTube and the 2012 Hugo Awards broadcast over Ustream. In both cases, the platforms apologized for the error, but as Wired writer Geeta Dayal (2012) observed, “Copyright bots are being wired into that infrastructure, programmed as stern and unyielding censors with one hand ever poised at the off switch. What happens if the bot detects snippets of a copyrighted song or movie clip in the background? Say a ringtone from a phone not shut off at a PTA meeting? Or a short YouTube clip shown by a convention speaker to illustrate a funny point? Will the future of livestreaming be so fragile as to be unusable?”

In August 2014, Twitch announced that it would be using software from the company Audible Magic to catch and mute infringing audio in recorded video of streams. On rollout, however, some expressed annoyance with the technology, which mistakenly muted large sections of videos. Game music composer Danny Baranowsky, for instance, was surprised to find videos of a game that he was working on being hit by the software and having chunks muted, despite his not requesting such policing (Kollar 2014). Others found their content silenced merely due to game sounds included in the video. Twitch’s own weekly show was itself briefly targeted and muted by the software. The launch coincided with rumors that Google was planning to buy Twitch; many who were already frustrated with YouTube’s content management system saw this as a dire path for the platform. Others expressed broader concerns about how the widespread practice of having your favorite music play in the background while you streamed was going to be quashed in a way that undermined the vibrancy of broadcasts.27 Given many had demonstrated an interest in being able to use music legally, perhaps via some payment system, the implementation was met with significant pushback.

Streamers perhaps had both less and more to worry about than this first brush with automated content regulation suggested. On the one hand, live content is some of the most difficult to handle through automated systems. Beyond basic issues about permissible content (such as with the Hugo Awards or via fair use claims), in the case of a Twitch broadcast there are multiple layers of audio as well as video content that would have to be pulled apart and parsed. As I described in chapter 3, raw gameplay content makes up only a portion of any given broadcast. Given the technical challenges, broadcasters could take some solace in the fact that software was unlikely to be able to fully regulate their content immediately. On the other hand, automated regulation of live content is becoming increasingly important to a variety of media stakeholders, especially as even traditional content like sports starts being distributed online. Given the potential revenue possibilities for strong software to enter the automated regulatory regime, it is likely just a matter of time.

Co-creative Culture

Ultimately, game live streaming has come to be another domain in which we can see the active, engaged participation of users working with, building on, and extending commercially available platforms. Their innovations, undertaken in concert with sites like Twitch, reveal the transformative work of play and shine a light on the co-creative mode at the heart of gaming. Rather than frame users and systems as oppositional, we might look to the ways that they iterate each other, shaping practices and meanings in an ongoing dance of cultural production. As game scholar Seth Giddings (2008, 160) has argued, “We are no longer looking at just a ‘technology’ and its ‘users’ but the event of their relationships, of their reciprocal configuration.” The interrelation is key. From game artifacts to platforms like Twitch, users are constantly working over and transforming the systems that they encounter.

But even co-creative models carry forms of control and regulation within them. At times those come from users themselves, who seek to police boundaries and innovations, constraining for both good and ill the engagements of others. At other times, it is the work of moderation systems that put up guardrails to direct the participation of users. Whether through the work of human moderators or technology delegated to do so, formal community management is also influencing what is happening online.

Policy is typically where institutional principles come into sharpest relief. Formalized structures—articulated in terms of use and community guidelines—highlight what companies see as permissible or legitimate behavior; those in turn both shape and constrain what users do in these systems. Far from being simply neutral platforms, companies like Twitch are invested in honing what happens at their site for a variety of reasons.

Finally, at the most macro level, we can see the ways that an understanding of intellectual property comes into direct conversation with the work broadcasters are doing in these spaces. Perhaps one of the most striking things that you hear from streamers is how much thought they put into the transformative work that they do and how deeply aware they are of the ways that our current intellectual property regimes are not only out of step with their practice but also threaten creativity. Rather than seeing themselves as simply appropriating the intellectual property of developers, they express a more nuanced understanding of how cultural products are always co-creative in nature.

In this and previous chapters, I have sought to show how game live streaming is made up of a complex assemblage of human and nonhuman actors, as well as organizations and platforms, that enact their vision via practices and policies. Anthropologist Paul Rabinow (2003, 56) has written of assemblages that “they are not yet an experimental system in which controlled variation can be produced, measured, and observed. They are comparatively effervescent, disappearing in years or decades rather than centuries.” We can extend this framework to think about the assemblage of regulatory mechanisms that are at work interacting with, shaping, amplifying, and restricting the engagements of users. This circuit is always iterating, shifting in relation to human practices, social development, and technologies. Ultimately, cultural production is a system of co-creativity, and we must continue to push for institutions and law that recognize that foundational truth.
