Abstract: The article is concerned with the phenomenon of crowdsourcing as applied to translation processes. It first deals with conceptual and terminological questions and introduces different typologies used to describe the various applications of crowdsourcing translation. It then outlines five examples of successful crowdsourcing translation projects involving Romance languages. Furthermore, the paper examines advantages and challenges related to crowdsourcing practices as compared to traditional translation processes performed by professionals and deals with the question of what motivates the crowd to participate in the translation activity without remuneration. Another important issue is how quality can be ensured in crowdsourcing environments and how translation quality is to be assessed. The article also examines the role of professional translators and their relation to crowdsourcing translation as well as the role of crowdsourcing translation in translation studies.
Keywords: collaborative translation, community translation, crowdsourcing, fansubbing, translation, translation studies
The World Wide Web has evolved into an enormous platform for global human interaction. People come together on the web to combine resources, knowledge, creativity, or skills to contribute to a specific task or project. This phenomenon is commonly referred to as crowdsourcing (Geiger et al. 2011). The term was coined by Jeff Howe (2006 and 2008) in analogy to the pre-existing practice of outsourcing, highlighting the power of a crowd organically formed to perform a given task. The same term is used in the Romance languages. Crowdsourcing is not a single strategy but rather “an umbrella term for a highly varied group of approaches that share one obvious attribute in common: they all depend on some contribution from the crowd” (Howe 2008, 280). Crowdsourcing relies on volunteers, mostly amateurs, who are willing to devote their time and energy to contributing to a given task with little or no remuneration. Wikipedia is one of the pioneering examples, proving that mass collaboration can be successful when people voluntarily participate in a purposeful activity. Crowdsourcing is also used by companies to obtain services, ideas, or content from a large group of volunteers, especially from an online community, rather than from traditional employees. But it is not only a business solution aimed at achieving business goals; it is also used in non-profit contexts, e. g., for humanitarian work or as a form of contribution to a fan community.
Crowdsourcing is also applied to translation processes. Although the term crowdsourcing was introduced as recently as 2006 (cf. Howe 2006), the idea of crowdsourced translation is not new. A striking example is the creation and localization of Free and Open-Source Software (FOSS), constructed over the last two decades by non-professional volunteers in a large number of the world’s languages (cf. Seiler ↗22 Software Localization into Romance Languages). Even before FOSS, fans of particular media products were engaged in the practice known as fansubbing in order to share content with fellow fans (cf. Ferrer Simó 2005; Díaz-Cintas/Muñoz Sánchez 2006; O’Hagan 2009). More recently, crowdsourcing has been used to translate social media sites such as Facebook and Twitter. It is also used by humanitarian and non-profit organizations such as Kiva and the Rosetta Foundation, in the field of citizen journalism (e. g., Global Voices) to spread and share information about groups neglected by mainstream media, and in crisis situations. For example, after the 2010 earthquake in Haiti, crowdsourced volunteers helped to locate thousands of victims by translating their text messages written in Haitian Kreyòl. Phenomena like these have been referred to as crowdsourcing translation (cf. Munro 2010; Anastasiou/Gupta 2011; Zaidan/Callison-Burch 2011; EU 2012; Sutherlin 2013), translation crowdsourcing (cf. Mesipuu 2010; 2012), community translation (cf. O’Hagan 2011; Kelly/Ray/DePalma 2011), user-generated translation (cf. O’Hagan 2009; Perrino 2009), collaborative translation (cf. Désilets 2011; Désilets/van der Meer 2011), and community-driven translation (cf. Ellis 2009) or CT3 (community, crowdsourced and collaborative translation, cf. DePalma/Kelly 2008; Kelly 2009). These terms are sometimes used synonymously, sometimes with slight differences in meaning, depending on the author.
According to O’Hagan (2011, 11s.), the ambiguity of certain terms is an indication of the terminological instability typical of an emerging paradigm. The concept of community translation, for example, is not entirely transparent because of its closeness to the existing concept of community interpretation, which has become well established in Translation Studies (cf. Pym 2011; O’Hagan 2011). Désilets/van der Meer (2011) use collaborative translation as an umbrella term for different uses of collaborative technologies, including agile translation teamware, collaborative terminology resources, translation memory sharing, online marketplaces for translators, translation crowdsourcing and post-editing by the crowd.
The main characteristics shared by all of these terms are that the translation is done voluntarily by Internet users and is usually produced in some form of collaboration, often by a group of people forming an online community. The example of Facebook illustrates how the idea of crowdsourcing can be closely connected to the very idea of social networking. Facebook has developed an efficient strategy to have its site translated by its members.
For Facebook users, their active participation in the translation process is an opportunity to share something with others, to improve the tool they love, to feel like active developers instead of passive consumers, and to help other people to access the network in their native language (cf. EU 2012, 26). In such cases, the translation process can be seen as a social activity, an extension of the user’s activities on the networking site, rather than a work engagement. Translation itself becomes a form of entertainment (cf. O’Hagan 2012, 126).
While there are many case studies which exemplify successful uses of crowdsourcing in translation, there are also negative examples such as the case with LinkedIn. In 2009, following the example of Facebook, LinkedIn sent a survey to the language professionals among its members asking whether they were willing to participate in the translation of the website and which incentives they would prefer, proposing five non-monetary choices. This provoked strong negative reactions from professional translators who felt exploited and refused to work for free for a profit-making enterprise, and led to the formation of a new LinkedIn Group “Translators Against Crowdsourcing by Commercial Businesses” (cf. Kelly 2009; EU 2012, 24/45). Personal motivation is one of the key factors in crowdsourcing translation (cf. section 6). It seems that the applicability of crowdsourcing translation is limited to specific contexts, especially to situations where there is a community of people with a strong emotional bond to the content being translated (cf. Désilets/van der Meer 2011, 28).
The range of areas in which crowdsourcing translation is used is constantly expanding. Recent studies have proposed different typologies to describe possible applications of crowdsourcing translation. In a study of 104 community translation platforms, Ray/Kelly (2011) distinguish three types of crowdsourced translation, namely a cause-driven, a product-driven and an outsourcing-driven type. Cause-driven translations usually focus on a non-profit, often humanitarian project. People choose and translate content that interests them, e. g., in the case of fansubbing, where fans translate the subtitles of their favorite films or television programs. Product-driven translations are usually projects in which for-profit companies recruit and manage a crowd, as in the cases of Adobe or Skype. The volunteers are often remunerated through free products, services, or promotional merchandise from the company. Outsourcing portals such as Crowdflower or Microtask offer crowdsourced translation for a charge, either as their main revenue driver or as one of their services (cf. also Kelly/Ray/DePalma 2011).
From a global standpoint, managed crowdsourcing projects can be distinguished from self-organized crowdsourcing projects (cf. Jiménez-Crespo 2011, 135s.; EU 2012, 29). In the first case, the project is launched by a company or non-profit organization appealing to a crowd for their translation needs. All of the work is carried out by the crowd, but the project is managed by the respective company or organization. In the second case, a group of users organizes the translation themselves without any external control, as in the communities of fansubbers or in localization projects of Free and Open Source Software. The crowd is entirely in charge and usually has some kind of ideological motivation.
Mesipuu (2010; 2012) refers to two different approaches to implementing crowdsourcing translation, namely an open community approach and a closed community approach. The former, employed for example by Facebook, means that anyone can participate in the crowdsourcing project, provided that they are registered users. In the latter, used for example by Skype, the crowd is limited to a number of pre-selected users who can participate in the translation project. The community is more exclusive and is bound to certain obligations such as confidentiality agreements and translation deadlines.
Besides applications of crowdsourcing purely dedicated to translation, there are also forms where translation is not the main focus but is, nonetheless, an important tool (cf. EU 2012, 46). Examples are the online dictionary WordReference, one of the most consulted websites in the world, offering a discussion forum for translations, or Duolingo, a free language-learning website integrating a crowdsourcing translation project (cf. section 4).
Crowdsourcing translation has been successfully used in several different contexts. Some examples of crowdsourcing projects involving Romance languages will be briefly presented in the following sections.
One of the most prominent examples of crowdsourcing in translation is the translation of Facebook’s interface into a very large number of languages. The aim is to extend the reach of Facebook to all Internet users, including speakers of commonly ignored languages. Facebook launched its translation application in 2008, allowing registered Facebook users to translate fragments of text used on the Facebook website. In the case of Spanish, for instance, the whole site’s interface was translated by Spanish-speaking users within one week. A draft of the French version was completed in 24 hours (cf. Ellis 2009). The site has also been translated into multiple varieties of Latin American Spanish using crowdsourcing (cf. Kelly/Ray/DePalma 2011, 88).
The translation process basically consists of four steps (cf. Kelly/Ray/DePalma 2011, 86). In the first instance, Facebook users translate strings and sentences in the interface. In the second step, members of the community vote on the translation alternatives proposed by different users, marking them as appropriate or not appropriate. As a result of the voting, the most popular translations are selected. In the third step, the translators review and solve trickier translations on discussion boards. In a final step, all translations are reviewed by paid professional translators. Hence, contrary to other crowdsourcing projects, there is clear intervention by professionals. All in all, the translation process appears to be very well organized. Within the application, the users can also access a glossary with the most important terms, a style guide, and the leader board that shows the best and most active translators. Apart from the final control by professional translators, different control mechanisms have been built directly into the Translations app. There is, for example, an automatic check of capitalization and punctuation and an automatic indication of glossary entries with definitions and approved translations for technical terms (cf. Kelly/Ray/DePalma 2011, 86). Initial errors, such as the occurrence of “aser” instead of “hacer” in the Spanish version, were quickly corrected (cf. García 2010), and the overall quality of the translations appears to be convincing. In August 2009, Facebook applied for a patent for its community translation platform at the US Patent & Trademark Office (cf. O’Hagan 2011, 12).
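The voting step of this workflow can be made concrete with a short sketch. The following Python is purely illustrative and not Facebook’s actual implementation: the net-approval scoring rule, the tie handling and the sample strings are all assumptions.

```python
def select_translation(votes):
    """Pick the candidate translation with the best net approval.

    `votes` maps each candidate string to a list of booleans
    (True = voted "appropriate", False = "not appropriate").
    Candidates without net positive approval are discarded; an
    unresolved tie returns None, i.e., the string is left for
    the discussion boards (step three of the workflow).
    """
    scores = {cand: 2 * sum(v) - len(v) for cand, v in votes.items()}
    approved = {c: s for c, s in scores.items() if s > 0}
    if not approved:
        return None  # nothing approved: escalate to discussion
    top_score = max(approved.values())
    top = [c for c, s in approved.items() if s == top_score]
    return top[0] if len(top) == 1 else None  # tie: escalate

# Hypothetical votes on two candidate Spanish translations of "Home":
votes = {
    "Inicio": [True, True, True, False],       # net score +2
    "Página principal": [True, False, False],  # net score -1
}
print(select_translation(votes))  # -> Inicio
```

In a real system the winning string would then still pass through the fourth step, review by paid professional translators.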
In 2011, Facebook launched a new application combining machine translation and crowdsourcing. This application allows for the translation of posts and comments on public pages into the languages marked as native in the user’s profile. The text is machine-translated and can then be revised and improved by the user. If a translation suggested by a user gets enough positive votes from other users, it replaces the machine translation (cf. EU 2012, 25).
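The replacement logic described here, showing a machine translation until a crowd suggestion collects enough positive votes, can be sketched as follows. The vote threshold, the data layout and the example strings are assumptions for illustration, not documented details of Facebook’s system.

```python
def display_translation(mt_output, suggestions, threshold=5):
    """Return the best crowd suggestion if it has reached the vote
    threshold; otherwise fall back to the machine translation.

    `suggestions` maps each suggested translation to an
    (upvotes, downvotes) pair. The threshold of 5 net positive
    votes is an assumed value.
    """
    best, best_net = None, 0
    for text, (up, down) in suggestions.items():
        net = up - down
        if net >= threshold and net > best_net:
            best, best_net = text, net
    return best if best is not None else mt_output

mt = "Feliz el cumpleaños"  # hypothetical raw machine translation
crowd = {"¡Feliz cumpleaños!": (7, 1), "Cumpleaños feliz": (2, 0)}
print(display_translation(mt, crowd))  # -> ¡Feliz cumpleaños!
```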
Crowdsourcing has also been used in crisis situations, such as natural disasters or violent political upheaval. The most famous initiative of that kind was Mission 4636, a humanitarian crowdsourcing project established for disaster response following the 2010 earthquake in Haiti (cf. Munro 2010; 2012). Because the existing emergency response service was inoperable in the wake of the earthquake, but most cell-towers were intact, a text message-based emergency reporting system was created, allowing anyone within Haiti to send a text message for free to the phone number 4636. The majority of the messages were in Haitian Kreyòl which was not understood by most of the international relief workers arriving in the country. Hence, in an effort called Mission 4636, the messages were translated, categorized and mapped by Kreyòl and French-speaking volunteers worldwide via online crowdsourcing platforms. The data were then streamed back to the relief efforts in Haiti, with a median turnaround time of less than 5 minutes. About 80,000 text messages were translated in that way, helping to save hundreds of lives and to deliver first aid to thousands of Haitians. The core information sharing interface was a simple online chat room, allowing the discussion of difficult translations and the exchange of information between the translators. The great majority of the volunteer translators were members of the Haitian diaspora, who collaborated online from at least 49 countries. Their knowledge of Haitian Kreyòl as well as their geographic knowledge were essential to the success of the project (cf. Munro 2010; 2012). Crowdsourcing translation was also used in other crisis situations such as the Arab Spring in Egypt and Libya in 2011 or in Somalia in 2011/12 (cf. Sutherlin 2013).
A typical case of self-organized crowdsourcing translation is the phenomenon of fansubbing, i.e., collaborative subtitling by fans for fans. Fansubs started as fan-produced, subtitled versions of anime, Japanese animation films, in the 1980s and mainly developed in the mid-1990s with the advent of cheap computer software and the availability of free subbing equipment on the Internet (cf. Díaz-Cintas/Muñoz Sánchez 2006). The philosophy behind this kind of subtitling is the free distribution of audiovisual programs with subtitles done by a fan community. It emerged as an attempt to make Japanese anime available to European and American fans who were faced with two problems: the linguistic barrier and the scant distribution of these series in their countries. Despite the questionable legality of such activities, fans decided to subtitle the anime themselves and to distribute them on videotapes, later on DVD and on the Internet (cf. Díaz-Cintas 2005, 16).
Fansubs are a specific form of subtitles that differ from commercial subtitles, among other things, in that they tend to stay close to the original text and preserve some cultural idiosyncrasies. Certain cultural referents such as names of places or traditions are explained by showing the translator’s notes at the top of the screen. They appear and disappear together with the subtitles, which makes reading challenging. The linguistic quality of the subtitles is not always ideal and translation errors are quite common (cf. Díaz-Cintas/Muñoz Sánchez 2006). English fansubs are often produced by Japanese native speakers. Fansubs in other languages, especially in Romance languages, are often translated from the English version and not from the Japanese original. Spanish fansubs have been described in more detail by Ferrer Simó (2005) and Díaz-Cintas/Muñoz Sánchez (2006).
Nowadays, the practice of fansubbing is not limited to Japanese anime, but has also spread to films and television programs in various languages. It is especially used to make American TV series more readily available in non-English-speaking countries. For example, popular TV series such as “Lost”, “Dexter”, “Heroes” or “How I Met Your Mother” can be found subtitled into Spanish, since there is a huge community of fansubbers in Latin America (cf. Fernández Costales 2011). In countries like Italy and Spain, where dubbing is the standard practice and subtitled versions are not commercially available, fansubbing is often the only option for fans to get an original version with the original voices. Even if a commercially translated version exists, fans often prefer the fansubs because they are usually more faithful to the original text and because they are available much more rapidly than commercial versions. Files with the subtitles of new episodes are often uploaded the day after being broadcast in the United States (cf. EU 2012, 30). In some cases, the fan translations are so successful that they can influence decisions in professional translations. A striking example is the Italian version of the US television series “The Big Bang Theory”. The fan translators, who are usually real experts on everything concerning the series, criticized the official dubbing for simplifying and streamlining the dialogue and the cultural and textual references. Besides producing a fansubbed version, the fans reacted so strongly through their blogs, forums and fansites that the Italian copyright holder finally decided to choose a new dubbing team more faithful to the original text (cf. EU 2012, 31). Despite the success of fansubs, the ethical and legal issues of subtitling media products without holding the copyright remain, including questions regarding the interpretation of copyright laws.
Duolingo is a free language-learning website that doubles as a crowdsourced text translation platform. Users can learn a language and simultaneously help to translate websites and other documents. On the surface, Duolingo looks like a typical language-learning system. The tasks users receive include translating sentences and rating the accuracy of translations made by others. The sentences are pulled from web pages written in the language the user is learning. The system allots the sentences according to the level of the learner, i.e., more complex sentences to more advanced learners. To ensure the quality of the translations, each sentence is translated and revised by multiple learners before the translation is declared correct (cf. Giles 2012; EU 2012, 33). When launching Duolingo in November 2011, its creator Luis von Ahn claimed that he could translate the whole content of Wikipedia from English into Spanish in just 80 hours, without having to pay for it (cf. EU 2012, 33).
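Two of the mechanisms just described, allotting sentences by learner level and declaring a translation correct only after multiple learners agree, can be modeled roughly as follows. The numeric difficulty scale, the agreement threshold and the sample data are assumptions for illustration; Duolingo’s actual system is not public.

```python
from collections import Counter

def assign_sentence(sentences, learner_level):
    """Return the hardest pending sentence the learner can handle.
    Each sentence dict carries a 'difficulty' on an assumed 1-5 scale."""
    pool = [s for s in sentences if s["difficulty"] <= learner_level]
    return max(pool, key=lambda s: s["difficulty"]) if pool else None

def accepted_translation(candidates, min_agreement=3):
    """Declare a translation correct once `min_agreement` independent
    learners have produced the same version (a simplification of the
    translate-and-revise cycle described above)."""
    if not candidates:
        return None
    best, count = Counter(candidates).most_common(1)[0]
    return best if count >= min_agreement else None

sentences = [
    {"text": "Hola.", "difficulty": 1},
    {"text": "Si lo hubiera sabido, habría venido.", "difficulty": 5},
]
print(assign_sentence(sentences, learner_level=2)["text"])   # -> Hola.
print(accepted_translation(["Hello.", "Hello.", "Hello.", "Hi."]))  # -> Hello.
```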
The first languages available on Duolingo were Spanish and English. Other languages have since been added, including French, Italian and Portuguese. The idea is to provide commercial translations by combining language learning and translation. The concept has proven to be successful, although several concerns have been expressed. Apart from the question of whether or not it is acceptable to make a profit out of free labor, another issue is whether translating a text can be reduced to translating individual sentences. Additionally, it is questionable whether translations by learners are reliable when more complex sentences, idiomatic expressions or nuanced meanings are at stake (cf. Giles 2012; EU 2012, 33s.).
A very specific kind of crowdsourcing translation is the discussion of translations in language forums like the WordReference forums. WordReference provides free online dictionaries for several language pairs with English, including English–Spanish, English–French, English–Italian, English–Portuguese and English–Romanian. The site was created by Michael Kellogg in 1999 and has been growing in size, popularity and quality ever since. It is consistently ranked in the top 500 most-visited websites in the world and is a top 100 website in Spain, France, Italy and all of Latin America (cf. WordReference 2013). In addition to the dictionaries, the site provides language forums for numerous languages including the major Romance languages as well as Catalan and Latin. The forums are publicly accessible and are considered a useful tool for translators, interpreters, and language students. Following the idea of crowdsourcing, users suggest and discuss translations of sentences, words or phrases in context. By asking the crowd, users can get proposals that traditional dictionaries or Internet resources do not provide (cf. EU 2012, 32s.). WordReference is the most popular, but not the only website offering language forums of that kind. A similar tool is provided by LEO (Link Everything Online) for language pairs with German, including German–French, German–Spanish, German–Italian, and, more recently, German–Portuguese. Social networking sites for translation like ProZ or Translators Café also provide tools in which users can ask questions and help each other with translations or explanations of terms and phrases (cf. Perrino 2009, 65s.).
Crowdsourcing translation provides a number of advantages compared to traditional translation processes performed by professionals. One is the fast turnaround time. Crowdsourced translations are often completed in days or weeks, as opposed to the months required by professionals. The number of people involved in the translation process and the fact that they are generally highly motivated help to considerably accelerate the process. A striking example is the crowdsourced translation of Facebook. The Spanish and the German versions were translated in one week; a draft of the French version was completed in 24 hours (cf. Ellis 2009). Similar speed advantages can be found in fan translation. For example, in 2007, when the final volume of Harry Potter was published in English, the Chinese version was available only two days after the release of the original, translated by 60 volunteers under the direction of a young student (cf. Perrino 2009, 73). However, as crowdsourcing projects often do not have fixed deadlines, there is always the risk that the crowd does not respond if the project fails to attract its attention.
Moreover, crowdsourcing can help to make information available to people who would be excluded from it without the intervention of the crowd. Crowdsourcing translation produces web content in languages other than English and contributes to making the web more multilingual (cf. EU 2012, 36). It can especially support minority languages by helping them become more present on the Internet. As professional translation implies high costs and machine translation usually supports only some languages and language combinations, crowdsourcing translation is sometimes the only way of getting web content translated into less spoken and minority languages (cf. Capdevila Fernández 2012). Apart from the availability of information, the crowd’s active participation can also lead to a different perception of translation. Traditional translation activities are often invisible and unattractive to non-professionals. Crowdsourcing projects can help to raise awareness of the importance of translation and multilingualism (cf. EU 2012, 36).
From the perspective of companies using crowdsourcing translation, possible benefits range from cost reduction and faster turnaround time to increased market reach through additional languages, community involvement and increased brand loyalty (cf. Désilets 2011; Désilets/van der Meer 2011, 31; Mesipuu 2010, 14s.). Companies have pointed out that cost reduction is not the main reason why they implement crowdsourcing (cf. Mesipuu 2010, 13). Nevertheless, in many cases, using volunteer translators seems to be a promising business solution.
On the other hand, from an ethical point of view, crowdsourcing solutions used for commercial purposes have been criticized for making a profit from free labor (cf. Baer 2010; O’Hagan 2011, 15). For-profit companies such as Facebook or Twitter appeal to the users’ sense of community to get them engaged in their translation projects. This has been described as problematic by some scholars (cf. Van Dijck/Nieborg 2009; McDonough Dolmaya 2011a) because the companies actually benefit from the translated user interface. The participants might not realize that they are being exploited because they are unable to see through the underlying marketing mechanisms. Dodd (2011) even draws parallels between crowdsourcing and Marxism, describing crowdsourcing as a method of exploitation used by companies to aggregate mass quantities of unpaid labor while retaining profits and property rights for themselves.
Professional translators usually contest crowdsourcing practices, since they see them as a threat to their work (cf. section 8). Crowdsourcing translation is contrary to codes of practice from professional translator associations, among other things, in that it does not respect rates for the translator’s work and presents translation as a task that can be easily accomplished by anybody who speaks more than one language (cf. McDonough Dolmaya 2011a; 2011b). Apart from concerns about the professional status of translators and their recognition as skilled and trained professionals, translation quality is one of the controversial issues often discussed in this context (cf. section 7). One of the problematic points is that texts are usually split up into smaller chunks which are translated by different members of the community. This means that translation is de-contextualized (cf. Désilets/van der Meer 2011, 34), which can affect coherence and consistency. In the case of Facebook, for example, the global coherence and consistency of the crowdsourced translations is controlled by professional translators. Facebook has indicated that crowdsourcing has been successful with small strings of text, but not with entire paragraphs or pages such as “Help” pages (cf. Jiménez-Crespo 2011, 138s.).
Other challenges regarding crowdsourcing translation are the questions of intellectual property, authorship, anonymity, and privacy (cf. Anastasiou/Gupta 2011, 642). For example, fansubbing activities are technically illegal in that the fan communities do not hold the copyrights for the films and series they subtitle and distribute. Nevertheless, most copyright holders accept them as long as no commercial version is available since they often have a positive impact on promoting the respective series. However, several copyright holders have already threatened fansubbers with legal action (cf. Díaz-Cintas/Muñoz Sánchez 2006, 44s.).
One of the key factors in crowdsourcing translation is the participant’s motivation to contribute to the project. One of the most important motivators seems to be the desire to contribute to a meaningful activity. Howe (2008, 29) notes that, with the emergence of the Internet, people have begun to feel overeducated and underfulfilled, causing them to seek more meaningful work outside their workplace. Crowdsourcing projects can provide activities that are perceived as meaningful by the volunteers. In the case of self-organized projects, the translation activity is often driven by the ideological conviction of the participants. In other cases, the meaningful activity can be the act of helping other people by supporting humanitarian goals with their translations, as in the case of the Haiti earthquake, or by translating for organizations like Kiva or Translators without Borders. It can also be a meaningful activity to contribute to the localization of social media websites, helping to make the site available in the user’s native language and creating something that many other people can benefit from. From a less altruistic perspective, the motivating factor can simply be the desire to seize the opportunity to practice translation, as shown by Mesipuu (2012, 43ss.) in a case study on the motivation of Skype and Facebook translators. Practicing translation is seen as a challenge and a unique learning opportunity. It is a good way to improve one’s linguistic skills and can also be seen as a training environment for novice translators to gain work experience (cf. also O’Hagan 2009, 110; McDonough Dolmaya 2011a, 104). Other learning-related motivators, as shown by Mesipuu (2012, 45) for the case of Skype translators, can be to learn how big companies work, to become aware of all the opportunities and features a program includes, or to be among the first to know what is coming up next and to be able to try it out exclusively.
Moreover, it has been shown that volunteer translators are often motivated by their enthusiasm for their mother tongue. They are passionate about their own language development as translators as well as about the development of their language in general and are proud to contribute to the creation of web content in their language (cf. Mesipuu 2012, 46).
In the case of organized translation projects, some companies and organizations provide different kinds of incentives for their volunteer translators. There are usually no monetary rewards, but translators are sometimes rewarded with material incentives such as token gifts (e. g., T-shirts), free products or product discounts. It can be assumed that such incentives are an added value, but not the primary motivator for participating in a crowdsourcing translation project. Some companies also organize events to allow crowdsourcing translation community members to meet in person and thus help to sustain the sense of being part of a group of like-minded individuals (cf. Mesipuu 2012, 48). There are also other kinds of recognition such as optional translator badges (e. g., Twitter), links to the translator’s website or profile page (e. g., Facebook, Kiva), blog entries to thank the translators publicly (e. g., Second Life, OpenOffice), titles or insignia that translators can add to their profile (e. g., Second Life), or a leader board ranking the best and most active contributors (e. g., Facebook, Second Life). Such forms of recognition provide visibility for active crowd members and contribute to their status within the community (cf. McDonough Dolmaya 2011a, 103; Mesipuu 2012, 48).
A challenging question is how quality can be ensured in crowdsourcing environments. There is an assumption, especially among professional translators, that crowdsourcing translation leads to lower-quality output. At least in some contexts, however, the overall quality of the translation output has been evaluated as comparable to professional translations (cf. Kageura et al. 2011, 66) or to non-translated texts of the same kind (cf. Jiménez-Crespo 2013 and 2017). Some assume that crowdsourcing may even lead to higher quality through what has been called the “wisdom of crowds” effect (cf. Surowiecki 2004). Managed crowdsourcing projects sometimes use quality assurance mechanisms such as revision by in-house professional translators, e. g., in the case of Facebook, or pre-selection of the participants through language and translation tests, as with Kiva. In other cases, the crowd itself carries out quality evaluation through mechanisms like voting or mutual revision (cf. Désilets/van der Meer 2011, 32).
The crucial question, however, is how translation quality can be assessed. A basic assumption in Translation Studies is that quality assessment is reliable only if it is based on an explicit theoretical model (e. g., House 2001; Williams 2003). Jiménez-Crespo (2011) draws parallels between the crowdsourced quality evaluation model implemented by Facebook and theoretical approaches to quality evaluation as discussed in Translation Studies. For example, he refers to the early reader-response approach (Nida 1964; Nida/Taber 1969) as the first approach to translation quality that included reader responses. This approach has been criticized in several ways, mainly for the impracticality of its implementation. However, according to Jiménez-Crespo (2011), the Facebook model represents an actual implementation of components of the reader-response approach. He claims that the potential shortcomings of the approach can be overcome in a crowdsourcing model like the one used by Facebook, where an active community of users with an extensive knowledge of the source texts is willing to devote time to the evaluation of translations proposed by members of the same community. Furthermore, Jiménez-Crespo (2011) argues that the Facebook model can also be associated with functionalist approaches to quality assessment (Nord 1997). Functionalist approaches see translation as a communicative act that must be purposeful with respect to its target readership. Translation quality is thus defined through the translation’s capacity to fulfill this communicative purpose, i.e., its adequacy in the given context. Conventions play an important role within such an approach, since they differ from one culture to another even within the same genre.
It has, for example, been shown that US websites localized into Spanish show conventions of the source culture such as the use of direct imperative forms of the verb in navigation menus, while Spanish websites rather use infinitives or other non-personal forms (cf. Jiménez-Crespo 2009). In a functionalist framework, the Facebook evaluation model could lead to translations that better match the expectations of the community of users and can therefore be associated with higher levels of quality (cf. Jiménez-Crespo 2011).
Quality assessment becomes even more challenging when factors other than linguistic criteria are at stake. It has, for example, been shown that fan translations like fansubs often lack consistency, which might be seen as a rather serious problem from the perspective of professionals. However, fans usually do not expect professional quality. What is more important to them is a deep understanding of the series, including cultural references, so that its flavor and cultural specificities can be rendered in the target culture. Another important criterion for fansubs is speed. In many cases, speed is preferred to quality, and the aim of fansubbers is to publish a new subtitled episode as quickly as possible, even when it contains grammar mistakes or when idiomatic expressions have been wrongly adapted into the target language (cf. Fernández Costales 2011).
Many professional translators consider crowdsourcing a threat to their profession, as it might undermine their professional status and contribute to the devaluation of the translation profession (cf. Baer 2010; García 2010). Protests by translators against crowdsourcing initiatives like the one against LinkedIn (cf. section 2) reflect the concerns of professional translators regarding their role as professionals and the exploitation of their work for commercial purposes. The supporters of crowdsourcing have argued that these concerns are groundless and that crowdsourcing will never be a serious threat to highly qualified professional translators (cf. Kelly 2009). Others argue that crowdsourcing might also be a positive model for professionals, fostering collaboration between amateur and paid professional translators. This could expand the material that gets translated and provide opportunities for new translation graduates to gain work experience (cf. Baer 2010). It seems clear, however, that the practice of crowdsourcing, if it continues to grow, will have some kind of impact on the work of translation professionals, just as the use of machine translation has. One of the scenarios is that professional translators will have to become even more specialized and focus on areas where crowdsourcing cannot replace them, especially when specialization, confidentiality and accountability are required (cf. EU 2012, 46). In other cases, they will have to adapt to the new developments and expand their work to new types of jobs. Crowdsourcing (as well as machine translation) can actually create new professional profiles for translators, including pre-editors and post-editors of texts translated by the crowd (or by machines), in-house project managers, community managers, or quality assurance experts.
Another scenario is that professional translation could be seen as a hub, with the translator acting as a linguistic consultant and quality assurance expert who advises the client on which tasks, and at which points, involving the crowd and/or machine translation may or may not be advantageous (cf. García 2010). Some translation agencies have already integrated these developments into their business models. They now offer different levels of translation quality, including machine translation, crowdsourced translation and professional translation. Other agencies also offer services that assist companies in implementing and managing crowdsourced translation projects (cf. Austermühl 2011, 18). Some scholars have also claimed that collaboration between professionals and the crowd might be possible, allowing professionals to delegate simple routine parts of the translation to the crowd and to focus on more challenging aspects such as terminology, style and fluidity (cf. Désilets/van der Meer 2011, 33). Another positive aspect might be seen in the fact that crowdsourcing translation, in contrast to machine translation, helps to emphasize the human involvement in translation. While machine translation aims at maximizing automation in translation activity and minimizing the intervention of the human agent, crowdsourcing initiatives lead to a reinvestment of translation technology by humans (cf. Anastasiou/Gupta 2011, 648; Cronin 2013, 102).
Despite the concerns raised by translation professionals, it should be noted that crowdsourcing projects are not restricted to amateurs, but sometimes also involve professionals. A web-based survey among professional translators found that 12% of the translators already contribute to collaborative processes and 40% of them would consider getting involved in one in the future (cf. Gough 2011). An example of a project primarily recruiting professionals is the Kiva translation project. Besides its success in crowdsourcing microfinance, Kiva has developed a translation project in order to spread the stories for which Kiva seeks support. Kiva imposes considerable constraints on volunteer translators, concerning both their qualifications and time requirements. Volunteer translators, for example, have to pass a test to prove their high level of proficiency in the source and the target language, be able to demonstrate translation experience or studies, and commit to a minimum of two hours per week for a minimum of six months. Despite these constraints, Kiva was able to recruit about 300 highly qualified translators to work for free on a project they consider to be meaningful (cf. EU 2012, 28).
The practice of crowdsourcing has largely been ignored by Translation Studies so far. Among the few scholars dealing with theoretical aspects of crowdsourcing translation, Michael Cronin (2010; 2013) deserves particular mention. Cronin examines the effects of digital technology on translation and describes significant shifts in the way translation is carried out in the contemporary world. As a consequence of these shifts, he claims that the conventional understanding of what constitutes translation and the role of the translator need to be systematically reexamined. With regard to crowdsourcing translation, he describes a shift from a production-oriented model of externality to a consumer-oriented model of internality (cf. Cronin 2013, 100). The former is implicit in all models that have discussed the question of source- or target-language orientation in translation during the last decades (Skopos theory, Descriptive Translation Studies, dynamic and formal equivalence, foreignization vs. domestication, etc.). All of these models are based on the notion of an agent who produces a translation for consumption by an audience (production-oriented model). In crowdsourcing translation, however, the consumer becomes an active producer, a prosumer (consumer-oriented model). This challenges traditional distinctions in Translation Studies, which generally presuppose active translation agents and passive translation recipients.
Moreover, Cronin (2013, 100s.) describes a change in reading practices and literacy norms. Reading is no longer a steady, cumulative, linear process. It has, for example, been shown that most web pages are viewed for ten seconds or less, even if the page contains a lot of information (cf. Weinreich et al. 2008). Readers of web-based material have a different approach to their engagement with text, namely an instrumentalized, non-linear, and greatly accelerated approach. As literacy expectations change, translation practices will evolve as well. Possible consequences for crowdsourcing translation are, at least in certain contexts, the acceptance of lower quality translation output and the emergence of gist translation.
A third trend described by Cronin (2013, 101s.) is a shift towards pluri-subjectivity. Contrary to machine-human interaction in translation, crowdsourcing initiatives can be seen as a tool of conviviality and an instrument of human political intervention. What is implicit in such a conception of translation is a move away from the monadic subject of a traditional translation agency to a pluri-subjectivity of interaction.
Another study linking crowdsourcing to approaches discussed in Translation Studies is Jiménez-Crespo’s (2011) examination of the impact of crowdsourcing translation on translation quality assessment. Jiménez-Crespo argues that the method used by Facebook embodies aspects of previously proposed theoretical models, namely reader-based, functionalist and corpus-assisted approaches to quality assessment. These approaches have previously been considered difficult to implement, but have now become relevant and useful in crowdsourcing environments like the one used by Facebook (cf. section 7 for the reader-based and functionalist approaches).
Some studies have investigated the use of social networking platforms as a tool for translator training. Desjardins (2011) claims that practices of collaboration and peer-reviewing used by social networking sites are increasingly important to prepare student translators for their future work and that using online social networking as a teaching strategy has a significant impact on translator training. According to Desjardins, social networking sites and crowdsourcing practices are therefore relevant to Translation Studies. However, most scholars have not considered crowdsourcing practices as a relevant topic of Translation Studies so far. Crowdsourcing translation is an emerging phenomenon which still needs to be explored in all its dimensions.
Anastasiou, Dimitra/Gupta, Rajat (2011), Comparison of crowdsourcing translation with Machine Translation, Journal of Information Science 37:6, 637–659.
Austermühl, Frank (2011), On Clouds and Crowds: Current Developments in Translation Technology, in: T21N – Translation in Transition, Article 2011-09, <http://www.t21n.com/homepage/articles/T21N-2011-09-Austermuehl.pdf> (17.11.2016).
Baer, Naomi (2010), Crowdsourcing: Outrage or opportunity?, Translorial – Journal of the Northern California Translators Association, <http://translorial.com/2010/02/01/crowdsourcing-outrage-or-opportunity> (17.11.2016).
Capdevila Fernández, Cristian (2012), Crowdsourcing i traducció/localització: una amenaça o una oportunitat?, Revista Tradumàtica 10, 237–243.
Cronin, Michael (2010), The Translation Crowd, Revista Tradumàtica 8, <http://ddd.uab.cat./pub/tradumatica/15787559n8a4.pdf> (17.11.2016).
Cronin, Michael (2013), Translation in the Digital Age, New York, Routledge.
DePalma, Donald A./Kelly, Nataly (2008), Translation of, by, and for the People. How User-Translated Content Projects Work in Real Life, Lowell (Mass.), Common Sense Advisory.
Désilets, Alain (2011), Wanted: Best Practices for Collaborative Translation, <https://www.taus.net/index.php?option=com_content&view=article&id=449:wanted-best-practices-in-collaborative-translation&catid=147:translate-articles&Itemid=732> (17.11.2016).
Désilets, Alain/van der Meer, Jaap (2011), Co-creating a repository of best-practices for collaborative translation, Linguistica Antverpiensia 10, 27–45.
Desjardins, Renée (2011), Facebook me! Initial insights in favour of using social networking as a tool for translator training, Linguistica Antverpiensia 10, 175–193.
Díaz-Cintas, Jorge (2005), Back to the Future in Subtitling, in: MuTra 2005 – Challenges of Multidimensional Translation, Conference Proceedings, <http://www.euroconferences.info/proceedings/2005_Proceedings/2005_DiazCintas_Jorge.pdf> (17.11.2016).
Díaz-Cintas, Jorge/Muñoz Sánchez, Pablo (2006), Fansubs: Audiovisual Translation in an Amateur Environment, The Journal of Specialised Translation 6, 37–52.
Dodd, Sean Michael (2011), Crowdsourcing: Social[ism] Media 2.0, Translorial – Journal of the Northern California Translators Association, <http://translorial.com/2011/01/01/crowdsourcing-socialism-media-2-0> (17.11.2016).
Ellis, David (2009), A Case Study in Community-Driven Translation of a Fast-Changing Website, in: Nuray Aykin (ed.), Internationalization, Design and Global Development, Berlin/Heidelberg, Springer, 236–244.
EU (2012) = European Union (ed.), Crowdsourcing Translation, Luxembourg, Publications Office of the European Union.
Fernández Costales, Alberto (2011), 2.0: facing the challenges of the global era, in: Tralogy I, Session 4 – Tools for translators, <http://lodel.irevues.inist.fr/tralogy/index.php?id=120&format=print> (17.11.2016).
Ferrer Simó, María Rosario (2005), Fansubs y scanlations: la influencia del aficionado en los criterios profesionales, Puentes 6, 27–44.
García, Ignacio (2010), The proper place of professionals (and non-professionals and machines) in web translation, Revista Tradumàtica 8, <http://www.raco.cat/index.php/Tradumatica/article/view/225898/307309> (17.11.2016).
Geiger, David, et al. (2011), Managing the Crowd: Towards a Taxonomy of Crowdsourcing Processes, in: Proceedings of the 17th Americas Conference on Information Systems, Detroit, Michigan, August 4–7, 2011, Atlanta, AISeL, Paper 430.
Giles, Jim (2012), Learn a language, translate the web, New Scientist 213:2847, 18–19.
Gough, Joanna (2011), An empirical study of professional translators’ attitudes, use and awareness of Web 2.0 technologies, and implications for the adoption of emerging technologies and trends, Linguistica Antverpiensia 10, 195–225.
House, Juliane (2001), Translation quality assessment: Linguistic description versus social evaluation, Meta 46:2, 243–257.
Howe, Jeff (2006), The rise of crowdsourcing, Wired magazine 14:6, <https://www.wired.com/2006/06/crowds> (17.11.2016).
Howe, Jeff (2008), Crowdsourcing: Why the Power of The Crowd is Driving the Future of Business, London, Random House.
Jiménez-Crespo, Miguel A. (2009), Conventions in localisation: a corpus study of original vs. translated web texts, The Journal of Specialized Translation 12, 79–102.
Jiménez-Crespo, Miguel A. (2011), From many to one: Novel approaches to translation quality in a social network era, Linguistica Antverpiensia 10, 131–152.
Jiménez-Crespo, Miguel A. (2013), Crowdsourcing, corpus use, and the search for translation naturalness, Translation and Interpreting Studies 8:1, 23–49.
Jiménez-Crespo, Miguel A. (2017), Crowdsourcing and Online Collaborative Translations, Amsterdam/Philadelphia, Benjamins.
Kageura, Kyo, et al. (2011), Has translation gone online and collaborative? An experience from Minna no Hon’yaku, Linguistica Antverpiensia 10, 47–72.
Kelly, Nataly (2009), Freelance translators clash with LinkedIn over crowdsourced translation, Common Sense Advisory, <http://commonsenseadvisory.com/Default.aspx?Contenttype=ArticleDetAD&tabID=63&Aid=591&moduleId=391> (17.11.2016).
Kelly, Nataly/Ray, Rebecca/DePalma, Donald A. (2011), From crawling to sprinting: Community translation goes mainstream, Linguistica Antverpiensia 10, 75–94.
McDonough Dolmaya, Julie (2011a), The ethics of crowdsourcing, Linguistica Antverpiensia 10, 97–110.
McDonough Dolmaya, Julie (2011b), Moral ambiguity: Some shortcomings of professional codes of ethics for translators, The Journal of Specialised Translation 15, 28–49.
Mesipuu, Marit (2010), Translation Crowdsourcing – an Insight into Hows and Whys (at the Example of Facebook and Skype), Master’s Thesis, Tallinn University, <https://www.e-varamu.ee/item/T43JZ4L46TOW7W45B3DSLAHFIUUCBWIO> (17.11.2016).
Mesipuu, Marit (2012), Translation crowdsourcing and user-translator motivation at Facebook and Skype, Translation Spaces 1, 33–53.
Munro, Robert (2010), Crowdsourced translation for emergency response in Haiti: the global collaboration of local knowledge, in: AMTA Workshop on Collaborative Crowdsourcing for Translation, Denver (Colorado), <http://amta2010.amtaweb.org/AMTA/papers/7-01-01-Munro.pdf> (17.11.2016).
Munro, Robert (2012), Crowdsourcing and the crisis-affected community. Lessons learned and looking forward from Mission 4636, Information Retrieval 16:2, 210–266.
Nida, Eugene A. (1964), Toward a Science of Translating. With Special Reference to the Principles and Procedures Involved in Bible Translating, Leiden, Brill.
Nida, Eugene A./Taber, Charles R. (1969), The Theory and Practice of Translation, Leiden, Brill.
Nord, Christiane (1997), Translating as a Purposeful Activity: Functionalist Approaches Explained, Manchester, St. Jerome.
O’Hagan, Minako (2009), Evolution of user-generated translation: Fansubs, translation hacking and crowdsourcing, The Journal of Internationalization and Localization 1:1, 94–121.
O’Hagan, Minako (2011), Community Translation: Translation as a social activity and its possible consequences in the advent of Web 2.0 and beyond, Linguistica Antverpiensia 10, 11–23.
O’Hagan, Minako (2012), Translation as a new game in the digital era, Translation Spaces 1, 123–141.
Perrino, Saverio (2009), User-generated Translation: The future of translation in a Web 2.0 environment, The Journal of Specialised Translation 12, 55–78.
Pym, Anthony (2011), Translation research terms: a tentative glossary for moments of perplexity and dispute, in: Anthony Pym (ed.), Translation research projects 3, Tarragona, Intercultural Studies Group, 75–110, <http://isg.urv.es/publicity/isg/publications/trp_3_2011/pym.pdf> (17.11.2016).
Ray, Rebecca/Kelly, Nataly (2011), Trends in Crowdsourced Translation. What Every LSP Needs to Know, Lowell (Mass.), Common Sense Advisory.
Surowiecki, James (2004), The Wisdom of Crowds, New York, Doubleday.
Sutherlin, Gwyneth (2013), A voice in the crowd: Broader implications for crowdsourcing translation during crisis, Journal of Information Science 39, 397–409.
Van Dijck, José/Nieborg, David (2009), Wikinomics and its discontents: A critical analysis of Web 2.0 business manifestos, New Media & Society 11:5, 855–874.
Weinreich, Harald, et al. (2008), Not quite the average: An empirical study of Web use, ACM Transactions on the Web 2:1, article 5, <https://ccit.college.columbia.edu/sites/ccit/files/weinreich-web-use-study.pdf> (23.07.2017).
Williams, Malcolm (2003), Translation Quality Assessment: An Argumentation-Centred Approach, Ottawa, Ottawa University Press.
WordReference (2013), About WordReference.com, <http://www.wordreference.com/english/AboutUs.aspx> (17.11.2016).
Zaidan, Omar F./Callison-Burch, Chris (2011), Crowdsourcing Translation: Professional Quality from Non-Professionals, in: Association for Computational Linguistics (ed.), Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, Oregon, June 19–24, 2011, vol. 2, Red Hook (NY), Curran, 1220–1229.