11.5.1 Design
This section of the chapter turns from discussion of topics in methodology to how the present research was actually carried out.
A first point to note about the DP methodology is that there is no straightforward prescription for analyzing discourse: there is no checklist of actions that will lead to the perfect analytical outcome. Derek Edwards and his coworker, in their book introducing DP, synthesize various features of discursive action and the relationships between them in a conceptual scheme referred to as the Discursive Action Model. It is organized into three principal themes: action (e.g., a focus on action rather than cognition), fact and interest (e.g., negotiating the dilemma of stake and interest), and accountability (e.g., the speaker’s displayed sense of accountability in reports). The model establishes some useful guiding principles as well as some potential areas on which to focus research (e.g., how speakers construct and manage their remembered accounts of past events as factual and authentic).
More practical as a framework for guiding methodology, Jonathan Potter and Margaret Wetherell map out a 10-stage guide to DA: research questions, sample selection, data collection, interviews, transcription, coding, analysis, validation, report writing, and application. This is not meant as a strict order of business for undertaking DA: the order in which these “actions” are engaged is entirely dependent on each specific research case. Both the Discursive Action Model and the 10-stage guide are used to inform our research, with the latter providing points of discussion here. Note that both sample selection and data collection are addressed in the following subsections, “Research Data” and “Participants and Ethical Considerations,” and that the topic of “interviews” is not relevant to the present study. “Validation,” in the following discussions, addresses how the present study approaches matters of validation procedures. “Report writing” is not relevant to our present purposes, and “application” is addressed in the final chapter of the thesis.
Research Questions
In DP, the use of research questions is more a matter of opinion and preference than prescription. Some, including Robin Wooffitt from the perspective of conversation analysis, are persuaded that even indicative research questions are unnecessary and potentially limiting to the analysis at hand. Where they are used, Carla Willig advises that these should be focused on how accountability and stake are managed in real everyday life. Accordingly, DP asks “what” and “how” questions rather than the “why” questions which are the hallmark of experimental methods as noted earlier.
As with any research project, an important part of formulating research questions is researching the relevant literature. This has two practical outcomes: first, it enables the researcher to understand how particular topics are dealt with and to identify any gaps in the literature; second, it enables the researcher to ground analysis in existing research. Both support the drive for coherence: a piece of work that contributes to and builds on existing work will likely be seen as more plausible than one that does not. Consequently, based on the themes evident in the KM literature review (e.g., trust), the DA literature, and in particular work relevant to DP, formed the basis of the primary research. This led to a relatively broad purview that can largely be categorized as (i) literatures concerned with DA/DP as a methodology and a theoretical approach and (ii) reports of studies relevant to the matters in hand. To facilitate such a broad field of research, a computer-based research database was created in which details of all papers and books (including those from KM and related fields) were recorded, along with links to the source publication and research notes. To date, this database contains some 500 entries. A further step was the creation of a mini-database of DA terminology (“jargon”) used in research reports, which, during the analysis, greatly facilitated the discovery of studies compatible with, supportive of, or indeed contradictory to the analysis and its findings reported here.
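The structure of such a research database can be illustrated with a minimal sketch. This is a hypothetical reconstruction rather than the actual system used in the study; all table names, field names, and example values are invented for illustration.

```python
import sqlite3

# Illustrative sketch of a bibliographic research database of the kind
# described above. Field names (title, source_link, field, notes) are
# assumptions, not the study's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE entries (
           id INTEGER PRIMARY KEY,
           title TEXT NOT NULL,
           source_link TEXT,   -- link to the source publication
           field TEXT,         -- e.g., 'KM' or 'DA/DP'
           notes TEXT          -- the researcher's reading notes
       )"""
)
# A separate mini-table of DA terminology ("jargon") supports later
# searches for compatible or contradictory studies.
conn.execute("CREATE TABLE jargon (term TEXT PRIMARY KEY, gloss TEXT)")

conn.execute(
    "INSERT INTO entries (title, source_link, field, notes) VALUES (?, ?, ?, ?)",
    ("Discursive Psychology", "https://example.org/dp-1992", "DA/DP",
     "Introduces the Discursive Action Model."),
)
conn.execute(
    "INSERT INTO jargon (term, gloss) VALUES (?, ?)",
    ("stake inoculation", "Discounting a potential accusation of stake in advance."),
)
conn.commit()

# Example lookup: all entries filed under DA/DP.
rows = conn.execute("SELECT title FROM entries WHERE field = 'DA/DP'").fetchall()
print(rows)  # → [('Discursive Psychology',)]
```

The value of such a structure, as the text notes, lies in being able to cross-reference reading notes and terminology quickly during analysis.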
Based on the research reported in the previous chapters, the indicative research questions are proposed as:
- In the environment of organizational knowledge sharing, how are matters of identity, trust, risk, and context constructed as live issues and concerns of speakers?
- It is suggested that such matters or themes influence knowledge sharing—how and with what effect for speakers and their business?
- It is also suggested that these themes work correlationally—how is this displayed in discourse in social interaction, and with what effect?
- It is proposed that these matters are accomplished tacitly as psychological phenomena, with the implication that speakers orient to them as live matters consequent to their understanding of what is going on in the environment (analogous to the automatic, unconscious abstraction of structures and patterns in the environment): how is this displayed and oriented to in discourse?
Transcription
Transcription as a preparatory step to analysis involves transforming spoken (as opposed to written, such as online forum contributions) texts into written form suitable for analysis. It also involves annotating the transcript with symbols indicating pauses (often including duration), intakes of breath, rises or falls in tone, increase or decrease in volume, overtalk, laughter, speech repairs, and so on. The aim is to produce a written version that is, as far as the research aims require, as accurate a representation as possible of the spoken words while acknowledging that a literal rendering is impossible. A key to transcription conventions used in the present study is contained in Table 5, which is based on that developed by Gail Jefferson.
Any transcription’s level of detail is determined by the research question. In our case, the process began with a “gist” transcription noting what each meeting recording covered in terms of explicit topic, action (e.g., argument, agreement, persuasion, etc.), speaker, indicative timings, and so forth, as a first step in becoming familiar with the data. The aim was to produce a transactional description of each meeting’s discourse. Each meeting recording was then transcribed in detail, using the appropriate transcription conventions, using the “gist” version to discard any meeting talk considered to be wholly irrelevant to the business at hand, for example, unintelligible talk, pauses for passing traffic, or reference to technical difficulties with IT systems used in the meeting. As well as representing a practical stage in preparation for analysis, the process of transcription is also an invaluable way to get very familiar with the data, to “dwell in it” as Michael Polanyi might have described it.
The following is a short example taken from the analysis contained in the subsequent chapters:
- Steve: Wa::ay. (.) Okay so we have Ade, Damien and Manoj.
- (2) [“yeahs” via conference call]=
- Steve: Yep, yep, yep. Good. Okay. Ummmm. (0.5) Right=
- Bob: =Shall we go through the list ↑first?
- Steve: Ye::ah Mark why don’t you—why don’t you wheel us through the list?
- Bob: Alrighty so starting in alphabetical with (names). So, (project name)?
The “::” in Line 1 indicates the word is elongated, while the bracketed number in Line 2 indicates a length of silence. The “=” shown in Lines 3 and 4 indicates no discernible gap between utterances, while the “↑” displays a rise in intonation.
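To make these conventions concrete, the following sketch detects which of the Jefferson symbols discussed above occur in a given transcript line. It is purely illustrative and was not part of the study's toolchain; the symbol set is limited to those explained in the text.

```python
import re

# Map a few Jefferson transcription conventions (those used in the
# extract above) to plain-language glosses.
CONVENTIONS = [
    (r"::+", "elongated sound"),
    (r"\(\d+(?:\.\d+)?\)", "timed silence, in seconds"),
    (r"\(\.\)", "micro-pause"),
    (r"=", "no discernible gap between utterances (latching)"),
    ("↑", "rise in intonation"),
]

def annotate(line: str) -> list[str]:
    """Return glosses for each convention present in a transcript line."""
    return [gloss for pattern, gloss in CONVENTIONS if re.search(pattern, line)]

print(annotate("Ye::ah Mark why don't you"))  # → ['elongated sound']
print(annotate("Ummmm. (0.5) Right="))
```

Such a lookup could, for instance, support a transcript key like that in Table 5, though in practice the conventions are applied by hand during transcription.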
So far as possible, the aim is to act in the role of “objective observer” throughout the transcription, coding, and analysis stages. It is, however, clear that no such research can ever be immune from the presence of the researcher, not least because the way in which a particular utterance is heard by one researcher is always open to being heard differently by another.
Coding
Not to be confused with the application of transcription conventions or with the analysis itself, coding is the process by which the researcher searches for and selects instances in the transcript relevant to the research question or theme under investigation.
The indicative research questions lead to a particular interest in the themes of identity, trust, and risk and in what contexts speakers make live in their discourse. The research is concerned with whether and how such themes are invoked and oriented to by speakers as psychological phenomena with influence and effect on the scope and content of knowledge sharing actions. Consequently, the process of coding involves trawling through the data—working iteratively between the transcripts and the recordings—to identify the presence of these or related themes as discursive actions: “instances of interest.” In their 10 stages of DA, Potter and Wetherell advise that such a process should be as inclusive as possible—that is, even instances considered to be “borderline” in relation to the themes of interest should be included.
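The inclusive trawl described above can be sketched, under stated assumptions, as a first-pass keyword search that flags candidate lines for the analyst's attention. The theme lexicons below are invented for illustration, and the deliberately crude substring matching reflects the advice to over-include rather than filter out borderline cases; in the actual study, coding was of course an interpretive, not a mechanical, process.

```python
# Hypothetical theme lexicons; real coding relies on the analyst's
# judgment, not fixed word lists.
THEMES = {
    "trust": {"trust", "rely", "depend", "confidence"},
    "risk": {"risk", "danger", "exposure", "gamble"},
    "identity": {"we", "our team", "as a company"},
}

def code_transcript(lines):
    """Return (line_no, line, themes) for every line matching any lexicon."""
    instances = []
    for no, line in enumerate(lines, start=1):
        lowered = line.lower()
        # Crude substring match: deliberately over-inclusive, so that
        # borderline instances survive for closer inspection.
        hits = sorted(
            theme for theme, words in THEMES.items()
            if any(w in lowered for w in words)
        )
        if hits:
            instances.append((no, line, hits))
    return instances

transcript = [
    "I wouldn't rely on those figures just yet.",
    "Okay, next agenda item.",
    "There's a risk we over-promise to the client.",
]
coded = code_transcript(transcript)
print(coded[0][2])  # → ['trust']
```

The flagged lines would then be taken back to the recordings for close analysis, as the text describes.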
Analysis
Analysis is an iterative process in which the researcher must continuously move back and forth between analytical concerns, the corpus of relevant published literature, and the data itself, both the transcripts and the source recordings. It was, for instance, felt necessary to return to the original recording from which an instance of interest was drawn to experience over and over again its whole context and the actual performance of the speakers.
The core principle is that the topic of interest is language itself. While there is no set procedure for doing analysis, there are some key questions to be borne in mind: why am I hearing/reading the text in this way, what are the features that lead to this way of hearing/reading it, what is the recording/text making me feel, and so on. The researcher is specifically looking for both patterns and variation in the data.
Analysis focused specifically, one after the other, on each of the four identified themes related to knowledge sharing. Note that the purpose of the meetings in the dataset, in each case, is understood to be knowledge sharing: for instance, a routine sales and marketing meeting has the purpose of sharing past, present, and predicted activities and experiences.
The analysis begins by identifying the patterns within the organizational structure of the data: what is its nature—is it agenda driven, for instance? This is followed by a stage that investigates the rhetorical practices evident in the data: what discursive work is being done, and with what effect? Next, the analysis considers matters of construction, evaluation, and function: what is being constructed, how is this negotiated, and with what function? In examining the instances of interest identified earlier, their general context is addressed: why this extract, what is happening, what precedes it, and what are its major features? Throughout all aspects of the analysis, exceptions or deviant cases are sought, with the analytic purview focused on what speakers themselves orient to or construct as “consistent and different.” Following Potter, analytic descriptions that are “careful and systematic” lend themselves more to constructing claims of a theoretical nature. The analysis also focuses on rhetorical effects and consequences: what are the effects of the discourse on speakers, and has anything changed? Throughout, the analysis is carefully grounded in the relevant literatures where possible.
Analysis is concerned with the ways in which speakers manage issues such as blame and accountability, the action orientation, and rhetorical organization of talk and how people construct particular versions of reality and what these accomplish for the interaction. Specifically, and drawing on the theory of language (discourse) represented in DP, the study considers the question of how people, in everyday organizational settings, go about the business of constructing and accomplishing actions in discourse, with what function, and what consequences. In particular, the analysis is interested in how speakers orient to the identified themes associated with knowledge sharing and with what effect for speakers and recipients. Each instance of interest identified in the coding stage is forensically dissected for action, function, and effect, and compared with other instances, with an iterative process of testing used to identify features of interest to the focus of the study and in turn to compare and contrast these with features in the extant literature.
Validation
Jonathan Potter’s four validation procedures specific to DP, discussed earlier, are applied in the research: (i) analysis pays attention to speakers’ own understanding as displayed in discursive interaction (not just the researcher’s interpretation), (ii) the adequacy of a claim is assessed against any “deviant cases” in the data, (iii) analysis and claims are grounded in previous studies, and (iv) the inclusion of data extracts allows the reader to form their own interpretations and judgments. Taking each of these procedures in turn, we can now look at how these are applied in the present research.
The analysis is concerned with those issues that the speakers themselves make live and relevant in their talk. That is, the analysis attends to what sense speakers and recipients are shown to construct and orient to in their discursive interaction and not just how the researcher might interpret a particular utterance. Following Edwards and Potter, reports and descriptions are “… examined in the context of their occurrence as situated and occasioned constructions whose precise nature makes sense, to participants and analysts alike, in terms of the actions those descriptions accomplish” (1992: 2). So, where, for instance, analysis suggests a particular contextual matter such as “trust” is made live by a speaker, evidence is sought for this as an understanding displayed in subsequent speaker turns: the next-turn proof procedure. If such evidence is absent, the researcher’s interpretation is either excluded from the analysis or explicitly marked as potentially speculative. An example of this can be seen in the analysis of “risk” (Chapter 13).
Deviant cases are particularly sought: that is deviant to the perceived dominant pattern, for instance, seen in the data or in a meeting recording as a whole event. An example of this can be seen toward the end of Chapter 13, along with a discussion of its meaning to the analytic claims. Deviant cases can either support analytic claims or serve to weaken them. The objective is not to ignore them as irrelevant to the business at hand but rather to notice these for what they accomplish and how they relate to analytic claims. As Jonathan Potter and Margaret Wetherell advise, exceptions can often “dredge up” important features and problems.
From the outset of the research and analysis, the present study approaches existing DA studies as a major source of knowledge to inform and provide a source of coherence for analytic claims. This can be seen in how the analysis reported in the following chapters is, for instance, grounded in the literature where relevant, showing how the analysis and claims made here either support or vary from existing work.
Extracts (see Table 4 for a summary of these) from the data are included in the reported analytic findings to both support analytic claims and allow the reader to formulate their own interpretation and judgment of the data. This attends particularly to the ever-present possibility that talk and text are open to more than one analysis and conclusion. In the following chapters, extracts are placed alongside detailed descriptions and accounts of how the analysis is grounded and developed. Following Linda Wood and her colleague, the demonstration of analysis in the inclusion of extracts is understood as a key requirement of warrantability.
11.5.2 Research Data
The size and content of any sample selection are driven by the research question. Potter and his coworker emphasize that, in DA, the size of the research sample is not a determinant of a study’s success. The analyst’s priority is an interest in the language itself, how it is used and what it accomplishes, not the speakers. DA research generally samples a corpus of data from different sources or from the same source (e.g., see Robin Wooffitt’s intriguing 2001 study of verbal interaction between mediums and their clients, which finds “reported speech” to be a commonly used linguistic device that, he claims, works to invoke “favorable assessments” of the psychics’ authenticity).
When collecting data, many of the same principles of conventional research methods are relevant: a consideration of ethics, for instance, and ensuring the appropriate permissions are gained. The preference is always for naturally occurring language in interaction (i.e., recorded in the complete absence of the researcher), although surreptitious recording would, of course, be ethically questionable. This raises a particular question concerning ethics in respect of some of the data used here (a public online discussion forum), which is addressed in the following subsection, “Participants,” as part of a general discussion of the ethical approach of the present study.
Firms were selected for the study based on the researcher’s prior relationship with their senior management and the nature of their business as having an emphasis on sharing and developing knowledge. This prior relationship proved to be an essential factor in gaining the cooperative participation of both organizations. Several other organizations, where no prior connection or relationship existed, were also approached as potential participants but, while expressing support for the research project and its aims, all declined to become involved. This suggests a sobering lesson and potential limitation for future research: the nature of the research methodology is such that, without a prior relationship of trust, potential participating organizations are unlikely to agree to take part.
Individual participants were not selected by me as the researcher: they represent, in effect, an opportunity sample, in that they happened to be present in the meetings that took place at the times and dates when I, with the agreement of the organizations’ senior management, happened to be at their respective premises. Nor was I physically present in any of the meetings themselves, even in the guise of observer, and no deliberate selection was made of the meetings to be recorded, nor was any influence exerted on their topics of discussion. In this sense, while the organizations themselves were actively selected by me as the researcher, the actual participants were not. I can claim, then, that this data can be considered naturally occurring language in interaction.
The principal empirical basis for the present study comprises digital audio recordings of 13 individual meetings, collectively representing more than 15 hours of recordings, taking place in two different London-based organizations during March and May 2013. These meetings are regular, scheduled meetings in each case.
No instruments were used in the data capturing part of the project apart from a small digital audio recording device, which was positioned in meeting rooms in advance of meetings to be recorded. All participants were briefed in advance, verbally or via written instruction, of the nature and purpose of the study.
11.5.3 Participants and Ethical Considerations
This section of the chapter focuses on the ethical conduct of the study, with particular consideration given to the use of online discussion data. This is followed by a description of the participating firms. A description of the online data and its source is given at the start of the analysis in Chapter 15.
In compliance with ethical standards for research, two documents were prepared: a participant consent form and an information/briefing form. All participants were given both documents prior to any recording being undertaken. In most cases, participants were given a short verbal briefing on the nature of the research, in particular of their rights to withdraw from the study at any point. Participants, comprising organizational employees, contractors, and/or associates of two independent London-based private sector consultancy firms, were required to sign individual consent forms. All participants and their organizations are treated as anonymous in all aspects of the research.
With respect to the online discussion forum data, no overt permission was sought by the researcher from forum participants. Participants made their contributions voluntarily to a publicly available discussion forum, which is part of an international professional networking website: that is, access to this forum is not restricted to registered forum members, although in order to access the site users must first register with the website itself. Note, though, that there is no restriction, fee, or qualification required in order to register with this website. A further important point is that this group, which is a networking group, publishes no explicit rules, guidelines, or other considerations in respect of members’ contributions and their use. To post a contribution, one must register with the group, but access to its contents is available to any website member.
Was it ethical to sample and use this data collected in this way? The review of studies using data from computer-mediated communications (Section 6.4.3) gives an idea of the widespread practice of using such data in research. On the advantages of using this type of data, in their book on Discourse and Identity, Bethan Benwell and her colleague describe this as particularly “authentic and pure” because it requires no transcription and places the researcher in the position of a “lurker” as opposed to the traditional perception of the “scientist as observer” where the presence of the researcher can influence that which they observe. They, in fact, make no reference to any ethical issues with using data from such sources. In reality, the general thrust of research is more focused on the values and advantages of using this data rather than on any ethical issues that this might raise (e.g., see a study by Charles Antaki and colleagues, published in 2006, which compares everyday conversation talk with online forum discursive interactions). Moreover, in their account of the “revolution” of “Big Data,” Viktor Mayer-Schonberger and cowriter recently describe in some detail how large Internet businesses including Google, for instance, routinely “scrape” the Internet for content as data for their algorithms. There is no indication of any permission being sought from individual contributors.
To answer the initial question of whether this is ethical: it is claimed here that the use of such data is ethically acceptable, because of its accessibility to wider audiences, the implied acceptance by contributors that their contributions may be used for purposes other than they intended, the practical problems in attempting to gain permission from individual contributors, and the impact that such a requirement would have on the growing and valuable contribution of research in computer-mediated communications. As a caveat, however, it should be noted that this is a gray area to say the least.
What follows is a brief description of each participating organization. Note that the context of individual meetings, where they form the basis of subsequent analysis, is described in each case of use:
- Organization A: Organization A is located in Central London, although the business also has offices elsewhere in the United Kingdom and in mainland Europe. It has a full-time staff of around 80. The company describes itself as a learning and communications specialist, with a particular focus on the design and application of learning technologies and software to facilitate the transformation of client organizations into successful (learning) businesses. While the organization operates in a competitive marketplace that can be described as highly knowledge focused, the organization itself does not have any formal KM policies or practices in place (like Organization B). Also similarly to the other participating organization, the work environment is a large, airy open-plan space. People largely work at long banks of desks, and there is a lot of “hot desking.” There is a centrally positioned coffee area with a small seating area where ad hoc meetings take place and where informal chat happens. Both meeting rooms are screened off from the main working area. The working space is relatively quiet and informal.
- Organization B: Located in the center of London, Organization B describes itself as a services innovator and aggregator, which provides specialist professionals on a permanent or contract basis to the public sector in the United Kingdom and which has a core staff of around 60. The core business is, in effect, a contract bidding “machine.” The working environment is a large open-plan office, surrounded by spacious, glass-fronted meeting rooms. These are used for formal, scheduled meetings as well as ad hoc ones when available: that is, they are in use virtually all the time. Another noticeable feature of the environment is the low noise level despite the perennial presence of one or more “floorwalkers” talking on a mobile phone. The organization is team driven. There is an interesting contrast between the heavy use of internal email to communicate with colleagues (across the room, for instance) and occasional impromptu problem-solving or idea-generating interactions, which take place by, for instance, the coffee facilities. In fact, the researcher particularly observed that ad hoc meetings often took place around a counter in front of the centrally located kitchen area, which suggests that the prominent positioning of this area was deliberate.