11
METHODOLOGY

11.1 INTRODUCTION

Why should we be interested in research methodology? In simple terms, methodology is concerned with the way in which the researcher frames questions for research and how that research is carried out using a project designed to provide reasonable answers. However, this simplistic description masks a significant underlying complexity. Setting that aside for the moment, there is one main reason for having an interest in methodology and in developing an understanding of its core concepts, questions, and debates. It is this: if one does not have even a rudimentary notion of research methodology as the determinant of what is asked, what is researched and how, how findings are arrived at, and how they are interpreted and reported, and if one does not understand the view of the world and its contents within which the researcher is working and by which she is influenced, then one is not in a position to do anything other than accept research at face value.

This is, admittedly, what most people do when watching the television news for instance. But, just occasionally, a broadcast news report will prompt the response: “that’s not right!” “that’s not how it is!” and so on. In these cases, the viewer has personal knowledge and understanding of the matter being reported and is consequently in a position to take on the role of critical reviewer. The implication of this analogy is that the reader of research—casual or otherwise—is just as much a participant in the research as the researcher herself. That is the importance of methodology from a research consumer’s perspective, but what of the researcher’s?

For the researcher in particular, the importance of methodology rests on the argument that research findings can be criticized or dismissed out of hand (as noted in Section 7.6.7 in relation to implicit learning theory) on the basis of the researcher’s methodology. Consider, for example, the arguments ranged against the methodologies of cognitive and experimental social psychologists by scholars such as Derek Edwards, Jonathan Potter, and David Silverman. Silverman, for instance, describes the assumption built into conventional qualitative research design in social psychology, namely that research involves researchers asking questions, as a “blunder” that risks simply not studying behavior. This raises questions over what such researchers are studying and what their findings really report on (recall the brief discussion around researcher bias in Section 1.4, for instance). On the subject of quantitative methods in general and the pursuit of statistical averages on which a scientist’s rules are based, William Starbuck is particularly scathing, claiming that such an approach is based more on “stylized sensemaking ritual” than on science. Of course, others would argue for the complete opposite case.

The many issues associated with methodology are not so much concerned with the technical skills of the researcher in designing and implementing a study but rather with something more fundamental to science as a whole—the underlying complexity referred to earlier. They concern positions on ontology and epistemology as the philosophical foundations of methodology. Like most other aspects of scientific endeavor, these topics are not immune to considerable debate—often heated and personal; for instance, Kenneth Gergen’s condemnation of traditional methods in social psychology for their abandonment of matters of culture and history, Charles Antaki’s criticism of experimental research’s reliance on laboratory-based simulations of real life, Potter and Edwards’ response to criticism of their stance on cognitivism, and Teun van Dijk’s arguments for a sociocognitive account of context, describing the approach adopted by Edwards and his colleagues as anticognitivist and bordering on being “mindless.” All these examples concern debates over methodology, but what underlies them is the conceptualization of “reality” and “knowledge.” In fact, it may not stretch facts too far to see the field of modern social sciences as largely split by a schism of philosophy.

This chapter proceeds with a brief discussion of ontology and epistemology as determinants in methodology with the aim of locating the present study. A consideration of the nature of the debates reveals two opposing perspectives: positivism and social constructionism. As a qualitative and interpretive methods study, the present work aligns with the latter.

This is followed by an explanation of the research method—discursive psychology (DP)—to add to that already described in Chapter 6. This section includes a consideration of the criticism that has been made concerning discourse analysis (DA) in general. The troubling question of how the quality of qualitative research can be measured, if at all, is given particular consideration. Next, based on Jonathan Potter and his coworker’s 10 stages of DA, a detailed description is given of the present research design, data, and participants. The latter includes a brief discussion of the ethics associated with the use of data from publicly available online discussion groups (some of which is used in the present research). This is followed by a consideration of the potential limitations of the methodology proposed here. Discussions conclude with a summary and reprise of the indicative research questions.

11.2 LOCATING THE PRESENT STUDY

At the root of what Dvora Yanow and her colleague describe as the “paradigmatic wars” lie epistemology, the theory of knowledge, and ontology, which is concerned with the nature of reality; both are characterized by similar tensions. Experimental social psychology typically adopts a positivist epistemological position that understands knowledge as objective, knowable, and discoverable, where reliable facts can be discovered about the social world as it really is. This is congruent with a realist ontology, located in the philosophy of modernism. In contrast, critical social psychology adopts a social constructionist epistemology (see Chapter 5 for an introduction to this topic), located in the postmodernist philosophical perspective, which approaches both knowledge and reality as socially constructed in social interaction. Mary Holden and Patrick Lynch of the Waterford Institute of Technology offer a very accessible guide to choosing research methodology: they caution that because philosophical stance dictates methodological choice, the absence of philosophical clarity can lead to the use of research methods that are inappropriate to the research questions. Silverman makes exactly the same point from the more pragmatic perspective of the “fit” between the research question and the chosen methodology. The reader is referred to Chapter 5 for earlier discussions on critical social psychology as a reaction to, and criticism of, the research methods adopted by the experimentalists. Brief discussions of both the positivist and constructionist positions make the point.

The positivist position rests on three principal assumptions, according to Alan Chalmers: that facts are perceived via the senses through diligent and unbiased observation, that facts precede and are independent of theory, and that facts constitute a firm and reliable basis for scientific knowledge. Its epistemology and ontology consequently assume that facts can exist as objective phenomena and that as such they may be objectively discovered, observed, attended to, and acted upon. This of course relies on the assumption that what one perceives is an accurate mirror of reality. It is also generally associated with an inductionist approach to scientific discovery through its emphasis on facts discovered through observation. This idea is returned to in a moment.

It is precisely all of these assumptions that social constructionism categorically rejects. Kenneth Gergen, whose work we have regularly encountered throughout previous chapters, was among the first in the social sciences to criticize the positivist position in social psychology, later referring to the “crisis over beliefs in objective knowledge.” He argues that whereas positivism relies on observable facts that can be transmuted into laws and that are stable over time, human behavior is not historically stable. Moreover, on the subject of methodology, and as noted in Chapter 6 in the discussions around the role of the discourse analyst, Gergen is skeptical of the social psychologist’s ability to divorce their values from the subject of their research. There cannot be, in consequence, any kind of objective representation of the “truth” in the study of human behavior, and what we perceive cannot be a mirror image of reality as it is. We have previously engaged with debates on the subject of objectivity (e.g., see Section 5.6): Thomas Kuhn sums up the perspective subscribed to here, noting that “(W)hat a man sees depends both upon what he looks at and also upon what his previous visual-conceptual experience has taught him to see” (1996: 113).

The present study, in drawing on DP, is described as qualitative and interpretive and is located in constructionism. Jonathan Potter, writing in the late 1990s, clarifies this constructionist position with two salient points: first, that speakers’ accounts, reports, and descriptions construct versions of their world and, second, that those accounts are themselves “fabricated in occasions of talk.” The key point is that DP takes an anticognitivist approach (i.e., contrary to the tenets of the cognitive sciences) and in particular takes exception to the cognitivist formulation of language as a superconduit to inner mental thought. According to Potter’s treatise on making psychology relevant, a key difference between DP and other types of DA lies in its focus on psychology: DP treats psychology as practical, accountable, situated, embodied, and displayed.

Positivism is such an influential account of science that a short perspective on its historical roots and development provides some useful insights. (The reader is referred to Chapters 5 and 6 for accounts of the origins of social constructionism and DP.)

11.3 A BRIEF DIGRESSION INTO THE POSITIVIST ACCOUNT OF SCIENCE

The idea of science deriving its facts through observation has the longest history of all scientific methods. It is not hard to see why. When the ancients first began attempting to understand and control the world around them, their principal scientific instrument was their eyes. The observation of phenomena, with the development of ever more sophisticated instruments for investigation rendering these abilities even more powerful, has thus been more or less the principal delivery channel of what we know: everything from (to pick some highlights) the fact that the earth revolves around the sun in a system of other planets, usually credited to Copernicus (1473–1543) but which Charles van Doren argues is more justly ascribed to Galileo (1564–1646); René Descartes’ (1596–1650) idea that the mind and body mutually interact, with the mind having one single purpose, to think, thus introducing the enduring question over physical–psychological duality; and the formulation of the laws of motion by Isaac Newton (1642–1727), which came with a downside for countless generations of schoolchildren—the invention of differential and integral calculus. What all of these achievements in human knowledge have in common are (i) the ability to build on the knowledge of those who had gone before, (ii) an unbridled curiosity about the world and its contents, (iii) the ability to observe, and (iv), for whatever motive or reason, the desire to share that knowledge.

The rival to this idea of facts derived through observation, certainly up until (and perhaps even including) Newton’s time, was religion and its emphasis on the written word. This is admittedly something of a sweeping generalization to make, and the reader is referred to “Further Reading” at the end of this chapter for texts that deal with this topic in detail.

The invention of positivism is credited to the French philosopher Auguste Comte (1798–1857). According to chroniclers of psychology’s history, Duane Schultz and his cowriter, Comte’s systematic survey of all human knowledge rigidly adhered to the rule by which only knowledge that is objectively observable and indisputable could be considered: “(E)verything of a speculative, inferential or metaphysical nature he declared illusory and rejected” (2004: 44). Comte’s notions of science based on observable facts, and the move away from explanations grounded in religious beliefs, for instance, proved highly influential on European thought. Along with materialism (the notion that the facts of the universe can be described in physical terms and explained in terms of matter and energy) and empiricism (a concern with how the mind acquires knowledge, which can only be acquired through sensory experience or observation), positivism was adopted as the foundation of modern psychology and, indeed, of most other sciences.

A more recent influential paradigm comes in the form of Karl Popper’s The Logic of Scientific Discovery, first published in 1935. It is in this work that he presents the logic of the case for a deductivist approach to scientific research and discovery and introduces the concept of falsification—the idea that in order to be valid, scientific theory must be capable of being tested and falsified. Aware of the potential for criticism that his arguments might attract, Popper notes an inherent problem with his falsifiability criterion and its implication of a program of “ad infinitum” testing: accordingly, a theory could never be said to be valid as it will always be subject to testing. He has a solution to this: “… I do not demand that every scientific statement must have in fact been tested before it is accepted. I demand that every such statement must be capable of being tested” (1959: 26: italics in original). Accordingly, observations that form the basis of scientific knowledge must be both objective and subject to falsification. In essence, Popper’s thesis is a criticism of the inductionism seemingly bound to the positivist position.

Without going into further detail, suffice to state that Popper’s logic, in particular his account of falsification, while becoming profoundly influential in the natural and human sciences, also became the target of many critics. Among these, Thomas Kuhn proposes that falsification is simply incompatible with the normal way that science progresses; Paul Feyerabend brands the enterprise as “silly”; and even Michael Polanyi is critical of the insistence that a theory cannot be regarded as a theory unless it can be tested and shown capable of falsification. Arguably, what sits at the center of all of these various debates and criticisms is the question over objectivity. This is a topic that has, I suggest, been adequately covered elsewhere (e.g., see Section 5.6). But the point made here, drawing on this brief account of positivism and the importance of scientific observation, is that even within this there are debates over what is and is not objective truth. The discussions now give a detailed account of the research method adopted in the present study and grounds for criticism.

11.4 RESEARCH METHOD

11.4.1 An Explanation of the Method

To begin with, a reminder of Stainton-Rogers’ useful definition of the term “discourse” from the perspective of constructionist social psychology: “… a discourse is defined as the product of constructing and the means to construct meaning in a particular way” (2003: 81). Stainton-Rogers is not alone when she notes that the field of DA is characterized by numerous methodological types and definitions of discourse. Her version is, however, ideally suited to the present purposes and is consistent with DP as a methodology for research.

For its core ideas, DP draws on DA, conversation analysis, rhetoric, and ethnomethodology. It takes its theoretical and analytical origins in, for instance, the pioneering works of Nigel Gilbert and Michael Mulkay, who were the first sociologists to apply a discourse analytic methodology in their field. They investigated how scientists “do knowledge” in discourse: in an intriguing study, which uses interviews and written texts, they report an asymmetry between scientists’ treatment of “correct belief” as derived unambiguously from experimental evidence (which therefore has no need of explanation) and error, which must be explained away as it is invariably seen as the result of nonscientific influence. Gilbert and his coresearcher’s interest lay particularly in the latter phenomenon, finding that scientists used a far more elaborate repertoire in their accounting for error (recall the discussion of interpretive repertoires in Section 6.2.3).

DP is concerned with the action orientation of language (talk and text), specifically the rhetorical construction and organization of versions of affairs, their social organization—how it works—and what it is designed to do. In Derek Edwards and his coworker’s own words, DP is “… concerned with the nature of knowledge, cognition and reality: with how events are described and explained, how factual reports are constructed, how cognitive states are attributed” (1992: 2). It is, in other words, a functional approach to the analysis of discourse with a particular interest in epistemology, and its core assumption is that language is constructive/constructed, functional, consequential, and variable.

A simple example will, it is hoped, illustrate these points. The conceptualization of DP, in the most basic sense, draws a distinct difference between the contents of what a speaker utters and what the utterance actually accomplishes as linguistic action:

it is statistically safer to travel by airplane

In this short statement, the speaker, on the “contents level,” is simply reporting a fact. From the analytic perspective, the speaker is using scientific accounting (“statistically”) to persuade the listener of the factuality of their version of affairs. The upshot is to imply that all other forms of travel are risky and that to ignore this version of affairs is to take action (decisions, purchases, intentions, and so forth) which is risk laden—even foolish—with the consequences “on your own head.” Alternatively, as what Jonathan Clifton describes as “competent members of the same community of speakers,” we know that this is a commonly issued statement (Superman said something similar to Lois Lane after a near helicopter disaster in the 1978 film), so it must be true. But—and it is a big “but”—we cannot be sure that either perspective is anything other than the researcher’s own interpretation in the absence of evidence of how the listener formulates their understanding of its contents. This raises two points that are returned to subsequently: the idea of the “next turn proof” and the role of the researcher.

In Wooffitt’s analysis, DP is “… focused on the ways in which cognitive notions can be treated analytically as situated practices which address interactional and inferential concerns in everyday circumstances” (2005: 116). By locating psychology in language, it makes possible the direct study of the processes of thinking. Contrast this with the traditional experimental method that, from this perspective, is reduced to the study of secondary or indirect phenomena in the form of, for instance, reported recollections of past events. This difference is significant: DP studies psychological phenomena as constructed in everyday talk and text and as “noticed” by both speaker and recipient (the idea of “next turn proof” mentioned earlier), while conventional methods treat discourse as the pathway to what a person is “really thinking.” This inevitably treats phenomena as secondhand.

A further significant difference lies in DP’s focus and interest in what is not said. Robin Wooffitt provides the perfect example to illustrate this: he describes a comparison between transcripts of interviews with a mentally ill patient and people talking about their Psi experiences. Both report strange phenomena. But the Psi reporters linguistically work to “display” themselves as normal (“I was just coming into the kitchen when…”: “I was getting out of the car and then I saw …”) as a preface to their account of a weird happening, whereas the mentally ill patient uses no such rhetorical devices: “the god appeared holding a sword and shield” (paraphrased). The omission is suggestive of abnormal behavior.

The kinds of questions that DP focuses on are then: how is an account constructed to appear, for instance, factual and objective, what resources are used and with what function, and how do these connect to topics in social psychology? Again, this contrasts with traditional methods that focus on “why” questions.

The DP project draws its data from everyday talk and text that can take the form of audio and/or video recordings, interviews, and any kind of written text. Following Edwards and his colleague, the focus of analysis is on the social and rhetorical organization present in the data, as opposed to, for instance, its linguistic organization. It is thus an observational science that seeks to describe and document phenomena in order to support broader theoretical claims. But this does not necessarily make it an inductionist approach, as understood in Karl Popper’s interpretation of the term. In summary, DP is a theoretically informed analytical approach that seeks to investigate and understand social psychological phenomena as they are enacted in discourse, located (“situated”) in the speakers’ understanding of how talk normatively and progressively unfolds (the “procedural discursive interaction”).

An approach that locates knowledge work in discourse and takes that discourse as the topic of study has the potential to lead to a greater understanding of how knowledge work “works.” As argued at the end of Chapter 9, such an approach simply extends an existing trend to conceptualize knowledge as social action subscribed to by many scholars in the knowledge management (KM) field.

11.4.2 Grounds for Criticism and the Issue of Measuring Quality

It is usual in scientific research reports to discuss matters of bias (both researcher and participant), validity, and reliability (e.g., the extent to which findings can be generalized to wider phenomena in the real world) in the context of the research methodology and its findings. However, the nature of the present research renders such topics largely inapplicable—although this status is itself the subject of some debate—with the possible exception of “validity,” which is returned to later in the chapter. Of more relevance is the question of the extent to which the quality of qualitative research methods may be measured, which is what we turn to now, beginning with a brief perspective on the wider grounds for criticism.

In considering what criticism has been made concerning DA, an interesting viewpoint is expressed by Stainton-Rogers: she suggests that studies in critical social psychology have been largely (as of 2003) ignored by researchers in the experimental tradition. Consequently, there is a perspective that criticism of DA methodologies is often raised by their own proponents (e.g., see Charles Antaki and coresearchers, 2002, for a discussion of analytic standards; Emanuel Schegloff, 1997, on the issue of context; Teun van Dijk, 2006, Jonathan Potter and cowriter, 2003, and Linda Wood and cowriter, 2000, on the issue of cognition; Bethan Benwell and cowriter, 2012, on the omission of considerations of experience, unconsciousness, subjectivity, etc.; Maria Stubbe and colleagues, 2003, for the subjectivity of analysis; and Charles Antaki, 2012, on the subject of using mixed methods). The reader will have already seen how researchers in critical social psychology also frequently direct criticism in the direction of experimental methods research, despite the apparent lack of reciprocation.

Arguably, the most obvious and problematic issue concerning qualitative research in general, and one that can be understood as underlying many other points of criticism, concerns the question of how to measure the quality of qualitative research methodologies. This is the principal focus of the following discussions.

The question of how to measure the quality of qualitative research methods is the topic of a substantial debate among qualitative researchers (see, e.g., an interesting study by Peter Cooper and Alan Branthwaite, published in the 1970s, which argues for the maturity of qualitative methods by comparing the findings of a qualitative and a quantitative study of the same phenomena, finding similar overall results). This evidently concerns the issue of how to determine the value of qualitative research. The problem can be condensed into three of its interconnected characteristics: first, the diversity of method; second, the perspective that conventional criteria of measurement such as reliability and validity, so well established in quantitative methodologies, are irrelevant; and third, the profound difference in epistemology between quantitative and qualitative researchers and their methodologies, which some researchers suggest is the source of the problem. In particular, Lucy Yardley, a psychologist at the UK’s University of Southampton, claims that the first two combined have led to a situation in which there is an absence of firm general guidelines relevant to the work of the qualitative researcher. It is this omission that Yardley, and Robert Elliott and his colleagues, seek to address with their “evolving” proposed guidelines.

Specific to the field of DA in psychology, there is a further matter that impacts on the quality question, which concerns the use of the term “measure.” While many scholars debate and propose methods for addressing the quality question, the term “measure” does not seem to feature. The term is absent from relevant discussions offered by, for instance, Wood and Kroger, and Potter and Wetherell, in their respective accounts of how to do DA. There is perhaps one simple reason for this: the term “measure” implies a scale or a benchmark, a mark out of ten, which in turn implies “quantification.” Emphasizing the limited role for “quantification” in DA in general, Linda Wood and her colleague propose more appropriate phraseologies (how research claims can be warranted) as does Potter (how research claims can be validated), for instance.

Examples of how one can strategically approach the quality issue in DA methodologies (in general) are detailed by both Wood and Kroger, and Potter and Wetherell, with the former drawing on the latter. Wood and her colleague propose that the issue concerns warranting—how to give justification to and grounds for analytic claims. They question the application of the traditional notion of “validity” on the commonsense grounds that an analytic account can only ever represent one version of many possible versions of affairs, so it can never be considered as either true or false. If validity is traditionally considered to be the measure of research claims’ “fit” with the world as it is, then clearly DA studies cannot be evaluated on this basis. An alternative conceptualization of “validity” is needed.

Wood and her colleague’s solution for “warranting” centers on two principal components: the trustworthiness of the account, which can be addressed through ensuring that a clear and detailed description of all stages of research is included in the account, and the soundness of an account, which principally concerns the analytic section of the research report. A number of factors need to be adhered to, including the grounding of analysis in speakers’ orientations (addressing the speaker’s understanding as displayed in discourse vs. the analyst’s interpretation), which refers to the “next turn proof” analytic tool encountered earlier; the coherence of analysis (a claim should satisfactorily account for exceptions and deviant cases in a discourse); the plausibility of an account in, for instance, how it relates (can be grounded) to other research work in similar areas; and the notion of “fruitfulness” (Potter and Wetherell’s term), which addresses the implications of a study’s findings for other work and what questions it might raise in terms of future research.

Specific to DP, Jonathan Potter, writing in the late 1990s, provides a set of four pragmatic guidelines. Interestingly, Potter describes his guidelines as “validation procedures” and wastes no time on debates around whether this term should be used in this type of research methodology or not: he quite simply reformulates it. Potter’s procedures start with the analyst’s attention to grounding their claims in speakers’ own understandings as displayed in discourse, which also serves as a check for interpretive claims—the “next turn proof” aspect. This has correspondence to Yardley’s “sensitivity to context” principle. Second, attention to what Potter describes as deviant cases can be useful in assessing the sufficiency of claims—do deviant cases in a discourse, for instance, support an analytic claim or weaken it? This has far less synergy with Yardley’s principles, probably because the explicit search for and importance assigned to “deviant cases” is very particular to DA, especially DP. Potter’s third procedure concerns an account’s coherence with respect to previous studies in similar areas, which can be clearly related to Yardley’s first principle as well as her third, “impact and importance,” but which varies from Wood and her coworker’s understanding of “coherence.”

The last procedure concerns the reader, which Potter describes as the most important of the four. This refers to the inclusion of extracts of data (the “real” data) in research reports so that the reader is able to make their own judgment of the analyst’s interpretation. Although he does not elaborate on this point here, this is an interesting notion: it potentially makes the reader part of the analytic work as an active contributor and implicates the role of the researcher as interpreter. This can be contrasted with Yardley’s emphasis on a study’s impact and importance in the sense of how a study markedly contributes to the knowledge (does it tell us anything new, does it make a difference?). While it is certain that Potter has this as an objective for research, there is a suggestion that this is made more personal to the individual reader rather than the academic research community as a whole.

The research reported here draws on Potter’s validation procedures, the application of which is discussed in the following section. As a final point, Potter offers a caveat: in his own words, “… none of these procedures guarantee the validity of an analysis. However, work in philosophy and sociology of science over the last 30 years has cast doubt on the possibility of such bottom-line guarantees in science, whether provided by observation, replication or experimentation” (1998a: 241: italics in original). Even “the independent audit” and the “interrater reliability approach,” as outlined by Jonathan Smith and by Linda Wood and her coworker, for instance, are no guarantee of an account’s warrantability, according to this caveat.

11.5 RESEARCH DESIGN

11.5.1 Design

This section of the chapter turns attention from discussions around topics in methodology to those around how the present research was actually carried out.

A first point to note about the DP methodology is that there is no straightforward prescription for analyzing discourse. That is, there is no check box list of actions to take that will lead to the perfect analytical outcome. Derek Edwards and his coworker, in their book introducing DP, synthesize various features of discursive action and the relationships between them in a conceptual scheme referred to as the Discursive Action Model. It is organized into three principal themes: action (e.g., a focus on action rather than cognition), fact and interest (e.g., negotiating the dilemma of stake and interest), and accountability (e.g., the speaker’s displayed sense of accountability in reports). It establishes some useful guiding principles as well as some potential areas on which to focus research (such as, e.g., how speakers construct and manage their remembered accounts of past events as factual and authentic).

More practical as a framework for guiding methodology, Jonathan Potter and Margaret Wetherell map out a 10-stage guide to DA: research questions, sample selection, data collection, interviews, transcription, coding, analysis, validation, report writing, and application. This is not meant as a strict order of business for undertaking DA: the order in which these “actions” are engaged is entirely dependent on each specific research case. Both the Discursive Action Model and the 10-stage guide are used to inform our research, with the latter providing points of discussion here. Note that both sample selection and data collection are addressed in the following subsections, “Research Data” and “Participants and Ethical Considerations,” and that the topic of “interviews” is not relevant to the present study. “Validation,” in the following discussions, addresses how the present study approaches matters of validation procedures. “Report writing” is not relevant to our present purposes, and “application” is addressed in the final chapter of the thesis.

Research Questions

In DP, the use of research questions is more a matter of opinion and preference than prescription. Some, including Robin Wooffitt from the perspective of conversation analysis, are persuaded that even indicative research questions are unnecessary and potentially limiting to the analysis at hand. Where they are used, Carla Willig advises that these should be focused on how accountability and stake are managed in real everyday life. Accordingly, DP asks “what” and “how” questions rather than the “why” questions which are the hallmark of experimental methods as noted earlier.

As with any research project, an important part of formulating research questions is researching relevant literature. This has two practical outcomes: first, it enables the researcher to understand how particular topics are dealt with and to identify any gaps in the literature. Second, it enables the researcher to ground analysis in existing research. Both support the drive for coherence. In other words, a piece of work that contributes to and builds on existing work will likely be seen as more plausible than one that does not. Consequently, based on the themes evident in the KM literature review (e.g., trust), the DA literature, and in particular those studies relevant to DP, formed the basis of primary research. This led to a relatively broad purview that can largely be categorized as (i) those literatures concerned with DA/DP as a methodology and a theoretical approach and (ii) reports of studies relevant to the matters in hand. To facilitate such a broad field of research, a computer-based research database was created in which details of all papers and books (including those from KM and related fields) were recorded, along with links to the source publication and research notes. To date, this database contains some 500 entries. A further step was to create a minidatabase of DA terminology (“jargon”) used in research reports, which, during the analysis, greatly facilitated the discovery of studies compatible with, supportive of, or indeed contradictory to the analysis and its findings reported here.
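
As an aside, and purely by way of illustration, the following sketch shows one simple way in which a literature database of this kind might be structured. It assumes Python and its built-in sqlite3 module; the schema, field names, and sample entry are hypothetical and do not describe the actual database built for this study.

    # Illustrative sketch only: a minimal literature-and-jargon database of
    # the kind described above. Schema and field names are hypothetical.
    import sqlite3

    conn = sqlite3.connect("research_db.sqlite")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS papers (
        id         INTEGER PRIMARY KEY,
        authors    TEXT NOT NULL,
        year       INTEGER,
        title      TEXT NOT NULL,
        field      TEXT,       -- e.g. 'KM', 'DA/DP', 'related'
        source_url TEXT,       -- link to the source publication
        notes      TEXT        -- free-form research notes
    );
    CREATE TABLE IF NOT EXISTS jargon (
        term    TEXT PRIMARY KEY,  -- DA terminology, e.g. 'footing'
        gloss   TEXT,              -- working definition
        seen_in TEXT               -- where the term was encountered
    );
    """)
    conn.execute(
        "INSERT INTO papers (authors, year, title, field, notes) VALUES (?, ?, ?, ?, ?)",
        ("Potter & Wetherell", 1987, "Discourse and Social Psychology",
         "DA/DP", "ten stages of DA; interpretive repertoires; trust"),
    )
    conn.commit()

    # Example query: DA/DP entries whose notes mention 'trust'.
    for authors, year, title in conn.execute(
        "SELECT authors, year, title FROM papers "
        "WHERE field = 'DA/DP' AND notes LIKE '%trust%'"
    ):
        print(authors, year, title)

A relational layout of this sort keeps the jargon minidatabase as a simple second table and supports exactly the kind of cross-referencing between themes, terminology, and sources described above.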

Based on the research reported in the previous chapters, the indicative research questions are proposed as:

  • In the environment of organizational knowledge sharing, how are matters of identity, trust, risk, and context constructed as live issues and concerns of speakers?
  • It is suggested that such matters or themes influence knowledge sharing—how and with what effect for speakers and their business?
  • It is also suggested that these themes work corelationally—how is this displayed in discourse in social interaction, and with what effect?
  • It is proposed that these matters are accomplished tacitly as psychological phenomena, with the implication that speakers orient to them as live matters consequent to their understanding of what is going on in the environment (analogous to the automatic, unconscious abstraction of structures and patterns in the environment): how is this displayed and oriented to in discourse?

Transcription

Transcription as a preparatory step to analysis involves transforming spoken (as opposed to written, such as online forum contributions) texts into written form suitable for analysis. It also involves annotating the transcript with symbols indicating pauses (often including duration), intakes of breath, rises or falls in tone, increase or decrease in volume, overtalk, laughter, speech repairs, and so on. The aim is to produce a written version that is, as far as the research aims require, as accurate a representation as possible of the spoken words while acknowledging that a literal rendering is impossible. A key to transcription conventions used in the present study is contained in Table 5, which is based on that developed by Gail Jefferson.

Any transcription’s level of detail is determined by the research question. In our case, the process began with a “gist” transcription noting what each meeting recording covered in terms of explicit topic, action (e.g., argument, agreement, persuasion, etc.), speaker, indicative timings, and so forth, as a first step in becoming familiar with the data. The aim was to produce a transactional description of each meeting’s discourse. Each meeting recording was then transcribed in detail, applying the appropriate transcription conventions and using the “gist” version to discard any meeting talk considered to be wholly irrelevant to the business at hand, for example, unintelligible talk, pauses for passing traffic, or reference to technical difficulties with IT systems used in the meeting. As well as representing a practical stage in preparation for analysis, the process of transcription is also an invaluable way to become deeply familiar with the data, to “dwell in it” as Michael Polanyi might have described it.

The following is a short example taken from the analysis contained in the subsequent chapters:

  1. Steve: Wa::ay. (.) Okay so we have Ade, Damien and Manoj.
  2. (2) [“yeahs” via conference call]=
  3. Steve: Yep, yep, yep. Good. Okay. Ummmm. (0.5) Right=
  4. Bob: =Shall we go through the list ↑first?
  5. Steve: Ye::ah Mark why don’t you—why don’t you wheel us through the list?
  6. Bob: Alrighty so starting in alphabetical with (names). So, (project name)?

The “::” in Line 1 indicates the word is elongated, while the bracketed number in Line 2 indicates a length of silence. The “=” shown in Lines 3 and 4 indicates no discernible gap between utterances, while the “↑” displays a rise in intonation.
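
For readers who handle such transcripts computationally, the following sketch illustrates how the handful of conventions just described might be detected programmatically in a transcript line. It is a toy example under stated assumptions: it covers only the symbols explained above, not the full key in Table 5, and the function and patterns are illustrative rather than part of the present study’s toolchain.

    # Illustrative sketch: pulling out the Jefferson-style markers explained
    # above from a single transcript line. Covers only those symbols; the
    # full convention key is in Table 5.
    import re

    TIMED_PAUSE = re.compile(r"\((\d+(?:\.\d+)?)\)")  # (2) or (0.5), in seconds
    MICRO_PAUSE = re.compile(r"\(\.\)")               # brief untimed pause
    # Two or more colons, so that a speaker label ("Steve:") is not
    # mistaken for an elongation in this simple sketch.
    ELONGATION = re.compile(r"\w:{2,}")

    def annotate(line: str) -> dict:
        """Summarize the transcription notation found in one line."""
        return {
            "timed_pauses": [float(s) for s in TIMED_PAUSE.findall(line)],
            "micro_pauses": len(MICRO_PAUSE.findall(line)),
            "elongations": ELONGATION.findall(line),
            "latched": line.rstrip().endswith("=") or line.lstrip().startswith("="),
            "rising_intonation": "↑" in line,
        }

    print(annotate("Steve: Yep, yep, yep. Good. Okay. Ummmm. (0.5) Right="))
    # -> {'timed_pauses': [0.5], 'micro_pauses': 0, 'elongations': [],
    #     'latched': True, 'rising_intonation': False}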

So far as possible, the aim is to act in the role of “objective observer” in the transcription, coding, and analysis stages, for instance. It is, however, clear that no such research can ever be immune from the presence of the researcher, even acknowledging that the way in which a particular utterance is heard is always open to the possibility of being heard differently by another researcher.

Coding

Not to be confused with the application of transcription conventions or with the analysis itself, coding is the process by which the researcher searches for and selects instances in the transcript relevant to the research question or theme under investigation.

The indicative research questions lead to a particular interest in the themes of identity, trust, and risk, and in the contexts that speakers make live in their discourse more generally. The research is concerned with whether, and if so how, such themes are invoked and oriented to by speakers as psychological phenomena with influence and effect on the scope and content of knowledge sharing actions. Consequently, the process of coding involves trawling through the data—working iteratively between the transcripts and the recordings—to identify the presence of these or related themes as discursive actions: “instances of interest.” In their 10 stages of DA, Potter and Wetherell advise that such a process should be as inclusive as possible—that is, even instances that are considered to be “borderline” in relation to the themes of interest should be included.
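
It bears repeating that coding of this kind is interpretive work done by reading and listening; it is not automated in the present study. At most, a crude first pass might flag candidate lines for the inclusive trawl described above. The sketch below, in Python, illustrates that limited idea only; the theme vocabulary is entirely hypothetical and is not a coding scheme used here.

    # Illustrative sketch only: a first-pass sweep that flags transcript
    # lines containing candidate theme vocabulary for human review.
    # Keyword matching cannot identify discursive actions; it can only
    # suggest places to look. The cue lists are hypothetical.
    THEME_CUES = {
        "trust": ["trust", "rely", "depend", "confiden"],  # confident/-ce
        "risk": ["risk", "safe", "gamble", "exposure"],
        "identity": ["our team", "as a manager", "role", "expert"],
    }

    def flag_candidates(transcript_lines):
        """Yield (line_no, theme, line) for every cue match, inclusively."""
        for no, line in enumerate(transcript_lines, start=1):
            lowered = line.lower()
            for theme, cues in THEME_CUES.items():
                if any(cue in lowered for cue in cues):
                    yield no, theme, line

    lines = [
        "Bob: =Shall we go through the list first?",
        "Steve: I'd trust Mark's numbers on that one.",
    ]
    for no, theme, line in flag_candidates(lines):
        print(f"line {no} [{theme}]: {line}")
    # -> line 2 [trust]: Steve: I'd trust Mark's numbers on that one.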

Analysis

Analysis is an iterative process in which the researcher must continuously move back and forth between analytical concerns, the corpus of relevant published literature, and the data itself, both the transcripts and the source recordings. It was, for instance, felt necessary to return to the original recording from which an instance of interest was drawn to experience over and over again its whole context and the actual performance of the speakers.

The core principle is that the topic of interest is language itself. While there is no set procedure for doing analysis, there are some key questions to be borne in mind: why am I hearing/reading the text in this way, what are the features that lead to this way of hearing/reading it, what is the recording/text making me feel, and so on. The researcher is specifically looking for both patterns and variation in the data.

Analysis focused specifically, one after the other, on each of the four identified themes related to knowledge sharing. Note that the purpose of the meetings in the dataset is, in each case, understood as knowledge sharing: for instance, a routine sales and marketing meeting has the purpose of sharing past, present, and predicted activities and experiences.

The analysis begins by identifying the patterns within the organizational structure of the data: what is its nature—is it agenda driven, for instance? This is followed by a stage that investigates the rhetorical practices evident in the data: what discursive work is being done, and with what effect? Next, the analysis considers matters of construction, evaluation, and function: what is being constructed, how is this negotiated, for instance, and with what function? In examining the instances of interest identified earlier, the analysis addresses their general context: why this extract, what is happening, what precedes it, and what are its major features? Throughout all aspects of the analysis, exceptions or deviant cases are sought, with the analytic purview focused on what speakers themselves orient to or construct as “consistent and different.” Thus, following Potter, analytic descriptions that are “careful and systematic” lend themselves more to constructing claims of a theoretical nature. The analysis also focuses on rhetorical effects and consequences: what are the effects of the discourse on speakers, and has anything changed? Throughout, the analysis is carefully grounded in the relevant literatures where possible.

Analysis is concerned with the ways in which speakers manage issues such as blame and accountability, the action orientation, and rhetorical organization of talk and how people construct particular versions of reality and what these accomplish for the interaction. Specifically, and drawing on the theory of language (discourse) represented in DP, the study considers the question of how people, in everyday organizational settings, go about the business of constructing and accomplishing actions in discourse, with what function, and what consequences. In particular, the analysis is interested in how speakers orient to the identified themes associated with knowledge sharing and with what effect for speakers and recipients. Each instance of interest identified in the coding stage is forensically dissected for action, function, and effect, and compared with other instances, with an iterative process of testing used to identify features of interest to the focus of the study and in turn to compare and contrast these with features in the extant literature.

Validation

Jonathan Potter’s four validation procedures specific to DP, discussed earlier, are applied in the research: (i) analysis pays attention to speakers’ own understanding as displayed in discursive interaction (not just the researcher’s interpretation), (ii) the adequacy of a claim is assessed against any “deviant cases” in the data, (iii) analysis and claims are grounded in previous studies, and (iv) the inclusion of data extracts allows the reader to form their own interpretations and judgments. Taking each of these procedures in turn, we can now look at how these are applied in the present research.

The analysis is concerned with those issues that the speakers themselves make live and relevant in their talk. That is, the analysis attends to what sense speakers and recipients are shown to construct and orient to in their discursive interaction and not just how the researcher might interpret a particular utterance. Following Edwards and Potter, reports and descriptions are “… examined in the context of their occurrence as situated and occasioned constructions whose precise nature makes sense, to participants and analysts alike, in terms of the actions those descriptions accomplish” (1992: 2). So, where, for instance, analysis suggests a particular contextual matter such as “trust” is made live by a speaker, evidence is sought for this as an understanding displayed in subsequent speaker turns: the next turn proof procedure. If such evidence is absent, the researcher’s interpretation is either excluded from the analysis or explicitly marked as potentially speculative. An example of this can be seen in the analysis of “risk” (Chapter 13).

Deviant cases are particularly sought: that is, cases that deviate from the perceived dominant pattern seen, for instance, in the data or in a meeting recording as a whole event. An example of this can be seen toward the end of Chapter 13, along with a discussion of its meaning for the analytic claims. Deviant cases can either support analytic claims or serve to weaken them. The objective is not to ignore them as irrelevant to the business at hand but rather to notice them for what they accomplish and how they relate to analytic claims. As Jonathan Potter and Margaret Wetherell advise, exceptions can often “dredge up” important features and problems.

From the outset of the research and analysis, the present study approaches existing DA studies as a major source of knowledge to inform and provide a source of coherence for analytic claims. This can be seen in how the analysis reported in the following chapters is, for instance, grounded in the literature where relevant, showing how the analysis and claims made here either support or vary from existing work.

Extracts (see Table 4 for a summary of these) from the data are included in the reported analytic findings to both support analytic claims and allow the reader to formulate their own interpretation and judgment of the data. This attends particularly to the ever-present possibility that talk and text are open to more than one analysis and conclusion. In the following chapters, extracts are placed alongside detailed descriptions and accounts of how the analysis is grounded and developed. Following Linda Wood and her colleague, the demonstration of analysis in the inclusion of extracts is understood as a key requirement of warrantability.

11.5.2 Research Data

The size and content of any sample selection are driven by the research question. Potter and his coworker emphasize that, in DA, the size of the research sample is not a determinant in a study’s success. The analyst’s priority is an interest in the language itself, how it is used and what it accomplishes, not the speakers. Most DA research generally samples a corpus of data from different sources or from the same source (e.g., see Robin Wooffitt’s intriguing 2001 study of verbal interaction between mediums and their clients finding “reported speech” to be a commonly used linguistic device, which, he claims, works to invoke “favorable assessments” of the psychics’ authenticity).

When collecting data, many of the same principles of conventional research methods are relevant: a consideration of ethics, for instance, and ensuring the appropriate permissions are gained. Preference is always for using naturally occurring language in interaction (i.e., with the complete absence of the researcher), but the use of surreptitious recordings would, for instance, be ethically questionable. This raises a particular question concerning ethics in respect of some of the data used here (a public online discussion forum), which is addressed in the following subsection, “Participants and Ethical Considerations,” as part of a general discussion on the ethical approach of the present study.

Firms were selected for the study based on the researcher’s prior relationship with their senior management and the nature of their business as having an emphasis on sharing and developing knowledge. This prior relationship proved to be an essential factor in gaining the cooperative participation of both organizations. Several other organizations, where no prior connection or relationship existed, were also approached as potential participants but, while expressing support for the research project and its aims, all declined to become involved. This suggests a sobering lesson and a potential limitation for future research: the nature of the research methodology is such that, without a prior relationship of trust, potential participating organizations are unlikely to agree to take part.

Individual participants were not selected by me as the researcher: instead, they represent, in effect, an opportunity sample in that they happened to be present in the meetings that took place at times and dates when, with the agreement of the organizations’ senior management, I happened to be present at the respective premises. Nor was there any deliberate selection made of the meetings to be recorded, or any influence upon their topics of discussion. In this sense, while the organizations themselves were actively selected by me as the researcher, the actual participants were not. Nor was I physically present in any of the meetings themselves, not even in the guise of observer. I can claim, then, that this data can be considered as naturally occurring language in interaction.

The principal empirical basis for the present study comprises digital audio recordings of 13 individual meetings, collectively representing more than 15 hours of recordings, taking place in two different London-based organizations during March and May 2013. These meetings are regular, scheduled meetings in each case.

No instruments were used in the data capturing part of the project apart from a small digital audio recording device, which was positioned in meeting rooms in advance of meetings to be recorded. All participants were briefed in advance, verbally or via written instruction, of the nature and purpose of the study.

11.5.3 Participants and Ethical Considerations

This section of the chapter focuses on the ethical conduct of the study, with particular consideration given to the use of online discussion data. This is followed by a description of the participating firms. A description of the online data and its source is given at the start of the analysis in Chapter 15.

In compliance with ethical standards for research, two documents were prepared: a participant consent form and an information/briefing form. All participants were given both documents prior to any recording being undertaken. In most cases, participants were given a short verbal briefing on the nature of the research, in particular of their rights to withdraw from the study at any point. Participants, comprising organizational employees, contractors, and/or associates of two independent London-based private sector consultancy firms, were required to sign individual consent forms. All participants and their organizations are treated as anonymous in all aspects of the research.

With respect to the online discussion forum data, no overt permission was sought by the researcher from forum participants. Participants made their contributions voluntarily to a publicly available discussion forum, which is part of an international professional networking website: that is, access to this forum is not restricted to registered forum members only, although in order to access the site users must first register with the website itself. Note, though, that there is no restriction, fee, or qualification required in order to register with this website. A further important point to note is that this group, which is a networking group, publishes no explicit rules, guidelines, or other considerations in respect of members’ contributions and their use thereof. To post a contribution, one must register with the group, but access to its contents is available to any website member.

Was it ethical to sample and use data collected in this way? The review of studies using data from computer-mediated communications (Section 6.4.3) gives an idea of the widespread practice of using such data in research. On the advantages of using this type of data, in their book on Discourse and Identity, Bethan Benwell and her colleague describe it as particularly “authentic and pure” because it requires no transcription and places the researcher in the position of a “lurker” as opposed to the traditional perception of the “scientist as observer,” where the presence of the researcher can influence that which they observe. They in fact make no reference to any ethical issues with using data from such sources. In reality, the general thrust of research is focused more on the values and advantages of using this data than on any ethical issues that it might raise (e.g., see a study by Charles Antaki and colleagues, published in 2006, which compares everyday conversational talk with online forum discursive interactions). Moreover, in their account of the “revolution” of “Big Data,” Viktor Mayer-Schonberger and his cowriter recently describe in some detail how large Internet businesses, including Google, for instance, routinely “scrape” the Internet for content as data for their algorithms. There is no indication of any permission being sought from individual contributors.

To answer the initial question of whether it is ethical: it is claimed here that the use of such data is ethically acceptable because of its circumstances in terms of accessibility to wider audiences, the implied acceptance by contributors that their data may be used for purposes other than they intended, the practical problems in attempting to gain permission from individual contributors, and the impact that such a requirement would have on the growing and valuable contributions from research in computer-mediated communications. As a caveat, however, it should be noted that this is a gray area to say the least.

What follows is a brief description of each participating organization. Note that the context of individual meetings, where they form the basis of subsequent analysis, is described in each case of use:

  • Organization A: Organization A is located in Central London, although the business also has offices elsewhere in the United Kingdom and in mainland Europe. It has a full-time staff of around 80. The company describes itself as a learning and communications specialist, with a particular focus on the design and application of learning technologies and software to facilitate the transformation of client organizations into successful (learning) businesses. While the organization operates in a competitive marketplace that can be described as highly knowledge focused, the organization itself does not have any formal KM policies or practices in place (like Organization B). Also similarly to the other participating organization, the work environment is a large, airy open-plan space. People largely work at long banks of desks, and there is a lot of “hot desking.” There is a centrally positioned coffee area with a small sitting area where ad hoc meetings take place and where informal chat happens. Both meeting rooms are screened off from the main working area. The working space is relatively quiet and informal.
  • Organization B: Located in the center of London, Organization B describes itself as a services innovator and aggregator, which provides specialist professionals on a permanent or contract basis to the public sector in the United Kingdom and which has a core staff of around 60. The core business is, in effect, a contract bidding “machine.” The working environment is a large open-plan office, surrounded by spacious, glass-fronted meeting rooms. These are used for formal, scheduled meetings as well as ad hoc ones when available: that is, they are in use virtually all the time. Another noticeable feature of the environment is the low noise level despite the perennial presence of one or more “floorwalkers” talking on a mobile phone. The organization is team driven. There is an interesting contrast between the heavy use of internal email to communicate with colleagues (across the room, for instance) and occasional impromptu problem-solving or idea-generating interactions, which take place by, for instance, the coffee facilities. In fact, the researcher particularly observed that ad hoc meetings often took place around a counter in front of the centrally located kitchen area, which suggests that the prominent positioning of this area was deliberate.

11.6 POINTS OF LIMITATION

As noted earlier, an obvious and very practical limitation of the present work is the difficulty of gaining the trust and agreement of organizations to take part in studies of this type. This clearly has ramifications for future research. The issue is not so much one of sample size as of sample variety. The present study is limited to two organizations, and while they operate in quite different fields, both are in the private service sector. It would have been preferable to include, for instance, organizations from the public sector and from radically different business sectors.

A further limitation (of necessity) of the present study is that the research and analysis were carried out entirely by one person. Experience suggests that research of this nature would benefit from the involvement of more than one researcher, bringing different perspectives and knowledge to bear (ironically, much as Dorothy Leonard and her colleague claim in their theory of creative abrasion in the KM field). These matters draw attention to the interpretive nature of the study and its methodology. It is always possible that another researcher might arrive at different findings and conclusions, particularly if using a different analytical methodology. Here again, the topic of objectivity versus subjectivity reemerges. Suffice it to state that adherence to Potter’s validation procedures mitigates, as far as is possible, the consequences of these types of research limitation.

Potential limitations specific to DP have been discussed elsewhere (see, e.g., Section 7.8.2). One point is worth reemphasizing, and it concerns the relevance of this type of research and its findings to the business world in general and to the KM practitioner in particular. In Chapter 5, in the discussions around social constructionism, we noted the recent warning by Christian Madsbjerg and his colleague that studies in the human sciences are often seen by those in the business world as of little practical relevance: as academic and notoriously difficult to understand. The onus is consequently on the researcher to find and promote the relevance of the research to the audiences to whom it may, or should, be of interest.

There is one last issue to address in terms of potential limitations, and it concerns the decision to focus on the thematic categories of knowledge sharing: trust, identity, risk, and context. Is it possible that, in looking for how these matters are invoked in discourse, one simply finds what one seeks? That the act of looking brings these interpretations to the fore? When looking for instances of how and what speakers invoke as context, is the analyst unwittingly conjuring a context by applying her own categories? In mitigation, careful and methodical attention was paid to seeking not only instances where speakers invoke this or that context but also evidence of cospeakers orienting to the same phenomena, thus displaying shared understanding, as next-turn proof.

To complete this chapter, a brief summary and a reprise of the research questions of interest follow.

11.7 SUMMARY AND INDICATIVE RESEARCH QUESTIONS

Consideration of the complex tangle of issues and questions encountered across the fields of interest in Part One’s discussions leads to some indicative research questions. These are given in their “long version” in the earlier discussion of research questions (Section 11.5.1) and can be shortened to:

Using the DP approach, discourse in organizational settings is analyzed for how the thematic categories of trust, identity, risk, and context are made live, and with what, if any, influence and effect on what are understood to be knowledge sharing meeting/forum contexts. In particular, will such an analysis inform an understanding of these thematic categories as tacitly invoked phenomena, and can these themes be shown to be co-relational?

To summarize, then, the present research is a qualitative and interpretive methods study located in constructionism, which approaches knowledge and versions of reality as socially constructed in everyday discourse. The methodology draws on DP, with its design informed by Jonathan Potter and his coworker’s ten stages of DA, Derek Edwards and Jonathan Potter’s Discursive Action Model, and Potter’s four procedures of validation. In addition, matters of measuring the quality of DA studies and ethical concerns have been addressed, and the limitations of the present study noted. The chapters that follow report the study’s analytical findings, based on the preceding description of method and design, with analysis particularly focused on knowledge sharing activities.

On a final note, it is conjectured that by extending the directions and boundaries already taken by many in the KM domain, the present study has the potential to contribute alternative ways of conceptualizing KM, knowledge work, and knowledge sharing in particular.

FURTHER READING

  1. Buchanan, D. and Bryman, A. (Eds). (2009). The Sage Handbook of Organizational Research Methods. London: Sage.
  2. Chalmers, A. (1999). What is this thing called Science? 3rd Edn. Maidenhead: Open University Press.
  3. Holden, M. and Lynch, P. (2004). Choosing the appropriate methodology: understanding research philosophy. The Marketing Review, 4: 397–409.
  4. Mulkay, M. and Gilbert, G. (1982). Accounting for error: how scientists construct their social world when they account for correct and incorrect belief. Sociology, 16: 164–183.
  5. Potter, J. and Wetherell, M. (1987). Discourse and Social Psychology: Beyond Attitudes and Behaviour. London: Sage.
  6. Silverman, D. (2007). A Very Short, Fairly Interesting and Reasonably Cheap Book about Qualitative Research. London: Sage.
  7. Wood, L. and Kroger, R. (2000). Doing Discourse Analysis: Methods for Studying Action in Talk and Text. London: Sage.
  8. Yardley, L. (2000). Dilemmas in qualitative health research. Psychology and Health, 15: 215–228.