3
What is Technology?

Martin Oliver

3.1 Introduction

Research in learning technology makes many claims about technology’s effects, but rarely asks what technology is. This is a dangerous oversight; it leaves us with inadequate accounts of the role of technology, and we risk simply cataloguing a series of outcomes without really understanding what is happening or why.

In this chapter, this issue will be explored by relating work in the field of learning technology to traditions of research where theories of technology are better developed. What this chapter will not do is simply provide lists, offering taxonomies of technologies or effects as if these solved the problem. Taxonomies of technology are either based on specific conceptions—in which case they follow from, rather than offer a basis for, an understanding of technology—or else they rely on claims that are at best “common sense,” and at worst, simply naïve. Exploring the philosophical foundations of the field provides an opportunity to step back from the problem, examining the object of study from a range of perspectives in order to provide a more thoughtful basis for the chapters that follow. This enables us to stand back from specific fashions—whether they be for iPads or massive open online courses (MOOCs), Facebook or CDs—and ask why we think that any of these things is being considered as a learning technology in the first place.

To do this, first, accounts of the current field of learning technology research will be provided. Then, different historical and disciplinary traditions of work will be introduced. The chapter will conclude by drawing out issues from this review and pointing to implications for work in the field.

3.2 The Absence of Thinking about Technology Within Learning Technology

Theories of learning are widely debated; anyone wishing to make claims about whether someone has learnt something has an array of different positions available to them to work with. By contrast, it seems unusual to even talk about theories of technology. This is ironic, given the central position of “technology” as a term naming this field of research. It is also a risk: remaining caught up in for/against assertions about something we do not fully understand is extremely limiting:

Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly like to do homage, makes us utterly blind to the essence of technology.

(Heidegger 2004, 3)

This lack of theorization has left us with a poorly conceptualized field, one that is unable to learn lessons from past work. Variations in terminology abound, without necessarily advancing our understanding. Indeed, the field seems to reinvent itself every few years, resulting in a proliferation of related terms: learning technology, educational technology, computer-based learning, computer-assisted learning, multimedia learning, communication and information technology, information and communication technology, e-Learning, online learning, blended learning, technology enhanced learning, and so on. Needless to say, this makes it hard to discover prior work by conventional searching, contributing to a sense of churn and a feeling of constant reinvention. This is not a new problem; Mayes lamented it two decades ago:

In the film Groundhog Day, the protagonist is forced to experience the events of a single day over and over again. He is free to act in any way he chooses, but whatever he does the day always finishes in the same way. […] People who have been involved over any length of time with educational technology will recognize this experience, which seems characterized by a cyclical failure to learn from the past. We are frequently excited by the promise of a revolution in education, through the implementation of technology. We have the technology today, and tomorrow we confidently expect to see the widespread effects of its implementation. Yet, curiously, tomorrow never comes.

(Mayes 1995, 28)

This perception that there has been a lack of progress is compounded by the vagueness of the terminology used. What, for example, do people actually mean by “e-Learning”?

If someone is learning in a way that uses information and communication technologies (ICTs), they are using e-Learning. They could be a pre-school child playing an interactive game; they could be a group of pupils collaborating on a history project with pupils in another country via the Internet; they could be geography students watching an animated diagram of a volcanic eruption their lecturer has just downloaded; they could be a nurse taking her driving theory test online with a reading aid to help her dyslexia—it all counts as e-Learning.

(Department for Education and Skills 2003, 4)

The definition is drawn from policy rather than research, but illustrates neatly the kinds of difficulties that characterize the field. The definition is inclusive rather than exclusive or specific, it ranges across specific examples rather than characterizing what technology itself is, and it relies on reference to other undefined terms (in this case, information and communication technologies). So, conceptually, it does little to provide anyone with any focus or precision.

This tendency to accrete examples, rather than develop an explicit conception of technology, is commonplace; it underpins even the most widespread of frameworks. For example, the Technological Pedagogical Content Knowledge (TPACK) framework (Mishra and Koehler 2006) has become widely used as a point of reference in the last decade. Mishra and Koehler explicitly criticize the lack of theory in the area, and explain how TPACK was developed through a series of design experiments in response to this, but nevertheless it remains a theory of technology integration. It refers explicitly to a definition of teaching (Mishra and Koehler 2006, 1020), but offers no equivalent conception of technology per se. Like the earlier example, it relies on a series of examples: technologies that are now commonplace (textbooks, typewriters, charts, and periodic tables), together with the contemporary “usage of technology [that] refers to digital computers and computer software, artifacts and mechanisms that are new and not yet a part of the mainstream” (Mishra and Koehler 2006, 1023). While Voogt et al. (2013) found 243 published references to TPACK within a period of six years, showing how widely adopted it has become, they note that the concept of technology remains “fuzzy,” being explained self-referentially in terms of “all kinds of technologies,” “emerging technologies,” “digital technologies,” or just as lists of specific hardware, software and services. Current explanations of TPACK (e.g., Koehler et al. 2014, 102) still rely on formulations such as “traditional and new technologies that can be integrated into curriculum”; there is no indication that the “fuzziness” has yet been resolved.

Such vagueness has clearly done little to slow the speed of research in the area, but it does raise concerns about its coherence. Halverson et al.’s analysis (2012) of publications about blended learning over the last 13 years, for example, suggests that this work remains largely ungrounded and lacking clarity; it is still struggling to move beyond discussions of “potential” and towards work that is “more empirical, more grounded theoretically” (Halverson et al. 2012, 398). This is in spite of the fact that this precise problem was identified seven years earlier in one of the articles they found to be most highly cited in the field (Oliver and Trigwell 2005).

It is not easy to establish just how widespread this problem actually is, since what is being searched for is a theoretical gap, an absence of ideas rather than a specific term. However, a review of work in the field (Oliver 2012) found only ten articles in a decade’s worth of publications in leading journals that made any attempt to theorize technology. Even these included borderline cases, such as work focused on design-based research (wherein technology was viewed as a way of instantiating and developing learning theory), or research that addressed technology as part of systems of distributed cognition or distributed learning. Only six articles directly addressed technology in its own right. Five of these explored it in terms of affordances, and one in terms of the social shaping of technology. (These perspectives will be revisited in the following sections.)

This hardly constitutes a coherent or systematic basis for research in this area, and echoes the conclusions from Czerniewicz’s (2010) analysis of the field: there is no systematic theoretical basis that gives it coherence. Instead, Czerniewicz proposes, it may best be understood as a multiplicity of languages and perspectives that coexist in complex, fragmented but interacting ways. Unfortunately for discussions about what technology is, and how we might understand it, many languages that might have made contributions have either fallen silent or have never been widely heard at all.

3.3 Foundational Discussions of Technology

There are various ways in which perspectives on technology can be organized and reported. De Vries (2013), for example, differentiates between philosophical traditions, separating out conceptions of technology as artifacts, as knowledge, as activities, and as values. These conceptions provide a useful map of the philosophy of technology, although not necessarily of the way these conceptions relate to learning technology.

De Vries argues (2013, 26) that the “technology as artifacts” conception dominates common understanding, and for many people is indeed the only way in which they think of technology. However, this conception is markedly different from foundational definitions of the term. For example, dictionary definitions of “technology” point to its Greek origins as technologia, a combination of techné (frequently translated as “craft”) and logia (interpreted variously as “ordering” or “arranging”). Interestingly, given De Vries’ observations, the materials of the craft—the “artifacts,” the “stuff” of which technology is made—are conspicuously absent from this practice-based definition. However, it does reflect early philosophical discussions of technology and particularly Plato’s accounts of Socrates’ disagreements with the Sophists. The Sophists held technique in high esteem; Plato in contrast felt that this was unworthy, particularly in relation to the pursuit of the nature of virtue, and this led to suspicion about technical matters (Saettler 2004, 24–6). This opposition is often illustrated by reference to Socrates’ mistrust (in Plato’s Phaedrus) of one of the earliest “learning technologies,” writing.

Stiegler (1998, 1) suggests that this origin established a pattern in which philosophy devalued technical knowledge, characterizing it purely in terms of ends and means, in contrast with the pursuit of knowledge (understood as “justified true belief”). He further describes technology’s association with the technicization of science and society, and thence with “instrumental reason,” which closes off opportunities for communicative action and leads towards technocracy (Stiegler 1998, 11–12).

Schummer (2001) argues that similar conclusions have been drawn from Aristotle’s discussion of technology, particularly in relation to his distinction between artifacts and natural things. In Physics, Aristotle sides with Plato in asserting that art (which, in this discussion, covers the creation of “artifacts”) imitates nature; he also differentiates these in terms of the motives and changes attributed to each, proposing that natural things “have in themselves a principle of motion or change” (Physics II:1), whereas artifacts are motivated by external, human purposes. Such distinctions, Schummer notes, have frequently been assumed to imply a very conservative account of technology, one that rules out authentic human creativity. In contrast, he points out, Aristotle discusses examples where technology completes things that, whilst natural, are less than perfect at serving human purposes, such as house-building. Moreover, he argues that Aristotle’s distinction between nature and artifact is a matter of perspective, not ontology: a hedge is a naturally grown plant, and so natural, but if planted to act as a windbreak it can be viewed as an artifact. This phenomenological distinction has frequently been overlooked, but was eventually developed by philosophers such as Heidegger.

Whether or not they were strictly warranted, distinctions between natural and artificial things remained influential until the late Middle Ages, which saw growing interest in the idea of technology as improving on nature. Francis Bacon took these ideas up as an important theme in his work, envisaging technological wonders that might transform society and improve lives, for example in his utopian novel New Atlantis. He lamented the “obscure and inglorious” discovery of influential technologies such as the printing press, compass, and gunpowder, discoveries that he believed were dependent on chance. Instead, he advocated an interplay between practical experimentation and rational analysis as a better way to pursue the development of new technologies, and saw the “mechanical arts” as the paradigmatic site for advancing what he described as “natural philosophy” (Bacon 1620 I: XCV). He went so far as to propose that “truth, therefore, and utility are here the very same things; and works themselves are of greater value as pledges of truth than as contributing to the comforts of life” (Bacon 1620, I: CXXIV).

This proposal, which entailed a rejection of the Greek distinction between techné and the virtuous pursuit of truth, links to another aspect of Bacon’s philosophy. Bacon also differentiated between the four kinds of “cause” that Aristotle had established (efficient cause, material cause, final cause, and form), attributing questions of materiality and effects to physics (understood in the contemporary rather than Aristotelian sense), and of ideal (Platonic) form and “final cause” (“that for the sake of which a thing is done”; Aristotle, Physics II, 3) to metaphysics (Advancement of Learning VII, 3). He added that Platonic ideals were problematic, but nonetheless established the important principle of “abridg[ing] the infinity of individual experience […] by uniting the notions and conceptions of sciences” (Advancement of Learning VII, 5–6); he roundly criticized the study of “final causes,” proposing that:

The handling of final causes, mixed with the rest in physical inquiries, hath intercepted the severe and diligent inquiry of all real and physical causes, and given men the occasion to stay upon these satisfactory and specious causes, to the great arrest and prejudice of further discovery

(Bacon 1605, VII, 7).

These arguments established further distinctions in thinking about technology, between operation and purpose. Whilst Bacon never denied the existence of “final causes,” only decried their derailment of disinterested science, this laid the groundwork for the assertion that technology itself is “neutral,” an idea that remains politically important to this day.

The shift from viewing this separation as a liberation from muddled thinking to a social problem can be traced, in part, to Marx’s analysis of the means of production in industrialized capitalist society. Arguably, Marx did not object to technology per se, since he saw value in tools, which were a necessary part of crafts. What distinguishes tools from machines, for Marx, is that “the machine proper is therefore a mechanism that, after being set in motion, performs with its tools the same operations that were formerly done by the workman with similar tools” (Marx 1867, Ch. 15, 1). Industry then enacted this on a society-wide scale, with science co-opted to support a process which, “through the division of labour, […] gradually transforms the workers’ operations into more and more mechanical ones, so that at a certain point a mechanism can step into their places” (Marx 1861). This substitution meant that production could increase whilst expensive craftsmen were replaced by fewer machine attendants, leading to de-professionalization, lower wages, longer working days, and an associated rise of child labor in factories (Marx 1867, Ch. 15, 2–3). In so doing, it contributed to what Marx described as workers’ alienation: their sense of estrangement from the process of work, from the value of its outputs, from other workers, and from society as a whole (Marx 1844, I).

Concerns about technology’s instrumental orientation and de-humanizing effects remained important throughout the 20th century. Heidegger, in particular, was influential in shaping debates around technology and instrumentality. He argued (2004) that modern technology “enframes” the world not as something to bring forth through creative representation, but as a “standing-reserve,” understood only as resources that can be exploited, together with the ends to which they can be put. Modern technology in this account focuses on efficiency through challenging and consuming natural resources (as energy or inputs), including people (as “human resources”), leading to an impoverished and dangerous way of understanding our relationship with the world.

In this discussion, Heidegger maintained earlier distinctions between tools and advanced technologies, but he interpreted these phenomenologically, differentiating the “readiness-to-hand” of the tools we use and take for granted as part of our lives from the “present-at-hand” of things that require making sense of or fixing, or that get in the way and so distract from purposeful action (Heidegger 2008). This distinction draws attention to the ways in which technology is encountered and used. Rather than technology per se being a problem, danger arises from the calculative ways of thinking associated with it, and from focusing on questions of efficient instrumentality to the neglect of those about purpose, relationship, and being.

3.4 Contemporary Discussions of Technology

Impoverished understandings of technology, of the kind condemned by Heidegger, remain very visible in contemporary educational policy, as well as within research. Buckingham, for example, has argued that contemporary discussions of technology in education are frequently over-simplistic, viewing learning as simply a matter of information transfer and “progress” simply as a means of making this more efficient: “computers are largely seen here as delivery mechanisms—as neutral means of accessing ‘information’ that will somehow automatically bring about learning” (Buckingham 2003, 174).

Indeed, learning technology research has positively celebrated, rather than critiqued, such instrumental orientations, with some authors framing the field as the example par excellence of an interventionist, problem-solving area. Friesen (2009, 6–7) challenges assertions that e-Learning should be “applied, practical, and technological,” and that it should focus on changing the world, rather than understanding it. His riposte to the assertion that the proper end of learning technology research should be “efficiency, effectiveness, or accessibility” (Friesen 2009, 7) is that such a narrow framing simply ignores important alternatives; it focuses on instrumental concerns at the expense of practical concerns (about how people interpret and understand things) or emancipatory concerns (addressing power structures and oppression).

It should be acknowledged that this conception of modern technology as essentially instrumental has been challenged. Latour, for example, playfully calls into question the kinds of technologies considered in such arguments.

The problem with philosophers is that because their jobs are so hard they drink a lot of coffee and thus use in their arguments an inordinate quantity of pots, mugs, and jugs—to which, sometimes, they might add the occasional rock. But […] their objects are never complicated enough; more precisely, they are never simultaneously made through a complex history and new, real, and interesting participants in the universe.

(Latour 2004, 233–4)

Rose (2003) develops similar challenges, arguing that much technology critique has tended to construct the general population homogeneously as dull victims of the techno-elite; culture as something fine that must be defended from technology’s corruption; critics as people who must be aloof from technology in order to study it dispassionately; and technology as something abstract, rather than addressing the diversity of its forms and uses. In such “essentialist” accounts there is little or nothing that can be done to reform technology or its use; it can only be accepted or rejected (Peters 2006).

Shifting the focus away from “technology” in the abstract and towards specific uses of technology has provided one way to reorient work away from questions of efficient resource use and towards considerations of people and their practices. Interestingly, this perspective used to be more prevalent than is currently the case. For example, Saettler’s (2004) historical account of educational technology explores technology with explicit reference to techné. In this account “technology” is understood as including formalizations of practice, often using the idea of applying science to nature as an archetype. Such accounts draw on Dewey (1916), who used Plato’s discussion of the knowledge and skills of artists and craftsmen to inform his ideas about democratic curricula.

This conception implies an understanding that links tools and practices, researching not just devices, but the forms of practice in which they are taken up and used. The research described in Saettler’s history demonstrates this orientation, challenging purely instrumental orientations. For example, work in instructional science has built on Piaget’s constructivist theories by exploring technologies as objects-to-think-with. This is not the same as De Vries’ account of “technologies as knowledge” (De Vries 2013, 19–22), which focuses on technology as an object of study, a thing about which knowledge claims can be made; nor does it fit with his discussion of “technology as activities,” which is oriented to “means-ends reasoning” (De Vries 2013, 22) of an instrumental kind. Arguably, it lies closest to some of what De Vries classifies as “technology as values,” in that it concerns the way in which meanings develop and are valued. Papert (1987), for example, explicitly frames technology as part of a culture and rejects “technocentric” accounts that ignore practices, values, and cultures:

Technocentrism refers to the tendency to give […] centrality to a technical object, for example computers or Logo. This tendency shows up in questions like “What is the effect of the computer on cognitive development?” or “Does Logo work?” […] Such turns of phrase often betray a tendency to think of “computers” and of “Logo” as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations—people and cultures—to a secondary, facilitating role. […] But if you want to understand (or influence) the change, you have to center your attention on the culture—not on the computer.

(Papert 1987, 23)

Whilst this shows that work has adopted culturally informed conceptions of technology, it also shows that such conceptions were not the norm, but had to be constantly defended. Whilst value-based accounts have been part of learning technology research for a long period of time, the dominant position remains, now as then, an instrumental one.

This initial overview shows how fundamentally conceptions of technology influence the kinds of claims that we make. If technology is viewed instrumentally, work orients towards questions of efficiency using a simple, causal model; if it is viewed in terms of practices or culture, questions of meaning, experience, and value open up. As Peters has argued (2006), these different orientations result in contrasting ways of mapping and understanding the field; this makes them key points of reference in a review such as this. The sections that follow will therefore explore how work in the field relates to these contrasting positions.

3.5 Technology as Cause

Whilst the history of the term “technology” is grounded in culture and practice, these elements frequently vanish from contemporary research discussions. Many of these adopt what Peters (2006) characterizes as a technicist, instrumental, and deterministic orientation to understanding technology.

The most common framing of technology in recent learning technology research has been in terms of affordance. This concept was part of a psychological theory developed to explain how people understood their environment (Gibson 1979). Central to the idea was a relational model, in which action was understood in terms of the interaction between animals and their environment:

The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill.

(Gibson 1979, 115)

This led to the characterization of this position as “ecological psychology.” Central to this original definition, however, is the characterization of things as agentive: as expressed here, things (when they constitute the environment of an animal) are what offer or provide possibilities for action. Questions then arise about whether people can perceive these possibilities, and whether they act on them.

Importantly, part of Gibson’s agenda in developing this account was to avoid what he called “mentalism”; he positioned meaning as being “directly perceived” (Gibson 1979, 127), rather than as the product of interpretation or sense making. He explicitly sought to rule out “subjective” experience and the world of “consciousness” (Gibson 1979, 129) from his account. To achieve this, he defined “meaning” in terms of “possibilities for action,” arguably not the sense in which the word is commonly understood. Individuals were conceptualized as being able to “pick up” cues from their environment, but their agency was not discussed. Consequently, while he conceded that people might become more attuned to “meaning potentials,” learning was not convincingly addressed. As a result, his account of direct perception worked better for problems such as noticing edges or sharpness than for explaining social or cultural achievements such as art, language, knowledge, or professional practice that require a sense of intentionality or value (Oliver 2005).

In spite of its shortcomings in explaining learning, this term has become widespread in literature concerned with the design of technology, largely through Norman’s work (1988). Norman used the term affordance to describe the kinds of action that a designed technology permitted or prevented; he framed this in terms of the technology making certain patterns of action “natural.” However, while he built on Gibson’s concept, he did reintroduce the idea of interpretation:

The term affordance refers to the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used.

(Norman 1988, 9)

This differentiation between perception and “actual” properties led to confusion about the status of affordances. It positioned these as properties of the artifact, no longer relational qualities that arose from the relative positions of animal and environment. This resulted in ambiguity, and a more positivist distinction between what were seen as objective and subjective sets of properties. (This positivist position did have precedent in Gibson’s definitions; although he proposed that affordances should be understood relationally, he also held that these were shaped by properties—called “invariants”—that existed independently of the observer.) The consequence of this was, again, to underplay the ways in which people interpret or act with technology; it reassured designers that they could control users (Oliver 2005), rather than revealing the complex ways in which people always negotiate technology use in specific contexts (Feenberg 1999).

In spite of this confusion, Norman’s reinterpretation of the concept appealed to many designers, including those working with learning technologies. For example, Conole and Dyke (2004) proposed that it could form the basis for an explanation of technology’s effects, mapping technologies in terms of their affordances as the basis for design decisions. However, accounts that attempt to relate affordances to learning or to educational practice (e.g., Wijekumar et al. 2006) do so in ways that bear little resemblance to Gibson’s accounts of things like being able to see edges or walk up steps (Oliver 2005), arguably over-playing the way in which technology design causes specific user behaviors.

Affordances are not the only account that positions technology as the exclusive object of study, downplaying the agentive role of people. Arthur (2009), for example, has attempted to develop an account of the “evolution” of technology in a way that actively hides any human involvement. Whilst he does concede that “people are required at every step of the processes that create technology” (Arthur 2009, 6), he proposes an account that “is not a discussion of the human side of creating technology [… but] the logic that drives these purposes” (Arthur 2009, 6). He does begin with definitions of technology that reflect human agency: “technology is a means to fulfill a human purpose […,] an assemblage of practices and components […, and] the entire collection of devices and engineering practice available to a culture” (Arthur 2009, 28). However, he does not focus on these, but instead builds the narrative that follows around the idea of technologies as “a phenomenon captured and put to use” (Arthur 2009, 50), where phenomena “are simply natural effects, and as such they exist independently of humans or technology” (Arthur 2009, 49), once more pushing towards a purely positivist account of technology. To achieve this, his narrative uses an evolutionary metaphor to describe the way in which such primitives are combined to create more complex and sophisticated effects, mapping what he describes as patterns of common descent within families of technology (Arthur 2009, 15). In this, the messy and uncontrollable involvement of willful individuals is pushed aside in order to create simpler, more elegant accounts, valorizing artifacts and sidelining users.

The consequence of such conceptions of technology is that they create accounts in which technology simply has effects, including social effects. The absence of social or relational considerations results in simple, deterministic accounts in which technology “inevitably” effects changes to society, to learning, or to learners’ brains (Oliver 2011). As Standish argues, the way in which the workings of technology are hidden away in the service of ergonomics stops users having to think about them, repositioning technology as a “fetish for effect” (Standish 2000, 151).

Deterministic accounts such as these can be seen as a fantasy of control on the part of designers: the artifacts they design are positioned as exercising control over users (Oliver 2005). Yet studies of how users learn to use technology (e.g., Grint and Woolgar 1997) show that things are rarely this simple, as will be explored in more detail below. Thus deterministic accounts, such as those around affordances, overplay the importance of the appearance of devices, and underplay the ways in which meaning and learning shape technology use (Derry 2007). Part of the reason for this oversight, Feenberg has argued, is that technologies enact taken-for-granted, hegemonic assumptions, making them appear natural because they embody social conventions.

What I call the “technical code” of the object […] responds to the cultural horizon of the society at the level of technical design. Quite down-to-earth technical parameters such as the choice and processing of materials are socially specified by the code. The illusion of technical necessity arises from the fact that the code is thus literally “cast in iron,” at least in the case of boilers.

(Feenberg 2010, 22)

This kind of reconceptualization suggests that it is not technology alone that “permits” or “enables”; the potential user’s understanding of social conventions plays an important part in the dynamic, one that is not reflected in the artifact-centric accounts that have built on this concept. However, even if the account of technology as cause is not convincing, the idea that technology may be able to engender social effects is a persistent one, and remains important in contemporary policy debates.

3.6 Technology as Social Intervention

If technology is seen as “enabling,” “constraining,” or “permitting” in some way, even if the operation of this is unclear, then it would be reasonable to look for evidence of this in terms of changes in social practice. There is a long tradition of work that views technology precisely as an intervention in practice, even if the mechanisms for this are not fully understood.

For example, the metaphor of technology as a Trojan horse for educational change was introduced at least as far back as 1992 (Hammond and Trapp 1992). This metaphor was later modified, with Soloway (1997) referring to the “Trojan mouse,” which arrives innocuously but later requires teachers to rethink their entire practice, not just which tools they use (Sharpe and Oliver 2007, 49).

However, although this account has endured and spread, and whilst its invasive metaphor draws attention to a phenomenon of interest, it does not offer any kind of developed theory of organizational change that would explain how it happens.

Looking more widely, educational policy in recent years has been strongly influenced by ideas of evidence-based practice (Fitz-Gibbon 2000; Evans and Benefield 2001). This approach to linking research and policy originated within a positivist tradition of social inquiry (Nutley and Webb 2000), drawing on medical research paradigms. It has “enshrined” (Davies, Nutley, and Tilley 2000, 251) randomized control trials as the gold standard for research, encouraging use of the experimental paradigm to frame research around technology use. By focusing on the allocation of learners to conditions, rather than explaining the mechanisms through which technologies may contribute to social effects, these approaches sidestep the issue of how any effects have been achieved. They do, however, imply some kind of causal link between the introduction of technology and observed social change.

This framing of technology seems to hold great appeal for policy makers. Technologies, after all, can be bought, distributed, and counted; they are also frequently associated with ideas of progress and creativity. The problems facing education seem, by contrast, far too messy and intractable. Pelletier (2009), for example, discusses how digital games have been characterized in policy as a panacea or “magic bullet” for educational systems that are assumed to be failing.

Games and game play tend to be treated as “out there,” beyond the school gate, in some better, more authentic, more democratic, more meaningful place, other than the current and failing educational regime. By bringing games into educational practice and theory, the hope is, it often seems, that the diseased, geriatric body of education can be treated through the rejuvenating, botox-like effect of educational game play.

(Pelletier 2009, 84)

Such approaches have a long tradition in education. In 1987, for example, Parlett and Hamilton described the “agricultural-botany paradigm” of experimental research, in which standardized interventions are treated as analogous to chemical fertilizers being used to influence crop growth (Parlett and Hamilton 1987). This paradigm had arisen from psychology and the measurement sciences at the turn of the 20th century, but Parlett and Hamilton argued even then that it provided a poor account of educational practices.

No matter how well established this tradition is, this approach—framed around simple, direct questions such as “What works?”—has had questionable success in explaining technology’s role in relation to learning. In spite of decades’ worth of hype and expectation, as characterized by Mayes (1995), meta-reviews such as Russell’s (1999) that have drawn together empirical studies have shown little or no systematic benefit to the presence of technology in education, a situation characterized as the “no significant difference phenomenon.” These comparative studies seem unable to discern any clear relationship between technology and learning, whether that be the educational films of the 1920s, programmed instruction, instructional television, or computer-based instruction (Reeves 2005, 298).

Even where differences have been found, careful examination calls into question whether the technological element is really the cause of the change. For example, one meta-analysis that found students learning modestly better with online resources noted that “these blended conditions often included additional learning time and instructional elements not received by students in control conditions […which] suggests that the positive effects associated with blended learning should not be attributed to the media, per se” (US Department of Education 2010, ix).

It has been suggested, however, that the problem with research of this kind is not the medical model that was adopted, but that this research model has not been properly implemented (Alsop and Tompsett 2007). Alsop and Tompsett describe a series of study types that steadily relax the controls necessary for randomized control trials in order to take increasing account of culture and practices. This approach still presumes a causal effect, but is intended to explore how resilient the effects of the technology intervention are, and how such interventions can be adapted to different contexts (Table 3.1).

Table 3.1 A series of foci for studies of technology interventions in courses

Effect: Can a new technology be shown to (1) have an effect, (2) have an effect on learning, within a limited number of students in advantageous conditions?
Efficacy: Can a new technology in a course be shown to have a positive effect on learning across a suitably large, selected range of students who study properly?
Effectiveness: Can a new technology in a course be shown to have a positive effect on learning across a suitably large range of students where no control is maintained on how it is used?
Efficiency: Does the introduction of a new effective technology as part of a course with a limited set of resources, for a specific group of students, represent the best use of resources?
Side-effects: What otherwise unknown side effects result from full-scale use of a new technology component in a course?

Whilst this doubtless offers a more sophisticated research model, and poses genuinely interesting questions, it still rests on the idea of technology as a standardized intervention. This is an assumption that has been called into question even in medical contexts, with critics arguing that the move from randomized control trials to standardized guidelines or to a technological implementation assumes that social and structural details will remain similar. However, “‘technological guidelines’ can be problematic if they are posited to be universal while the practice they are meant to guide is very place and culture specific” (Johnson and Berner 2010, 77).

Such concerns have been taken up by Feenberg, who has analyzed the ways in which technology is used to organize practices. He concludes that no intervention can rule out variations in practice, precisely because contexts vary.

No plan is perfect; all implementation involves unplanned actions in what I call the “margin of maneuver” of those charged with carrying it out. In all technically mediated organizations margin of maneuver is at work, modifying work pace, misappropriating resources, improvising solutions to problems and so on.

(Feenberg 1999, 113)

This undermines the idea that technology should be understood simply as an intervention in practice, emphasizing the improvised, negotiated character of such practices. Indeed, research in the field of science and technology studies has shown the great lengths to which designers need to go in order to make users behave in the ways that they want them to. For example, Grint and Woolgar (1997) show how a computer company used manuals, training, and physical reminders when “configuring the user,” such as stickers on computer cases that threaten to void the warranty if the case is opened, intended to stop users “tinkering” with the “black box” of their desktop PC. In such studies, the conclusions that can be drawn are less about what the technology can do (or can permit), and more about what users need to do (or stop doing) in order to make the technology work as the designers hoped.

The reciprocal pattern—that users reconfigure the technology, or even designers’ intentions—is seen much less frequently. Feenberg argues (Feenberg 1999, 105) that this imbalance should be challenged; he proposes an alternative approach called “democratic rationalization,” in which technology is no longer viewed as a way of controlling users, but as a site for negotiation around social practices, in which users, participants, neighbors affected by outputs or by-products, and so on may all have a legitimate political stake in how technologies are designed, created, and used.

However, such challenges are rarely made, and the medical model of the standard intervention remains a powerful point of reference, which researchers repeatedly feel the need to fend off. Papert, for example, directly spoke out against such conceptions a quarter of a century ago:

Pea's negative result is moderately compelling if you believe that Logo is a well defined entity (like drug X) that either has an effect or does not have an effect (the technocentric vision). However, the finding as stated has no force whatsoever if you see Logo not as a treatment but as a cultural element—something that can be powerful when it is integrated into a culture but is simply isolated technical knowledge when it is not.

(Papert 1987, 24)

This led him to propose re-framing “What works?” questions to shift the focus away from technology as a self-contained thing with effects, and back to people and culture: “do not ask what Logo can do to people, but what people can do with Logo” (Papert 1987, 25).

3.7 Technology as Social Effect

A consistent challenge to the definitions above has been that they under-play the importance of social considerations. Outside of the field of learning technology, alternative conceptions have been developed that take a very different position.

The field of science and technology studies, for example, views technology as a site of social struggle. During the development of technology, various social groups will have interests in shaping its design; traditions such as SCOT (Social Construction of Technology) explore how such political maneuvering influences the eventual stabilization of how technologies are used and understood (Pinch and Bijker 1978, 38–9). Their archetypal example—the development of the bicycle—illustrates how a series of non-technical considerations (such as whether women could wear trousers rather than skirts on high wheelers, or the relative importance of speed versus safety) played an important role in the repeated refinement and eventual stabilization of the device (and its position in society) that we now recognize.

Accounts that view technology purely as a consequence of social considerations—technology as socially determined—are rare; most take more balanced positions that explore the inter-relationships between technology and society. However, it is worth drawing out the instances where the assumed causality of technologically determined accounts is reversed.

In Wenger’s (1998) work on communities of practice, for example, technology is one example of a reification of practice. Wenger’s focus is on social practice, which he describes as embodied and active. It is a “complex process that combines doing, talking, thinking, feeling, and belonging. It involves our whole person including our bodies, minds, emotions, and social relations” (Wenger 1998, 56), but it is situated and ephemeral, which makes it hard to share across times or places. Consequently there is a need for reification, the process of “giving form to our experience by producing objects that congeal this experience into thingness” (Wenger 1998, 58), whether that thingness is a term, concept, artifact, or device. Reifications such as technologies are seen as a necessary complement to practices, and are understood in terms of their social function.

From a theoretical point of view to talk about artifacts in terms of reification is precisely viewing the artifact not just as a physical object but as a process of attributing meaning through time and through space. If an artifact travels across boundaries from one community to another, the process of reification by which it becomes part of a practice changes substantially across those boundaries.

(Wenger, in Binder 1996, 101)

Reifications still have social consequences, although their effects are not seen as inherent to the artifact, but instead as something that communities negotiate as they encounter and make sense of them. In accounts such as this, technologies are no longer positioned as the cause of practice, but instead as its residue; they linger after the ephemeral practices that produced them have ended, but only regain meaning as they are incorporated into new practices (Oliver 2013).

3.8 Technology as the Instantiation of Theory

Design research is one tradition of work that does assume that technology both influences, and is influenced by, social considerations. It also draws upon the tradition of techné, in that technique is instantiated in artifacts, but its attention to social contexts has more in common with Papert’s culturally informed approach than with technocentric orientations.

Nevertheless, design research is sometimes framed as having a purely problem-solving orientation, reminiscent of the positions criticized by Friesen (2009) as unnecessarily narrow and as dismissing work that is critical or focused on developing our understanding.

Educational technology is a design field, and thus, our paramount goal of research should be solving teaching, learning, and performance problems, and deriving design principles that can inform future decisions. Our goal should not be to develop esoteric theoretical knowledge that we expect practitioners to apply. This has not worked since the dawn of educational technology, and it won’t work in the future.

(Reeves 2005, 304)

Positions such as this do link theory to practice, but narrow down what is eligible to count as theory. Other researchers working in this tradition take a broader view, seeking to incorporate other kinds of theory beyond the immediately practical. Barab and Squire (2004, 5–6), for example, argue that whilst “providing credible evidence for local gains as a result of a particular design may be necessary, it is not sufficient,” and that researchers must aim to generate evidence-based claims about learning that develop the theoretical knowledge of the field.

Whatever the scope of this work, however, the common element is that the phenomena of interest are instantiated in and enacted with the technologies that researchers develop. In this tradition, these different elements—theories, technologies, and evidence—can be inter-related, so that developments in one area can lead to developments elsewhere; software can instantiate theories, theories and technologies can be studied empirically, and new theories can be derived from data or from technology developments (Cook 2002). In such work, developments and data can be used to “talk back” to theory as the relationship between ideas and social practices is studied (Bennett and Oliver 2011). When it does so, research in learning technology moves beyond viewing technology as either simply a cause or an effect, and towards a relational understanding of technology.

3.9 Technology as a System Within Systems

One relational approach to understanding technology is to view it as a technical system within a social system. Approaches that adopt this view include cybernetics and systems theory, both of which, like the previously described conception of techné, are concerned with social as well as material “technologies” (Banathy 1991).

The cybernetic approach is rarely explicitly foregrounded in the field of learning technology, but it has become prevalent implicitly, through the influence of authors such as Laurillard (1993). Laurillard’s conversational framework has been cited several thousand times within the field and is derived from the cybernetic theories developed by Pask (Scott 2001). It posits a series of exchanges between and within educational actors, a teacher and a learner in the first edition, later revised to differentiate between teacher, learner, and their peers. These exchanges take place at the level of conversations, at the level of actions, and within each actor. In all versions, the framework is represented as a closed system, with educational processes flowing within the system, although the later revisions that incorporate a peer are intended to signal wider notional communities of learners. In this account, learning can be understood in terms of adaptation and the role of technology is either to enable or replace specific flows within the system.

Such accounts of learning with technology often appeal to biological or environmental metaphors, emphasizing the responsiveness of such systems to external factors. However, Friesen (2010) points out that not all systems are the same, and identifies quite different overtones in the ways in which such conceptions frame technologies and their users. He describes, for example, how cognitive science reframes the human user as a computational component between a computer system’s input and output devices, in a manner that echoes the language used by military researchers and historians (Friesen 2010, 75–6). His critique of “learning as a weapon system” contrasts the “open” approach to systems thinking with closed, technical accounts that he sees as prevalent within the field of learning technology.

The metaphors and the discourse of the Cold War-closed world are not difficult to recognize in the ADL’s and others’ descriptions of “total” scientific, technological solutions—solutions that, in effect, use the power of computers and networks to vanquish the “evils” of ignorance and inefficient learning. It is also not difficult to see how US military thinking or values—for example, its prioritization of technological and engineering approaches, its emphasis on “absolute” solutions to human problems—are articulated as a kind of technical code in the standards and systems of SCORM and ADL. Not only do these standards and systems involve total, technical solutions to complex problems through high-tech command and control, but also include the extension of these solutions globally, ideally to all educational sectors.

(Friesen 2010, 79)

There are, however, other approaches that adopt systemic approaches that are more concerned with understanding situated cases, rather than developing monolithic, instrumental, technical solutions. Cultural-historical activity theory, for example, builds on Vygotsky’s mediated view of human action (Kuutti 1997). Activity theory’s unit of analysis is meaningful tool use, later amended to meaningful tool use within specific communities (Matusov 2007). “Tools” in activity theory “can be anything used in the transformation process, including both material tools and tools for thinking” (Kuutti 1997, 14), including technologies, but also concepts and other artifacts. Harking back to ideas of techné, a tool in this tradition is seen as part of creative human practice; it is whatever mediates human activity.

As well as avoiding instrumental orientations to technology, this holistic approach fits well with the predominance of case-based studies in learning technology (Issroff and Scanlon 2002). However, critics do see issues with the way in which analyses of activity systems operate. For example, analyzing a system relies on being able to identify its elements (subjects who act, tools that mediate their actions, the communities they are part of, etc.); these are taken as “given” and unproblematic:

The figures represent them as actors without subjective reasons to act, separated from their own interpretive horizons, biographies, and social positions or status.

(Langemeier and Roth 2006, 32–3)

Further, even when such elements can be identified, it may not be possible to make claims about them. For example, a specific interest for activity theoretic analyses is breakdowns in the system, and the “expansion” of the system as it is adapted to cope with these. These can include the substitution or development of individual elements within the system, or changes to their relationships (Engeström 2001). However, the systemic nature of the unit of analysis means that conclusions must be drawn about situations as a whole, rather than about (say) specific technologies (Lektorsky 2009).

What this means is that the process of “expansive learning” as systems adapt can explain how a specific technology develops, but only from a historical point of view; it does not allow normative claims to be made about what the technology will carry on doing “to” users.

3.10 Technology as Network Effect

Within sociology there is a well-established tradition of critique that draws attention to the materiality of social practice.

If you can, with a straight face, maintain that hitting a nail with and without a hammer, boiling water with and without a kettle […] are exactly the same activities, that the introduction of these mundane implements change ‘nothing important’ to the realisation of tasks, then you are ready to transmigrate to the Far Land of the Social and disappear from this lowly one.

(Latour 2005, 71)

Such ideas have been taken up and developed within educational work through sociomaterial critiques, including actor-network theory (ANT) and related post-ANT work. This research has drawn attention to the way that the materiality of educational work is often neglected. However, as Fenwick, Edwards, and Sawchuk have argued (2011, vii), “humans, and what they take to be their learning and social process, do not float, distinct, in container-like contexts of education, such as classrooms or community sites, that can be conceptualized and dismissed as simply a wash of material stuff and spaces” because these material assemblages contribute to the success (or otherwise) of practices in important ways.

Such analyses adopt a relational view of technology. Rather than assuming that either technology or society is the determining power, sociomaterial analyses propose that technologies’ effects are not inherent, but arise from the ways in which they are incorporated into networks. Instead of assuming that objects or people determine the character of social change, ANT “does not celebrate the idea that there is a difference in kind between people on the one hand, and objects on the other. It denies that people are necessarily special. Indeed it raises a basic question about what we mean when we talk of people.” (Law 1992, 3)

The corollary of this is that ANT also raises a basic question about what we mean when we talk of “technology.” In this sense, it does not provide a general answer to the question of what technology is. However, this “flat” ontology does provide a useful basis for questioning how social arrangements are achieved. For example, rather than assuming that technology has an effect, it enables exploration of how something has been made to work as a technology (i.e. as a singular thing), whether this has any effect on other things, and if it has, what else was necessary for this to happen (Latour 2005, 103).

Law describes this world-building as “heterogeneous engineering” (Law 1992, 2), in that it brings together people, things, ideas, and so on. In a move analogous to Arthur (2009), this framing allows an exploration of how things are combined to produce complex effects: “how it is that networks may come to look like single point actors: how it is, in other words, we are sometimes able to talk of ‘the British Government’ rather than all the bits and pieces that make it up” (Law 1992, 2). However, unlike Arthur, he does not assume that the constituent parts of technology are simple primitives, but instead that each is a stabilized point only until it in turn breaks down or is subjected to scrutiny and destabilized—rather, “if a network acts as a single block, then it disappears […] so it is that something much simpler—a working television, a well-managed bank or a healthy body—comes for a time, to mask the networks that produce it” (Law 1992, 5).

As a consequence, sociomaterial analyses of technology cannot draw simple, general conclusions about devices, nor about their social “impact” or ability to “enhance,” in the way that educational policy desires (Enriquez 2009). Instead, they analyze the relationships between technologies and other “actants” (such as people), although they are able to identify how such patterns of practice can be rendered more or less stable (e.g., Latour 1987). For example, Orlikowski (1992) has used ethnographic approaches to generate evidence about organizational change in order to understand the roles that technologies have played in this.

As Enriquez demonstrates, even such technology as a commercially standardized virtual learning environment is variable; it is enacted differently as specific features are taken up or ignored, as different people work with it to pursue different ends, and so on. As a result, it can be understood at different times as a “closed” product, an open and extensible system, a course site, a communication medium, and so on. This fluidity makes it extremely hard to make singular, monolithic claims about what it can achieve.

“Impact” usually implies that a technology is a “thing” that has clear boundaries in terms of functions and how it is supposed to work. Under investigation, Blackboard is articulated as something less bounded and, perhaps, as something “soft” within which agency flows.

(Enriquez 2009, 385–6)

These alternative readings also allow radical re-framings of taken-for-granted ideas and forms within education. It has been suggested that technology forms part of an “ecology” within which people now operate (e.g., Nardi and O’Day 1999); other authors have used such ideas to develop accounts of how learners create technological contexts for their work (Luckin 2010). However, sociomaterial analysis allows conventional binaries (such as material/virtual, here/not here, digital/analogue) to be undermined by showing how, for example, lectures are not simple face-to-face presentations, but involve the incorporation of previously created digital resources, are permeated by the use of mobile devices, draw from and are distributed through virtual learning environments, and generate resources (recordings, texts, etc.) that persist and are dispersed after the scheduled session ends (Gourlay 2012). The implication of this is that trying to understand learning or education without such mediation by technology may make little sense.

While these ways of framing technology open up opportunities for interpretation, critics have argued that they have shortcomings. Their focus is on studying how things have been achieved; to quote one of Latour’s titles (2005), on “reassembling the social”. Winner, for example, has argued that this is politically naïve, bringing with it “an almost total disregard for the social consequences of technical choice. […] What the introduction of new artifacts means for people’s sense of self, for the texture of human communities, for qualities of everyday living, and for the broader distribution of power in society—these are not matters of explicit concern” (Winner 1993, 368).

Winner further points to the absence of an evaluative stance, or of any moral or political principles, in this work (Winner 1993, 371). Subsequent authors have begun to engage with such questions. Mol (2002), for example, has drawn attention to what she calls “ontological politics,” exploring how contrasting relational, sociomaterial ways of making the world come into contact and conflict, and what happens when particular views win out over others. However, such issues remain less visible within this tradition than in work that draws on ideas of political struggle (e.g., Marxist accounts), democracy (such as Feenberg’s work on democratizing design), or even the instrumental tradition that places value on technical questions of efficiency.

3.11 Conclusions

As the review above demonstrates, there are many ways in which technology can be understood. It can be conceptualized in terms of artifacts, knowledge, activities, or values. It can be seen as essentially about challenging the world and enframing it purely as a standing reserve of resources. It can be understood as a causal force that makes learning happen, or as a site of political struggle. It can also be understood as the material trace of social action, as part of the heterogeneous networks that make up society, and so on. Unfortunately, research in the field of learning technology rarely draws on any of these positions. This has several consequences.

Much research within the field remains well-meaning but naïve in the way it talks about technology. Claims typically rely on common-sense conceptions of technology—what De Vries (2013) describes as a “technology as artifacts” view—but without great consistency; the result is a babble of claim and counter-claim that cannot be reconciled because the claims do not really refer to the same thing. This also makes it problematic to relate claims about technology to ideas about learning.

Even when work does move beyond this, causality is attributed to devices in simplistic ways. The idea that iPads or MOOCs “cause” better learning might be appealing, particularly for policy makers or those responsible for resource allocation, but the lack of evidence supporting a “media effects” model and the complexity of implementation undercut the credibility of such claims.

Getting past these issues requires a more explicit discussion of technology within the field and a clearer commitment within research to one or another account of technology, so that the assumptions being made can be understood and the work can be critiqued appropriately. This will also require a more robust response to demands for “what works” answers, which risk over-simplifying learning and teaching.

Better developed accounts can be found in most of the traditions of work outlined above. A common movement across most of these is the shift away from generic, essentialized accounts of technology, towards more situated, nuanced, and specific analyses. This is a movement that learning technology needs to engage with. Closely related fields such as human-computer interaction (Grudin 1990) or sociocultural work within education and psychology (Matusov 2007) have already led the way in this, grappling with foundational questions about their “unit of analysis” and aspiring to provide more holistic accounts, even if the idealized endpoint of such developments remains “an impossible methodological task” (Matusov 2007, 323).

Another common movement in these accounts involves viewing technology use as political, not merely as a neutral, technical matter. This does not mean abandoning concerns about efficiency or effectiveness; rather, it implies that asking only such questions does not go far enough. New questions need to be asked, for example, about how the use of technology changes relationships between people and who benefits from such changes. As Winner argues, such concerns are well established within the philosophy of technology; they are needed here too.

As well as moving debates within learning technology research forward, such developments provide a chance to widen the field’s relevance and influence. Currently, it seems to have little to offer back to the related areas in which people are studying technology. It remains caught within what Selwyn (2010) has described as the “Ed Tech bubble,” with researchers seemingly more interested in sustaining inward-focused discussions than in entering into productive dialogue with work in other areas. Learning technology offers a rich and politically important field within which questions of value, design, and practice can be explored. Areas such as design-based research, for example, clearly connect in interesting ways to wider debates about technology and society. Learning technology research has a contribution to make, but it needs to engage more broadly if it is to make it.

In summary, then, when faced with the basic question “What is technology?”, learning technology research seems to have a less clear, less developed answer now than it did 25 years ago. Research currently seems to fixate on each new technology that comes out, rather than relating each one to wider concerns. It relies on common sense ways of conceptualizing technology, and consequently it has been dominated by simplistic, instrumental questions, paying little attention to values or to developing our understanding of learning or education. It is fair to say that the way forward is complicated: there is no single, dominant account of technology to which the field as a whole ought to orient, since each alternative has its own distinctive focus and, with that, its critics. However, that does not mean progress is impossible. Even if we accept the diversity Czerniewicz (2010) noted as a fair characterization of the field as it stands, Friesen’s (2009) call for more purposeful, varied conceptions of research can still be pursued. Purposeful, deliberate choices can be made. This would help to move research beyond current common-sense accounts and allow it to make more credible, more meaningful and more valuable contributions in the future.

References

  1. Alsop, Graham and Chris Tompsett. 2007. “From Effect to Effectiveness: the Missing Research Questions.” Journal of Educational Technology and Society 10 1: 28–39.
  2. Arthur, W. Brian. 2009. The Nature of Technology: What it is and how it evolves. London: Penguin.
  3. Bacon, Francis. 1605. The Advancement of Learning. London: Random House.
  4. Bacon, Francis. 1620. Novum Organum Scientarium (New Instrument of Science). Cambridge: Cambridge University Press.
  5. Banathy, Bela H. 1991. “Comprehensive Systems Design in Education: Who Should Be the Designers?” Educational Technology, 31 9: 49–51.
  6. Barab, Sasha and Kurt Squire. 2004. “Design-based research: Putting a stake in the ground.” Journal of the Learning Sciences, 13 1: 1–14. doi:10.1207/s15327809jls1301_1.
  7. Bennett, Sue and Martin Oliver. 2011. “Talking back to theory: the missed opportunities in learning technology research.” Research in Learning Technology 19 3: 179–89. doi:10.1080/21567069.2011.624997.
  8. Binder, Thomas. 1996. “Participation and reification in the design of artifacts: an interview with Etienne Wenger.” AI and Society, 10 1: 101–06. doi:10.1007/BF02716759.
  9. Buckingham, David. 2003. Media Education: Literacy, Learning and Contemporary Culture. Cambridge: Polity Press.
  10. Conole, Grainne and Martin Dyke. 2004. “What are the affordances of information and communication technologies?” Association for Learning Technology Journal, Research in Learning Technology 12 2: 113–24. doi:10.1080/0968776042000216183.
  11. Cook, John. 2002. “The role of dialogue in computer-based learning and observing learning: an evolutionary approach to theory.” Journal of Interactive Media in Education 2002 5. Available online: www-jime.open.ac.uk/2002/5.
  12. Czerniewicz, Laura. 2010. “Educational technology—mapping the terrain with Bernstein as cartographer.” Journal of Computer Assisted Learning 26 6: 523–34. doi:10.1111/j.1365-2729.2010.00359.x.
  13. Davies, Huw, Sandra Nutley and Nick Tilley. 2000. “Debates on the role of experimentation.” In What works? Evidence-based Policy and Practice in Public Services, edited by Huw T.O. Davies, Sandra M. Nutley, and Peter C. Smith: pp. 251–276. Bristol: Policy Press.
  14. Department for Education and Skills. 2003. Towards a Unified e-Learning Strategy. Bristol: Department for Education and Skills. Available online: http://www.education.gov.uk/consultations/downloadableDocs/towards%20a%20unified%20e-learning%20strategy.pdf.
  15. Derry, Jan. 2007. “Epistemology and conceptual resources for the development of learning technologies.” Journal of Computer Assisted Learning 23 6: 503–10. doi:10.1111/j.1365-2729.2007.00246.x.
  16. De Vries, Marc J. 2013. “Philosophy of Technology.” In Technology Education for Teachers, edited by P. John Williams: pp. 15-34. Rotterdam: Sense.
  17. Dewey, John. 1916. Democracy and Education: An introduction to the philosophy of education, 1966 edition. New York: Free Press.
  18. Engeström, Yrjö. 2001. “Expansive Learning at Work: toward an activity theoretical reconceptualization.” Journal of Education and Work 14 1: 133–56. doi:10.1080/13639080020028747.
  19. Enriquez, Judith Guevarra. 2009. “From Bush Pump to Blackboard: the fluid workings of a virtual environment.” E-learning, 6 4: 385–99. doi:10.2304/elea.2009.6.4.385.
  20. Evans, Jennifer and Pauline Benefield. 2001. “Systematic Reviews of Educational Research: does the medical model fit?” British Educational Research Journal 27 5: 527–41. doi:10.1080/01411920120095717.
  21. Feenberg, Andrew. 1999. Questioning Technology. London: Routledge.
  22. Feenberg, Andrew. 2010. Between Reason and Experience Essays in Technology and Modernity. Cambridge, MA: MIT Press.
  23. Fenwick, Tara, Richard Edwards, and Peter Sawchuk. 2011. Emerging Approaches to Educational Research: Tracing the Sociomaterial. London: Routledge.
  24. Fitz-Gibbon, Carol. 2000. “Education: Realising the potential.” In What works? Evidence-based Policy and Practice in Public Services, edited by Huw T.O. Davies, Sandra M. Nutley, and Peter C. Smith: pp. 69–92. Bristol: Policy Press.
  25. Friesen, Norm. 2009. Re-Thinking E-Learning Research: Foundations, Methods and Practices. New York: Peter Lang.
  26. Friesen, Norm. 2010. “Ethics and the technologies of empire: e-learning and the US military.” AI and Society 25 1: 71–81. doi:10.1007/s00146-009-0244-z.
  27. Gibson, James J. 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
  28. Gourlay, Lesley. 2012. “Cyborg ontologies and the lecturer's voice: a posthuman reading of the ‘face-to-face’.” Learning, Media and Technology 37 2: 198–211. doi:10.1080/17439884.2012.671773.
  29. Grint, Keith and Steve Woolgar. 1997. The Machine at Work: technology, organisation and work. Cambridge: Polity Press.
  30. Grudin, Jonathan. 1990. “The Computer Reaches Out: The Historical Continuity of Interface Design.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Empowering People: 261–268. Available online: http://research.microsoft.com/en-us/um/redmond/groups/coet/Grudin/papers/CHI1990.pdf. doi:10.1145/97243.97284.
  31. Halverson, Lisa R., Charles R. Graham, Kristian J. Spring, and Jeffery S. Drysdale. 2012. “An analysis of high impact scholarship and publication trends in blended learning.” Distance Education, 33 3: 381–413. doi:10.1080/01587919.2012.723166.
  32. Hammond, Nick and Annie Trapp. 1992. “CAL as a Trojan Horse for educational change: the case of psychology.” Computers and Education 19 1–2: 87–95. doi:10.1016/0360-1315(92)90014-V.
  33. Heidegger, Martin. 2004. “Question Concerning Technology”. In Readings in the Philosophy of Technology, edited by David M. Kaplan: pp. 35–51. Oxford: Rowman and Littlefield.
  34. Heidegger, Martin. 2008. Being and Time. Oxford: Blackwell.
  35. Issroff, Kim and E. Scanlon. 2002. “Using technology in Higher Education: an Activity Theory perspective.” Journal of Computer Assisted Learning 18 1: 77–83. doi:10.1046/j.0266-4909.2001.00213.x.
  36. Johnson, Ericka and Boel Berner. 2010. “Simulating Bodies.” In Technology and Medical Practice: Blood, Guts and Machines, edited by Ericka Johnson and Boel Berner: pp. 75–8. Fareham: Ashgate.
  37. Koehler, Matthew J., Punya Mishra, Kristen Kereluik, Tae Seob Shin, and Charles R. Graham. 2014. “The Technological Pedagogical Content Knowledge Framework.” In Handbook of Research on Educational Communications and Technology: pp. 101–11. New York: Springer.
  38. Kuutti, Kari. 1997. “Activity theory as a potential framework for human–computer interaction research.” In Context and consciousness: Activity theory and human–computer interaction, edited by Bonnie A. Nardi: pp. 17–44. Cambridge, MA: MIT Press.
  39. Langemeier, Ines and Wolff-Michael Roth. 2006. “Is Cultural-Historical Activity Theory Threatened to Fall Short of its Own Principles and Possibilities as a Dialectical Social Science?” Critical Social Studies 3 2: 20–42.
  40. Latour, Bruno. 1987. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press.
  41. Latour, Bruno. 2004. “Why has critique run out of steam? From matters of fact to matters of concern.” Critical Inquiry 30 2: 225–48. doi:10.1086/421123.
  42. Latour, Bruno. 2005. Reassembling the Social. Oxford: Oxford University Press.
  43. Laurillard, Diana. 1993. Rethinking University Teaching: A Framework for the Effective Use of Educational Technology. London: Routledge.
  44. Law, John. 1992. Notes on the Theory of the Actor Network: Ordering, Strategy and Heterogeneity. Lancaster: Centre for Science Studies, Lancaster University. Available online: http://www.comp.lancs.ac.uk/sociology/papers/Law-Notes-on-ANT.pdf. doi:10.1007/BF01059830.
  45. Lektorsky, Vladislav A. 2009. “Mediation as a means of collective activity.” In Learning and Expanding with Activity Theory, edited by Annalisa Sannino, Harry Daniels, and Kris D. Gutierrez: pp. 75–87. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511809989.006.
  46. Luckin, Rosemary. 2010. Re-designing Learning Contexts: Technology-rich, learner-centred ecologies. Abingdon: Routledge.
  47. Marx, Karl. 1844. Economic and Philosophical Manuscripts of 1844. Moscow: Progress Publishers. Accessed 13 March 2015: https://www.marxists.org/archive/marx/works/1844/manuscripts/preface.htm.
  48. Marx, Karl. 1861. Grundrisse der Kritik der Politischen Ökonomie (Outlines of the Critique of Political Economy). Moscow: Foreign Language Publishers. Accessed 13 March 2015: https://www.marxists.org/archive/marx/works/1857/grundrisse/.
  49. Marx, Karl. 1867. Capital: A Critique of Political Economy. Moscow: Progress Publishers. Accessed 13 March 2015: https://www.marxists.org/archive/marx/works/1867-c1/.
  50. Matusov, Eugene. 2007. “In search of the appropriate unit of analysis for sociocultural research.” Culture and Psychology, 13 3: 307–33. doi:10.1177/1354067X07079887.
  51. Mayes, Terry. 1995. “Learning Technology and Groundhog Day.” In Hypermedia at Work: Practice and Theory in Higher Education, edited by W. Strang, V. Simpson, and D. Slater: pp. 28–37. Canterbury: University of Kent Press.
  52. Mishra, Punya and Matthew J. Koehler. 2006. “Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge.” Teachers College Record 108 6: 1017–54. doi:10.1111/j.1467-9620.2006.00684.x.
  53. Mol, Annemarie. 2002. The body multiple: ontology in medical practice. Durham: Duke University Press. doi:10.1215/9780822384151.
  54. Nardi, Bonnie A. and Vicki O’Day. 1999. Information Ecology: Using Technology with Heart. Cambridge, MA: MIT Press.
  55. Norman, Donald A. 1988. The Psychology of Everyday Things. New York: Basic Books.
  56. Nutley, Sandra M. and Jeff Webb. 2000. “Evidence and the Policy Process.” In What works? Evidence-based Policy and Practice in Public Services, edited by Huw T.O. Davies, Sandra M. Nutley, and Peter C. Smith: pp. 13–41. Bristol: Policy Press.
  57. Oliver, Martin. 2005. “The problem with affordance.” E-Learning Journal, 2 4: 402–13. doi:10.2304/elea.2005.2.4.402.
  58. Oliver, Martin. 2011. “Technological determinism in educational technology research: some alternative ways of thinking about the relationship between learning and technology.” Journal of Computer Assisted Learning 27 5: 373–84. doi:10.1111/j.1365-2729.2011.00406.x.
  59. Oliver, Martin. 2013. “Learning technology: theorising the tools we study.” British Journal of Educational Technology 44 1: 31–43. doi:10.1111/j.1467-8535.2011.01283.x.
  60. Oliver, Martin and Keith Trigwell. 2005. “Can ‘blended learning’ be redeemed?” E-learning 2 1: 17–26. doi:10.2304/elea.2005.2.1.17.
  61. Orlikowski, Wanda J. 1992. “The duality of technology: Rethinking the concept of technology in organizations.” Organization Science 3 3: 398–427. doi:10.1287/orsc.3.3.398.
  62. Papert, Seymour. 1987. “Information Technology and Education: Computer Criticism vs. Technocentric Thinking.” Educational Researcher 16 1: 22–30.
  63. Parlett, Malcolm and David Hamilton. 1987. “Evaluation as Illumination: a new approach to the study of innovatory programmes.” In Evaluating education: issues and methods, edited by Roger Murphy and Harry Torrance. London: Harper and Row.
  64. Pelletier, Caroline. 2009. “Games and Learning: What’s the Connection?” International Journal of Learning and Media 1 1: 83–101. doi:10.1162/ijlm.2009.0006.
  65. Peters, Michelle A. 2006. “Towards Philosophy of Technology in Education: Mapping the Field.” In The International Handbook of Virtual Learning Environments, edited by Joel Weiss et al.: pp. 95–116. New York: Springer.
  66. Pinch, Trevor J. and Wiebe Bijker. 1987. “The social construction of facts and artifacts: or how the Sociology of Science and the Sociology of Technology might benefit each other.” In The Social Construction of Technological Systems, edited by Wiebe Bijker, Thomas P. Hughes, and Trevor Pinch: pp. 17–50. Cambridge, MA: MIT Press.
  67. Reeves, Thomas. C. 2005. “No significant differences revisited: A historical perspective on the research informing contemporary online learning.” In Online learning: Personal reflections on the transformation of education, edited by Greg Kearsley: pp. 296–305. Englewood Cliffs, NJ: Educational Technology Publications.
  68. Rose, Ellen. 2003. “The Errors of Thamus: An Analysis of Technology Critique.” Bulletin of Science, Technology and Society 23 3: 147–56. doi:10.1177/0270467603023003001.
  69. Russell, Thomas L. 1999. No Significant Difference Phenomenon. Raleigh, NC: North Carolina State University.
  70. Saettler, Paul. 2004. The Evolution of American Educational Technology. 2nd ed. Charlotte, NC: Information Age Publishing.
  71. Schummer, Joachim. 2001. “Aristotle on Technology and Nature.” Philosophia Naturalis 38:105–20.
  72. Scott, Bernard. 2001. “Gordon Pask’s Conversation Theory: A Domain Independent Constructivist Model of Human Knowing.” Foundations of Science 6 4: 343–60. doi:10.1023/A:1011667022540.
  73. Selwyn, Neil. 2010. “The educational significance of social media—a critical perspective.” Keynote debate at Ed-Media conference 2010, Toronto, 28th June–2nd July. Available online: http://www.scribd.com/doc/33693537/The-educational-significance-of-social-media-a-critical-perspective.
  74. Sharpe, Rhona and Martin Oliver. 2007. “Designing courses for e-learning.” In Rethinking Pedagogy for a Digital Age: Designing and delivering e-learning, edited by Helen Beetham and Rhona Sharpe: pp. 41–51. London: Routledge.
  75. Soloway, Elliot. 1997. “Scaffolding Learning and Addressing Diversity: Technology as the Trojan Mouse.” Proceedings of the SC97 Education Program, San Jose, CA, November 15–19.
  76. Standish, Paul. 2000. “Fetish for effect.” Journal of Philosophy of Education 34 1: 151–68. doi:10.1111/1467-9752.00162.
  77. Stiegler, Bernard. 1998. Technics and Time, 1: The Fault of Epimetheus, trans. by R. Beardsworth and G. Collins. Stanford: Stanford University Press.
  78. US Department of Education. 2010. “Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies.” Washington, DC: US Department of Education, Office of Planning, Evaluation, and Policy Development. Available online: http://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf.
  79. Voogt, Joke, P. Fisser, N. Pareja Roblin, J. Tondeur, and J. van Braak. 2013. “Technological pedagogical content knowledge—a review of the literature.” Journal of Computer Assisted Learning 29 2: 109–21. doi:10.1111/j.1365-2729.2012.00487.x.
  80. Wenger, Etienne. 1998. Communities of Practice: Learning, Meaning and Identity. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511803932.
  81. Wijekumar, Kay J., Bonnie J. F. Meyer, Diane Wagoner, and Lon Ferguson. 2006. “Technology affordances: the ‘real story’ in research with K-12 and undergraduate learners.” British Journal of Educational Technology 37 2: 191–209. doi:10.1111/j.1467-8535.2005.00528.x.
  82. Winner, Langdon. 1993. “Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology.” Science, Technology and Human Values 18 3: 362–78. doi:10.1177/016224399301800306.