It is difficult to develop a sturdy sense of collective identity without a shared memory and a common attachment to conventions or customs that are rooted in the past. Collective identities are inter-generational accomplishments that are cultivated through the absorption of a common cultural inheritance. For socialisation to occur successfully, adults draw on the experience of previous generations to provide young people with a meaningful account of adulthood. Erikson remarked that the values with which children are trained ‘persist because the cultural ethos continues to consider them “natural” and does not admit of alternatives’. He observed that:
They persist because they have become an essential part of the individual’s sense of identity, which he must preserve as a core of sanity and efficiency. But values do not persist unless they work, economically, psychologically, and spiritually; and I argue that to this end they must continue to be anchored, generation after generation, in early child training; while child training, to remain consistent, must be embedded in a system of continued economic and cultural synthesis.1
Through transmitting the legacy of the past, socialisation is integral to an inter-generational transaction whereby moral norms are communicated by authoritative adults to the young.
Although adulthood and childhood are often discussed as separate and stand-alone concepts, they exist and thrive as part of an inter-generational community where their relationship is mediated through a common web of meaning. Because they are heirs to a common past, adults are able to transfer to the young the cultural resources that they will need to make their way in this world. Through this generational continuity – which is not just biological but also cultural – the organic relationship between a community’s present and past is reproduced and reinforced.
In this chapter we discuss the loss of the sense of the past and explain that this development has both undermined the status of adult authority and contributed to the emergence of the twin problems of socialisation and identity. Without a meaningful link to a past, conveyed through adults and institutions, the development of individual identity risks becoming destabilised and estranged from a common world.
The depreciation of the status of adulthood, discussed in the previous chapter, was closely linked to the devaluation of the normative legitimacy of the past. Once the past is regarded as irrelevant or, worse still, as the ‘bad old days’, the experiences of the older generations are also cast in a negative light. Adulthood becomes compromised by its association with the past and, instead of serving as a model to the young, ceases to perform that role effectively. That is why the concurrent erosion of the status of adulthood and of the past was not a coincidence. Historically, the status of adulthood was linked to its capacity to transmit the wisdom and experience of the past to the younger generations. Erikson’s reference to the ‘collective sense of identity’ which adults communicate to young people has as its premise the capacity of the older generation to communicate a model of identity to their offspring. However, with the loss of what the sociologist and critic Philip Rieff described as the ‘sense of the past’, cultural continuity became disrupted and the capacity of adults to serve as models to the young diminished.2
The term ‘sense of the past’ should not be confused with that of ‘nostalgia for the past’. Nostalgia communicates a feeling of sentimentality towards a past that can never return. Its wistful affection for days gone by is often coupled with an impulse to avoid the challenges of the here and now by retreating into an imagined and idealised world of a previous era. The possession of a sense of the past is, according to the literary critic Lionel Trilling, an ‘actual faculty of the mind, “a sixth sense”’, through which we become conscious of history and our place in it.3 This sensibility does not mean obsessively looking back towards a distant land; rather, it is a form of consciousness that regards cultural continuity as relevant for illuminating the human predicament.
Historically, generational relations were underpinned by an important element of cultural continuity. Throughout history the authority of the older generation over the young was taken for granted by all cultures. That did not mean that the young passively embraced the wisdom of the elders. What it meant was that even when they reacted against members of the older generations, young people still accepted the authoritative status of adulthood. Until modern times, the revolt of the youth was directed at the elders rather than the cultural values that they personified. Often the elders were accused of not living up to the ideals of the past. With the rise of modernity in the 18th century, the young often expressed their ambition through a distinct form of generational consciousness that emphasised their rejection of the old. Their reaction to the old ways was integral to a wider mood of psychic distancing from the past.
During the 19th century, the past ceased to be seen by many as offering a pattern for the present, and innovation was recognised as both ‘inescapable and socially desirable’.4 At some point in the 20th century, the western world became estranged from the authoritative status of the past and often adopted the attitude of rejecting it altogether. Its obituary was captured by the title of the historian J.H. Plumb’s book, The Death of the Past (1969). Though Plumb was sympathetic to the loss of authority of the past, he was sensitive to the fact that something important had been lost. He observed that ‘wherever we look, in all areas of social and personal life, the hold of the past is weakening’.5
Until the early 20th century the socialisation of young people was assisted by a sense of cultural continuity which still prevailed despite the emergence of the tendency to question the value of the past. It was widely recognised that the transformation of young people into adults required their introduction to the ways of their culture. By the late 1930s many observers commented that society could not take its capacity to socialise the younger generations for granted. However, the relationship between this problem and the loss of the sense of the past was not always explicitly acknowledged.
Writing in 1939, the psychologist John Dollard explained that ‘socialization is the process of training a human animal from birth on for social participation in his group’. Dollard added that ‘he is socialized when he is capable of playing the role destined for him as an adult’.6 Dollard and other experts dealing with the transition from adolescence to adulthood assumed that socialisation could not be taken for granted. Dollard observed that it ‘seems clear from present data that socialization is a process full of conflict between the child and its trainers’ and indicated that ‘growing up is not a smooth automatic process of assimilating the folkways and mores’.7 The folkways and mores of the past were less and less observed, and what had been acceptable to previous generations had in many situations ceased to motivate members of the younger generations.
Kingsley Davis echoed Dollard’s sentiments and asserted that ‘too often the child-and-society problem has been visualized as simply that of transmitting the cultural heritage’. He warned that ‘society does not depend solely on transmitting its heritage but also on absorbing each new generation into its structure’.8 Nevertheless, Davis understood that the transmission of a cultural heritage was an important challenge facing society, one that other commentators ignored.
Unlike many of his colleagues who dismissed the importance of transmitting the values of the past, Davis feared that in its absence it would be difficult to forge the collective sense of identity that young people needed to give meaning to their lives. He believed that ‘the integrating principle would diminish’, and that as the traditions of the past were dismissed as irrelevant ‘the central values and common ends’ of society would ‘tend to crumble’.9 By 1940, and certainly by the end of the Second World War, the unstated consensus among social science experts dealing with socialisation appeared to be: ‘let it crumble’. Only a minority of social scientists appreciated what the loss of cultural continuity would entail for inter-generational relations. In opposition to his anti-traditionalist colleagues, Davis wrote that if ‘many fundamental customs’, which are dismissed as ‘anomalous and worthless customs’, were ‘eliminated’, society would be left ‘strangely incapable of maintaining itself’.10
The problem highlighted by Davis did not simply relate to the impact of a loss of cultural continuity on society. The transmission of cultural heritage is not an abstract process that only affects institutions or high culture. To a child, adults personify culture and a model of life. When adults become uncertain about who they are and what it is that they should transmit, children become confused about the values with which they should identify. The difficulty of resolving the crisis of identity is an indirect outcome of the disruption of cultural continuity.
Baumeister claimed that ‘the identity crisis became a feature of adolescence sometime around the end of the 19th century’.11 He pointed out that previously parents and adult society had ‘defined adult identity for the individual’ – but ‘rather abruptly, however, the adult identity was left mostly up to the adolescent to decide and define’.12 It is unlikely that the reluctance of adults to define the meaning of their identity for adolescents happened as abruptly as Baumeister suggests. However, Baumeister is right to underline the significance of the erosion of adult confidence in its capacity to socialise young people. He also rightly claims that the lack of clarity about how adulthood should be defined will eventually compromise the ability of young people to resolve their crisis of identity. Once it becomes problematic for identity formation to draw on a pre-existing model, identity itself is likely to become more and more of an issue. That is why in the first instance a crisis of identity should be understood as a symptom of the adult society’s confusions about how to socialise.
The difficulty that adult society had in setting ideals for the young tended to be resolved – particularly by middle-class parents – through adopting techniques of psychological validation. By the late 1950s even professional supporters of such techniques began to worry that avant-garde parents were going too far in their reliance on psychological validation. In 1958, at the annual conference of the Child Study Association of America, Dr Harold Taylor, President of Sarah Lawrence College, pointed out that the ‘first generation of “understood children” is now of college age’ and warned that ‘they bring with them a pathology of their own’.13 By ‘understood children’, Taylor meant young people who were constantly validated through techniques designed to make them feel good about themselves. Since this discussion of the college-age generation of 1958, the reluctance of adult society to set ideals for the young has become a far more pervasive fact of life.
Socialisation only became an object of study in its own right half a century after the ‘discovery’ of adolescence. In the modern and contemporary sense of the term, socialisation gained currency in the late 1930s and early 1940s. It was also during this period that Erikson’s concern with identity gradually led to his coining of the term ‘identity crisis’. Danziger has drawn attention to the suddenness with which the study of socialisation became embraced by three disciplines: sociology, anthropology and psychology. As illustration of the prominence acquired by this concept, he cited two different papers, one written by Robert Park and the other by John Dollard, published in 1939 by the American Journal of Sociology, which had the term ‘socialization’ in their titles.14 Danziger also pointed to the growing interest of psychologists in the concept. In 1937, the publication of Experimental Social Psychology: An Interpretation of Research Upon the Socialization of the Individual marked an important milestone in a new phase of interest in this subject.15
Danziger remarked that in view of the ‘rare occurrence of the term in earlier writings’, its sudden ‘prominence suggests the operation of a powerful undercurrent of ideas’.16 Until the 1930s the concept of ‘socialisation’ conveyed a very different meaning from its subsequent usage. The term originally referred to the wider process of rendering economic and institutional activities social.17 Until the 1930s the term did not merely refer to individuals but mainly to wider social processes. Left-wing narratives often advocated the ‘socialisation of production’, according to which industries would be owned by society. Some conceptualised socialisation as the counterpoint to individuation.18 In 1909, the German sociologist Georg Simmel referred to socialisation as ‘the form, in which the content of social organization clothes itself’. Simmel used the term as a variant of society and as a general form of social influence.19 During the interwar decades this meaning was gradually set aside as socialisation was ‘reconceptualized as a process occurring within individuals’, whereby they acquire the ‘facility to function as competent and cooperative members of society’.20 How to turn adolescents into adults was now frequently portrayed as a problem.
Until the 1930s, the socialisation of the individual was rarely a specific focus of psychological or sociological study. It was ‘a relatively uncommon term in psychology before 1940’ but became ‘a social scientific hit after the war’.21 From the 1950s, socialisation became more and more a psychological concept and less and less connected to the workings of wider social trends. As one study noted, ‘although social scientists linked socialization to an array of institutions, they causally connected it, both directly and centrally, to then central psychological constructs of personality, adjustment, pathology, identity, and achievement’.22 In many scholarly discussions of this subject, socialisation appeared disconnected from the norms and values of culture. The traditional role of socialisation in connecting young people to their past was less and less considered a subject worthy of investigation.
With the reconceptualisation of socialisation, the concept became increasingly individuated, internalised and psychologised. As principally a psychological phenomenon, the focus of interest turned towards its process rather than its social or moral content. Discussions of socialisation could not avoid referring to the transmission of norms and values altogether, but looking back on the evolution of the discussion one is struck by the relative absence of serious commentary on the content of socialisation. Yet, the issue at stake was not just the process of how adult society would socialise the young but also the norms and values to be transmitted to them. By the 1950s it appeared that many adults were not sure if they had any stories to transmit to the young. Writing in 1954, Arendt feared that many adults had given up their responsibility for socialising children.23 With the disruption of cultural continuity and a loss of the sense of the past, not only the process but, crucially, the content of socialisation became an issue.
The loss of the sense of the past was interpreted by social scientists and educators through the magnification and the objectification of the psychic distance between the present and the past. As we noted previously, from the late 19th century onwards change was often experienced and presented in a dramatic and mechanistic manner that exaggerated breaks, ruptures and the decoupling of the present from the past. This response is entirely understandable at a time when the scale of social transformation made it difficult for many – including social scientists – to draw on the resources of the past to make sense of a new world. The author of a study of ‘The sense of the past and the origins of sociology’ argued:
The generation that gave birth to sociology was probably the first generation of human beings ever to have experienced within the span of their own lifetime socially induced social change of a totally transformative nature – change which could not be identified, explained and accommodated as a limited historical variation within the encompassing order of the past.24
The influence of this cultural conjuncture on the subsequent evolution of the social sciences is evident to this day. One manifestation of this trend is the ‘academic and intellectual dissociation of history and sociology’.25 Many social scientists concluded that if indeed the experience of the past could no longer illuminate the present, there was little point in studying it.
The estrangement of social scientists and educators from the past was not simply a direct reaction to the scale of social transformation but also an outcome of the technocratic vision of their project. As Dorothy Ross pointed out, by the turn of the 20th century social scientists felt ‘deeply alienated from the past’ and driven to an ‘aggressive effort to control history through positivist science’.26
Progressive educators and commentators in particular insisted that in a constantly changing world, it was pointless to socialise children to embrace values that would very soon become outdated. They claimed that since children had to be adaptable and flexible, they should not be weighed down by the burden of old dogmas. The conviction that people inhabited a world that was qualitatively different from that of their parents served as the premise for the claim that the knowledge and insights acquired over previous centuries had lost their relevance. These views have endured to this day and continue to influence the work of experts charged with the task of engineering the curriculum.
In education it was, and continues to be, frequently asserted that old ways of teaching are outdated precisely because they are old. Knowledge itself was called into question because, apparently, in a world of constant flux it has a short shelf-life and is continually overtaken by events. Consequently, what is important is not what we know in the here and now but our preparedness to adapt to change. Dewey claimed that because of the rapid pace of change, when a child’s ‘school course is completed he will be just about a decade behind the march of progress’.27 The implication of this statement was that no sooner did a child gain knowledge in the classroom than it reached its sell-by date.
Dewey, who was one of the leaders of the American progressive movement, self-consciously adopted a philosophical orientation that sought to challenge the influence of the traditions of the past in education. He regarded change as a potential source of progress, one which also called into question the meaning of traditional and transcendental values.28 His argument for educational reforms rested on his diagnosis of rapid change, which he believed demanded the ‘relaxation of social discipline and control’.29
In the United States, education reformers directed their fire against the traditional curriculum on the grounds that it was dated and likely to be irrelevant to the needs of society. Journals that catered for a reform-conscious middle-class audience published articles with titles such as ‘Our Medieval High Schools – Shall We Educate Children for the Twelfth or The Twentieth Century’ and ‘Medieval Methods for Modern Children’.30 These sentiments pervaded one of the key statements of progressive educators, the Cardinal Principles of Secondary Education (1918). The statement warned that while ‘society is always in the process of development’ institutions of education are ‘conservative and therefore … [tend] … to resist modification’.31 The belief that education was far too medieval and resistant to change was held not only by progressive reformers but also by the modernist liberal technocrats committed to efficiency and the rationalisation of society. Writing in this vein, the steel baron Andrew Carnegie demanded educational reforms and warned that those who send their sons to college ‘waste energies upon obtaining a knowledge of such languages as Greek and Latin, which are of no more practical use to them than Choctaw’.32 An informal alliance between technocrats devoted to the promotion of economic efficiency and progressive reformers played an important role in displacing the traditional curriculum with a supposedly modern one, more attuned to a changing world.
In Britain, too, the fetishisation of change influenced attitudes towards education. The Fabian socialist and economist G.D.H. Cole was in no doubt that the traditional curriculum had to go. He wrote in 1931:
One thing is clear: the traditional approach will no longer do. Textbooks of political theory or science that were written only a few years ago – I am not forgetting that I wrote one myself – seem already quite out of date. For the issues in men’s minds and the practical problems they are invoking political theory to help them solve, have undergone a rapid change.33
There is something performative about Cole’s projection of a vision that called everything into question. His statement was not simply a reflection on the nature of rapid change and its impact on education but was also a declaration of passivity in the face of forces beyond human control. It also implicitly signalled an unwillingness to take responsibility for the ideas that he himself wrote ‘a few years ago’.
Cole’s dramatised account of the scale of change was echoed by numerous commentators. In his essay ‘Education for change’ (1938), the intercultural educator Stewart Cole left no one in doubt about the scale of change. Citing Dewey, he wrote:
Great as have been the changes in our educational system in the last hundred years, and especially in the last thirty, they are nevertheless slight in comparison with those which must be undertaken in the next generation. How can education stand still when society itself is rapidly changing under our very eyes?34
It is important to note that the conviction that rapid change renders much of the knowledge of the past redundant was widely promoted within pedagogic theory in the post-Second World War period, and it has continued to influence educational policy to this day. According to one account published in 1949, since the ‘social order (including the form of government, the ways of life, the organization and management of business and industry) in the United States is in a constant state of change’, it followed that schools ‘should prepare young people to make adjustments to changes in life about them and to take part as leaders and bring about the desired change as rapidly as possible, but must [themselves] be in a constant state of readjustment to new and changing conditions in all areas of life’.35
In the interwar era, a new concept – the culture lag – was invented to capture the supposed tension between unchanging customs and attitudes and a constantly changing reality. This concept, developed by the sociologist William Ogburn, was, according to Dorothy Ross, a ‘refinement of the most pervasive historical idea of his era; namely, that American society was lagging in response to increasingly rapid economic change’.36 According to this hypothesis, which was in vogue in the 1920s and 1930s, the ‘material conditions of life changed more quickly than values and attitudes’.37 The conclusion suggested by the notion of the culture lag was that society needed to leave behind outdated cultural attitudes and institutions and adjust to the demands of a new scientific and technological age.
The conservatism of educational institutions was often cited as a prime example of the recently invented culture lag. The gap between the needs of a rapidly changing society and prevailing customs and values was frequently advanced as justification for insulating young people from the (often harmful) influence of the past. To close this gap, it was deemed necessary to constantly update society’s values and customs and to ensure that young people were not burdened by archaic and useless knowledge. It often seemed as if advocates of the culture-lag hypothesis had become captives of a dogma that dictated that novelty was intrinsically superior to what preceded it. References to the culture lag were frequently made in relation to the lag between material conditions and forms of family life and childrearing that were steeped in outdated customs. Supporters of the culture-lag concept seemed to believe that the need for reform was even more urgent in the sphere of socialisation than in the domain of formal education.
The assertion that rapid change rendered previous forms of socialisation obsolete was constantly repeated in the course of deliberations on this subject. Frequently, problems associated with childrearing and inter-generational relations were attributed to the use of old – and therefore outdated – forms of socialisation. Lawrence Frank was critical of the ‘family as a cultural agent’ because ‘most of the ideas and beliefs that are taught to the children with respect to these basic organizing concepts of life are obsolete and no longer credible except to those who have dedicated their lives to the perpetuation of the archaic and the anachronistic’.38
Most commentaries on socialisation were not as explicit and crude as Frank’s in their condemnation of the old practices but they nevertheless shared the view that the rapid pace of change rendered these customs irrelevant. Kingsley Davis stated that ‘extremely rapid change in modern civilization, in contrast to most societies, tends to increase parent–youth conflict, for within a fast-changing social order the time-interval between generations, ordinarily but a mere moment in the life of a social system, becomes historically significant, thereby creating a hiatus between one generation and the next’. He concluded that ‘inevitably, under such a condition, youth is reared in a milieu different from that of the parents; hence the parents become old-fashioned’.39
Writing at the same time as Davis, Mead adopted the comparative approach of a cultural anthropologist to explain why traditional methods of socialisation were inappropriate to the needs of modern communities. She wrote that ‘very few cultures have attempted the kind of explicit internalization of parental standards upon which ours depends’. By this she meant a system in which ‘the child is expected to become like the parent’ and is therefore ‘expected to take the parent as a model for his own life style’. Mead claimed that the transmission of norms and values from one generation to the next ‘might work quite smoothly in a stable culture which was changing very slowly’. But in ‘periods of rapid change, and especially when these are accompanied by migrations and political revolutions, this requirement of the system is unattainable’.40
In effect, Mead called into question the capacity of parents to socialise their children. In her writing the culture gap turned into an ever-widening chasm that left inter-generational relations problematic. Referring to American society, she wrote that ‘all adults are to some extent out of touch with the newest patterns of behavior as they most particularly affect the behavior of adolescents’.41 In Mead’s account, the passing of cultural stability called into question the feasibility of a form of socialisation that relied on the older generation transmitting its values to the young. Mead eventually drew the conclusion that, rather than relying on the traditional approach to the transmission of values, it would be preferable to reverse this process so that the young (with the assistance of professional support) played an active role in their own socialisation and that of their elders.
Looking back on the 20th century, it is difficult to avoid the conclusion that social scientists and educators often displayed an obsessive fascination with change. From the turn of the 20th century onwards, and especially during the interwar era, change was portrayed as an omnipotent and autonomous force that rendered irrelevant the customs and cultural legacy of the past. Typically, change was presented in a dramatic and mechanistic manner that exaggerated breaks, ruptures and the decoupling of the present from the past. In this drama, the past appears as a passive victim of forces that continually highlight its irrelevance, ridicule its superstitious pretensions and expose its backward morality to a superior modern world. The assertion that we live in a qualitatively different world from that of our ancestors serves as a premise for the claim that the knowledge and insights acquired in the past have only a minor historical significance. From this standpoint, education is relieved of its responsibility to conserve the legacy of the past; on the contrary, children are encouraged to react against it.
Social scientists like Mead were prepared to acknowledge that in previous ‘stable cultures’ it was possible to transmit the legacy of the past. However, they insisted that this was no longer possible in their new, fundamentally different world of unprecedented change. The futility of maintaining cultural continuity in the sphere of socialisation and education acquired the character of conventional wisdom in pedagogy in the interwar period. When, in 1932, the sociologist and social philosopher Helen Lynd rhetorically asked ‘would a reorientation in terms of personality development be possible in a system so deeply committed to passing on a knowledge of the past as a basis for education’, it was obvious that the answer would be a resounding ‘no’.42
Lynd reported that within the American college system ‘there is a growing feeling that acquiring knowledge of the past experience of mankind is inadequate training for an unknown and largely unpredictable future’. Like Mead, she contrasted a stable past with an ever-changing present.
In the past, education has laid its emphasis on things of permanence and stability; if not ‘underneath are the everlasting arms,’ at least ‘until death do us part,’ economic verities, and the laws of Euclid. But the one thing we can know about the institutional world in which the new generation will find itself is that it will wear a very different aspect from that of today.43
In effect Lynd dismissed the relevance of the collective inheritance of human culture. Even Euclid, whose insights continue to be relevant for educating children in mathematics, was dismissed as a has-been. And yet it can be argued that it is precisely in a changing world that young people require the sense of meaning that comes from an understanding of where they come from and where they stand in relation to their cultural legacy.
Pronouncements on the death of the past were often coupled with the dismissal of the intellectual capital derived from previous times. The philosopher Alfred North Whitehead’s Adventures of Ideas (1933) was strident in its dismissal of what he characterised as the ‘vicious assumption’ that ‘each generation will live substantially amid the conditions governing the lives of its fathers and will transmit those conditions to mould with equal force the lives of its children’.44 He wrote that such conditions had irrevocably changed and that ‘we are living in the first period of human history for which this assumption is false’. Whitehead concluded that the relations between human life and the rate of change had fundamentally altered. ‘In the past the time-span of important change was considerably longer than that of a single human life’; however, ‘today this time-span is considerably shorter than that of human life, and accordingly our training must prepare individuals to face a novelty of condition’.45
From the 1930s onwards, the mantra that we live in a moment of unprecedented change that renders the ways of the past obsolete was repeated with ever greater intensity. During her long career, Mead herself regularly returned to this sentiment, first highlighting it in the 1940s. Two decades later she wrote that ‘within two decades, 1940−60, events occurred that have irrevocably altered men’s relationship to other men and to the natural world’. Consequently, ‘the older generations will never see repeated in the lives of young people their own unprecedented experience of sequentially emerging change’.46
In the 1950s and 1960s it became difficult to encounter any serious attempts to defend traditional approaches to childrearing, education and socialisation. They bore the stigma of an association with the old. In his 1962 BBC Reith Lecture, the psychiatrist George Carstairs returned to the problem posed by cultural lag for childrearing practices. In the now familiar manner, he drew a mechanistic contrast between the stable ways of the past and the changing world of the 1960s.
In primitive societies, patterns of child rearing are slow to change. Each aspect of tribal custom is regarded as the only proper way to behave; often the wrath of the gods is believed to be incurred if traditional habits are broken. To some extent, the same is true of childrearing in our own society. This has always been the domain of mothers and grandmothers, who have tended to cling to old familiar ways because until recent years they had relatively little education or experience of the wider world, certainly less than their daughters of the war and post-war years. It is, I believe, because of this time-lag in the modification of child-rearing practices that our emotional attitudes are sometimes anachronistic and ill-adapted to the changing realities of our society.47
Mothers and grandmothers clinging to ‘old familiar ways’ were cast into the role of villains, held responsible for the miseducation of young people. As it turned out, experts like Carstairs were far more articulate in their critique of supposed ‘ill-adapted’ childrearing practices than about outlining an effective way of socialising the younger generations.
Expert discourse advocating the necessity of abandoning the socialisation methods of the past did not directly influence the behaviour of the public. Attitudes towards culture do not get absorbed through reading experts’ commentaries or attending academic lectures. Progressive educators were continually frustrated by their inability to rid schools of practices that they deemed outdated and misleading.48 Yet with the passing of time the discrediting of the past, and the call to abandon its values and customs, gained cultural hegemony. As one study published in 2010 noted, ‘the notion that we are living in a permanently changing society has created a context in which parents feel that they no longer “know” what is good or bad for their children’.49
The elevation of change into an omnipotent power that demands that society break with its past is often interpreted as an illustration of the irrelevance of pre-existing knowledge and cultural practices. However, the way that change is perceived is not simply a physical or objective fact. Perceptions of change are mediated through cultural attitudes towards human experience. The degree of trust in the prevailing system of meaning plays an important role in influencing how society’s relation to its past and future is perceived. So, if a community feels overwhelmed by change and distant from its past, its sensibility is not simply an expression of an acceleration of physical motion but of a loss of cultural connection with the customs and values of the past.
The dramatisation of change often conveyed a sense of fatalism towards forces that continually rendered everything that humanity achieved irrelevant. This sensibility did not merely highlight transience as the defining feature of the human condition. It also drew attention to the obsolescence of prevailing customs and institutions and, by implication, what humanity had achieved so far. In 1970, Keniston summarised what he perceived as the features of the unstable ‘postindustrial society’ of his time. He cited:
a rate of social change so rapid that it threatens to make obsolete all institutions, values, methodologies and technologies within the lifetime of each generation; a technology that has created not only prosperity and longevity, but power to destroy the planet, whether through warfare or violation of nature’s balance; a world of extraordinarily complex social organization, instantaneous communication and constant revolution.50
The dramatisation of change did not merely serve as an argument for fundamentally re-engineering education and socialisation. As we discuss in the chapters to follow, it was also used to legitimise the activity of social engineering by professionals who could help people to adjust to, and live with, a rapidly changing world.
A society’s sense of temporality is an outcome of its relationship to its history and is particularly influenced by the way it regards its capacity to shape that history. It is not the rapidity of change that has led to the loss of the sense of the past but the inability of significant sections of society to gain meaning from the values into which they were socialised. Although traces of this attitude are evident from the late 19th century onwards, it was the catastrophic experience of the First World War that served as a catalyst for what became in effect a dramatic cultural rupture with the previous era.
A consciousness of change and a sense of the transience of social arrangements need not encourage an attitude of hostility towards the past. It can coexist with a critical but still respectful sensibility towards custom and tradition. It is when the attitude of uncritical rejection of what precedes the present prevails that the fetishisation of change encourages the immature reaction of attempting to start the world anew.
The willingness to change, adapt and embrace uncertainty is one of the important and positive attributes of the modern era. However, these attributes become a caricature of themselves when they acquire the character of a dogmatic rejection of everything that precedes the present. Arendt warned against the tendency of modern man to rebel ‘against human existence as it has been given, a free gift from nowhere’ and ‘which he wishes to exchange, as it were, for something he has made himself’.51
The loss of the sense of the past was not simply an outcome of society’s perception of hyper-change. The sensibility of an ever-widening psychic distance between the present and the past was shaped by dramatic historical experiences – the most important of which was the First World War.52 It is at this point that the phenomenon known today as the ‘generation gap’ acquired a powerful cultural significance. The cultural gap that opened up between the post-war world and the pre-war era would in the decades to follow be experienced through generational tensions as the problem of identity.
The Great War fundamentally undermined the cultural continuity of the West. For many Europeans it appeared that their relationship with their past had been fatally ruptured. Millions of people – especially the elderly − lamented the loss of the old order. It became apparent even to those who possessed a strong conservative impulse that there was no obvious road back to the past. Attempts to maintain a sense of the past were marginalised and overwhelmed by a zeitgeist that sought to create the world anew. The cultural influence of novelty captured the temper of the times. For many intellectuals and artists, the end of the war marked the beginning of a new cultural Year Zero. The sensibility of epochal rupture and disdain for the past dominated the modernist intellectual and artistic imagination. A more radical version of this sentiment was communicated by interwar radicalism, which fervently believed that a break with the past was both possible and necessary.
One of the most momentous and durable legacies of the Great War was that it disrupted and disorganised the prevailing web of meaning through which western societies made sense of their world. Suddenly the key values and ideals into which the early 20th century elites were socialised appeared to be denuded of meaning. In historical moments when people are confused about their beliefs, they also become disoriented about who they are and where they stand in relation to others. The psychiatrist Patrick Bracken writes about the ‘dread brought on by a struggle with meaning’. In circumstances when the ‘meaningfulness of our lives is called into question’, people become painfully aware that they lack the moral and intellectual resources to give direction to their lives.53 ‘Europe was exhausted, not just physically, but also morally’, states a study of the ‘crisis of confidence among European elites after the war’.54
The existential and moral crisis that unfolded in the aftermath of the First World War ruptured a sense of continuity with the past and disrupted people’s sense of who they were. Consequently, it forced society and its individuals to ask the question of ‘Who are we?’ The sense of continuity across time is, as Baumeister stated, one of the defining criteria of identity. ‘That criterion is hard to satisfy if the continuity is that of process of change rather than that of a stable component’, he wrote.55 As the sense of discontinuity prevailed over the sense of continuity, the conditions were created for the historical emergence of what would, by the 1940s, be referred to as a crisis of identity.
Suddenly, the taken-for-granted assumptions about civilisation, progress and the nature of change lost their capacity to illuminate human experience. As the prominent English historian H.A.L. Fisher acknowledged in 1934, he could no longer discern in history the ‘plot’, the ‘rhythm’ and ‘predetermined pattern’ that had appeared so obvious to observers in the past.56 The cultural historian Paul Fussell claims that after the First World War it is difficult, if not impossible, to imagine the future as the continuation of the past; ‘the Great War was perhaps the last to be conceived as taking place within a seamless, purposeful “history” involving a coherent stream of time running from past to future’.57 A dramatic shift in the western world’s sense of temporality had altered people’s relationship to their past.
Although western society was already in the late 19th century predisposed towards detaching itself from its past, it was under the spell of the calamitous impact of the Great War that this sentiment came to capture the popular imagination. Cast adrift from the certainties provided by the taken-for-granted ways of doing things, a significant section of society found themselves asking the question of ‘Who are we?’ It was in this moment, when the unravelling of cultural continuity exposed the illusions of the past, that artists and intellectuals began to show an awareness of identity as a problem. In his study of the history of the idea of identity, Izenberg observes that in all the literary works he examined, ‘the turning point for the idea of identity is World War I’.58
In one of the earliest explorations of the subject of identity, the English philosopher John Locke claimed in the 17th century that identity reflected the continuity of memory through the changes undergone by the self. Since, as Locke explained, memory and individual identity are closely bound together, the loss of the sense of the past meant that many people found they could not take their personal identity for granted.59
Frequently the post-war years were labelled as an ‘age of disillusionment’. Although rarely elaborated, the term ‘disillusionment’ referred to the loss of illusions in the norms and values of the pre-war order. The appellation ‘illusion’ served to communicate the sentiment that the values associated with the pre-war outlook were at best a product of self-deception and at worst of cynicism and dishonesty. In this way the past was not just condemned as irrelevant but also dispossessed of any redeeming qualities. Insulating the young generations from its malevolent influence became one of the goals of progressive educators. Many of them felt that they were in no position to provide moral guidance to the young. As a study of progressive educators in England pointed out, they regarded adults as ‘unworthy models’ who were likely to exercise a corrupting influence on children.60
For most historians the interwar era is best understood as an age of ideologies where new totalitarian regimes threatened to overturn the global order. However, while this hideous drama unfolded – leading to the Second World War and the Cold War – cultural authority became a constant focus of contestation. It was this veritable crisis of normativity that rendered problematic the transmission of values to young people.
In 1930, Winston Churchill drew attention to the crisis of normativity, which he experienced as the estrangement of his society from the values of the past. He observed:
I wonder often whether any other generation has seen such astounding revolutions of data and values as those through which we have lived. Scarcely anything, material or established, which I was brought up to believe was permanent and vital, has lasted. Everything I was sure or was taught to be sure was impossible, has happened.61
Lord Eustace Percy echoed Churchill when he wrote in 1934 that there was ‘no natural idea in which we any longer believe’. He added that ‘we have lost the easy self-confidence which distinguished our Victorian grandfathers’.62
That the values into which Churchill was socialised in the late 19th century had lost much of their cultural influence was echoed by significant sections of the British Establishment. This sentiment was particularly influential among intellectuals and the teaching profession. Like many sections of the cultural Establishment, teachers felt reluctant and uncomfortable about educating young people to embrace the values of the pre-First World War era. Confusion about the normative foundation of authority was internalised by educators – many of whom believed that the traditional modes of classroom interaction needed to be radically revised.
The philosopher of education Geoffrey Bantock recalled the ‘widespread revolt against authority’ after the First World War and the ‘waning confidence in adult values among the liberal “enlightened”’.63 Many educators believed that their role was to protect children from being contaminated by unworthy adult values.64 The clearest expression of the waning of confidence in adult values was a perceptible hesitancy and reluctance to take responsibility for the socialisation of the younger generations. This reluctance to transmit the experience and legacy of the past to the young was widespread among progressive educators in the interwar era. These educators were ‘distressed and alienated’ by the values that prevailed at the time and ‘they shied away from imprinting the future generation with the marks of the present’.65 This sentiment was forcefully articulated by J.H. Nicholson, a Professor of Education at Newcastle University. He lamented that ‘we are an uneasy generation, most of us to some extent ill-adjusted to present conditions’ and ‘should therefore beware of passing on our own prejudices and maladjustments to those we educate’.66
Scepticism about the moral status of the prevailing values had important implications for the conduct of inter-generational relations. It had a particularly direct impact on education. Once adult society had lost the capacity to recognise itself through the values into which it was socialised, its capacity to educate children into a new system of meaning became compromised. Instead of confronting the question of how to conduct essential inter-generational transactions both within and outside education, the post-First World War decades saw a growing tendency to evade the problem. In many cases the erosion of the consensus about what kind of ideas to transmit to young people was, and continues to be, perceived as proof that adults do not have an authoritative role to play in this domain.
Sections of progressive educators attempted to make a virtue of the hands-off attitude towards socialising young people. Their advocacy of ‘child-centred education’ was to a significant extent motivated by their disenchantment with the exercise of adult authority. As one study of this form of pedagogy explained: ‘Some adherents of the child-centred tradition believe that, in the interests of creativity and our needs for innovation in a rapidly changing world, the values of adults should not be imposed upon a child’.67
Writing in 1943 during the Second World War, Margaret Mead highlighted the reluctance of the interwar generation of parents to give clear moral guidance to their children. She noted that ‘millions of young Americans were the first generation to be reared by parents who did not present themselves as moral role models’.68 She added:
[I]t is sufficient to point out that men who are twenty in 1942 were reared by members of a generation which betrayed a Cause which they had believed to be worth fighting for, a generation which spent twenty years heaping obloquy on those who had been fools enough to believe in it – especially on themselves. For the first time in American history, we have had a generation reared by parents who did not see themselves as knights of a shining cause.69
Although advocates of child-centred education and parenting did not go so far as to endorse the abdication of responsibility for socialising children, they continually called for restraint in the exercise of adult authority.
Many social scientists and professionals dealing with family life saw little problem with the erosion of adult authority. When in 1950 the sociologist David Riesman drew attention to what he saw as the abdication of adult authority by parents and teachers, he was sharply criticised by Talcott Parsons. Parsons claimed that what Riesman characterised as the abdication of adult responsibility was actually a new and enlightened way of preparing the young for ‘high levels of independence, competence and responsibility’. He rebutted Riesman by claiming that
What Riesman interprets as the abdication of the parents from their socializing responsibility can therefore be interpreted in exactly the opposite way. If parents attempted to impose their role-patterns in a detailed way on their children, they would be failing in their responsibilities in the light of the American value system.70
Parsons, along with many of his colleagues, praised adults for not ‘imposing’ their views on the young. By the time he wrote his critique of Riesman, the view that adult values should not be imposed on children had migrated from a small circle of progressive educators to middle-class society.
Now and again questions were raised about the reluctance of adult society to transmit a normative outlook to the younger generations. Writing in 1952, the psychoanalyst Hilde Bruch questioned the wisdom of avoiding the exercise of adult authority:
It has become fashionable in the world of psychiatry and psychology, not only in its immediate relation to child-rearing practices, to speak in sweeping, dramatic terms of the crushing effect of authority and tradition. The failure to recognise the essentially valid and sustaining aspects of traditional ways and of differentiating them from outmoded harmful and overrestrictive measures has resulted in a demoralized confusion of modern parents and this has had a disastrous effect on children.71
Though Bruch’s criticism of her colleagues’ normalisation of adult irresponsibility was well observed, it overlooked a more fundamental problem, which was that the older generations were far from clear about what values to transmit to the young. Since adults could no longer represent the past and its legacy to the young, there was little consensus about the stories they should transmit. In such circumstances they found it difficult to give meaning to their authority as adults.
By the 1960s Erikson had become aware that the decline of adult responsibility posed a problem for young people attempting to resolve their crisis of identity.72 He was in no doubt that young people relied on the normative outlook passed on to them by their elders to make their way in the world. Those adults who opted out of their generational responsibility were in effect choosing to ‘remain juvenile’. Those who shirked responsibility ‘for the generational process’ had in effect become ‘advocates’ of ‘an abortive human identity’. He added that:
We have learned from the study of lives that beyond childhood, which provides the moral basis for identity, and beyond the ideology of youth, only an adult ethics can guarantee to the next generation an equal chance to experience the full cycle of humaneness – to become as truly individual as he will ever be, and as truly beyond all individuality.73
Unfortunately, by this time the reluctance to face up to this challenge was no longer confined to a small number of slothful parents or groups of zealous child-centred progressive educators. Adult ethics had acquired a superficial existence. Disconnected from the past, emptied of moral content, it existed as a series of ‘how to …’ statements, instrumentally cobbled together from self-help books by professionals. Socialisation was increasingly perceived as a technique or a process that appeared independent of educating the young in the legacy of the past.