2 Research essentials for community participation

2.1 Introduction

This chapter is intended to equip designers with the essential research principles needed for effective community participation. My aim is to get designers to start thinking like researchers, and paying attention to these basics – introduced in this chapter – will go a long way towards that. We look at putting these principles into practice in designing an engagement programme/research project (I’ll treat these as synonymous). There are three big questions to ask when planning a programme:

  1. What do we need to know?
  2. Who can provide the information we need?
  3. How shall we gather and analyse this information?

The answers form the basis of the research strategy, which will be a key document guiding the project. We round off by looking at the field of usability and user experience (UX) research where rigorous research protocols and testing methods inform user-centred design, and considering what placemakers could learn from these practices.

2.2 Thinking Like a Researcher

Impartiality, ethical practice, reliability and validity are the fundamental principles underpinning all aspects of social research. They apply in any research field where the principal subject of study is human activity – including gathering material to help design places that will be used by people. Attending to these principles will raise the methodological rigour of participation programmes in spatial design. This means gathering better information, and from that gaining more useful knowledge to work into a scheme, which means better design outcomes. I’d like to stress that research, information-gathering and engagement should be integral to the design process. The point is to have reliable, valid, useful knowledge to work with from the earliest possible stage, which is built up as the project evolves. Selecting the right methods to provide the right information at the right time should be ongoing work that’s part and parcel of design development. So with that in mind, what do these principles mean?

Impartiality

The impartiality principle refers to a researcher’s obligation to operate from as neutral and objective a position as possible, without favouring any group or viewpoint. Clearly, this may not always translate comfortably into the built environment sector; designers often consult on their own designs, clients want to muster local support for a proposal, and commercial pressures and political agendas can also muddy the waters. As this book is about working effectively in the real world, it focuses on objectively gathering information and working within these boundaries, rather than achieving methodological nirvana.

Impartiality is essential, particularly in communications. It means being clear about the programme’s aims and outcomes, and open as to what can and cannot be changed in the proposal. All public communications should be worded in neutral, easily understood language, giving people the information they need to make up their own minds. The designer-researcher’s role within this context is to understand local needs and wishes, to listen to opinions and feelings without trying to influence them, to view the data objectively, and to produce plans based on that evidence offering the best deal for the community.

Ethical practice

Ethical practice is all about the rights and responsibilities of researchers and participants (designers and communities in this context). The key elements of research ethics that apply in community participation are treating participants with respect and dignity, and maintaining high standards of integrity and professionalism; again, treating the community as the client. Ethical practice is at the heart of inclusive placemaking. Consider ethical issues at every stage of the research strategy, from setting the objectives to deciding who to involve and how to communicate findings. Think of ethics not as rules about what not to do, but as pointers to what could be done better to promote inclusion and maximise participation. How this translates into practice depends on the nature of the proposals and the local community, but it should be an integral part of the thinking on every aspect of a participation programme in any type of project.

Ethical practice has particular relevance in designing with, rather than for, marginalised groups. There’s more in Chapter 9 about working with people with disabilities and sensory or cognitive impairments, as well as social and cultural exclusion issues that a wide range of groups experience. There’s also specific guidance on working with children and young people, where ethical practice is non-negotiable, particularly in issues around informed consent and safeguarding requirements.

Reliability

Moving away from community relationships now, reliability and validity are about data quality and analysis. The measure of reliability is that if a study were run again by another team, the results and findings would be pretty much the same. This requires that, as far as possible, information is collected and analysed without bias. In everyday parlance ‘bias’ usually refers to prejudice or favouritism, but in a research context it denotes any kind of skew that potentially affects accuracy. And skew is inevitable. There are two main types, which are as relevant to a community participation programme as to an anthropological study or psychological experiment:

  1. Researcher bias is when a researcher’s preconceptions may lead them to misinterpret or misrepresent data. This also includes methodological bias, which refers to aspects of the research process itself that could affect the data collection or analysis.
  2. Participant bias is the influence of factors on participants that consciously or unconsciously affects their responses, such that they may not be a true reflection of their values or behaviour.

Awareness of bias and a commitment to minimising it are integral to thinking like a researcher. In a spatial design context, participation programmes are commonly skewed by the questions asked (methodological bias), the people asked (participant bias) and a lack of impartiality (researcher bias). Each research method chapter (Chapters 3 to 8) includes a section on bias issues associated with that approach, with an extensive list of biases defined in the Appendix. Forewarned is forearmed.

So how can designers gather reliable information from relevant people with minimal bias?

Firstly, the research strategy should be appropriate to the type of development and local communities. This means relevant, accessible approaches which provide the greatest opportunity for participation; a mix of quantitative and qualitative methods maximises participation opportunities.

Secondly, reliable research requires good sampling, which can mean aiming for quality rather than quantity of input. Getting feedback from as many people as possible might seem like the obvious goal, but capturing a full spectrum of views and hearing from a wide range of groups is just as important – which a high volume of responses doesn’t necessarily provide. It’s good practice to proactively contact groups who are known to be less likely to engage and go to meet them in the early stages to hear about issues affecting them, discuss the proposals, and encourage them to contribute. Identify any barriers that could discourage or prevent people in so-called ‘hard-to-reach’ groups from participating. In all this pre-launch work, it pays dividends to start discussions with the community as early as possible.

Hard-to-reach groups

Many take issue with the term ‘hard to reach’; it has connotations of blaming excluded groups for failing to engage rather than the systems that exclude them. I refer to ‘marginalised groups’ here instead, highlighting the ethical responsibility to include and serve these communities.

Examples, but not an exhaustive list, of groups who may be marginalised with particular regard to public space and the urban environment include (in no particular order): people with mobility, cognitive or sensory impairments; people with chronic health problems or physical disabilities; young people; older people; LGBT people; ethnic or cultural minorities; refugees and asylum seekers; people with learning disabilities; people with mental health problems; homeless people; travellers; people in insecure employment or housing, and people with low literacy.

Validity

Validity is about translating the research objectives into effective, actionable questions, and finding valid ways to gain the information required to answer those questions – a process known as operationalisation. It’s also about interpreting data correctly and applying it appropriately when conclusions are drawn, findings reported and recommendations made.

In a community participation context, validity means asking the right questions and measuring the right things to get the relevant data, then looking objectively at what it’s saying and drawing findings from this evidence that reflect the setting and community at that time and place as accurately as possible. With quantitative material from surveys, this might mean identifying correlations and patterns, for instance. Qualitative material may be more challenging, as it doesn’t always offer up clear answers to the research questions. This makes impartiality all the more important in putting aside preconceived ideas or hoped-for results and aiming for good validity.
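By way of illustration only – the survey questions, column names and values below are invented – a simple check for a pattern between two quantitative survey measures might look like this in a short analysis script:

```python
# Illustrative sketch: look for a simple pattern in quantitative survey data,
# here the relationship between how often respondents visit a space and how
# safe they say it feels (both columns are invented placeholder data).
import pandas as pd

survey = pd.DataFrame({
    "visits_per_week": [0, 1, 3, 5, 2, 4, 0, 6],
    "feels_safe_rating": [2, 2, 4, 5, 3, 4, 1, 5],
})

# Pearson correlation between the two columns; a value near +1 suggests the
# measures rise and fall together in this (invented) sample.
correlation = survey["visits_per_week"].corr(survey["feels_safe_rating"])
print(f"Correlation between visits and perceived safety: {correlation:.2f}")
```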

2.3 Creating a Research Strategy

Equipped with these eternal verities of social research practice, we can now use them as the foundation of a project research strategy. A research strategy sets objectives and research questions at the outset, and gives clarity and purpose to the project’s information-gathering work; see Figure 2.1. This is common practice in UX and product design research, and many built environment practices already do the same thing in their community engagement work too, in different ways and under various names. I’ll continue with ‘research strategy’ and ‘research objectives’ here, however, as they carry a reminder of the methodological planning that underpins data gathering and the need for a structured approach to working with the information.

A research strategy document guides a project and mainly consists of answers to the questions: ‘What do we need to know?’, ‘Who can provide the information we need?’ and ‘How shall we gather and analyse this information?’, which should be addressed in that order. These are discussed in the next sections, and there’s a sample document template bringing them together (see p. 22). The research strategy defines a project’s scope and rationale. Talking about product design, for instance, Gyoko Muratovski makes a forceful case for getting the strategy right from the start: ‘At the heart of every design project lays a problem. The ability to understand this problem is paramount to the success of the design outcome. If you do not understand exactly what this problem is, you will not be able to design a solution that can address this problem.’ (Muratovski, 2016, p. 29).

Figure 2.1 Developing the research strategy

First question: What do we need to know?

To establish the research objectives, just complete the sentence ‘This research will …’ in broad terms. There should be no more than three objectives. In relation to a public space, as a very simplistic example, they might be something like:

  1. Understand the site context and functions that the site currently serves.
  2. Find out what people want/need/would like to see there.
  3. Identify some possible options for the site.

The next stage is to draft six to ten questions for the research to answer, which operationalise the objectives into specific areas of study. Each research question should have a defined focus, not overlap with any others and, obvious though it may sound, be answerable from the information likely to be available. In the UX field, Tomer Sharon advises asking not only ‘What do we want to know?’ but also ‘Why?’ (Sharon, 2015). This means considering whether questions will actually yield the information required (in other words, do they have validity?). As the project progresses, it’s worth noting the work that’s been done on each research question, to ensure there’s a sufficient amount of material for each.
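As a purely illustrative sketch – the objectives, question wording and source names below are invented – a simple log of gathered material against each research question can make gaps visible early:

```python
# Illustrative sketch: track how much material has been gathered against each
# research question, so questions that are short of evidence stand out.
from collections import defaultdict

# Research questions keyed by a short identifier (wording is invented).
research_questions = {
    "RQ1": "How is the site currently used, and by whom?",
    "RQ2": "What stops people from using the site at present?",
    "RQ3": "What would local groups like to see on the site?",
}

# Each record links a piece of gathered material to the questions it informs.
materials = [
    {"item": "On-site observation notes, week 1", "answers": ["RQ1"]},
    {"item": "Household survey responses", "answers": ["RQ1", "RQ3"]},
    {"item": "Focus group with youth club", "answers": ["RQ2", "RQ3"]},
]

coverage = defaultdict(list)
for material in materials:
    for rq in material["answers"]:
        coverage[rq].append(material["item"])

for rq, wording in research_questions.items():
    sources = coverage[rq]
    status = "ok" if len(sources) >= 2 else "needs more material"
    print(f"{rq}: {wording} - {len(sources)} source(s), {status}")
```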

Finally, build an evaluation strategy into the programme from the start. Evaluation should measure the quantity and quality of engagement. Getting plenty of people involved is good, but less so if only the ‘usual suspects’ get to express their views. Monitoring participant feedback is essential, so gather responses to events, activities, websites, surveys and communications through quick evaluation questionnaires and by simply asking people informally how they are finding the process. The final element to the evaluation strategy is sharing and building on the knowledge gained, so that the team, participants and wider public and professional audiences can learn from the experience.
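By way of illustration – the activities, participant groups and ratings below are invented – a quick tally of evaluation questionnaire responses can show both how many people took part and how varied they were:

```python
# Illustrative sketch: summarise quick evaluation questionnaires from
# engagement activities, counting responses, distinct participant groups
# and average satisfaction ratings per activity.
from statistics import mean

# Each record: activity name, self-described participant group, rating out of 5.
responses = [
    {"activity": "Drop-in session", "group": "resident", "rating": 4},
    {"activity": "Drop-in session", "group": "resident", "rating": 5},
    {"activity": "Online survey", "group": "young person", "rating": 3},
    {"activity": "Online survey", "group": "business owner", "rating": 4},
]

by_activity = {}
for response in responses:
    by_activity.setdefault(response["activity"], []).append(response)

for activity, records in by_activity.items():
    groups = {r["group"] for r in records}
    average = mean(r["rating"] for r in records)
    print(f"{activity}: {len(records)} responses from {len(groups)} group(s), "
          f"average rating {average:.1f}/5")
```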

Second question: Who can provide the information we need?

With the research objectives, research questions and evaluation strategy defining what we want to know, it’s now time to identify who can best provide the information we need. Sampling, in research terms, is selecting data sources from which to draw in a research project. These could be individuals, households, groups, locations or any other defined source. The nature of the project, the context, the client’s requirements and logistical considerations will all influence sampling decisions, which may have to reflect what can realistically be achieved rather than gold-standard methodology. Nonetheless, aim for a wide range of sources. Getting the sampling right isn’t just a matter of methodological quality: it can determine whether people view the programme as lip service or genuine. Some types of development will require contacting every household affected and inviting them to give their views; this can be the case for schemes in residential areas, for instance. As a general rule, if it’s feasible to contact everyone affected, then do. Public realm projects are less clear-cut; for example, it won’t always be known who will use a proposed space, in which case it’s better to seek a cross-section of community opinion instead.
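As a rough illustration of checking a sample against the wider community – the population shares and response counts below are invented placeholders, not real figures – a simple comparison can flag under-represented groups:

```python
# Illustrative sketch: compare the mix of respondents against local population
# shares to see which groups are under-represented in the sample so far.
population_share = {"under 25": 0.30, "25 to 64": 0.50, "65 and over": 0.20}
respondents = {"under 25": 12, "25 to 64": 130, "65 and over": 58}

total_responses = sum(respondents.values())
for group, expected in population_share.items():
    observed = respondents.get(group, 0) / total_responses
    flag = "  <- under-represented" if observed < expected - 0.05 else ""
    print(f"{group}: {observed:.0%} of responses vs {expected:.0%} of population{flag}")
```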

Participation specialists advise a little caution in working with community groups. Leaders or spokespeople don’t always speak for everyone, or necessarily even the majority. And the people who can offer the most valuable insights aren’t always the first to come forward. It’s better to talk to a few people and assess the range of views, so start informal conversations before the formal programme launches wherever possible, to gain more understanding of the group and their issues, and ensure they’re well informed about the project. As these groups are likely to be volunteer-run, with scant resources and other priorities, starting discussions early can make their participation easier and more likely, by identifying aspects of the proposals relevant to them, discussing any particular concerns, and agreeing how they’ll give feedback.

Third question: How shall we gather and analyse this information?

The final stage is deciding how to gather data from the identified sources in a way that will yield the information required: in other words, methodology. Only consider methodology after defining the objectives, research questions and who to involve. Never start with a preferred approach; the research aims will point to the best ways. A reverse-engineering approach can help in choosing appropriate methods: start by identifying the outputs that are needed, then work back to see how best to deliver them. The essential thing is to choose approaches that produce information that will help answer the research questions and that allow all relevant groups to contribute.

The subsequent chapters cover a selection of established methods – by no means an exhaustive list, as new approaches are developing all the time, especially as technology evolves – but sufficient for most situations. Researchers often favour a mix of methods: quantitative methods providing statistics on when, what, where and how many, and qualitative for insights on why and how. Using mixed methods has the added benefit of offering people a range of ways to participate. Remember only to source material that will be usable. After collecting hours of video recordings or observation notes, for instance, staff need to have the time and skills to review, transcribe, analyse and interpret it all.

Before putting these questions together to form the research strategy, I want to stress that a participation programme isn’t merely a data-mining exercise. All this information-gathering should be taking place in the context of collaborative and mutually beneficial relationships with the community. It’s important to give local groups space to talk to each other, to develop relationships and to build capacity. Look for opportunities that could attract funders, sponsors and organisations that could contribute in other ways to the programme and/or the community. If designers see themselves as enablers rather than providers, the emphasis shifts to helping local communities realise their ambitions, instead of designing for them. This is why gaining and demonstrating an understanding of local concerns and agendas is such an important starting point: participation then has relevance and benefits from the outset.

Figure 2.2 Research strategy template

Putting the strategy together

Having finalised the research objectives, research questions, sampling approach and methodology, we now have the main elements of the research strategy. The suggested template at Figure 2.2 brings them together and should work for small or large-scale projects, adapted as required. (The Background and Context information can be gathered from existing project documentation and desk research.)

Creating a data library

A participation programme will generate a considerable amount of material as it progresses. There will be content produced by participants such as survey responses, consultation feedback, diaries, outputs from workshops and community events, and evaluation forms. There will be material produced by the team, such as mapping and counting datasets, photos, videos, transcripts and record sheets from public meetings and events. And there will be general content like correspondence, publicity materials, reports and notes from meetings. So much material in so many formats can be a lot to keep track of, so, if possible, set up a project data library to manage it all. Create a spreadsheet or some central record to catalogue the material, recording details like the date created, the type of material, a short content description, keywords and any other details to help identify or locate material: see Figure 2.3. How this is implemented depends on the practice set-up and the nature of the project; whatever adaptations are required, the bottom line is to log everything in a way that allows a search across the whole collection of material.
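For teams that prefer a scripted catalogue to a spreadsheet, here is a purely illustrative sketch – the field names and entries are invented – of a data library record that can be searched across all its fields:

```python
# Illustrative sketch: a simple data library catalogue with a search across
# all fields. In practice a shared spreadsheet may do the same job.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LibraryItem:
    created: date
    material_type: str          # e.g. "survey responses", "photos", "transcript"
    description: str
    keywords: list[str] = field(default_factory=list)

library = [
    LibraryItem(date(2024, 3, 2), "survey responses",
                "Household survey, phase 1", ["survey", "housing"]),
    LibraryItem(date(2024, 3, 9), "transcript",
                "Residents' association meeting", ["meeting", "parking"]),
    LibraryItem(date(2024, 4, 1), "photos",
                "Site walkabout with youth group", ["site", "young people"]),
]

def search(term: str) -> list[LibraryItem]:
    """Case-insensitive search across type, description and keywords."""
    term = term.lower()
    return [item for item in library
            if term in item.material_type.lower()
            or term in item.description.lower()
            or any(term in keyword.lower() for keyword in item.keywords)]

for hit in search("survey"):
    print(hit.created, "-", hit.description)
```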

Figure 2.3 The data library

A second essential resource that’s needed from the start is a contacts database. This will hold details of local stakeholders to be kept informed and invited to events, such as statutory consultees, key individuals, councillors, businesses, civic societies, residents’ groups and community organisations. At the same time, set up a mailing list to send a news email and other updates to these contacts and anyone else who wants to receive it, and encourage people to sign up for email updates at public events and online.

2.4 Lessons from the User Experience (UX) Field

The UX field is way ahead of the built environment sector in terms of user-centred design. Research processes are relatively simple but they give designers good evidence of users’ needs and wants, so that they can design accordingly. I mentioned in the previous chapter how product design and UX research put new products through rigorously evaluated research cycles of prototyping, testing, redesign and re-testing. The built environment’s a different world, of course, but spatial designers can learn much from these user-focused fields. If developers of mobile apps or websites attend so closely to users’ responses to these relatively ephemeral creations, shouldn’t designers of spaces intended to last for years to come listen even more carefully to their users?

Usability and user experience research

Research processes in product design and UX point the way to improvements in spatial design practice in two areas in particular: starting with the user in mind, and involving users early on in the planning and design process. The point of UX research is to enable informed design decisions, which means greater user satisfaction, more customers and a healthy bottom line. UX is considered to pay serious dividends in some design fields. At IBM, for example, every $1 invested in user research reportedly generates $10–100 in profit, because usable products are the most efficient and the most commercially successful (Muratovski, 2016). User-centred design involves research at each stage of the design process and beyond, measuring usability and gathering feedback post-launch – just as important in the research cycle as pre-launch testing.

Research in consumer products starts by gathering data on users’ priorities, needs and wants: asking how and when a product will be used, its must-have functions and other desirable features. The UX pioneer Jakob Nielsen recommends a seven-step process for usability testing (Nielsen, 2012), summarised below, which, although aimed at designers in the technology and digital sectors, has clear relevance to spatial design. He argues that user testing shouldn’t wait until detailed designs are ready, because by this point most critical usability problems are likely to require major redesign to rectify. So involving users early on is essential. Note that in Nielsen’s seven steps, design development only starts halfway through the process.

Nielsen’s seven steps of usability testing, from inception to completion:

  1. Before starting a new design, identify good features in the previous design that are worth keeping, as well as features that are problematic for users.
  2. Look at relevant designs by others to see what works and what doesn’t.
  3. Conduct field research to understand how users behave in their natural habitat.
  4. Create very basic prototypes of new design ideas and begin usability testing.
  5. Identify which design ideas test best, and develop more detailed iterations, testing at each stage.
  6. Assess the designs against established usability guidelines.
  7. Decide and implement the final design. Test again once it’s live; unforeseen usability problems still often occur after completion.

Prototypes can be tested on-screen and/or as a basic physical model. Researchers draw on a combination of methods, usually including observation, diary studies, questionnaires, focus groups, individual interviews and task-based exercises to study users’ responses, the features they liked, those they didn’t need or understand, ease of use, and so on. (It is no coincidence that many of those research methods appear in this book’s chapter listing). The design is then refined and re-tested, informed by test results at each stage, until it reaches optimal functionality and user satisfaction. Once the product does everything that users need and want, the final ‘look and feel’ can be decided. Yes, aesthetic considerations come last.

Another approach worth considering is heuristic evaluation, where designers assess an initial concept design against mutually agreed usability criteria before testing. This allows many usability problems to be resolved at the outset, and then each iteration evolves the design in response to user test feedback. Could designers in the built environment adopt similar processes to allow greater certainty as to whether their designs will be well received? Could it become standard placemaking procedure to start by creating prototype models that allow potential users to explore features and functionality, collecting detailed feedback, adjusting the model to incorporate their responses, testing again, getting more feedback, finally achieving maximum usability and functionality – and then working on the design details?
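As a purely illustrative sketch of how such an evaluation might be recorded – the criteria, reviewers and scores below are invented – averaging reviewer scores per criterion makes it easy to flag issues to resolve before testing:

```python
# Illustrative sketch: several reviewers score a concept design against agreed
# usability criteria; low-scoring criteria are flagged before user testing.
from statistics import mean

# Scores out of 5 from three reviewers, one list per criterion (all invented).
scores = {
    "Legible routes through the space": [4, 5, 4],
    "Step-free access": [2, 3, 2],
    "Seating and shelter": [4, 4, 3],
    "Feels safe after dark": [2, 2, 3],
}

for criterion, ratings in scores.items():
    average = mean(ratings)
    flag = "  <- resolve before testing" if average < 3 else ""
    print(f"{criterion}: average {average:.1f}/5{flag}")
```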

2.5 Key Points Summary

Applying the key social research principles of impartiality, ethical practice, reliability and validity improves community participation programmes. Greater methodological rigour in these processes benefits everyone.

These principles can translate readily into good practice in working with local communities on spatial design, especially in terms of gathering and analysing data, and communications.

There are three questions to ask at the start of a project, in this order:

  1. ‘What do we need to know?’ defines the objectives for the project, which can then be operationalised into research questions.
  2. ‘Who can provide the information we need?’ means considering the whole range of people who could be affected by a development, being clear about whose voices need to be heard – whether the ‘silent majority’ or marginalised groups – and doing everything possible to bring them into the process.
  3. Then, and only then, consider ‘How should we gather and analyse that information?’ A mix of qualitative and quantitative methods produces a more rounded picture and greater understanding, but requires more time and analytical work.

Having decided on appropriate research methods, draft a research strategy that summarises aims, methods and required outputs.

There are two essential information management resources to create at the start: a central data library to hold all the project material, and a contacts database.

Usability testing methods could benefit spatial design, particularly by adopting processes of prototyping, testing and refining usability and user satisfaction before starting detailed design.

Meaningful participation programmes start with an understanding of where people are coming from, building trust and rapport, and communicating clearly and honestly.
