CHAPTER 11

The Application of Standards and Best Practices in Research and Evaluation for Public Relations1

The Current State of Public Relations Measurement

Companies specializing in public relations measurement and evaluation have traditionally focused on evaluating only the outputs of public relations—most commonly the media or press coverage that is a direct result of media relations activities. The primary limitation of these companies is their narrow focus on an intermediary in the public relations process—the media—rather than on the target audience for these communication activities.

Relying strictly on evaluations of intermediaries in the communication process fails to create effective measurement and evaluation systems that provide a diagnostic appraisal of communication activities which, in turn, can lead to enhanced communication performance. The failure to include diagnostic measures ignores one of the fundamental best practices in communication research and is the key reason why public relations measurement and evaluation has failed to progress significantly over the past 30 years.

Standards in Public Relations Research

Setting standards for public relations research—ethical, measurement, and evaluation standards—should be the first thing a client looks for when hiring a research firm. As noted in Chapter 2, these standards are only now being seen as important (Stacks 2016). Why? First, because standards are necessary conditions for professionalism. Second, because standards tell us what to research and lead to how to conduct that research. Third, because standards are the only way to effectively and efficiently compare communication programs for their efficacy and their ability to meet communication objectives.

Ethical standards address how researchers should approach research, the research participants, and the nature of business. Unethical research calls into question the validity of the research and the researcher’s professionalism and, perhaps more importantly, it calls into question the researcher’s neutrality and impartiality in assessing the data, evaluating it, and making recommendations. An ethical researcher should be above the fray, providing pure and unbiased data and evaluation.

Measurement standards address how data should be created, assessed, and evaluated. Furthermore, all measures should report reliability and validity information and include the actual reliability statistics.

Evaluation standards provide the researcher with a tool for comparing results against others. They include how much statistical error the researcher is willing to accept (usually no more than 5 percent) and how results compare against other recognized research findings.

Best Practices in Public Relations Research

In public relations research, academic as well as professional,2 there are nine best practices that can serve as the foundation for establishing a standardized set of measures for public relations activities that are essential elements in advancing public relations measurement and evaluation. These practices are divided between two broad areas: (1) the use of specific research methods and procedures and (2) the application of measures that examine both the quality and the substance of public relations activities.

Research Methods and Procedures

There are three research methods and procedures that are an essential part of best practices in public relations research. These methods and procedures include every key step in the research process from the inception of the project through the delivery of the research report itself. These three steps are as follows:

  1. Setting clear and well-defined research objectives.

  2. Applying a rigorous research design that meets the highest standards of research methods and ensures reliable research results.

  3. Providing detailed supporting documentation with full transparency.

Clear and Well-Defined Research Objectives

Setting clear and well-defined research objectives is the critical first step in the public relations research process. Unfortunately, it is the aspect of best research practices that is typically either overlooked or not given the level of attention that it requires in order to create an effective and reliable measurement and evaluation system. The establishment of clear and well-defined objectives is particularly critical since research objectives function as the foundation upon which the rest of the research program rests (Stacks 2017). The key to setting these objectives so that they can effectively contribute to a measurement and evaluation program that meets best standards involves answering the following five questions.

  • Is the information need clearly articulated?

    • In order for any form of measurement and evaluation to be effective, it is essential that the information be specific and unambiguous. A generalized information need such as, “How well did the program perform?” is unlikely to serve as an effective basis for any research-based decisions.

      The more appropriate questions are: “What is the level of awareness of the product, issue, or situation?” “How knowledgeable is the target audience about the material being communicated?” “Is the information relevant to the target audience?” “How has the attitude of the audience been impacted by exposure to communications?” “Is the target audience willing to take any form of action as a result of exposure to the communication program?” These questions result in specific information objectives that can be reliably measured and provide data that can be used to improve communication performance.

  • Are the target audiences for the communication program well defined?

    • It is essential to understand who the target audience is as precisely as possible.3 This is important for several reasons. The foremost reason is practical: to conduct research that reliably measures and evaluates a communication program, those to whom the program is directed must also serve as the source of information about the audience. A poorly defined audience is typically one so broad in scope that it includes those unlikely to express an interest or need. An example of an audience that may be too broad is “women aged 18 to 49 years old.” By contrast, a more narrowly defined audience is “mothers of children who are 12 years or younger.” While the former group includes the latter, it is less precise and, depending on the product or service, less likely to yield the same information.

  • Are business objectives being met through the information gathered from the research?

    • The central reason for conducting any type of measurement and evaluation research is to address a business issue or concern. Consequently, as the objectives for the research are being established, it is critical that a detailed assessment of the business takes place as a first step in the process. For example, if the issue is assessing the introduction of a new product category, then measuring awareness is a highly relevant and essential measure. However, if the business issue concerns a prominent national brand, then purchase intent may be a more relevant and important measure to include in the research program. The more closely research is tied into delivering business objectives, the more valuable and strategic it will be.

  • Is there a plan for how the findings from the research will be used?

    • Just as it is important to have a clear understanding of the research objectives, it is equally essential to understand the types of actions that can be taken as a direct result of the information that is gathered in the research process. The intent is to create research that functions as an aid in the decision-making process, rather than having it serve as an end in and of itself. For this reason, it is best to consider likely internal users or customers for the research findings at the outset (e.g., marketing, investor relations, new product development, human resources, market, or business units). Human nature being what it is, it is also advisable to secure their involvement and buy-in first, so that the findings are welcomed and applied constructively, not just as an afterthought. Objective listening research and the insights derived from it are tremendously powerful in terms of internal education for management and appreciation for the strategic focus of communication.

  • Is the organization prepared to take action based on research findings?

    • Just as important as having a plan for applying the research is having an understanding of the actions the organization is willing to take based on the findings. If the senior decision makers are unwilling to undertake specific actions, then creating a research program that measures and evaluates that action will have little value to the organization and may actually be counter-productive to the organization’s long-term goals and objectives.

Rigorous Research Design

Once objectives have been established, it is important to design research that both supports the objectives and is rigorous enough to provide usable and actionable information. This rigor not only assures reliable research results, but also provides a foundation for measuring and evaluating communication performance over time. Again, a series of nine questions needs to be addressed in order to ensure that rigorous research designs are applied.

  • Is the sample well defined?

    • The research sample, just like the target audience, needs to be defined precisely to ensure that the actual target audience for the communication is included in the research. The recommended approach is to screen potential research respondents for these defining characteristics before the start of the study. These characteristics can be demographic (e.g., age, gender, education, occupation, region, etc.), job title or function, attitudes, product use, or any combination of these items. However, while it is important to define the sample precisely, caution must also be taken to make sure that key members of the target group are included in the sample. In some instances, samples require minimum quotas of specific types of respondents to ensure that analyzable segments of each quota group are included in the study.

  • Are respondents randomly selected?

    • One of the most significant and immeasurable biases that can occur in a study is the exclusion of potential respondents who are difficult to reach and therefore are less likely to participate in the study. Special attention needs to be paid to ensure that these individuals have an equal opportunity to participate. This is equally true for telephone as well as online surveys. This is typically accomplished through multiple contacts over an extended period with a random sample or replica of the group being studied. It is also essential to be sensitive to the audience being studied and appropriately adapt the ways that responses to questions are secured. Examples of these very specific groups of individuals that require increased sensitivity are young children or other groups where there are special laws and regulations guiding data collection, night-shift workers, ethnic minorities, and disabled or disadvantaged groups. (See Chapter 8 for a detailed discussion of sampling.)

  • Are appropriate sample sizes used?

    • Samples need to provide reliability in two distinct ways. The primary need is to make certain the overall sample is statistically reliable. The size of the sample can vary considerably from a few hundred respondents to over 1,000 individuals. The decision to use one sample size over another is contingent on the size of the overall population represented by the sample, as well as the number of subgroups that will be included in the analysis. For example, a national study of Americans typically requires a sample of 1,000 respondents. This assures geographic and demographic diversity as well as adequately sized subgroups between which reliable comparisons can be made. By contrast, a survey of senior executives may require only 200 to 400 completed interviews to meet its objectives.

  • Are the appropriate statistical tests used?

    • Survey research is subject to sampling error. This error is typically expressed as a range of accuracy. A number of different standards can be applied to determine this level of accuracy as well as serve as the basis to compare findings between surveys. The most common standard used is the 95 percent measure. This standard assures that the findings, in 19 out of 20 cases, will be reliable within a specific error range for both sampling and measurement. This error range varies depending on the size of the sample under consideration, with a larger sample providing a correspondingly smaller range of error. With that standard in place, a number of different statistical tests can be applied. The key is to select the proper test for the situation being tested. (See Chapter 10 for a detailed discussion on statistical testing.)

  • Is the data collection instrument unbiased?

    • A questionnaire can impact the results of a survey in much the same way as the sample selection procedures. The wording and sequence of questions can significantly influence results. Therefore, it is essential to make sure that wording is unbiased and the structuring of the questionnaire does not influence how a respondent answers a question. Paying attention to this concern increases the reliability of the findings and provides a better basis for decision making.

  • Are the data tabulated correctly?

    • Special care needs to be taken to make sure that the responses from each questionnaire are properly entered into an analytic system so that data from the entire study can be reliably tabulated. Data preferably should be entered into a database with each questionnaire functioning as an independent record. This will allow for subsequent verification if errors are detected and for the greatest analytic flexibility. Accuracy will also be significantly enhanced with this approach. Spreadsheets do not provide the same analytic flexibility as specialized statistical packages (e.g., SAS or SPSS), and it is significantly harder to detect errors when using that type of data entry system.

  • Are the data presented accurately?

    • Assuming the data are tabulated properly, it is equally important that they be presented in a manner that accurately represents the findings. While data are often selectively presented, the omission of data should not be allowed if it produces misleading or inaccurate results. Consequently, the full dataset needs to be available, even if the data are only selectively presented.

  • Is qualitative research used appropriately?

    • Well-executed qualitative research (focus groups, individual in-depth interviews, and participant observation) can provide unique insights that are not available from other sources. While these insights are invaluable, this form of research is not a substitute for survey data. Qualitative research is particularly useful with three applications: development of communication messages, testing and refinement of survey research tools, and providing insights as well as deeper explanations of survey findings. (See Chapter 6 for a detailed discussion on qualitative research methods.)

  • Can the study findings be replicated through independent testing?

    • If research is properly executed, reproducing the study should yield similar results. The only exception is when significant communication activity has occurred that will impact attitudes and opinions. Unless the study is reliably constructed so that it can be replicated, it will be difficult to produce studies that can be reliably compared and which will demonstrate the actual impact of communication activities. (See Chapter 9 for a detailed discussion of experimental design.)
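The sampling-error and significance-testing points above can be sketched numerically. The following Python sketch is illustrative only and is not drawn from the chapter; the 95 percent standard and the sample sizes mirror the discussion above.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95 percent confidence margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for comparing proportions from two independent survey waves."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Worst case (p = 0.5): a national sample of 1,000 vs. a 300-person executive sample
print(round(margin_of_error(0.5, 1000) * 100, 1))  # 3.1 percentage points
print(round(margin_of_error(0.5, 300) * 100, 1))   # 5.7 percentage points

# Awareness moves from 40 to 46 percent between two waves of 1,000 respondents each;
# |z| > 1.96 means the change is reliable at the 95 percent (19 in 20) standard.
print(abs(two_proportion_z(0.46, 1000, 0.40, 1000)) > 1.96)  # True
```

Larger samples shrink the error range, which is why a national study of 1,000 supports reliable subgroup comparisons while a survey of 200 to 400 executives is held to a looser level of precision.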

Detailed Supporting Documentation

While it is essential to employ a rigorous research design when measuring and evaluating public relations activities, it is just as critical to document how the research was conducted. This documentation provides a clear understanding of the issues being measured and a detailed description of the audience being studied. Just as important, it provides the information required to replicate the study so that consistent measurement and evaluation can be applied. The three questions that need to be answered to ensure that the documentation meets the standards of best practices are as follows:

  • Is the research method described fully?

    • The description of the method includes not only how the study was conducted (telephone, in person, online, etc.), but also the timeframe when the interviews took place, who conducted the interviews, and a description of the sample.

  • Is the questionnaire—as well as any other data collection instruments—available for review?

    • This ensures that the reader understands the context of the questions by being able to refer back to the questionnaire when reviewing the dataset. It also allows for easier replication of the study.

  • Is the full dataset available if requested?

    • Availability of the data provides full transparency of the findings, as well as the foundation for doing comparative analyses with subsequent waves of the research. It also allows for additional tabulation of the data and other analyses that may be useful in a subsequent analysis.

Quality and Substance of Research Findings

The second broad area contributing to best practices in public relations research involves six practices which ensure that the research findings contribute to improving communication programs. These six practices are as follows:

  1. Designing the research to demonstrate the effectiveness of public relations activities.

  2. Linking public relations outputs to outcomes.

  3. Using the findings to aid in the development of better communication programs.

  4. Demonstrating an impact on business outcomes.

  5. Being cost effective.

  6. Having applicability to a broad range of public relations activities.

Demonstrating Effectiveness

The central reason to conduct measurement and evaluation research is to determine if a communication program works. Consequently, every set of research objectives and each research design needs to ask the following two questions:

  • Is the research designed to show the potential impact of a message, program, or campaign?

    • This is the primary acid test when designing a measurement and evaluation effort. Unless the research has this capability built into the design, it should be reconsidered. These designs can vary considerably from situation to situation. However, a common element of many measurement and evaluation programs is setting a baseline or benchmark at the initial stages of the research and using that benchmark as the basis for evaluating performance, preferably throughout the campaign at specified intervals.

  • Is the research designed to function as a benchmark to gauge future performance?

    • A benchmark study has to examine basic communication measures. The importance of each of the measures may vary over time. However, basic measures of awareness, knowledge, interest or relevance, and intent to take action need to be considered for inclusion in most studies.

Linking Outputs to Outcomes

A significant proportion of public relations measurement and evaluation focuses attention on the evaluation of media placements. While media placements are often critical in the evaluation and measurement process, they represent only one limited aspect of the public relations process. More importantly, concentrating analysis on that one area alone fails to take into account the fundamental issue that public relations activities take place in order to impact a target audience. While the media are a key target for this activity, they actually function as an intermediary or conduit. The fundamental question that needs to be asked is:

  • Does the research examine the entire public relations process?

    • This process needs to include an examination of the program’s communication objectives and media placement, as well as the impact of these placements on the target audience.

Developing Better Communication Programs

The goal of a measurement and evaluation program is not to determine the success or failure of a public relations program. The goal is to improve the overall performance of these efforts. There are two best practices in this instance that need to be applied:

  • Is a diagnostic element built into the research that provides insight and direction to improve program performance?

    • Research needs to do more than measure communication performance. It also needs to provide insight into the communication objectives and the target audiences in what we have labeled an “end-to-end” process. Consequently, the research needs to offer direction for public relations programs and their content and to also identify corrective strategies so the programs achieve their goals. Measurement in this instance is not an end in itself. Rather, it is a diagnostic, feedback-oriented tool.

  • Is research conducted early in the program to take advantage of the information?

    • Ideally, measurement and evaluation should take place at the onset of a communication program so that the findings can be incorporated into program planning and strategy. The benefit of this research is lost if the only research conducted takes place at the end of the effort.

Demonstrating Impact on Business Outcomes

While a more effective communication program is a central reason to conduct research, the real goal is to have a demonstrable impact on business objectives. The key questions that need to be asked about the research design, therefore, need to concentrate on evaluating communication performance—outcomes—as well as mediating variables such as reputation and relationships (and trust and transparency [Rawlins 2007]) to business outcomes. Establishing appropriate benchmarks and building in key performance indicators are increasingly a valued part of research activity which further cements communication into organizational improvements.

  • Did the product sell (outcome); were attitudes changed (outtake); did reputations improve as a direct result of the public relations program (outcome)? (Stacks and Bowen 2013)

    • Each of these is a specific business outcome that has an impact on the operations of an organization. It is essential to determine whether it was the program that effected these changes or other actions.

  • How did the public relations effort contribute to overall success?

    • If the public relations program contributed to these changes and shifts, then it is equally important to determine which elements of the program had the greatest impacts (correspondence between outputs and outcomes).

In Chapter 1 we introduced the concept of best practices (Michaelson and Macleod 2007). As Figure 11.1 demonstrates—and by now it should be readily apparent—there is a strong interrelationship between the communication objectives set by the organization, the messages sent by the organization, how those messages are received, and how the outtakes from those messages impact the objectives and goals set by the organization.

Cost Effectiveness

There are a number of formulas that provide guidelines for the proportion of a public relations budget that should be devoted to measurement and evaluation systems (see Pritchard and Smith 2015). The issue, however, is not how much should be spent, but whether the inclusion of research in the program increased effectiveness enough that its value is greater than the cost of the research itself.

Figure 11.1 Best practices

  • Did the research enhance the effectiveness of the public relations efforts?

    • This is the first question that needs to be answered. If the program did not improve as a result of the research, or if direction to improve future programs was not gathered, then the research needs to be reevaluated and redesigned to ensure these goals are met.

  • Was the return on investment for conducting the research program greater than the actual cost of the research itself?

    • However, even if the research is effective in improving program performance, the cost of the research still needs to be considered. Research that costs $10,000 but only offers incremental performance of $1,000 is a poor investment. This does not mean that research should not be conducted in this situation. Instead, the research design and the research objectives need to be reevaluated.
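The cost test described above reduces to simple arithmetic. The following sketch is illustrative: the $10,000 and $1,000 figures come from the chapter's example, while the $25,000 figure is hypothetical.

```python
def research_roi(incremental_value, research_cost):
    """Net return per dollar spent on the research itself."""
    return (incremental_value - research_cost) / research_cost

# The chapter's example: a $10,000 study that yields only $1,000 of improvement
print(research_roi(1_000, 10_000))   # -0.9, a poor investment
# A hypothetical study whose incremental value exceeds its cost
print(research_roi(25_000, 10_000))  # 1.5
```

A negative value signals that the design and objectives should be revisited rather than that research should be abandoned.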

Applicable to a Broad Range of Activities

While the direct intent of public relations measurement and evaluation is to improve communication performance, it is also essential to note that public relations does not operate in a vacuum. It is typically integrated with other aspects of an organization, and these need to be taken into consideration so that the benefits of the research can be used as widely as possible.

  • Is the information gathered applicable to other areas?

    • These areas can include new product development, corporate reputation, other marketing communication methods as well as promotional use.

Benefits of Best Practices in Public Relations Research

The benefits of best practices go beyond merely “doing it right.” Following these practices offers specific business advantages. These advantages stem from generating highly reliable results that go beyond merely providing information. They are results that are actionable, improve decision making based on the availability of highly reliable data, and yield a potential database that allows a comparison of findings from case to case that can also be applied to parallel communication programs. Just as important is the increase in overall quality that will lead to consistency in the application of research and the findings from that research.

Implementing Best Practices

The primary best practice that needs to be followed is the inclusion of research, measurement, and evaluation as a core part of a public relations program. Ideally, an individual in each organization should be charged with managing this process—to know the best practices and to assure that these best practices are followed. While there is no standard approach for how public relations research should be conducted, following best practices yields reliable and usable results. By following these basic guidelines, research will provide the requisite insights for improved planning, effectiveness, and demonstration of the importance and value of strategically linked communications to organizational success.

Evaluating and Interpreting

Once the research has been completed, it is important that the final presentation of results and interpretations be made. Typically, this is done in two forms. First, a written report that begins with an executive summary of the findings, followed by an in-depth discussion of what was found, how reliable and valid those findings were, and tables and graphics that visualize those findings for the client. Second, an oral presentation to senior client leadership that distills the written report to its essence and is used to foster questions and answers. If you have followed the standards and best practices reported in this volume, you should have no problem evaluating the research, its methodology, and its results. With the appropriate data from other business functions (e.g., marketing, human resources, information technology, and so forth), you can establish relationships between campaign outtakes and outcomes as they relate to business-driven results during the same timeframe. Looking at the final outcome(s) and correlating them to other function outcomes provides a measure of the return on investment (ROI) of the public relations campaign.
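Correlating campaign outtakes with outcomes from other business functions can be sketched as follows. The data here are entirely hypothetical, and a correlation alone does not establish that the campaign caused the result; it is one input into the ROI assessment described above.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical monthly data: outtake (awareness, percent) vs. outcome (sales index)
awareness = [32, 35, 41, 44, 48, 53]
sales = [100, 104, 109, 115, 118, 126]

print(round(pearson_r(awareness, sales), 2))  # 0.99, a strong positive relationship
```

In practice this calculation would be run against the same timeframe for both series, with other business-function data used to rule out competing explanations.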

1 Sections of this chapter were originally published in Michaelson and Macleod (2007) and Stacks (2016, 2017).

2 For a detailed discussion of academic evaluation methods, refer to Stacks (2016, 2017).

3 We can no longer get away with measuring publics; they are too heterogeneous in a global business environment that is so clearly interconnected via the Internet.
