CHAPTER 2

The Move Toward Standardization

This chapter summarizes three peer-reviewed articles, published in top-five journals, that we wrote with Dr. Donald K. Wright of Boston University and Dr. Shannon A. Bowen of the University of South Carolina (Michaelson and Stacks 2011; Michaelson, Wright, and Stacks 2012; Bowen and Stacks 2013b). Each article addresses a similar theme—the movement from business as usual, to best practices, to standards for the research that is essential to the modern public relations or corporate communication function. In this move from merely counting what is sent to audiences (outputs) to examining the effects on intermediary and target audiences, we focus on outcomes that are typically described as soft indicators rather than the hard financial indicators used by other business functions. Chapter 3 reviews and extends this business-oriented model and provides the background and definitions of terms necessary to understand modern public relations. However, as demonstrated in Figure 2.1, the communication profession has moved from a focus on producing outputs to one of creating and implementing strategies that yield results that, in turn, can be correlated with financial data to show public relations’ impact on the program or campaign.

In a sense, we’ve come a long way in a short period of time. In doing so, we have taken an approach to excellent public relations envisioned by Jim Grunig beyond the corporate world and into daily practice (Grunig, Grunig, and Dozier 2002).

Why Standardization?

Standardization of public relations research is the next step up from best practices. While best practices tell us how best to meet objectives, standards define what needs to be measured. Public relations research and strategy have progressed from a rather primitive counting of outputs (the communication product: brochures, media releases, tweets, and so forth) (Stacks and Bowen 2013, 21) to a more strategic, social psychological orientation. This led Michaelson to propose the model of public relations’ best practices introduced in Chapter 1 (see Figure 2.2). He defined best practices as “a method or technique that has consistently shown results superior to those achieved with other means, and that is used as a benchmark” (Michaelson 2007; Michaelson and Macleod 2007). The model focused on how to produce the best research and had its own goals and objectives: (1) be rigorous in design, (2) be complete in measurement design and evaluation, and (3) report so that the research improves the strategic value of public relations by advancing our knowledge base.

Figure 2.1 The movement toward standardization

Source: Stacks (2011), Copyright 2011 by The Guilford Press. All rights reserved.

The first question to be answered is: what is a standard? According to the Oxford English Dictionary, a standard is “an idea or thing used as a measure, norm, or model in comparative evaluations” (Oxford English Dictionary n.d.). Standards thus provide comparative evaluations that gauge the absolute performance of programs and program elements, which, in turn, allow us to compare performance against prior and competitive programs within an industry and category and relative to other industries or categories.

Figure 2.2 A best practices model

Source: Michaelson and Macleod (2007).

Standards are the hallmarks of professionalism. They provide evidence of the validity of the research process and, in combination with best practices, provide rigor and reliability in measurement. This gives the public relations professional the ability to determine whether specific communication goals are met (the absolute measures) and a way to identify whether changes in specific measures are significant relative to the performance of similar programs or campaigns (the relative measures). And, finally, standards measure progress and allow the professional to take corrective actions, if needed, to ensure communication goals and objectives are achieved (Figure 2.3)—goals and objectives that in turn serve as the foundation for achieving business success.

An excellent campaign or program would have multiple measurable objectives across the campaign. With this foundation of standardized measures, we now have the ability to effectively measure and evaluate nonfinancial data and correlate it with financial data provided by other business functions. However, we must ensure that business goals and objectives are parallel to our goals and objectives. There are three standard public relations objectives. The first deals with disseminating information to target publics or audiences; this is the informational objective. Here we check to see whether the messages were received, recalled, and understood. If a communication is not received, it has no value; if it is received but not recalled, it still has no value; only if it is received, recalled, and understood does it have value. Each element can be measured quantitatively and evaluated against expected benchmarks.

Figure 2.3 Standardizing goals and objectives

*If benchmarks are not being met, informational strategies need to be reframed or refocused.

Source: Stacks (2011).

Second, does the communication do what it was strategically meant to do? This is the motivational objective, and its focus is on perceptions of the message. The motivational objective consists of three components: cognitive (agreement, disagreement, neutrality), affective (emotional impact), and conative (behavioral intention). If the objectives meet benchmark expectations, then the public relations function has contributed to business goals and objectives. Third, does the strategy actually produce the intended results—the expected behavior? The behavioral objective provides evidence (1) that the business goals and objectives are met and (2) that public relations strategy forms an important part of the larger business decision-making process. The research process represents a campaign continuum running from strategic development, through refinement based on benchmarks once the campaign is engaged, to final outcome evaluations against an established baseline.

Research standards give the public relations professional, the corporate chief communication officer, and the public relations agency executive valuable input into strategic decisions made at what Grunig has called the managerial table, by providing information that has been comparatively evaluated against those standards.

Toward a Standardization of Public Relations Research Ethics

We believe that before any research is begun, an evaluation of its ethics and the ethics of the research needs to be undertaken. Shannon A. Bowen and Don W. Stacks provided an ethical standard for the public relations researcher based on (1) a set of principles, (2) their core values, and (3) a way to test the ethicality of a problem (Bowen and Stacks 2013a; Bowen and Stacks 2013b). It is our hope that this ethical standard will not only further professionalize the practice, but also (1) drive data collection, (2) strengthen the credibility of research reports among decision makers, and (3) increase the confidence those decision makers have in research findings. This standard should increase the legitimacy and support of public relations’ role as an ethical counselor or advisor to top management.

There is a difference between ethics in general and research ethics in particular. First, ethics is a best practices component. As Bowen, Rawlins, and Martin noted:

Issues managers must identify potential problems, research must be conducted, and both problems and potential solutions must be defined in an ethical manner. Therefore, ethics can be defined for public relations as how we ought to decide, manage, and communicate. (Bowen, Rawlins, and Martin 2010, 130)

Although this idea has been critiqued, many public relations professionals, as keepers of an organization’s reputation, are also called upon to provide ethical guidance to their dominant coalitions (i.e., the management team) (Bowen 2008; Berger and Reber 2006; Curtin and Boynton 2001). Those dominant coalitions manage issues and support the conduct of research. Currently, research ethics is individualized, with individual firms or professionals serving as guides to the research process. This approach is haphazard and leads to a lack of consistency in ethical standards across the profession. Indeed, very little ethical guidance is offered that is specific to public relations research (Stacks 2002; Stacks 2011; Stacks 2017). Specific standards, however, would help to unify the profession and lead to a more consistent ethical practice across all forms of data collection and analysis. Given this lack of guidance, Bowen and Stacks undertook a study devoted to identifying the ethics of public relations research.

In conducting this research, they were guided by two research questions: First, “how do professional associations that deal with public relations research, both academic and professional, express codes of ethics, statements, or conduct regarding the ethical practice of research?” And, second, “if these associations have ethics guidelines, what principles or core values are espoused?”

To answer the first question, they looked at 14 associations related to public relations or corporate communication research and examined their published ethical codes of research. All association codes or statements were downloaded from the associations’ websites and reflected, at that time, the most up-to-date statements on the ethical conduct of research.1

They found that all 14 associations had a formal ethics statement; four of them stated codes of conduct with legal overtones. Bowen and Stacks then looked for formal research ethics statements, finding that eight of the associations did state them. They then looked more closely at those statements to see whether they articulated one or more of the five core principles identified in the ethics body of knowledge as stated by the Institute for Public Relations Measurement Commission: intellectual honesty, fairness, dignity, disclosure, and respect for all involved.

Finally, they looked for inclusion of 18 core values as identified by the Institute for Public Relations’ Measurement Commission (2012). These are specific values that the ethical researcher should possess (see Table 2.1).

The inclusion of individual core values across the 14 associations ranged from 21 percent (valuing truth behind the numbers) to 86 percent (intellectual integrity). The mean inclusion rate across all 18 core values was 58 percent; thus, overall, core values appeared in barely over half of the associations’ statements. Additionally, 11 of the associations had other statements that were not analogous to the 18 core values identified here.

Table 2.1 Core ethical values*

Autonomy

Judgment

Respondent rights

Protection of proprietary data

Fairness

Public responsibility

Balance

Intellectual integrity

Duty

Good intention

Lack of bias

Reflexivity

Not using misleading data

Moral courage and objectivity

Full disclosure

Discretion

*Source: http://www.instituteforpr.org/research/commissions/measurement/ethics-statement/

As these results reveal, the ethics statement set out by the 2012 Institute for Public Relations Commission on Measurement provides a good starting point for a research ethics standard among public relations professionals:

The duty of professionals engaged in research, measurement, and evaluation for public relations is to advance the highest ethical standards and ideals for research. All research should abide by the principles of intellectual honesty, fairness, dignity, disclosure, and respect for all stakeholders involved, namely clients (both internal and external), colleagues, research participants, the public relations profession, and the researchers themselves. (Institute for Public Relations 2012)

This statement is based on core values that are highly deontological, or duty based, in nature. Furthermore, it applies research ethics standards to all involved in public relations: clients, colleagues, research participants, the profession in general, and individual researchers. Based on this evaluation, an ethical research standard was proposed:

Research should be autonomous and abide by the principles of universalizable and reversible duty to the truth, dignity and respect for all involved publics and stakeholders, and have a morally good will or intention to gather, analyze, interpret, and report data with veracity. (Bowen and Stacks 2013a)

Toward Standardization of Measurement

In turning to standards for measurement, Michaelson and Stacks argue that such standards must take into consideration three factors: the communication objectives set for the problem, the life cycle or stage of the effort, and the audiences—including intermediaries or third-party endorsers—for which strategically chosen channels and messages will be created (Michaelson and Stacks 2011).

A prerequisite to any standard measure is that we first confirm the reliability and validity of that measure. A reliable measure is one that measures consistently over time. A valid measure is one that actually measures what it is intended to measure. We can establish statistical reliability through accepted standard reliability formulas (Stacks 2017; Michaelson and Stacks 2010).2 Validity is established in several ways or forms, usually through face validity, content validity, construct validity, and criterion-related validity. However, validity is dependent on the measure’s reliability; that is, something can be reliable but not valid—a clock set earlier than the actual time may be reliable, but it is not a valid measure of time. Standard measures should always include information on their reliability and validity as an ethical and transparent part of reporting.
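To make the reliability requirement concrete, the following sketch computes Cronbach’s coefficient alpha—the statistic footnote 2 associates with continuous measures—for a hypothetical four-item attitude scale. The respondent data and the 0.70 rule of thumb are illustrative assumptions, not results from the studies summarized here.

```python
# A minimal sketch, assuming a hypothetical four-item scale scored 1-5 by eight respondents.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a respondents-by-items matrix of scale scores."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: rows are respondents, columns are items on a 1-5 agreement scale.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
    [1, 2, 1, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
])

# Values of roughly 0.70 or higher are commonly treated as acceptable reliability.
print(f"Coefficient alpha = {cronbach_alpha(responses):.2f}")
```

A comparable check, reported alongside validity evidence, would accompany any measure offered as a standard.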

We can identify four standard measures of interest to public relations (see Table 2.2). Of particular interest in evaluating nonfinancial data during a campaign is the outtake, which attempts to demonstrate what audiences have understood, heeded, or responded to in a communication product’s call to seek further information from public relations messages, prior to measuring an outcome (Stacks and Bowen 2013, 21). Outtakes often deal with the audience’s reaction to the message, including favorability toward, accurate recall of, and retention of that message. They also capture whether the audience is planning to respond to a call for information or action. The outtake is also examined when using intermediary measurement, where key messages are assessed for inclusion, tone, and accuracy.

Table 2.2 Standard measures

Output measures

Measurement of the number of communication products or services distributed or reaching a targeted audience, or both.

Outtake measures

Measurement of what audiences have understood or heeded or responded to a communication product’s call to seek further information from public relations messages prior to measuring an outcome; audience reaction to the receipt of a communication product, including favorability of the product, recall and retention of the message embedded in the product, and whether the audience heeded or responded to a call for information or action within the message.

Outcome measures

Quantifiable changes in awareness, knowledge, attitude, opinion, and behavior levels that occur as a result of a public relations program or campaign; an effect, consequence, or impact of a set or program of communication activities or products, and may be either short-term (immediate) or long-term.

Intermediary measures

Quantifiable measures of messages provided by third-party endorsers or advocates.

The outcome measures the quantifiable changes in awareness, knowledge, attitude, opinion, and behavioral intent that occur as a result of a public relations program or campaign (Stacks and Bowen 2013, 21). It is an effect, consequence, or impact of a set or program of actions, and may be either short-term (immediate) or long-term. Both standard measures—outtakes and outcomes—should be employed in assessing target audiences or targeted third-party endorsers.

Nonfinancial measures gather perceived or attitudinal data (Stacks and Bowen 2013, 19). Target audience measures fall into five standard types, reflecting how they meet the requirements of informational, motivational, and behavioral objectives (see Table 2.3). The first are measures of awareness and recall. The second are measures of brand, product, service, issue, or topic knowledge. Both focus on gathering objective knowledge (i.e., are stakeholders aware of the message, do they recall it accurately as intended, and how much do they actually know about the object of interest?). Interest and relationship measures focus on attitudes toward the object and its relationship to the respondent in terms of peers, family, community, and so forth. Preference measures focus on behavioral intentions toward the object—whether respondents intend to purchase or support it. And, finally, advocacy measures aim to gauge the likelihood that respondents will actively advocate for the object.

B.A.S.I.C. is a life-cycle approach to measuring communication objectives (see Figure 2.4). It argues that we need to take into account where on the life cycle the audience is—are they aware? If so, can we advance their knowledge, sustain the relevance of the outcome, initiate action, and create advocacy? As you might expect, these communication objectives clearly reflect informational (awareness), motivational (knowledge, relevance, action), and behavioral (advocacy) objectives.

These communications are aimed at target publics and more specific audiences, as defined by combined demographic, psychographic, lifestyle, and netgraphic3 characteristics. Intermediaries are also included and represent targeted outtake audiences—the media and third-party endorsers who often serve an intervening audience role. Finally, in today’s world social media must be evaluated as a key channel for delivering messages to relational peers, opinion leaders, and advocates through blogs, tweets, and YouTube® communications.

Table 2.3 Target audience measures

Awareness or recall

Thinking back to what you have just (read, observed, reviewed, or seen), place an X in the boxes if you remember (reading, observing, reviewing, or seeing) anything about any of the following (brands, products, services, issues, or topics).

Knowledge

Based on everything you have read, seen, or heard, how believable is the information you just saw about the (brand, product, service, issue, or topic)? By believable we mean that you are confident that what you are (seeing, reading, hearing, or observing) is truthful and credible.

Interest or relationship

Based on what you know of this brand, product, service, issue, or topic, how much interest do you have in it? How does this brand, product, service, issue, or topic relate to you, your friends, family, and community?

Preference or intent

Based on everything you have (seen, read, heard, or observed) about this (brand, product, service, issue, or topic), how likely are you to (purchase, try, or support) it? Would you say you are “very likely,” “somewhat likely,” “neither likely nor unlikely,” “somewhat unlikely,” or “very unlikely” to (purchase, try, or support) this (brand, product, service, issue, or topic)?

Advocacy

Statements such as:

I will recommend this (brand, product, service, issue, or topic) to my friends and relatives.

People like me can benefit from this (brand, product, service, issue, or topic).

I like to tell people about (brands, products, services, issues, or topics) that work well for me.

Word-of-mouth is the best way to learn about (brands, products, services, issues, or topics).

User reviews on websites are valuable sources of information about (brands, products, services, issues, or topics).

Intermediary measures (see Table 2.4) focus on what is contained in third-party or advocate messages. Do the messages contain the basic facts or key points? Are there misstatements or erroneous information? And is there an absence of basic facts, or an omission of some or all of the facts? The methodology typically employed is content analysis; a minimal coding sketch follows Table 2.4.

Figure 2.4 Communication objectives

Table 2.4 Intermediary measures

Three specific measures:

The presence of basic facts in the third-party or intermediary story or message.

The presence of misstatements or erroneous information.

The absence or omission of basic facts that should be included in a complete story.
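As an illustration of how these three intermediary measures might be operationalized, the sketch below codes a handful of hypothetical third-party stories for basic facts, misstatements, and omissions, and then checks intercoder agreement on one category with Cohen’s kappa, one of the content-analysis reliability statistics noted in footnote 2. The story codes, coder judgments, and category names are invented for illustration.

```python
# A minimal content-analysis sketch with hypothetical story codes and coder data.
from collections import Counter

# Each intermediary story is coded 1 (present) or 0 (absent) on the three measures.
stories = [
    {"basic_facts": 1, "misstatements": 0, "omissions": 0},
    {"basic_facts": 1, "misstatements": 1, "omissions": 0},
    {"basic_facts": 0, "misstatements": 0, "omissions": 1},
    {"basic_facts": 1, "misstatements": 0, "omissions": 1},
    {"basic_facts": 1, "misstatements": 0, "omissions": 0},
]

for measure in ("basic_facts", "misstatements", "omissions"):
    pct = 100 * sum(s[measure] for s in stories) / len(stories)
    print(f"{measure}: present in {pct:.0f}% of stories")

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders on one category, corrected for chance agreement."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(coder_a) | set(coder_b))
    return (observed - expected) / (1 - expected)

coder_a = [1, 1, 0, 1, 1]   # first coder's "basic facts present" judgments
coder_b = [1, 1, 0, 0, 1]   # second coder's judgments for the same stories
print(f"Cohen's kappa for basic facts = {cohens_kappa(coder_a, coder_b):.2f}")
```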

Presenting Standardized Measurement

While these standards are the basis for valid and reliable measurement, there is also the additional need to collect and present this data so that the measured results provide meaningful insights that positively impact campaign and program performance.

In a 2016 presentation on strategic standardization in public relations, Stacks points out that standardization goes beyond the data and also includes the presentation of that data (Stacks 2016). In that presentation, he identifies seven data and analytical standards that need to be incorporated in communication measurement:

  • Collecting data across the entire campaign timeline

  • Describing and interpreting the data

  • Using descriptive statistics reported as percentages, proportions, means, medians, standard deviations, and other generally accepted forms of data presentation

  • Applying inferential statistics to interpret descriptive data for changes over time and to detect trends

  • Establishing baselines against which to evaluate campaign success

  • Creating datasets that are clear and provide the information needed to make decisions

  • Testing differences in the data with a high level of statistical confidence (the 95 percent confidence level) so the insights are reliable (see the sketch after this list).
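As a minimal illustration of the descriptive, inferential, and confidence-level standards above, the sketch below reports awareness as percentages and applies a two-proportion z-test at the 95 percent confidence level to judge whether the change from baseline to a midpoint benchmark exceeds sampling error. The survey numbers are hypothetical, and the z-test is one common choice rather than a prescribed standard.

```python
# A minimal sketch with hypothetical baseline and midpoint awareness surveys.
from math import sqrt
from statistics import NormalDist

baseline_aware, baseline_n = 180, 600   # respondents aware at the baseline survey
midpoint_aware, midpoint_n = 240, 600   # respondents aware at the midpoint survey

p1, p2 = baseline_aware / baseline_n, midpoint_aware / midpoint_n
print(f"Awareness: baseline {p1:.0%}, midpoint {p2:.0%}")   # descriptive statistics

# Pooled two-proportion z-test: is the change larger than sampling error alone?
pooled = (baseline_aware + midpoint_aware) / (baseline_n + midpoint_n)
se = sqrt(pooled * (1 - pooled) * (1 / baseline_n + 1 / midpoint_n))
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, p = {p_value:.4f}")
print("Change is statistically reliable at the 95 percent level" if p_value < 0.05
      else "Change is within sampling error; revisit the strategy")
```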

As noted in Chapter 1, collecting data across the entire campaign timeline or “end-to-end measurement” is essential in order to identify at which stage a communication program is meeting or failing to meet its goals. Only through this type of measurement can a public relations professional effectively manage a program or campaign.

Our 2011 article provides more detail and numerous examples of how to create standard public relations measures. These details are provided and discussed in Chapter 4. Before we turn to the third area where the profession is moving toward standards, we need to underline that if you have not measured you cannot evaluate; if you measure and that measurement is not reliable or valid, then your evaluation will not be reliable or valid.

Toward a Standard Model of Program Excellence

Over the past two decades, a significant literature has developed that examines those factors most influential in creating effective public relations. The most prominent publications in this literature are the research on excellence in the practice of public relations authored by James Grunig, Larissa Grunig, and David Dozier (Grunig, Grunig, and Dozier 2002; Dozier, Grunig, and Grunig 1995; Grunig and Grunig 2006). That research broke new ground in reliably identifying factors that allow public relations professionals to increase their effectiveness in meeting communication goals and objectives for their organizations.

While the work that defined excellence in public relations has been significant and influential on the practice, unintended gaps exist. These gaps limit the overall utility of the work in assisting public relations professionals in achieving overall excellence in practice. The most noteworthy gap is the lack of a specific definition of what determines excellence in the actual outputs, outtakes, and outcomes produced by public relations professionals—specifically, in public relations programs, campaigns, and activities. This is not intended to diminish the importance of the work by Grunig, Grunig, and Dozier. Rather, the intent is to build on that research and to create a unified theory of what constitutes the full scope of excellence in the profession (Michaelson, Wright, and Stacks 2012).

So, how can the public relations function or agency establish the actual impact of a campaign? It does so by:

  • First, following the established standards of measurement and research as stated earlier and in later chapters in this volume (Michaelson and Stacks 2011)

  • Second, defining excellence

  • And, third, effectively evaluating excellence.

The questions then become (1) what is excellence; (2) how can it be evaluated; and (3) what standards should the public relations profession establish to create a metric for program or campaign excellence?

The quest for excellence began over 20 years ago when Grunig and colleagues first examined what they felt were companies that practiced “excellence in communication.” Based on a survey of corporate communication practices across industries and international boundaries, they reported that companies practiced excellence in communication if seven factors were present (see Table 2.5).

Table 2.5 The concept of communication excellence

The excellence in public relations project

The report said organizations practiced excellence in communication if:

  • Senior management team was committed to communication excellence

  • Chief communication officer reported directly to the CEO

  • Company was committed to tell the truth and prove it with action

  • PR and communication was more preventive than reactive

  • PR efforts began with research, followed by strategic planning, followed by the communication (or action) stage and always included an evaluation of communication effectiveness

  • Company was committed to conducting communication research that focused upon outcomes and not just outputs

  • Company was committed to education, training, and development of its public relations and communication professionals

Source: Grunig 1992; Grunig, Grunig, and Dozier 2002.

There are, as listed in Table 2.6, other criteria for defining companies that are excellent communicators. In identifying them, we followed the lines advanced earlier regarding standards of research ethics and measurement.

It is our contention that companies that demonstrate excellence in communication have learned three things over the phases of public relations campaigns or activities (see Figure 2.5). First, they understand that in programming a campaign during the developmental phase, the issue is first examined from its location on the communication life cycle. They set the public relations function’s goals and objectives parallel to the business’s goals and objectives and establish a baseline against which to evaluate the plan over time. And they create three sets of measurable objectives with targeted benchmarks.

Second, during the refinement phase they actively measure objectives quantitatively relative to expected benchmarks and phases within that campaign. They then alter or change tactics based on these benchmarks, and continually scan the environment for unexpected events or actions.

Table 2.6 Other factors defining communication excellence

Judging criteria of major public relations awards

Secondary research, such as the generally accepted practices (GAP) surveys of the Annenberg series

Examinations of what various organizations are doing in terms of:

  • Setting objectives

  • Research and planning

  • Identifying target audiences

  • Evaluating communications excellence

  • Establishing ROI measures for public relations and communication efforts

  • Developing some general understanding about the contributions of public relations and communication to the business bottom line

Figure 2.5 The research continuum

Source: Stacks (2011), Copyright 2011 by The Guilford Press. All rights reserved.

Finally, they correlate nonfinancial outcomes (i.e., behavioral intentions) with business functions’ financial outcomes as a measurable return on investment (ROI) for return on expectations (ROE) planning (these concepts are covered in detail in the next chapter).
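The sketch below illustrates that correlation step in the simplest possible way: tracking a nonfinancial outcome (purchase intent) against a financial outcome supplied by another business function (unit sales) across measurement waves. The figures are hypothetical, and a Pearson correlation is only a starting point for the ROI and ROE discussion in Chapter 3.

```python
# A minimal sketch with hypothetical wave-by-wave tracking data.
import numpy as np

# One value per measurement wave across the campaign timeline.
purchase_intent = np.array([22, 25, 31, 34, 38, 41])        # percent "likely to purchase"
unit_sales      = np.array([1.8, 1.9, 2.3, 2.4, 2.7, 2.9])  # units sold, in thousands

r = np.corrcoef(purchase_intent, unit_sales)[0, 1]
print(f"Pearson r between purchase intent and unit sales = {r:.2f}")
# A strong positive correlation supports, but does not by itself prove, the campaign's
# contribution to the financial result; timing and other drivers still need to be ruled out.
```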

Excellence, then, can be defined and evaluated as the public relations function adopting standards of research that focus on the relationship between the company’s and the public relations’ goals and the establishing of mutually supportive measurable objectives with targeted benchmarks that are measured and evaluated against baseline and expected nonfinancial outcomes, and correlating those findings with the data obtained by other functions’ financial outcomes so that they may be entered into whatever company decision-making strategies are employed (e.g., Six Sigma, Balanced Scorecard). (Michaelson, Wright, and Stacks 2012)

The Excellence Pyramid

Once excellence has been defined in a potentially measurable way, it must be evaluated. Evaluation, we argue, is best conducted against a model that sets the standards for effective, very effective, and excellent activities.

An effective campaign meets basic needs and is considered successful, but it does not advance because it fails to measure up to the requirements of the next stage. If it can meet the standards of the next stage, that is, the intermediate stage, it is evaluated as very effective. And, if it reaches the third and final stage—advanced—it is evaluated as excellent. The model, then, takes the form of a three-level pyramid (see Figure 2.6), similar to Maslow’s Hierarchy of Needs and Herzberg’s Two-Factor Hygienic Model of Motivation (Maslow 1970; Herzberg 1966; Herzberg 1968; Herzberg, Mausner, and Snydermann 1959). We turn now to an examination of what constitutes each level.

Figure 2.6 The Excellence Pyramid

Level 1: Basic

As shown in Figure 2.6, the model suggested for campaign programming excellence begins at the basic level. It includes the five components discussed earlier when laying out the argument for measurement standards whereby the campaign can demonstrate that it:

  1. Set public relations goals and objectives relevant to the business goals and objectives.

  2. Conducted research and planned the campaign based on that research.

  3. Produced outputs that effectively reached target audiences.

  4. Measured outtakes to determine that the campaign was on phase and on target.

  5. And, produced nonfinancial outcomes (results) that could be correlated with other business functions’ financial outcomes.

Each component has criteria that must be met to ensure that a campaign will do what it is supposed to do. It is a binary outcome: either the component was carried out, and carried out correctly, or it was neglected or carried out incorrectly. Furthermore, the components are addressed sequentially, from left to right. Each is essential to the next to ensure campaign success and demonstrate an impact on business goals and objectives. Finally, these components are objective; they are evaluated as to whether they have met a particular standard and are either present or not present in a campaign. If the standard components are found in each of the five categories, the programming can be evaluated as demonstrating basic effectiveness—that is, it was successful but did not really advance the company, brand, or whatever goal was being attempted. To advance requires that the campaign meet three criteria at the intermediate level.

Level 2: Intermediate

Once the essential components have been satisfied, the company or agency may move to evaluating the programming at the second, or intermediate, level. Unlike the basic level, which is objectively measured, the intermediate level is more subjective and must be evaluated on some scalar measure, one that includes a midpoint for uncertainty (Stacks 2017; Stacks and Michaelson 2010); a minimal scoring sketch follows this discussion. As shown, the second level consists of three factors:

  1. Deep connections to target audiences: Planning that achieves an intermediate level of excellence builds a bond or relationship between the campaign and audiences through motivational objective messaging strategies.

  2. Global leadership support and engagement: Planning that achieves an intermediate level of excellence is supported by senior management, is aligned across the company’s, product’s, or brand’s environment, and has internal support at the highest level. This puts the communication function, as Grunig et al. argue, at the “management table.”

  3. Creativity and innovation: These are demonstrated when a unique approach to the problem, product, brand, or issue is taken. Since the communication function is responsible for messaging strategy across the board, it makes sense that programming at the intermediate level of excellence will be original in approach, inventive in distribution through the best communication channels, and innovative and efficient in its execution. Creativity sets campaign planning apart from competing campaigns and often results in further enhancing the communication function’s credibility with senior management.

All three factors come into play when there is buy-in from the corporate management team that yields supportive commentary and criticism on potential tactical outputs. Furthermore, at this level, overall campaign planning clearly involves business or corporate strategy, with the communication function fully integrated into the larger campaign.
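The scoring sketch below captures the logic of the first two levels—binary basic components and scalar intermediate factors with an uncertainty midpoint. The component labels, the 1-to-5 scale, and the cutoffs are hypothetical illustrations, not the instrument proposed by Michaelson, Wright, and Stacks.

```python
# A minimal sketch with hypothetical rubric labels and cutoffs.
BASIC_COMPONENTS = [
    "goals_aligned_with_business",
    "research_based_planning",
    "outputs_reached_target_audiences",
    "outtakes_measured_against_benchmarks",
    "nonfinancial_outcomes_correlated_with_financial",
]
INTERMEDIATE_FACTORS = ["deep_audience_connection", "leadership_support", "creativity_innovation"]

def evaluate(basic: dict[str, bool], intermediate: dict[str, int]) -> str:
    """Basic components are present/absent; intermediate factors use a 1-5 scale (3 = uncertain)."""
    if not all(basic.get(c, False) for c in BASIC_COMPONENTS):
        return "not yet effective"          # a missing basic component stops the evaluation
    if all(intermediate.get(f, 3) > 3 for f in INTERMEDIATE_FACTORS):
        return "very effective"             # every intermediate factor rated above the midpoint
    return "effective"                      # basic level met only

campaign = {c: True for c in BASIC_COMPONENTS}
ratings = {"deep_audience_connection": 4, "leadership_support": 5, "creativity_innovation": 3}
print(evaluate(campaign, ratings))          # -> "effective" (creativity sits at the midpoint)
```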

Level 3: Advanced

Indications of excellence are found at the highest, advanced level of campaign outcome. Here, public relations sets the agenda for target audiences on key messages. That agenda should be extended to a larger environment through advocacy or word-of-mouth and other diagonal, grapevine forms of message transmission—blogs, tweets, Facebook mentions, and so forth. This extension is critical in establishing a two-way symmetrical dialogue between the company and its target audience(s) in a strategic long-term plan linked to company, product, brand, or issue goals. Advanced planning also demonstrates leadership not only in internal planning but also in its impact on the profession. It becomes the standard benchmark against which others establish levels of excellence—that is, it is timeless in strategy, tactics, and demonstrable measured results that clearly show a connection to overall business goals and objectives.

Summary and Conclusion

Based on our discussion of ethical, measurement, and outcome standards, we believe that we can come to several conclusions.

  1. Public relations has arrived at a stage of professionalism that demands certain standards of research.

  2. Standards for ethical research, including principles and core values, can be identified, taught, and evaluated, thus increasing our professionalism.

  3. Measurement standards are necessary not only for comparative evaluation against competitors but also to produce results that can be factored into a company’s decision-making strategies, thereby increasing public relations’ impact on business strategy.

  4. Excellent research and measurement must take an end-to-end approach to prove its effectiveness and usefulness to the client.

  5. Public relations excellence can be defined and evaluated.

  6. Several standardizing models and tests have been identified and proposed that should be the focus of continuing research.

Finally, we conclude with the notion that, although young, our profession has extended its reach across time, distance, and culture. Continued discussion of standards and best practices of research ethics, measurement and evaluation, and planning excellence can only enhance and strengthen our profession.

Chapter 3 builds on these research standards and turns our focus to how research helps the public relations professional make a case that he or she has contributed to a client’s or company’s success.

1 Two professional association websites had member-only access. Materials from those websites were gathered with the help of members, and we owe them our sincere thanks for their assistance and support.

2 For continuous measures the coefficient alpha is used; for categorical measures the KR-20 is used. In content analysis, a number of reliability statistics are available—Scott’s pi index, Holsti’s coefficient, and Cohen’s kappa.

3 A netgraphic identifies how an audience approaches and uses social media.
