3


Positioning Your Station against the Competition

 

 

 

Hearing Your Station the Way Listeners Do

Unless your station is the only one your audience can receive, you do not program in a vacuum. If there is competition, your job is to differentiate your station—in a positive way—from the others. This does not mean imitating the leading station; you simply can’t beat a leader by copying what it is doing.

As a program director, you have to get a little schizophrenic at this point. Naturally, you cannot compete with a station you haven’t listened to, and yet you must not let yourself get obsessed with your competitors. If you do, you’ll end up reacting to them, and that’s suicide. Your job is to focus on your own station and let the competition react to you.

The starting point, when confronting competition, is to get a realistic concept of how your listeners perceive your station. If you’ve been at the station for any period of time or if you think only in the radio industry’s terms, you’ll find this hard to do. I know this from experience. In my first programming assignment in the late sixties, I was program director at my hometown station—the one I’d grown up with and had a lot of affection for. Only by this time, it was getting badly beaten by a newcomer to the market that was copying the then-successful Drake formula for Top 40: a jingle between every record and an airstaff that kept its remarks brief. That station’s music wasn’t well targeted in my opinion.

Yet, in just a couple of years, it had become dominant, and my station had become an also-ran. Worse, even though my formerly overcommercialized station was now running relatively few spots and the “plays more music” competition had become prosperous and carried a full load of commercials, my research showed that listeners still felt that the competing station played fewer commercials than we did. If you asked the person on the street why they didn’t listen to my station, all too often they would respond that it had too many commercials.

That was when I started to learn the lesson that perception is reality. You can’t argue people into changing their minds about their perceptions; you can only seek to change their perceptions to match what you believe to be reality. Back then, though, I was still handcuffed by my own perception that my station was clearly the better one in every way. How can you fix a problem that you don’t understand?

For me, the solution was to take a day off, get in my car, and drive to a secluded canyon. It was a part of the market I’d never been to before, a complete change of scene. I parked and listened alternately to both stations. The turning point for me came that day. After listening to both stations for hours, I suddenly realized that even though all my station’s ingredients were better, they were not as well presented. The other station actually sounded better, and many of the things I hadn’t liked about them were, in fact, a part of the reason they were succeeding.

Even now, many years later, I still believe that I was right in thinking that my station was already playing the right hit music for that market and that the other one was not, but there were too many records on my station’s playlist, and some weren’t played often enough to catch the listener’s attention. The biggest hits weren’t coming around often enough, and some of the lesser hits were rotating too often.

In addition, there was the matter of station image. The other station’s jingle-between-every-record was very repetitive, but it sure created a very strong audience perception of the station’s image. (The structure of the hour thus was broken down to the smallest possible unit—just one record—before “repeating.”) My station had jingles but no consistent pattern of usage, and about the only really dependable structural elements on my station were the hourly station identification jingles and the news at forty minutes past the hour.

The airstaff on my station was more entertaining and had more ability than the competition, but they lacked a strong station structure within which to work. My personalities were sloppy at times, and they sometimes ad-libbed without having a point. In addition, they tended to have less energy and often displayed little sense of purpose or direction on the air.

Particularly troublesome to me was the incorrect perception of listeners about each station’s commercial loads. That demonstrated for me the most important point about programming radio: What people expect of a station is what motivates them to tune in and listen longer and more often, and that’s based on their past experience with the station.

Remember the story I told in Chapter 1 about the manager of the self-serve laundry? Here, too, we see that what a station is actually doing at the moment listeners tune in matters very little in meeting or changing perceptions about the station; it’s what they expect to happen next that influences their listening behavior. My station, in the listeners’ past experience, had run too many commercials (especially for the same few advertisers, repeated too often). The fact that the station now seemed to have few commercials whenever they happened to tune in did not change their expectation that they’d hear plenty of them the next time they tuned in. So, they stayed away.

On the other hand, the competing station “sounded” the same as when they started in the format a couple of years earlier, with jingles before each record (containing the increasingly inaccurate “plays more music” line). Listeners’ past experience with the station had been that few commercials were broadcast, and even though the station now often ran at least eighteen minutes of commercials per hour, listeners still expected fewer commercials the next time they listened, so they kept tuning back in for “more music.”

My challenge in this situation was to build a strong hourly structure for my station to serve as the “package” for my programming “product.” I needed to reestablish my station’s “brand” and clearly distinguish my station from the other one.

The solution I chose shows a bit about how a station’s hourly structure and the way it’s executed can change audience perceptions and create fresh expectations, “erasing” the listeners’ old experiences with the station. The key, I decided, was to change audience perceptions about the quantity of commercials on each station. That’s not because commercials themselves are necessarily objectionable. In fact, I regard it as really dumb to persuade the listener that commercials are undesirable with lines like “WXXX plays fewer commercials” or “KXXX has commercial-free hours.” I object to that because once listeners understand that the station thinks commercials are bad, they will naturally think negatively about the station whenever it runs a commercial.

Many listeners do not understand why stations run commercials, and some even believe that radio stations are financed by the government. Even those who do understand that the spots are necessary to support the station nonetheless will think of commercials as undesirable when the station uses liners that reinforce that idea, and naturally, they will then react negatively to an advertiser’s message every time they hear one on that station. This reduces the effectiveness of the commercials on the station, undercuts the salespeople, and cripples the station as a business. It’s a very poor programming strategy.

Actually, I have a surprise for you. If commercials are relevant to the interests and needs of the listeners and to their culture, commercials can be positive elements. In fairness to advertisers and to the station’s own image, commercials should be presented as interesting information, which they often are.

So, although I felt that the incorrect listener perception about my station’s commercial load was the key to my strategy in this particular programming situation, I did not want to cause negative feelings about commercials themselves. The easy and conventional strategy would have been to use promos and liners to dramatize the lack of commercials on my station, but that would have hurt the station as a sales medium, and so I never considered that.

 

Consistency Beats Inconsistency

My analysis of the competing station’s strength showed that at night and on Sundays and Mondays, when the station’s spot load was low, the station was able to play many records in a row (with jingles between), fully meeting established listener expectations. When they had a full commercial load, however, they “stopped down” after each record (that is, stopped the music or programming) and ran commercials back to back (or double-spotted) each time before jingling back into music. The unpredictability of how many records were played between spot breaks had helped the station maintain its “more music” image, but I saw that this inconsistency could eventually lead listeners to expect commercials between every record.

If I succeeded, my station would have to be capable of running a fairly heavy spot load eventually, so I wanted to find a way to project a consistent image of playing a lot of music while accommodating a varying spot load. I was prepared to concede to the competitor their strength—while attacking them at their weak point.

I borrowed my solution from the Beautiful Music stations of the day, and we became the first Top 40 station I know of to adopt fixed “stopdown” points regardless of the spot load. That is, I set the commercial stopdowns at the :10, :20, :30, :40 (within the news), :50, and :00 points of the hour, with a jingle out of each spot break into the music. Rather than claiming “more music”—which not only would have been copying the other guys, but would not have been believed by the audience—I used the indefinite slogan “music power” as the station concept phrase and included it in all of the jingles. This was an affirmative but vague statement about the music on the station, which listeners would have to define for themselves over time.

The format, as I designed it, called for these mandatory fixed spot break positions with at least two records in a row, plus as many more as would fit within the seven or so minutes between each fixed spot break. I instructed the airstaff to stop down for something in these breaks in every hour. If no commercials were scheduled, they were to run a public service announcement (PSA) or a station promo.

My secret weapon in this programming strategy was the consistency of the stop points. Listeners would learn over time that when we played that “music power” jingle, they would always hear at least two records in a row before the next spot break. Listeners would even learn subconsciously where the spot breaks were on my station, and I wanted them to. This unconventional idea was based on the principle that if listeners know when the spots will be run, then they also know when the music will be played. If listeners knew that the spots on my station ran at :10 and :20, for example, then they also knew where the music was in between. That clear understanding led them to stay tuned through the spot breaks, even though each might contain up to four units of commercials or three minutes of spots, whichever was shorter.
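
To make the structure concrete, here is a rough sketch in Python of the hour just described. It is purely illustrative and not taken from any format document; only the rules come from the text above, while the record names, spot lengths, and the durations assigned to the PSA and the jingle are invented.

```python
# A minimal sketch (not from the book) of the fixed-stopdown hour described above.
# Record names and spot lengths are invented; only the rules come from the text:
#   - stopdowns at :10, :20, :30, :40, :50, and :00 in every hour
#   - each break holds at most 4 commercial units or 3 minutes, whichever is shorter
#   - a jingle leads out of every break, and at least two records play in a row
#   - a break is never skipped: with no spots scheduled, it carries a PSA or promo

FIXED_STOPDOWNS = [10, 20, 30, 40, 50, 0]   # minutes past the hour
MAX_UNITS_PER_BREAK = 4
MAX_BREAK_SECONDS = 180
MIN_RECORDS_BETWEEN_BREAKS = 2

def build_break(spots):
    """Trim a list of (name, seconds) commercial units to fit one stopdown."""
    chosen, total = [], 0
    for name, seconds in spots:
        if len(chosen) >= MAX_UNITS_PER_BREAK or total + seconds > MAX_BREAK_SECONDS:
            break
        chosen.append((name, seconds))
        total += seconds
    if not chosen:                                  # never omit the stopdown
        chosen = [("PSA or station promo", 30)]
    return chosen + [("'music power' jingle", 5)]   # jingle out of every break

def validate_hour(music_sweeps):
    """Require at least two records between consecutive stopdowns."""
    return all(len(sweep) >= MIN_RECORDS_BETWEEN_BREAKS for sweep in music_sweeps)

# A light-commercial hour still stops down at every fixed point.
for minute in FIXED_STOPDOWNS:
    print(f":{minute:02d} ->", build_break([("Local spot", 60)]))
print("two-in-a-row rule met:", validate_hour([["Record A", "Record B"]] * 6))
```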

The strategy worked. With nothing more than this “different” station structure to distinguish my station from the others, we won back our own community in the ratings in the first year. In the second year, we beat the competition in the two-city combined metro (the competition was licensed to the other city), and in the third year, we beat them in the whole 100-mile-long market by several points—even though the competitor covered all of it, whereas my station was only able to reach 70 percent of the market population.

Every programming situation is different, but the principle needed to win is the same in every competitive situation: Start with the listener’s perceptions and expectations of your station and of the competition. Where are competing stations strong and weak? Where are you strong and weak? How can you exploit their weaknesses to your own advantage—without overtly reacting to them?

In the end, both my station and my competitor wound up playing about the same amount of music (though still slightly different songs) and ran about the same number of commercials. However, the two stations sounded different, which meant that each developed its own identity in the listener’s mind. If your listeners don’t have a clear idea of your station’s identity, you’ll certainly have a hard time making them loyal and frequent listeners.

Let’s review what happened in this example. The strategy was making the other station’s weakness—the inconsistency of how many records they’d play between commercials, which varied drastically with the commercial load—my station’s strength. Notice that, in return, their strength (they had many breaks because they never had more than two commercials in each spot break) was my weakness (the number of spots on my station would vary in the spot breaks, but the number of spot breaks was the same, in the same places in all hours, and we never omitted spot breaks when we had no commercials to run).

That, in theory, meant that the two stations statistically were equal. The reason we beat them in what should have been a standoff was that our format structure seemed not only clearly different from theirs but also newer than theirs. Novelty gave us the edge. As often happens in situations like this, when a programmer wins with an unconventional strategy, the competition completed our victory by reacting to us—in part, by demolishing the strong image and hourly structure they’d established, by cutting back drastically on their jingle use, and by trashing their hit music image by adding a number of nonhit album rock tracks. Half of winning is making the other guy lose, as I mentioned earlier, and this is fairly easy in radio when you win with something that is “out of fashion” in the industry.

You’ll always find your success by starting with, and then molding, the audience’s perceptions and expectations of your station and your competition. However, you can only do it through freshness and relentless consistency of presentational style, not through name-calling or the use of logic. Never argue with your listeners.

Radio is a medium “consumed” subconsciously and emotionally; the listener’s literal, logical mind is somewhere else while he or she listens. With the conscious mind occupied, radio is soaked up “subconsciously” and almost subliminally. As a soundtrack to the listener’s life, radio is perceived through its pattern of presentation, which is where the station’s packaging, its hourly structure, comes in. Make that pattern clear, positive, distinctive, and well defined, and you’re usually ahead of your competition right from the start.

 

The Role of Research

Now, let’s spend some time on how you can identify the listener perceptions of your station and the competition. The key is audience research. Audience research can take many forms. It can be cheap or very expensive, invaluable or actually misleading. The guidelines given in this section will help ensure that it works properly for you.

Research, to be effective, has to be objective, not biased toward any particular point of view or expected outcome. It has to tell you what the consumers—the listeners—think, not just what the researcher thinks they think. In my experience, professional researchers have problems with interpretation. They rely on logic to explain their findings, even when logic has little to do with listener behavior.

Had a professional researcher been involved in helping devise a strategy for the station in my case study, he or she would have found that most of the potential audience thought that my station had too many commercials. The researcher would probably have recommended that the station cut back on the number of commercials played. However, the station was only playing four minutes of commercials an hour and was going broke, and we probably would have chucked the costly research report into the wastebasket in disgust.

That reaction would have been a mistake because the basic data were correct: Listeners did believe that our station had too many commercials. The fact that their perception varied so greatly from reality actually highlighted the real opportunity for us. When dealing with professional researchers, then, I suggest that you examine the data yourself and reconcile the findings into a pattern of listener perception. Take the professional researcher’s “logical” interpretation of the data with many grains of salt.

Research doesn’t have to be expensive, and it doesn’t even have to be done by a professional researcher. It can be informal, like the research in the case study presented earlier, which consisted of lots of (frustrating) listener conversations, followed up by good, hard, really objective listening to both my station and my competitor’s, putting aside for a while my own professional beliefs and prejudices.

 

Designing Your Own Study

Whether you decide to do your own research or elect to hire a firm (or a university marketing department) to do formal research for you, the three parts of the project are the same: (1) defining the goals and designing the questions, (2) obtaining objective information through some sort of interview process or behavior study, and (3) interpreting the results. Let’s address each of these parts in turn.

You’ll never get any useful research if you haven’t a clue what you want to find out! Start there. The goal of the research project is to answer specific questions about your station and others. What are those questions? Boil them down to the smallest possible number; a tediously long questionnaire will degrade your results, as those who agreed to cooperate tire of the time and effort it takes to participate.

If you aren’t sure how to focus your questions, have informal and unscientific conversations with listeners before starting to make the questionnaire. Try to spot recurring thoughts and perceptions about your station and others. (In a more formal setting, focus groups can perform this function.)

Once you’ve figured out what you want to learn more about, design the questions carefully. Keep in mind that what you want to investigate is listener behavior, not listener opinion. When you ask people to report or explain their own behavior, you are asking them to intellectualize something inherently emotional. They may do their best to be honest with you in their answers, but all too often you’ll wind up with what they think, rather than what they do.

For example, when you ask people what they like to watch on television, they report liking documentaries and quality drama. However, when you hook up a device to their TV set to record their actual viewing habits, you often find them watching lightweight comedies and undemanding game shows. The usual conclusion has been that people lie to researchers to elevate their status. Perhaps some do, but based on my own experiences in research, I find that most people really do try to tell you what they think is the truth.

If so, then, why does this disparity occur? When you ask viewers to think about their favorite TV shows, the ones they remember best are the exceptional programs, and those are the ones they report. However, when they come home at night, worn out from working, the last thing they want is to be challenged and enlightened. They’re exhausted, and they seek “mind candy” to help them relax, so they watch unchallenging and unenlightening shows. Behavior doesn’t match opinion, and the poor folks who choose a situation comedy over a documentary at the end of a busy day probably wouldn’t see their preference as an inconsistency. After all, we asked them about what shows they liked best, not the ones they would pick when they didn’t want to think after a hard day.

Incidentally, this phenomenon creates a real problem in the most common form of music research: playing fragments, or “hooks,” of songs for people over a phone line or—worse yet—in an auditorium. In these situations, the participant has to recognize each song hook, try to recall the whole recording it comes from, and then figure out what he or she thinks of it. After some thought, the listener honestly reports an opinion, instead of behavior. Worse yet, when tested in an auditorium setting, each participant can be influenced by a neighbor’s body language or murmured comments.

In fact, I’ve found that one sign of “intellectualized”—and thus flawed—music test findings done on audiences over the age of twenty-five is the reporting of burnout: the active rejection of overly familiar songs. When they listen to the radio, I find that the mainstream adult listener is not likely to tune away from any song they know and like just because they hear it a lot. Song burnout, as a programming tool, seems to be pretty much a fiction created by intellectualized responses.

To repeat, when designing the questions in an audience research project, always focus on listener behavior rather than opinions. That said, it is not a bad idea to add a few opinion-eliciting questions on the key points you’re exploring. Opinions can be useful in interpreting behavior, even though they don’t necessarily correlate to actual behavior at all. Use opinion research to cross-check with the behavioral responses and to help you find revealing inconsistencies and paradoxes, such as the “too many commercials” opinion in my case study.

A behavior-oriented question might be, “What radio station do you most often turn to in the morning?” A corresponding opinion-eliciting question might be, “Which radio station do you think has the best morning show in the area?” Another behavioral question: “When you switch away from that station, what sort of thing are you looking for, and on what stations do you usually find it?” An opinion-oriented question: “What do you think of radio station KXXX? What do you think of WXXX?”
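
For illustration only, the pairing might be organized along these lines in Python, so that mismatches between reported behavior and stated opinion surface automatically in the tabulation. The question wording is adapted from the examples above; the field names, call letters, and sample responses are invented.

```python
# A hypothetical pairing of behavioral and opinion questions (not from the book),
# set up so perception-versus-behavior disparities show up in the tabulation.
# Question wording is adapted from the examples above; responses are invented.

QUESTIONS = [
    {"id": "morning_listen", "type": "behavior",
     "text": "What radio station do you most often turn to in the morning?"},
    {"id": "morning_best", "type": "opinion",
     "text": "Which radio station do you think has the best morning show in the area?"},
]

def flag_disparities(responses):
    """Return respondents whose stated opinion doesn't match their reported behavior."""
    return [r for r in responses if r["morning_listen"] != r["morning_best"]]

# Invented sample: the first listener uses KXXX in the morning but praises WXXX.
sample = [
    {"morning_listen": "KXXX", "morning_best": "WXXX"},
    {"morning_listen": "WXXX", "morning_best": "WXXX"},
]
print(flag_disparities(sample))   # -> [{'morning_listen': 'KXXX', 'morning_best': 'WXXX'}]
```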

When you see a disparity between listener perception and reality, analyze it for its implications. Similar research for a station in Los Angeles once allowed me to discover that although the staff thought they worked for a music station, the music was so irrelevant to its listeners that the audience thought of the station as a talk station. The music was perceived simply as filler. This led to the programming conclusion that the music could be redirected toward the younger audience the station wanted to get without losing a single one of its current older listeners, and that’s the way it turned out.

Once you have defined the goals of your research and have designed the questions, the actual study takes place. Generally, if it’s an interview-based study, it’s done from a script to make sure that the wording stays the same. The wording and question sequence will have an effect on the answers obtained, so they must remain constant throughout the survey for consistent results.

Most people outside radio have a hard time reading a script believably as if it were spontaneous. For that reason, some stations prefer to avoid expensive research companies and do their own perceptual research—not so much to save money as to get the best compliance and quality control. Refusal rates for telephone studies have been increasing steadily in recent years because of the amount of telemarketing being done, and an interviewer who is obviously reading a script will get more refusals than an interviewer who sounds like a courteous and interested human being. The more refusals, the greater the error factor as you get farther and farther from a truly random sample. This sort of probability study demands a randomly selected cross section of the population for statistical accuracy.

The telephone is the most convenient way to conduct interviews, but you’ll get a lot of refusals. In addition, people who don’t have phones will of course be excluded from the study, skewing the results somewhat. That’s because those who don’t have phones tend to differ in various ways from those who do, and that could include tastes in radio and music. However, all of the major radio rating services now in business fail to reach those nonphone homes too, and the advantages of telephone-based surveying usually outweigh the disadvantages.

One disadvantage that can be overcome concerns unlisted phone homes. Studies indicate that people who have intentionally unlisted phone numbers differ in various psychological ways from those with listed numbers, and the radio survey companies do try to include these unlisted homes in their universe of surveying. This group should be included in your survey, too. The easiest way to do this is the method that the now-defunct Birch survey once used, and that today’s successful “second rating service,” Willhight Research of Seattle, Washington, still uses: Begin with a random selection of listed numbers for your starting sample, but don’t call any of those numbers! Here’s what you do. Using the phone book, select the phone number on each page that is a predetermined number of lines below the top of the page in a particular column on the page. Then, change the last numeral downward by a fixed number. For example, change 555-1234 to 555-1232, and change 555-3341 to 555-3339.

By making this systematic adjustment, you eliminate the bias toward listed numbers that your original sample created. Of course, you’ll also reach disconnected numbers this way, which is the price you pay for randomization. Radio rating companies eliminate from their surveying all businesses and “group quarters” (dorms and barracks). Due to the unfavorable telephone interview climate of these busy locations, you may want to skip them, too, if and when you reach them.
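
For illustration, the adjustment step might look like this in Python. Subtracting a fixed amount from the listed number reproduces the examples above (an offset of 2); the seed numbers are invented, and in a real study they would come from the predetermined positions in the directory, never to be dialed themselves.

```python
# A sketch (not from the book) of the number-adjustment step described above.
# Subtracting 2 reproduces the examples given: 555-1234 -> 555-1232 and
# 555-3341 -> 555-3339. The seed numbers below are invented.

OFFSET = 2

def adjust(listed_number: str) -> str:
    """Shift a listed number downward so the dialed number is no longer
    tied to the phone-book listing (reaching some unlisted homes)."""
    digits = listed_number.replace("-", "")
    shifted = str(int(digits) - OFFSET).zfill(len(digits))
    return f"{shifted[:3]}-{shifted[3:]}"

# Listed numbers drawn from fixed positions in the directory; none of the
# originals is ever called, only the adjusted numbers.
seed_sample = ["555-1234", "555-3341"]
print([adjust(n) for n in seed_sample])   # -> ['555-1232', '555-3339']
```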

In-person surveying is more work than using the phone, but it generally yields fewer refusals and greater cooperation. To really do it right, you’d want to adopt an approach similar to the one once used by the Pulse rating service: in-home interviews. This type of interview avoids the biases that arise from interviewing at specific lifestyle locations, such as commercial malls. The technique involves interviewing on residential blocks selected from addresses drawn at random from the phone directory. All of the homes on the block are included except the one in the phone listing you selected. This one step eliminates both the unlisted-home and non-phone-home biases.

With a good interviewer (Pulse drew from the same pool of people as the Census Bureau does), the refusal rate from potential interview subjects should be less than 5 percent. It’s no surprise that, using this method, Pulse produced the most accurate radio ratings of any survey company ever. It was expensive to do, though, and the lack of radio station support led to the company’s demise in the late seventies. However, this can be a relatively inexpensive technique if you use your own staff or students from a nearby college statistics or marketing class to do the interviewing.

One other form of listener research should be mentioned: focus groups. Focus groups should be used only as “thought starters”—to identify possible listener mind-sets for subsequent research. Not only is a focus group far too small to have any statistical validity, but it’s not random either. Those impaneled on focus groups are usually drawn from lists of people who want to participate in such groups—not only to earn money, but to express their opinions.

The greatest hazard of a focus group is the risk that a station executive sitting behind the one-way glass will hear an opinion expressed by the group that matches his or her own, will see this as confirmation of that opinion, and will immediately make a bad decision about the station as a result. That’s just human nature. It happens often—and not just with general managers either. It can happen to you! Treat anything you hear from a standard five- to fifteen-person focus group with a lot of skepticism until you are able to verify it with reliable research.

That is not to say that you must do research in the conventional manner. As you have figured out by now, I propose that the best program directors look for unconventional ways to reach their goals, and that includes audience research. The best way to approach any radio problem—and probably any problem you’ll ever encounter—is to identify your goal first and then work backward to find a workable, reliable method to reach it.

For example, I have had great success in music research using a “reverse focus group” method that I’ve trademarked as ReFocus™. This approach does away with the need to maintain statistical validity, yet still allows reliable and accurate results to be generated inexpensively from a small group. The key in this case is impaneling only people who are “prequalified”—that is, firmly established, using definite criteria, as being right at the core of the audience target—and then using an interview setting that’s casual and doesn’t cause the group to start intellectualizing their emotional, behavioral responses. The researcher must ensure that the participants treat the occasion as an informal, social gathering in which conversation occurs, rather than as a serious and important event in which they are interviewed. Naturally, you’ll have to find your own way to prequalify your subjects if you use an approach like this, and you’ll also have to develop a way of cross-checking them to make sure that the group stays “on target” and doesn’t start intellectualizing their own behavior.

Perhaps you’ll find a completely different, unconventional way of doing research that yields results that you find reliable. If you do, use it.

If you choose to have audience “perceptual research” done by a professional research firm or a college statistical class, remember to place most of your confidence in the gathering and tabulation of the raw data. Be skeptical about any accompanying “interpretation and recommendations” you receive from the researcher. Like many salespeople and general managers, professional researchers are very logical and rational. They do a fine job setting up the study and tabulating the results, but they often totally miss inferences, paradoxes, and implications in the data. They’ll usually give literal and logical interpretations that can lead you to absolutely the wrong course of action.

Radio is “consumed” by the listener with the right brain—the inaccurately named “unconscious mind”—and behavior and emotional response (right brain functions) are what we, as program directors, are trying to understand. The “right brain” of our listeners is what we are learning to communicate with, using every element of programming employed at our station.

In this chapter, we’ve looked at packaging a radio station, and the key principles in developing a strategy to create a clear identity for the station and to position it versus its competition. We have also considered how to determine what’s in the audience’s minds already about our station and its competition. You can’t get anywhere by directly and rationally contradicting what the listener thinks, but you can repackage the station to emphasize its strong points and freshen it with respect to the other stations, thus altering listener perceptions. The starting point is existing listener perceptions.

Whatever strategy you come up with will probably be executed by an airstaff. Of course, it is possible to automate a station in a very sophisticated way using a desktop computer, but then you may lose one or both of the greatest strengths of a station: the one-to-one human contact between an on-air personality and a listener, and the local flavor of the station. In the next chapter, we’ll assume that you’re going to be the captain of a team, and not of a computer. How will you lead your team?
