CHAPTER
9

Designing Community and Social Experiments

How to Apply Agile Marketing Techniques to Community Management Initiatives

Throughout this book, I’ve emphasized the importance of doing research and testing to understand your brand, your target audience, and your community.

I’ve explained how to create content in numerous forms. But once you’ve come up with the content, it’s time to experiment. And to help, you can count on our good friend, the scientific method.

You may remember that we touched on this briefly in Chapter 6. Here, I aim to provide more detail on how to perform a social media or community experiment using the scientific method as applied to marketing initiatives, which we’ll refer to from here on as the marketing scientific method. I will show you how to do the following:

  • Apply the scientific method to digital marketing
  • Use the lean and agile methodology in digital marketing
  • Analyze the data

Let’s get started.

Designing an Experiment

Experiments have seven parts:

  1. Formulate a problem or question.
  2. Research for evidence.
  3. Generate a hypothesis.
  4. Establish benchmarks.
  5. Run your experiment.
  6. Draw conclusions (to validate the hypothesis).
  7. Share results.

Formulate a Problem

If you’ve had a community strategy for any length of time, you’re always striving for better results. Those in tech or startup environments are likely chasing the elusive goal of optimization. When you are in the process of designing an experiment, focus on one very specific problem at a time.

Let’s start with a simple one as our example: let's say people are engaging with your Facebook posts (comments and likes), but they don’t click on them to read the actual content.

Exercise: Look at the results of your community listening and discovery and identify a social media or community problem that you would like to solve.

Research

Your next step is to gather as much background information as is relevant to inform and then plan the experiment. In Chapter 6, we talked about the types of research we can use, including primary and secondary. The problem, in question form, is this: “Why aren’t our followers reading our content?”

To answer this, you’ll want to gather a variety of data such as statistics about the types of content that perform best in your field, information about what the people in your community are interested in (which you can extract from your work on the exercises in Chapter 7), information about the current landscape as it applies to your business, and information on user experience for posting links on Facebook. You’ll also want to compile all the data you may have from previous research on this subject, such as looking at content that has historically received more click-throughs to relevant blogs.

Exercise: Identify the sources of both primary and secondary research that can help you gather necessary information to design your experiment and to compile the findings from your research.

Generate a Hypothesis

Now that you’ve done some discovery work, it's time to plan your experiment. To generate a hypothesis, you synthesize your research so you can decide what you need to test for the biggest impact on resolving the identified problem. This means paying close attention to the data to try to find the tactics that might yield results in your unique community. Let’s analyze the items that you gathered in your research:

  • Statistics: By looking at what types of information traditionally perform best in your field, you’ll have some research to support what you should be posting. Again, exact audience preferences are always unique to your community, but industry or competitor trends can give you a general idea of where to start.
  • Listening and discovery results: What are people actually talking about? Does your content relate to these issues? Is your content pertinent to current trends? If not, you may need to rethink the type of content you’re sharing within your community.
  • The current landscape: Is it possible that the problem has nothing to do with your content, and something more to do with the way your audience is using Facebook? It’s possible that the situation is not unique to your business, and that people often “like” and comment on posts without taking the time to read them. If so, it might help to identify why this is happening. Perhaps the audience isn’t as interested in blog content as they once were, or maybe they enjoy engaging with one another more through comments and posts than through reading content. Either way, the insight should inform your next steps.
  • User experience: It might be the case that your links are too long, or that the Facebook widget you use to post has formatting problems (which has been a problem in the past). A user who has a bad experience may have no reason to click. This kind of problem may get even worse when your audience checks posts from mobile devices rather than a laptop or desktop computer.

In reviewing this data, you should be able to determine why your blogs aren’t getting clicks. If there are several possible issues, create a list of the probable reasons, and prioritize the order in which you’ll test. Select the strongest one that will also be the easiest to test (there’s no sense in testing the difficult ones if the reason why people aren’t clicking is actually very simple). Remember, you should test only one variable at a time.

In the example we’ve been using, let’s assume the links we drop into Facebook are long and dry, deterring people from clicking. Our hypothesis: shorter, snappier links will increase click-through rates.

Exercise: Synthesize your research and generate a hypothesis focused on one variable to test.

Establish Benchmarks

Benchmarks are essential to planning your experiment. Decide which metrics determine success. In our example, it’s pretty simple. We’re not as concerned about engagement for each post, because in this particular case we want click-throughs that generate traffic to the blog. That gives us one item to focus on for the experiment (though we may want to monitor engagement as a secondary metric).

In community management, we establish benchmarks as a way to determine whether our experiment was successful, or whether we need to continue testing other hypotheses. Our benchmarks give us a baseline to measure against and determine whether we need to spend more time testing a particular variable, or whether we’re ready to start testing other community tactics or initiatives with their own variables.

The easiest way to monitor these benchmarks is to add them to the content calendar we discussed in Chapter 8, which gives you a snapshot view of how you’re progressing toward your goals. To make the best use of the information, decide which key facts will help you with your overview. For more complicated tests and benchmark tracking, I add the pertinent details to the recommendation memo or marketing brief I’m using. Regardless of where you store the information, be sure to include dates, key performance indicators, actual results, and any other data you need to measure your progress. This should tell you at a glance where you were successful, and where you need to repeat your experiment to reach the goals you set.
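For a rough illustration of the idea, benchmark tracking can be as simple as a small table you update per test. This sketch is hypothetical; the test name, dates, and numbers below are invented placeholders, not real campaign data.

```python
# A minimal benchmark-tracking sketch; the test name, dates, and
# numbers are invented placeholders, not real campaign data.
benchmarks = [
    {
        "test": "Shortened Facebook links",
        "start": "2024-03-01",
        "end": "2024-03-14",
        "kpi": "click-through rate",
        "baseline": 0.012,  # historical CTR before the change
        "target": 0.018,    # the benchmark we set for success
        "actual": None,     # filled in once the experiment ends
    },
]

def progress_report(rows):
    """Return a 'met'/'missed'/'pending' status line per benchmark."""
    lines = []
    for row in rows:
        if row["actual"] is None:
            status = "pending"
        elif row["actual"] >= row["target"]:
            status = "met"
        else:
            status = "missed"
        lines.append(f"{row['test']} ({row['kpi']}): {status}")
    return lines

benchmarks[0]["actual"] = 0.019  # result recorded after the test
for line in progress_report(benchmarks):
    print(line)
```

Whether you keep this in a spreadsheet, a content calendar, or a script like the one above, the point is the same: each benchmark carries its dates, its KPI, its target, and its actual result, so progress is visible at a glance.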

Exercise: Determine which measures you’ll use to help determine success in identifying a hypothesis that correctly highlights the cause of a problem. Establish realistic benchmarks based on the data you’ve gathered.

Run Your Experiment

This is the actual “testing” part of the experiment: you execute based on your hypothesis and gather data as you go. If you think the links you’re using on your social media platforms are too long, shorten them. Use your original, long-link posts as a baseline, and find out whether shortening links using Bitly makes a difference in getting people to click through. For other marketing problems, you may want to try different colors or different types of photographs. In e-commerce, it might be as simple as determining whether the expression “half off” is more successful than “50% off.” Run the test several times to get a good sample set. You may also want to create different permutations of the test: run “half off” against “50% off,” then run the winner against “Save 50%.”
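At its core, the comparison boils down to click-through rate per variant. Here is a minimal sketch; the impression and click counts are invented for illustration.

```python
# Hypothetical tallies from the link-length test; all numbers invented.
long_links = {"impressions": 4200, "clicks": 50}    # control: original links
short_links = {"impressions": 4100, "clicks": 78}   # variant: shortened links

def ctr(variant):
    """Click-through rate: clicks divided by impressions."""
    return variant["clicks"] / variant["impressions"]

print(f"Long links CTR:  {ctr(long_links):.2%}")   # → 1.19%
print(f"Short links CTR: {ctr(short_links):.2%}")  # → 1.90%
```

Tracking the raw impressions and clicks per variant, rather than just a percentage, lets you re-check the math later and pool results across repeated runs.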

Exercise: Using the variable that you’ve identified as pertinent to your particular problem, gather data through testing.

Draw Conclusions

Okay, now we reach the part where you solve the puzzle. You can move into reflection mode, using your results to validate whether your hypothesis was true or false. In our example, shorter links either did or did not have an impact on the number of people who clicked through to the blog. Did the number of people who clicked through change? Did it go up or down? Remember, it’s best to repeat the experiment a few times in order to make sure you have the best results.

If your experiment didn’t support the hypothesis you predicted, it’s time to go back to the drawing board and identify a different variable you can manipulate. Keep testing!
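Before declaring a hypothesis validated, it’s worth checking that the difference you saw isn’t just noise from a small sample. One standard way to do that is a two-proportion z-test; the sketch below uses only the standard library, and the click and impression counts are invented for illustration.

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Z-score for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def p_value(z):
    """Two-sided p-value for a z-score, via the normal distribution."""
    return math.erfc(abs(z) / math.sqrt(2))

# Invented counts: 50 clicks on 4,200 impressions (control) vs.
# 78 clicks on 4,100 impressions (shortened links).
z = two_proportion_z(50, 4200, 78, 4100)
print(f"z = {z:.2f}, p = {p_value(z):.4f}")
```

A p-value below 0.05 is the conventional (if rough) threshold for saying the lift is unlikely to be chance; if the p-value is large, repeat the experiment or gather a bigger sample before drawing conclusions.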

Exercise: Review the results of your experiment against your benchmarks. Did your experiment generate the results you anticipated? What additional tests do you think need to be done to optimize your results?

Share Results

This is an extremely important aspect of the reflection phase of running an experiment, and the one that can have the largest impact on whether your team adopts more lean and agile ways of working. Once you’ve completed your experiment, make sure you share the results with your team and management in order to complete the cycle of knowledge transfer. In looking at the data you were able to extract from the testing, decide what else you need to test. Even if, in the example problem, the test generated more clicks, you may want to see if you can continue to optimize and improve performance. Perhaps even shorter links have even more impact on the number of people clicking through. Even if the test wasn’t successful, share the results of what you learned. Brainstorm with the team to decide what other variables are worth testing, to determine, for example, whether the type of content being posted isn't ideal, the link images aren’t interesting enough, or the blog titles need to be improved.

Remember, Edison failed thousands of times in trying to create the lightbulb—an invention that revolutionized the way we live. However, for him, it was also a matter of perspective: he never felt like he’d failed; rather, he said that he found 10,000 ways that didn’t work.

In marketing, testing should be a high priority in helping to optimize not only your content but also your strategies. So test away!

BUILDING A RECOMMENDATION MEMO

It's important to know how to convince the right people that experimenting is a good idea. Give them a clear view of your thought process and approach. I find that a “reco memo,” as I like to call it, is a great way to provide transparency to all stakeholders. The top-line memo format also provides a level of comfort and familiarity, as it’s easy to digest and often used in other business and marketing initiatives.

In the real world, where everyone is pulled in a lot of different directions, people may prioritize other things above managing the community. Maybe you know where the holes are in your community strategy and have tried to fill the voids, but when your boss is worried about whether revenue is enough to sustain the business, it can be hard to convince that person to devote any resources (people or money) to experimenting at all.

When this happens, it’s really important to have a clear message that addresses the potential value of your experiment to the bottom line and organizational goals.

One of the most effective things you can do to garner support is to get your ideas into an easy-to-digest slide deck and disseminate it to the appropriate parties in your office. You can reinforce your document with the “walk and talk” elevator pitch about why testing new ideas is a good idea. Or, if absolutely necessary, tell your boss that you are certain your plan will be successful. You can always request forgiveness if it isn’t. (Relax—if you’ve followed the preceding rules, you won’t have much to worry about on this front.)

Determining how to close the gap between senior-level roles and more junior-level roles can be tough, but you need to remember that your role was created for a reason. You are the expert (or at least are on your way) within the role you were selected to play in the business. If the important testing initiatives that might allow the business to stay agile and relevant move slowly through your office, identify the barriers and find ways to remove them. You can’t go wrong with being proactive and agile.

Continuous conversations about what needs to be fixed, and why, are of great importance, but in many organizations they don’t happen nearly as often as they need to. By suggesting and encouraging testing and experimenting, you can show you’re striving to keep the company moving in the direction of innovation—allowing it to adapt to the ever-changing environment.

A sample template for an experiment's recommendation memo follows:

Summary: This is an overview of the experiment and how it would be conducted.

Objectives: What, exactly, are you trying to accomplish with this experiment? A well-written objective statement should spell out clearly why you want to complete an experiment, and how the results of the experiment could positively impact your company. If you can’t identify with some degree of precision the reasons for completing the experiment, it won’t make a great deal of sense to the person you’re pitching to spend time and money on it.

Strategy and Tactics: This outlines how, exactly, you intend to complete the experiment. What resources are needed? Who should be involved? The more specific you can be here, the better. This section shows how the experiment will work, which requires a significant understanding of the process and elements at your disposal. Think, for example, of such considerations as messaging and banner ad modifications, press relations, technical needs, and process implementation.

Metrics for Success: This will vary depending on the type of experiment you are running. A social media experiment will have very different metrics than a landing page experiment. However, it is important to establish which metrics matter at this stage. For instance, if you are running a paid search campaign, an uptick in users during the time the ad is running may not mean much if those users come from social media and can’t be correlated to paid search efforts. If you were looking at page views as a measure of success, you might need to go back and refine your analytics to make them more accurate. You may also want to look at your traffic sources as well as your traffic or user counts.

Other Recommendations: This may be optional, depending on the size and requirements of your recommendation. If you’re requesting a small experiment, this area may not be needed. For much bigger experiments, however, this is the place to cover your bases. If your experiment requires feedback from fans and followers, you may want to institute quality assurance to guard against hostile participants. Or you might wish to implement a designated team to respond to the influx of site traffic you expect to receive. The possibilities are endless and specific to your experiment.

Budget: How much will this cost? Be sure to take into consideration the number of work hours as well as costs such as paid placements, hiring extra support staff, and design needs.

Next Steps: What, exactly, needs to happen to get the experiment off the ground? This involves items such as approval from leadership, logistical planning, and timelines.

Depending on the size and type of experiment, you may wish to include more-detailed sections for schedules, creative needs, and logistics.

Here is an example recommendation memo:

Summary: Our client is an eco-friendly fashion brand that wants to see if changes to the website user experience will affect community engagement on its message boards. Therefore, we propose an A/B test to determine whether different website layouts affect the level of engagement. Historically, the client’s best customers are those who are actively engaged on a message board, so we hypothesize that encouraging even more people to sign up for the message boards will result in additional conversions.

Objectives: Determine whether different landing page layouts encourage higher click-through to the home page and encourage more engagement on our brand’s message boards.

Strategy and Tactics: The long-term strategic objective of this experiment is to create more long-term, engaged relationships with our fans while also eliciting feedback about our products and what our fans would like to see us doing in the eco-space.

To that end, we will develop a new landing page. A, the control, is our current page, while B, our variable, contains specific call-to-action messaging highlighting the message board and encouraging site visitors to sign up. The two pages should otherwise be identical, containing the same content apart from the highlighted signup messaging.

If the change proves to be successful, we will test and iterate to optimize message board signups.

Metrics for Success

  • % of account signups (comparing A vs. B)
  • Frequency of use of new signups (comparing A vs. B)
  • % of registered users who revisit the message boards based on revised wording (A vs. B)
  • Amount of engagement from signups (comparing A vs. B)
  • Monthly growth count

Other Recommendations

  • Twitter campaign encouraging people to visit the site and join the message board
  • A strategy to determine message board quality control
  • Live feed of popular content
  • Catchall area for Q&A with top 10 questions
  • Expert series events—special guests who may be influencers in the space

Budget

Total: $5,000

Breakdown:

  • Site development needs: $1,000
  • Design needs: $1,000
  • Resource costs: $3,000

Next Steps

  • Leadership approval on budget and process
  • Work with design and development team to determine capabilities
  • Work with copy teams to develop materials

The Agile Marketing Process

As I mentioned earlier, much of my own business revolves around the concept of lean and agile methodology. The two processes are similar and complementary, but agile suggests a methodology that’s at the core of execution. It focuses on an iterative and responsive delivery approach that supports high customer satisfaction. Lean suggests doing a project in a leaner or pared-down way by removing waste and building simple quality controls. (Think of lean like lean ground beef, which has all the beef but less fat.)

Both processes are based on the idea that companies can achieve the best results through an ongoing learning loop, like the one we reviewed in detail in Chapter 4. This learning and application of what we learn describes how a company can quickly and effectively achieve real results that are rich in customer and market insights.

Many older, more traditional businesses are locked in their ways, determined to make every initiative go through high-level approvals before getting anything done. This is necessary for some aspects of business, but it’s my belief that we can put some of the control into the hands of our employees and allow them the freedom to make mistakes and learn from them. This is where the Perks slogan—“Think like a Brand. Act like a Startup.”—comes into play.

Rather than doing heavy research and putting major projects on hold during the research phase, we instead aim to iterate ideas based on analysis of customer feedback. We do this by emphasizing speed and responsiveness, instead of absolute perfection and long planning cycles that tend to stifle achievement of business goals given today's business velocity.

Lean and agile processes serve to do the following:

  • Allow for iterative improvement of strategies
  • Increase efficiencies and learning
  • Incorporate customer and market validation
  • Assess and optimize investments

As such, they offer businesses several benefits:

  • Increased responsiveness: Being iterative allows individuals and organizations to adapt and make decisions along the way based on real-time customer feedback.
  • Improved speed to market: Focusing on a specific problem and developing a solution to address it allows CMs and collaborators to get their message or product to market quickly.
  • Reduced risk: A structured, iterative approach with established benchmarks and metrics allows companies to manage and monitor their activities and level of investment. It also provides the opportunity to reallocate investments.

In our commitment to leverage lean and agile methodology, we’ve learned how to design a specific way to integrate this methodology into the problem-solving process:

  1. Define the problem, the specific pain points, and the reasoning around the problem. The process of identifying a problem that needs to be solved isn’t so different from the marketing scientific method we explained earlier in this chapter. Here, we also get specific about the pain points, and think about ways in which the problem affects our customers throughout the customer experience.
  2. Develop solutions that address the problem. Solutions to a problem with the customer experience may come from more than one direction. Work with the various teams to figure out ways to alleviate the pain point. This may mean working with customer service teams, designers, developers, marketers, business and finance teams, and more in order to develop potential solutions to resolve the issues. Don’t forget, customer experience is the result of interdisciplinary efforts.
  3. Prototype and iterate solutions via collaboration and co-creation with customers or other stakeholders. The wealth of ideas that come from collaboration can help guide you to the best, most cost-effective ideas for your business. Often, you can also rely on your customers or community to help you work through your prototypes and iterations by seeking their feedback. Your community is there because they want to be in touch with your business, and with one another. They provide a valuable resource if you can leverage that opportunity.
  4. Validate with customers to ensure you have something worth pursuing. Even after you’ve collaborated internally and worked with your customers to help build your solution, you still need to validate it. The goal here is to make sure your solution works for them before you invest more money and resources in a project that ultimately might not resolve the issue, or might inadvertently create more problems.

The Lean Scientific Method: Testing and Iterating

Once you’ve gone through all the steps of the lean, agile marketing process, it’s time to do it all again. In executing lean marketing for existing initiatives, the goal is to use performance data to optimize and improve. This means testing and optimizing constantly to help you achieve the results that will grow and sustain your business.

PRO TIP: “SOCIALIZE” THE SOLUTION

If you are encountering a problem in community management, keep in mind that the onus isn’t on you to solve it alone (even if the core of the problem does fall under your job description). People from all backgrounds have a valuable place in community management, including product teams, developers, editorial staff, social media experts, and customer service teams.

As you begin to understand and practice the core tenets of lean and agile methodology, you learn to give ideas some oxygen. It never really matters whose idea it is within a team, especially if you have a brilliant idea but don’t have the means to execute it. Socializing ideas helps people understand the problem more thoroughly, and helps them want to contribute to a solution, which makes them more invested in the resolution.

When things really get messy, you might find yourself in a difficult post-mortem where the question becomes “What do we do?” or “What should we have done differently?” These “Oh Shit” meetings (pardon my French) can be an excellent catalyst for solving other problems, but often happen too late.

A/B Testing

A/B testing is a form of testing used in marketing and advertising that uses randomized experiments with variants (A or B), consisting of a control sample and a variable you want to test. As a community manager, you’ll come to rely on A/B tests, as they’re great for isolating a single variable so you can see how audiences react differently to changes. This section shows how to use A/B tests to improve and optimize your results.
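One practical detail the definition glosses over is how visitors get randomized into A and B. A common approach, sketched below with invented experiment and user names (not tied to any particular platform), is deterministic hashing, so the same visitor always sees the same variant.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "link-length-test") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing the user ID, salted with the experiment name, means the
    same visitor always lands in the same bucket (repeat visits don't
    contaminate the test), and different experiments split the
    audience independently of one another.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Repeat visits land in the same bucket every time.
assert assign_variant("user-42") == assign_variant("user-42")
```

Tools such as Optimizely handle this assignment for you; the sketch is just to show why a well-run A/B test gives each person a stable, random variant rather than reshuffling on every page load.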

Testing Ads

Think about the many variables you can test in an ad. If you don’t have much experience with paid media, pull up Facebook on your computer right now and look at the ads. You can test a number of things.

Copy: Look at the words being used to sell the product. Notice words like “Best” as opposed to “Good,” and words like “Deal” as opposed to “Bargain.” Also, notice whether the words in the ad are all capitalized. Some people report that simply capitalizing every word can make a difference in ad reception.

Call to action: What words do they use to make someone take action? Maybe it’s “Click Here,” but it could also be “Act Now” or “Act Fast.” Each has a different implication, and audiences may respond differently to each.

Images: If your photo isn’t generating the results you want, try again using a different color or a different image entirely. Keep making changes until you’re able to optimize your imagery.

Targeting: Facebook ads allow marketers to target a particular audience based on demographics, location, interests, and a host of other factors. If you’re not seeing success with ads that cover a large space, try hyper-local ones to see if they generate better results (or vice versa).

Ad placement: If the ad isn’t working well for you on Facebook, why not try running the same ad on Google to determine whether that placement generates stronger results?

Ad destination (landing page): Many people make the mistake of driving people to their home pages, rather than directing them to the page of the event they’re promoting or the product they’re selling. In most cases, specificity will be best here, but it never hurts to test.

Bids: If you’re not seeing the results you’d like with ad space, you could try changing your bid. If you are seeing some results, it may make sense to bid higher on a particular campaign to see if it improves your results even more.

Testing Your Home Page

Sometimes a great ad generates all the clicks you need, but the page those visitors land on isn’t creating the same effect. Think about your business’s web page. You can test and improve numerous elements in order to create the best user experience. Let's look at a few.

Button shapes: This detail may seem insignificant but, if optimized, can make a significant difference. Perhaps small buttons aren’t drawing people in because the audience can’t see them clearly; making them bigger may help people navigate the site more easily. Or, if they’re too big, they may make the page look cluttered.

Text: Try incorporating more or less copy on the page to see if that makes a difference.

Color: Perhaps your page’s color scheme is making it difficult for anything to stand out. Try experimenting with a different color palette to see whether it makes an impact.

Email collection box or lead generation: If the collection box is not prominent, you may be missing out on opportunities to capture subscribers. However, if your email collection box is obtrusive, it may prevent people from wanting to subscribe.

Check-out process: If your e-commerce website check-out page has a lot of friction—things that make check out difficult for one reason or another—your customers are bound to notice. Focus on testing ways to eliminate this friction.

Site or product photography: If your photos aren’t clear and crisp, this may be deterring customers. Specifically, bad photography may cheapen the brand experience. Try incorporating different or higher-quality photos to see if they increase users’ time on the site.

Testing Email Marketing

When it comes to email marketing, there are so many variables to test that entire industries and platforms are dedicated to narrowing down these variables and iterating on a continual basis. When it comes to testing email, you have many choices.

Subject lines: Track these against open rates to see which types of line copy and lengths are the most effective.

Call to action: You can experiment with different things to request from people, as sometimes requests that are too obtrusive can be detrimental to your strategy.

Format: If your email templates use devices such as bullet points as opposed to paragraphs, try experimenting to see if one works better than the other.

Hook: If you’re leading with a question, it may be more effective to lead with a bold statement. You won’t know unless you experiment, which will give you insight into your audience’s preferences.

Length of message: I can’t tell you how many times I’ve seen emails perform far better simply because the author shortened the text. It’s important to remember that people these days aren’t always at their computers when checking email; much of the time they check their inbox on their phones or tablets, so something that seems short on a desktop computer may look much more cluttered on an iPhone.

Timing: Because of an audience's busy schedules and variety of responsibilities, it’s important to consider the best times of day to send emails, as well as the day of the week. Try A/B testing the time of day you send emails to see what times provide the best results. Then test the day of the week; does your audience prefer weekdays to weekends? Or vice versa? This provides great context for time of day.

Color: Psychologists know the power of choosing the right colors, but marketers can learn as well through A/B testing. Try different color schemes to elicit different results, and see which are optimal for you.

Images: Big, bold, powerful images can provide more impact than lighter ones, but they also take time to download, depending on the types of devices your customers use to access the Internet. Think carefully about these items as you A/B test imagery in emails.

Customized links: Sometimes shorter or customized links can work better, as long, messy links can be a distraction to people. Try experimenting with different types of links to determine whether they affect the number of click-throughs.

Offer/deal/promo: Does “50% off” work better than “Buy One, Get One Free”? A/B testing the deals in your email validates that you’re advertising the right types of promotions to your customers.

Testing Social Media

You can also conduct testing by using your brand’s social media. You have some valuable things to test.

Hashtags: Test whether incorporating trending hashtags into your tweets makes a difference in their visibility. You might be surprised to find that incorporating the right hashtags can have a significant impact on your reach, as people are often compelled to seek out trending topics.

Time of day: Many people find that tweeting at certain times of day generates better results. Try to determine what times work best for your business, and send your most compelling tweets during that time.

Tags: Tagging individuals can increase your reach as those people interact and engage with you.

This is an overview of just some of the things you can test. If you run a community platform, you can test the functionality of your platform, as well as items like moderator input, community announcements, and so much more.

How Long Do You Test?

You can review test results over long periods, or over as little as an hour. Test hour-over-hour, day-over-day, week-over-week, year-over-year, or in any increment you prefer; consistent time blocks make it easier to see how you’re doing. When performing annual comparisons, account for calendar shifts, such as Christmas falling on a different day of the week each year. The key is to compare comparable timeframes: figure out the most accurate comparison and focus your tests on that.
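How long is long enough? A practical answer: run the test until each variant has seen enough traffic to detect the lift you care about. The sketch below uses the standard two-proportion sample-size formula (roughly 95% confidence, 80% power) to estimate sends per variant, then converts that into days. The base rate, target lift, and daily volume are hypothetical assumptions for illustration.

```python
import math

def samples_per_variant(base_rate, lift, alpha_z=1.96, power_z=0.84):
    """Approximate sends needed per variant to detect an absolute `lift`
    over `base_rate` at ~95% confidence and ~80% power."""
    p1 = base_rate
    p2 = base_rate + lift
    p_bar = (p1 + p2) / 2
    numerator = (alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
                 + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / lift ** 2)

# Hypothetical: detect a click-rate lift from 2% to 3%
n = samples_per_variant(base_rate=0.02, lift=0.01)
days_needed = math.ceil(n / 500)  # assuming ~500 sends per variant per day
print(f"{n} sends per variant, about {days_needed} days")
```

Notice how sensitive the duration is to the size of the lift: halving the detectable lift roughly quadruples the required sample, which is why tiny improvements take far longer to confirm than large ones.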

Testing Tools

You can use various tools to test, but let’s cover some of the most useful ones to start:

  • Google Analytics: This free analytics program contains features to help you track website traffic.
  • Optimizely: This site allows you to run A/B tests on your website without requiring you to make changes to the entire site.
  • Adobe Target: Part of the Adobe cloud, this tool allows the user to tailor and personalize customer experiences to maximize revenue through A/B testing, automated personalization, and multivariate testing.
  • KISSmetrics: This tool works with Google Analytics to give you data about the people who are engaging with your site. It can tie the steps in the process back to individual people and can tell you how users are engaging across devices.
  • Yesware: Billing itself as an email marketing tool for salespeople, Yesware helps track emails and allows customization based on what you learn.
  • MailChimp: With built-in analytics tools and A/B testing capabilities, MailChimp is a great way to help optimize your email marketing strategy.
  • Unbounce: Unbounce enables the user to test landing pages for optimal results.
  • Bitly: Bitly allows you to track engagement with hyperlinks. By turning any link into a Bitly link and then sharing it on social media, Bitly tells you how many people are clicking, and how they’re engaging.

Tip  Run an A/B test using Facebook ads. If you’ve never experimented with Facebook ads, log on to Facebook and see how to create and structure them. You can create a Facebook ad for as little as $10 (in some cases, even less), and even small amounts often capture significant numbers of impressions and clicks. Using the marketing scientific method outlined previously, run two ads that are identical except for one variable. Compare results, share your findings with your team, and decide which test to run next.

Using Community for Customer Validation

I’ve given you various tools to conduct testing in the digital space. But you can also leverage your community to get information about your proposed changes and concepts. Even with all the digital technology in the world, sometimes nothing is as good as reaching out to the members of your community for their feedback.

Some of the ways you might do this are through survey data, focus groups, listening and discovery (as discussed in Chapter 7), and the opinions of early adopters. Activities such as focus groups and one-on-one interviews can yield rich qualitative information from the people who are most valuable to your community.

Analytics Review

We’ve talked about how to review and digest your results, but one topic we haven’t touched on in much detail is analytics. Analytics are tools designed to help you understand your data. The results may be as simple as telling you the number of people who clicked your link, but they can also detail the times that people clicked, how they found your site, and all kinds of other driving factors.

Reviewing your analytics can help you figure out whether your experiment was a success. It’s important to know which key performance indicators to measure, and once you have those nailed down, you can come up with some very interesting information about your brand.

If you’re using social media listening tools, you’ve probably worked with social media analytics at some point. Even tools such as Bitly, Klout, LinkedIn, and Facebook have analytic information you can tap into to determine whether your experiment was a success.

In this era, collecting analytic data is easy because of the automation that digital interaction provides. This is why Google Analytics can update in real time to tell you the number of people visiting your page at any given moment, a feat that would have been nearly impossible if you had tried to run a similar experiment using print newspapers or magazines.

Usually placed under the umbrella of “marketing,” analytics is a massive area of study, making it impossible to cover in depth in this book. People who study analytics specifically learn how to work with data to discover powerful insights, but numerous online resources also teach the technical aspects of understanding and using analytics software. If you want a more in-depth understanding, log on to the Google Analytics Academy to learn more about the analytics that affect your website. Understanding Google Analytics is useful because it sets the framework for all the other analytics programs you’ll encounter.

Summary

In this chapter you learned about the importance of experimentation, using the scientific method to design an experiment, and using lean and agile methods to market. Whew! Let’s continue in a serious vein by taking a look at tracking and measuring performance.
