12. keeping HEROes safe

It’s time to face the risks that lurk in the HERO-powered business. Because employees, armed with the technologies of the groundswell, are not just powerful, they’re dangerous. Like all powerful tools, these technologies carry risks.

What could go wrong?

For one thing, as Domino’s Pizza found out in April of 2009, employees can upload videos to YouTube. In this case, it was two pizza makers stuffing cheese up their noses and performing other unspeakable acts on food that appeared destined for delivery to customers. No matter that the perpetrators eventually denied ever delivering unsanitary food, Domino’s still suffered brand damage.1

Your employees don’t do that? What about the Sprint employee who posted details about the Palm Pre phone on a blog, violating a nondisclosure agreement?2

And it’s not just malicious employees. At Cisco, an employee posted a job opening, inadvertently revealing a change in strategic direction. At Microsoft, a product manager announced he was changing jobs, revealing the unannounced news that a product was being discontinued.

And we haven’t even gotten to security breaches. An employee at a global bank just told us that, unable to remember the passwords to the twelve corporate systems he used, he wrote them all down on a piece of paper taped to his laptop.

Employees are a danger to themselves and their companies because they use whatever technology they can get their hands on. This technology has potential risks. So how can you lock down technology to keep them from doing any of these things?

You can’t.

There was a time years ago when IT security meant locking down your network and corporate databases, putting everything behind the drawbridge and moat that protect the corporate castle, and giving only authorized people the password. Secrets were safe. Well, mostly safe.

But now the communication tools are wherever your employees are. Responding to customers at the speed of the groundswell, HEROes in your company use email, instant messages, blogs, blog comments, Facebook, LinkedIn, Twitter, YouTube, Flickr, Skype, WebEx, Google Docs, YouSendIt, and hundreds of other sites and tools, more every day. They work, not just on corporate PCs, but on their own computers, iPhones, BlackBerry phones, and tablet PCs. As we saw in chapter 7, over 40 percent of information workers are provisioning their own technology. How are you supposed to lock all this down? One IT security professional described his job to us as “a world gone mad.”

You can’t protect things any more by locking down the network and password-protecting the databases. While IT was busy securing the network perimeter to keep secrets inside and intruders outside, the perimeter moved. It moved to wherever an employee is trying to work.

It’s as if you had built a giant fortress to protect your village from marauders only to wake up one morning and find that the villagers had moved all their houses into the fields beyond the safety of the fortress. They won’t come back in where it’s safe. It doesn’t suit their needs. It makes getting things done too slow and it prevents them from working in the ways they need. They like it out in the fields.

Malcolm Harkins has a great way to describe this. As Intel’s chief information security officer, he’s responsible for keeping the company’s secrets and people safe. At his first security team meeting in 2005, his team was complaining that the security perimeter had vanished. Securing the corporate network was no longer enough to protect the company. But Malcolm saw it differently. He saw that the perimeter hadn’t vanished, “it had moved and we just missed it.”

You can’t lock all this stuff down. The more you try, the more you slow down and trip up the HEROes. You need a new IT security strategy. And just like your customer strategy, you’re going to have to depend on the one thing you have going for you—the intelligence of your workforce. Or as Malcolm Harkins says, “Make people the new perimeter.”

the new job of information security

Remember the HERO Compact? In it, we said that the HEROes’ job is, in part, to obey the rules set up by the IT group to keep them safe.

Since locking down technology doesn’t work so well, the parts of IT that focus on security must focus more of their efforts on policy, education, and risk. IT has two new jobs:

  1. Train and educate information workers about how to keep themselves safe.
  2. Help HEROes assess, manage, and mitigate risks associated with their projects.

Note what’s not included here. IT is not responsible for risk. Instead, people in IT must advise workers on how to keep themselves safe and help them improve the security of what they do.

CASE STUDY

making Kodak’s HEROes safer

Bruce Jones knows about risk. Bruce has global responsibility for IT security, forensics, compliance, data privacy, and risk management at Kodak. As an IT security professional, Bruce has been dealing with the risks that come with new technology at Kodak for twenty-nine years.

Managing risk is in Bruce’s background. He comes from a law enforcement family. His father and grandfather served and protected their communities going back to the 1930s. Risk and safety and bad guys were everyday topics of conversation.

But Kodak is a business, and if Kodak isn’t making money, there’s nothing to keep safe. In the past five years, Kodak has had its share of not making money as its film business has declined with the relentless rise of digital cameras. That gives Bruce a lot of motivation to help the business grow.

One way Kodak grows is to get closer to customers. After all, Kodak was built on empowered customers, people capturing their Kodak moments and making memories part of their lives. Photo sharing has become a killer application for Facebook and, increasingly, for Twitter.

Kodak’s employee HEROes have embraced the groundswell and become active participants in the customer conversation, listening to rebukes, responding to concerns, reaching out to its customers. Kodak’s Facebook pages from around the world are swamped with fans and with Kodak employees. Kodak’s YouTube videos are viral and touching. Kodak’s Twitter presence advertises product deals and tips for photographers. Kodak’s employees are everywhere online, talking about Kodak.

This activity creates risks. In the spring of 2009, Bruce noticed an alarming increase in email spam on Kodak’s corporate systems, from around 8 million to 58 million messages a month. Kodak’s spam filters blocked most of it, but the spam filters were overloaded and valuable business messages were delayed. Why? Bruce attributes a large portion of the increase in spam to company employees who had posted their Kodak email addresses in places where spammers could find them, including in social networks. Then Kodak increased its visibility through its sponsorship and product placement within the popular program Celebrity Apprentice on NBC. While this created a lot of buzz about Kodak, it also attracted spammers, who then noticed all those highly visible Kodak email addresses.

Spam is no joke—many spam messages include links that invite email recipients to sites filled with viruses and other malware. In the past, the typical response would have been an attempt to further lock down corporate systems to keep the bad stuff out. Instead, Bruce decided to work on educating Kodak’s staff, to make them as smart as possible in a dangerous online world. He moved the perimeter.

Bruce started with a new policy to protect employees engaged in social media inside and outside the company. Bruce joined a cross-functional team from Kodak’s legal, security, marketing, and corporate communications departments to create a simple pamphlet called “Social Media Tips: Sharing lessons learned to help your business grow.” The content included perspectives and policies from human resources, IT, and marketing; you can read it at Kodak.com.3 They also created a more specific version for internal use at Kodak. In Bruce’s words, “We spent the most time deciding how to make this applicable to everybody.” They kept it simple, to get as many people as possible to read it. The objective was to improve online safety through education.

The guidelines in these policies include things like “Know what you are talking about,” “Always be transparent,” and “Listen to what others have to say.” But along with these simple, friendly, helpful tips are important policies: no anonymous posts or comments, don’t talk about things you shouldn’t, accept personal responsibility when you make a mistake. The tips apply to internal and external communications. And while they are written for social media, they apply equally well to video, cloud computing services, and mobile communications—all the technologies of the groundswell.

The social media tips pamphlet is written in English, but deployed globally and available to the public. There’s also a forty-minute self-guided course on security awareness that now includes social media guidance in sixteen languages. Each employee takes the course every other year as part of the business conduct policy.

By educating the staff on where the guardrails are in social environments, Bruce changed the way Kodak thinks about technology. Now maybe they won’t post their Kodak email addresses in places where they shouldn’t. But more importantly, Bruce helped Kodak communicate the message: “We trust you and we’d like to keep you safe.” He showed the employees how to be safe. That’s a much better message than “We’re worried you will screw up so we are locking you down.”

Instead of locking Kodak staff out of social environments, they are now free to reach out and interact with customers and friends there. Kodak needs this flexibility to continue building its brand. A company transforming itself—like Kodak’s transition to a postfilm world—needs all the employee HEROes it can get with creative ideas to help with that transformation. Kodak’s policies encourage HEROes to innovate safely.

Bruce sees his job now as helping these HEROes. He anticipates technologically dangerous events and educates employees and managers on the potential risks of the business initiatives they come up with, particularly those involving groundswell technologies: employees using social media, smartphones, and cloud Internet services. It’s not his job to say no to a HERO project. It’s his job to say, “Here are some potential consequences. Here are the risks we may face in doing this, and here’s what we can do about it.”

policies and education help protect HEROes and their employers

Kodak is just one example of a company that’s using policies, education, and training to keep HEROes safe as they engage in the groundswell. So do BBVA, Boeing, Booz Allen Hamilton, Electronic Arts, IBM, Intel, Lockheed Martin, Procter & Gamble, and Verizon. These policies vary, but they all include three foundational principles of groundswell safety. These principles are deceptively simple, but each has a track record of success. You can write them on a notecard (see table 12-1):

  1. Put your name on everything you do.
  2. Remember that you are an employee.
  3. Own up to mistakes and fix them.

put your name on everything you do

We get this question more than any other: “Should we require a login for internal social applications?” The answer is always yes, because the first principle is to make employees use their real names and identities. Anonymous contributions undermine the balance between employees and management that’s fundamental to the HERO Compact. Employees will watch their words if they know those words can be traced back to them. At Electronic Arts, Bert Sandie, the director of technical excellence at EA University, says that this policy of requiring a login and identity has resulted in exactly zero posts or comments that had to be taken down. Fernando Summers at BBVA said that once this policy went into effect, there were no problems on the company’s internal blogosphere.

TABLE 12-1

Principles of groundswell safety

  1. Put your name on everything you do.
  2. Remember that you are an employee.
  3. Own up to mistakes and fix them.

The first principle—identify yourself—applies both to employees working internally and to those working externally. Internally, it’s generally straightforward to require a user name and password and to include the employee’s name (and, as a best practice, a picture) on every profile, blog post, internal cloud service, and comment. To make this easier for employees (which will encourage participation), move toward an architecture in which a single login provides access to all the applications an employee needs, including social, video, and cloud applications.
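To make the single-login idea concrete, here is a minimal sketch in Python. It is not any vendor’s identity product: a stand-in identity service signs the employee’s name into a token once at login, and every internal application verifies that same token instead of keeping its own password list. The shared secret, the token format, and the application names are all illustrative; a real deployment would use an established single sign-on standard such as SAML or OpenID Connect.

```python
import base64
import hashlib
import hmac
import json

# Held by the identity service; illustrative only.
SHARED_SECRET = b"demo-secret"

def issue_token(username: str, full_name: str) -> str:
    """Identity service: sign the employee's identity once, at login."""
    payload = json.dumps({"username": username, "full_name": full_name}).encode()
    signature = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + signature

def verify_token(token: str) -> dict:
    """Any internal application: check the signature, then trust the identity inside."""
    payload_b64, signature = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64.encode())
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise PermissionError("Invalid or tampered token")
    return json.loads(payload)

if __name__ == "__main__":
    token = issue_token("asmith", "Alice Smith")            # one login ...
    for app in ("internal blog", "wiki", "video portal"):   # ... many applications
        identity = verify_token(token)
        print(f"{app}: signed in as {identity['full_name']} ({identity['username']})")
```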

Externally, this transparency principle must be implemented as policy, which is what Kodak does. At Best Buy, as we described in chapter 1, retail store employees can join the Twelpforce and answer questions on Twitter. But they must first register their handle with Twelpforce. IBM’s social computing guidelines include this statement:4

Identify yourself—name and, when relevant, role at IBM—when you discuss IBM or IBM-related matters. And write in the first person. You must make it clear that you are speaking for yourself and not on behalf of IBM.

remember that you are an employee

This principle is a reminder that while groundswell technologies provide new ways to communicate, collaborate, and publish, they don’t actually change any of the responsibilities that you already have as an employee.

At IBM, this principle is reinforced in the social computing policy and in a direct reference to its Business Conduct Guidelines, also available publicly on IBM.com in the Investors/Corporate Governance section. At Sun Life Financial, a Canada-based financial services company, this principle is anchored in the employee code of conduct and made visible in the social media guidelines.

Oddly, this is the place where most businesses have abdicated responsibility to the IT security organization. The sentiment seems to be, “If it involves technology, it must be an IT policing job to keep people from doing the wrong thing.” That’s ludicrous. If an employee is going to violate a corporate policy or break the law, that’s a personnel issue, not an IT issue. It’s a business responsibility. Breaking that law or violating that policy on a public site like Facebook, Twitter, or a blog doesn’t change that responsibility at all.

So this principle exists for one purpose only: to remind employees that they are first and foremost employees when acting on company time or on company business. It doesn’t apply only to groundswell technology, either; the same responsibility holds in person, on the phone, or in email. The principle serves to remind employees whom they’re working for and what their responsibilities are. If Domino’s staff had known more about this principle, maybe they wouldn’t have posted that damaging video on YouTube.

own up to mistakes and fix them

This last principle is more human than the others. It recognizes that we aren’t perfect. We might be grumpy some day and snipe at a customer rather than address their concerns. Or post while venting anger rather than after taking a walk to calm down.

This principle has two main components. First, it tells an employee that it’s okay to make mistakes as long as you do everything in your power to fix them. If a post has a fact wrong, then fix it with a crossout and the correct information. If a tweet kicks off a firestorm of angry responses, then tweet back, apologize, and offer to make it good. It’s the employee version of what Domino’s CEO Patrick Doyle did after the video incident. He apologized publicly and with visible emotion on YouTube. It’s a mea culpa, and it’s okay as long as you make good.

The second component is more subtle. It says that employers must trust and support employee HEROes. While this will always be a judgment call, the right posture is to help the employee work through the mistake and make it right. Firing them is usually the wrong approach. And even when things seem to go wrong on a project, it’s often more productive to fix it than to pull the plug.

This principle also means that managers need to show faith in employees rather than shutting them down at the first sign of problems.

Here’s an example from IBM. In 2007, CEO Sam Palmisano approved using IBM’s Innovation Jam system to ask employees for suggestions on how to improve the company. At the end of the first day, the jam was full of complaints, not suggestions. Sam resisted the temptation to shut it down immediately as a failure, because he had faith in the company’s employees to contribute appropriately. Sure enough, by the second day, the complaints petered out and the suggestions started to appear. By day three, the suggestions were coming in full bore and the complaints were few and far between.

Now IBM’s Innovation Jams are frequent and productive—and the management is glad they didn’t pull the plug. And as a helpful “I’ve got your back” by-product of this experience, Sam Palmisano reads every idea post and has made some organizational changes based on the feedback, positive and negative, without any retribution.

Empowered employees make mistakes. Punishing them sends the message that innovation is a career risk. Management’s job is to get employees’ actions and self-interest aligned with company goals. If they aren’t, then you have a much bigger problem to solve than information security; you have a workforce that operates in fear. That’s not good for generating employee HEROes.

assessing, managing, and mitigating risk

From Kodak and IBM, we’ve seen the principles involved in using policy, education, and training to help with risk. We said this was the first of the two new jobs of IT security. The second is helping HEROes to assess, manage, and mitigate risks. IT people must advise those building with technology on what might go wrong, and what alternatives might be better.

Bruce Jones at Kodak does this. “I’ll tell you the risks and the likelihood that the risk will happen, and I’ll work with you to understand the potential business impact,” he says. “But at the end of the day, it’s a business decision. Our job is to assess and help manage the risk. Your job is to run the business.”

Bruce’s team manages this risk assessment process, but the business leader signs off on it. Kodak uses a multitier approach, where general managers or other senior leaders approve riskier initiatives and local managers approve smaller projects. But the focus is still on people to manage risks, not technology to lock everything down.
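To illustrate the kind of tiered sign-off Bruce describes, here is a minimal sketch in Python. It is not Kodak’s actual process; the scoring scale, the thresholds, and the approval tiers are assumptions made for illustration. IT scores likelihood and business impact, and the combined score determines how senior the approving business leader must be.

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    initiative: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- illustrative scale
    impact: int      # 1 (minor) to 5 (severe business impact)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def required_approver(assessment: RiskAssessment) -> str:
    """Route riskier initiatives to more senior business leaders."""
    if assessment.score >= 15:
        return "general manager or senior leadership"
    if assessment.score >= 8:
        return "department head"
    return "local manager"

if __name__ == "__main__":
    project = RiskAssessment("customer Q&A on Twitter", likelihood=3, impact=4)
    print(f"{project.initiative}: risk score {project.score}, "
          f"approve at: {required_approver(project)}")
```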

Again, as with policy and education, the focus is on people, not just systems. As Khalid Kark, Forrester’s expert on security, puts it, “You have to start where the information is, and that’s wherever your people are. It requires rethinking your security architecture.” Instead of a locked-down network, the new security architecture has three layers: people, risk assessment, and information protection.

  1. Build systems that recognize that people are the new perimeter. As Malcolm Harkins of Intel says, “The perimeter is anywhere your people are, from an application developer who left in a buffer overflow to an employee that clicks on a blog link that leads to a malware Web site. Our job is to help each person on the perimeter to understand where they stand and how to be aware of potential danger.”
  2. Give those people tools to manage risk. This starts with an IT-led security assessment: How dangerous is it? How likely is it to occur? Then an authorized business manager makes the decision, as at Kodak. Dell’s Manish Mehta, whom we described in chapter 9, helps Dell’s HEROes building social applications with best practices for safe social interactions. The chief information security officer in IT must work with HR, legal, compliance, and the business sponsor to make sure any business decision is made with full knowledge of the risks. But ultimately, it’s a business decision, not an IT decision.
  3. Use technology that protects information, not just networks. Traditional security technology secures the network. In a new job for IT, the new security technology must protect information, particularly applications and Web access. It must also deal with new devices, cloud computing services, a mobile and remote workforce, new media types like video, and new channels of communication like Facebook and Twitter. The security team will have to invest in data-loss protection technology from vendors like Symantec and McAfee, technology that keeps confidential information such as customer Social Security numbers from showing up in email messages (a simple sketch of this kind of content scanning follows this list). It will also have to look into new message-interception technology from start-ups like Socialware to allow employees to use Facebook and Twitter to answer customer questions while remaining in compliance with regulations.
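Here is a minimal sketch, in Python, of what the data-loss protection idea looks like at its simplest. It is not the Symantec or McAfee product, and the patterns are deliberately crude: scan an outgoing message for text that looks like confidential data, such as a U.S. Social Security number, and block the message before it leaves.

```python
import re

# Illustrative patterns only; commercial DLP products use far richer detection.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # e.g., 123-45-6789
CARD_PATTERN = re.compile(r"\b\d{16}\b")             # crude 16-digit card match

def scan_outgoing(message: str) -> list:
    """Return a list of policy violations found in an outgoing message."""
    violations = []
    if SSN_PATTERN.search(message):
        violations.append("possible Social Security number")
    if CARD_PATTERN.search(message):
        violations.append("possible payment card number")
    return violations

def send_email(body: str) -> None:
    problems = scan_outgoing(body)
    if problems:
        # A real deployment would quarantine the message and notify the sender;
        # here we simply refuse to send it.
        raise ValueError("Blocked by DLP policy: " + ", ".join(problems))
    print("Message sent.")

if __name__ == "__main__":
    send_email("Meeting moved to 3 p.m.")
    try:
        send_email("Customer SSN is 123-45-6789, please update the record.")
    except ValueError as err:
        print(err)
```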

This new security architecture frees the IT department from a no-win task. IT security has for too long been the department of “no,” the group that told business HEROes what not to do. Now it can be the department of “yes, and here’s how.” IT goes from policing to partnering with legal, HR, and business owners in assessing, managing, and mitigating risk, and matching that risk against business value.

when to say no to a HERO

We’re not naive. There are still times when you and your IT security team and legal staff must say no to an employee HERO, when her actions are just too dangerous to continue. But the goal should always be to analyze the real risks so that eventually you can say yes.

For example, the rules around customer communications in the U.S. financial industry now require archiving and retaining sales employees’ tweets and Facebook updates. So, until IT can find and implement a solution to intercept and archive those messages, banks are right to ban these activities for their sales and service people.
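Here is a minimal sketch, in Python, of that intercept-and-archive idea. The post_to_network() function and the local archive file are placeholders, not a real Twitter or Facebook API or a compliant retention system; the point is simply that nothing is published until a retention record has been written.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE = Path("social_archive.jsonl")  # stand-in for a compliant retention system

def archive_message(employee: str, network: str, text: str) -> None:
    """Write a retention record before anything is published."""
    record = {
        "employee": employee,
        "network": network,
        "text": text,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    with ARCHIVE.open("a") as f:
        f.write(json.dumps(record) + "\n")

def post_to_network(network: str, text: str) -> None:
    # Placeholder for the actual publishing call to the social network.
    print(f"[{network}] {text}")

def compliant_post(employee: str, network: str, text: str) -> None:
    archive_message(employee, network, text)   # retain first ...
    post_to_network(network, text)             # ... then publish

if __name__ == "__main__":
    compliant_post("asmith", "twitter", "Yes, the new account type is available today.")
```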

Here are four situations where you should probably just say no:

  • When your regulator has created new rules that you can’t yet comply with. This applies to many banking, brokerage, and insurance applications, as well as applications in some life sciences companies. It doesn’t mean shutting down internal deployments of social technology or cloud computing, but it does make external applications more complex.
  • When your customer contracts prevent you from sharing anything about the contract. This applies to external communications for the defense industry. Building a secure collaboration platform for you and your customer to use is okay, and may even be required. Internal social applications are fine. But employees must be highly aware of laws banning the export of intellectual property and the sharing of customer secrets.
  • When your legal team has issued an opinion that prevents it. This often happens when the legal team hasn’t yet figured out the legal risks. For example, are video conferences “electronic communications” like emails that have to be archived? Or are they “voice communications” like phone calls that don’t? U.S. law hasn’t settled the question, but you must. Your legal team is responsible for keeping the company—and you—out of court. Listen to them, but also make them part of your policy team.
  • When your HERO could be compromising customer or employee data. Privacy laws in the United States and especially in Europe and some Asian countries make it clear when any information about an employee or consumer must be protected. Here, you can get to yes, but it’s even more important to set up and train people on the principles.

how IT goes from prevention to support

In the HERO Compact, IT’s job does not end with keeping HEROes safe. The IT group must also shift its responsibility to making HEROes successful. This is how IT goes from the department of “no” to the department of “yes”—and it’s how HEROes go from supporting players to a force for innovation within companies. We describe all of that in chapter 13.
