CHAPTER 2

Can–Trust–Will

Can–Trust–Will is our model for hiring. While the rest of this book focuses on how to apply it to cybersecurity hiring, this chapter focuses on the model itself—with a bird’s-eye view of each piece. Call it a funnel, call it a multilayered process—in the end, each phase is a filter geared to get you to the right candidate.

An Overview of Can–Trust–Will

Each phase of Can–Trust–Will requires attention and accuracy to make the system work effectively and efficiently. There’s a reason the job search has three phases: it is not an outright hunt for an individual. The model begins by assembling and filtering candidate pools, not by searching for someone. The right candidate pools, correctly assembled and filtered, will produce the right candidates for final differentiation in a behavioral interview. Done correctly, the process of assembling and then filtering pools of candidates down to the individuals who should be interviewed will identify the particular individual you need to hire.

This may sound counterintuitive, but you’re actually not looking for an individual candidate until very late in the process. For most of your talent search, the focus should be on building candidate pools and then extracting from those candidate pools more refined, smaller candidate pools. Once you have a sufficient understanding of who you’re looking at, then you can start looking at individuals, but not before.

And the order of the pools is crucial. Can–Trust–Will begins with the easiest differentiators and ends with the most difficult. Why? Because easy is also inexpensive, while difficult requires time, expertise, and money. So this process begins by filtering the largest candidate pools for capabilities which are easy to assess and delays the more expensive, time-intensive data extraction for the smaller, more refined candidate pools. The last thing your budget needs is a hiring process which runs expensive assessments on large numbers of applicants who have no realistic possibility of getting a job offer. Start with Can.

Phase One—Can: This is the process of filtering the initial candidate pool for technical skill. Do they have the technical competence required for the job opening? This is simple, not because the job skillset is simple, but because the assessment is binary. Can the candidates actually do the technical stuff, or can’t they? And since it’s binary, the method doesn’t matter too much as long as it tests the skills you need. It might be a written test, or a practical assessment—hand them a device and watch how they fix it—or it might be an interview. But the sole focus is to determine technical skill.

And the key is to resist the temptation to do more “while the candidate is here.” The notion that you will save money and be more efficient by bringing everybody in one time and running them through everything from skills tests to behavioral interviews is an expensive mistake. And it also leads to making The Big Mistake which we address in detail in Chapter 4.

Remember, you’re building pools. And you’re building pools for the purpose of quickly, accurately, and inexpensively filtering for the individual who will actually succeed in this specific role at your company. The first pool necessarily includes all the people in the world you could find who are interested in this job, and consequently, your first step is to separate out the people who can actually, technically, do the job. And you’re going to do that binary assessment as quickly and inexpensively as you possibly can. And if that means candidates come in just to do a short interview or a quick test that is specifically focused on the last time they debugged a Cisco stack, or to tell a story about when they did this, that, or the other technical thing—that’s fine. Remember, it’s a high-volume process. You’ll be assessing many candidates, so the key is to ensure it’s very quick, very simple, and very focused on technical competency.

And once you have your smaller pool of candidates who Can do the work, you move on to identifying those you can Trust. Most of the industry already hires based on Can, and consequently, many companies will already have some process in place to make the assessment. Do that, but recognize you’re not done at that point. Companies which only hire on Can quickly face high turnover because they don’t understand anything about their candidates other than their technical capability. And technical capability, by itself, does not correlate to job success. It is crucial to understand who the candidate is—can he or she be trusted, and what behaviors will lead to success in the role? To gain this deeper understanding, a deeper dive into both background and behavior must be completed. Background is Trust; behavior is Will.

Phase Two—Trust: Once you have identified the subset pool of candidates who can actually do the work, the process of understanding “who are these candidates” can begin. Again, the key is to do the simple, inexpensive differentiation first so that the deeper, more expensive, and time-consuming examination can be done on fewer people. Understanding each candidate begins with the simplest forms of due diligence. And although Trust encompasses everything from basic public records checks to a determination of ethical code and moral compass, the same layered review which structures the overall Can–Trust–Will process also structures the variety of due diligence checks within the Trust phase.

Determining how deep to go and how much time and money to devote to a given candidate in the Trust phase is driven by the specifics of the job description, and in particular the access which comes from each job and how much harm an employee can do to the company from that position. For example, someone who checks identification and enters guest information into the security system has much less access and can do much less harm to the company than a systems administrator who has root access to that same security system. Consequently, a criminal records check is likely to be sufficient for the employee who enters data while a much deeper background investigation is warranted for the systems administrator.

The key is, it’s layered. Simple and inexpensive records checks are done for less risky job roles, and more complex, time-consuming, and expensive background reviews are done for higher risk job roles. The time and resources dedicated to each individual vary but are driven by the risk inherent to the job role. Ethics and moral compass are relevant here because they are relevant to corporate culture and transition us to the final filter. Moral compass and ethics are some of the drivers of behavior, because what you value influences the things you do. In that way, we transition from the deep end of Trust to the shallow end of Will. And Will matters, because it predicts behavior, and behavior leads either to success or failure.

Phase Three—Will: This phase is a deep dive past the generic reliability characteristics which are extracted in the Trust phase. In the Will phase, there are many fewer candidates, and the focus is now on behavior. What Will the candidate do in the future? Behaviors are the characteristics of a person which drive that person’s actions. And that’s really the core purpose of this hiring system: to identify an individual who has (or can learn) the technical skill and will also behave in a manner which enhances the functioning of the company, regardless of whether it’s an ordinary day or one where everything comes crashing down and the stress level is through the roof.

The processes used in the Will phase are based on two premises: first, there is an observable difference between what a person is capable of doing (what they Can do), and what they actually do when they show up for work (what they Will do). Furthermore, what a person Will do is not linear. The fact that a person will do exactly what’s needed under normal circumstances does not mean that they will continue to do what is needed when stressors arise. Will is complex because it is affected by circumstances, and circumstances change, sometimes very quickly. Whether there’s a crisis situation at work or they’ve got a parent in the hospital—what a person actually does under stress varies without relation to what they do under normal circumstances.

Second, the best predictor of future behavior is past behavior. And the key to understanding past behavior is the behavioral interview. The process is to ask the few candidates who have passed all of the filters and made it through to the interview to share stories from their past which will demonstrate how they previously behaved in situations which are similar to what you anticipate they will encounter when working for you. In evaluating the stories the candidate shares, you will be able to assess whether the candidate possesses the behaviors which correlate to success in the specific job role at your company. The behavioral interview, the final phase, is time-consuming, expensive, and requires a good level of skill from the interviewer. Consequently, it should not be done for a large volume of candidates—only for the candidates who have a reasonable probability of success, and only at the end of the process, immediately before the hiring decision is made.

In addition, behaviors are extremely job specific, so the behavioral interview must focus on the specific set of behaviors which correlate to success in the job role. To properly prepare for a behavioral interview requires that you know, at a granular level, which behaviors correlate to success and which to failure for each specific job. There is no universal set of behaviors that a cybersecurity or IT person needs. Thus, the Will phase requires a thoughtful approach to drafting a job description, including both Can and Will factors—capabilities and behaviors—for each role. Even though it requires resources and skill, the behavioral interview allows the interviewer to understand and analyze the behaviors the candidate possesses—the good and the (potentially) negative—and to differentiate between what the candidate Can do and what they Will do, thereby driving the best possible hiring decision.

Now that we’ve described the Can–Trust–Will structure, and before we take a deeper dive into each component, we’d like to spend a moment on how to assess. Many of the experts who have contributed to this book are concerned with learning, particularly the value of candidates who have the ability to learn quickly, and those who are tinkerers, who have the curiosity to figure out complex problems, and who are unafraid to break things so they can figure out how to fix them. The question generally posed is related to high-potential candidates. What is the best outcome for candidates who fail the technical skills evaluation, or even some components of the behavioral interview? Is it best to summarily reject these candidates, or is there a better way?

Our better way is to take binary outcomes (Can or Can’t) and divide the rejection outcome into those who can probably be trained and those who probably can’t. Welcome to Failure–Coachable, Failure–Noncoachable, and Success.

An Overview of Assessment: Failure–Coachable, Failure–Noncoachable, and Success

So far in our overview of Can–Trust–Will, we have described the things which must be assessed and the order in which they should be examined. Now we take a minute to discuss how to assess. Running through the system doing binary evaluations will result in some candidates being accepted but many others being summarily notified that they do not meet the requirements and will not be considered further. The concern is whether such a strict system will result in good candidates being rejected when just a bit of improvement or training would produce an excellent employee. How do we ensure we don’t miss good candidates who just need an assist?

Binary assessment—pass/fail—is efficient, and not digging deeper is a defensible decision, particularly in the Can phase where there may be a large volume of people who meet the standard and are capable of doing the technical work. In that case, digging deeper to assist marginal candidates is probably not cost-effective or worthwhile. But if there is not a sufficient volume of candidates with the technical skill, or if you are in the Will phase and are disqualifying every candidate, it is worth digging deeper to determine if a candidate can be trained or coached to success.

This deeper assessment requires an evaluation of potential. If a candidate cannot or will not do what is necessary right now, is it possible they will be able to in the future? Is a currently unqualified candidate trainable? Can the deficit be remedied by learning? Is the candidate coachable? To fully assess potential, there are two aspects to consider. First, can the particular candidate learn? Not everyone can learn and not everyone is willing to learn. Furthermore, the reality is that not everybody can learn quickly, or quickly enough. Second, and more importantly, is the deficit something which can be taught? Many deficits can be corrected through training, but there are some things which people are bad at which probably can’t be taught. Potential depends on whether the candidate is a learner and on whether the specific deficit can be remedied by training. And that is the beginning of the dichotomy between Failure–Coachable and Failure–Noncoachable. It’s a question of identifying weaknesses but also making a judgment regarding potential to correct the weakness. Can this person learn this thing or can they not? The simplest example we usually turn to is: I can probably teach you how to drive a forklift, but I probably can’t teach you how to be on time. And that’s the difference between Failure–Coachable and Failure–Noncoachable—to make sure that you are capturing the people who are going to be great if they have training, and separating those from the people who are not going to improve from training.

A more complex example is something like writing a meeting agenda. Some people can learn how to do it, but some people actually can’t. If you have a person who loves the chaos of an unstructured conversation, you can tell them that agendas are important, and you can explain that a free and easy conversational style is great in a poker game but doesn’t produce the results you need in a professional business setting; even so, they will probably never learn how to write an agenda and keep decision making on track during a business meeting. It may be that they simply can’t do it, or it may be that they won’t do it because they’d rather have the free conversation they like instead of the structured conversation the business needs. But since it’s unlikely to happen, regardless of the cause, their inability to write a meeting agenda can’t be solved by training, and the potential for their success in such a role is very low. They are Failure–Noncoachable. On the other hand, if you have a person who has never structured a meeting agenda before, and it takes them six hours to do what should take only twenty minutes—but they actually do it—that person has learning potential. What we have seen is that when such a person understood the process and learned, the first attempt took six hours, the second took only three, and by the fourth iteration they were doing it faster than the supervisor. That’s a person with high potential for success in such a role. They are Failure–Coachable.

Spotlight: Digging Deeper Into Coachable Versus Noncoachable

The distinction between coachable and noncoachable can be delicate and layered and requires thorough analysis. We were recently called to assist a client with a new hire who had been identified as a high-potential leadership candidate and immediately placed into the client’s leader development pipeline. Spots in the program were limited and highly coveted, and this employee—who had been assessed as a top performer—was now failing out. The company wanted to know why. We conducted a 360-degree interview and evaluation process which revealed the employee was highly intelligent, capable of learning the system materials, and interested in the benefits of promotion. However, when presented with the additional workload required by the leader development pipeline program, he would consistently delay doing the assignments, request an extension, and then, when pressed, would present reasons for failing to complete the additional work. We quickly realized his reasons were carefully curated to fit within HR policies which made it impossible for the company to take corrective action. So he stayed in the program, taking up valuable resources while making no progress toward completion. Our evaluation concluded that the candidate possessed the intelligence and skills to succeed in the program but was simply unwilling to do the work. While we were impressed by his highly skilled passive avoidance techniques, we assessed him as Failure–Noncoachable.

Further illustrative of this concept is a situation where an executive coaching client was struggling in her role as Chief Operating Officer at a medium-sized company. She had been successful in a variety of developmental roles, which had led to a promotion. Her boss wanted to “get her some help” and to remedy “her tendency to micromanage.” After a few months of coaching sessions, we found she didn’t actually micromanage: she did all the work. As her career had progressed, she succeeded by working harder than her contemporaries, coming in earlier and leaving later, and taking pride in being a “workaholic.” While that strategy had been effective in the earlier developmental roles, it failed her in the Chief Operating Officer role because there were not enough hours in the day to do everything herself. Things progressively fell apart as the tasks for which she was responsible exceeded her bandwidth. Unfortunately, she continued to insist that it was her job to do everything. She believed that a fundamental part of her role as an executive was to come up with the best ideas. If her staff came up with an idea, she used it—but considered it to be a personal failure. In her mind, and her self-image, she had the most experience and skill, so she should be coming up with everything. If someone of lesser skill and experience solved a problem, it was because she had failed. Consequently, she was incapable of focusing on ensuring operations were completed because she was fully engaged with doing everything. And she could not understand why she was being criticized when she was working so hard. Because of this perspective, she was fundamentally unable to understand why operations were failing. She was also incapable of understanding that her staff was unhappy because they had to wait for her authority to do even the most basic tasks, which meant they waited for extended periods to get basic approvals because she was so busy. We assessed her as Failure–Noncoachable.
A person either has the ability to acknowledge skill in another without being threatened, or they don’t—she did not. Allowing others to succeed lowered her self-image. A person with this attitude may be able to change, but it usually comes as an epiphany; it’s not something that can be taught.

On the opposite end of the spectrum, a client came to us from the culinary industry. She had landed a job in a prestigious restaurant right out of culinary school and was building a successful career in this male-dominated field. However, she was preparing to quit because of how frustrating her work situation had become after a recent promotion. She had been given supervisory responsibility for two entry-level chefs, and they would simply not do their jobs. As a consequence, our client was regularly disciplined for the errors caused by her charges; she found that all she wanted to do was scream at them even though she knew it wouldn’t help. Through the course of a ten-week coaching program, we began a process of learning about the details of her situation and suggesting strategies to try. Nearly all of our guidance was focused on helping her to understand the behavioral characteristics of her charges and to develop strategies which caused them to face immediate negative consequences for bad behavior and immediate positive consequences for good behavior (all without her needing to scream). One of the things which impressed us the most about this client was how hard she pressed us for explanations about our advice and for detail regarding how to interact in alternative scenarios. Her most common question was: “Yeah, but how … ?” She wanted to succeed, and as some of the strategies began to work, she deepened her questioning. In short, she had a high level of willingness to try a variety of different things, evaluate results, then refine, and refine, and refine again. When we picked up the phone for our week eight conversation, she announced she had been promoted to middle management. A year later, she was promoted again and this time to sous-chef. Going from screaming frustration to a double promotion simply based on a willingness to try anything new to see if it worked, and to keep refining until it did work, is Failure–Coachable.

This dichotomy can now be applied to one of the most common issues in cybersecurity: communication. Specifically, the use of technical terms—the notion of “speaking tech” versus “speaking people.” If you have a person who can both “speak tech” and “speak people,” but they won’t lower themselves to “speak people,” the issue is Will—and the question to answer in assessing potential is whether the candidate is responsive enough to a coaching conversation to change their opinion and choose to “speak people” when it’s needed. If the person responds positively to a discussion of when technical jargon is useful and when it creates problems, they are Failure–Coachable. They may need a series of reminder conversations to reinforce their improvement trajectory, but the problem can be remedied through coaching because they are willing to change and “speak people” when it’s necessary. On the other hand, if the response is “Yes, I can ‘speak people.’ But I shouldn’t have to. So I won’t,” that choice is Failure–Noncoachable. Such a person may change his or her mind over time, particularly if a negative consequence is involved, such as reassignment to a role which does not require interaction with nontechnical personnel. But training isn’t going to help, so training is a waste of time and resources. Another example is teamwork: a person who knows how to operate in a team environment will rate high on a teamwork personality assessment—but what matters is whether, both day-to-day and when things get stressful, they actually want to work in a team.

And that raises one of the high-impact issues with Failure–Coachable in the cybersecurity world: the time and expense of training and coaching. In addition to the expense of some types of technical training, the industry is currently struggling with an inability to reliably predict aptitude for cybersecurity training and skill development. This raises a complex set of decisions which are driven by a variety of factors. Employer size and financial capability are overarching factors which drive different companies to handle candidates who lack skill and training in different ways. A large company with a great need for entry-level employees will be more able to place new hires into a training program and be minimally impacted by those who fail out or discover they don’t have the aptitude to learn the skills necessary to perform jobs in the cybersecurity realm. Smaller companies don’t have the budget or time to run an in-house training program. Consequently, a Failure–Coachable candidate has a better chance of being hired into a low-level job with a very large company and a very poor chance of being hired by a two-person firm that needs an expert who can immediately begin to serve the client they just landed.

The concept of distinguishing between Failure–Coachable and Failure–Noncoachable has broad application to both hiring and job performance in the technical world, in general, and the cybersecurity arena, in particular. In general, skill can be taught, attitude can’t (this goes back to the forklift/being-on-time example). Why is this important? Because, particularly in cybersecurity, not every applicant will come to you fully developed in the way that you need them. If your process disqualifies everyone who has a high level of failure indicators, you’ll not only miss people you should hire, you’ll miss the opportunity to team build through training. But it’s critical when differentiating between Failure–Coachable and Failure–Noncoachable to ensure you only consider people with attributes which can actually be trained. Hiring an otherwise great candidate with the hope that they will somehow become inspired to show up on time (for example) is futile. On the other hand, hiring someone who has passion for the work and is eager to learn but does not have the technical skill to do the job is not really a risk. It’s a training issue to be addressed during onboarding. The key is to distinguish between those who will progress through training and those who will not. Marie Chudolij,1 a Senior Program Manager for Siemplify, a security operations provider, expressed it simply:

Marie: You can teach people how to do things, but you can’t teach people how to behave or change their personality. You can review a resumé and it looks like they’re going to be a rockstar, but when you have an opportunity to sit down with them, the personality may not necessarily be quite right. I honestly feel that personality is going to be far more important than your previous experience. Because I can teach most people how to follow new processes or work with new programs, but I can’t change who they are.2

As Marie explains, there is a fundamental difference between what a person can do and how they will behave. In fact, the Failure–Coachable category was created to address what to do with great people who need training. Failure–Noncoachable is the repository for those who have the skill but lack the ability to implement through interaction. Bill Brennan,3 the Senior Director of Global Information Security at Leidos, reinforces this point:

Bill: I can’t change your gray matter. You are who you are. I can probably modify some of your behaviors, but your experiences make you who you are. I can teach you technical skills. I’m confident that if I think you’re smart and you’re capable and you have the right nature for the role that we’re looking for, I can teach you the technical bit.4

Bill’s observations match our experience. Who you are is driven by what you value, and your values dictate your behavior; essentially, how you choose to act, or more precisely, how you choose to interact. The key is to identify, as early as possible, the need for training. That’s why Can—capability and skill—comes first. And why Will—what Bill calls “gray matter”—is the final step. Capability and skill are much more susceptible to correction by training; how a person chooses to act, much less so. Amanda affirms this view, explaining why cybersecurity is so unique in this aspect:

Amanda: In my opinion, information security is ultimately risk management. Sure, you need to have the technical understanding and layers of controls and tools to identify, protect, detect, respond, and recover, but at its core, information security manages the risk to the confidentiality, integrity, and availability of an organization’s information. That understanding sets you apart in infosec; you can learn the technical aspects. It’s those innate traits of wanting to learn, wanting to solve the puzzle, an ability to communicate clearly, keep calm under pressure, and a strong moral compass—that you need to be successful in information security.5

Some may be uncomfortable with the term “failure” as unduly pejorative. In the absence of an equally clear term, we believe “failure” is appropriate, particularly when divided into “Failure–Coachable” and “Failure–Noncoachable.” But there is another reason, one relevant to security in general and to cybersecurity in particular. This is a high-stress world. We expect the people in these jobs to act quickly, clearly, and accurately when everything is crashing down around them. We expect them to be able to put the business first and themselves second. We filter for the ability to give bad news to someone much higher in the chain of command, someone who can damage their career or fire them, because accurately describing how bad things are is crucial to solving the problem and is also simply the right thing to do. In essence, we are looking for resilient and mature professionals. As Nick Davis,6 the Director of Information Security Governance Risk and Compliance for the University of Wisconsin System, explains:

Nick: When I interview people, I look at their outlook on things. Are they optimistic in nature, are they kind, or do they seem aggressive? Do they seem thoughtful, do they seem anxious? And I look for those general personality attributes because information security is a frustrating field to work in. I’m looking to see if they can maintain a calm demeanor, can they stick to their convictions while they’re maintaining their calm demeanor. You don’t want people that are just people pleasers. You want people who can remain calm and polite, but people that don’t move easily from their convictions.7

One of the markers of resilient and mature professionals is to not be derailed by failure, to be able to look past failure, to learn from it, and to be hungry to “fail forward.” If you can’t get past having failures (because they will happen to everyone in this field), then you probably will not succeed in the high-pressure environment of cybersecurity. Differentiating between Failure–Coachable and Failure–Noncoachable is actually how we distinguish between a candidate who is comfortable with the discomfort of failing forward and a candidate who should look for a rewarding career outside of cybersecurity.

Can They Do It?

The Can of our hiring model is specifically, one might say narrowly, focused on technical skill. The specifics of Can may be uncovered through the resumé, a preliminary interview, or some type of skill testing protocol, but the key is to remember that since it’s the first step, it’s the biggest candidate pool and therefore the most expensive to evaluate. To ensure you find the employees you need at the end of the process, and to avoid unfairness and, potentially, litigation, whatever evaluation you do for one candidate in each pool, you must do for all of them. Cybersecurity skill evaluation takes many forms. Government competitions, commercial assessments, homegrown corporate testing, skill interviews, cursory resumé reviews (looking just for skill capability)—it depends on the sector, size, and cybersecurity maturity of the hiring entity.

One of the most difficult challenges is having the self-discipline to use resumés properly in this initial phase. So many processes begin with a stack of resumés and quickly devolve into separating them into two stacks: some form of “interview” and “discard.” This is fundamentally a waste of time because it does not extract skill data and also leads to a high volume of interviews (usually, all of the “not sures” also get interviewed). If there is a resumé review in the Can phase, it simply must be confined to confirming whether the resumé provides data which reveal whether the candidate has the technical skills necessary for the job role. All the reviewer should look for is degrees and prior job experience, nothing more.

Questionnaires sent in advance may be used in place of a Can-based, in-person interview. Many commercial vendors offer quizzes tailored to general job descriptions (e.g., malware analyst, IT security, incident responder). These evaluations range from twenty-five-question quizzes to scenarios requiring analysis that may take four hours. Generally, advanced question sets are best suited to mature cybersecurity organizations with large-scale operations, because building question sets that correlate to job performance requires a staff of HR professionals with the resources to conduct the underlying research. Such testing may also be done on-site, if the employer prefers; it’s the same evaluation.

A key consideration here is the difference between thought processes, which are essentially technical skills, and behavioral characteristics, which are not. The confusion can arise during the Can evaluation of highly technical skills for some cybersecurity jobs. Let’s begin with a simple example. A candidate who is highly suited to reading network traffic logs all day is suited to it because they love the structure and process that comes with reading such logs. Reading these logs is fairly simple, and most people with a bit of technical training can do it. But only a detail-oriented person who is most comfortable following procedure is willing to do it all day long. Consequently, the differentiator for a job reading network traffic logs all day is not the technical skill of being able to read them; it’s the behavioral characteristic of wanting to do it and finding satisfaction in doing it that correlates to job success. This is an issue to be addressed in the behavioral interview during the Will phase, not indulged during the initial Can phase.

A more difficult example is the evaluation of the thought process that is part of the technical skill set for data analysts. The challenge is that questions designed to unearth whether a candidate has the problem-solving/analytic thought process required for the job role may mimic behavioral interview questions when, in fact, they are not. The confusion arises when the interviewer, quite correctly, asks the candidate to recount a situation which demonstrates her thought process. The difference is in what the interviewer is seeking. For Can, the interviewer wants a story about a thought process; for Will, the story needs to be about a behavior. How you think is a skill, not a behavior. How you interact is a behavior, not a skill.

And while both can be explored, particularly when operating at a sophisticated level, the difference is found in the form of the question. The Can question is: “What did you do when you detected malware spreading through a certain sector, assuming this operating system, this software, this [fill-in-the-blank]?” This question wants to know what your brain did. In contrast, the Will question is: “Tell us about a time when you uncovered malware on a senior executive’s device; what was your incident reporting strategy, and how did you work with your team to stop the spread?” This question seeks out how your feelings impacted your behavior. Alexi Michaels,8 a trainer-developer at BlackBag Technologies, a computer forensics software company, recalls:

Alexi: My boss did not ask me super technical questions—he said he did not like to judge based on putting someone on the spot. But instead he did ask technical questions on what I would do in a certain situation, or how I would conduct a certain type of analysis.9

The Can-based interview may also include on-the-spot assessments in the form of real-time scenarios, as Martin Durst,10 a Senior IT Support Specialist at Drexel University’s Kline School of Law, recalls from his own interview experience:

Martin: When I got to the [interview] room only the hiring manager was there. My future team member popped in, he said, “This is perfect,” and took me to the office of the Associate Dean of Students, who was having issues with his computer. So my now senior teammate asked me to troubleshoot the Dean’s computer. Even though I don’t think it was planned this way, it worked out because it became a practical part of the interview. His computer was slow, and I was able to fix it. The Dean, the hiring manager, and my teammate were all very happy, and the rest of the interview went very smoothly.11

Whether the Associate Dean’s computer slowdown was premeditated or not, the scenario gave the hiring manager a keen glimpse into Martin’s skill set, problem-solving ability, and mindset in handling a problem, going above and beyond the Can phase in a few respects. A whiteboard session may serve the same purpose: to help the hiring manager see the candidate’s ability, but also her thought process and her reactions to stress. As Amanda explains:

Amanda: We conduct whiteboarding sessions as the final interview round for candidates. Nothing overly complicated, but we found that asking a candidate to whiteboard a few simple scenarios helps distinguish the candidates who just know the buzzwords. It also provides insight into how an individual conducts themselves under pressure and how the candidate problem solves. Something I find valuable during an interview is a candidate’s willingness to say “I don’t know.” Humility and self-awareness are important personality traits. Not to just say “I don’t know” and wait for someone to give you the answer, but to look for the answer, to talk through their thought process, and knowing when it’s time to ask for help. The whiteboarding session has become essential to our hiring process. It’s not our intention to get a “gotcha,” in fact, I think standing up in front of three people with a marker and a board is pressure enough. The exercise showcases the candidate’s skill level in what would be required in their day-to-day, while also getting a better understanding of their personality, rather than the typical formality of an interview.12

The Can phase of our hiring model necessarily focuses on technical skill, and both Alexi and Martin recount different ways in which data can be extracted to reveal a candidate’s level of skill. Amanda’s example shows something equally important: even after a candidate has demonstrated Can, technical skill-based questions can still be part of the later Will review. Will does not have to be assessed in a vacuum. The whiteboard is the final stage of Amanda’s selection process, and while it tests for differential data relevant to behavioral characteristics, it does so in a unified process which places the candidate in a situation where they must show how their technical skill and moral compass interact with their behavioral characteristics under stress, thereby revealing the whole person. By placing the whiteboard session at the end, Amanda conducts a deep, time-consuming, and expensive review, but only for the very few candidates who are most likely to succeed. It is wonderful confirmation of the efficiency and cost-effectiveness of the Can–Trust–Will process.

Should the Organization Trust Them?

Many people reach a point in the due diligence and insider threat discussion where they say, sometimes in frustration, “But I have to trust my people. Without trust, we can’t function.” And as far as that statement goes, it’s correct; but a deeper understanding of trust in business is warranted. How do we trust in business? How do we build trust in business? And how do we know when we can trust in business? The answer is as simple as it is counterintuitive.

If we look at Trust from a due diligence standpoint, we want a general review of trustworthiness based on past behavior. Does this person lie, cheat, or steal? We answer that question by looking to the past to see whether they’ve lied, cheated, or stolen before. If we are asking something deeper, whether this person will be a negative disruptor, the question becomes a hybrid. Can I trust this person with my innermost thoughts as we problem solve, or, as soon as I say something they disagree with, will they pounce on me? That is a trust interaction, and it quickly becomes a question of how you have a trust relationship with another person. Here, we distinguish between interpersonal trust and trust in business. The interesting thing about trust in the business and leadership sense is that whether a person has a trusting relationship depends on them, not on the person they are trusting. You see this in executive coaching and leadership seminars when they talk about micromanaging.

Trust does not arise because a person is trustworthy, and it does not come when a person demonstrates trustworthiness by establishing a track record of good behavior. Trust can be immediate and has nothing to do with the person being trusted. For business, trust in another person exists when you believe they cannot harm you or your business. When you believe there is nothing a person can break or screw up that you can’t fix, you are able to give that person leeway to try, fail forward, and learn. In business, that’s trust. Part of true team-building actually starts with the idea that as I interact with my teammates, I know I can say anything because I’m in an environment that doesn’t punish me for saying it. And it’s also an environment where I can try things because there’s no way I can create irreparable harm. Amanda explains how trust allows her team to effectively get the job done:

Amanda: Our team is composed of members with diverse experiences and perspectives. While most of our team has worked in IT, some members worked in customer service, project management, risk management, and some in different industries. Each member’s unique perspectives are amplified by our trust for one another to do our best, do what’s right, and meet our collective goals. Transparent communication is everything to our team. We’ve often referred to our team as the “circle of trust”—a reference to Robert DeNiro in Meet the Fockers—we need to trust each other in order to be effective as a team.13

This idea is further explored in Chapter 5, on “Trust and Teamwork.” If, on the other hand, you think a person can create problems that will be difficult for you to solve or which will create irreparable harm, then you won’t trust that person to act without oversight and their track record of good behavior will have no impact on your micromanaging of their work. Business trust rises and falls on your perception of your own ability to handle problems created by others. If you have difficulty with trust at work, the issues to be addressed lie within.

Spotlight: Due Diligence Vetting

Due diligence is not static; it’s a process. And like most processes, to be effective it requires a strategy. So, where do you start? Due diligence has three essential components. They are trade-offs, and there is no way around them. Use them to decide what’s important to you and your company, and then make the value judgments needed to build your strategy. The three components are speed, accuracy, and cost.

Quick due diligence processes can be either cheap (a down-and-dirty glimpse) or expensive (a lightning-fast return), but they are generally less accurate because they are a single snapshot in time. Remember, a records check or data pull is only as good as the database being pulled from, and a single data pull begins going stale as soon as it’s done. Anything that happens immediately after your data pull will not be in your results. Slow due diligence processes can also be either cheap (slow results) or expensive (an extensive deep dive), but they will often be more comprehensive because they tend to pull data from several sources over the time frame of the process. In addition to data pulls, a longer process can include reference interviews, work history confirmation, and other deep-dive investigative components.

Expensive due diligence is not necessarily high quality. Sometimes expensive just means labor intensive. And sometimes, it just means expensive. Accurate due diligence does not necessarily mean expensive. A cheap data pull from an aggregated database which is highly likely to have what you are looking for is both cost-effective and accurate. And cheap due diligence is not necessarily low quality. The low-cost due diligence services usually rely on a data pull from one of the three major data aggregators.

Once you begin to see the level of complexity caused by the trade-offs of speed, accuracy, and cost, you’ll understand that expert assistance is a necessity to ensure you don’t hire someone who ends up damaging all you have built. Consulting with a reputable firm is a good way to ensure your due diligence process is properly assembled.

Spotlight: Data Aggregators

Knowing who the big data aggregators are and understanding what’s in their databases can assist in your strategic decision making.

As of this publication, the three main data aggregators are TransUnion, Thomson Reuters, and LexisNexis. Many of the companies that offer due diligence and background check services use one or several of these aggregators. Each database has positives and negatives because each aggregator gathers different types of data and refreshes its database at a different rate. Most important is to understand that aggregated data are second-hand, so you need to know how accurate they are. For example, if an aggregator advertises that it collects ninety percent of all available data, you risk the data you need falling in the ten percent gap. Next in importance is the refresh rate. Is the aggregated database updated daily? That’s great, but also very expensive. Is it updated monthly? Quarterly? That gap means that even if the aggregator collects one hundred percent of available data, it still misses the most recent data, and will continue to miss it until the next refresh. Some may think this is a binary choice between buying the most expensive service and doing nothing, but it’s not. The key with due diligence records checks is to ensure you know what you’re getting for the spend.

If you know where the gaps in the data are, you can make a choice about whether and how to fill those gaps with other due diligence services, and you can deploy those services more effectively in executing your strategy. The key is to understand the data and how they are being searched so you know how to assess the result. That means asking questions of the search firms you consider hiring. And finally, understand that when you hire a search firm that uses aggregated data, it adds a step: a third party is searching a second-hand database. Quick? Yes. Cheap? Certainly. But also potentially inaccurate. When looking at services which use aggregated data, remember the trade-offs: speed, accuracy, and cost.

What’s the next step beyond firms which use aggregated data? Many due diligence and investigative firms offer services which conduct public records checks on individuals. Generally, the service needs you to select a time frame and to provide the places where the candidate has lived and worked during that time frame. A basic example would be a five-year check: the company would conduct public records searches by contacting the county and city clerk offices of every jurisdiction where the candidate has lived and worked in the past five years. It’s easy to see that searching the public records of each such jurisdiction is slower and more expensive, and the farther back in time you go, the longer and more expensive it becomes. But it’s more accurate.

Going beyond aggregated data and public records checks leads to what many due diligence firms call Executive Background Checks or Full-Spectrum Due Diligence. These services can include a variety of data pulls and in-person investigative work, including social media analysis, interviewing references, confirming degrees, interviewing associates, colleagues, and neighbors. What can be done to examine a person’s background is limited by law, money, and how much time you have to do the investigation.

The key when building your due diligence process is to begin by setting a strategy which meets your needs. Setting your strategy begins with understanding the trade-offs between speed, accuracy, and cost and deciding if a check-the-box process will meet your requirements. Most would reject a check-the-box program on principle, but we take a minute to address it here because for some companies, it actually is best. In highly regulated industries and in government contracting, particularly where some of your employees will need to hold U.S. Government security clearances, your due diligence process is already in place because you must comply with regulations. If your company is subject to regulation, the only question is whether there is a business case for doing more than the regulations require. In most cases, the regulations are comprehensive enough, and in some cases, expensive enough, that devoting even more resources to due diligence is not worthwhile. Consequently, heavily regulated companies don’t need to do strategic planning beyond developing a system which efficiently implements the regulations which control due diligence for their industry.

For unregulated industry, the strategic planning process is more substantial. Once you understand the trade-offs, what’s next? How do you build a cost-effective due diligence program which gives you what you need? It begins with deciding what harm your employees can do to your company and to your business model. It’s also important to recognize that the potential harm an employee can do varies with the employee’s role. The more access an employee has, the more harm they can do, and consequently, the deeper (and more expensive) your due diligence process should be. Due diligence is not a one-size-fits-all process. A warehouse employee can harm the company through theft and negligent operation of warehouse machinery, while a salesperson can harm the business’s reputation through inappropriate interaction with customers. And a systems administrator has the ability to compromise your entire way of doing business, everything from stealing intellectual property to disabling and shutting down your electronic systems. The due diligence issues for the warehouse employee can probably be adequately addressed with a data pull from a firm using a data aggregator. But the systems administrator should be examined in much more depth before being trusted with access to the core of the business.

Consequently, when you go through the process of establishing the capabilities and behaviors which correlate to success in the specific job role, it is also important to determine how a person in that job role could harm the company and build a due diligence process that will collect data which correlates to the risk of a person engaging in that behavior. On the simple end of the scale, it is the very basic notions of honesty: who will lie, cheat, or steal, and who won’t. On the complex end are deeper behavioral assessments which correlate to deeper security issues including insider threat, theft of trade secrets, and economic espionage.

On the simple end, it’s simple: past behavior is the best predictor of future behavior. The most basic way to identify those who are likely to lie, cheat, and steal is to run criminal records checks, financial records checks (bankruptcy, small claims, and other financial disputes), and degree verification. Often, these simple records will reveal whether a person is honest or whether, under pressure, they resort to self-interest at the expense of others. And while we’re still on simple things, it’s important to notify candidates of what records will be checked and what indicators you’re looking for. If you disqualify a candidate without having notified them before the records pull, they may have a legal remedy. The key is simple: build your due diligence process to correlate to the behaviors you are seeking, and notify all candidates in advance so nobody is surprised. Your goal is to identify and hire the people you need, not to play “gotcha” with candidates who have taken the time to apply for work at your company.

For the complex end, a single data pull is not enough. You need to develop expertise in behavioral trait analysis or bring in expert assistance. To determine if a candidate poses an insider threat risk, you need to know what the threat is and what the indicators are—the “flags.” And this requires expertise to ensure the flags you designate actually correlate to the risk you want to prevent.

At this point, it should be clear that a layered and dynamic process is required for effective due diligence. For some roles, a simple and quick records check process is all that is necessary to determine honesty. For roles with deeper access and correspondingly greater ability to cause harm, a records check is just the first step in a much deeper and more comprehensive review of a person’s past behavior and may include predictive analytics to determine if a candidate is suitable for a high-trust position. With this understanding, your due diligence program will develop layers, with the simple and inexpensive work being done earlier in the process and the deeper, more complex data pulls and analysis being done toward the end. In fact, some of the deeper work is actually done after hiring and will become part of your insider threat program.
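For readers who think in code, the layered process just described is, in effect, a filter pipeline: run the cheap checks on everyone, and reserve each more expensive layer for the candidates who cleared the one before it. The sketch below illustrates only the cost logic; the `Candidate` structure, the check names, and the per-candidate dollar figures are all hypothetical, not drawn from any real vetting product.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    flags: set = field(default_factory=set)  # hypothetical risk flags surfaced by checks

# Illustrative checks: each returns True if the candidate clears this layer.
def records_check(c):
    return "criminal_record" not in c.flags

def deep_review(c):
    return "insider_risk" not in c.flags

# Layers run in order: cheap and broad first, expensive and narrow last.
LAYERS = [
    ("aggregated data pull", 25, records_check),    # cheap, run on the whole pool
    ("full-spectrum review", 2500, deep_review),    # expensive, run on survivors only
]

def vet(pool):
    total_cost = 0
    for label, cost_per_candidate, check in LAYERS:
        total_cost += cost_per_candidate * len(pool)
        pool = [c for c in pool if check(c)]
        print(f"{label}: {len(pool)} candidates remain")
    return pool, total_cost

pool = [Candidate("A"), Candidate("B", {"criminal_record"}), Candidate("C")]
survivors, spend = vet(pool)
# The cheap layer screened all three candidates; the expensive layer ran on only two.
```

Because the expensive layer runs only on survivors of the cheap one, total spend stays far below what running every check on every applicant would cost, which is exactly the economic argument the chapter makes for layering.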

One of the keys to remember with due diligence is that we are dealing with people, and people change. Your assessment is only as good as your data, and as a person changes, the data changes with them, but the data lags. Consequently, while your initial hiring due diligence process can often be done with one data pull, and your deeper reviews are done with more comprehensive data, ongoing security requires periodic updated data and analysis to monitor how the threat changes in relation to how people change.

In addition, and even though it’s outside our discussion of hiring due diligence, you should recognize that due diligence in hiring informs your ongoing security programs, including how often and how comprehensively you conduct reinvestigations. Your reinvestigation cycle is not only an important part of regulatory compliance; it also informs your insider threat and theft-of-trade-secrets programs. And these programs drive the effectiveness of your security and cybersecurity systems. Finally, your overall security programs depend on the effectiveness of your hiring and onboarding due diligence programs. It all interlocks.

And recognizing it all interlocks gives you the final and most significant advantage: creating and maintaining a continuous and ongoing security program means you actually don’t need to build a perfect due diligence program for hiring. Since no static program will be effective to keep you safe, you don’t need an onboarding system which weeds out one hundred percent of the bad apples. But you do need an ongoing process which continuously looks for problems and weeds them out as they appear. If each step in the security program cycle filters, refines, identifies, and removes threats, no one component of the cycle needs to be perfect—the system will be. So don’t even try to build a due diligence hiring program which operates as a static barrier. Create a unified security cycle which includes due diligence, compliance, network security, insider threat, theft of trade secrets, and durable cybersecurity programs which interlock and support each other. As you can see, Trust is both complex and ongoing, and does not end at hiring. Trust is a crucial component of your company’s culture, its way of being, and how it interacts both internally and externally.

Will They Do It?

When executed correctly, Trust will drive most of your internal security programs. Fundamentally, it’s a choice; it’s each individual’s willingness to behave with shared intention. And so we come to Will. How do you tell which of the candidates that have met the Can and Trust standards are willing to function in the teams you have built to execute your company’s purpose? Bill Bender articulates the starting point this way:

Bill: The theory goes—and I subscribe to it—that it’s the creativity and the ingenuity, and the innovative spirit of a human individual and those inherent personality traits that are likely to be the things that will make you a better cyber operator. And it’s not all about just walking in the door with the technical skills and the geek science and engineering training, but it’s instead, some of the soft skills that you would bring in terms of ingenuity, innovation, creativity, and the like.14

We think these “soft skills” are actually foundational skills. They lead to the core of the person, the individual’s behavioral characteristics, and to the recognition that while behaviors are choices, they can be identified and evaluated before a hiring decision is made, through the behavioral interview. And only behavioral characteristics, as extracted through a behavioral interview, can offer a window into how an individual will act in the future. Will is the phase of the hiring process that generally does not receive the attention and weight it should; our model hopes to change that. As Jason Meszaros,15 the Director of Technology Infrastructure and Information Security for the Minnesota Twins Major League Baseball Team, explains:

Jason: I sat on a panel at one of the local universities where it was all cybersecurity professionals, CISOs, etc. We sat down with all of the professors in charge of putting together cybersecurity programs. Every single one of them was highly focused on computer forensic experts, people who can dig into code, who can do that really deep dive. And of all the people in the panel, I was the only one who raised my hand and I said, “What about communication skills? What about problem solving skills?” They were so focused on, “We need to have the most highly technical people hitting the marketplace, because those are the ones who are going to get hired.” The reality is, in my opinion (and I was the anomaly on the panel), that those are all great skills, and they need to know that information. But do they need to be so deep into it that they lose all the other skills? That’s a huge negative in my opinion. I want people who can actually come in and communicate and articulate what the issue is, and how they’re going to resolve it. That doesn’t necessarily mean it’s always technical, because not all problems that you solve are uber-technical problems, some of them are people issues. People make mistakes—people open the wrong link—and we have to resolve that and make sure that if they click ransomware, they know how to react, who to call. There’s a whole process: a procedural focus, there’s absolutely a communication focus, on all of those different skills in order to be successful and it’s not just, “Can I dig deep into the code to figure out what the issue was?” You need those people in your office too, but there are additional skills that are out there.16

Unfortunately, many companies still don’t appreciate the significance and impact of behavioral interviews—just as Jason was the anomaly on his panel. Upon accepting behavioral interviews as necessary, the key to successfully extracting differential data in the Will phase is to also accept that behaviors are job specific. There is no universal set of behaviors that a cybersecurity or IT person needs. However, there is a place to start.

If you have a job opening, then someone in your organization may already be doing that work. He may not be doing it well; he may be multi-hatted and overwhelmed; he may be on a termination trajectory; but someone is already doing the job. You may have had several people in the role over the previous months or even years. That’s your data. And it’s individualized to the company, the hiring manager, and the job role. Look at what made each previous person successful and what made them fail. Build a “perfect person” by reviewing these behavior characteristics with everyone who interacts with that role and who relies on the role—both external customers and internal information users. These data build the person you’re looking for. And it’s based on what you actually need.

And it is crucial to remember that what a person can do bears little relation to what they’ll actually do, particularly under stress. Choosing to engage in a behavior that you can do but don’t like to do is really where the rubber meets the road during the Will phase, because stress tends to cause intermittent behaviors. When things are going well, the choice to behave in ways you don’t prefer is easier. But under stress, you will default to what you prefer, and that can produce disruptive behavior at the most stressful time. Consequently, the behavioral interview must be carefully structured to extract behavioral characteristics across a variety of circumstances; it becomes much more sophisticated when differentiating for crisis. You need to know whether your candidate who can “speak people,” and will do so when things are calm, defaults under stress to their preference: yelling a string of words that nobody understands and then watching the building burn because no one understood what they said. As Wheeler Coleman,17 formerly the Senior Vice President and CIO of Blue Cross Blue Shield of Michigan and now the CEO of Executive Consultants United, explains:

Wheeler: I am interested in learning about critical times in their work experiences. If we can get real examples from these individuals about how they addressed a crisis situation, we can find out how they’re going to behave in tough times. It’s easy to be cool, calm and collected when things are going well. But when the proverbial crap hits the fan, are they able to remain calm or do they freeze up? How do they treat others when problems arise? It’s important to understand their behaviors.18

A final thought about the behavioral interview itself. Whether a person “interviews well” doesn’t matter, for a very specific reason: it doesn’t correlate to job performance. Whether a person interviews well or poorly doesn’t reveal whether they will do well or poorly on the job. You can have the best questions supported by brilliant validation and a perfect setting and still make poor hiring selections. The behavioral interview process is not about textbook answers to good questions. Good questions are just tools to extract the information you need; if you don’t use them correctly, you’ll still hire the wrong people. It’s like having the best car and the worst driver: at best, you’ll lose the race; at worst, you’ll crash. It’s not just about hearing a good answer, because hearing a “good” answer takes you back down the “I like you because you’re like me” dead end. Evaluating the behavioral interview is about understanding what the answer reveals. Most interviewers can’t articulate why they think a particular answer is “good,” because they don’t understand what information the answer reveals. The key remains: use the behavioral interview to extract differential data relevant to the behavioral characteristics you need for the job role. Evaluating answers is not binary (i.e., “good” or “bad”); correct evaluation produces differential data relevant to Success, Failure–Coachable, or Failure–Noncoachable. And the hiring decision is not binary either (i.e., “hire” or “don’t hire”). It is Hire (standard onboarding), Hire (standard onboarding with focused coaching), or Don’t Hire.
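The ternary evaluation and ternary hiring decision can be captured in a small sketch. The enum values mirror the chapter’s terms; the direct one-to-one mapping is an illustrative assumption, since a real decision would weigh the full set of differential data from the interview.

```python
from enum import Enum

class Evaluation(Enum):
    SUCCESS = "Success"
    FAILURE_COACHABLE = "Failure-Coachable"
    FAILURE_NONCOACHABLE = "Failure-Noncoachable"

class Decision(Enum):
    HIRE = "Hire (standard onboarding)"
    HIRE_WITH_COACHING = "Hire (standard onboarding with focused coaching)"
    DONT_HIRE = "Don't hire"

# Hypothetical rule: each evaluation outcome maps directly onto one decision.
DECISION_MAP = {
    Evaluation.SUCCESS: Decision.HIRE,
    Evaluation.FAILURE_COACHABLE: Decision.HIRE_WITH_COACHING,
    Evaluation.FAILURE_NONCOACHABLE: Decision.DONT_HIRE,
}

def decide(evaluation: Evaluation) -> Decision:
    return DECISION_MAP[evaluation]
```

The point of modeling this as a three-valued mapping rather than a boolean is the chapter’s own: a coachable failure is a hire with a plan, not a rejection.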

Getting Started

A few final thoughts to consider as you work your way through this process for the first time. First, this should be done for each job you hire for, because each job has different success/failure behaviors, even if several jobs have the same hard-skill, certification, and training requirements. The good news is that once you have worked through the success/failure process a few times, you’ll get the hang of it and it will become easier. Second, by using this process, you’ll begin considering candidates you would previously have rejected, and you’ll reject those you would have considered ideal. This is unsettling at first, but the point is important: if you want a diverse, inclusive, and highly effective workforce with high employee retention and low turnover, you must hire the people who make you feel uncomfortable and who meet your Can–Trust–Will filters. You must get comfortable being uncomfortable.

There are two other factors to be aware of as you begin building job-specific descriptions. First, stay away from category-based presumptions: things like “older people are reliable” or “millennials are selfish.” Even if such presumptions were correct in the aggregate, they simply aren’t useful when building job descriptions. Why? Because you’re not hiring a class of people; you’re looking for a person. Millennials might be selfish, but the actual person who submitted an application may not be. Older people might be reliable, but the actual candidate who submitted a résumé might be a flake. Avoid building these presumptions into the job description and focus on describing what you need. Second, recognize that you may have a job which requires one or several personality traits which most people consider to be negative. It’s counterintuitive, but very important—and we explore it in depth in Chapter 5.

Keeping all of this in mind, begin working through the specific job role, and categorize the behaviors of previous employees (all previous employees, not just the successful ones).

1. What are the required technical skills?

2. What are the required certifications?

3. What is the required training?

4. Which behaviors lead to success?

5. Which behaviors lead to failure?

The next step is to do a deep dive into items three and five. Break training down into work which absolutely must be done before hiring and work which can be (not must be) done after hiring. Training then becomes an incentive rather than just a requirement.

Next, analyze the failure behaviors. Separate these behaviors into what can be taught and what cannot be taught. In general, skills can be taught, but attitude can’t (i.e., you can teach most people how to drive a forklift safely, but you probably can’t teach someone to be on time). Behaviors which can be taught go on the Failure–Coachable list, and those which can’t be taught go on the Failure–Noncoachable list.

1 With more than twelve years of experience working in IT and project management, Marie Chudolij is currently a Senior Program Manager at Siemplify. She works in the Security, Orchestration, Automation, and Response (SOAR) space, overseeing the implementation of automation for Security Operation Center (SOC) procedures and workflows.

2 M. Chudolij, in discussion with the authors. July 24, 2020.

3 Prior to joining Leidos as the Senior Director of Global Information Security, Bill Brennan was part of Lockheed Martin’s Global Cyber Practice, where he became Managing Director of the Global Cyber and Intelligence Practice, working with government and private clients.

4 B. Brennan, in discussion with the authors. August 04, 2020.

5 A. Tilley, in discussion with the authors. July 02, 2020.

6 Nick Davis serves as the Director of Information Security Governance Risk and Compliance for the University of Wisconsin System, and has over twenty-five years of experience working in the field of information security in both the public and private sectors, with extensive expertise in public key cryptography systems.

7 N. Davis, in discussion with the authors. July 27, 2020.

8 Alexi Michaels continues her work in the field of digital forensics, where she has expertly analyzed digital evidence in internal investigations, litigation, and hacking cases. She has a bachelor’s degree in Digital Forensics from Bloomsburg University of Pennsylvania, and is currently a trainer-developer at BlackBag Technologies, which was acquired by Cellebrite.

9 A. Michaels, in discussion with the authors. July 03, 2020.

10 Martin Durst is a Senior IT Support Specialist at Drexel University’s Kline School of Law. He has a Bachelor of Science in Computer Science and is ITIL: Foundation certified.

11 M. Durst, in discussion with the authors. June 29, 2020.

12 A. Tilley, in discussion with the authors. July 02, 2020.

13 A. Tilley, in discussion with the authors. July 02, 2020.

14 B. Bender, in discussion with the authors. July 08, 2020.

15 Jason Meszaros came to the Minnesota Twins after serving in the United States Army for fifteen years, where he achieved the rank of Captain and spent the majority of his time in the Special Operations community as a human intelligence collector.

16 J. Meszaros, in discussion with the authors. August 03, 2020.

17 With more than thirty years of IT experience, Wheeler Coleman was most recently the Senior Vice President and CIO of Blue Cross Blue Shield of Michigan, where he was responsible for an IT budget in the hundreds of millions, and more than 2,000 resources (employees and contractors). Upon retiring, he became the CEO of Executive Consultants United, which offers IT consulting.

18 W. Coleman, in discussion with the authors. August 12, 2020.
