Chapter 2. Values and Actions

People must have righteous principles in the first place, and then they will not fail to perform virtuous actions.

Martin Luther

Asking ethical questions in business contexts can feel unusual at best and uncomfortable at worst. But as noted in the previous chapter, big data, like all technology, is ethically neutral. Technology does not come with a built-in perspective on what is right or wrong or good or bad when using it. Whereas big data is ethically neutral, the use of big data is not. Individuals and corporations are the only ones who can answer those questions, and so it’s important to work past any discomfort.

And while big data represents tremendous opportunity (in the form of new products and services) for broad business and social benefit, the other side of that coin is serious risk. Finding and maintaining a balance between the benefits of innovation and the detriments of risk is, in part, a function of ethical inquiry.

Developing a capability to find and maintain that balance is partly an ethical exercise because of the essential nature of the technology itself. Digital business transactions (such as buying things online) and digital social interactions (such as sharing photos on social networks) inherently capture information related to, but distinct from, the content of the transaction or interaction itself.

For example, showing your nephew’s picture to a friend at a holiday party leaves a faint, shallow record of that event that exists only in your memory and the memory of the person you shared it with. Posting your nephew’s photo on a social network not only creates a nearly permanent record of that sharing action, but also includes a surprisingly wide variety of information that is ancillary to the actual sharing itself. To the degree that there is a record of the simple act of sharing photos online, it contains a great deal of information.
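To make that concrete, consider a minimal sketch of the record such a sharing action might generate. Every field name and value below is a hypothetical illustration, not any social network's actual schema; the point is simply how much ancillary information can surround the act of sharing itself.

    # Hypothetical record of a single photo-sharing action. All field names
    # and values are illustrative assumptions, not any network's real schema.
    photo_share_event = {
        "uploader_id": "user_8841",                    # who shared it
        "shared_at_utc": "2012-06-17T20:14:09Z",       # when it was shared
        "client": {"device": "smartphone", "app_version": "3.2.1"},
        "network": {"ip_address": "203.0.113.7"},      # where it was shared from
        "photo_exif": {                                # metadata the camera embedded
            "camera_model": "ExampleCam X100",
            "taken_at": "2012-06-16T15:02:44Z",        # when it was taken
            "gps": {"lat": 45.523, "lon": -122.676},   # where it was taken
        },
        "social_context": {
            "tagged_people": ["user_2290"],            # the nephew, perhaps
            "audience": "friends-of-friends",          # who can see it
            "album": "Holiday Party",
        },
    }

None of these fields is the photo; all of them are ancillary to the sharing, and each is potentially valuable to someone.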

Ethics come into play, in part, when organizations realize that information has value that can be extracted and turned into new products and services. The degree to which ethics play a role in this process is, of course, more complicated than a simple identification of which information is “ancillary” and which is not. The ethical impact is highly context-dependent. But to ignore that there is an ethical impact is to court an imbalance between the benefits of innovation and the detriments of risk.

Articulating Your Values

Organizations that fail to explicitly and transparently evaluate the ethical impacts of the data they collect from their customers risk diminishing the quality of their relationships with those customers and exposing their business to unintended consequences. Ethical evaluation includes both an understanding of how an organization will utilize customer data describing an enormously wide variety of historical actions, characteristics, and behaviors (its data-handling practices) and an understanding of the values that organization holds.

Many values are already implicit in business decisions. Companies value happy clients and elegant product designs; employees value productive working environments and fair compensation packages. People and companies value collaboration and innovation. Some of these values are derived from the many business drivers for “doing the right thing.” Additionally, specific legal or policy requirements exist in many industries. Entire business functions are devoted to aligning those values with the business decisions and the subsequent actions we take every day.

Fortunately, you already know how to ensure that your values are being honored in the course of conducting business operations. You do it all the time. In many product design (and other) endeavors, there often comes a moment when the question is asked, “Are we doing the right thing?” or “Is this the right solution?”

In this context, the word right can mean many things. It can mean: Are we meeting the customer’s expectations? Is the design solution appropriate to the problem? Are we honoring the scope of the work? Is this a profitable feature to add to our product? Will people buy this? It can also mean: Do we agree that this action is acceptable to perform based on our values?

But when you ask, “Are we doing the right thing?” in the ethical sense, the place to start is not with a discussion of identity, privacy, reputation, or ownership (or any of a number of other important topics). Big-data ethics are not about one particular issue. Individual, specific concerns (including, of course, privacy) are absolutely important. But they are important as expressions of actions you take in accordance with your values. Ethical practices are an outcome of ethical inquiry. And while a coherent and consistent privacy policy is one possible outcome of ethical inquiry, it is far from the only possible outcome.

For example, Google’s decision not to allow pseudonyms on their Google+ social network is partially the result of an ethical inquiry into what constitutes a person’s identity. A different kind of value judgment is made when a company debates whether it is acceptable (the “right thing”) to sell anonymized data to third-party entities. Consumer protection laws such as HIPAA reflect the outcome of ethical discussions about the government’s obligations to shield individuals from the unauthorized sharing of personal medical histories. And copyright and trademark infringement concepts are derived from answering questions about who rightly owns what, for how long, and what use others can make of the created material—that is, what we value about the creation of new works and how we define the domain of ownership.

Values are also the place to start an ethical inquiry when designing products and services using big-data technologies. It would be a surprise if any organization explicitly stated that it did not value individual identity in some fashion. The question, however, is not, “How should we, as a corporation, define an individual’s identity?” The ethical question is more centrally interested in what the company should value regarding specific aspects of a person’s identity, and how it should honor that value in its individual and organizational actions.

One benefit of starting with value principles is a firmer foundation for subsequent action and decision-making. That foundation can also serve to drive increased efficiency and innovation across the board.

Teams, departments, and organizations of all types operate more effectively when they share a common set of values. Roy E. Disney, nephew of Walt Disney and a longtime senior executive of a business well known for driving creativity and innovation to enormous business and social benefit, said, “It’s not hard to make decisions when you know what your values are.” Instead of teams wasting time asking, “Should we be doing this?”, a sense of explicitly shared values removes barriers and constraints to productivity and creative problem solving, turning the question into, “How can we do this?”

Turning Values into Actions

Focused action does not directly follow from shared values. A productive dialog about the appropriate action to take in support of shared values is dependent on an understanding of what those values and possible actions are.

Many people are already beginning to have this dialog. A broad range of organizations and institutions are working to align their values and actions. And ethical questions are being asked about big data in working meetings, at dinner parties, in industry groups, in legislatures across the world, and even in the US Supreme Court.

For instance, the World Economic Forum recently launched a multiyear project called “Rethinking Personal Data,” which is exploring opportunities for economic growth and social benefit in light of barriers that restrict personal data movement and protection.

As part of that initiative, the Forum defined personal data as a “new economic asset,” thus opening wide opportunities for data market innovations—not to mention a range of unanswered questions about who owns what (http://www.weforum.org/issues/rethinking-personal-data).

These efforts represent broad-based concern and inquiry into whether or not big data is honoring our values. But we simply must get better at having collective, productive discussions about how ethics inform our values and actions. Big data is already outpacing our ability to understand its implications: businesses are innovating every day, and the pace of big-data growth is practically immeasurable.

To provide a framework for dissecting the often nuanced and interrelated aspects of big-data ethics, the following four elements can help untangle the situation.

Four Elements of Big-Data Ethics: Identity, Privacy, Ownership, and Reputation

Identity

Inquiries about each of these elements proceed in similar ways; identity is a natural place to start. Christopher Poole, creator of 4chan, gave a compelling talk at Web 2.0 in 2011, introducing the idea that identity is “prismatic” (http://www.wired.com/business/2011/10/you-are-not-your-name-and-photo-a-call-to-re-imagine-identity/). He emphasized that who we are—our identity—is multifaceted and is hardly ever summarized or aggregated in whole for consumption by a single person or organization. The implication is that if our identity is multifaceted, then it’s likely that our values and ethical relationship to identity are also multifaceted.

Expressing a seemingly opposing view, Mark Zuckerberg recently made the assertion that having more than one identity demonstrates a “lack of integrity” (http://www.nytimes.com/2011/05/14/technology/14facebook.html).

If our historical understanding of what identity means is being transformed by big-data technologies (by providing others an ability to summarize or aggregate various facets of our identity), then understanding our values around the concept itself enhances and expands our ability to determine appropriate and inappropriate action. Big data provides others the ability to quite easily summarize, aggregate, or correlate various aspects of our identity—without our participation or agreement.

If big data is evolving the meaning of the concept of identity itself, then big data is also evolving our ethical relationship to the concept the word represents. This makes the value of explicit dialog and inquiry easy to understand: the more fully and explicitly we understand the values motivating our actions, the more fully we can align those actions with the evolution and expansion of identity.

Privacy

If it is true that big data (and technology in general) is changing the meaning of the word “privacy,” then we all benefit by exploring what those changes are through a discussion of what is valuable about privacy. Understanding what is valuable about various aspects of privacy, even in light of recent rapid transformations, is helpful when deciding what action we should and should not take to honor individual privacy.

Plenty of people would argue that we have gained a degree of control over how the world perceives us. Political dissidents in Egypt can express their views online in a way that no other medium, technology, or context allows them to speak—or be heard. Victims of abuse or people who suffer from the same disease can share their experiences and gain an invaluable sense of connection and community through the use of ostensibly anonymous online identities.

These perspectives, however, motivate the question: have we lost or gained control over our ability to manage how the world perceives us?

In 1993, the New Yorker famously published a cartoon with canines at the keyboard whose caption read: “On the Internet, nobody knows you’re a dog” (http://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog). At the time, this was funny because it was true. Today, however, in the age of prevalent big data, it is not only possible for people to know that you’re a dog, but also what breed you are, your favorite snacks, your lineage, and whether you’ve ever won any awards at a dog show.

In those instances where an individual intentionally keeps any information about their identity private, at least one ethical question arises: what right do others have to make it public? If there are personal interests that naturally arise as a matter of creating that information, is the mere act of transferring it to a database (or transmitting it via the Internet) sufficient to transfer the rights associated with its creation? Extensive rights are granted to the creators of artistic works. Can the creation of data about ourselves be considered a creative act? Does our mere existence constitute a creative act? If so, then do not all the legal protections associated with copyright law naturally follow?

Further, is each facet of one’s identity subject to the same private/public calculus? By what justification can one organization correlate information about a person’s health history with information about their online searches and still claim to be honoring all facets equally? Offline, we are largely able to keep such facets separate and to choose which of them we share, and with whom. A common assumption is that these offline expectations ought to be reflected in our ability to manage that behavior online and to maintain an (at least functionally) equal set of expectations. A critical topic in the privacy element of big data is the question: is that assumption true?

There are two issues. First, does privacy mean the same thing online as it does offline in the real world? Second, should individuals have a legitimate ability to control data about themselves, and to what degree?

Frequently, these discussions boil down to distinctions between offline behavior and online expectations. In the same way that we can ask of others what justification allows them to turn private-by-choice information into public data, we can ask of ourselves: why do we expect the ability to self-select and control which facets we share with the world online to be the same as it is offline?

The difference between online and offline expectations regarding the degree of control individuals have over open access to data about themselves is a deeply ethical inquiry. What value do people place on benefiting from a loss of control of their data (letting others use it in novel, innovative, and beneficial ways) versus the risk of that data being used in ways that may harm them? It was funny that 20 years ago on the Internet no one would know you’re a dog because technology allowed us to extend the ability to maintain anonymity to its extreme. Indeed, for many years, one could operate in almost complete anonymity on the Internet. And many did. To what degree has big data removed that ability from our individual choice and placed it in the hands of others?

The goal is to understand how to balance the benefits of big-data innovations with the risks inherent in sharing more information more widely.

Reputation

As recently as that New Yorker cartoon (19 years ago), reputation consisted primarily of what people—specifically those who knew and frequently interacted with you—knew and thought about you. Unless we were famous for some other reason, the vast majority of us managed our reputation by acting well (or poorly) in relation to those directly around us. In some cases, a second-degree perception—that is, what the people who knew you said about you to the people who they knew—might influence one’s reputation.

Before this gets all recursive, remember that the key characteristic is how reputation has changed. One of the biggest changes born from big data is that the number of people who can form an opinion about what kind of person you are is now exponentially larger and farther removed than it was even a few short years ago. Further, your ability to manage or maintain your online reputation is drifting farther and farther out of individual control. There are now entire companies whose business model is centered on “reputation management” (see http://en.wikipedia.org/wiki/Reputation_management).

We simply don’t know how our historical understanding of how to manage our reputation translates to digital behavior. That alone is sufficient reason to suggest further inquiry.

Ownership

Along similar lines, the degree of ownership we hold over specific information about us varies as widely as the distinction between privacy rights and privacy interests. Do we, in the offline world, “own” the facts about our height and weight? Does our existence itself constitute a creative act, over which we have copyright or other rights associated with creation? Does information about our family history, genetic makeup, physical description, preference for Coke or Pepsi, or ability to shoot free throws on the basketball court constitute property that we own? Is there any distinction between the ownership qualities of those kinds of information? If we do own such information, then how do those offline rights and privileges, sanctioned by everything from the Constitution to local, state, and Federal statutes, apply to the online presence of that same information?

Note

In February 2012, the White House unveiled a blueprint for a consumer “Bill of Rights” intended to enhance protections for individual privacy and to address how personal information is used online. See http://www.whitehouse.gov/the-press-office/2012/02/23/we-can-t-wait-obama-administration-unveils-blueprint-privacy-bill-rights.

In fact, there are more than a dozen initiatives and programs designed to create a codified set of principles or guidelines to inform a broad range of ethical behavior online.

As open data markets grow in size and complexity, open government data becomes increasingly abundant, and companies generate more revenue from the use of personal data, the question of who owns what—and at what point in the data trail—will become the subject of more vocal debate.

Benefits of Ethical Inquiry

These short discussions illustrate what an ethical inquiry can look like. Ethical inquiry originating from an exploration of values exposes ethical questions in a way that allows them to be answered more usefully. And while aligning business values with customer values has obvious benefits, big data creates a broader set of ethical concerns. Merely echoing currently prevailing public opinion is shortsighted at best; there are other significant benefits available through aligning values and actions as an outcome of explicit ethical inquiry. Organizations fluent in big-data ethics can contribute much to broader discussions of how they are impacting people’s lives. The strategic value of taking a leadership role in driving the alignment of ethical values and action accrues both internally and externally.

Those benefits can include:

  • Faster consumer adoption by reducing fear of the unknown (how are you using my data?)

  • Reduced legislative friction from a more thorough understanding of constraints and requirements

  • Increased pace of innovation and collaboration derived from a sense of purpose generated by explicitly shared values

  • Reduction in risk of unintended consequences from an overt consideration of long-term, far-reaching implications of the use of big-data technologies

  • Social good generated from leading by example

These benefits are achieved, in part, through an intentional set of alignment actions. And those are necessarily informed by an understanding of what shared values members of a common enterprise hold. Discovering those values through explicit inquiry and developing a common vision of the actions an organization takes in support of those values influences how you conceive of and treat individual identity, personal privacy, and data ownership, and how you understand potential impacts on customers’ reputations in the design, development, and management of products and services.

In reality, these ethical discussions can be avoided completely. It’s easy—just don’t have them. After all, it’s easier to ask for forgiveness than permission, right? And if you don’t ask the questions, you’re not responsible for not having the answers. But policy decisions are made, technical innovations are designed, and new product features are rolled out, resulting in ethical implications, regardless of whether they’re considered—ignoring them doesn’t make them disappear. Avoiding those discussions only means that decisions get made without consideration for their ethical consequences, and in a way that may not accord with your values. Unfortunately, such a lack of attention to the ethical aspects of decision-making about data-handling practices is common.

Currently, only two of the Fortune 50 corporations make any explicit, public policy statement citing any reason for the existence of their privacy policy other than, “You care about privacy, so we do, too.” This implies that, although most companies understand that people care about their privacy, they don’t have a clear statement of which values their privacy policies support or why they support them.

Although it’s entirely possible that any given policy actually does align with an organization’s values, there is no way to know. The resulting confusion generates uncertainty and concern, both of which undermine long-lasting and trusting relationships.

What Do Values Have to Do with Anything?

Values form the foundation of a framework for ethical decision-making simply because they are what we believe in. And we believe in all sorts of things. Truth, justice, the American way, Mom, apple pie, and Chevrolet are all familiar examples.

Historically, business has been about developing strategic plans for action and optimizing the execution of those plans to create profit. The forcing function of big data is expanding the ethical impact of business operations further into the personal lives of employees and customers. This is a direct result of the sheer volume, velocity, and variety of information big data allows businesses to utilize. Businesses used to make do with relatively shallow amounts of historical buying behavior, often limited to broad categories of information, such as how many of what products were purchased at a particular location during a specific timeframe. They could answer questions like “What color car is purchased most often in Texas in the summer?” or determine that this coffee shop on that street corner sells more than other locations.

Now a business can answer detailed questions like “how much toothpaste did your family buy from us in 2010—and what brands, at what frequency, and at exactly which time and place?” Reward cards associated with all kinds of goods and services know the detailed history of your purchases. That information can generate both savings benefits and annoying junk mail. Marketers of many flavors want very much to correlate that information with their products and services in hopes that they can target more compelling marketing messages. They want to turn information about your behaviors and actions in the world into knowledge about how to better influence your future decisions—and, thus, how to better inform their business strategies.
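As a sketch of how simple that kind of question has become to answer, here is a minimal example of aggregating hypothetical reward-card records. The schema, names, and values are all assumptions for illustration, not any retailer's actual system.

    # A minimal sketch, assuming a hypothetical reward-card purchase log.
    import pandas as pd

    purchases = pd.DataFrame([
        {"household_id": 1742, "product": "toothpaste", "brand": "BrightCo",
         "store": "Elm St", "when": "2010-03-14 18:22", "quantity": 2},
        {"household_id": 1742, "product": "toothpaste", "brand": "BrightCo",
         "store": "Elm St", "when": "2010-07-02 09:10", "quantity": 1},
        {"household_id": 1742, "product": "toothpaste", "brand": "MintWorks",
         "store": "Oak Ave", "when": "2010-11-21 16:45", "quantity": 1},
    ])
    purchases["when"] = pd.to_datetime(purchases["when"])

    # "How much toothpaste did this household buy from us in 2010,
    # what brands, at what frequency, and at which times and places?"
    in_2010 = purchases[purchases["when"].dt.year == 2010]
    toothpaste = in_2010[in_2010["product"] == "toothpaste"]
    print(toothpaste.groupby(["brand", "store"])["quantity"].sum())
    print("total tubes:", toothpaste["quantity"].sum())

A few lines of analysis over a loyalty-card log answer, at the household level, questions that were once unanswerable at any level.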

This is pure business gold. It is valuable across many business functions, ranging from designing new products and services (learning no one likes pineapple-flavored toothpaste) to building better supply chain and manufacturing models and processes to reduce costs (zip-code-level forecasting capabilities) to opening up entirely new markets (on-demand discount offerings for last-minute hotel property cancellations).

It is increasingly difficult to “opt out” of the expansion of business operations into our lives. One can choose not to subscribe to a grocery store reward program—and accept the loss of the discounts those programs can provide. Although there is no requirement to join a social network, there can be a stigma attached to not doing so.

In 1987, Robert Bork’s nomination to the Supreme Court was hotly contested, in part by using his video rental history as evidence in support of arguments against his confirmation. His reputation as a qualified candidate for the Supreme Court was being assessed, in part, by making judgments about the movies he watched. The resulting controversy led to Federal legislation enacted by Congress in 1988. Called the Video Privacy Protection Act, the VPPA made it illegal for any videotape service provider to disclose rental history information outside the ordinary course of business and made violators liable for damages up to $2,500.

In September 2011, Netflix posted a public appeal to customers to contact their Congressional representatives to amend the VPPA to allow Netflix users to share their viewing history with friends on Facebook (http://blog.netflix.com/2011/09/help-us-bring-facebook-sharing-to.html). A mere 23 years passed between the enactment of the VPPA, when Congress took action to protect consumers from having their purchase history used to judge their professional capabilities, and a major American business asking for customer support to allow that very same information to be shared legally.

Without big data, no business would even be in a position to offer such a capability or make such a request, and the question of whether we should change the law would be moot. And this is just one small example: the big-data forcing function extends business operations into the nooks and crannies of our lives in ways we have yet to discover.

In the 23 years between the VPPA and the Netflix request, big data has influenced our actual values and what we think is important, or not, to be able to share—and via which mechanisms and for what purposes. And it is precisely the force of that extension into our daily lives and the influence that it has on our actual values that motivates a call for more explicit discussion about the ethical use of big-data technologies.

At those moments when we do uncover another expansion of the influence of big data on our lives, ethical decision points help provide a framework for getting a handle on what we value and which actions are acceptable to us—all of which helps to create a balance between the benefits of innovation and the risk of harm.

Ethical Decision Points

Ethical decision points provide a framework for exploring the relationship between what values you hold as individuals—and as members of a common enterprise—and aligning those values with the actions you take in building and managing products and services utilizing big data technologies. We’ll briefly introduce the vocabulary of ethical decision points here and describe in more detail how they can work in your organization in Chapter 4.

Ethical decision points consist of a series of four activities that form a continuous loop: Inquiry, Analysis, Articulation, and Action.

Inquiry: discovery and discussion of core organizational values

An understanding of what our values actually are (not what we think they are, or more removed, what we think others think they are)

Example: We value transparency in our use of big data.

Analysis: review of current, actual data-handling practices and an assessment of how well they align with core organizational values

The exploration of whether a particular use of big data technology aligns with the values that have been identified

Example: Should we build this new product feature using big data?

Articulation: explicit, written expression of alignment and gaps between values and practices

Clear, simple expressions of where values and actions align—and where they don’t—using a common vocabulary for discussing whether proposed actions align with identified values

Example: This new product feature that uses big-data technology supports our value of transparency.

Action: tactical plans to close alignment gaps that have been identified and to encourage and educate on how to maintain that alignment as conditions change over time

Example: If we build this new product feature, we must explicitly share (be transparent) with our customers and ourselves how that feature will use personal data.
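One way to see how the four activities fit together is to imagine recording a single pass through the loop as a simple data structure that a team fills in and revisits. The sketch below is purely illustrative; the class and field names are assumptions, not a prescribed schema.

    # A minimal sketch of one pass through the Inquiry -> Analysis ->
    # Articulation -> Action loop. All names are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EthicalDecisionPoint:
        inquiry: List[str]       # core organizational values discovered
        analysis: str            # the practice or feature under review
        articulation: str        # written statement of alignment or gaps
        actions: List[str] = field(default_factory=list)  # plans to close gaps

    decision = EthicalDecisionPoint(
        inquiry=["We value transparency in our use of big data."],
        analysis="Should we build this new product feature using big data?",
        articulation=("This new product feature that uses big-data technology "
                      "supports our value of transparency."),
        actions=["Explicitly share with customers how the feature "
                 "will use personal data."],
    )

Because the loop is continuous, such a record would be revisited as conditions change, with new gaps feeding back into fresh inquiry.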

Ethical decision points generate a new type of organizational capability: the ability to conduct an ethical inquiry and facilitate ethical dialog. Such inquiry and discussion are frequently difficult, not only because they come loaded with people’s own personal value systems but also because business historically has not focused on developing organizational capabilities to facilitate such activities. Big data is bringing values and ethics into product and service design processes, and this impacts a wide variety of operational capabilities that businesses have not yet developed a mature capacity to manage.

These ethical decision points can be identified by several methods. One familiar, if not entirely reliable or satisfactory, method is the “creepy” factor. This consists essentially of a visceral, almost automatic and involuntary feeling that something isn’t quite right. It is often accompanied by an uncomfortable shifting in your chair or that slight tingling on the back of your neck. It’s one of the feelings you can get when what you’re experiencing is out of alignment with your expectations. Millions of people recently had that feeling when they realized that Target could tell when someone was pregnant merely based on buying behavior (http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html?pagewanted=all).

“Creepy” is a useful but slippery concept, and calculating the “Creepy Quotient” of value-to-action alignment in the context of your business model and operations is highly context-dependent. Exactly how dependent varies by factors too numerous to identify completely here, but general examples include variations in industry regulations, technology stack or platform, existing or planned business partnerships, and intended usage. Healthcare has different regulatory requirements than retail sales. Some social networks provide built-in tools to rank a person’s “reputation,” but you don’t expect financial management software to share your credit rating (one aspect of your financial reputation) with other individuals or organizations without your explicit permission.

So, although it’s a familiar feeling and “creepy” can help us identify when we’re facing an ethical decision point, it isn’t quite robust enough to help guide us into a more comfortable ethical space. Questions follow immediately about what kind of creepy we’re concerned about and exactly what to do (what action to take) about that feeling.

More helpful is to develop new methods and capabilities to explore the intuitions that form the basis of a visceral creepy response. There are natural avenues of inquiry into the precise nature of what can make us feel uncomfortable with certain aspects of big data. We can explore those values explicitly and uncover ways to bridge the gap between the individual moral codes informed by our intuitions and how we agree to proceed as members of a common enterprise. Encouraging the development of these methods is the broadest goal of this book.

One additional consideration is how to parse “creepy” into more useful terms. Big data itself creates an expanding series of “concentric circles of influence.” The complex interactions and connections of big data create an ecosystem of ethics, at any given point of which there is a unique set of circumstances that influences how values show up and the implications of various actions taken using that data.

In this ecosystem, as particular pieces of data are used, reused, combined, correlated, and processed at each point of expansion, the impact of value alignment factors can vary considerably—and thus the creepy factor evolves the farther away you get from the point of origin. On the first use of a particular piece of data, creepy may be different than it is three or four steps down the road. What might be creepy if you do it today may be more or less creepy if you, or someone else farther down the data trail, do it three days from now. The fact that an online retailer knows that you buy a lot of outdoor equipment is less creepy when that same retailer uses that information to provide you with discounted merchandise offers than it would be if an unaffiliated third party sent an unsolicited offer for discounted spare parts for the exact model of camp stove you bought last year. Conversely, it might seem less creepy if an unaffiliated national environmental organization makes unsolicited contact to request a donation—especially if you share the same values.

Not to mention that negotiating the use of customer data with business partners brings an entirely new set of values into consideration. If it is complex to align your own organization’s values and action, business partnerships increase the complexity with each touch point between your organization’s use of customer data and theirs.

Other topics and vocabulary that often arise during ethical decision points include:

Intention

The intentions of those who through direct or surreptitious means have access to the data in question

Security

The security of this data in the hands of each entity in the data chain

Likelihood

The probability that access to specific data would result in either benefit or harm

Aggregation

The mixture of possibilities derived from correlating available data

Responsibility

The various degrees of obligation that arise at each point in the data chain

Identity

The single or multiple facets of characteristic description(s) that allow an individual to be uniquely individuated

Ownership

The status of who holds what usage rights at each point in the data chain

Reputation

The judgment(s) that may be derived from available data

Benefit

The specific contribution or value available data is expected to make

Harm

The sort of harm that might come from access to specific data

What Does All That Really Mean?

There are such things as values. We use and refer to them all the time. We even use them to make decisions about what actions we should or should not take in a wide variety of situations. We discuss them everywhere and often, and they form a critical part of the foundations for our laws, expected norms of social behavior, political action, financial behavior, and individual and group responsibility, and they, we hope, inform our vision of the future.

Big-data technology is expanding the sphere in which we need to apply value thinking. Not because big-data technology is inherently dangerous or because it is poorly understood, but because the volume, variety, and velocity of the data it produces and provides have reached the point where that data has seeped into our daily lives in ways we have never seen before.

It touches those social, political, financial, and behavioral aspects of our lives with new considerations for the very way in which we understand and agree about the meaning of important words like identity, privacy, ownership, and reputation. The goal is not to understand how to amend those words to incorporate the changes big data brings. The goal also is not to change big data to incorporate our historical understanding of those words.

The goal is to develop a capacity to incorporate ethical inquiry into our normal course of doing business: an inquiry that gives us a way of talking about our values in the context of the actions we take in relationship to the opportunities that big data provides.

Learning to recognize and actively engage ethical decision points is one way to start building that capability. The basic framework helps us identify what our values actually are, understand whether they align with how we’re using (or intend to use) big data, and develop a common vocabulary to discuss how best to achieve and support that alignment.

There are significant benefits to being able to talk about values in the usage of big data. A sense of shared values in an organization reduces barriers to productivity and innovation. Rather than debating whether we should do something (i.e., whether we collectively value the objective), we get right to taking action to achieve the objective (i.e., collectively working to reach the goal).

Consider any social, political, or religious organization. The Audubon Society and the National Rifle Association have very different goals, and their organizational values could hardly be more different.

But there is one characteristic that they, and many other organizations, share: their respective members share a common set of values. There may be disagreement among the ranks on occasion, and those values may shift and evolve over time, but it is clear that each organization spends a great deal of time explicitly engaged in discovering, articulating, and taking action based on their respective set of common values.

At least one of the many reasons for this engagement is that those organizations know that being clear and explicit about a shared set of common values increases their operational effectiveness. In the same way, as you learn how to maximize operations using big data, aligning your values with your actions also decreases the risk of unintended consequences. It won’t eliminate those risks, of course, but an explicit ethical inquiry provides a legitimate methodology for mitigating them—and often can provide organizations with a clear place to start when a response is required.

Let’s explore now how values are currently being represented in the course of doing business today.
