Building a Transaction-Level Data Set

The old adage “you can't manage what you don't measure” goes a long way to explaining why companies have struggled to improve their profitability. Companies often find themselves facing a glut of information with little or no underlying structure to make it useful. A wave of enterprise resource planning (ERP) implementations in recent decades has led companies to store massive amounts of data backed by armies of individuals who manage, manipulate, and monitor a variety of metrics, measures, aggregates, and outcomes. The challenge is not simply to measure activities appropriately, but to measure the right data at the right level of detail in the appropriate format.

Effective pricing and profitability management must begin with a solid foundational fact base. The lifecycle of each transaction must be traced in detail, along with the revenues and costs associated with it. This is where profitability-related efforts often go off course. The data in most systems are based on the financial reporting timeline; that is, the date when the transaction is recorded to the general ledger. Unfortunately, this practice biases the information toward the balance sheet, because each entry reflects when the transaction was paid or received. For example, a product pallet purchased in January would appear in accounts receivable that same month, while the annual rebate earned on this same pallet will appear as a cost in accounts payable some 12 months later. This scattering of critical data is further complicated by the sheer number of cost-to-serve elements associated with any transaction, whether they be costs for shipping, customer service and technical support, or sales and marketing expenses.
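The rebate example above can be sketched in code. The following is a minimal illustration, with hypothetical record layouts and amounts, of keying each cost event back to the transaction that caused it rather than to the general-ledger date on which it was posted:

```python
from datetime import date

# Hypothetical transaction master: the pallet sold in January.
transactions = {
    "TX-1001": {"date": date(2023, 1, 15), "revenue": 480.00},
}

# Cost events keep their GL posting date, but are keyed to the
# originating transaction. The rebate posts roughly 12 months later.
cost_events = [
    {"tx_id": "TX-1001", "gl_date": date(2024, 1, 31), "type": "rebate",   "amount": 24.00},
    {"tx_id": "TX-1001", "gl_date": date(2023, 1, 20), "type": "shipping", "amount": 35.00},
]

def transaction_profit(tx_id):
    """Net profit on the transaction's own timeline, not the GL's."""
    revenue = transactions[tx_id]["revenue"]
    costs = sum(c["amount"] for c in cost_events if c["tx_id"] == tx_id)
    return revenue - costs

print(transaction_profit("TX-1001"))  # 480 - 24 - 35 = 421.0
```

Viewed this way, the rebate reduces the profitability of the January sale itself, instead of surfacing as an unrelated expense a year later.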

Because gathering, cleansing, interpreting, formatting, and packaging all this data presents such a complex challenge, a dedicated team must be assembled, which will follow a carefully designed plan of action. To manage profitability, data must be tracked in a holistic way at the transaction level. The sheer size of the task can seem overwhelming. For many firms a year's worth of business can produce millions (or tens of millions) of transactions, and electronic records of it can be dozens of gigabytes in size. In addition, this information will most likely need to be retrieved from numerous sources, including ERP, invoicing, order management, contract management, and shipping and distribution management as well as from various financial systems, including payments, accounts receivable, and credit. One must then determine how best to gather, decipher, and assign data so that profitability insights can be uncovered.
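The stitching-together of sources described above can be sketched as a simple keyed join. All field names and figures below are assumptions for illustration; in practice each source system supplies its own identifiers that must be reconciled:

```python
# Records exported from two separate systems, sharing an order ID.
invoices = [
    {"order_id": "A1", "customer": "Acme",  "revenue": 1200.0},
    {"order_id": "A2", "customer": "Birch", "revenue": 800.0},
]
shipments = [
    {"order_id": "A1", "freight_cost": 90.0},
    {"order_id": "A2", "freight_cost": 60.0},
]

# Index shipping costs by order ID, then attach them to each invoice
# line to form one transaction-level record per order.
freight_by_order = {s["order_id"]: s["freight_cost"] for s in shipments}

transaction_set = [
    {**inv, "freight_cost": freight_by_order.get(inv["order_id"], 0.0)}
    for inv in invoices
]
```

Repeating this join for each additional source (rebates, customer service time, returns) progressively builds the holistic transaction-level data set.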

Data Management Strategies

Although the process of advanced analytics can seem daunting, the following tips and strategies can help companies assemble the data and lay the groundwork.

Realize That Perfect Data Don't Exist

Instead of trying to assemble the perfect data set, firms should go with what they've got. Though they may discover that they lack the data to test a particular hypothesis, simply going through the exercise of doing pricing analyses with imperfect data may generate important findings along the way (execution issues, operational inefficiencies, or best practices that can be leveraged across the organization) that can be acted on.

Know When to Say Stop

There is a tendency to try to include every cost or customer element and every product or service demographic. This will result in an unwieldy data set and will likely generate assumptions and calculations that fail to resonate with key stakeholders. Focus on the elements that contribute the most to revenue or cost, and ignore (or make simple assumptions for) those that do not. Detail can always be added later.
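One way to decide where to stop is a simple materiality cutoff: keep the cost elements that account for most of the total and lump the rest into a single "other" bucket. The element names and amounts below are invented for illustration, and the 80% threshold is a common rule of thumb, not a fixed standard:

```python
# Annual totals per cost-to-serve element (illustrative).
cost_elements = {
    "freight": 500_000, "rebates": 300_000, "returns": 120_000,
    "samples": 40_000, "brochures": 25_000, "misc_fees": 15_000,
}

def split_by_materiality(elements, threshold=0.80):
    """Keep the largest elements until `threshold` of total cost is covered;
    everything else collapses into one residual amount."""
    total = sum(elements.values())
    kept, running = {}, 0
    for name, amount in sorted(elements.items(), key=lambda kv: -kv[1]):
        if running / total >= threshold:
            break
        kept[name] = amount
        running += amount
    other = total - sum(kept.values())
    return kept, other

kept, other = split_by_materiality(cost_elements)
# freight and rebates cover 80% of the $1M total; the rest is "other".
```

Detail can then be added back later for any element that turns out to matter.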

Avoid the “It Can't Be” Trap

Tribal knowledge is valuable to any organization. A healthy skepticism is always useful when considering a data set and the analytic outcomes it produces; it does not, however, trump the data, which may challenge accepted truths about the business.

Assign, Don't Allocate

Cost elements that are not customer or product specific, such as business overheads, should be included. To incorporate these in the transaction-level data set, businesses often use arbitrary mechanisms or complex calculations to allocate them. In the end, these elements are spread throughout the business, resulting in profit reductions for all products and customers. If a business takes this approach, then it will have little or no ability to understand whether particular elements pose problems (or opportunities) that need to be managed. Businesses should strive, instead, to assign all costs to a customer, product, or transaction.
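The difference between allocating and assigning can be made concrete with a small sketch. The scenario and numbers are hypothetical: a $200 expedited-shipping charge caused entirely by one customer's order, spread evenly under allocation but pinned to its cause under assignment:

```python
transactions = [
    {"id": 1, "customer": "Acme",  "revenue": 1000.0},
    {"id": 2, "customer": "Birch", "revenue": 1000.0},
]
expedite_cost = 200.0  # incurred solely because of Acme's order

# Allocation: spread evenly, hiding which customer caused the cost.
allocated = [{**t, "cost": expedite_cost / len(transactions)}
             for t in transactions]

# Assignment: charge the transaction that actually incurred it.
assigned = [{**t, "cost": expedite_cost if t["customer"] == "Acme" else 0.0}
            for t in transactions]
```

Under allocation both customers look equally (and mildly) less profitable; under assignment it becomes visible that Acme's orders carry a cost worth managing while Birch's do not.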

Find Natural Reference Points and Examples

Large data sets are particularly hard to manage, and successful analytics are often built on millions upon millions of records. To make data sets more manageable, natural reference points can be selected that are easily verifiable from memory (e.g., monthly revenue totals). One way to make an internal team feel comfortable with the data is to select the company's highest-earning product and show numbers that everyone is familiar with. The total revenue for, say, a seafood company's tuna fish category may be the most recognizable figure, so using it as a benchmark can provide buy-in to the analytic exercise. Examples that support recommendations or hypotheses should be found. For instance, a large chemical company cited the price for a certain product as $3.00 per pound. However, overall profitability for the item was low because a single customer was purchasing a very large volume at less than half that figure. Although the price was reasonable considering the volume, subjecting a high-profile product to such a granular analysis produced a valuable insight that contradicted conventional wisdom in the company. In this case, one large customer's low profitability had obscured the high profitability of others.
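The chemical-company anecdote above comes down to a volume-weighted average. The split of volumes and the discounted price below are assumptions chosen to mirror the story (a $3.00/lb product whose blended price is dragged down by one large buyer paying less than half that):

```python
# Hypothetical sales mix for the $3.00/lb product.
sales = [
    {"customer": "large_buyer", "pounds": 900_000, "price_per_lb": 1.40},
    {"customer": "others",      "pounds": 100_000, "price_per_lb": 3.00},
]

total_lb = sum(s["pounds"] for s in sales)
blended = sum(s["pounds"] * s["price_per_lb"] for s in sales) / total_lb
# The blended realized price is far below $3.00/lb, even though
# most customers pay full price: one buyer dominates the volume.
```

Only by drilling to the customer level does it become clear that the category's low profitability is concentrated in a single account rather than spread across the book of business.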

Develop a System to Capture and Implement Data Improvements

As people learn from the data, they should find ways to incorporate that knowledge into the data set itself; for example, a new allocation method to improve the distribution of costs or a completely new data collection process to improve outcomes. A system should be put in place to capture findings, assumptions, and wish lists. Where appropriate, all of these should be incorporated into the data set.
