The problem with data

  1. Accessing data: One of the bottlenecks with data (or information) is that it is usually hoarded. Access to data is often limited to technology/data teams or to a few exclusive users, so stakeholders come to depend on these teams to provide them with data. They raise requests for data or reports, which the data teams fulfill based on how much time they have on hand. The technology/data teams make decisions on the fly about when to share data, whom to share it with, and in what formats to share it. When a more powerful stakeholder requests data, the need is assumed to be urgent, and data teams may drop everything else to attend to it. When someone less powerful requests data, teams may deprioritize the task and not respond as swiftly. These requests also come in sporadically, so there can be redundant requests from different teams. Working on these requests takes time and requires technology/data teams to switch context from product development to addressing ad hoc requests. This is one instance of a feature black hole, which we saw in Chapter 10, Eliminating Waste – Don't Build What We Can Buy.

    It is imperative that today's product teams start with a data mindset. Data strategy and accessibility must be built into a product team's DNA. We cannot assume that we will handle this when the need arises. In many cases, stakeholders don't know the power of data until we show them. Stakeholders also hold themselves back from seeking data because the process of getting it is hard and cumbersome, especially when it feels like they are imposing on the technology team's time. So, it becomes a Catch-22: technology teams don't build a data strategy because they don't see stakeholders asking for data, and stakeholders don't ask for data because there isn't an easy way to access it.

    Product strategy must proactively plan and set up ways to collect data and share it transparently, without elaborate procedures. The discussion on success metrics is a good indicator of the type of Key Performance Indicators that should be captured. An effective data strategy sometimes doesn't even need complicated digital tools to capture data; simple paper-based observations are sometimes enough. Key metrics around revenue, acquisitions, sales, and so on can even be shared on a whiteboard, with a person assigned exclusively to doing this. This works in a small team with an early-stage product, but finding digital tools in the market that allow real-time visualization isn't very hard either.

  2. Running incorrect experiments: In the nonprofit organization where I worked, the finance team wanted us to build the ability for our investors to donate or invest money every month in the rural borrowers listed on our platform. The problem was that investments/donations were sporadic. There was no way to predict how many investors would invest every month. Because the investment amount was not predictable, we could not determine how many borrowers we should be onboarding. Indian businesses (with the exception of a few utility services) do not have the ability to automatically bill credit cards. So, our best option for getting consent once and receiving money automatically was to set up monthly direct auto-debits from bank accounts. However, the banks required paperwork to be signed and submitted before enabling this.

    The finance team was convinced that investors were not investing every month because we hadn't made this process easier for them. The product team was asked to pick this up as a priority, and we started designing the feature. We soon realized that this was a huge feature to implement (purely based on the complexity of rolling it out and the dependencies on banks to deliver it successfully). We didn't have to estimate story points to figure out how big this was. Also, the paperwork aspect was a government regulation and outside of our control. So, while we could build requests for auto-debits into the product's workflow, the paperwork still had to be done.

    The team was being pressured into delivering this, so we started to gather some data. Why did the finance team think this feature would be so impactful in ensuring predictable monthly investments? The finance team insisted that every single customer they had spoken to wanted this option. Now, 100% of consumers wanting to invest every month is too compelling to ignore. Everyone in the leadership team was now convinced that implementing this feature was crucial for us to get repeat investments. Yet as we dug deeper and looked at our data, we found that only a minuscule percentage of our investors were investing through direct bank debits. The finance team had apparently spoken to only 15 people over the past three months. In a consumer base of over 9,000 folks, 15 (the numbers are only indicative and not actuals) was not a sample big enough to base our product decisions on. Essentially, this was a decision based not on facts, but on an opinion arising out of a limited context. Did it make sense for us to invest in a feature that was impacting so few of our consumers? If all our investors who were investing through other payment options, such as credit cards, debit cards, and payment wallets, had to transition to paying through auto-debit, it would have presented a huge operational burden for us, given the paperwork involved. It was clear that, given our finance team's capacity, this was not doable.
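
    To put that gap in perspective, here is a minimal sample-size sketch using the indicative numbers above; the 95% confidence level and ±5% margin of error are assumptions for the illustration, not figures from our actual study:

    ```python
    import math

    # Rough sample-size check for estimating a proportion in a finite population.
    # All numbers are indicative assumptions, not actuals from the study.
    population = 9000        # approximate consumer base (indicative)
    z = 1.96                 # z-score for ~95% confidence
    margin_of_error = 0.05   # +/- 5 percentage points
    p = 0.5                  # most conservative assumed proportion

    # Sample size for an infinite population, then a finite population correction
    n_infinite = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    n_required = n_infinite / (1 + (n_infinite - 1) / population)

    print(f"Responses needed for +/-5% at 95% confidence: ~{math.ceil(n_required)}")
    print("Responses actually gathered: 15")
    ```

    Even under these generous assumptions, several hundred responses would be needed before the "everyone wants this" claim carried statistical weight; 15 conversations fall far short of that.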

    Once we had invalidated the basis on which the claimed impact on business outcomes rested, we ran a new experiment. We were now trying to validate whether our investors (who were investing through other payment options, such as credit cards, debit cards, and payment wallets) were even inclined to invest in us every month. If so, how many such investors were ready?

    We built something very simple to validate this. We introduced an option for users to tell us whether they wanted a reminder service that would nudge them to invest in rural entrepreneurs every month. It took us half a day to add this option to our investment workflow. If they chose this option, we informed them that we hadn't yet built the feature and thanked them for helping us to improve our product. After three months of observation, we found that ~12% (the numbers are only indicative and not actuals) of the consumer base (who transacted on our website) opted in.

    This was a big improvement over our earlier target base. While it was a good enough indicator and worth exploring, we were still limited by our inability to automatically charge credit cards. So, we limited our solution to a reminder service that sent automated emails on specific dates to the customers who had opted in for re-investment, and we tracked conversions from those emails. We explored our data to see if there was a trend in investments peaking on certain days/dates each month, and found that there was indeed a peak around certain dates. We scheduled our reminder emails to be sent on the peak investment date of each month.
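
    A minimal sketch of the kind of peak-date analysis involved is shown below, assuming a hypothetical transaction log; the column names and values are illustrative, not our actual schema or data:

    ```python
    import pandas as pd

    # Hypothetical transaction log; in practice this would be pulled from the
    # investments database. Column names and values are assumptions for this sketch.
    transactions = pd.DataFrame({
        "investor_id": [101, 102, 103, 101, 104, 102],
        "amount": [500, 1200, 300, 800, 650, 900],
        "paid_at": pd.to_datetime([
            "2017-01-02", "2017-01-28", "2017-02-01",
            "2017-02-03", "2017-03-01", "2017-03-02",
        ]),
    })

    # Total amount invested by day of the month, to spot recurring peaks
    by_day = transactions.groupby(transactions["paid_at"].dt.day)["amount"].sum()
    peak_day = by_day.idxmax()

    print(by_day)
    print(f"Schedule reminder emails around day {peak_day} of each month")
    ```

    Aggregating by day of the month is the simplest cut; the same grouping could be done by weekday instead, if the peaks turned out to follow paydays rather than calendar dates.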

    After three months of observing conversions from reminder emails, we figured that this strategy was working well enough for us. We continued to sign up more investors and to socialize the payment reminder on our website.

  3. Learning from the wrong data: What if we have compelling data, but the way we chose to collect it is flawed? Design has a great influence on how people use products: consider coercive design versus persuasive design. These concepts boil down to simple things, such as which option presented to the user is checked by default. If we select an option to donate $1 to charity by default, and we keep it hidden at the bottom of a page where few users will ever notice it, then we can't claim that visitors to our website are very generous.

    Basing product decisions on data alone is not enough. It is necessary to collect ample verifiable evidence, but it is also important to capture this data at a time when the consumer is in the right context. For instance, asking for feedback on a website's payment process two weeks after a customer purchased something trivial may not work very well. Context, timing, content, and sample size are key to finding data that is relevant for use.

  4. Bias: Gathering data is only half the battle. Interpreting data is the dangerous other half. Human cognitive biases account for a big part of the incorrect decisions that we make based on data. We feel great that we have based our decisions on data, and so we don't even recognize the inherent biases we bring into making those decisions.

    For instance, my biases influence how I configure my social feeds. I found that a lot of content on my feeds was not appealing to my tastes or opinions. I started unfollowing a lot of people. I got picky about the groups and people I followed. Voilà, my social feed was suddenly palatable and full of things I wanted to hear.

    This personal bias could easily trickle into how we make recommendations on product platforms. We make recommendations of songs/movies/products/blogs based on our consumers' own likes and dislikes. This means that we are essentially appealing to the confirmation bias of our consumers. The more content we show them that appeals to their existing interests, the more likely they will be to engage with us. This shows up as a positive trend in our engagement rates, and our recommendation strategy gets further strengthened. In the long run, though, we are slowly but silently creating highly opinionated individuals who have very little tolerance for anything but their own preferences.

    Whether this is good or bad for business is dependent on the business intent itself. However, the bigger question to ask is: how do we learn something new about our customers, if we don't go beyond their current preferences?

    Our bias also influences how we interpret data. For example, we might start with a hypothesis that women don't apply for core technology jobs. This might mean that our ads, websites, and social content have nothing that appeals to women. If the messaging and imagery on our careers website are attuned mainly to middle-aged men in white-collar jobs, can we really claim that we can't find women who are qualified to work with us? Does this prove our hypothesis correct?
