Today’s businesses generate an enormous amount of data simply by performing their normal operations. It’s coming in from a number of departments: sales, product development, admin, IT, marketing, finance, and others. Like a double-edged sword, though, this flow of information can hurt or help a company.
When analyzed, data is a rich mine of business insight. On the flip side, data that is not handled efficiently can slow your efforts to stay current in a fast-changing industry.
Previously, the question was ‘Do we have the data?’ The new question is ‘Are our data silos really the best way to move forward?’
Just like various departments contribute to the top and bottom lines of an organization, data across business functions contributes to a realistic view of the whole. For smoother and smarter decision making, data should be drawn from all over the company, not from just a single branch of it.
As organizations see the value in data-driven decision making, they are investing heavily in analytics. Bigger analytics teams are being created, and investments in technologies and infrastructure are being made with the goal of deriving better insights from ever-increasing funds of data. And yet, one problem still looms: efficiency.
Significant Data, Inefficient Process
On the face of it, strictly separating data by function seems like a good idea. In reality, though, it creates drains on time and money that can go unnoticed. Common examples include:
- Redundancy – Many analytical teams working within various business functions find themselves recreating the same datasets. They may unknowingly be reinventing the wheel, applying similar rules and processes that their counterparts have already used.
- Inaccuracy – When multiple teams (in-house and outsourced) are deployed, various inaccuracies can creep into reports despite the use of similar KPIs.
- Inefficiency – Data preparation typically consumes 30-40% of the total effort spent on analysis, and even more for standard business reports. When the same data is prepped several times over, costs rise and insights arrive later than they should.
Let’s take this concept into the real world. One leading technology manufacturer had more than 100 analytics team members supporting its global marketing operations. The company estimated that the average data analyst spent at least 15 hours a month on data processing and transformation. Assuming each analyst cost $25 per hour, this inefficiency was costing them around $450,000 annually.
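The figures in this example can be sanity-checked with a quick back-of-the-envelope calculation (the analyst count, hours, and hourly rate come from the example above; the rest is straightforward arithmetic):

```python
# Back-of-the-envelope cost of redundant data preparation,
# using the figures from the manufacturer example.
analysts = 100         # analytics team members
hours_per_month = 15   # data processing/transformation per analyst
hourly_rate = 25       # dollars per analyst-hour
months = 12

annual_cost = analysts * hours_per_month * hourly_rate * months
print(f"${annual_cost:,} per year")  # → $450,000 per year
```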
Creating a Smarter Analytics Dataset Process
There is a more efficient way: maintaining a central analytical dataset for each business function. This dataset can be updated periodically and shared with all stakeholders, and all analytics and business reports should be built from it. By design, this eliminates the redundancies, inaccuracies, and inefficiencies of the model described above.
When creating this central dataset, there are a few points to consider. For the sake of demonstration, let’s say we are approaching this from the perspective of the marketing team. We would need to ask:
- What key business functions (sales, customer service, social media, etc.) interact with marketing? Including more business elements would increase the complexity of the process, so this needs to be given careful thought.
- What KPIs should we track?
- At what level should the dataset be created? (Customer, Business Segment, Business Region, Business Unit, Product / Service, etc.)
- What business questions will we answer using this data? Brainstorm on this with various stakeholders both inside and outside the marketing team.
- How far back into the past do we need our data to reach to perform this analysis?
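Once these questions are answered, the mechanical core of the approach is a join of each function's records on the agreed grain. Here is a minimal sketch in plain Python, assuming a customer-level grain; the table and field names (`revenue`, `tickets`) are purely illustrative:

```python
# Sketch: consolidating function-level data into one central dataset
# at the customer grain. Field names are hypothetical examples.

sales = [
    {"customer_id": 1, "revenue": 1200},
    {"customer_id": 2, "revenue": 800},
]
service = [
    {"customer_id": 1, "tickets": 3},
    {"customer_id": 2, "tickets": 1},
]

def build_central_dataset(sales_rows, service_rows):
    """Merge per-function records on the agreed grain (customer_id)."""
    central = {row["customer_id"]: dict(row) for row in sales_rows}
    for row in service_rows:
        key = row["customer_id"]
        central.setdefault(key, {"customer_id": key})
        central[key].update(row)  # add this function's fields
    return list(central.values())

for record in build_central_dataset(sales, service):
    print(record)
```

In practice this join would live in a data warehouse or ETL pipeline rather than application code, but the principle is the same: each function's data is prepared once, at one agreed level of detail, and every downstream report reads from the result.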
Again, let’s look at an actual example of how this works. The customer engagement team of a leading global hotel chain decided to follow this approach. They created a comprehensive data mart that could be used for various levels of business analysis. After several iterations and changes, the dataset matured to the point where more than 80% of business KPI measurements and ad-hoc analyses could be carried out using a single dataset. The greater efficiency saved the organization more than $1.5 million annually.
Single Dataset Pros and Cons
Aside from increased efficiency, one of the major advantages of creating such a dataset is its user-friendliness. Managers and other non-analytics team members can connect to the dataset directly and perform ad-hoc analysis using tools like Tableau and Spotfire, without needing to understand the underlying data preparation or analytical process.
Initially, however, there might be some resistance to creating this massive and comprehensive dataset. Internal technology and other support teams may be especially hesitant, given the upfront cost and complexity. In the long run, though, the cost of maintaining such a system is trivial compared to its benefits.
If your team members are spending a lot of time preparing data and generating reports, take a look at your system. It could very well be that the inefficiencies and redundancies of a function-bound system are slowing down your progress rather than speeding it up.
Authored by Deepesh Kothari, Hospitality Consultant at Absolutdata