Are you planning on making 2016 your big data analytics breakthrough year? Then this series is for you. We’ve already discussed some of the fundamentals of data analytics and the technology behind it; in this post, we’ll talk about common data complications.

Ask a data scientist if the process of data analytics is problem-free and we can guarantee the answer will be an emphatic ‘No!’ The challenges that data scientists face are many, but they can be summed up in a single word:

Time.

According to one survey, simply preparing data for analysis consumed up to 90% of the available time for a significant share of data scientists. What’s taking so long? The sheer volume of information that analytics processes is partly to blame, but that’s far from the whole story. Notice which other factors ranked higher than data volume:

  • Integrating data from multiple sources
  • Transforming, cleaning, and formatting data
  • Combining relational and non-relational data

That’s a lot of work, and a lot of time.
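To make those three tasks concrete, here is a toy sketch using pandas. The data, column names, and mismatched formats are all invented for illustration; real pipelines face the same problems at far greater scale.

```python
import pandas as pd

# Two hypothetical sources describing the same customers,
# with inconsistent key names and formats.
crm = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "signup_date": ["2015-01-02", "2015-03-15", "2015-07-09"],  # dates as text
})
billing = pd.DataFrame({
    "CustomerID": [1, 2, 4],
    "revenue": ["1,200", "850", "400"],  # numbers stored as text
})

# Transform, clean, and format: parse dates, strip thousands separators.
crm["signup_date"] = pd.to_datetime(crm["signup_date"])
billing["revenue"] = billing["revenue"].str.replace(",", "").astype(float)

# Integrate: reconcile the key names, then join the two sources.
billing = billing.rename(columns={"CustomerID": "customer_id"})
combined = crm.merge(billing, on="customer_id", how="left")
print(combined)
```

Even in this tiny example, note that customer 3 has no billing record and customer 4 has no CRM record; deciding how to handle such mismatches is exactly the kind of judgment call that eats up preparation time.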

Why Data Takes So Much Time

To understand where all this time is going, we have to understand two things:

  1. Data comes in many different forms.
  2. Data needs to be in compatible forms to work together.

Think back to grade school math class – specifically, to fractions. Imagine a problem that mixes fractions without common denominators and numbers with decimal points. To work the problem, you’ll first have to convert the decimals to fractions. Then you’ll have to find a common denominator for all the fractions in the problem. Then you get to convert every fraction to that common denominator. Only then do you get to actually start the math – and you’re still one step away from the answer.
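The grade-school procedure above has a direct analogue in code. A toy sketch using Python’s built-in `fractions` module, with made-up numbers:

```python
from fractions import Fraction

# A "problem" mixing a decimal with fractions that lack a common denominator.
terms = [0.25, Fraction(1, 3), Fraction(1, 6)]

# Step 1: convert the decimal to an exact fraction (0.25 -> 1/4).
terms = [Fraction(t).limit_denominator() if isinstance(t, float) else t
         for t in terms]

# Steps 2 and 3: Fraction finds and applies the common denominator
# automatically; only now can we "actually start the math".
total = sum(terms)
print(total)  # 3/4
```

Data conversion works the same way: the reconciliation steps are pure overhead, but skipping them makes the actual analysis impossible.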

Of course, this is an oversimplification of real data processing, especially when you consider that data scientists often work with data that’s incomplete or inaccurate. Imagine that math problem again, but this time with some of the numbers missing. It’s going to take much longer to figure out.
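Incomplete and inaccurate values usually have to be handled explicitly before any summary statistic can be trusted. A minimal sketch, assuming pandas and invented sensor readings:

```python
import pandas as pd

# Hypothetical readings with a gap and an obviously implausible value.
readings = pd.Series([10.2, None, 10.8, 999.0, 11.1])

# Treat implausible values as missing rather than silently averaging them in.
readings[readings > 100] = None

# Dropping versus imputing missing values is an explicit analyst decision;
# pandas skips NaN when computing the mean by default.
mean_clean = readings.mean()
mean_filled = readings.fillna(readings.mean()).mean()
print(round(mean_clean, 2))
```

The threshold of 100 here is arbitrary; in practice, deciding what counts as “implausible” is itself part of the cleaning work.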

Given that data generally isn’t perfectly clean and ready to use as-is, we have to admit that there’s no magic solution to data problems. But there are ways to mitigate the issues with proper management.

Time Management is Data Management

The data scientist’s best friends are time and prioritization. The question isn’t if problems will arise; it’s how prepared you’ll be when they do. Here are the best techniques for averting a data deluge:

  • Start Early. Fundamental tasks like tagging and correction aren’t the most exciting things you can do with data. But they’re 100% unavoidable, so do them early in the process.
  • Prioritize Your Most Important Data. Don’t get distracted by the massive amount of data being dumped into your platform. Identify the data with the highest predictive potential, or the data that meets your most pressing business need, and focus your cleaning efforts there (at least initially).
  • Don’t Be Afraid to Get Help. It’s not always feasible, or even advisable, to keep every analytics skill you’ll need in-house. Hiring outside experts frees up your own data scientists to do what they do best, and it gives you room to scale as demand grows.

Data challenges are unavoidable, but by the same token, they’re not unconquerable. Good management techniques not only help you manage your data; they can put you on the fast track to getting results from your big data journey. Our final post in the series will look at the impact big data analytics can have on your business decisions.

Authored by Titir Pal, Director – Analytics at Absolutdata and Sukhmeet Sethi, Senior Technical Architect at Absolutdata