Business analytics: 3 trends for 2016

Data, and by extension data analytics, are becoming increasingly important for business. At the same time, the data deluge makes it harder every day to make sense of it all.

Here are three trends you should keep in mind for 2016.

1. Don’t shoot from the hip

Numbers are becoming more popular with most people, but the more numbers we get, the more useless many of them seem to be. Especially when they are pulled out of a hat: why would you factor such numbers into your decision-making process?

Polling your audiences is fine.
But that is not a statistic that adds up exactly to something like 97%, is it?
Or are you keeping tallies of your straw polls and then doing the statistics?

Comment by DrKPI on Adrian Dayton’s Clearview Social blog

View slide on Flickr – measure-for-impact – DrKPI

Google Flu Trends illustrates this problem further. It:

– looks at historical data – descriptive analytics and research – and
– tries to predict what might happen – predictive analytics – with the help of a model built from that historical data.

The results are supposed to help us better understand how the flu will spread next winter. Unfortunately, in the Google Flu Trends versus National Institutes of Health (NIH) challenge, the winner is the NIH. Google's estimates are simply far off from the actual data the NIH produces for policy makers and health professionals.

2. Bad data result in bad decisions

Publishing rankings or product tests is popular. Since some readers devour such rankings, publishers can sell more copies, which keeps advertisers happy.

A real win-win situation, right? Not so. Wrong decisions can produce undesirable outcomes. For instance, attending the wrong college, or polluting more than the test results indicate (think Volkswagen and #dieselgate), is not something we want.

Lucy Kellaway felt so incensed about the ever-growing acceptance of errors in corporate circles that she wrote:

…I would be exceedingly displeased to learn that the bankers to whom I was handing over a king’s ransom were being taught that errors were perfectly acceptable.

This mistake-loving nonsense is an export from Silicon Valley, where “fail fast and fail often” is what passes for wisdom. Errors have been elevated to such a level that to get something wrong is spoken of as more admirable than getting it right.

By collecting data with flawed methods, we produce rankings or test results that can seriously hurt people. For instance, when drug certification tests are done improperly and the regulator has no idea, unknown side effects can kill people.

Using the wrong test results to approve or certify a car can have dire consequences as well. Volkswagen is accused of manipulating tests, and the public got more pollution than it bargained for. VW is working on fixing the 11 million vehicles affected by the diesel cheat, but this will not undo the damage to the firm’s reputation or to our health.

3. Check before you trust the method used

It is always wise to take five minutes to run a quick acid test on any study report we see, such as:

– what does the methodology tell us (e.g., we asked university deans to rank their competitors); and

– do the measures used make sense (e.g., a single question about how a university developed or improved its study programmes – result: ASU is more innovative than Stanford or MIT… who are you kidding?).

ArtReview publishes an annual ranking of the contemporary art world’s most influential figures. In short, it helps if you live in London or New York, so the ArtReview editors or journalists are aware of who you are.

I asked for an explanation of how these rankings are put together:

Dear Sir or Madam
I would like to know more about the methodology you used for the ArtReview’s Power 100 List.
Can you help… this would be great to use with my students in a class.
I could not find anything on the website that I could show my students.
Professor Urs E. Gattiker, Ph.D.

14 days later I got an answer from the makers of the ranking:

Subject: Re: Message from user at

We are not following a grid of criteria per se, and the list emerges from a discussion between a panel of international contributors and editors of the magazine, who each advocate for the people they feel are most influential in their region. The influence of the selected people on the list is based on their accomplishments in the past 12 months. I have attached here the introduction to the Power 100, which might help you in defining our approach.
I hope that helps,
Best, Louise

A grid of criteria – what is that? As the answer indicates, the office clerk who replied has no clue about the research methodology used. One could start to believe that this Top Art list emerged from a discussion, or from a straw poll – a totally chaotic approach.

You can view the attachment that explains this sloppy method below.

Download the ArtReview criteria with this link.

A friend of mine smiled, and said:

For me this is a great list, Urs. Those on the list rarely if ever represent value for money for serious art collectors. Instead you get buzz and have to pay for their image. The list tells me who we do not need to work with. We use other experts. These give us more value for money. They help us to complement our award-winning collection.

Bottom line

We all know that data quality is important and frequently discussed. In fact, the trustworthiness of data directly relates to the value it can add to an organisation.

As the image above suggests, doing quality research takes a decent method that results in data that permits careful analysis. Sloppy data are cheap to get, but dangerous if used in decision-making. Such findings are neither replicable nor likely valid.

However, we are increasingly required to present findings in ways that attract more readers. Some, like Inc., master this very well. Another example of theirs I came across was:

Though truly quantifying “best” is impossible, the approach Appelo’s team used makes sense, especially when you read the books that made the list.

The 100 Best Business Books of 2015 by Jeff Haden

And here’s the methodology:
The purpose of our work was to find out which people are globally the most popular management and leadership writers, in the English language.
Step 1: Top lists
With Google, we performed a lot of searches for “most popular management gurus”, “best leadership books”, “top management blogs”, “top leadership experts”, etc. This resulted in a collection of 36 different lists, containing gurus, books, and blogs. We aggregated the authors’ names into one big list of almost 800 people.
Step 2: Author profiles
Owing to time constraints, we limited ourselves to all authors who were mentioned more than once on the 36 lists (about 270 people), though we added a few dozen additional people that we really wanted to include in our exploration. For all 330 authors, we tried to find their personal websites, blogs, Twitter accounts, Wikipedia pages, Goodreads profiles, and Amazon author pages.

So you defer to 36 lists and include the authors mentioned more than once. Fine – and if that does not cover the ones you believe should be on the list because you read and liked their books, no worries: you add a few dozen people (60) and voilà, you have 330 authors. How they were then ranked is totally unclear, but interesting – blog reputation, Twitter followers, etc.
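The counting step described above can be sketched in a few lines of Python. The lists below are invented stand-ins for the 36 scraped "top" lists; only the filter ("keep authors mentioned more than once") comes from the stated methodology:

```python
from collections import Counter

# Invented stand-ins for the scraped "top" lists; each is a list of author names.
scraped_lists = [
    ["Peter Drucker", "Tom Peters", "Seth Godin"],
    ["Seth Godin", "Jim Collins", "Peter Drucker"],
    ["Clayton Christensen", "Jim Collins", "Seth Godin"],
]

# Count how many of the lists mention each author (a set() per list avoids
# counting an author twice within the same list).
mentions = Counter(name for top_list in scraped_lists for name in set(top_list))

# Keep only authors mentioned more than once -- the filter Appelo's team describes.
shortlist = sorted(name for name, n in mentions.items() if n > 1)
print(shortlist)  # ['Jim Collins', 'Peter Drucker', 'Seth Godin']
```

Note that this step only produces the shortlist; it says nothing about how the final ranking was computed.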

3 checks you should undertake before accepting a study's findings

1. Evidence-based management and policy advice

A sloppy method is like following no method.
Can you find a method section, and does the method make sense to you? For example, did the study use a long-form questionnaire to get employment data? Or was it just based on scans of Internet job boards? If the latter, the problem lies with double counting when relying on websites or job search engines.
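The double-counting problem can be made concrete with a toy sketch (all boards, employers and titles below are invented). The same vacancy posted on two job boards inflates a naive count; normalising a few fields before counting collapses the duplicates:

```python
# Invented postings scraped from two job boards; the Acme vacancy appears on both.
postings = [
    {"board": "jobs-a.example", "employer": "Acme AG", "title": "Data Analyst", "city": "Zurich"},
    {"board": "jobs-b.example", "employer": "ACME AG", "title": "Data Analyst ", "city": "Zurich"},
    {"board": "jobs-a.example", "employer": "Widget GmbH", "title": "Controller", "city": "Basel"},
]

def vacancy_key(p):
    # Normalise employer, title and city so the same vacancy collapses to one key.
    return tuple(p[field].strip().lower() for field in ("employer", "title", "city"))

naive_count = len(postings)                              # 3 -- double counts Acme
unique_count = len({vacancy_key(p) for p in postings})   # 2 -- duplicates collapsed
print(naive_count, unique_count)
```

Real-world deduplication is harder (spelling variants, agencies reposting ads), which is exactly why board scans are a weaker basis for employment data than a long-form questionnaire.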

If the method section does not instil confidence that the study was done properly, watch out. And, most importantly, don’t complain about a study before you have read it carefully!

Interesting read: CRDCN letter to Minister Clement – Census long-form questionnaire (July 9, 2010) explains why Statistics Canada needs to get the funds to collect data for the census to provide evidence-based policy data.

2. Minestrone: Great soup but wrong research method

So the study has a decent method section that makes sense and explains things accurately. What are the chances that somebody else could follow the methodology and get the same result?

To illustrate: if the study was done the way I put together a minestrone (Italian vegetable soup), you can forget it. I take whatever vegetables are in season, and each family seasons its soup differently, guaranteed. This neatly illustrates the point: if no systematic method is used, it is not science. For the soup, it means it turns out different every time anyone makes it.

Without a recipe or method followed, you cannot repeat the performance or generalise from your findings.

3. Buyer beware: Clickbait studies using navel-gazing metrics

That usage of Sainsbury’s #ChristmasIsForSharing was just 4% higher than John Lewis’ #ManOnTheMoon is interesting. However, Social Bro’s verdict is based on 50 votes (26 versus 24) from a Twitter poll. In turn, the analytics company uses these data to decide on 2015’s Most Creative Christmas Campaigns. What? Seriously – is their analytics work just as sloppy?

Apparently, even analytics companies like Social Bro resort to such navel-gazing metrics to get more traffic. Such samples are neither representative nor big enough to support any inferences.

Just because something looks a bit better based on three more votes on Twitter does not mean you should invest your hard-earned cash that way. Investing your marketing dollars based on such nonsense is plain dumb.
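The sample-size point can be made concrete. A quick sketch in plain Python (using the poll numbers quoted above) computes a 95% confidence interval for Sainsbury’s share of the 50 votes; the interval straddles 50%, so the poll cannot separate the two campaigns:

```python
import math

votes_sainsburys, total = 26, 50   # the Twitter poll: 26 versus 24
p = votes_sainsburys / total       # 0.52 -- the supposed "4% lead"

# Normal-approximation 95% confidence interval for the true share
se = math.sqrt(p * (1 - p) / total)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"95% CI: {lo:.2f} to {hi:.2f}")  # roughly 0.38 to 0.66

# The interval contains 0.50: with n = 50 the difference is pure noise.
assert lo < 0.5 < hi
```

To shrink that interval to, say, ±3 percentage points, the poll would need on the order of a thousand votes – and even then a self-selected Twitter poll is not a representative sample.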

What is your take?

– what will you change in your data #analytics and #analysis work in 2016?
– what is your favourite example from 2015 illustrating GREAT analytics work and research?
– how do you deal with this data deluge?
– what would you recommend to a novice (ropes to skip)?

More insights about analytics, analysis and big data.

Urs E. Gattiker

Professor Urs E. Gattiker - DrKPI is corporate Europe's leading social media metrics expert (see his books). He continues to work with start-ups. Urs is CEO of CyTRAP Labs GmbH.

3 thoughts on “Business analytics: 3 trends for 2016”

  • 31. December 2015 at 11:34

    Dear Urs

    Thanks for this interesting article. I recently published (in German) something regarding the combination of qualitative and quantitative methods in doing research: (download pdf file)

    For me the best way to handle the many methodological concerns we must address to produce useful data is to use a mixed-method study. A description is given here:
    Nattabi, Barbara / Li, Jianghong / Thompson, Sandra C. / Orach, Christopher G. / Earnest, Jaya: “Family Planning among People Living with HIV in Post-Conflict Northern Uganda: A Mixed Methods Study”. In: Conflict and Health, 2011, 5:18. Online: (accessed 31.12.2015)

    Hope this is useful

    • 31. December 2015 at 11:57

      Dear Jianghong

      Thanks so much for this information. Of course, I read your article right away, since I receive the WZB Mitteilungen regularly in the mail. I particularly liked the example you gave of respondent-driven sampling, an approach well suited to hidden populations (e.g., sex workers, sans-papiers immigrants, etc.). It is defined here as:

      Respondent-driven sampling (RDS) combines “snowball sampling” with a mathematical model that weights the sample to compensate for the fact that it was collected in a non-random way.


      The above resource also points out:

      The dilemma is that if a study focuses only on the most accessible part of the target population, standard probability sampling methods can be used but coverage of the target population is limited. For example, drug injectors can be sampled from needle exchanges and from the streets on which drugs are sold, but this approach misses many women, youth, and those who only recently started injecting. Therefore, a statistically representative sample is drawn of an unrepresentative part of the target population, so conclusions cannot be validly made about the entirety of the target population.

      Thanks Jianghong for this help. Happy New Year.

      Another useful resource on respondent-driven sampling (RDS) is:
      Salganik, Matthew J. / Heckathorn, Douglas D. (2004): Sampling and Estimation in Hidden Populations Using Respondent-Driven Sampling. Sociological Methodology, Vol. 34, pp. 193–239. Retrieved December 31, 2015 from
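To make the weighting idea concrete, here is a minimal sketch with invented numbers (not data from either paper): well-connected respondents are more likely to be recruited through their contacts, so each respondent is down-weighted by 1 / degree – the intuition behind the estimators discussed by Salganik and Heckathorn:

```python
# Toy RDS-style estimate (invented numbers, illustration only): each respondent
# reports their network degree; weighting by 1/degree compensates for the
# over-sampling of well-connected people.
sample = [
    {"degree": 10, "uses_service": True},
    {"degree": 2,  "uses_service": False},
    {"degree": 5,  "uses_service": True},
    {"degree": 1,  "uses_service": False},
]

weights = [1 / r["degree"] for r in sample]

# Naive (unweighted) prevalence versus the degree-weighted estimate
naive = sum(r["uses_service"] for r in sample) / len(sample)
weighted = sum(w for w, r in zip(weights, sample) if r["uses_service"]) / sum(weights)
print(naive, round(weighted, 3))  # 0.5 versus roughly 0.167
```

With these invented numbers the estimate drops from 0.50 to roughly 0.17, because the two service users happened to be the best-connected (and thus over-sampled) respondents.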

      • 8. January 2016 at 7:30

        I have read the above blog post and the comment written by Jianghong Li.

        As the above suggests, big data and analytics are changing the face of marketing. Nevertheless, this does not make marketing more strategic (What is strategy?).

        However, it can help make marketing more precise. That is to say, instead of advertising in many places, we can target our ads better to reach our audience both on- and offline (i.e., digital and print channels).

        This makes marketing analogous to the guided bombs that have changed, and continue to change, air combat. Unfortunately, as military experts tell us, air strikes alone do not win the war against terrorists or ISIS in Syria / Iraq.

        The above illustrates that marketers today have many “weapons” and plenty of “recon” in terms of tools and access to data. What they often seem to lack, however, is effective planning. The ability to assess future scenarios regarding enemy / competitor moves and countermoves matters: it facilitates planning and helps ensure the strategy can be pursued. As importantly, it helps secure the company’s Unique Selling Proposition against enemy / competitor attacks or encroachment.

