6 Myths About Business Research You Might Still Believe

What makes research not just good, but great?  How can you ensure the best return on your time, and the most impact for your organization?

By Angela Bradbury on 22nd March 2017

Business research has many facets – competitor analysis, product feedback and new market exploration, to name a few. But what separates merely good research from great research, and how do you get the best return on your time and the most impact for your organization?

There are many choices about where to focus: quantitative vs qualitative, external vs internal, historical analysis vs prediction. Nobody can tell you the right balance for what you're trying to achieve, but there are a number of myths surrounding business research that are worth being aware of. This article aims to debunk six of the most common.

Myth 1: The quality of the research is down to the researcher, not the tools

People in business research often subscribe to the notion that it’s “a poor workman who blames his tools” - that the calibre of a researcher dictates the quality of a report, rather than the quality of the tools and resources they are using.

However, this is over-simplistic.  Being a good researcher necessitates having certain skills, yes, but also selecting the right tools for the project and employing them in an effective manner.  Even before that, it’s important to assess your existing resources and track down or gather the new resources you need to complete the project.

Of course, the most beautiful and comprehensive dataset in the world is of little use unless it is carefully analyzed and then acted upon. Once you've gathered the right resources and tools, analyze your dataset rigorously to extract actionable, unbiased insights into the opportunities and challenges facing your organization. Finally, communicate your conclusions effectively to senior management to ensure your work has its full impact.

Myth 2: Case studies can be the foundation of strategy

Reasoning by analogy is a powerful tool: drawing parallels with previous situations in order to apply past lessons to new and unfamiliar business challenges. It is used frequently and often with great success; Taiichi Ohno famously developed his world-renowned kanban system after observing how American supermarkets restocked their shelves. Reasoning by analogy allows decision makers to arrive at a solution quickly, without expensive iterations of trial and error.

As the foundation of a strategy, however, it can fall short.

Firstly, because reasoning by analogy is such an intuitive way of arriving at a solution, decision makers often do not realize they are using it, whilst still attributing their decisions to deductive reasoning. As a result, conclusions may not be properly challenged, and many (potentially better) solutions may not be explored.

Additionally, looking to examples of ‘best in class’, rather than encouraging more innovative approaches, leads to simply copying what has already been achieved.  Improvements thereafter can only be incremental, rather than step changes.  If you really want to get away from doing things the way they’ve always been done, there’s little alternative to taking the risk of pioneering something new.

In any case, it’s important to understand the limits of case studies. They are more often stories than data. They may not give the whole picture of what was involved, or the long-term effects of a change once implemented. They may not translate to the specifics of your particular project. And beware of confirmation bias: you may be unconsciously seeking out case studies that validate your existing hypothesis.

Reasoning by analogy is certainly powerful, but it must be used with caution and not at the expense of other research techniques, as we’ll explore in the next myth.

Myth 3: Research can replace innovation

In a mature or stable industry where information is in abundance, it is all too easy to fall back on the tried and tested method of business research. The move towards ‘Big Data’ has compounded the issue as companies are now able to crunch vast amounts of data in order to produce easy-to-digest charts, diagrams and reports.

The problem with this approach is that it can confine the solution space to what already exists. A mature organization might make tweaks and adjustments, but by and large what comes next is an extension of what has gone before. Couple this with the fact that barriers to entry in many industries are as low as they’ve ever been, and you could quickly find yourself in direct competition with brand new, disruptive competitors doing things you had never even dreamed of - a lesson Blockbuster and Kodak learned too late.

It is therefore essential for incumbent businesses to push innovation as well as research, and there are plenty of ways to do so. The consulting firm McKinsey & Company recognizes this and provides ways in which incumbent companies can reframe themselves to remain competitive.

Myth 4: The whole is always at least as great as the sum of the parts

The business world has repeatedly learned otherwise over the past decade. The modularity of businesses, usually along functional or geographic lines, means that an organization can highly optimize its individual parts. A small team can quickly become expert, and hence efficient, at the role it has been set up to do. This does not, however, guarantee increased efficiency at an organizational level, as Harvard Business Review notes in a piece on internal system optimization.

The takeaways from this should also be applied to business research. An extensive, in-depth investigation into one area of the market cannot by itself guarantee that a business will make successful decisions. The quality of a research report depends on the researcher’s ability to look at the situation more broadly and form conclusions based on what is, and sometimes what is not, in front of them. This is a theme that carries through to our next myth.

Myth 5: If you can understand and beat your competitors, you’ll win the war

This is the sister of myth 3, where we discussed firms’ over-reliance on crunching huge sets of historic data rather than focusing on new innovation; the same lesson applies to ‘competitive intelligence’ (CI).

CI is an activity many companies have invested in heavily over the past decade or so, but it is usually treated as an information-gathering exercise: a race to see how much data they can glean about their competitors. This is often done without considering wider market conditions, or even without any intent to analyse the dataset meaningfully once gathered. Millions are wasted by firms compiling vast databases that yield no useful insight into the challenges and opportunities they face.

It doesn’t need to be that way. CI should mean identifying the opportunities and risks that arise from changing market conditions early enough for an organization to react effectively - by adapting strategy, developing a new product or entering a new market, for example. Used properly, CI can yield a huge strategic advantage.

Myth 6: Specialist expertise is like the branches of a tree

Ancestry, workplace hierarchies, probability, even software design – we use the framework of a tree’s branches to map out many of the ways we seek to understand the world around us. The instinct is understandable: many issues are complex, and it helps to break them down into smaller, more manageable pieces. It can be a powerful tool. However, it is usually an oversimplification. A hierarchical view of the workplace, for example, may accurately depict reporting lines and give an indication of who works with whom. But it doesn’t say who likes working with whom, which teams eat lunch together, or which colleagues play squash at the weekend.

The same is true of research. Compartmentalizing the work into tasks or analyses might help you focus and get it done, but you should always be looking for the connections between them. A trend in one dataset might correlate with a different trend in another. To evaluate causation, you might need to read analyst reports or speak with a few experts. Drawing conclusions from each data source separately will never get you to the holistic view that separates good research from great research.

Are there any that you think we’ve missed? Let us know by tweeting us on @ChimeAdvisors.