Decision-Makers' Guide: The 12 Killer Questions

Towards helping decision-makers in client companies who are not marketing intelligence specialists make informed, consumer-research-based decisions
Dr David Smith, DVL Smith Group

The guide takes the form of a hierarchy of twelve questions, through which we would recommend decision-makers work in order to arrive at an informed view of the robustness of the consumer evidence they are using for decision-making.

The 12 Killer Questions

1. What impact will this consumer research evidence make on your final decision?

Is this a piece of evidence that is likely to affect your decision-making, or will your decision be primarily based on judgement and/or other information? Would the absence of research evidence mean the decision held an ‘unacceptable’ level of risk or a ‘manageable’ level of risk, or would it hardly make any difference? Use your answers to these questions to decide how much time you should now invest in working through our checklist of questions to assess the robustness of the evidence.

2. How objective was the source of this evidence?

Thinking about the agency or research department that conducted the research, is there any way they are linked with the need to produce a particular outcome for political, financial, or methodological reasons? In short, it is important to evaluate how likely it is that this piece of evidence has been subject to systematic ‘spin’ aimed at presenting a particular view of the world, including possibly using ‘selected soundbites’ to distort the overall evidence in order to promote a particular point of view.

3. Does the new incoming evidence square with your prior understanding of this subject?

Remember, your assessment of whether this evidence makes sense is critically important. You must trust your judgement. Your experienced eye is the critical starting point for making sense of the evidence. So, what reliance would you place on the consumer evidence? What would your score be out of 10, where 10 is ‘total belief in the research evidence’ and 1 means you are ‘extremely unsure about how much reliance to place on the meaning of the research evidence’, given what else you know?

If this study confirmed your existing views, ask yourself whether you need to ask the agency that supplied the evidence whether sufficient fresh thinking has been applied to challenge this dominant view. In short, you need to reassure yourself that this is indeed the true position.

If the study has produced completely fresh, unexpected insights on the problem, then ask what ‘checks and balances’ have been put in place to ensure that these more surprising observations truly reflect what is really going on in the marketplace.

4. Can you apply the evidence from your study to more general future scenarios?

It is important to develop a view as to whether the consumer evidence you are reviewing only relates to a specific, highly focused business scenario, or whether there are general lessons that can be applied in a way that will help your broader decision-making challenges.

For example, if you are concerned that findings of a study conducted in one particular country may not be easily ‘generalised’ out to wider parts of the globe, why not ask someone who lives in another country to ask the local equivalent of the UK’s ‘ordinary man on the Clapham Omnibus’ whether a one-line summary of your key research findings makes sense?

You also need to assess whether the evidence, collected at a point in time, could apply to different business situations and scenarios over a period of time. Specifically, when making decisions based on analysis like this, it is important to know whether the evidence related to a moment frozen in time, or whether it is the kind of evidence that will still be pertinent over a longer period. This is important because many decisions go wrong when the evidence on which they were based quickly becomes outdated. So, it is important to have a clear perspective on the extent to which your research provides a strategic understanding that holds good over time, as opposed to a temporary insight into a changing phenomenon.

5. What is the success track record of the method being employed?

It is worth exploring how many times the methodology or technique employed in this particular study has been used by the research team in the past, what observations the team would make about the past record of success and/or failure of this particular technique, and what overall lessons the agency has learnt over the years in deploying it.

A good tip is to ask one of your research team to ‘Google’ writers and ‘pundits’ (from the business and academic communities) who have prepared models, frameworks or schemas that help explain how this particular method ‘works’. This will add power to your understanding of your current consumer research evidence.

6. Now some specific technical questions to ask about issues that are known to affect the robustness of survey research results:

A. Did respondents answer the survey questions within a meaningful frame of reference?

As everyone knows, surveys can sometimes produce ‘flaky’ evidence. This is often because the respondents taking part are cast in a totally unfamiliar situation and/or asked about issues that are not particularly important to them. They will, nonetheless, give an answer, but it will be questionable evidence. So it is important to check that the survey put its questions to respondents who had some knowledge of, and engagement with, the topic.

B. Has the right balance been struck between the overall coverage of the topic and the depth to which each issue was explored?

A weakness of many surveys is that although they cover a broad agenda they do so in a fairly shallow way. This is acceptable on certain studies. But on other studies this can produce ‘thin’ evidence that oversimplifies key issues.

C. Were the survey questions asked in an objective way?

A useful tip here is to ask a member of your research team to answer the questionnaire themselves, as if they were a respondent. Then ask this person whether they thought the interview had done justice to what they knew and/or wanted to say about this topic. Then ask what these observations mean for the way you use the results.

D. Are there people to whom we should have spoken in the survey but who were excluded? (sample bias)

If you were to compare the profile of the ‘perfect’ / ‘ideal’ target sample population of respondents you should have been speaking to in this study with the profile of those you actually interviewed, are there any critical points of departure between the two profiles? If so, explore what implications these points of difference have for the interpretation of the findings. Specifically, ask whether enough has been done to ‘compensate’ for this discrepancy. In short, just how ‘safe’ is it to proceed with this evidence if it is based on an off-target, unrepresentative sample?
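
To make this profile comparison concrete, here is a minimal, purely illustrative sketch in Python; the age bands, proportions and the five-point ‘flag’ threshold are all invented assumptions, not figures from any actual study.

```python
# Hypothetical illustration: lining up the 'ideal' target profile against the
# profile actually achieved, and flagging any sizeable gaps. All figures invented.

target_profile = {    # proportions the sample should ideally have contained
    "18-34": 0.30,
    "35-54": 0.40,
    "55+":   0.30,
}

achieved_profile = {  # proportions actually interviewed
    "18-34": 0.18,
    "35-54": 0.45,
    "55+":   0.37,
}

for group, target in target_profile.items():
    gap = achieved_profile[group] - target
    flag = "  <-- discuss with the agency" if abs(gap) > 0.05 else ""
    print(f"{group:>6}: target {target:.0%}, "
          f"achieved {achieved_profile[group]:.0%}, gap {gap:+.0%}{flag}")
```

Gaps like the one flagged here are exactly the ‘points of departure’ worth raising with the research team before relying on the findings.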

E. How sensitive is the survey evidence to a particular technical feature of the methodological approach used?

Thinking about the most critically important piece of evidence presented in this survey, explore just how sensitive this finding is to a ‘technical irregularity’ that could steer the findings wildly off course. If it is sensitive, establish whether you can adjust for this. An example here would be a survey in which the responses of respondents who make up your target market were weighted to better reflect their true incidence in the population, but where the weighting was based on flawed population information.
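
As an illustrative sensitivity check of the kind described above, the hypothetical Python sketch below re-weights the same survey result using two different sets of population shares – one assumed to be correct, one assumed to come from a flawed or out-of-date source. The segment names and every figure are invented for illustration only.

```python
# Hypothetical sketch: how sensitive is a weighted headline figure to the
# population shares used to build the weights? All numbers are invented.

survey_result = {"segment_A": 0.50, "segment_B": 0.20}  # e.g. % intending to buy

def weighted_estimate(population_shares):
    """Weight each segment's survey result by its assumed share of the population."""
    return sum(share * survey_result[seg] for seg, share in population_shares.items())

assumed_correct_shares = {"segment_A": 0.40, "segment_B": 0.60}
flawed_source_shares   = {"segment_A": 0.60, "segment_B": 0.40}

print(f"Estimate using assumed-correct shares: {weighted_estimate(assumed_correct_shares):.0%}")
print(f"Estimate using flawed shares:          {weighted_estimate(flawed_source_shares):.0%}")
# A large gap between the two estimates signals that the finding is highly
# sensitive to this technical feature, so the weighting source must be checked.
```

If the two figures sit close together, the finding is robust to this particular irregularity; if they diverge, the weighting source becomes a question you need answered before the evidence is used.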

7. How close was your study to the ‘ideal’ study?

Market research is about arriving at a fit-to-purpose design based on making different trade-offs about: the statistical precision required; the depth of understanding needed of the issue; various practical and ethical considerations; timetabling; and budgetary issues.

Although a fit-to-purpose design is the goal, it is helpful to first (theoretically speaking) construct the ‘ideal’ research solution that could be employed to deal with this problem. Then assess just how far away the research approach actually chosen is from this ‘ideal’. Here, it is particularly important to assess whether there are any fundamental omissions in the evidence that, in an ideal world, would have been collected to help you understand this issue.

It is worth imagining just what this critically important missing data might have told you, and weighing up the implications of taking a decision without this evidence. Then ask: should your decision be delayed while you attempt to plug this gap, or do you feel able to proceed with a decision even though there is a gap in your knowledge?

In exploring this issue of the difference between the design used and the ‘ideal’ design, it is useful to review whether the method used to collect the data – online, telephone, face-to-face – was appropriate. Specifically, to what degree, if at all, do your research team think they would have obtained different results with a different method? If different, would these be substantially or only slightly different? If substantially different, has this been factored into the current interpretation?

In summing up this line of enquiry it is helpful to recognise that survey evidence – often drawn from a less than ideal study – needs to be interpreted in light of what we know about how the survey process really ‘works’. In other words, it is acceptable for a researcher, based on their judgement, to ‘compensate’ for shortfalls in the evidence. This makes for informed, rather than naïve or unsophisticated, interpretations of the evidence. So, if we know this research method underestimates usage of a product, we must make adjustments. But you, as the client, are entitled to demand that this ‘compensation’ process is made explicit and transparent.
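
To show what an explicit, transparent ‘compensation’ of this kind might look like, here is a minimal sketch, assuming (purely hypothetically) that past validation work has shown the method understates product usage by around 15%; both figures are invented.

```python
# Minimal, hypothetical sketch of an explicit 'compensation' for a known shortfall.
# The understatement factor is an assumption that should be stated openly to the client.

reported_usage = 0.28            # usage level as measured by the survey
assumed_understatement = 0.15    # drawn, hypothetically, from past validation work

adjusted_usage = reported_usage * (1 + assumed_understatement)

print(f"Reported usage: {reported_usage:.0%}")
print(f"Adjusted usage: {adjusted_usage:.0%} "
      f"(assumes the method understates usage by {assumed_understatement:.0%})")
```

The value of writing the adjustment down in this way is precisely that the client can see, and challenge, the judgement being applied.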

8. Have you ‘tested to destruction’ the initial interpretation placed on the evidence?

Having evaluated the overall core robustness of the survey evidence, we now take the first step in interpreting the evidence. This involves checking whether the survey evidence has been ‘tested to destruction’ to ensure that the conclusions have been logically argued.

Specifically, you need to be reassured that the conclusions are based on logical reasoning. This would include checking whether any of the original survey questions had a hint of ambiguity that could open the door for misinterpretation at the subsequent analysis stage. Here, the starting point is to generate a checklist of the points at which ‘error’ could have crept into the logical reasoning process.

One tip here is to take the evidence and present it (on a dry run basis) to a ‘devil’s advocate’ who has been instructed to ‘test to destruction’ the robustness of each piece of evidence. They must scrutinise every aspect of the evidence you want to raise that could be open to different interpretations given the way it has been collected, analysed and initially presented.

Another tip to help you test the survey evidence to destruction is to take a particular recommendation and then role-play the implications of this specific recommendation from the perspective of the ‘customer from hell’.

In addition, it is often helpful to take the interpretation of a piece of survey evidence and ask a dispassionate third party what they think of this particular ‘take’ on the meaning placed on this item of evidence.

9. Have you ‘stretched’ the evidence enough to help you solve your problem?

Now we should expand our analysis horizons and attempt to ‘stretch’ the consumer evidence - within an acceptable boundary - to extract the maximum creative potential from the survey feedback.

So far we have put the focus on testing the robustness of the evidence itself and ensuring that the way the decision-maker is evaluating the evidence is not based on any misunderstandings. But it is also important to ensure that the decision-making process is informed by a phase of what we might call ‘data stretching’. We want to make maximum use of the creativity of the analyst.

What can happen in decision-making is that individuals become gripped by what we might describe as ‘corporate think’, whereby they become risk averse. They become nervous of more ambitious interpretations of the data, given concerns about what this might mean for their career, and generally operate in a way that is removed from their ‘gut instinct’.

So, the optimum evidence-based decision-making process is one that will combine all of the above discipline and rigour but also include some techniques for getting users of the evidence to lighten up and look at the opportunities presented by the findings.

Let us briefly look at some techniques to help you get maximum value out of your evidence by enriching it – applying some ‘creative enablers’.

  • Why not deliberately make the decision-making ‘personal’ and ask people what they would do if they were making a decision that involved spending their own money?
  • It is also perhaps worth getting feedback on different ideas on an anonymous basis, thereby encouraging people to get in touch with their true feelings and provide ‘gut feel’ responses, free from any corporate constraints.
  • And encouraging individuals to think more conceptually - to raise the level of abstraction with which issues are being discussed - can also pay dividends. This helps decision-makers get out of the trenches of ‘win’/‘lose’ and ‘black/white’ decisions, and look at issues from what we might call the third corner. Just how would this putative decision look from the perspective of different stakeholders?
  • Other techniques that can help liberate decision-makers from a too narrow take on the evidence include ‘visualisation’ exercises. Get your decision-making team to cast forward five years in their mind and look at what different decision outcomes might look like from this standpoint.
  • Related to this it is helpful to work through different ‘what if’ scenarios, in terms of your company’s own performance, and what competitors might do. Frameworks, such as ‘game theory’, that help individuals to grasp the bigger picture and better see what the competitors might do in relation to different decision scenarios, can be helpful.
  • Linked to this, there is value in mapping out of some of the perceived uncontrollable factors that could affect this decision and reviewing closely the degree to which these are likely to kick-in and affect different types of decisions. Here, one technique is to work through the ‘nightmare’ outcome scenario, and compare and contrast this with the ‘dream’ outcome scenario.

The process of ‘enriching’ survey-based findings used to create profiles of customer types – turning black and white survey responses into much richer, more colourful ‘customer personas’ – can often transform the value you are getting from your evidence.

In sum, there is a range of techniques for getting people out of their organisational comfort zone and beyond their corporate ‘default’ position of accepting the safest and/or most literal interpretation of the evidence. The purpose of adding this dimension to your decision-making process is to ensure that decisions are based on maximum strategic vision, and not just the result of short-term, tactical consideration of data that limits your horizons.

10. Has a sufficient amount been done to help the decision-maker navigate the decision ‘minefields’ associated with this genre of survey evidence?

The agency you used to undertake your survey should not just throw the survey findings ‘over the wall’. They should help you by framing the choices the evidence is leading you to make. And this means guiding you through some of the minefields associated with making decisions using this kind of research evidence.

So you should expect to be alerted to the typical traps that individuals/organisations fall into when using this kind of evidence to make this type of decision. A tip here is to ask the agency what the top three lessons are that they have learnt over the years when applying this category of research evidence to this type of decision.

For example, we know that decision-makers often tend to reduce complex choices down to two simplistic alternatives rather than exploring a third route, which can lead to naïve decision-making. We also know that certain decision-makers attach too much importance to the most accessible or easy-to-understand evidence, thereby taking some of the subtlety out of the decision process. These people will listen only to an (unrepresentative) focus group, but fail to closely study a highly robust, statistically sound survey. Related to this, we know that certain decision-makers will be overly influenced by powerfully delivered emotional messages, rather than concentrating on the wider, more representative – but dull – facts.

Decision-makers can also ‘talk up’ selective evidence that confirms their initial judgements, even in the face of other telling counter-statistics.

Similarly, we know that decision-makers will be reluctant to let go of a project in which they have been heavily involved in the past, even though it is beginning to falter. This is often called the ‘sunk cost’ trap, or escalation of commitment.

We also know that decision-makers might set up arbitrary benchmarks or criteria for making the decision that have no relation to the problem. For example, ‘let us go 50/50’ seems a fair attempt at a resolution but, on certain issues, may have no basis in logical reasoning.

The list of common decision-making flaws continues, but there is not the space in this document to review them all. The above, though, provides a flavour of the points we are making. As a research user, make sure – working with the agency or research department – that you familiarise yourself with the major evidence-based decision traps so you can avoid them.

11. Have you struck the golden compromise between ‘reason’ and ‘emotion’ in your decision-making?

Building on the above point about familiarising yourself with the ‘decision mind traps’, we now need to ensure we strike a balance between the rigour we have applied in testing our evidence to destruction and the creativity we have deployed in stretching this interpretation. On the one hand it is easy to end up being ‘too rational’. For example, actuaries who run an insurance company tend to warm to the literal hard numbers, but may miss a more risky, but key, opportunity. But, on the other hand, ‘creatives’ brought up in an advertising agency environment, may not automatically engage with rather anonymous, bland, quantitative statistical evidence, but instead go off on a ‘frolic of their own’, based on a skimpy emotionally charged argument.

So, your evidence needs to be, on the one hand, tested to destruction from the standpoint of its rigour and robustness. But, at the same time, also set in the context of its compatibility with wider, more instinctive, intuitive observations. Given this, we now need to ask whether these two sides of the decision-making ‘equation’ have been put together in a formal way and dispassionately evaluated in arriving at an overall judgement.

Here it is useful to construct a balance sheet of pros and cons that includes the rational and emotional perspectives that allows both responses to be evaluated in a considered way. Firstly, we should check that the ‘rational’ facts have been verified. Has the right level of critique and critical thinking been applied to the evidence? Then we should move on to the ‘emotional’ side of the equation: have all of the feelings, hunches and emotions been drawn out from the facts? Have all the implications that flow from these facts and insights been explored? Have all of the creative possibilities, alternatives, new angles, perspectives, concepts, perceptions and lateral thinking been applied?

It is also helpful to think about your decision in these terms: what are the top three pieces of evidence that most suggest we should go ahead with this venture? What are the top three pieces of evidence that suggest we should not go ahead, or should proceed cautiously, or should rethink the project? And, critically, which one piece of evidence do we personally most believe to be telling us the ‘truth’?

So, in sum, you must ensure that the right balance between all of these different types of cautionary and creative thinking has been brought to bear in arriving at the final recommendation. Did the analytical framework deployed to analyse the evidence achieve the appropriate balance between rigour and creativity?

12. Who is taking personal responsibility for ensuring your evidence-based decision is not misinterpreted as it is actioned?

The final ingredient of successful decision-making is about ensuring the actions that flow from your decision are faithful to what the evidence was actually saying. This is often a graveyard for many evidence-based decisions. Having ensured that sufficient rigour has been taken to evaluate the robustness of the evidence, and creative risks have been taken in stretching the meaning of the literal evidence, now is the time to make sure your decision will be actioned in an appropriate way.

This is largely about taking personal responsibility for ensuring that the follow-through on the decision is closely monitored. It is all too easy to assume someone else will apply the vigilance needed to achieve a successful outcome. So here it is all about standing up and being counted. Always ensure that your voice is heard. In short, take personal responsibility for achieving successful outcomes and do not let ‘drift’ occur between the decision and the actions that are subsequently taken: take control.