Basic Research Standards for Evidence-Based Decision-Making in Business Environments

by Marvin Cheung, Head of Research and Strategy

Research in Business Environments

This section builds on the previous section on the relationship between frameworks, and focuses on working within a single framework. In other words, instead of looking at the relationships between problem-solution pairs, we will examine general best practices for resolving a single problem-solution pair. More specifically, we will explore how to evaluate a solution.

There are two key challenges to resolving a problem-solution pair, regardless of which framework you use. First, there is so much flexibility in problem formulation that the task can be daunting. Second, we have to navigate incomplete, inaccurate, and even incorrect information in the real world.

To address these two challenges, we will formulate guidelines based on academic research methods. It is important here to understand the differences between academic research and research in a business environment. While the quality of data and the amount of resources available are obvious differences, there are more nuanced ones.

Research in business environments does not have to be generalizable. For example, you can use existing theories to understand a phenomenon you are observing, e.g., decreasing customer satisfaction, or you can confirm whether a piece of information you found online applies to your company. It does not matter whether your findings can be applied to a larger population.

We tend to avoid conducting generalizable research in business environments because it can be expensive, but there are times when it is necessary: for example, in R&D functions, where a new technology or theory can advance business goals, or when generalizable research is a prerequisite for operating in the industry, such as when clinical trials are required. These research projects need to conform to strict industry standards, and we recommend involving domain experts in these scenarios.

We can, however, develop general best practices for everyday research. How we conduct research and the standards we adopt for our insights significantly influence our ability to make sound decisions. As you will see, non-generalizable research is not necessarily easier. People can be very scrupulous when several million dollars is on the line.

There are several stages common across all problem-solution pairs. We will elaborate on each in the subsequent parts:

  1. Problem formulation: Are you asking the right question?

  2. Solution generation: Are your findings useful?

  3. Communication: Are you delivering your findings intentionally?

Part I: Problem Formulation

While earlier sections discussed a starting condition and a series of problem-solution pairs in the abstract through the pre-HCD Design Thinking framework, we now want to apply the innovation process to help structure our inquiry.

In the first layer, we have the overarching research question. Oftentimes, you can get a solid research question just by adding the question word “how” in front of the relevant goal or metric: for example, “how might we find product-market fit”, “how might we increase revenue”, or “how might we decrease churn”.

In the second layer, we specify the framework we will use to break down the research question. What is the angle? A well-formulated problem needs a clear subject and should state where you believe the bottleneck is. It also needs to be answerable, i.e., testable and falsifiable to an acceptable degree of certainty within resource constraints.

A problem in the second layer tends to take one of these forms:

  1. Exploratory research: what do we think about the subject?

  2. Descriptive research: what are the characteristics of the subject?

  3. Evaluative research: how effective is the subject?

To formulate a problem well, you want to ask:

  1. What do we know to be true with a high degree of certainty?

  2. What can we infer right away based on past research or experiences?

  3. What are the areas that require further research?

In the third layer, we identify several interconnected variables that require further investigation and delineate their relationships. Resolving these relationships should produce specific, actionable outcomes: for example, a sentence in the copy needs to be replaced, or a new image is needed for the website. As you resolve problem-solution pairs, you will iterate and move between layers of abstraction. Changing frameworks, the variables you study, and so on is both common and expected.

This is a very broad description of how we formulate problems. Problem formulation in the abstract can be difficult to grasp, so for people unfamiliar with managing complexity, it can be easier to start by getting a sense of what a streamlined process looks like. We have included Bain and Co’s Case Interview Preparation page in the list of recommended readings; it offers case studies with video walkthroughs. These case studies strip out the complexity and nuances of a real situation, for example, the stakeholder disagreements and other uncertainties outlined in Part IIC: Managing Uncertainties, but they nevertheless offer guidance on how to begin formulating problems.

Part II: Solution Generation

The solution generation process is in some ways more straightforward than the problem formulation process. The steps are fairly similar across most problem types:

  1. Examine easy-to-access existing literature. This can include news articles, academic papers, blog posts, government reports, corporate publications, and so on.

  2. Connect available literature to operational data. Do your best to understand the problem you have with the data available. (We will elaborate on how to work with operational data in later Coursebooks.)

  3. Create custom solutions to resolve the problem, if necessary. This can include integrating new monitoring solutions, building data pipelines, creating custom dashboards, and so on. It is important to weigh resource considerations against the associated risks. Sometimes it is better to accept the risk than to build a custom solution.

When we evaluate the credibility and usefulness of a solution, we examine it across several factors:

  1. Is the research ethical?

  2. Is the research comprehensive?

  3. Has the researcher accounted for different uncertainties?

  4. Are there errors in the data or analysis?

Part IIA: Ethics

Unethical methods damage the credibility of the researcher, the institution, and the findings. Ethical best practices are established to help prevent behaviours that might harm organizations, researchers, research subjects, and the public. 

If you are in a leadership role, you will be responsible for your organization’s ethical standards. Even if you are not, you should voice your concerns through the proper channels and in accordance with your employee handbook whenever you believe that your work will intentionally or unintentionally promote an unethical agenda. This can include promoting unhealthy behaviours or creating detrimental financial, physical, or mental health impacts on children, teenagers, and even adults.

Within a research project, there are two overarching questions:

  1. The ends: Will the research be used to promote unethical or illegal behaviours?

  2. The means: Will the research put anyone in harm’s way?

“Principlism”, or the “Four Principles Approach”, developed by Tom Beauchamp, Ruth Faden, and James Childress in the 1970s, continues to provide guidance to researchers:

  1. Respect for autonomy: we should not interfere with the subject’s intended course in life, and should avoid violations ranging from manipulative under-disclosure of relevant information to overriding a subject’s refusal to participate in research.

  2. Nonmaleficence: we should avoid causing harm.

  3. Beneficence: the risk of harm presented by interventions must constantly be weighed against possible benefits for subjects and the general public.

  4. Justice: benefits, risks, and costs should be fairly distributed. We should not recruit subjects who are unable to give informed consent or who do not have the option to refuse participation.

To be clear, there is no circumstance in everyday research where you should prioritize your research over a participant’s safety. For example, if you are conducting an ethnographic study examining people’s behaviour in supermarkets and you see a tin can about to fall on your participant’s head, please intervene.

If your research requires you to put participants at risk in any way, you should stop and seek legal advice. Some product tests are regulated by government agencies including the FDA and its European counterparts the EFSA, EMA, and ECHA. This includes but is not limited to: human foods, human drugs, vaccines, blood, biologics, medical devices, radiation-emitting electronic products, cosmetics, as well as animal and veterinary products.

There are a few additional best practices we have adapted from CITI’s Responsible Conduct of Research (RCR) course, originally designed for academic researchers:

  • Authorship: The general consensus is that authorship is based on intellectual rather than material contribution (e.g., data or funding). The research team should collectively decide who qualifies as an author, ideally before the project begins. Each author is responsible for reviewing the manuscript and can be held responsible for the work that is published. Those who do not qualify for author status can be recognized in the acknowledgements.

  • Plagiarism: Although it may seem obvious, it is important to avoid plagiarism. Always put quotation marks around direct quotations, and attribute an idea you referenced to its original source. Missing citations make it difficult for others, including people who need to sign off on a project, to check the work. You should also be prepared to cite the source of a piece of information when giving a presentation.

  • Conflicts of interest: While there will always be a financial conflict of interest when you are conducting a study as an employee of an organization, you should still be wary of personal biases. For example, if you are a strong advocate for an idea, are you asking leading questions or bullying the interviewee into agreeing with you? As organizations mature, dedicated researchers can help maintain objectivity.

  • Data management: Ask for and record as little Personally Identifiable Information (PII) as possible. In most circumstances, an anonymous transcript of a user interview is sufficient. A screen recording with voice of how users interact with your product can be helpful, but very rarely will recording the face of the interviewee add value to your research. You should clearly communicate what data will be collected, and how it will be stored and used. The tendency here is to over-collect, but the amount of PII needs to be balanced against the accuracy of the answers: social-desirability bias can lead to an over-reporting of desirable behaviour and an under-reporting of undesirable behaviour. Please consult your legal team for details of the appropriate data privacy practices. (See the redaction sketch after this list.)
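
To make the data-minimization point concrete, below is a minimal sketch of transcript redaction in Python. The patterns and the redact helper are illustrative assumptions on our part, not a vetted PII detector; production redaction should use tooling reviewed with your legal team.

    import re

    # Illustrative patterns only: real PII detection is harder than two
    # regular expressions, and should be vetted with your legal team.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def redact(transcript: str) -> str:
        """Replace recognizable PII with placeholder tags."""
        for label, pattern in PATTERNS.items():
            transcript = pattern.sub(f"[{label} REDACTED]", transcript)
        return transcript

    print(redact("Reach me at +1 (555) 010-2222 or jane@example.com."))
    # -> Reach me at [PHONE REDACTED] or [EMAIL REDACTED].

The design choice mirrors the principle above: store the placeholder rather than the value, so the transcript stays useful for analysis without accumulating PII.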

Part IIB: Comprehensiveness

One of the most common questions we receive is “When do I know I have enough research?” The simple answer is that you should exhaust all the resources available to you within your resource constraints. There are, however, some signs that may indicate comprehensiveness before you reach that point:

  1. If your new sources are beginning to repeat information you already know, that is the first sign that your research process is close to completion. There are often three to five key reports and authors on a subject that everyone references. Can you identify them and discuss how they relate to one another?

  2. If you can identify errors in your sources’ reasoning and begin to develop your own perspective, that is the second sign that your research process is close to completion. At this point, the Socratic method can be helpful. Depending on the size of the project and your own workflow, you can either have an informal discussion of your ideas before you start writing, or have a discussion after your first draft.

  3. The final sign is when you finish writing. Depending on the context, you might need a final project sign-off from your stakeholders. If they sign off, brilliant. You can also continue to validate your ideas through presentations and roundtable discussions. Publication is rare, since most works are either confidential or not up to publication standards due to resource constraints.

Part IIC: Managing Uncertainties

Uncertainties arise when we work with incomplete, inaccurate, and even incorrect information. To craft a credible and useful solution, we need to account for known, unknown, and unknowable uncertainties, a framework by Clare Chua Chow and Rakesh K. Sarin first published in 2002 in the journal Theory and Decision. There is no simple metric, or combined uncertainty metric, that can tell us when we need to eliminate a piece of information entirely. We can, however, still identify common uncertainties. Managing them well requires experience and good judgement.

Known uncertainties are the easiest to manage. Their presence is easily detectable, and they skew findings in a predictable direction:

  1. Conflicts of interest: corporate reports and research funded by corporations tend to advocate for specific private interests. Some are helpful, but it is important to be critical of any omissions, the research methodology, and gaps in reasoning.

  2. Missing research methodologies: reports, especially those by corporations, have in the past included very narrow and bizarre studies with odd metrics to prove a point. Sample selection bias, social desirability bias, and the Hawthorne effect are examples of threats to a study’s internal validity. Review a study’s methodology, or the legal fine print on marketing materials, whenever possible.

  3. No acknowledgement of the limitations of the study: some studies make overly generalized and unsubstantiated claims. This calls into question the research’s external validity. A closer look at the relationship between the study’s research methodology and the conclusions will often reveal any gaps in the author’s reasoning.

  4. Fuzzy language and buzzwords: what does it mean when a company says they use artificial intelligence or make sustainability claims? Be wary of ambiguous or poorly defined terms. 

  5. Social impact claims: we are incredibly careful when an organization makes social impact claims. Social problems are wicked problems, where accounting for the second- and third-order impacts of an intervention is both difficult and expensive. We typically expect to see results from an ethnographic study to understand the potential impacts of an intervention on a specific community, and a randomized controlled trial (RCT) to understand the efficacy of the intervention.

Unknown uncertainties are more difficult to manage. Though their presence can be detected, they skew findings in an unpredictable direction:

  1. Incomplete research: with resource constraints or limited expertise, some research may simply not meet the comprehensiveness criteria. Incomplete research can, for example, fail to take into account a confounding variable, i.e., a variable that affects both the dependent and the independent variable, creating a spurious correlation where there is no causal relationship. There are also projects we consider unrealistic, where certain aspects clearly go against the known logic of the industry. Formulating an answerable problem, acknowledging the limitations of a study, and consulting experts are key.

  2. Misinformation and disinformation: this is particularly problematic when working with pop culture or news sources. We have explored this in further detail in the recommended reading “Media — To sell a crisis: Understanding the incentives and control system behind sensationalist news and misinformation”.

  3. Uncalibrated tools: people often assume that digital tools, such as Google Analytics, are perfect. You can get a sense of how accurate your tools are by running a few pre-tests: How often does the tool fail to track a click? When does it fail? What is the margin of error? (See the sketch after this list.)

  4. Presence of systemic corruption: corruption muddies data, reports, and findings. We recommend being extra cautious when referencing a report on countries that are highly corrupt. Transparency International’s Corruption Perceptions Index (CPI) is a good reference.
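
As referenced in the third item above, below is a minimal sketch of how a tool pre-test might be summarized. The numbers are hypothetical; the point is that a simple count plus a confidence interval gives you a defensible margin of error for your tool.

    import math

    # Hypothetical pre-test: trigger a known number of test clicks and
    # count how many the analytics tool actually records.
    test_clicks = 400   # clicks we triggered deliberately
    recorded = 382      # clicks the tool reported

    failure_rate = 1 - recorded / test_clicks

    # Normal-approximation 95% confidence interval for the failure rate.
    se = math.sqrt(failure_rate * (1 - failure_rate) / test_clicks)
    low, high = failure_rate - 1.96 * se, failure_rate + 1.96 * se

    print(f"Estimated failure rate: {failure_rate:.1%} "
          f"(95% CI: {max(low, 0.0):.1%} to {high:.1%})")
    # -> Estimated failure rate: 4.5% (95% CI: 2.5% to 6.5%)

If the interval is too wide to be useful, run more test clicks; the uncertainty shrinks with the square root of the sample size.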

Unknowable uncertainties are incredibly difficult to identify. Their presence is difficult to detect, and they skew findings in an unpredictable direction:

  1. Unofficial narratives: these are the details left out of official reports. Stakeholder disagreements, details under confidentiality agreements, and the like can be the root cause of an action without ever showing up in reports. Uncovering them requires an insider’s perspective.

  2. Errors: mistakes during the research process at a reputable organization are rare, but they happen. The most common error is miscommunication as information flows up the chain of command. It is important to do a gut check when you read reports.

Part IID: Errors

We want to describe some of the errors we commonly observe, with reference to the framework by Andrew W. Brown, Kathryn A. Kaiser, and David B. Allison in the article “Issues with data and analyses” published in the Proceedings of the National Academy of Sciences of the USA in 2018. Some errors are minor and do not impact the findings significantly, while others can invalidate the entire project.

  • Errors in design: poor data collection methods, research design, or sampling techniques can produce bad data. The most common mistake is a mismatch between the concept being studied and how it is operationalized or measured. For example, we have seen papers that use residential real estate water usage figures to estimate commercial real estate water usage. Any conclusions drawn from that point onwards are questionable.

  • Errors in data management: this can be as simple as one or two typos in the code you use to analyze the data. The bigger challenge, however, is when people fail to recognize the expiration date of their data. You need to review the validity of your data whenever there are big changes, whether in the macroenvironment, e.g., the pandemic, or in the product itself, e.g., a rebrand.

  • Errors in statistical analysis: it is true that if you torture the numbers long enough, they will say anything. We are especially cautious when we read papers that examine the statistical correlation between two or more macroeconomic indices without fully considering the nuances of the content, the limitations of the individual indices, and the limitations of the statistical methods applied. (See the sketch after this list.)

  • Errors in logic: at a basic level, you can disagree with either the premises or the conclusion. The most common problem is when researchers make an unjustified generalization, e.g., assuming that because a certain demographic responds well to a product in North America, the product will perform well in Asia too. Confusing correlation with causation is also common and problematic. The Stanford Encyclopedia of Philosophy describes other common logical fallacies in detail.

  • Errors in communication: this generally happens towards the end of a paper, when there is a mismatch between the conclusions of a study and the ambitions of the author. Overzealous authors or bloggers can extrapolate and exaggerate the impacts of a study. Sensationalized language and the overuse of hedging (e.g., might, could) are two of the signs we pay attention to.
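
To illustrate the statistical-analysis and correlation-versus-causation errors above, here is a short sketch using two independent random walks. Neither series causes the other, yet because both trend, their raw correlation is typically large; differencing removes the shared trend and the correlation collapses.

    import numpy as np

    # Two independent random walks: neither causes the other, yet
    # trending series routinely show large spurious correlations.
    rng = np.random.default_rng(0)
    a = np.cumsum(rng.normal(size=500))   # e.g., one macroeconomic index
    b = np.cumsum(rng.normal(size=500))   # e.g., an unrelated metric

    r = np.corrcoef(a, b)[0, 1]
    print(f"Correlation between two unrelated series: {r:+.2f}")

    # Period-to-period changes strip out the shared trend; for truly
    # independent series, this correlation collapses towards zero.
    r_diff = np.corrcoef(np.diff(a), np.diff(b))[0, 1]
    print(f"Correlation after differencing: {r_diff:+.2f}")

This is why we are cautious about papers correlating raw indices: a headline correlation between two trending series is weak evidence on its own.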

Part III: Effective Communications

Writing is a great way to think through a problem and clarify the relationship between the variables you have identified. However, there are times when a full paper is not needed. Think carefully about what you want to spend time and resources on. There are many faster alternatives:

  1. Share a quote: sometimes all you need to do is share a snippet from an article you have read with a colleague.

  2. Write notes on a presentation slide: get to the point; keep it short and simple.

  3. Others: notes on the company whiteboard, group Slack messages, and the like are all great options.

In the event a more formal document is needed, it should be as short as possible. By the time you reach two to three pages, you should include a one-paragraph executive summary. Longer papers may benefit from a one-page memo. These summaries should provide an overview of the topic and details of the recommendations. You should also assume that unless there are specific questions related to the methodology or the reasoning of the report, no one except your manager or an investor in a due diligence process will read the paper in full. Unfortunately, your colleagues are busy people too.

Getting the summary right is critical and we generally look for these components:

  1. Context: this should include any relevant information surrounding the problem, for example, the people or partnering organizations involved in the research, the inspiration behind the project, and so on.

  2. Research question: this is the overarching question described in the first layer from Part I. It is the high-level question with reference to the business goal.

  3. Method: this should include the sources, data, and analytical methods you used to explore the question.

  4. Recommendations: a specific course of action you recommend based on your research, which may include further research if necessary, as well as the limitations of the study.

There are a few tips we recommend for business writing, which apply to both short and long-form documents:

  1. Keep it simple. Write simple sentences in the active voice, with a clear subject, verb, and object. As a researcher, it is your responsibility to communicate your findings to the readers. A report that is easier to read is more likely to be read.

  2. Structure it well. A generic structure is entirely okay and even encouraged. Start with an introduction explaining the context, the significance of the research, definitions for key terms, and details of the framework you will use. Then write a few paragraphs explaining your findings and end with a paragraph with your recommendations. Keeping it simple is key.

  3. Write good topic sentences. Each paragraph should have a clear, self-contained idea. The topic sentence should identify or introduce the idea, with the sentences that follow providing support or clarification.

  4. Avoid overly long sentences. Consider splitting a sentence in two if it runs longer than two lines.

  5. Always define the technical terms you use. Write as though you are talking to somebody who is not familiar with the topic: we would not need the research project if we already knew everything about it. A successful report will be read by many executives across departments. Clearly defining technical terms will help you communicate with your investors and stakeholders too.

Recommended readings:
