“Studies Say”: Question Articles about Studies

Start with the Source

In previous posts I talked about attempts to close the management knowledge-practice gap through evidence-based management (EBM), and the problems with the information sources most managers use. Science is only one source in EBM, but it is the least understood, so in this post I will give you the tools to decide how much faith to put in a single study you read or hear about. The key is to notice what sources your sources are using.

Articles published in “peer-reviewed” scientific journals have been critiqued by other scientists regarding their methods and results, and edited by skeptical experts in the field. They also provide details that let you spot potential problems, plus complete lists of their sources, which allow you to check the authors’ conclusions. In contrast, fact-checkers at magazines, even at scientific-sounding publications like Harvard Business Review, don’t have that depth of knowledge, and their editors can’t be experts in every subject covered. Few newspapers today have science reporters. Blog posts and business conference presentations usually aren’t reviewed or edited at all, at least not by a knowledgeable third party. And popular periodicals and blogs often run sensational headlines written to capture your attention rather than to be strictly accurate, a problem if the headline is all you see.

Despite what general writers reporting on studies seem to believe, most studies are only pieces of a large puzzle. Those studies always point this out, but the warnings rarely make it into articles about them. (In the rest of this post, I will use the word “study” to mean both the research and the journal article reporting it, and “article” to refer to press and blog reports or nonacademic presentations.)

All Studies Are Not Born Equal

As you read about a study, look for clues indicating what kind it was and who was studied. Two types are most likely to apply to your workplace. A “meta-analysis” takes the data from all relevant studies the researchers could find and runs the numbers as if they were one big study, which filters out many of the limitations present in any single study (discussed further below). A “structured literature review” (as opposed to a “narrative” review) goes through the texts of previous studies using techniques meant to reduce bias from the researcher. For example, it might use an algorithm, or at least several humans who must agree on ratings, to analyze the words used.
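
To make that “agreement” idea concrete, structured reviews often report an inter-rater agreement statistic such as Cohen’s kappa, which scores how far two raters agree beyond what chance alone would produce. Here is a minimal sketch in Python; the ratings are invented for illustration:

```python
# Hypothetical data: two raters independently classify the same ten
# passages from prior studies as positive ("pos"), neutral ("neu"),
# or negative ("neg").
from sklearn.metrics import cohen_kappa_score

rater_1 = ["pos", "pos", "neu", "neg", "pos", "neu", "neg", "pos", "neu", "pos"]
rater_2 = ["pos", "neu", "neu", "neg", "pos", "neu", "neg", "pos", "pos", "pos"]

# Cohen's kappa: 1.0 = perfect agreement, 0 = no better than chance.
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # about 0.68 for these invented ratings
```

A structured review would keep refining its rating rules until agreement like this is acceptably high, which is part of what makes it less prone to one reviewer’s bias than a narrative summary.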

With other studies, give more weight to those with more of the following characteristics:

  • Larger numbers of people and/or organizations in the “sample.”
  • Samples selected by the scientists, rather than letting anyone choose to respond, as in a typical online survey.
  • “Longitudinal” or “panel” studies, which follow the same subjects over time and so are more likely to prove what caused an outcome.
  • Studies that compare the tested group to a “control” group—a similar group that did not do whatever is being tested. Otherwise the result might have happened even if nothing different was done.
  • Combinations of laboratory and “field” studies, giving a better indication that the lab results, which can better eliminate other explanations for those results, also apply to the “real world.”
  • Authors who are professors or who worked with professors, and who are thus less likely to interpret results to support something they are selling.
  • Studies done in organizations like yours (same industry, country, type of organization, type of customers, etc.), or studies that cut across many of those lines, and thus more likely to apply to your situation.
  • Studies “replicating” or “extending” prior knowledge, rather than being the first to look into the specific topic.

Next, pay attention to the claims the author makes about the study’s findings. News outlets, consulting firms, and professional groups often over-generalize. I frequently see the phrase “workers believe” even though the sample was skewed toward white males working for large tech corporations who found the survey and cared enough about the topic to fill it out. If you manage a diverse blue-collar crew in a small, family-owned company, those survey results could be the exact opposite of what your “workers believe.” The article should also note the questions the researchers raised about their own findings, meaning the things they don’t know yet, so the results don’t seem definitive.

Notice the size of any correlations reported and whether they were “statistically significant,” meaning the math indicates they were unlikely to be the product of random chance. Correlations, which show whether two factors moved together, range from –1.0 to +1.0. (A –1.0 means Factor A went up by the same amount Factor B went down.) A correlation of 0.15 might be statistically significant but is still pretty weak. And again, correlation doesn’t prove causation: Factor B might have gone down because A went up; or A might have gone up because B went down; or other factors might have caused both changes. In articles, I often see phrases like “B went down when A went up,” suggesting A caused B, even when the study could not have shown that given its design.
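
To see how a correlation can be statistically significant yet still weak, here is a minimal sketch in Python using simulated numbers (not data from any real study):

```python
# Simulated illustration: with a large enough sample, even a weak
# correlation comes out "statistically significant."
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 5000                       # a large sample
factor_a = rng.normal(size=n)
# Factor B moves only slightly with A, buried in noise.
factor_b = 0.15 * factor_a + rng.normal(size=n)

r, p = stats.pearsonr(factor_a, factor_b)
print(f"r = {r:.2f}, p = {p:.2g}")
# Typical output: r of about 0.15 with p far below 0.05 -- "significant,"
# yet A accounts for only r**2, about 2%, of the variation in B.
```

Statistical significance tells you the relationship is probably real; the size of the correlation tells you whether it is worth acting on.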

Be wary, too, of scientific-sounding titles. Almost every headline I have read on the general Web starting “The Science of…” was effectively a lie. A company reporting on its own data is not science.

Even big-name companies can produce crappy results. Google claimed its internal research proved teams perform better if they have team leaders, but its reported methods could not have proved that. The company did not, apparently, set up leaderless teams and compare them to similar leader-led teams, or use objective performance metrics. All its “study” proved was that teams who like their managers, and those managers themselves, think their teams are more productive, as best I can tell, given that Google did not detail its methodology.[1]

Even the best reporters might leave out important details or misunderstand something. I once prompted a change in a Today Show online article by pointing out that the word the reporter was using (“nice”) was not the one used in the study (“agreeable”), which means something different to researchers.[2]

When an article names a study, paste the study title into a search engine. Sometimes a link to the full text will appear; the abstract usually does, and that alone might be enough to make you question the article.
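
If you want to automate that lookup, scholarly metadata services expose public APIs. Here is a minimal sketch in Python against Crossref’s public REST API; the endpoint, parameters, and example query are my assumptions to verify, not part of the original post:

```python
# Sketch: search Crossref (https://api.crossref.org) for a study by
# rough title. Endpoint and field names are assumptions to verify
# against Crossref's current documentation.
import requests

def find_study(title: str) -> None:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        print(item.get("title", ["(no title)"])[0])
        print("  DOI:", item.get("DOI", "n/a"))

# Hypothetical example query, not a citation from this post:
find_study("meta-analysis of transformational leadership outcomes")
```

A DOI link will usually take you at least to the abstract, even when the full text sits behind a paywall.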

Follow the Map

Though studies can be a dull read, you may be surprised at how easy most are to follow. It helps that they tend to use a standard format, often with the exact section names I present here:

  • Abstract—A short summary of the entire study. Frequently you can find what you need here. The problem is, depending on how terms were defined and measured, the results may not mean what they seem to. If you can download the whole article, it helps to skim the sections below to make sure your understanding matches the researchers’.
  • Literature Review and Hypotheses—In developing the hypotheses they want to test, scientists read as many relevant studies as they can find. They outline those in this section to justify their hypotheses, and the study itself. Reading these narrative “lit reviews” in a couple of relatively recent studies on a topic will provide you with the current scientific consensus, if there is one. In effect, the authors have done your research for you!
  • Methodology—This section details how they conducted the study. Most of this you can skip, but note: the type of study, for the reasons covered above; who the data was gathered from (the “sample,” “subjects,” “participants,” etc.); and how the key terms were defined and measured. For example, was “product success” measured using objective standards like financial benefits to the firm or customer satisfaction surveys? Or were the survey respondents free to rate their own success in their own ways?
  • Analysis and Results—Here appear detailed explanations of the math performed and the specific answers found. You can skip to the next section and backtrack here if it raises questions.
  • Discussion—This is a narrative explanation of the findings, and will be the most useful to managers. The scientists tell you what they learned.
  • Advice for Managers—Sometimes the researchers make recommendations about how managers might apply the results in the workplace. This part could be a subsection under “Discussion,” or just a few sentences instead of a separate section.
  • Limitations—Important to skim through before taking the scientists’ advice, this section lays out reasons the study results might not apply to your workplace.

Once you’ve read through a couple of studies, it gets much easier to find the good stuff. Hopefully then you’ll feel brave enough to proactively search out studies related to questions you have, as explained in the next and last post of this EBM series.



[1] Google, “Guide: Identify What Makes a Great Manager,” 2018, https://rework.withgoogle.com/guides/managers-identify-what-makes-a-great-manager/steps/learn-about-googles-manager-research/.

[2] See: “The Birth of a Myth: Niceness Does Not Pay?”
