
How to Read a Study

Written on March 15, 2021 at 8:34 pm, by Eric Cressey

Today’s guest post comes from the bright minds at Examine.com. I love their stuff, and if you want unbiased nutrition research you can trust, I’m sure you will too.

Because they are research experts I trust, I asked their team if they could help educate everyone on how to become more adept at reading and discerning published research. -EC

If you have ever had the pleasure (displeasure?) of reading through a scientific study, you may have been assaulted by confusing jargon such as “confidence interval”, “P-value”, and “subgroup analysis”.

Confused yet? In this post, we will give you the 101 on how to approach, question, and interpret a scientific study.

Why should I learn to read a study?

To avoid wasting money on ineffective products (like some supplements) or interventions (such as a particular training method), you need to be able to assess different aspects of a study, such as its credibility, its applicability, and the clinical relevance of the effects reported.

To understand a study, as well as how it relates to other available research on the topic, you need to read more than just the abstract. Context is critically important when discussing new research, which is why abstracts are often misleading.

A paper is divided into sections. Those sections vary between papers, but they usually include the following.

  • Abstract
  • Introduction
  • Methods
  • Results
  • Discussion
  • Conflicts of Interest

We’re going to walk you through each of these sections and give you pointers on what to look out for.

Abstract

The abstract is a brief summary that covers the main points of a study. Since there’s a lot of information to pack into a few paragraphs, an abstract can be unintentionally misleading.

Because it provides little context, an abstract often fails to make clear the limitations of an experiment or how applicable the results are to the real world. Before citing a study as evidence in a discussion, make sure to read the whole paper, because it might turn out to be weak evidence.

Introduction

The introduction sets the stage. It should clearly identify the research question the authors hope to answer with their study. Here, the authors usually summarize previous related research and explain why they decided to investigate further.

For example, the non-caloric sweetener stevia showed promise as a way to help improve blood sugar control, particularly in diabetics. So researchers set out to conduct larger, more rigorous trials to determine whether stevia could be an effective treatment for diabetes. Introductions are often a great place to find additional reading material, since the authors will frequently reference relevant, previously published studies.

Methods

A paper’s “Methods” (or “Materials and Methods”) section provides information on the study’s design and participants. Ideally, it should be so clear and detailed that other researchers can repeat the study without needing to contact the authors. You will need to examine this section to determine the study’s strengths and limitations, which both affect how the study’s results should be interpreted.

A methods section will contain a few key pieces of information that you should pay attention to.

Demographics: Information on the participants, such as age, sex, lifestyle, health status, and method of recruitment. This information will help you decide how relevant the study is to you, your loved ones, or your clients.

Confounders: The demographic information will usually mention if people were excluded from the study, and if so, for what reason. Most often, the reason is the existence of a confounder — a variable that would confound the results (i.e., it would really mess them up).

Design: Design variants include single-blind trials, in which only the participants don’t know whether they’re receiving a placebo; observational studies, in which researchers simply observe a group and take measurements; and many more. This is where you will learn about the length of the study, the intervention used (supplement, exercise routine, etc.), the testing methods, and so on.

Endpoints: The “Methods” section can also make clear the endpoints the researchers will be looking at. For instance, a study on the effects of a resistance training program could use muscle mass as its primary endpoint (its main criterion to judge the outcome of the study) and fat mass, strength performance, and testosterone levels as secondary endpoints.

Statistics: Finally, the methods section usually concludes with a hearty statistics discussion. Determining whether an appropriate statistical analysis was used for a given trial is an entire field of study, so we suggest you don’t sweat the details; try to focus on the big picture.

Statistics: The Big Picture

First, let’s clear up two common misunderstandings. You may have read that an effect was significant, only to later discover that it was very small. Similarly, you may have read that no effect was found, yet when you read the paper you found that the intervention group had lost more weight than the placebo group. What gives?

The problem is simple: those quirky scientists don’t speak like normal people do.

For scientists, significant doesn’t mean important — it means statistically significant. An effect is significant if the data collected over the course of the trial would be unlikely if there really was no effect.

Therefore, an effect can be significant (yet very small) — 0.2 kg (0.5 lb) of weight loss over a year, for instance. More to the point, an effect can be significant yet not clinically relevant (meaning that it has no discernible effect on your health).

Relatedly, for scientists, no effect usually means no statistically significant effect. That’s why you may review the measurements collected over the course of a trial and notice an increase or a decrease yet read in the conclusion that no changes (or no effects) were found.

There were changes, but they weren’t significant. In other words, there were changes, but so small that they may be due to random fluctuations (they may also be due to an actual effect; we can’t know for sure).
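To make this concrete, here’s a minimal simulation, with entirely made-up numbers, of how a tiny average difference can become statistically significant once the groups are large enough. It uses Python with NumPy and SciPy and isn’t drawn from any particular study:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    n = 5000  # hypothetical number of participants per group
    # Simulated one-year weight change in kg: the intervention group loses
    # only 0.2 kg more on average than the placebo group.
    placebo = rng.normal(loc=0.0, scale=3.0, size=n)
    intervention = rng.normal(loc=-0.2, scale=3.0, size=n)

    result = stats.ttest_ind(intervention, placebo)
    print(f"mean difference: {intervention.mean() - placebo.mean():.2f} kg")
    print(f"p-value: {result.pvalue:.4f}")
    # With 5,000 people per group, the p-value will usually land below 0.05,
    # even though a 0.2 kg difference over a year is far too small to matter clinically.

The exact numbers don’t matter; the point is that with a large enough sample, statistical significance says little about whether an effect is big enough to care about.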

P-Values

Understanding how to interpret P-values correctly can be tricky, even for specialists, but here’s an intuitive way to think about them.

Think about a coin toss. Flip a coin 100 times and you will get roughly a 50/50 split of heads and tails. Not terribly surprising. But what if you flip this coin 100 times and get heads every time? Now that’s surprising!

You can think of P-values in terms of getting all heads when flipping a coin.

  • A P-value of 5% (p = 0.05) is no more surprising than getting all heads on 4 coin tosses.
  • A P-value of 0.5% (p = 0.005) is no more surprising than getting all heads on 8 coin tosses.
  • A P-value of 0.05% (p = 0.0005) is no more surprising than getting all heads on 11 coin tosses.

A result is said to be “statistically significant” if its P-value falls at or below the threshold of significance, typically 0.05.
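If you want to check those coin-flip comparisons yourself, the chance of getting all heads in k fair flips is 0.5 raised to the power of k, so a P-value of p is roughly as surprising as getting all heads in log2(1/p) flips. Here’s a quick Python sketch; it’s plain arithmetic, not tied to any study:

    import math

    for p in (0.05, 0.005, 0.0005):
        flips = round(math.log2(1 / p))  # equivalent number of all-heads coin flips
        prob_all_heads = 0.5 ** flips    # chance of getting all heads in that many flips
        print(f"p = {p}: roughly as surprising as {flips} heads in a row "
              f"(probability {prob_all_heads:.4f})")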

Results

The researchers report the primary outcome, or what they were most interested in investigating, in a section commonly called “Results” or “Results and Discussion”. Skipping right to this section after reading the abstract might be tempting, but doing so often leads to misinterpretation and the spread of misinformation.

Never read the results without first reading the “Methods” section; knowing how researchers arrived at a conclusion is as important as the conclusion itself.

One of the first things to look for in the “Results” section is a comparison of characteristics between the tested groups. Big differences in baseline characteristics after randomization may mean the two groups are not truly comparable. These differences could be a result of chance or of the randomization method being applied incorrectly.

Researchers also have to report dropout and compliance rates. Life frequently gets in the way of science, so almost every trial has its share of participants who didn’t finish the trial or failed to follow the instructions. This is especially true of trials that are long or demanding (diet trials, for instance). Still, too great a proportion of dropouts or noncompliant participants should raise an eyebrow, especially if one group has a much higher dropout rate than the other(s).

Scientists use questionnaires, blood panels, and other methods of gathering data, all of which can be displayed through charts and graphs. Be sure to check the scale of the vertical axis (y-axis); what may at first look like a large change could in fact be very minor.

The “Results” section can also include a secondary analysis, such as a subgroup analysis. A subgroup analysis is when the researchers run another statistical test but only on a subset of the participants. For instance, if your trial included both males and females of all ages, you could perform your analysis only on the “female” data or only on the “over 65” data, to see if you get a different result.
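In practice, a subgroup analysis is just the same comparison re-run on a slice of the data. Here’s a minimal sketch of the idea using pandas and SciPy; the file name and the column names (group, outcome, sex, age) are hypothetical stand-ins, not taken from any real trial:

    import pandas as pd
    from scipy import stats

    # Hypothetical trial data, one row per participant.
    df = pd.read_csv("trial_results.csv")  # assumed columns: group, outcome, sex, age

    def compare_groups(data):
        """Compare the outcome between intervention and placebo with a t-test."""
        treated = data.loc[data["group"] == "intervention", "outcome"]
        control = data.loc[data["group"] == "placebo", "outcome"]
        return stats.ttest_ind(treated, control).pvalue

    print("all participants:", compare_groups(df))
    print("females only:", compare_groups(df[df["sex"] == "F"]))
    print("over 65 only:", compare_groups(df[df["age"] > 65]))

Each call simply filters the rows and reruns the same test; the more subgroups you slice, the more chances there are for a “significant” result to pop up by luck alone.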

Discussion

Sometimes, the conclusion is split between “Results” and “Discussion”.

In the “Discussion” section, the authors expound on the value of their work. They may also clarify their interpretation of the results or hypothesize a mechanism of action (i.e., the biochemistry underlying the effect).

Often, they will compare their study to previous ones and suggest new experiments that could be conducted based on their study’s results. It is critically important to remember that a single study is just one piece of an overall puzzle. Where does this one fit within the body of evidence on this topic?

The authors should lay out what the strengths and weaknesses of their study were. Examine these critically. Did the authors do a good job of covering both? Did they leave out a critical limitation? You needn’t take their reporting at face value — analyze it.

Like the introduction, the conclusion provides valuable context and insight. If it sounds like the researchers are extrapolating to demographics beyond the scope of their study, or are overstating the results, don’t be afraid to read the study again (especially the “Methods” section).

Conflicts of Interest

Conflicts of interest (COIs), if they exist, are usually disclosed after the conclusion. COIs can occur when the people who design, conduct, or analyze research have a motive to find certain results. The most obvious source of a COI is financial — when the study has been sponsored by a company, for instance, or when one of the authors works for a company that would gain from the study backing a certain effect.

Sadly, one study suggested that nondisclosure of COIs is somewhat common. Additionally, what one journal considers a COI another may not, and journals can themselves have COIs, which they don’t have to disclose. A journal from a country that exports a lot of a certain herb, for instance, may have a hidden incentive to publish studies that back the benefits of that herb. So you can’t assume a study has no COI just because it looks at an herb in general rather than a specific commercial product.

COIs must be evaluated carefully. Don’t automatically assume that they don’t exist just because they’re not disclosed, but also don’t assume that they necessarily influence the results if they do exist.

Beware The Clickbait Headline

Never assume the media have read the entire study. A survey assessing the quality of the evidence for dietary advice given in UK national newspapers found that between 69% and 72% of health claims were based on deficient or insufficient evidence. To meet deadlines, overworked journalists frequently rely on study press releases, which often fail to accurately summarize the studies’ findings.

There’s no substitute for appraising the study yourself, so when in doubt, re-read its “Methods” section to better assess its strengths and potential limitations.

One study is just one piece of the puzzle

Reading several studies on a given topic will provide you with more information — more data — even if you don’t know how to run a meta-analysis. For instance, if you read only one study that looked at the effect of creatine on testosterone and it found an increase, then 100% of your data says that creatine increases testosterone.

But if you read ten (well-conducted) studies that looked at the effect of creatine on testosterone and only one found an increase, then you have a more complete picture of the evidence, which indicates creatine does not increase testosterone.

Going over and assessing just one paper can be a lot of work. Hours, in fact. Knowing the basics of study assessment is important, but we also understand that people have lives to lead. No single person has the time to read all the new studies coming out, and certain studies can benefit from being read by professionals with different areas of expertise.

Note from EC: As I’m busy, I try to rely on sources I can trust to help me carve out time (and sanity). That’s why whenever people ask me how to stay on top of nutrition research, I always refer them to Examine. 

At the end of the day, we’re busy individuals, and Examine keeps me on top of the cutting edge of research in 1/20th the time it would take me to do it myself. Instead of stressing out about screening, curating, reading, and summarizing research, Examine does it for me. Their membership is a great investment.
