
How to Read a Study

Today's guest post comes from the bright minds at Examine, which just turned ten years old (and is having a sale to commemorate). I love their stuff, and if you want unbiased nutrition research you can trust, I’m sure you will too.

Because they are research experts I trust, I asked their team if they could help educate everyone on how to become more adept at reading and discerning published research. -EC

If you have ever had the pleasure (displeasure?) of reading through a scientific study, your eyes may have been attacked with confusing jargon such as “confidence interval”, “P-value”, and “subgroup analysis”.

Confused yet? In this post, we will give you the 101 on how to approach, question, and interpret a scientific study.

Why should I learn to read a study?

To avoid wasting money on ineffective products (like some supplements) or interventions (such as a particular training method), you need to be able to assess different aspects of a study, such as its credibility, its applicability, and the clinical relevance of the effects reported.

To understand a study, as well as how it relates to other available research on the topic, you need to read more than just the abstract. Context is critically important when discussing new research, which is why abstracts are often misleading.

A paper is divided into sections. Those sections vary between papers, but they usually include the following.

  • Abstract
  • Introduction
  • Methods
  • Results
  • Discussion
  • Conflicts of Interest

We’re going to walk you through each of these sections and give you pointers on what to look out for.

Abstract

The abstract is a brief summary that covers the main points of a study. Since there’s a lot of information to pack into a few paragraphs, an abstract can be unintentionally misleading.

Because it does not provide context, an abstract does not often make clear the limitations of an experiment or how applicable the results are to the real world. Before citing a study as evidence in a discussion, make sure to read the whole paper, because it might turn out to be weak evidence.

Introduction

The introduction sets the stage. It should clearly identify the research question the authors hope to answer with their study. Here, the authors usually summarize previous related research and explain why they decided to investigate further.

For example, the non-caloric sweetener stevia showed promise as a way to help improve blood sugar control, particularly in diabetics. So researchers set out to conduct larger, more rigorous trials to determine if stevia could be an effective treatment for diabetes. Introductions are often a great place to find additional reading material since the authors will frequently reference previous, relevant, published studies.

Methods

A paper’s “Methods” (or “Materials and Methods”) section provides information on the study’s design and participants. Ideally, it should be so clear and detailed that other researchers can repeat the study without needing to contact the authors. You will need to examine this section to determine the study’s strengths and limitations, which both affect how the study’s results should be interpreted.

A methods section will contain a few key pieces of information that you should pay attention to.

Demographics: information on the participants, such as age, sex, lifestyle, health status, and method of recruitment. This information will help you decide how relevant the study is to you, your loved ones, or your clients.

Confounders: the demographic information will usually mention if people were excluded from the study, and if so, for what reason. Most often, the reason is the existence of a confounder — a variable that would confound the results (i.e., it would really mess them up).

Design: Design variants include single-blind trials, in which only the participants don’t know if they’re receiving a placebo; observational studies, in which researchers only observe a demographic and take measurements; and many more. This is where you will learn about the length of the study, intervention used (supplement, exercise routine, etc.), the testing methods, and so on.

Endpoints: The “Methods” section can also make clear the endpoints the researchers will be looking at. For instance, a study on the effects of a resistance training program could use muscle mass as its primary endpoint (its main criterion to judge the outcome of the study) and fat mass, strength performance, and testosterone levels as secondary endpoints.

Statistics: Finally, the methods section usually concludes with a hearty statistics discussion. Determining whether an appropriate statistical analysis was used for a given trial is an entire field of study, so we suggest you don’t sweat the details; try to focus on the big picture.

Statistics: The Big Picture

First, let’s clear up two common misunderstandings. You may have read that an effect was significant, only to later discover that it was very small. Similarly, you may have read that no effect was found, yet when you read the paper you found that the intervention group had lost more weight than the placebo group. What gives?

The problem is simple: those quirky scientists don’t speak like normal people do.

For scientists, significant doesn’t mean important — it means statistically significant. An effect is significant if the data collected over the course of the trial would be unlikely if there really was no effect.

Therefore, an effect can be significant (yet very small) — 0.2 kg (0.5 lb) of weight loss over a year, for instance. More to the point, an effect can be significant yet not clinically relevant (meaning that it has no discernible effect on your health).

Relatedly, for scientists, no effect usually means no statistically significant effect. That’s why you may review the measurements collected over the course of a trial and notice an increase or a decrease yet read in the conclusion that no changes (or no effects) were found.

There were changes, but they weren’t significant. In other words, there were changes, but so small that they may be due to random fluctuations (they may also be due to an actual effect; we can’t know for sure).
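To make the distinction concrete, here's a minimal, hypothetical sketch (pure Python, made-up numbers: a 0.2 kg difference between groups, a 5 kg standard deviation, and 10,000 participants per group) showing how a trivially small effect can still come out statistically significant when the sample is large:

```python
import math

# Hypothetical two-sample z-test: tiny effect, huge sample.
diff = 0.2      # kg, difference in weight loss between groups (tiny)
sd = 5.0        # kg, within-group standard deviation (assumed equal)
n = 10_000      # participants per group

se = sd * math.sqrt(2 / n)          # standard error of the difference
z = diff / se                       # test statistic
p = math.erfc(z / math.sqrt(2))     # two-sided p-value

print(f"z = {z:.2f}, p = {p:.4f}")  # p comes out below 0.05: "significant"
```

Flip the numbers around and the same arithmetic explains "no effect" findings: shrink n to a few dozen and that same 0.2 kg difference produces a large p-value, so a measured change gets reported as no (significant) effect.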

P-Values

Understanding how to interpret P-values correctly can be tricky, even for specialists, but here’s an intuitive way to think about them.

Think about a coin toss. Flip a coin 100 times and you will get roughly a 50/50 split of heads and tails. Not terribly surprising. But what if you flip this coin 100 times and get heads every time? Now that’s surprising!

You can think of P-values in terms of getting all heads when flipping a coin.

A P-value of 5% (p = 0.05) is no more surprising than getting all heads on 4 coin tosses.
A P-value of 0.5% (p = 0.005) is no more surprising than getting all heads on 8 coin tosses.
A P-value of 0.05% (p = 0.0005) is no more surprising than getting all heads on 11 coin tosses.

A result is said to be “statistically significant” if the P-value falls at or below the threshold of significance, typically 0.05.
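The coin-flip numbers above come from the "surprisal" (or S-value) of a P-value: the number of consecutive heads that would be just as surprising is roughly -log2(p). A quick sketch to verify the figures:

```python
import math

# Surprisal of a p-value, measured in "consecutive heads" (bits):
# a result with p-value p is about as surprising as -log2(p) heads in a row.
for p in (0.05, 0.005, 0.0005):
    flips = -math.log2(p)
    print(f"p = {p}: about as surprising as {round(flips)} heads in a row")
```

Rounding gives the 4, 8, and 11 coin tosses quoted above.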

Results

The researchers discuss the primary outcome, or what they were most interested in investigating, in a section commonly called “Results” or “Results and Discussion”. Skipping right to this section after reading the abstract might be tempting, but that often leads to misinterpretation and the spread of misinformation.

Never read the results without first reading the “Methods” section; knowing how researchers arrived at a conclusion is as important as the conclusion itself.

One of the first things to look for in the “Results” section is a comparison of characteristics between the tested groups. Big differences in baseline characteristics after randomization may mean the two groups are not truly comparable. These differences could be a result of chance or of the randomization method being applied incorrectly.

Researchers also have to report dropout and compliance rates. Life frequently gets in the way of science, so almost every trial has its share of participants that didn’t finish the trial or failed to follow the instructions. This is especially true of trials that are long or constraining (diet trials, for instance). Still, too great a proportion of dropouts or noncompliant participants should raise an eyebrow, especially if one group has a much higher dropout rate than the other(s).

Scientists use questionnaires, blood panels, and other methods of gathering data, all of which can be displayed through charts and graphs. Be sure to check the scale of the vertical axis (y-axis); what may at first look like a large change could in fact be very minor.

The “Results” section can also include a secondary analysis, such as a subgroup analysis. A subgroup analysis is when the researchers run another statistical test but only on a subset of the participants. For instance, if your trial included both males and females of all ages, you could perform your analysis only on the “female” data or only on the “over 65” data, to see if you get a different result.
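As a toy illustration (hypothetical participants and a made-up "change in fat mass" outcome), a subgroup analysis is just the same summary computed on a filtered subset of the data:

```python
# Hypothetical trial data: each row is one participant.
participants = [
    {"sex": "F", "age": 34, "change": -1.2},
    {"sex": "M", "age": 41, "change": -0.4},
    {"sex": "F", "age": 70, "change": -2.1},
    {"sex": "M", "age": 68, "change": -0.9},
]

def mean_change(rows):
    """Average change in the outcome across a set of participants."""
    return sum(r["change"] for r in rows) / len(rows)

overall = mean_change(participants)
female = mean_change([r for r in participants if r["sex"] == "F"])
over_65 = mean_change([r for r in participants if r["age"] > 65])

print(overall, female, over_65)
```

The catch: each subgroup is smaller than the full sample, so its estimate is noisier, and running many subgroup tests multiplies the chances of a false positive - which is why unplanned subgroup findings deserve extra skepticism.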

Discussion

Sometimes, the conclusion is split between “Results” and “Discussion”.

In the “Discussion” section, the authors expound on the value of their work. They may also clarify their interpretation of the results or hypothesize a mechanism of action (i.e., the biochemistry underlying the effect).

Often, they will compare their study to previous ones and suggest new experiments that could be conducted based on their study’s results. It is critically important to remember that a single study is just one piece of an overall puzzle. Where does this one fit within the body of evidence on this topic?

The authors should lay out what the strengths and weaknesses of their study were. Examine these critically. Did the authors do a good job of covering both? Did they leave out a critical limitation? You needn’t take their reporting at face value — analyze it.

Like the introduction, the conclusion provides valuable context and insight. If it sounds like the researchers are extrapolating to demographics beyond the scope of their study, or are overstating the results, don’t be afraid to read the study again (especially the “Methods” section).

Conflicts of Interest

Conflicts of interest (COIs), if they exist, are usually disclosed after the conclusion. COIs can occur when the people who design, conduct, or analyze research have a motive to find certain results. The most obvious source of a COI is financial — when the study has been sponsored by a company, for instance, or when one of the authors works for a company that would gain from the study backing a certain effect.

Sadly, one study suggested that nondisclosure of COIs is somewhat common. Additionally, what is considered a COI by one journal may not be by another, and some journals can themselves have COIs, yet they don’t have to disclose them. A journal from a country that exports a lot of a certain herb, for instance, may have hidden incentives to publish studies that back the benefits of that herb - so the fact that a study is about an herb in general, rather than a specific product, doesn’t mean you can assume there is no COI.

COIs must be evaluated carefully. Don’t automatically assume that they don’t exist just because they’re not disclosed, but also don’t assume that they necessarily influence the results if they do exist.

Beware The Clickbait Headline

Never assume the media have read the entire study. A survey assessing the quality of the evidence for dietary advice given in UK national newspapers found that between 69% and 72% of health claims were based on deficient or insufficient evidence. To meet deadlines, overworked journalists frequently rely on study press releases, which often fail to accurately summarize the studies’ findings.

There’s no substitute for appraising the study yourself, so when in doubt, re-read its “Methods” section to better assess its strengths and potential limitations.

One study is just one piece of the puzzle

Reading several studies on a given topic will provide you with more information — more data — even if you don’t know how to run a meta-analysis. For instance, if you read only one study that looked at the effect of creatine on testosterone and it found an increase, then 100% of your data says that creatine increases testosterone.

But if you read ten (well-conducted) studies that looked at the effect of creatine on testosterone and only one found an increase, then you have a more complete picture of the evidence, which indicates creatine does not increase testosterone.
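You don't need to run a formal meta-analysis to benefit from this mindset, but for the curious, here's a minimal sketch of how pooling works. The numbers are entirely hypothetical (made-up effect sizes and standard errors for a change in testosterone, in ng/dL); inverse-variance weighting is the core of a fixed-effect meta-analysis:

```python
# Hypothetical studies: (effect estimate, standard error) for each trial.
studies = [(2.0, 5.0), (-1.0, 4.0), (0.5, 6.0), (15.0, 7.0)]

# Inverse-variance (fixed-effect) weights: precise studies count for more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)

print(f"pooled effect: {pooled:.2f} ng/dL")
```

Precise studies (small standard errors) get large weights, so a single noisy outlier pulls the pooled estimate far less than it pulls a headline.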

Going over and assessing just one paper can be a lot of work. Hours, in fact. Knowing the basics of study assessment is important, but we also understand that people have lives to lead. No single person has the time to read all the new studies coming out, and certain studies can benefit from being read by professionals with different areas of expertise.

Note from EC: As I’m busy, I try to rely on sources I can trust to help me carve out time (and sanity). That’s why whenever people ask me how to stay on top of nutrition research, I always refer them to Examine.

 

Their Membership is 33% off for the next X days, and I highly recommend that you consider signing up. At the end of the day, we’re busy individuals, and Examine keeps me on top of the cutting edge of research in 1/20th the time it would take me to do it myself. Instead of stressing out about screening, curating, reading, and summarizing research, Examine does it for me.

Sign-up Today for our FREE Newsletter and receive a four-part video series on how to deadlift!


How to Stay on Top of the Latest Research

I pride myself on having a training philosophy that is rooted in both “in the trenches” anecdotal experience and evidence-based practice. Both can be challenging to develop, but for different reasons.

Building a sample size in your head helps you to make judgment calls when the research isn’t necessarily there, or you need to make inferences based on limited information. As an example, as I've written previously here, research has demonstrated that lat strains that are managed conservatively have a return to pitching timeline of ~100 days. That information is great if you’re seeing an athlete from Day 0 post injury, but where should that individual’s progress be at Day 40? That’s where in the trenches experience helps. Unfortunately, it takes a ton of time - and learning from mistakes along the way.

Evidence-based information can be accessed much more easily and without the need for years of experience. Unfortunately, though, there is a ton of it to sift through. There are countless scholarly journals out there, and full-text access isn’t always easy to come by. Moreover, we often take for granted that study designs are all acceptable if something makes it to publication. The truth is that some scholarly journals have much lower publication standards than others. It could be a full-time job just poring over all these journals, but it could be five full-time jobs to make sure they’re all legitimate.

Who has time for that? Certainly not me. Luckily, the good folks at Examine.com have built out an amazing team whose focus is squarely on this evidence-based arena. And, they’ve got an awesome new resource - Examine Personalized - I’m excited to tell you about because I’m going to be utilizing it myself. Here's how it works:

I love this approach because it's curated content: just like you follow certain people on social media to get the information you want, this allows you to select which categories mean the most to you. Here are the 25 categories you can select from for your targeted education:

The July update covered 275 studies over 149 pages in these 25 categories. This is going to save me a lot of time and, more importantly, make me a more informed professional. And, it'll help me to come up with ideas for content for my writing and videos on this site, as some of my most popular articles of all time have related to me building on what I've learned from evidence-based research. You can learn more HERE.


 


Can You Trust the Research You’re Reading?

Today's guest post comes from the bright minds at Examine.com, who just released their new continuing education resource, Examine Research Digest. I love their stuff, and I'm sure you will, too. -EC

The internet is one of the last true democracies.

It’s a place where anybody with the necessary tools (a computer and an internet connection) can actively shape the perception of information...even if they have no qualification to do so.

Though the democratization of information is a good thing, one would assume that certain topics like scientific research would remain steeped in their foundations, because...well...that's how they remain reliable.

Unfortunately, in efforts to keep up with the demands for new, sexy content, many writers have taken to regurgitating information with little to no understanding of its context or how it affects you: the end reader. This is one of the many ways information gets skewed.

It’s often said that misinformation is a symptom of misinterpretation. The very same words can mean different things to different people.


One example of this is when a research conclusion is reported as "significant." When scientists use this term, it implies "statistical significance." What this means is that the observed results would be unlikely to have occurred by chance alone if the intervention truly had no effect.

This is very different than the general understanding of “significant.” Think of it this way: if your deadlift goes up from 405 to 410, that could be considered statistically significant in science. Would you say "my deadlift went up significantly," though? Probably not!

Now imagine how this simple misunderstanding of a term can impact the interpretation of a study. Something that may mean very little to a researcher is taken out of context by a well-meaning blogger, eventually ending up as an eye-catching headline in your Facebook timeline.

A second way that information becomes misinformation is through the process of simplification.

When scientific studies are written, they are done so to most effectively relay their findings to other scientists, facilitating future studies and discoveries on the topic in question. If you’ve ever read a research study, you know that this approach to writing hinges on the use of precise terminology and complex verbiage so that nothing gets misinterpreted.

Unfortunately, this approach is less than ideal for relaying important findings to the people who can apply them. This leaves a few options:

1. "Dumb down" the content, hoping nothing gets lost in translation.

2. Keep as-is, with the understanding that it won't be able to reach as many people as intended.

3. In the most egregious option, data gets turned into "sound bites" that are easily transmitted by traditional media outlets.

Once one or more of these things happens, all traces of relevance to the original source get lost and misinformation starts to spread. A third, equally insidious way misinformation gets spread is by shifting focus onto one study (cherry-picking) rather than the entire body of evidence.

The internet has rapidly increased the speed of the news cycle. Information that once had time to be verified has taken a backseat to "as-it-happens" tidbits on Twitter. In the media's race to keep up, more factually inaccurate information gets disseminated in far less time.

Now, appreciate the fact that a news organization only has so much air time or so many words to talk about a new publication, and you can see how there isn't enough time to allow an adequate in-depth analysis of past studies or how the new study fits into the overall body of evidence.

Remember the media screaming “a high-protein diet is as bad as smoking?” Or that “fish oil caused prostate cancer?” These are perfect examples of two well-intentioned studies blown way out of proportion.


This leads to the fourth and final way misinformation gets spread: the reliance on controversy to gain an audience.

Earlier this year a blog post theorizing the connection between creatine consumption and cancer took social media by storm. The writers were savvy enough to understand that a title proclaiming creatine to be harmful had far more appeal than yet another post confirming its athletic performance benefits.

This sort of thing isn’t a new occurrence, but for some strange reason, audiences never tire of it. Once a controversial article starts getting shared, a game of broken telephone comes into play, transforming once-quality research into misinformation. As an industry, this is a problem we need to address.

"Epilogue" from EC

In spite of all this misinformation, there are people still fighting the good fight - and that's why I’m a big fan of Examine.com. They wrote our most popular guest post ever (on the science of sleep). And, whenever people ask me about supplementation, I refer them to Examine.com.

To that end, for those who want to be on the cutting edge of research, and want something that counters the overwhelming amount of misinformation, I'd recommend Examine.com's fantastic new resource, the Examine Research Digest (ERD).


Before a study is presented in ERD, it's analyzed and reviewed by the researchers, then all references and claims are double-checked by a panel of editors. Subsequently, a final pass is done by a review panel of industry and academic leaders with decades of experience. Because you have a panel from different backgrounds, you know that you’re getting the complete picture, not the analysis of a single person.

Needless to say, I'm excited to take advantage of this resource personally to stay up to date on some of the latest nutrition and supplementation research - and its practical applications for my clients and readers. I'd strongly encourage you to do the same, especially since it's available at a 20% off introductory price this week only. You can learn more HERE.


Should Pitching Coaches Understand Research Methods and Functional Anatomy?

Quite some time ago, I met a pitching coach who made a bold statement to me:

"Most Major League pitchers have terrible mechanics."

I don't know if he meant that they were mechanics that could lead to injuries, or simply mechanics that would interfere with control and velocity development, but either way, I shrugged it off.  Why?

Their mechanics are so terrible that they're in the top 0.0001% of people on the planet who play their sport.  And, they're paid extremely well to be terrible, I suppose.

Kidding aside, this comment got me to thinking about something that's been "festering" for years now, and I wanted to run it by all of you today to get your impressions on it.  In other words, this post won't be about me ranting and raving about how things should be, but rather me starting a dialogue on one potential way to get the baseball development industry to where it needs to be, as it clearly isn't there yet (as evidenced by the fact that more pitchers are getting hurt nowadays than ever before).

The way I see it, mechanics are typically labeled as "terrible" when a pitcher has:

1. Trouble throwing strikes

2. Pitching velocity considerably below what one would expect, given that pitcher's athleticism

3. Pain when throwing

4. Mechanical issues that theoretically will predispose him to injury 

In the first three cases, anyone can really make these observations.  You don't need to be trained in anything to watch the walk totals pile up, read a radar gun, or listen when a pitcher says, "It hurts."  Moreover, these issues are easier to coach because they are very measurable; pitchers cut down on their walks, throw harder, and stop having pain.

Issue #4 is the conundrum that has led to thousands of pissing matches among pitching coaches.  When a pitcher gets hurt, everyone becomes an armchair quarterback.  The two biggest examples that come to mind are Mark Prior and Stephen Strasburg.

Prior was supposed to be one of the best of all-time before shoulder surgeries derailed his career.  After the fact, everyone was quick to pin all the issues on his mechanics.  What nobody has ever brought to light is that over the course of nine years, his injuries looked like the following (via Wikipedia):

1. Hamstrings strain (out for 2002 season)
2. Shoulder injury (on-field collision - missed three starts in 2003)
3. Achilles injury (missed two months in 2004)
4. Elbow strain (missed 15 days in 2004)
5. Elbow injury (missed one month in 2005 after being hit by line drive)
6. Rotator cuff strain (missed three months in 2006)
7. Oblique strain (missed two starts in 2006)
8. Rotator cuff strain (ended 2006 season on disabled list)
9. Shoulder surgery (missed entire 2007 season, and first half of 2008)
10. Shoulder capsule tear (out for season after May 2008)
11. Groin injury (missed last two months of 2011 season)

By my count, that is eleven injuries - but four of them were non-arm-related.  And, two of them (both early in his career) were contact injuries.  Who is to say that he isn't just a guy with a tendency toward degenerative changes on a systemic level?  How do we know one of the previous injuries didn't contribute to his arm issues later on?  How do we know what he did for preventative arm care, rehabilitation, throwing, and strength and conditioning programs? We don't have his medical records from earlier years to know if there were predisposing factors in place, either.  I could go on and on.

The issue is that our sample size is one (Mark Prior) because you'll never see this exact collection of issues in any other player again.  It's impossible to separate out all these factors because all issues are unique.  And, it's one reason why you'll never see me sitting in the peanut gallery criticizing some teams for having injured players; we don't have sufficient information to know exactly why a player got hurt - and chances are, the medical staff on those teams don't even have all the information they'd like to have, either.

Strasburg has been labeled the best prospect of all-time by many, and rightfully so; his stuff is filthy and he's had the success to back it up.  Of course, the second he had Tommy John surgery, all the mechanics nazis came out of their caves and started berating the entire Washington Nationals organization for not fixing the issue (an Inverted W) proactively to try to prevent the injury.  Everybody is Johnny Brassballs on the internet.

To that end, I'll just propose the following questions:

1. Did Strasburg not do just fine with respect to issues 1-3 in my list above?

2. Would you want to be the one to screw with the best prospect of all-time and potentially ruin exactly what makes him effective?

3. Do we really know what the health of his elbow was when the Nationals drafted him?

4. Do we know what his arm care, throwing, and strength and conditioning programs were like before and after being drafted?

There are simply too many questions one can ask with any injury, and simply calling mechanics the only contributing factor does a complex issue a disservice - especially since young athletes are growing up with more and more physical dysfunction even before they have mastered their "mature" mechanics.

The Inverted W theory is incredibly sound; Chris O'Leary did a tremendous job of making his case - and we certainly work to coach throwers out of this flaw - but two undeniable facts remain.  First, a lot of guys still throw with the Inverted W and don't have significant arm issues (or any whatsoever).  They may have adequate mobility and stability in the right places (more on this below) to get by, or perhaps they have just managed their pitch counts and innings appropriately to avoid reaching threshold.  I suspect that you might also find that many of these throwers can make up for this "presumed fault" with a quick arm combined with a little extra congenital ligamentous laxity, or subtle tinkering with some other component of their timing.

Second, a lot of guys who don't have an Inverted W still wind up with elbow or shoulder injuries. Good research studies bring issues like these to light, and nobody has really gotten a crew of inverted W guys and non-inverted W guys together to follow injury rates over an extended period of time while accounting for variables such as training programs, pitch counts, and pitch selection (e.g., sliders vs. curveballs). We don't know if some of these other factors are actually more problematic than the mechanics themselves, as it's impossible to control all these factors simultaneously in a research format.

As such, here we have my first set of questions:

Don't you think that pitching coaches need to make a dedicated effort to understand research methods so that they can truly appreciate the multifactorial nature of injuries?  And, more importantly, wouldn't learning to read research help them to understand which mechanical issues are the true problem?  

The Inverted W is certainly an issue, but there are many more to keep in mind. Just my opinion: I think the baseball industry would be much better off if pitching coaches read a lot more research.

Now, let's move on to my second question.  First, though, I want to return to the Inverted W example again. I have met only a few pitching coaches who can explain exactly what structures are affected by this mechanical flaw; most don't understand what is functionally taking place at the shoulder and elbow.  They don't understand that excessive glenohumeral (shoulder) horizontal abduction, extension, and external rotation can all lead to anterior glide of the humerus, creating more anterior instability and leading to injuries to the anterior glenohumeral ligaments and labrum.  Meanwhile, the biceps tendon picks up the slack as a crucial anterior stabilizer.  They also don't appreciate how these issues are exacerbated by poor rotator cuff function and faulty scapular stabilization patterns.  And, they don't appreciate that these issues are commonly present even in throwers who don't demonstrate an Inverted W pattern.

At the elbow, they also can't explain why, specifically, the Inverted W can lead to problems. They don't understand that the timing issue created by the "deep" set-up leads to greater valgus stress at lay-back because the arm lags.  They can't explain why some players have medial issues (UCL injuries, ulnar nerve irritation, flexor/pronator strains, and medial epicondyle stress fractures) while other players have lateral issues (little league elbow, osteochondritis dissecans of radial capitellum) from the same mechanical flaws.  They can't explain why a slider thrown from an Inverted W position would be more harmful than a curveball.

I can explain it to you - and I can explain it to my athletes so that they understand, too. I've also met a lot of medical professionals who can clearly outline how and why these structures are injured, but we aren't the ones coaching the pitchers on the mounds.  The pitching coaches are the ones in those trenches.

To that end, I propose my second set of questions:

Don't you think pitching coaches ought to make an effort to learn functional anatomy in order to understand not just what gets injured, but how those injuries occur?  Wouldn't it give them a more thorough understanding of how to manage their pitchers, from mechanical tinkering, to pitch selection, to throwing volume?  And, wouldn't it give them a more valid perspective from which to contribute to pitchers' arm care programs in conjunction with rehabilitation professionals and strength and conditioning coaches? 

The problem with just saying "his mechanics suck" is that it amounts to applying a theory to a sample size of one.  That's not good research.  Moreover, this assertion is almost always made without a fundamental understanding of that pitcher's functional anatomy.  It amounts to coaching blind.

To reiterate, this was not a post intended to belittle anyone, but rather to bring to light two areas in which motivated pitching coaches could study extensively in order to really separate themselves from the pack.  Additionally, I believe wholeheartedly in what Chris O'Leary put forth with his Inverted W writings; I just used it as one example of a mechanical flaw that must be considered as part of a comprehensive approach to managing pitchers.

With that said, I'd love to hear your opinions on these two sets of questions in the comments section below. Thanks in advance for your contributions.

Sign-up Today for our FREE Baseball Newsletter and Receive Instant Access to a 47-minute Presentation from Eric Cressey on Individualizing the Management of Overhead Athletes!


How to Read Fitness Research

If you read this blog on a regular basis, I'm sure you know that while I'm undoubtedly an "experiment in the trenches" kind of guy, I'm also very evidence-based in a lot of what I do.  As such, I spend a lot of time reading research.  Doing so not only affirms or refutes what I'm doing, but also provides me with consistent content ideas for this blog: read more, write more!

Without even thinking about it, I rely pretty heavily on what I was taught in graduate school research methods courses and what I learned during my own master's thesis training intervention, data collection/analysis, and subsequent publication in The Journal of Strength and Conditioning Research.  Unfortunately, a lot of folks never get this training in school - or they get it at a time when it isn't interesting or applicable, because all they're doing is cramming for the next test or counting down the hours until a big party weekend.  Then, down the road, when it comes time to interpret research in a scholarly journal, they overlook key elements of a study, misinterpret results, or let poor research practices slide.

Additionally, a lot of people simply don't know where to look when it comes to finding new research in their field of expertise.  Especially within the fitness industry - where one may need to cover everything from nutrition, to sports medicine, to strength training - things can be tough to locate.

The good news is that my buddy Mark Young just released a product called How to Read Fitness Research to address these problems.

I won't lie to you: reading about research methods isn't sexy, and you probably won't be able to watch all the webinars straight through like you would the Rocky or Jaws movies.  However, if you put the time in to cover this material, you'll be rewarded with a better understanding of how to approach continuing education in the fitness industry.

The general fitness enthusiasts in the crowd don't need to worry about picking this one up, and neither do those of you who've been through college exercise science research methods classes (and actually paid attention).  Those of you who entered the fitness industry as a second career - or as a first career without a college education - should absolutely check this out, though.  At $37, it's a great value.

Check it out: How to Read Fitness Research

Related Posts

How to Attack Continuing Education in the Fitness Industry

Want to be a Personal Trainer or Strength Coach? Start Here.

The Lucky 13: Cressey's Top Reading Recommendations
