Active Low-Carber Forums
Atkins diet and low carb discussion provided free for information only, not as medical advice.
#1   Thu, Jul-17-03, 07:51
Bookery
Registered Member
Posts: 78
Plan: Atkins
Stats: 197/165/130 Female 5'4"
BF: ??/29/20
Progress: 48%
Location: Massachusetts

Significance in scientific studies

I've noticed some people getting frustrated with the tendency of the anti-Atkins media (or even the ambivalent-Atkins media) to describe differences or results of pro-Atkins studies as "not significant." I just want to clear something up for all those of you who are not True Science Nerds. Those of you who *are* True Science Nerds, feel free to tune this out or correct me. Those of you who aren't, give me a smack if this sounds preachy; I'm just trying to be precise:

"Significant", in the context of a scientific study, is a loaded word. It doesn't mean what it means in normal, everyday conversation. It means that when a statistical "test" -- basically, some kind of formula invented by somebody who likes math a lot more than I do -- was performed on the data, there was a very low probability that a difference that big would show up by chance alone. You can get that low probability in basically two ways. The first way is if the difference in the study is really big. This would be something along the lines of having 50 low-fat dieters and 50 low-carb dieters, where the low-carb dieters all lost 50 pounds and the low-fat dieters all lost 5 pounds. The second and more common way is if a smaller difference shows up across lots of people. So 5,000 low-carb dieters all lost 20 pounds, and 5,000 low-fat dieters all lost 15.

Now, here's where it gets annoying. An "insignificant" difference would be something along the lines of 5 low-fat dieters losing 15 pounds each, and 5 low-carb dieters losing 20. There's still a difference; it's just not *statistically* significant, because with only 5 people per group, chance alone could easily produce a gap that size. So while we all know the truth, the scientists aren't allowed to give much weight to that study.

Unfortunately, in the translation from a scientific study to the mass media, "insignificant" loses that special meaning and people start thinking it means "not important" or "very small". Of course I'm preaching to the choir here, but I think low-carb works way way better than low-fat, and so I'd bet you at least a dollar that in the case of low-carb studies, most "insignificance" is just due to a lack of participants. Solution? Let's all start signing up for low-carb studies -- and let's make sure the anti-Atkins people know what "significant" really means. After all, most of *their* studies aren't significant, either. Even the ones *with* lots of people.
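One way to see the point in action is a quick Python sketch (all the weight numbers here are invented for illustration, not from any real study): the exact same 5-pound average difference is nowhere near significant with 5 dieters per group, but comfortably significant with 50 per group.

```python
# A rough sketch of the idea above, using only the standard library.
# All weight numbers are made up for illustration.
import math
from statistics import mean, stdev

def two_sample_t(a, b):
    """Pooled two-sample t statistic: the gap between the group means,
    measured in standard-error units. Bigger |t| = harder to explain
    by chance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# 5 dieters per group, low-carb losing 5 more pounds on average:
low_carb_small = [10, 30, 15, 25, 20]  # mean 20
low_fat_small = [5, 25, 10, 20, 15]    # mean 15

# The same data repeated so there are 50 dieters per group --
# same average difference, same spread, just more people:
low_carb_big = low_carb_small * 10
low_fat_big = low_fat_small * 10

print(two_sample_t(low_carb_small, low_fat_small))  # 1.0 -- not significant (cutoff is about 2.3 here)
print(two_sample_t(low_carb_big, low_fat_big))      # 3.5 -- clearly significant (cutoff is about 2.0)
```

Same difference, same spread; only the head count changed, and that alone moves the result from "insignificant" to "significant".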
#2   Thu, Jul-17-03, 08:08
TarHeel
Give chance a chance
Posts: 16,944
Plan: General LC maintenance
Stats: 152.6/115.6/115 Female 60 inches
BF: 28%
Progress: 98%
Location: North Carolina

Good point!

Quote:
Unfortunately, in the translation from a scientific study to the mass media, "insignificant" loses that special meaning and people start thinking it means "not important" or "very small".


Thanks for posting that, Bookery. Even for folks who know the basics of "statistical significance", it is frustrating to see journalists use the terms "significant" or "insignificant" loosely in articles.

Kay
#3   Thu, Jul-17-03, 12:57
mnbooger
Contributing Member
Posts: 92
Plan: Atkins
Stats: 302/350/150 Male 69 inches
Progress: -32%
Location: Shakopee, Minnesota

When bad science happens, many things can get worse. There was an EPA study claiming that secondhand smoke will kill you, even though the results were statistically insignificant (after the federal courts looked at it, they said it was biased and flawed because the researchers picked and chose the data they wanted). Well, the non-smokers just looked at the results they wanted. So now a lot of cities are banning bar and restaurant smoking based on a flawed study, and even though the courts said it was flawed, it is still cited by people who want to show smoking is bad. And the number of people who die from secondhand smoke keeps getting larger... you just have to get creative with your data gathering. If your roommate smokes and you get hit by a bus -- another victim of secondhand smoke. I am just worried that someone is going to start banning any source of smoke, e.g. campfires and barbecue grills, just to keep people safe from that evil smoke.

Before you flame me: I know that smoking is not good for you, but secondhand smoke has a very small (if any) effect on others.
If they want to do a better study, they should check the cancer rates of bar and restaurant workers, casino workers, and others who have to work in smoke-filled environments, and then compare those rates to the general population.
#4   Thu, Jul-17-03, 21:07
liz175
Lowcarb since 7/2002
Posts: 5,991
Plan: Atkins
Stats: 360/232/180 Female 5'9"
BF: BMI 53.2/34.3/?
Progress: 71%
Location: U.S.: Mid-Atlantic

Just to add one more wrinkle to the discussion of statistical significance: in addition to being a function of the sample size and the difference in effect size between the two samples, whether or not a difference is statistically significant also depends on the variance in the data. In other words, a difference of five pounds between two groups may be significant in a sample of size X if the data are very tightly clustered -- everyone in one group lost between 4 and 6 pounds while everyone in the other group lost between 9 and 11 pounds -- and may not be significant in a sample of the same size if the data are more spread out -- everyone in one group lost between 1 and 9 pounds while everyone in the other group lost between 6 and 14 pounds. In both cases, the average difference between the two groups is five pounds, but in the case where the data are more spread out, you need a bigger sample to be confident that the difference was not due to random chance.
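The variance wrinkle can be sketched in Python too (made-up numbers again): both pairs of groups below differ by exactly 5 pounds on average, but the tightly clustered pair produces a far larger test statistic than the spread-out pair.

```python
# Same average difference, different variance -- only one is significant.
# Numbers are invented for illustration.
import math
from statistics import mean, stdev

def two_sample_t(a, b):
    """Pooled two-sample t statistic (difference in means, in standard-error units)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Tightly clustered: everyone within a pound of their group's mean.
tight_a = [4, 5, 6, 4, 6]       # mean 5
tight_b = [9, 10, 11, 9, 11]    # mean 10
# Spread out: same 5-pound gap between the means, much more variation.
spread_a = [1, 5, 9, 2, 8]      # mean 5
spread_b = [6, 10, 14, 7, 13]   # mean 10

print(two_sample_t(tight_b, tight_a))    # large: clearly significant
print(two_sample_t(spread_b, spread_a))  # just under the cutoff of about 2.3 for these sample sizes
```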

What statisticians are doing when they measure statistical significance is computing how confident we can be that the true difference between two groups is greater than 0. A difference is usually considered statistically significant if we are 95 percent certain that it is greater than 0. The reason we have to do this is that anytime you select a random sample, there is a chance that it is not really representative of the population. However, the more people you select into the sample, the more likely the sample is to be representative. Think about tossing a coin: if you toss it three times, you may get heads all three times. If you toss it 300 times, it is very unlikely you will get heads much more than half the time -- it evens out. The same thing happens when you pick a sample -- a large one is more likely to be representative, and thus it is easier to establish the statistical significance of your results. Unfortunately, larger samples cost more money!
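The coin-toss intuition is easy to simulate (a quick sketch; the seed value is arbitrary, chosen only to make the run repeatable): with a handful of tosses the fraction of heads bounces around, while with tens of thousands it settles very close to 50 percent.

```python
# Law-of-large-numbers sketch: more tosses = fraction of heads closer to 0.5.
import random

random.seed(42)  # arbitrary seed so the output is repeatable

for n in (3, 30, 300, 30000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>6} tosses: {heads / n:.3f} heads")
```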

In a good statistical study, statisticians will compute the expected effect size first -- they will figure out how much of a difference between the two groups there is likely to be -- and then make sure that they have a sample big enough to show significance at that difference. Otherwise, there is little point in doing the study. Unfortunately, this step is sometimes skipped, or the effect size (the difference between the two groups, relative to the variance in the data) turns out smaller than expected. In that case we may only be 75 or 80 percent confident that the difference is real, and that is usually not good enough for publication. It doesn't mean that the difference isn't real; it just means that we aren't sure whether or not it is.
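That "figure out how big a sample you need before you start" step is called a power calculation. Here is a minimal sketch using the standard normal approximation for comparing two group means; the 5-pound difference and 12-pound standard deviation are invented for illustration.

```python
# Rough power calculation: how many people per group do you need to
# detect a given true difference? (Normal approximation; the example
# numbers are made up.)
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a true
    difference in means of `delta`, given standard deviation `sigma`,
    at two-sided significance level `alpha` with the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a 5-pound average difference when individual weight loss
# varies with a standard deviation of about 12 pounds:
print(n_per_group(delta=5, sigma=12))    # roughly 90 people per group
# Halving the expected difference roughly quadruples the sample needed:
print(n_per_group(delta=2.5, sigma=12))
```

This is exactly why an underpowered study can show a real-looking difference and still get labeled "insignificant": the sample was simply too small for the effect that actually showed up.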

Last edited by liz175 : Thu, Jul-17-03 at 21:11.
Copyright © 2000-2024 Active Low-Carber Forums @ forum.lowcarber.org