a statistical test cannot absolutely prove anything - all a statistical test can do is quantify how likely it is that an observed result in a study could have arisen by chance rather than reflecting a real effect
tests of significance (hypothesis tests) in clinical studies are undertaken to assess the probability that an observed difference between interventions could have occurred by chance - the tests actually check the hypothesis that no difference exists between interventions (referred to as a 'null hypothesis')
the p-value is the probability of observing a difference at least as large as the one seen in the study if no difference truly existed between the interventions, i.e. if the 'null hypothesis' were true (see the worked sketch after this list)
probability can take any value between zero (no chance at all) and 1.0 (certainty), and this is also true of the p-value
by convention, an arbitrary threshold of p = 0.05 is used as the cut-off for statistical significance
this means that if the p-value is < 0.05 (i.e. a difference at least as large as the observed one would occur by chance no more than 1 time in 20 if the interventions truly had the same effect), the effects of the two interventions are said to be statistically significantly different and the 'null hypothesis' is rejected (i.e. there is evidence that a difference exists between the interventions)
conversely, if the p-value is > 0.05, the observed difference is, by convention, not statistically significant (note that this is a failure to demonstrate a difference, not proof that no difference exists)
note that significance tests alone do not indicate the magnitude of the observed difference between treatments, which is needed to judge the clinical significance of study results (the second sketch below illustrates this)
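
To make the p-value definition concrete, here is a minimal sketch of a two-sided permutation test in Python. The 'trial' data, group sizes, and endpoint are entirely hypothetical; the point is only to show a p-value being computed as the proportion of chance re-labellings that produce a difference at least as large as the observed one, with the conventional 0.05 decision rule then applied.

```python
# Sketch of what a p-value measures, via a permutation test.
# All patient data below are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical endpoint (e.g. fall in blood pressure, mmHg) per patient
treatment = np.array([8.1, 5.4, 7.9, 6.2, 9.0, 4.8, 7.3, 6.7])
control = np.array([4.9, 6.1, 3.8, 5.0, 4.2, 5.6, 3.9, 4.4])

observed_diff = treatment.mean() - control.mean()

# Under the null hypothesis the group labels are interchangeable, so we
# shuffle them many times and count how often chance alone produces a
# difference at least as large as the one actually observed.
pooled = np.concatenate([treatment, control])
n_treat = len(treatment)
n_perms = 20_000
count = 0
for _ in range(n_perms):
    rng.shuffle(pooled)
    diff = pooled[:n_treat].mean() - pooled[n_treat:].mean()
    if abs(diff) >= abs(observed_diff):  # two-sided test
        count += 1

p_value = count / n_perms
print(f"observed difference: {observed_diff:.2f} mmHg")
print(f"p-value: {p_value:.4f}")

# Conventional decision rule: reject the null hypothesis if p < 0.05
if p_value < 0.05:
    print("statistically significant at the 0.05 level")
else:
    print("not statistically significant at the 0.05 level")
```

A conventional parametric alternative would be an unpaired t-test; the permutation version is shown here because it makes the null-hypothesis logic visible directly.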
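And a sketch of the final point above: a large hypothetical trial in which a clinically trivial benefit (an assumed true effect of 0.5 mmHg) is nevertheless highly statistically significant. The scenario and every number are invented for illustration; the lesson is that the effect size and its confidence interval matter as much as the p-value.

```python
# Sketch showing why a p-value alone says nothing about clinical
# importance: with enough patients, a trivially small effect can still
# be highly 'significant'. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical large trial: assumed true benefit of only 0.5 mmHg, sd 10 mmHg
n = 20_000
treatment = rng.normal(loc=0.5, scale=10.0, size=n)
control = rng.normal(loc=0.0, scale=10.0, size=n)

t_stat, p_value = stats.ttest_ind(treatment, control)

diff = treatment.mean() - control.mean()
# 95% confidence interval for the difference in means (normal approximation)
se = np.sqrt(treatment.var(ddof=1) / n + control.var(ddof=1) / n)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"p-value: {p_value:.2g}")  # very small: 'statistically significant'
print(f"mean difference: {diff:.2f} mmHg")  # but clinically trivial
print(f"95% CI: {ci_low:.2f} to {ci_high:.2f} mmHg")
```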