Most good papers include a table showing how the cohorts or groups compare across a number of baseline characteristics. This lets the reader judge how alike or different the groups really are before asking whether their differences matter. It can also reveal sources of bias that change the answer, because some treatment or disease outcomes can shift with the selection of a subtly different cohort.

Statistical data is collected and plotted on a graph, even if the graph isn’t in the paper. When data from trial subjects is plotted, it usually follows a “bell curve”: most people get one result, and as you move away from the middle, fewer and fewer people get more or less of that result. In other words, most people driving on the interstate are going 75. Fewer are going 70 or 80. Even fewer are going 65 or 85, and so on. It is rare for someone to go 100, and also rare to go 40 (unless on the ramp!). The people going 40 or 100 are on the “tails” of the curve. The majority going 65-85 are in the fat, bell-shaped middle.
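A quick sketch of the bell curve idea, with made-up numbers (a hypothetical mean speed of 75 mph and a spread of 5 mph, not data from any real paper): simulate interstate speeds and count how the observations thin out toward the tails.

```python
import random
import statistics

random.seed(1)  # make the simulation reproducible

# Hypothetical speeds: mean 75 mph, standard deviation 5 mph
speeds = [random.gauss(75, 5) for _ in range(10_000)]

# Drivers in the fat middle of the bell (65-85, within 2 SD of the mean)
middle = sum(1 for s in speeds if 65 <= s <= 85)
tails = len(speeds) - middle  # the rare very-fast and very-slow drivers

print(f"mean speed: {statistics.mean(speeds):.1f} mph")
print(f"in the 65-85 middle: {middle}")
print(f"on the tails: {tails}")
```

Running this, the vast majority of simulated drivers land in the middle band and only a few hundred sit on the tails, which is the shape the trial data usually takes.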

If the thing being studied really makes a difference, then the bell curve will shift for the group getting that difference. Whether that shift is likely to be real is reflected by a value called the "p-value", which gets very small when the difference is large relative to chance. If the difference is very small, then the item studied doesn't matter much, and the p-value is high. Most people treat a p-value <0.05 as making the result "significant", meaning not due to a random event. This isn't always so, but it is a good rule of thumb.

Also look for "confidence intervals", which aren't as black-and-white as a lone p-value. These are very useful because they give a range of probable truths around the estimate. You might see this stated as: drug A worked for 10% (95% CI -8% to +20%). This means it worked about 10% of the time, but with 95% confidence the true effect could be as low as -8% (they got worse!) or as high as +20% (even better). Drug A might not be so hot. On the other hand, if drug B worked 65% (95% CI +62% to +80%), then there are going to be a lot of people wanting that item! The bigger the trial, the narrower the spread of CI values. Note that this is a 95% CI - the extreme 5% of possibilities, the tails, are left out because those values happen to the fewest people.

The statistics will also talk about one-tailed or two-tailed tests. If something can go up OR down, then a two-tailed test is needed, because the people at the top AND the bottom of the curve can be moved around. They will also talk about paired tests. In general, if you measure something once and then measure it again later, you need a paired test, because the two results are probably related to each other. A good example is pulse rate: the guy with a pulse of 50 at the beginning will likely still have a lower pulse later than the guy who starts at 90 - the data is paired. Sometimes the statistics seek to say that something caused something else, and a lot of information about correlation will be presented.
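The claim that bigger trials give narrower confidence intervals can be sketched with a few lines of arithmetic. This is a toy calculation, not from any paper, using the standard normal approximation for a proportion's 95% CI (estimate ± 1.96 × standard error) and the "drug A worked for 10%" figure from the text:

```python
import math

def ci_95(p, n):
    """95% CI for a response rate p observed in n patients
    (normal approximation: p +/- 1.96 * sqrt(p*(1-p)/n))."""
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Same 10% response rate, three hypothetical trial sizes
for n in (25, 100, 1000):
    low, high = ci_95(0.10, n)
    print(f"n={n:>4}: 10% (95% CI {low:+.0%} to {high:+.0%})")
```

With 25 patients the interval dips below zero (the drug might actually make people worse), while at 1000 patients the interval tightens to a clearly positive range - the same point the drug A example makes.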
An example of this is using statistics to say that drunk driving causes accidents, or "correlates" with them. An "r-value" will be presented, but it isn't always used correctly. Be skeptical. Decide whether it makes sense that the two things correlate, and look for other factors that could explain the finding.