You often hear the academic axiom 'publish or perish', but why is publication of scientific results so vital to the scientific enterprise? While we envision science as being performed by lab-coat-wearing individuals who toil in the lab, in reality science is an ongoing discussion amongst scientists working on related subjects. These discussions happen through a number of modes of communication, with conferences and journal articles being the primary ones. Conferences are generally reserved for brand-new and potentially preliminary results: you either present a poster or give a talk, and it allows you to get feedback before the prime time of peer review.
Journals are generally peer reviewed, meaning that when you submit an article it gets reviewed by scientists who work on topics related to the paper. These critiques can range from supportive and encouraging to downright nasty. Often the really nasty ones come from bitter grad students, who are the ones professors typically ask to review papers for them. The peer review process is blinded from the perspective of the submitter: you don't know exactly who the reviewers are, though in many fields you can make a good guess because the number of people qualified to understand and review your work is small. Reviewers know who the authors are but are supposed to be impartial and evaluate the paper on its merits (namely, how good is the data and does it support the claims). Reviewers are also not supposed to use information obtained from unpublished manuscripts to further their own research. This can lead to some nasty politics. Suppose as a reviewer you come across a paper with a new technique that would really accelerate the research of one of your students who is struggling. Do you share the information with the student or keep it secret until it is published? Professors often share the information if they feel their own area of research is different enough that they aren't direct competitors. Another ethical concern arises when you are reviewing a paper that would scoop something your own group is working on. The unscrupulous thing to do is to give the paper a bad review to delay its publication and then hastily get your own work published somewhere else.
The peer review process helps to maintain the integrity of the scientific process. As an investigator, you can sometimes over-interpret results or feel that your data is stronger than it is. Reviewers help to determine both the veracity and the strength of the results.
Are all journals the same in terms of quality? Definitely not. There are literally hundreds to thousands of journals on many topics, yet people in a field may only read the top 20-30 journals or so. Are the results in lower-tier journals necessarily wrong? No, but they often have problems in their experimental design, weak statistics, or lack novelty (i.e. it has basically been shown before). Typically, the science reported on in the media originates from a high-profile journal (like Science, Nature, Cell, the Proceedings of the National Academy of Sciences (PNAS), PLoS...). These journals typically publish high-quality papers presenting novel findings with deep impact.
Statistical analysis is an important part of interpreting scientific results. Often you'll find a difference in response to some change in the system, but you want to know: is that difference significant? That is, what is the likelihood that this result would have occurred by chance? This is called hypothesis testing because you are testing whether to accept the 'null' hypothesis (that the result occurred by chance) or reject it (because it is highly unlikely to have occurred by chance). An important aspect of the scientific process is that one never 'proves' a hypothesis. One merely demonstrates results that are significant. The more results there are that agree with the hypothesis, the greater the likelihood that it describes a real phenomenon. It is for this reason that publication is so vital. It announces new findings to the scientific community and allows others to test them in their own research, which can lead either to rejection of the results or to further confidence in their veracity.
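To make the 'occurred by chance' idea concrete, here is a minimal sketch of a permutation test in Python. The control and treated measurements are made-up numbers purely for illustration; the logic is what matters: if the null hypothesis were true, the group labels would be arbitrary, so we shuffle them many times and ask how often chance alone produces a difference as large as the one observed.

```python
import random
import statistics

random.seed(42)

# Hypothetical measurements (illustrative numbers, not real data)
control = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1]
treated = [10.9, 10.6, 11.1, 10.4, 10.8, 10.7, 11.0, 10.5]

observed = statistics.mean(treated) - statistics.mean(control)

# Permutation test: shuffle the pooled data, re-split it into two
# groups of the original sizes, and count how often the shuffled
# difference is at least as large as the observed one.
pooled = control + treated
n_control = len(control)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n_control:]) - statistics.mean(pooled[:n_control])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.4f}, p-value: {p_value:.4f}")
```

A small p-value means a difference this large almost never arises from random relabeling, so we reject the null hypothesis; note that this rejects 'chance' as an explanation without 'proving' the alternative.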
When interpreting the veracity of a particular result, the sample size used is vital. A study that looks at something 1000 times or in 1000 people is much less powerful than a study that looks at something 1 million times or in 1 million people. This is especially true of studies pertaining to humans and human health. Unlike lab animals, which are inbred to be as genetically identical as possible, we are genetically heterogeneous. To understand general trends in human behavior or health, one needs large sample sizes to ensure that the results aren't due to sampling bias; the larger the sample, the less likely the result is to be biased. Generally, the precision of a study grows as the square root of the sample size: the standard error of an estimate shrinks as 1/√n, so a study with 1 million people (10^6) has roughly 30 times less sampling error than a study with 1000 people. In other words, a million-person study can distinguish effects from chance that are roughly 30 times smaller. So sample size is hugely important when determining the veracity of a result. With human studies, it's also hugely important that the experimental design (who is included in the study, how the study is conducted) controls for confounding variables as much as possible. Say you are trying to prove that smoking causes cancer, but you unwittingly enroll people in your study who have been exposed to toxic chemicals that also increase their incidence of cancer. You would find an increase in cancer rates and attribute it to smoking, but it would not be a clean result because you hadn't controlled for other risk factors. So again, sample size is hugely important, because interpreting results requires enough control groups to remove the effects of other risk factors or variables. Poorly designed studies may have limited sample size or lack necessary controls.
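The square-root scaling can be checked with a quick simulation. The sketch below draws samples from a unit-variance Gaussian as a stand-in for a study population (the distribution and the sample sizes 100 and 2500 are arbitrary choices) and measures how much the sample mean fluctuates from study to study:

```python
import math
import random
import statistics

random.seed(0)

def sample_mean_std_error(n, trials=1000):
    """Empirical standard deviation of the sample mean across many
    simulated 'studies', each drawing n values from a unit-variance
    Gaussian. Theory predicts this shrinks as 1/sqrt(n)."""
    means = [statistics.mean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

se_small = sample_mean_std_error(100)
se_large = sample_mean_std_error(2500)

print(f"n=100:  standard error ~ {se_small:.4f} (theory {1 / math.sqrt(100):.4f})")
print(f"n=2500: standard error ~ {se_large:.4f} (theory {1 / math.sqrt(2500):.4f})")
print(f"ratio ~ {se_small / se_large:.1f} (theory {math.sqrt(2500 / 100):.1f})")
```

Going from 100 to 2500 participants (a 25-fold increase) shrinks the sampling error only about 5-fold, which is why the jump from a thousand to a million people buys roughly a 30-fold, not a 1000-fold, gain in precision.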
The media often latches on to results without regard for these things, which can create distortions in the public mind. Nevertheless, when large scientific agencies and numerous studies concur, it's highly likely that a particular result is indeed true. Are the studies that contradict the accepted dogma wrong? Maybe, but they might just be suffering from sampling bias or some other problem that confounds the results. They aren't wrong in the sense that there aren't people who display that phenotype or behavior, but they can't be generalized the way a study with a large sample size and controls for confounding variables can. In general, with both cells and organisms, measurements naturally follow a certain distribution, and you can use that natural distribution to determine whether there has been a significant change in a population with respect to a particular measure. So if you knew the height distribution in the 19th century and compared it to the height distribution in the 20th century, you would notice a significant shift toward taller heights. Did human genetics change? Nope. Nutrition improved, so people grew taller.
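As a sketch of how a shift in a distribution can be tested, here is a two-sample z-test on summary statistics for the height example. All the numbers (means, spreads, sample sizes for the two centuries) are invented for illustration, not historical data:

```python
import math

# Hypothetical summary statistics for adult heights in cm
# (invented numbers, chosen only to illustrate the test)
mean_19th, sd_19th, n_19th = 167.0, 7.0, 5000
mean_20th, sd_20th, n_20th = 175.0, 7.0, 5000

# Standard error of the difference between the two sample means
se = math.sqrt(sd_19th**2 / n_19th + sd_20th**2 / n_20th)

# z-score: how many standard errors the distributions have shifted
z = (mean_20th - mean_19th) / se

# Two-sided p-value from the normal tail (erfc gives the upper tail)
p = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.1f}, p = {p:.2e}")
```

With thousands of people in each sample, even a modest shift in the mean produces an enormous z-score, so the change in the distribution is unambiguous; the statistics say the populations differ, but attributing the shift to nutrition rather than genetics is a separate, non-statistical argument.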