One of the ways in which scientific journals are compared is by their "impact factor". How are these factors calculated? It is actually pretty simple. Let's say I publish an article on microbe X in a peer-reviewed journal. Other labs working on microbe X may cite my work when they publish their findings, if my work has directly impacted their research. My article on microbe X now has one citation. The average number of citations per article in a particular journal is its impact factor. So, if a journal has an impact factor of 10, each article gets cited, on average, 10 times. Presumably that means the work published there is of higher quality or greater interest than articles published in a journal with an impact factor of 5.

Of course, the pitfalls here are obvious. Not all research will be of interest to all labs, and a disproportionate amount of weight is given to journals that publish reviews alongside research articles, as reviews are cited more often.

A study released this week in Science (impact factor 31.36) (link to article HERE) focuses on another huge problem with this system: some journals basically force authors to cite previously published work from the same journal to boost its impact factor. These citations are added without actually contributing any useful information to the current study. While the article focuses on economics, business, sociology and psychology, the problem is likely endemic to all journals. As an added bonus, the supplemental tables list the most coercive journals!
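For the curious, the arithmetic described above boils down to a one-line average. Here is a minimal sketch in Python, using made-up citation counts for a hypothetical journal (note that the official metric is a bit fussier, counting citations over a two-year window, but the idea is the same):

```python
def impact_factor(citation_counts):
    """Average number of citations per published article
    (the simplified definition used in this post)."""
    if not citation_counts:
        return 0.0
    return sum(citation_counts) / len(citation_counts)

# Hypothetical journal: five articles cited 12, 8, 15, 4, and 11 times.
print(impact_factor([12, 8, 15, 4, 11]))  # → 10.0
```

So a journal whose five articles collected 50 citations between them lands at an impact factor of 10.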
Please cite this article often and randomly. My impact factor is at risk of fading into oblivion.