Academics know the limitations of evaluations that fall short of what is required to infer causality. If an evaluation fails to eliminate or minimize biases, the program has a weak claim to success. Put simply, "bias" in this context refers to explanations other than the program itself for the success (or failure) being attributed to it. Few evaluations are able to overcome all such biases, and most good reviewers alertly spot them.
Tough reviews are never ignored. When sharply critical comments are received, a paper is likely to be rejected. If it is not, it is often because another reviewer was less censorious. In such cases, authors are invited to submit a revision—with no promises of acceptance—and the decision is deferred. Occasionally, more than one revision is required. Final decisions are made by the editor alone, and although scientific merit is the most important consideration, it is not the only one. We strive to achieve a balance of topics, countries, and disciplines, and to include papers that resonate with frontline workers.
In this issue, three papers fall into this admittedly less than perfect category. None employs the best possible design for an evaluation. They appear, however, because on balance it seemed that each presented approaches and other elements that could prove useful to others in similar situations. Although we have no intention of encouraging second-rate research of any kind, it is important to acknowledge that good evaluation research is usually costly, time-consuming, and may require expertise that is often difficult to find. To comment further and provide some guidance on how to balance these issues, Sue Gallagher has written an accompanying editorial.