This is the 2nd post in a series on how your scholarship is evaluated in various academic evaluation processes. I was inspired by the comments on a blog post about Melville and the knowledge that some of my readers do blog and worry about how blogging will affect their careers. The first post is here.
As I said in my last post, the abstract concepts we are evaluating — quality and impact — are often obscured in everyday discussions of the evaluation of scholarship, in favour of discussing the indicators themselves. In this post, I want to make the link back to the bigger concepts to make it easier to see how evaluation processes might be adapted to incorporate more diverse publication types and perhaps even wider criteria.
Common sense about academic publishing.
It has become common sense in academia that you have to publish in peer reviewed journals.
In most disciplines this is further qualified by the terms “high ranking”, “high impact” or similar. Some disciplines have a clear sense of a ranked list of journals, which may be divided into categories or levels.
In humanities disciplines, the common sense is that you must publish a monograph with a good press. This may be a university press. Or there may be other discipline-specific ideas of what counts as publishing well.
There is an assumption that a good scholarly press will have a peer review process.
New journals, new forms of publishing (including blogs), and market pressures on presses have raised concerns about this ‘common sense’.
Quality.
Peer review is widely stated to be the gold standard. However, peer review itself is only a proxy indicator of quality. It is more accurately an indicator that there is a process in place to evaluate quality, prior to publication.
Because the peers who review your work for publication are similar to the peers reviewing your work for promotion or awarding a grant or whatever, it is assumed that the first evaluation is applicable. There is no need for this committee to duplicate the work of the reviewers for the journal or press.
Interdisciplinary scholarship and publications outside of your discipline can thus raise questions about quality, if you are being evaluated by peers within a discipline. The “peers” in both processes are not the same.
The practice in these processes is not to re-review the work itself before making decisions. The committee therefore doesn’t know whether to interpret such a publication as great research published outside the discipline, or as low quality work that reviewers in a different discipline didn’t recognize as low quality (i.e. they don’t know whether the standards are equivalent). The tendency is to default to the latter assumption unless you also have publications in discipline-specific journals that validate the [disciplinary] quality. ARRRGH!
The quality of the journal or press thus serves as the actual indicator of the quality of the article. It is assumed that each journal/press has a minimum standard for acceptance, and that the peer review process has determined that your work exceeds that minimum standard. If pressed, things like rejection rates and reputation will be brought in as further indicators of this quality standard.
A journal that rejects a high proportion of the manuscripts submitted is considered to have higher standards. Journal editors take this into consideration when deciding whether to increase the number of issues or the number of articles per issue: even if they believe, based on their knowledge of the overall quality of submissions, that there are enough high quality articles to sustain a larger publication volume, accepting more of them would lower the rejection rate and make the journal look less selective.
New journals fall foul of almost all of these measures of quality, even if peer reviewed. Without a sufficient track record it is very hard to know where the quality bar is set. How can a journal have a reputation in its first few issues? And yet someone has to publish in those first issues if it is ever going to establish a reputation, preferably people who produce high quality research.
Impact/Significance.
Impact (or significance) is a separate issue. In most evaluation processes, we are concerned with the impact on the advancement of knowledge (called “significance” in the UK REF process), usually within the discipline. Sometimes the impact beyond the discipline and even beyond the academy is valued, though most often as additional to impact within the discipline.
That article in an interdisciplinary journal or in the journal of another discipline can be a bonus. As long as the basic quality issue is established this can be read as spreading good work in [your discipline] to the heathens in those other disciplines. Not that you wanted to be a missionary or anything.
Citations are the main indicator of impact. If someone has cited your work, clearly you have had an impact on how they are thinking about whatever it is. The problem with citations is the considerable delay between publication and citation. People have to read your work, think about it, see its relevance to their work, do their own work, and publish it before there is a citation. Each stage of that process takes a long time.
As an example, I looked up my own 1995 book on Google Scholar and checked the dates of the citations.
There were 51 citations ranging from 1997 to 2010. In the first 5 years after publication (up to and including 2000), there were only 16 citations.
Looking at that list, I can see that I have had a certain impact, but I am most struck by the fact that my book appears to be relevant 15 years after it was published, something that would not even register in most evaluation processes.
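If you want to try this tally for your own work, here is a minimal sketch. Google Scholar has no official export, so assume you have copied the citation years out by hand; the year list below is invented for illustration, not my actual citation data.

```python
# Tally citation years copied by hand from Google Scholar.
# These years are made up for illustration only.
from collections import Counter

pub_year = 1995
citation_years = [1997, 1998, 1998, 2000, 2003, 2005, 2009, 2010]

by_year = Counter(citation_years)
# Citations in the first 5 years after publication (up to and including pub_year + 5).
early = sum(n for year, n in by_year.items() if year <= pub_year + 5)

print(f"Total citations: {len(citation_years)}")
print(f"Citations within 5 years of publication: {early}")
for year in sorted(by_year):
    print(year, "#" * by_year[year])
```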
The “impact factor” is a statistical indicator that attempts to address this delay by measuring how often the average article in a particular journal is cited shortly after publication. It thus stands in as a proxy for the citations your article is likely to receive in future. Impact factors are problematic for several reasons, but that’s what they are trying to do. (Your friendly librarian can help you find the impact factors for journals in your discipline if you need to know this.)
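For reference, the standard two-year impact factor reported in the Journal Citation Reports is just a ratio:

```latex
% Two-year journal impact factor for year Y (Journal Citation Reports definition)
\[
\mathrm{IF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in } Y-1 \text{ and } Y-2}
                       {\text{citable items published in } Y-1 \text{ and } Y-2}
\]
```

So a journal whose recent articles average two citations each in their first couple of years has an impact factor of about 2, regardless of how any individual article fares.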
In many disciplines, especially in the humanities, more impressionistic indicators of likely impact are probably in play. This is, at least in part, because books are still very important and citation indexes only look at journals. Impact factors are not even close to valid measures for most humanities disciplines. Your peers will instead use vague ideas like how well respected a journal is or how many people read it. (Although one could look up circulation figures, and even page loads or download figures for electronic copies, it is unlikely anyone actually does this.)
Again, new journals are at a clear disadvantage here. And anything not indexed by the main citation indexes is out of the game, including monographs. In the main citation indexes, the citations of your book are only those that appear in indexed journal articles. Google Scholar does include books and, when it is more complete, might give a better measure of citations for humanities disciplines, because work cited in the monographs of others will be counted.
Wider Impact.
The other issue that arises around impact is whether “impact on the advancement of knowledge” is even what should be evaluated in a particular instance.
A big issue right now is how the heck we evaluate impact beyond the academy. No one really knows what good indicators are. Those being evaluated don’t know what evidence to include; evaluators don’t know how to judge whatever evidence is submitted.
The discussion around open access journals confuses me on this score. It seems that at least some people think that if academic journals were accessible to people without paying for a subscription, then non-academics would read your work. This has been a strong element in the push for open access in medical publishing, for example. I don’t buy it. Academic articles are usually written for an academic audience — the questions, the style, everything. Why would most non-academics read them? Assuming they even have the time.
Real knowledge mobilization will require different forms of dissemination, some of which will not even resemble publishing (e.g. training workshops, informal discussions, online fora). If you are serious about having an impact beyond the ‘advancement of knowledge in your discipline’, we need new indicators.
Some people have been trying to figure out how to do this, but it’s been patchy and really hard work.
In 2005–06, I worked with the Canadian Health Services Research Foundation (CHSRF, as it was then called) on a project to find out what was happening and spread that knowledge around as part of their organizational capacity building work. We only wrote 2 issues of Recognition, but they have some interesting examples. (These no longer seem to be available online. This is a shame.)
Related posts:
Communication vs Validation: why are you publishing?
Scholarly Publishing (A Short Guide), available as eBook and paperback, includes a longer discussion of the relationship between communication and validation in choosing where to publish your scholarly work.
Google Scholar is a serious alternative to Web of Science by Anne-Wil Harzing on the LSE Impact blog
Birkbeck signs San Francisco Declaration on Research Assessment suggests that there is a shift in this culture underway.
Edited 30 March 2017. Related posts update 8 October 2019. Re-edited and added to the Spotlight on Peer Review, September 2022.