The quality and impact/significance of your research are usually evaluated based on where you publish. The advent of new outlets for your scholarly work has raised some interesting questions about how this is done.
A blog exchange about Melville scholarship (read the comments, and also see this discussion of that post) highlights the particular issues surrounding blogs as an outlet for scholarship. Note the reference in the comments to “responsible bloggers”, which implies that blogging per se is considered an irresponsible medium. Note, too, the recognition that much academic scholarship is done in such a way that it is not surprising academics don’t make these kinds of connections and contributions. “The academic failure to think.” Indeed.
It strikes me that there is also an assumption that bloggers (responsible or otherwise) are not academics, and that academics are not bloggers. This is not true, though I suspect more academics would blog if they were clearer about how it would be evaluated by those who make decisions about the value of their work.
“New” forms of publishing.
It would be a mistake to think this is just about blogs. Or even to think that it is about whether or not a publication is peer-reviewed. There are some very interesting discussions taking place about the value of peer-review itself and whether blogging could be treated as a kind of post-publication peer review.
In many disciplines there have been debates about the value of peer-reviewed journals that only publish online, with no print edition. And the rapid increase in the number of new peer-reviewed journals (with or without print editions) frequently raises questions when publications are being evaluated.
It has been said that academia is conservative. When it comes to evaluating scholarship, there does seem to be a conservative bias towards publishing in venues that have been around for a long time and are well known to your peers. Anything even vaguely new seems to cause all kinds of difficulties.
What are we trying to measure?
There are questions here of both:
- what is being evaluated?
- what indicators are used in that evaluation?
I’m referring here to the primary contexts in which your work will be evaluated: hiring, promotion, grant applications, and the like.
Usually two things are being measured: quality and impact/significance. Both of these are abstract, so they need to be operationalized so that concrete evidence can be used as a kind of proxy for the thing itself. In the social sciences, we call this an “indicator”. (Apologies if you know all this stuff, but I’m assuming the humanities folks didn’t necessarily get this language rammed down their throats during their training.)
In many academic evaluation processes, the relationship between the indicators and the abstract concepts they are meant to measure has become obscured. It is not surprising that, after many years of using particular indicators, the focus of discussion has shifted to the indicators themselves, with little thought given to what they indicate.
Are we using the best indicators?
Whenever we operationalize an abstract concept and develop indicators, we have to ask ourselves if those indicators are valid and if they are reliable.
A valid indicator is one that measures the thing you want to measure and not something else. To use an unrelated example, if you want to know how much water is in a glass, the height of the water is not a valid indicator because the circumference of the glass will affect the relationship between the volume of water and the height. You should actually measure the volume, perhaps by pouring it into a calibrated measuring cup.
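To make that concrete, here is a small worked illustration (mine, not part of the original example, and assuming a cylindrical glass):

\[ V = \pi r^2 h \]

where \(V\) is the volume of water, \(r\) is the radius of the glass, and \(h\) is the height of the water. The same height \(h\) corresponds to very different volumes \(V\) depending on \(r\), so height is a valid indicator of volume only when every glass has the same radius. An indicator that works in one context can quietly stop working when the context changes.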
A reliable indicator is one which will give you the same measurement each time. Using a calibrated ruler to measure length is more reliable than pacing off the room, for example.
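To put reliability a little more formally (again my illustration, borrowing the standard model from classical measurement theory):

\[ x_i = T + e_i \]

each measurement \(x_i\) is the true value \(T\) plus a random error \(e_i\). An indicator is reliable to the extent that the errors are small and consistent, so that repeated measurements cluster tightly around the same value. Pacing off a room adds a large, variable error with every stride; a calibrated ruler keeps the error small, which is why it gives you nearly the same answer each time.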
The questions we need to ask ourselves about how we evaluate the quality and impact of scholarship are:
- What indicators are we using for each concept?
- Are we clear about which concept we are measuring?
- Are these indicators valid?
  - Given the changes in the publishing environment, are these indicators capturing all instances of quality and/or impact?
  - Have the changes in the publishing environment affected the validity of particular indicators?
  - Are there other possible indicators that could be used?
- Are these indicators reliable?
  - Does a particular indicator always denote the same level of quality or impact?
  - If not, can it be modified in some way (perhaps paired with another indicator) to increase its reliability?
  - Are there other possible indicators that would be more reliable?
Let’s talk about this.
I’d love to hear what you think. Please contribute in the comments. Try to be respectful of other commenters, and feel free to engage with them as well as with what I’ve said above.
Is there anything missing on my list?
What are your concerns about how your work is being evaluated?
What makes a blogger “responsible”? How does that differ for scholarly blogs?
I’m assuming we all agree that quality and impact are what we should be evaluating. Is that true? Or does the confusion reach even this abstract level?
What other questions does this raise for you?
I’m going to write a few more posts using this framework, looking in more detail at both the existing processes and some of the changes that are challenging the validity and reliability of that evaluation process.
The information in this and related posts has been incorporated into Scholarly Publishing (A Short Guide), available in eBook and paperback.
Related posts:
Communication vs Validation: why are you publishing?
What it means to make a contribution to knowledge
Peer reviewed journal articles and monographs in the academic evaluation process
Edited 30 March 2017. Information about Scholarly Publishing (A Short Guide) added 8 October 2019. Edited and added to the Spotlight on Peer Review, September 2022.
David Phipps says
Thanks for keeping this conversation going. There have been a few posts on this recently.
- RT @NatNetNews: “E-mail like it’s 1999!” Are academic researchers late adopters when it comes to social media? Have your say: http://bit.ly/dI0fEY #NatNet
- our blog http://researchimpact.wordpress.com/2011/01/12/knowledge-dissemination-blogging-vs-peer-review/ and the great comments posted there
- a SSHRC Digital Economy synthesis report I had the pleasure of helping out on, “Digital Technology Innovation in Scholarly Communication and University Engagement”, found at http://tkbr.ccsp.sfu.ca/files/2011/02/Lorimer-DigInnovationScholCmn.pdf
Scholars face an entire industry (publishing, granting, peer review, T&P…) that is cemented in a foundation of culture that never anticipated, and has not kept up with, emerging digital literacies. Academics are supposed to be leading society in new and deeper thinking; this is why we provide them with a protected space to undertake their scholarship. Faculty have the power to redefine their culture… it’s called collegial self-governance. Scholars need leadership, and the faculty unions that govern T&P need to work with leaders, not protect the laggards. It’s not a question of Open Access, since there are Open Access peer-reviewed journals available. It is a question of alternatives to peer review.
Bon says
One of the aspects of scholarly blogs that I find incredibly valuable is that they are participatory: that they open the conversation to input and therefore open knowledge to co-shaping and co-construction. I see this as ongoing peer review, but it is sometimes perceived as a negative indicator, a sign of lesser validity.
The traditional system of peer review tends to reify the idea of scholar as knower: this piece of knowledge has been completed, reviewed, rubber-stamped, and is now available for consumption. The idea of putting work out on a blog, often with the overt intent of getting feedback and input to improve a concept-in-progress, threatens the authority position inherent in the notion of validity residing only at the end of a prescribed process. Yet it also represents a pedagogy of shared learning and of inquiry: perhaps even a means by which the academy could close the gap between what it often claims to want to do pedagogically and actually enacts in practice.
Jo VanEvery says
Interesting. One of the most empowering things I read when I was a doctoral student was something Dorothy Smith said in The Everyday World as Problematic. She was talking about the realization that writing for academic conferences or academic publication was not about a finished piece but about participating in a conversation: getting your work out where people you would not otherwise meet could read it and respond.
This is how I taught my undergraduate students to read academic journal articles — as contributions to a debate rather than as final statements of truth. And I still believe that this is fundamentally what is going on.
I think Gary’s point (below) about the speed of the process is relevant here. It takes years for that debate to happen in traditional publishing, and each contribution has to be at a particular level. In a blog, that debate happens more quickly, though each contribution is probably smaller and we see much smaller increments of the process than are ever visible in the traditional process. I’m writing a third post for this series specifically about blogs, so I hope we can continue this conversation.
Bon says
To be clear, I agree with Dorothy Smith in that the ongoing conversation is what is intended to be going on in formal academic publishing…and for a very long period of time that was the primary way in which that collaboration and cross-pollination occurred. But. The process, particularly in certain disciplines, became an end in itself, I think, so that the notion of validity by process became more important than the idea of contribution TO the process.
Thus, when social media makes it possible to speed this process up enormously it also destabilizes the validity structure, even IF the collaborative contribution to conversation is still what’s going on. Hence, in some disciplines, citation of blog posts is still seriously iffy practice: not because blog posts don’t represent contribution to the conversation, and open contribution, often passed through multiple iterations and revisions via comments, but because they have no place in the validity structure to which parts of the academy are pretty attached.
Gary Myers says
The older peer-review process is slowly evolving to adapt to the growing importance of online peer-reviewed journals. However, peer review is still slow and has difficulty keeping up with the speed and immediacy of digital technology.
Researchers recognize that the current peer-review process is broken as “new forms of publishing” (including blogging) have emerged. http://www.genomeweb.com/peer-review-broken
The slow pace of peer review can’t keep up with the speed of social media publication. http://www.nanowerk.com/spotlight/spotid=6016.php
As you point out, quality and impact are still the most important measures of the calibre and value of scholarship and research. Yet until peer review becomes a faster evaluation process, it remains a limiting factor in the new knowledge economy, where scholarship and research can quickly become outdated in the knowledge mobilization process.
The question of how we evaluate scholarship remains an important one, but must now include how quickly we evaluate to keep up with technology.
Jo VanEvery says
Yes. Bon has articulated something important about validation and communication/collaboration. I think that goes along with the fact that the validation process has also obscured what exactly we are validating.
If there is one thing that really frustrates me, it is when people lose sight of the fact that they are communicating when they publish, and when their writing is further stifled by their fears about validation and evaluation.
Rohan Maitzen says
This is a really important conversation, and one that needs to be happening at various levels of university administration: VP Academics, take note!
I wanted to add a little to the quality/impact issue, from my personal experience as an academic who blogs. Nobody has ever asked me how widely read or cited my conventional academic publications are: I included no information about this in my applications for tenure or promotion, for instance, and the issue has never come up on any professional development or hiring committee I’ve served on. The concern is quality (or ‘validity’), and passing peer review is basically assumed to take care of that, though we also always read through people’s publications for ourselves. I have often been told, however, that if I want to make any kind of case that my blogging should ‘count’ as a professional contribution, I should provide evidence about how many readers I get, as if the shift in form makes quality irrelevant, or makes it something that can’t be assessed by peer evaluation (by simply reading some of the posts, for instance). Now, bloggers know that the easiest way to get ‘hits’ is to chase controversies and say startling things: getting a lot of readers is no guarantee of quality, though it may guarantee impact of a certain kind. To me, this double standard is just one more sign of academics not really thinking about the kinds of issues you raise here: why we write and publish, what the measures of success are, and how these things are shifting as new methods open up to us.
I would absolutely second Bon’s comment about the participatory nature of academic blogging. I’m going through an example of exactly that ‘pedagogy of shared learning’ on my own blog right now, as it happens, and finding it very intellectually stimulating as well as encouraging. I can’t remember the last time I felt intellectually stimulated and encouraged at a traditional conference or colloquium! The theory may be that these are for ongoing development of ideas, but the pressure to perform and professionalize is so great now that most papers are highly polished and closed off to new ideas.
Finally (sorry to go on so long!), if anyone doesn’t know it, Kathleen Fitzpatrick’s book Planned Obsolescence includes a brilliantly thought-provoking analysis of peer review.