Top-down classification and its detractors

January 28, 2009

The ‘Hanging together’ blog reports on an unpopular move from ERIH:

…the proposed European Reference Index for the Humanities, funded by the European Science Foundation, which had announced it would grade journals into categories A (‘high-ranking international publications’), B (‘standard international publications’) and C (‘publications of local/regional significance’). Rather as has happened in Australia whose league table of journals I mentioned in a previous post, there has been opposition to this idea – chiefly from academic editors of journals. So many of them have now threatened to boycott the index that the steering committee has been forced to drop the idea of the classification.

I agree absolutely with the dissenters, though perhaps for different reasons than theirs. I have two concerns: firstly, that three levels are too blunt an instrument, and secondly, that information workers/librarians aren’t the right people for the job.

It’s not (or at least, it shouldn’t be) the job of any group of information workers to act as arbiters of knowledge value – to tell users what’s valuable. It’s up to information professionals to build the means by which we users can tell them. To tell them, and the rest of… well, us, but that’s getting convoluted.

Herein lies a great paradox of being an information worker: the information worker is never an expert in the information itself. The users are. The information worker manages the system in which the information lives, but the information itself… that’s a different matter. Why should a librarian be an expert in the actions of neurotransmitters? They shouldn’t. Why should a neurologist be an expert in taxonomy management? They shouldn’t.

So why should an information worker be in charge of saying what pre-defined group a journal they haven’t actually read falls into? They shouldn’t. 

An information management system works best when the two sides – information workers and information users – are in collaboration. And what a beautiful age this is, in which that has finally become possible.

To demonstrate: instead of three groups, you could assess broader, deeper, and more responsive user-generated information (the status of journals does change, after all). Journal authors and journal readers are usually the same people. And no, I’m not talking about out-of-five ratings or ‘people who read this journal also read’. How about:

  1. How often is the journal cited in other journals?
  2. What rank of universities do the authors come from?
  3. How often have articles from this journal been downloaded from the database?

…and so on, and so on. Data like that, generated by users and collected/interpreted by information workers, especially when tracked over time, could create a really useful guide to the quality and utility of journals. Dropping a journal into one of three grades just can’t compare.
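By way of a rough sketch, here’s one way signals like those might be folded into a single, continuously recomputable score. Everything here is hypothetical: the JournalSignals structure, the field names, and the weights are illustrative assumptions of mine, not anyone’s actual proposed method.

    from dataclasses import dataclass

    @dataclass
    class JournalSignals:
        """User-generated signals for one journal in one time period.
        Field names and scales are illustrative assumptions."""
        name: str
        citations: int        # times cited in other journals
        author_rank: float    # mean institutional-rank score of authors, 0..1
        downloads: int        # article downloads from the database

    def normalise(values):
        """Scale raw counts to 0..1 so different signals are comparable."""
        top = max(values) or 1
        return [v / top for v in values]

    def score_journals(signals, weights=(0.5, 0.2, 0.3)):
        """Combine normalised signals into one composite score per journal.

        The weights are an assumption, not a standard; in practice they
        would be tuned, and the whole calculation re-run periodically as
        the underlying user data changes.
        """
        cit = normalise([s.citations for s in signals])
        dl = normalise([s.downloads for s in signals])
        w_cit, w_rank, w_dl = weights
        return sorted(
            ((s.name, w_cit * c + w_rank * s.author_rank + w_dl * d)
             for s, c, d in zip(signals, cit, dl)),
            key=lambda pair: pair[1],
            reverse=True,
        )

    if __name__ == "__main__":
        journals = [
            JournalSignals("Journal A", citations=1200, author_rank=0.9, downloads=40000),
            JournalSignals("Journal B", citations=300, author_rank=0.6, downloads=15000),
            JournalSignals("Journal C", citations=50, author_rank=0.4, downloads=9000),
        ]
        for name, score in score_journals(journals):
            print(f"{name}: {score:.2f}")

The point isn’t this particular formula. It’s that a continuous score built from user behaviour can be as fine-grained and as current as the data allows, which a fixed A/B/C grade never can.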
