In this post, economist Brian Lucey exposes the fallacies of the latest addition to the academic micro-measurement industry: attempts to measure research supervisory quality. It is also well worth reading as a recap of the limitations of citation metrics generally, and in particular of the drive, currently also being pursued at DCU, to exclude papers and articles not found in selected commercial databases. Lucey describes this practice in unequivocal terms:
‘This is stark raving lunacy, but it shows how dangerous a simple metric can be in the hands of the ignorant.’
Research metrics are fraught with danger. Usually they are dangerous when they are abused. We can measure the citation history of a paper, but that tells us little beyond its citation history. We can measure raw output, but that tells us simply how busy someone is. We can measure lots of things, but they are all limited in some way. These limitations do not prevent university administrations from seizing on metrics and using them appallingly. I was recently informed of an Irish academic unit where papers published in journals not indexed in the ISI Web of Science may not be counted towards promotion or any other college activity. They are un-papers. This is stark raving lunacy, but it shows how dangerous a simple metric can be in the hands of the ignorant.