We live in a culture where performance review and assessment are ubiquitous. In some occupations, the parameters for such review are relatively easy to derive. Salespeople can be assessed on how much they sell, recruitment consultants on how many people they recruit, and manufacturers on how many products they make. These ideas have been naïvely imported into academia, where individual performance, especially in research, is assessed by means of publication volume and metrics.
It is understandable why this should be the case. Using numbers is easy, and volume and metrics provide numbers by which performance can purportedly be measured and compared. The problem with this approach to measuring performance in academia is that the numbers, while not entirely or always meaningless, can become meaningless if the wrong value is attributed to them. Moreover, unlike figures for sales, recruitment or production, these numbers can be gamed, and it is glaringly obvious in some sectors of academia and in some European countries that this happens.
It is hard to be over-critical of people who adopt the 'publish or perish' mentality and game the volume-and-metrics system of performance review in academia. After all, they neither established the system nor set the parameters by which their performance is evaluated. If they are seeking continued employment or promotion, they can only meet the targets that are set for them. However, this does not make it right.