In the past two decades, much has been made in academic circles about global rankings of educational institutions. Bodies such as Times Higher Education and Webometrics regularly rank universities based on a set of criteria. These include internationalisation of faculty and students, cited research publications and awards won by scholars.
This ranking phenomenon has increased the pressure on academics and researchers in Africa to present their research output in publishing outlets that are perceived as highly rated.
Career progression – for instance, access to grants, appointments and promotions – is now tied to individual ranking. Student enrolment, and funding to institutions from governments and other bodies, are likewise influenced by institutional ranking.
Since the Western world usually leads in setting the criteria, academic prestige comes from conforming to Western standards in the execution and reportage of research projects. But some African researchers are now asking questions about the fairness, transparency and reliability of these processes of evaluation and scholarly rankings. They are also concerned about the effect of Western expectations on African societies and their needs.
What matters most in scholarly evaluation is itself a matter of enquiry. Hence the need to acknowledge and accommodate the inherent limitations of funding, access, collaboration, standardisation and other constraints faced by developing countries.
The desire of scholars and institutions in Africa to fit into the Western-imposed model despite the deficit of local research support infrastructure may be counterproductive in the quest to achieve sustainable development in Africa.
I belong to a group of African researchers in Nigeria who are concerned about this situation. We reviewed the status quo and conducted a survey to get the perspectives of researchers and education administrators from developing countries.
The survey results indicate that the majority of African academics are concerned about the status quo. They would support a shift in publishing practices and the assessment of researchers. Such a shift should be supported by institutional administrators and policy makers.
Western indexing houses track how often research is cited and publish the metrics of most publishing outlets. For this reason, many African researchers feel they should do research that would be acceptable for publication in such outlets.
This can have negative consequences. For example, there’s the issue of access and copyright. A study in Africa might be of national importance. But its publication may not readily be accessible to the researcher’s contemporaries or government since the copyright might rest with a commercial Western publishing outlet.
This impairs the development of rigorous science and limits the exploration and expansion of indigenous knowledge for regional advancement.
There are other consequences to focusing on meeting Western requirements for academic research. It undermines Africa's potential to use the continent's resources to tackle its own challenges. It also encourages "brain drain" – the movement of experts from Africa to the developed world.
Those who make the rules control the market. This is also true in publishing and academia. The bodies that oversee acceptable publication outlets, universal patents, registration of internet domain names and hosting servers are all located in the West. It comes as little surprise that this influences access and rankings to the advantage of Western systems and institutions.
Furthermore, westernisation has largely been conflated with internationalisation or misconstrued as civilisation. The negative impact of this on Africa is well documented.
What ought to be done
Our survey offers suggestions for governments and universities.
First, African governments should monitor and limit schemes that promote intercontinental collaboration and publications at the expense of intra-African and national publications.
Secondly, grant-giving foreign governments and agencies ought not to dictate what and how to research. Each nation must set its developmental priorities and align scientific research with them.
Thirdly, universities, grant-awarding bodies and educational ranking agencies need to revise their research evaluation methods. We came up with some new, relatively simple, but broadly useful metrics to assess research. For example:
Total citation impact: a measure of how many times a research paper has been cited per year of existence. Rather than just a number of citations as presently used, our model states the citation rate over time. Stating that an article is cited three times per year on average is more informative than noting that it has been cited six times since its publication.
Weighted author impact: a way of rating researchers, virtually independent of their respective disciplines. It evaluates the article’s impact rather than comparing the journal’s impact with other journals in its discipline.
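To make the first metric concrete, here is a minimal sketch of the total citation impact calculation as described above: citations divided by years since publication. The function name and the choice of 365.25 days per year are my own assumptions for illustration; the article does not specify an exact formula.

```python
from datetime import date
from typing import Optional

def total_citation_impact(citations: int, published: date,
                          today: Optional[date] = None) -> float:
    """Citations per year since publication (illustrative formula only)."""
    today = today or date.today()
    # Convert the elapsed days to years; floor at one day to avoid
    # dividing by zero for papers published today.
    years = max((today - published).days, 1) / 365.25
    return citations / years

# The article's example: 6 citations over two years averages ~3 per year,
# which is more informative than the raw count of 6.
rate = total_citation_impact(6, date(2023, 6, 1), today=date(2025, 6, 1))
print(round(rate, 1))  # ~3.0 citations per year
```

A time-normalised rate like this lets a recent, fast-cited paper compare fairly against an older one whose raw citation count has simply had longer to accumulate.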
We have also called for the establishment of an African indexing house. This would track publications and citation rates of scholarly works produced in Africa. The resultant confidence, fair play and opportunities for African and other researchers could stimulate greater productivity and national development.