How do we measure knowledge?

Published On: December 5, 2019 12:46 PM NPT By: Shyam Sharma


The author teaches writing and rhetoric at the State University of New York in Stony Brook. He can be reached at shyam.sharma@stonybrook.edu

"To foster a meaningful culture of research and scholarship, we must reject crude measurements of scholarly production which are based mainly on the number of citations and are already questioned or discarded elsewhere"


“If you only need good grades and not the learning,” I tell my students, joking, “don’t bother using the library, learning how to use academic databases, finding and reading complex scholarly articles, or representing others’ ideas substantively and carefully in your writing. Just hire a good ghostwriter or find another effective way to cheat me.” Students get the point quickly, and they start doing serious research and writing.

To my dismay, that joke plays out in earnest, and ever more frequently, in articles published by academic journals from South Asia. Recently, I assigned one such atrocious article for analysis and discussion to a Writing Support Group of Nepali scholars. Colleagues in that online workshop series quickly pointed out bizarre levels of misunderstanding and abuse of academic standards in the article: vague generalities instead of a specific issue or objective, pages filled with irrelevant summaries of scholarship, evidently fabricated research findings, and no new or significant contribution to the profession or society. The article, on academic technologies, highlighted at the top a journal “impact factor” above 5.0, an impossibility for a venue that would publish such work.

Tip of the Iceberg
Fraudulent or junk publications like the above are only the tip of the iceberg. In South Asia and across the global south, universities, and often education ministries, are increasingly demanding that professors (and often doctoral students) publish a certain quantity of research in “international” and “high-impact” venues. Because the institutions do not provide new support (additional time, resources, or incentives) to match the demand, pressure valves go off in many ways. Some scholars fall victim to predatory journals, paying to publish and tarnishing their reputations. Others start academic journals that ignore standard conventions of peer review for rigor and integrity. Many game the impact-factor business to gin up their numbers. Institutional leaders who blindly adopt such measures seem unaware of how deeply flawed they are, as I unpack below.

Adopting such metrics will certainly produce a few outstanding figures. But focusing on numeric scores will prompt the majority to publish research that has little or no social impact. Joining international citation games will disincentivize scholars working in locally and nationally significant research areas. It will aggravate discrimination against “soft” disciplines, where citation works less by count or currency and more by engagement and long-term value. It will discourage scholars from writing in languages accessible to the broader local public. In general, when made uncritically, such demands can severely undermine the higher objectives of academic and scientific enterprises, sideline long-term goals, and encourage bad actors and actions.

The high-pressure demand for showing more citations, especially in low-support situations, can also create shallow competition among scholars. Recently, a British scientist wrote about how easily the system can be manipulated. After his more serious scientific research did not gain much attention (that is, citations), he wrote an article that simply defined a key term in thirteen different ways. The fluffy new article was quickly cited by many scientists, and the numbers helped him with tenure and promotion.

Universities cannot simply follow the basic assumption of economics that increasing demand increases supply: if the demand for more scholarship does not also ensure quality, relevance, and integrity, counterfeit products will rise to meet it. Indeed, universities must watch out for another law of economics, that “bad money drives good money out of circulation.” Without effective policy and leadership, poorly done publications will dominate the market.

All that’s “global” is not gold
The “journal impact factor” (JIF) was conceived by Eugene Garfield in 1955 to help librarians identify the most influential journals based on the number of citations they receive. While its value remains debated, the instrument has persisted because of hypercompetition, hyperspecialization, the lack of alternatives, its benefits to the privileged who maintain the norm, its convenience for bureaucrats, and the appeal of “objective” numbers even as they mask hierarchy and systemic inequity. The use of this score is spreading globally, even as it is increasingly questioned in the global north where it originated.

News and academic discussions in Western societies have showcased the malpractices that citation-based scores have prompted: excessive self-citation, inflated coauthorship, citation cartels, coerced citation, junk publications, and so on. In fact, the practice has been called a mania, a craze, and a fool’s tool incapable of doing justice to knowledge production across fields. Scholars have shown that the social value of scholarship cannot be measured by the number of times it is viewed online (which encourages clickbait), downloaded, or even cited.

Still, one may ask, isn’t this framework at least workable? After all, scholars are rational enough to cite what is of high quality and relevance. Unfortunately, what is known as the “tragedy of the commons” comes into play: when everyone pursues what is rationally best for them, social needs and even the collective good can be ignored. For instance, by some estimates, scholars, and especially scientists, around the world waste 20 to 50 percent of their research time writing grant proposals that are never funded. That is millions of hours of valuable time and billions of dollars in actual cost. While competition may benefit the few winners, and many others may improve their work through the effort, the overall system is not productive for society at large. Competition must be one factor, not the framework. Support must match demand.

Our institutions must develop their own frameworks for increasing and improving knowledge production. There are inspiring examples to emulate, as well as perspectives to draw upon. Japan’s Hokkaido University, for instance, uses good health and wellbeing to guide its research. Ghana’s All Nations University College uses the UN Sustainable Development Goals to direct its research toward women’s empowerment. For perspective, after long, complex debates in venues like the US-based Chronicle of Higher Education, scholars have concluded that metrics should support rather than replace expert judgment.

Universities in the global south are, to begin with, joining the broader research-culture party rather late. So it is tragic for them to also start the game with a simple-minded adoption of the impact-factor model. A combination of approaches responsive to local realities is necessary.

Tackling the Challenge
First, universities, governments, professional organizations, and experienced scholars must generally stop looking for easy tricks. To pursue the serious, complex, and multidimensional project of building a meaningful culture of knowledge production and application, they must provide intellectual leadership, advance robust discussions, craft well-directed policies, and, most importantly, develop well-resourced faculty support systems.

Second, universities must stop thoughtlessly demanding “impact factor” and “international” publication from all scholars. Not everyone across the disciplines can or needs to meet such demands. A veteran philosopher who has, say, guided a nation through a conflict shouldn’t have to show us how many times she was quoted in writing, or, worse, be compared to a tech-savvy, attention-seeking young chemist driven by the fancy new bean-counting show.

Third, on a practical level, universities need to deploy faculty development centers (FDCs) to help faculty publish their scholarship. In the global north, these centers (the equivalents of “staff college” for civil servants in Nepal) have traditionally focused on teaching excellence alone. In the global south, FDCs must integrate, from the outset, research and publication support as well. Such infrastructures for supporting scholars can be matched with regulation of research, as well as explicit demands and incentives for advancing scholarship with social value.

Fourth, in Nepal, universities and departments, as well as groups of scholars, have been developing myriad programs and practices for research and publication in recent years. Instead of imposing bean-counting demands from above, or lazily assigning a few credit points for publication, institutions must take responsibility for institutionalizing and supporting this emerging grassroots culture of faculty development initiatives.

Fifth, doctoral degrees, known as research degrees, must be made more rigorous in their curricula, teaching, and professional development opportunities. Because academic hiring and promotion require, or give extra credit for, a PhD, more people are pursuing the degree just to meet the demand or earn the credit. Doctoral programs must instead be used to build the intellectual, institutional, and career foundations of meaningful knowledge production.

When I was growing up, grownups used to say the following when youngsters made silly choices: roti harako thaha chhaina gahunka geda ganchhu (“unaware of losing bread, still counting wheat seeds”). If we want to foster a meaningful culture of research and scholarship, we cannot afford to uncritically adopt crude measurements that are already being questioned or discarded elsewhere. And if we want to prevent our scholarship from mimicking the jokes we tell our students about producing papers just for good marks, we must grapple with the complexities and pitfalls of trying to “measure knowledge,” strive to lay proper foundations, and invest sufficient resources guided by visionary leadership.

The author is Associate Professor and Graduate Program Director in the Program in Writing and Rhetoric at the State University of New York, Stony Brook.
Email: ghanashyam.sharma@stonybrook.edu

