Originally published November 3, 2012 at Stillwater Historians.
Last night I was invited to attend a formal dinner with the Board of Visitors of the fine university that I am so proud to represent. The intent was to showcase some of the impressive research being done by graduate students and their faculty advisors across the various colleges and programs. The faculty mentor who accompanied me (together we represented the College of Liberal Arts and Sciences) has served as chair of the department, has a track record with such events, and looks better in a tie than I do. I've never excelled at the pep rally, schmooze-fest kind of affair, so it was good to have a more experienced ally alongside. We were asked to give an impromptu presentation of about five to seven minutes each in which we talked about our research and our experience of the campus environment. And of course I was called upon to lead off, so I didn't have the benefit of modeling my approach on that of the others. If I had, I might have been as immodest as some of them were. The upside, though, was that I didn't have any time to sit and stress about what I was going to say; I just had to stand up and talk about fish. I can usually do that.
During dinner I had gotten to talking with the Dean of the Graduate School, who had invited us to the event, about academic publishing. As an anthropologist he was partial to the idea that one might publish each distinct chapter of a dissertation as a journal article while withholding the piece that ties them all together; once that piece was restored, the work would still be publishable as a book. While that may have worked at one time, we didn't think history editors would really go for it anymore. We also talked about the problematic length of the cycle to publication. While some disciplines have heeded the pressures of online journals and team blogs that move research and ideas more quickly to audiences, works of history still sit in the pipeline for months, even years, before they appear in print. During that time enough relevant scholarship may appear to make the author look terribly out of touch, and their arguments incomplete, even obsolete.
As the after-dinner presentations moved on from the historians to the engineers and their talk of pending patents and forthcoming publications, I got to settle into my new role as audience member. But pieces of the next two presentations, from the representatives of the engineering and business schools, really jumped out at me. The engineering professor lauded his student for a forthcoming paper in a prestigious journal that, while virtually unknown to the lay audience in attendance, apparently had an impact factor of 13, which seemed to impress everyone in the room. It impressed me too, because I hadn't the foggiest clue what that meant. I understand the concept behind the impact factor: it is a way to rate or rank journals within a given field based on how often their recent articles are cited in other journals. I knew I had heard some talk and read some blog posts at various times about the applicability of the impact factor to history, but while I can understand the desire of tenure and promotion review committees to have some way of quantifying scholarly output and prominence, I always thought the whole idea of the impact factor obscured more than it revealed, overvaluing certain outputs while outright disregarding others.
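For what it's worth, the calculation itself is not mysterious, even if the inputs often are. Here is a minimal sketch in Python of the standard two-year formula; the journal sizes and citation counts below are invented for illustration, not drawn from any real report:

```python
# A back-of-the-envelope sketch of the standard two-year calculation:
# citations received this year to a journal's articles from the previous
# two years, divided by the number of citable items it published in
# those two years. All figures below are invented for illustration.

def impact_factor(citations_to_recent: int, citable_items: int) -> float:
    """Two-year impact factor: recent citations per recently published item."""
    return citations_to_recent / citable_items

# A journal that published 200 articles over the past two years and drew
# 440 citations to them this year scores a 2.2. To register a 13, the
# same journal would need roughly 2,600 citations to those 200 articles.
print(impact_factor(440, 200))    # 2.2
print(impact_factor(2600, 200))   # 13.0
```

Seen this way, a 13 simply means a journal whose recent articles are each cited about thirteen times a year, a bar set impossibly high for fields that cite on a slower clock.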
Next came the guy from business and finance. He talked for what seemed like an hour, detailing all manner of research projects. The only one that interested me, however, interested me for all the wrong reasons (at least from his perspective). Responding to the pandemic of corruption, deceit, and incompetence in the reporting of performance for investment vehicles, he had developed an app that could calculate performance accurately and highlight the errors made by the major reporting agencies. I could see many people losing the thread as he described the tool. But as odd fortune would have it, I spent the better part of two years between MA and PhD programs in history working as an editorial consultant, and later an account manager and product specialist, for a software company that developed automated reporting tools for financial clients. Basically we were marrying desktop publishing tools with database tools and building a simple user interface on the front that allowed companies to move performance statistics from their analysts to their marketers without having to continually duplicate work. So, ironically, unlike the engineers and the earth and climate scientists who would come later, I could keep up with this guy pretty well. But it got me thinking about the connections between my dinner conversation about academic publishing, the dropping of the impact factor to oohs and ahhs, and the effort to correct the problem of reporting agencies that have an interest in what they're reporting. Not unlike Wall Street, it's the publishing companies and the journals themselves that compute and announce their impact factors. And because the impact factor has, well, an impact in so many disciplines, they have every incentive to manipulate the calculations and, more insidiously, the editorial content of the journal itself. Do we really think there are no cases of over-citing tangentially relevant material in an effort to drive up the impact factor?
So in the wake of my evening epiphany I did some poking around to see just exactly where historians come down on the impact factor. Are we using it? Are we taking it seriously? I was not surprised to find that Rob Townsend, the maestro of AHA stats and figures, had done some work with it, gauging how some of our key journals rate. If his findings are to be believed, a forthcoming article in our flagship journal would not wow the Board of Visitors: the AHR registers somewhere around a 2.2, and that's at its peak. Environmental History actually appears to be second among history journals, but I would imagine this is largely because of its cross-disciplinary appeal for those in the social and natural sciences, where the impact factor is taken more seriously. Townsend also finds that historians depend more on older published material: citation rates for work published in the previous year or two were low, while rates measured over the previous ten years were significantly higher. I think this trend speaks to the nature of our work, but to some extent it must also be seen as a function of those interminable cycles to publication.
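Townsend's point about citation windows is easy to make concrete. A hedged, invented example: suppose a history journal's typical article draws few citations in its first two years but accumulates them steadily over a decade. The two-year window the impact factor uses and a ten-year window then tell very different stories about the same journal:

```python
# Invented per-article citation rates for years 1 through 10 after
# publication -- a slow-burn pattern of the kind Townsend describes.
yearly_citations_per_article = [0.3, 0.5, 0.8, 1.0, 1.0, 0.9, 0.8, 0.7, 0.6, 0.5]

two_year_window = sum(yearly_citations_per_article[:2])  # what the impact factor sees
ten_year_window = sum(yearly_citations_per_article)      # what a longer view sees

print(round(two_year_window, 2))  # 0.8 -- looks negligible
print(round(ten_year_window, 2))  # 7.1 -- a rather different picture
```

The numbers are made up, but the shape of the problem is not: a measure with a two-year memory will systematically undervalue a discipline that cites on a ten-year clock.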
History, I become ever more convinced, is a curious place apart from the rest of academia. We don't do things quite the way others do. But we have a jobs crisis and something of an identity crisis to boot, whereby we are increasingly encouraged to eye the disciplinary lines for potential crossing points. We are often seeking refuge in "studies" fields heavily populated by those more inclined to embrace measures like the impact factor. So shouldn't there be a conversation (or maybe a bigger, more public one) about how we approach measures like this? (See this Chronicle forum, which features a number of good contributions.) Do we engage our cliometricians in a campaign to debunk the impact factor and work towards its dismissal as a measure of scholarly achievement and importance? Do we look to appropriate it with an eye towards making it a truer statistical representation of reality, such that it quantifies the less conventional outputs blossoming from corners like public and digital history and includes overlooked and increasingly influential media like e-books and podcasts, as well as unquantified traditional forms like chapters in edited volumes? Do we develop an impact factor of our own that embraces the unique culture of our discipline, or do we only further isolate ourselves by doing so? Perhaps we might at least begin by wresting control of these measures from the very organizations that are being measured. But then who would control and compute them? Were we to put the impact factor in the hands of our professional organizations, wouldn't that essentially serve the same ends, since most have their own affiliated journals?
Despite the rabid criticism they receive each year, and the litany of stories detailing how misleading and corrupt they are, the rankings of our colleges, universities, and graduate programs play to huge audiences when they come out. No amount of criticism seems likely to make them go away. And in the next four days we will continue to be force-fed the polling and demographic statistics upon which our election will be decided. No amount of ambiguity or outright falsehood seems likely to threaten the cultural hegemony of the almighty statistic. The impact factor isn't doing historians any favors from what I can see. And it doesn't appear that we are much concerned about it. But in the rooms of power, where disciplinary eccentricities are not discussed but relative importance and return on investment are, the impact factor provides a comparative power that raises eyebrows, even though the details of its calculation are obscured from view. On the occasion of dinner with the Board of Visitors, I was left to wonder whether anyone but the historians in the room could see the intriguing connections between the engineers' claims to credibility and the outrage of the business and finance scholar over corruption in performance reporting. It may be arrogant to assume that others can't see the connection, but are we not, as historians, uniquely suited to at least tell the tale?