Report and Resolution on Use of Academic Analytics

Date

INTRODUCTION

In 2013, Rutgers University signed a four-year, $492,500 contract with a company called Academic Analytics (hereafter, "AA"). Pursuant to this contract, AA has used its proprietary databases of “scholarly productivity” to develop a numerical Faculty Scholarly Productivity Index (FSPI) for New Brunswick tenure-track faculty members. Academic Analytics has also used the FSPI for each individual faculty member to compute an analogous overall productivity index for every Ph.D. program on the New Brunswick campus. Many faculty members are unaware that AA exists or that AA data are currently being used by the University administration. The University has provided very little information to faculty about AA, how its data are being used, and how its data could be used. Moreover, among faculty who are aware of AA, there is considerable concern about errors and omissions in the AA scholarly productivity data; about difficulties that faculty have had in obtaining information on their personal FSPI computed by AA; about ways in which the AA data might be used in evaluating individual faculty (e.g., for promotion and tenure, hiring, and merit raise decisions); and about ways in which the AA data might be used in evaluating programs and making decisions about resource allocation.

Responding to these concerns, the New Brunswick Faculty Council (NBFC) formed an ad hoc committee charged with studying the actual and potential uses of AA at Rutgers and faculty members' concerns about AA, and with making recommendations concerning the future use of AA at Rutgers. The committee conducted a survey of New Brunswick faculty about AA (see Appendix I for details). The committee, together with the Faculty Council Executive Committee, met with Chancellor Richard Edwards to discuss AA. What follows is the committee's report.

BACKGROUND: WHAT IS ACADEMIC ANALYTICS?1

Academic Analytics (http://www.academicanalytics.com/) was founded in 2005 by Dr. Lawrence Martin of Stony Brook University in response to universities’ purported needs for business intelligence tools. According to its website, AA is a provider of “custom business intelligence data and solutions for research universities in the United States and the United Kingdom.” Its mission is “to provide universities and university systems with objective data that administrators can use to support the strategic decision-making process as well as a method for benchmarking in comparison to other institutions.” Academic Analytics provides institutional administrators with external comparisons and information on institutions’ graduate programs and their faculty members’ scholarly activity. According to AA, they “help universities identify their strengths and areas where improvements can be made.”

The database that AA uses to compare institutions, programs, and disciplines and to calculate a Faculty Scholarly Productivity Index (FSPI) is known as the Academic Analytics Database (AAD).2 The AAD contains data on what AA defines as “the scholarly research accomplishments of a faculty member.” Such data include: (i) the publication of scholarly work such as books and journal articles; (ii) citations of published journal articles; (iii) research funding by a limited number of federal agencies; and (iv) honorific awards. The relative weight given to each of these categories is determined from a National Research Council survey3 and by an advisory committee of AA (information on the composition of this committee is publicly available).4
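To make the mechanics concrete, the sketch below illustrates one way a weighted composite index of this kind could be computed from the four categories named above. This is a minimal, hypothetical illustration only: the names, counts, weights, and z-score normalization are invented for exposition, and AA's actual FSPI methodology is proprietary and not reproduced here.

```python
# Minimal sketch of a weighted composite "productivity index" of the kind
# described above. The per-category weights and the z-score normalization
# are HYPOTHETICAL; AA's actual FSPI methodology is proprietary.

from statistics import mean, stdev

# Hypothetical per-faculty raw counts in the four categories AA describes.
faculty_data = {
    "Smith":  {"articles": 12, "citations": 340, "grants": 2, "awards": 1},
    "Jones":  {"articles": 5,  "citations": 80,  "grants": 0, "awards": 0},
    "Garcia": {"articles": 9,  "citations": 150, "grants": 1, "awards": 2},
}

# Hypothetical discipline-level weights (per the report, such weights would
# come from NRC survey data and AA's advisory committee).
weights = {"articles": 0.35, "citations": 0.30, "grants": 0.25, "awards": 0.10}

def z_scores(values):
    """Standardize raw counts to zero mean and unit variance."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

def composite_index(data, weights):
    """Weighted sum of per-category z-scores for each faculty member."""
    names = list(data)
    index = {name: 0.0 for name in names}
    for category, w in weights.items():
        for name, z in zip(names, z_scores([data[n][category] for n in names])):
            index[name] += w * z
    return index

for name, score in sorted(composite_index(faculty_data, weights).items(),
                          key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.2f}")
```

A program-level index could then be an average of its members' scores. The point of the sketch is only that any such index is highly sensitive to which categories are counted and how they are weighted, which is the core of the faculty concerns described below.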

Academic Analytics collects book publication data from the British Library and Baker & Taylor, Inc. Academic Analytics collects other publication and citation data from Scopus (a bibliographic database, owned by Elsevier, containing abstracts and citations for academic journal articles). Academic Analytics compiles federal grant data for FSPI from thirteen U.S. federal agencies.5 It credits grants from these agencies only to the principal investigator (PI) at the lead institution.

A recent article in The Chronicle of Higher Education6 highlights some of the problems that other schools have had with AA. The cautions and caveats expressed by faculty members and administrators at those schools are instructive. For example, the president of one institution said: “The most important thing is to make sure you have faculty buy-in. If you have them helping in the production of the measurement instrument, you have the best chance of coming up with an instrument that everybody’s happy with.”

Even MIT’s Lydia Snover, the director of institutional research and a member of the AA Advisory Committee, has said that administrators must put the AA data in context with other indicators of productivity, which the company’s data don’t measure.7

FACULTY CONCERNS ABOUT AA AT RUTGERS UNIVERSITY-NEW BRUNSWICK

The New Brunswick Faculty Council has become aware that many faculty members lack knowledge about AA and about how their AA data are being collected and used at RU-NB. In response, the committee developed an online faculty survey to assess faculty members’ awareness and knowledge of AA; their concerns about the AA data and how such data are used or might be used at RU-NB; and their opinions on the usefulness of AA metrics. Even before the survey was developed, some faculty members at RU-NB had expressed disquiet about the lack of accuracy and comprehensiveness of the AA data, the current restrictions on faculty members’ access to their own data and FSPI, and the potential uses of the FSPI by the administration.

Based on the results of the faculty survey, which are described in Appendix I, consultation with the NBFC, and our own examination of publicly available information on AA, this report reflects the numerous concerns of our colleagues about AA. First and foremost is the possible use of AA data in personnel decisions. Dr. Edwards's comments to the NBFC Executive Committee have not allayed those fears. To date there has been no clear, unequivocal public announcement from the RU-NB administration about whether or not it intends to use AA data in tenure, promotion, and merit pay decisions. Indeed, the administration recently refused to sign a memorandum of agreement with the AAUP that stated:

“Rutgers University shall make no use of the data from Academic Analytics for the purpose of any and all considerations related to reappointment, promotion, tenure, faculty compensation (merit), out-of-cycle salary awards and any other assessment or evaluative process of review of faculty at the University.”

Faculty members at RU-NB, with only a few exceptions, are currently unable to access their AA data or even their FSPI. Those who have managed to see their data have reported inaccuracies, while many of those who have not have expressed worry that erroneous data may be used in evaluations of various sorts.8 Not surprisingly, another major concern is that the AA data are inaccurate and incomplete and that faculty members have little or no opportunity to correct errors or omissions in their AA data.

Academic Analytics collects data only on competitive research funding by a few federal agencies (cooperative agreements are not included); funding from States or the private sector is not counted. According to AA, there is no plan to include State or private funding sources; however, AA is considering adding cooperative agreements. At present, only a PI/PD (principal investigator/project director) is credited in AA’s data; no other member of a research team receives credit for the funding received. Academic Analytics records citations dating back no more than five years. Book chapters, a common form of publication in the humanities and social sciences, are not included in the publication and citation indices/metrics created by AA.9

At present, extension publications of Rutgers University Agricultural Extension program faculty members are not included in the AA database. These faculty members therefore receive significantly lower ratings than peers in their own departments who do not have extension appointments. Additionally, SEBS departments at Rutgers, a land-grant university, may fare poorly in comparison with departments at peer institutions that do not have that status and its resulting responsibilities. Similarly, artistic accomplishments by most faculty members at the Mason Gross School of the Arts are not included in the AA database.

Faculty members are concerned about overreliance on the FSPI and similar AA indices without also considering faculty accomplishments in teaching and service. They have also expressed concern about the limited categories of data collected and used by AA to create the FSPI and similar indices. Faculty members worry about the consequences of evaluating faculty and their departments or programs based on data that do not reasonably reflect the nature, quality, or quantity of their scholarly output. We urge Rutgers to use data that more accurately measure what Rutgers faculty actually do across all our disciplines.

A minority of the respondents to the committee’s survey expressed support for the use of metrics in evaluations of faculty productivity. Nonetheless, most respondents worry about the accuracy of the FSPI and caution against relying on it heavily.

We hope that greater transparency, along with more effective communication from the RU-NB administration to the faculty, will address our remaining concerns and create an environment in which RU-NB faculty members have a meaningful say in how they are evaluated, including, if it is demonstrably unavoidable, the use of AA data.

MEETING WITH CHANCELLOR RICHARD EDWARDS, FEBRUARY 12, 2016

The NBFC Executive Council and members of this committee met with Chancellor Richard Edwards on February 12, 2016, to discuss the use of the data provided by AA at RU-NB. The committee prepared a set of questions for Dr. Edwards and gave them to him in advance of the meeting; these questions and Dr. Edwards’s responses are summarized in Appendix II. The content of Appendix II has been edited and approved by Dr. Edwards.

 

RESOLUTION

Whereas, we the members of the New Brunswick Faculty Council (NBFC) are concerned by the lack of information and transparency about AA and its metrics of faculty scholarly “productivity” at Rutgers University; and

Whereas, we are concerned that the AA data on individual faculty members, and thereby the ratings of their programs and units, are erroneous and/or incomplete, or missing entirely, and that faculty members are neither able to see their own AA data easily, nor do they have any mechanism to correct errors; and

Whereas, the AA metrics have the potential to influence, redirect, and possibly narrow the research and scholarship of faculty members, as they compete for higher scores from AA; and

Whereas, members are very concerned about how the AA metrics’ assessment of individual faculty members’ scholarly “productivity” may be used to evaluate faculty for hiring, promotion and tenure, merit raises and similar purposes; as well as for assessing the “productivity” of members’ departments/units or programs and the resulting allocation of resources on that basis; and

Whereas, any measures of faculty accomplishments should be used only if they are part of a broad and well thought-out evaluation system formulated by those in the discipline, and should include valid ways of identifying and measuring all facets of scholarly and creative accomplishments; and

Whereas, evaluating a program primarily on the basis of inaccurate AA scores adversely affects all faculty members in that program; and

Whereas, University Regulation 60.5.15 states that "Informed judgments concerning a faculty member's accomplishments can be made only by qualified colleagues. Such subjective judgment by persons competent to evaluate duties, responsibilities, services, and accomplishments will protect the interest of professors themselves, the department, the academic unit, the University, and the students better than any objective rating that could be devised;" and

Whereas, Rutgers University has a tradition of employing robust standards and procedures for the tenure and promotion of individuals and tasking deans with assessing departments and programs;

Therefore, be it resolved that

  1. The Rutgers University Administration, in consultation with the NBFC, shall seriously reconsider the need for AA metrics to evaluate faculty members, programs, departments, and units at the University. Absent a compelling case made to the NBFC for the necessity of these metrics, the agreement with AA shall not be renewed.

  2. If the Rutgers Administration can justify the continued use of AA services, then the Administration, in consultation with the NBFC, shall

    a. Disclose as openly and transparently as possible complete information about AA and its metrics to the RU-NB faculty;

    b. Develop a protocol that will enable individual faculty members to easily access their own AA data;

    c. Establish a procedure that will allow faculty members to supplement their AA score with data about their scholarly and creative accomplishments that are not measured by AA. Such data shall be used to adjust their AA score or shall accompany their AA score and receive appropriate verifiable consideration. This procedure shall be administered by the Office of Institutional Research and Academic Planning. The Administration shall provide the Office of Institutional Research and Academic Planning with adequate resources for this purpose;

    d. Ensure that AA metrics will not be used directly or indirectly in the evaluation of individual faculty for hiring, tenure and promotion, merit raises, and any other personnel actions;

    e. Ensure that the data from AA are not used in decisions regarding budgeting and resource allocation at the University;

    f. Share benchmarking or comparison information with the faculty members of the academic programs or units that are the subjects of such benchmarking or comparisons; and

    g. Develop a formal mechanism for gathering faculty input to improve data gathering methods and analytic processes employed by AA.

Be it further resolved that the New Brunswick Faculty Council calls on the Rutgers administration to act forcefully and expeditiously to address and resolve all the issues raised in this NBFC report on AA, in order to allay faculty concerns about access to AA data, errors in AA data, and the uses of AA data at Rutgers University.

 

APPENDIX I: FACULTY SURVEY

During the 206th meeting of the New Brunswick Faculty Council on October 23, 2015, an ad hoc committee was formed to determine RU-NB faculty members’ knowledge of and opinions about AA and the use of its services at Rutgers University–New Brunswick. In consultation with the NBFC Executive Committee, the ad hoc committee developed a faculty survey. After several iterations followed by pretesting, the survey was implemented on January 29, 2016. The faculty mailing list (NB_ALLFACULTY@RAMS.RUTGERS.EDU) was used to distribute the survey electronically to RU-NB faculty members, who were asked to respond by February 5, 2016. A reminder was sent out on February 2, 2016. A total of 561 faculty members responded; two of them did not consent to participate in the survey, leaving 559 usable surveys (N=559). Note that some of these 559 respondents did not answer every question, so for some questions the number of usable responses was less than 559.10

Faculty Knowledge about AA

We began the survey with a question about our faculty colleagues’ awareness and knowledge of AA, asking them to rate their level of knowledge (“How much did you know about Academic Analytics?”). There were 506 responses to this question; 238 of them (slightly over 47%) knew either “a little” or “not at all” about AA, while the rest had at least “some” knowledge of AA.

In a follow-up question, we asked our respondents whether their Department Chairs or Unit Deans had informed them about AA. Almost 50% of the respondents (251 out of 505) reported that their Department Chairs or Unit Deans had not informed them about AA, while almost 38% (191 out of 505) reported being informed about AA at the unit level.

Faculty members at the School of Arts and Sciences (SAS) at RU-NB had discussed the use of AA metrics in SAS at a faculty meeting on December 14, 2015, which the Dean of SAS attended. SAS faculty were very vocal in their opposition to the use of AA at Rutgers.11 It should be noted that SAS is the largest school at RU-NB in terms of numbers of both students and faculty. David Hughes, professor of Anthropology and President of Rutgers’ American Association of University Professors and American Federation of Teachers (AAUP-AFT) affiliated faculty union, presented at the SAS meeting. There and elsewhere he has opposed the use of AA metrics at Rutgers University.12 Given the date of the SAS meeting and the circulation of David Hughes’ publicity about AA and its use at Rutgers University prior to the implementation of the Council’s survey, it is surprising that almost half of the RU-NB faculty members who participated in this survey still lacked awareness of AA.

Faculty Interest in having Access to Own AA Data

We asked the respondents whether they would like to see their own AA data. A majority of RU-NB faculty members (377 out of 503, or 75%) expressed at least “some” interest in seeing their own data, and 302 of them (60% of the total) were “a lot” or “very” interested in seeing their own AA data. This result clearly shows the interest of RU-NB faculty in accessing their own data.

Faculty Opinion on use of AA Data to Measure Faculty Productivity

We wanted to know what concerns RU-NB faculty members have about AA and its metrics for faculty productivity, and, among those faculty members who have seen their own AA data, their opinion of the quality of those data. In response to the question of whether AA data suffer from errors or omissions, almost 85% of the respondents (415 out of 490) reported that they had not seen their personal AA productivity data. Among the remaining 75 respondents, an overwhelming majority (69, or 92%) opined that AA data suffered from at least “some” errors. We followed up by asking how concerned they were about errors or omissions in the AA productivity data: almost 65% of these respondents (48 out of 75) expressed “a lot” of concern about errors and omissions.

We asked all respondents about the impact of the quality of the AA data on individual faculty members, on their units/departments/programs, and on the University as a whole. A large majority of respondents (389 out of 502, or slightly over 77%) were concerned “a lot” about such an impact from erroneous AA data. One of the earliest users of AA metrics, the Higher Education Funding Council for England (HEFCE) (http://www.hefce.ac.uk/), in a recent report evaluating the role of AA metrics, commented that data collection and analytical processes should be open and transparent, “so that those being evaluated can test and verify the result.”13

We asked RU-NB faculty members what productivity data Rutgers should collect in order to understand the scholarly productivity of a faculty member at RU-NB. Over 400 faculty members responded to this question. The productivity data they consider important, in descending order of importance as expressed by the respondents, are: first authorship of publications, book chapters written, books written, books edited, co-authorship, PI of non-federal grants, PI of federal grants, and co-PI of federal grants and co-PI of non-federal grants (the latter two tied for eighth place).

Faculty Opinion on the Use of AA data in Promotion and Tenure

One of the main concerns of the Council was how the AA data would be used by University administrators. Such concerns are legitimate given comments on the potential uses of AA data by its founder, Dr. Lawrence Martin, as well as the findings of HEFCE. According to Dr. Martin, AA data can be used to identify the most and least productive faculty members. He has noted that if the least productive faculty members still receive reduced teaching loads, "the cost of that is staggering."14

In its evaluation of the AA metrics on faculty scholarly productivity, HEFCE found that there is considerable skepticism among researchers, universities, representative bodies and learned societies about the broader use of metrics in research assessment and management. Additionally, HEFCE commented that peer review, despite its flaws, continues to command widespread support as the primary basis for evaluating research outputs, proposals and individuals. Moreover, according to HEFCE, carefully selected indicators can complement decision-making, but a ‘variable geometry’ of expert judgement, quantitative indicators and qualitative measures that respect diversity in research will be required. HEFCE also agrees that there is legitimate concern that some indicators can be misused or ‘gamed’: journal impact factors, university rankings and citation counts being three prominent examples.

We asked the RU-NB faculty about their concerns regarding potential uses of AA data by RU administrators (without indicating any specific use). Almost 78% of the respondents (391 out of 503) expressed “a lot” of concern about how their AA data may be used at Rutgers. We followed up by trying to identify their specific concerns. A large majority (almost 84% of the respondents) were “concerned” or “very concerned” about the following uses of AA data, in order of their importance as identified by the respondents: use of AA data to evaluate faculty for promotion and tenure (among all these concerns, respondents were most concerned about this use); use of AA data to evaluate faculty for compensation or merit pay increases; the accuracy of AA data in assessing respondents’ departments or programs; the accuracy of AA data in assessing respondents’ own scholarly productivity; use of AA data to compare individual faculty members with and to those at peer institutions; use of AA data to compare departments, units, and programs across institutions; and use of AA data to compare peer institutions. To cross-check the validity of their responses, in a separate question we asked our respondents where AA data could be useful, offering choices similar to those in the earlier question (e.g., use of AA data to evaluate faculty for promotion and tenure). The majority of respondents expressed the opinion that AA data are either “a little useful” or “not useful” for those purposes.

Given the seemingly widespread use of AA data by academic institutions across the United States and Europe, we wanted to ask RU-NB faculty specifically about the potential uses of AA data at Rutgers. One question asked whether AA productivity metrics on an individual faculty member should be included in the evaluation of his/her merit increases, promotions, and the like. Perhaps not surprisingly, we found that almost 74% of the respondents (365 out of 495) disagreed that their AA productivity data should “be included in evaluation of faculty for merit, promotions, and the like.” In a related question, we also asked whether Rutgers should have a policy of including AA data when evaluating faculty members for tenure, promotion, merit raises, and similar purposes. A majority of the respondents to this question (352 out of 493, or 71.4%) disagreed that such a policy should be instituted at Rutgers University.

Additionally, respondents listed a plethora of other data they think should be included in evaluating their productivity. Such additional items included: symphonies composed, exhibitions, conference papers, keynote and invited talks, artistic production and performance, editorships, student mentoring, patents, etc.

Summary of Faculty Comments in an Open-ended Question about AA

In addition to the structured questions, we used an open-ended question (the last question in the survey) to ask RU-NB faculty members for their opinions of AA. According to the comments provided in response to that open-ended question, Rutgers-NB faculty members feel that:15

  • Overall, the comments received about Academic Analytics are mainly negative; the prevailing recommendation is that it should not be used for decision-making and should be abandoned.
    • Some representative comments from the survey are:

      a. “I have only heard terrible things about AA's metrics: inaccurate, erroneous, flawed, inaccessible. BAD data is not good.”

      b. “...AA data should be used as one additional data point or tool for assessing productivity--not the only one.”

      c. “I support evidence-based assessment. But the information gathered by AA is both inappropriately coarse-grained and inaccurate.”

      d. “I had a chance to see my data on AA one time, and it missed key publications I had produced, values authorship on which I contributed 10% of the work equally to 100% authorship, treats edited books equally to single-authored books, and misses all book chapters I've written and all citations to my work in books. Even Google Scholar, which is free, is superior in measuring the scope of my academic influence.”

      e. “In a perfect world the data collected would be informed by the same conditions across peer institutions. This is unlikely to be the case.”

      f. “Measurements of productivity vary from field to field. No single metric can be used to compare across fields--whether for tenure or any other purpose. If AA is used as one of many metrics and it can be overridden, then perhaps inclusion would not be detrimental...”

      g. “Metrics have many problems and should not replace common sense; scoring publications by impact factors or citations has many pitfalls (e.g. truly novel research, ahead of its time or in an understudied field will have few citations)”

      h. “Research and thought are about quality, not quantity. We cannot quantify productivity in terms of quantity - it would be comparing apples and oranges and highly inaccurate.”

      i. “The central question is the accuracy and comprehensiveness of the data. If it is complete and correct, it could be a useful guide to evaluation but in an advisory,nconsultatife [sic] or informal way: the final authority over scholarly productivity MUST remain faculty judgment and field expertise. And the ultimate measure needs to be quality and not just so-called productivity (and I say this as a faculty member who has high "productivity").”

      j. “Since the data is not open and cannot be verified, it has zero validity and legitimacy [sic]. It would be far better to use a product that is publicy [sic] visible and verifiable, such as Elsevier Pure, than a secret black box like AA. Just a very poor choice of a tool for this purpose.”
  • Some faculty members stated that they have not seen their data and cannot comment in the absence of information.

    • Some representative comments from the survey are:

      a. “Because I have not seen AA data and I am not familiar with how accurate it is it is hard to say if this would be a valid measure for promotion/tenure/raises”

      b. “Answers could change with more knowledge of AA”

  • Some faculty members are concerned that AA data are already being used at Rutgers and that they are unable to make corrections to their data.

    • Some representative comments from the survey are:

      a. “...Faculty have heard that they can make corrections to their personal data if they hear of an error. In other words, we are invited to do the job that AA has been paid to do...”

  • Metrics are not well suited to judging academic work for promotion, tenure, or even departmental review; the data could be faulty and misleading, and such metrics amount to corporate assessment with little connection to academics.

    • Some representative comments from the survey are:

      a. “Colleagues are much more than a sum of AA metrics, which are incomplete in any event”

      b. “The object of the gameshould [sic] be to improve performance, but for promotion, tenure, and remuneration purposes, productivity (however measured)is legitimate.”

  • AA metrics are not suitable for some professions, do not measure teaching and service, and can easily be manipulated.

    • Some representative comments from the survey are:

      a. “I am a professor in arts/arts education, so have concerns that creative research is not included in the list of scholarly productivity.”

      b. “As an artist, this 'collecting' falls short as there are myriad- too numerous to count-ways in which artists do research.”

      c. “For people at land-grant parts of the university, a lot of our productivity isn't even measured, and the journals we publish in not measured. AA is not a complete view of what we do.”

      d. “AA analytics seems to lack any understanding of scholarship in the humanities and social sciences”

      e. “The metric assessment system is not an accurate indicator of faculty productivity and does not take gender or race into account this is important when undertstanding [sic] that women and people of color are burdened with more service which sometimes affects certain types of publishing.”

  • AA data are full of omissions and inaccuracies, at times erroneous, and count the wrong things; for example, the impact of a book may take longer than five years to materialize, and interdisciplinary work is not counted.

    • Some representative comments from the survey are:

      a. “aa analtucs [sic] makes two huge mistakes which makes its data worthless. first, it fails to distinguish the quality of tge [sic] venue, publisher, etc. second, it omits cgapters [sic] in anthologies which is the most common way many distuished [sic] peopke [sic] publish their work.”

      b. “AA data is riddled with errors but also poorly designed in principle. Of greatest concern to me is the way in which it gives no credit to interdisciplinary work, which is so vital today and which Rutgers has heavily promoted for years now. I have published in top journals outside of my presumed discipline, given my home department, and those publications do not count at all because colleagues in parallel home departments at other US institutions do not publish in those venues. I should be rewarded for achieving mastery in more than one field, not punished! Also, the notion that books can only "count" for 5 years is absolutely ludicrous! Outstanding books may take longer than that to fully impact a field, and may continue to have a transformative effect long after.”

  • AA data are costly, and it is questionable whether they are worth such a premium when public databases can provide sufficient information about productivity, especially on publications and citations received.

    • Some representative comments from the survey are:

      a. “What a waste of money!”

      b. “complete waste of money”

      c. “Comparisons are useless if level of investment across institutions is not the same.”

      d. “AA data is less valuable than a google search, but costs money that should be used for research and education. What I'd really like to know is whose palm was greased to get the AA contract: who benefits from this sham?”

      e. “I don't mind identifying myself: I am Stephen Miller, vice-chair of mathematics. I spent a lot of time going over the AA information when it first came out, and found it was frequently nonsense. The data collected is internally inconsistent. My chair (Simon Thomas) wrote a report outlining some of the problems we found. For example, even at the level of software there are problems: the information could not correctly determine who was a PI on a grant. I am in favor of analytics, but this particular company is a fraud and we should not be trusting any information they give us. The amount of time we are spending to simply correct AA errors and defend ourselves against the harm they cause is too large, and would be much better spent on our core missions. Prof. Thomas' report is very explicit and damning.”

  • Some faculty are in favor of using AA data and think AA metrics are better than no metrics. Some also believe that AA data should be used for comparing programs peer-to-peer, not for assessing individual faculty scholarly productivity.

    • Some representative comments from the survey are:

      a. “I am in favor of using AA for determining how our graduate programs rank nationally - but only after the data has been verified.”

      b. “A metric with some error is better than no metrics”

      c. “As aggregate data used to compare the SAME department (that is physics to physics, math to math, history to history, etc.) across different institutions, AA data may be useful to deans AS PART OF a larger set of data. As stand-alone data is it useless, and as data to evaluate any individual it is dangerously flawed and erroneous.”

      d. “Gathering of AA data can be useful in comparing departments across peer institutions and use that information to improve the departmental administration. However, given the narrow focus of measurable data AA collects, data collected should not be used as a means to evaluate the success of a faculty member.”

      e. “I am not against using tools such as AA, but the faculty should be given a chance to look carefully at the algorithms used. If it is just bean counting, then it is not going to be useful, and in fact will be damaging.”

  • Other comments

    • Some representative comments from the survey are:

      a. “I'm surprised that AA is mostly concerned with research/publications and grants/funds. Being a scholar also means teaching and providing service, which unforunately [sic] are not consireded [sic] as important. Also, how will the AA evluate a scholar in relation to critical social components they may face, such as (historical [sic] and contemporary) challenges because of and contributions to diversityand inequity?”

      b. “My conversations with British scholars during a year in London made clear the extremely negative effect this kind of "page-counting" and indivious [sic] comparison with other faculty, departments, and peer institutions had had on both faculty morale and the quality of publications. Faculty have felt great pressure to churn out publications even if the research was not ready or had litte [sic] to add to current knowledge, simply to increase their quantity. Because individual departments within a university, and invididual [sic] universities across the British system, receive more or less funding based on these figures, colleagues have felt shamed and disparaged for not "holding up their quota," leading to terrible morale. I spoke with one well-regarded scholar and head of department who planned to quit university and teach at the high school level because of the competitive and unfriendly atmosphere created by this system and because of its denigration of high-quality work and work demanding a major investment of research time in favor of facile and flimsy work that excelled only in quantity.”

      c. “the AA data may be useful for some purposes -- but it would provide a metric which could and would then be gamed which would not measure research impact, significance”

      d. “The AA information would potentially be useful if it is accurate, and if it is used intelligently, with consideration of its limitations.”

      e. “The data used by AA is not sufficiently fine-grained. For example, it counts all published books, whether they are scholarly monographs, textbooks, edited volumes, edited volumes of one's own past articles. Using only this data would produce a severely distorted picture of the quality and quantity of faculty research. Moreover, over time, if used for important decision, it would have perverse incentives, giving faculty reason to (for example) publish many short pieces in less prestigious journals rather than a single longer piece in the field's most selective journal. Knowing that they would be evaluated using this data is also very likely to motivate the best faculty to leave Rutgers as a second-class institution, and to deter high caliber faculty from joining the faculty at Rutgers. This would be a serious step in a very wrong direction.”

      f. “The entire point of academia and scholarship is to create new knowledge. Often, this requires pushing against traditional boundaries - doing things that are radical departures from the norm, even. Distilling academic "productivity" into a quantitative result derived from a black-box by a third party flies in the face of everything academics and scholars do and stand for. It's reductive, it prizes certain kinds of science over other kinds of science, and is specifically detrimental to scholars who are: new (assistant professors), radical (doing research on controversial topics or using novel methods), and those in the humanities/social sciences - disciplines that are slower by nature than the "hard sciences." I am very, very against Academic Analytics being used at Rutgers.”

 

APPENDIX II: INTERVIEW WITH CHANCELLOR EDWARDS

Question 1: Why has Academic Analytics been introduced at Rutgers? What are the specific purposes of the AA data at the unit level and at the faculty level?

Response by Chancellor Edwards (CE): AA was introduced at Rutgers because it provides tools for comparing schools and graduate/doctoral programs with and to their aspirant peers. RU-NB uses AA data for various purposes, such as goal assessment (e.g., benchmarking, program review, strategic planning) and evaluation of opportunities (e.g., hiring and retention). Dr. Edwards noted that the Legislature and Congress are interested in data on institutions of higher education, and he acknowledged that it is hard to impress upon them that judgments on institutional performance cannot be made on the basis of metrics alone.

After the most recent United States National Research Council report (2006-2010), the data collected for that report and the resulting rankings were found to be faulty or unrepresentative. U.S. News rankings are similarly faulty; indeed, Dr. Edwards noted that he has seen a program ranked highly by U.S. News while being ranked very low in various other rankings. Administrators at RU therefore wanted ways of looking at programs that were based more on metrics than on popularity polls. AA is used by our peers and by other institutions as well (two-thirds of AAU member universities use AA). AA gathers data on graduate programs at Rutgers and other institutions and places programs into percentile bands rather than strict rankings. Carrying out such a task in-house was determined to be cost-prohibitive and too difficult. Therefore, RU looked at several other commercially available ways to gather this information and chose AA, deeming it the most effective choice for providing data on (Carnegie) R1 institutions. AA works closely with big AAU institutions on constructing its indices.
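As a concrete illustration of the banding idea Dr. Edwards described, the sketch below assigns a program's score to a percentile band instead of a strict rank. It is a minimal, hypothetical sketch: the quintile boundaries and the scores are invented for illustration, and AA's actual banding method is not documented in this report.

```python
# Minimal sketch of "percentile bands rather than strict rankings."
# The quintile band boundaries and the scores are HYPOTHETICAL; the report
# does not specify how AA defines its bands.

def percentile_band(score, all_scores, n_bands=5):
    """Place a program's score into a percentile band (1 = top band)."""
    below = sum(1 for s in all_scores if s < score)
    percentile = 100.0 * below / len(all_scores)
    # Band 1 covers the top 100/n_bands percent, band 2 the next, and so on.
    return n_bands - int(percentile // (100 / n_bands))

# Hypothetical composite scores for the same program at ten institutions.
scores = [0.8, 1.4, -0.2, 0.3, 2.1, -1.0, 0.0, 1.1, -0.5, 0.6]
print(percentile_band(1.1, scores))  # prints a band (here 2), not a strict rank
```

The design point is that nearby programs fall into the same band, so small (and possibly spurious) score differences do not produce an artificial ordering.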

At Rutgers, AA data are made available to the Office of Institutional Research (which is also responsible for submitting data to the publicly available sites that AA combs to put together comparative data on various universities and individual faculty members) and to Deans for benchmarking programs and schools against their peers, e.g., public AAUs. The RU Administration (at the central and unit levels) may find AA data useful as one component of the information used to allocate resources to help programs improve. According to Dr. Edwards, AA "could" be used for making decisions about providing additional lines to programs or departments. Dr. Edwards commented that Deans regularly have to make resource allocation decisions among various units and that AA data are one piece of what they consider in making such allocations, but certainly not the only piece. When asked whether a low ranking might lead to a reduction in a department’s faculty line allocation, Dr. Edwards said that such decisions are made by Deans, who may look at the AA data.

At Rutgers, AA data are not currently being used at the level of individual faculty. However, if a department is hiring at a higher rank (i.e., professor or distinguished professor) and must decide between, say, three equally qualified candidates, AA metrics, among other things, might be useful in helping to make a final decision.

Question 2: In the administration's view, are there any shortcomings or limitations in the AA data on scholarly output? If so, what are they?

CE: Dr. Edwards acknowledged that there are limitations in the AA data and that the RU administration is aware of them. He commented that these limitations apply across all institutions for which AA collects data, not just Rutgers: when a graduate program at Rutgers is compared with and to its peer program at another institution, the other program is subject to the same limitations in data collection. For example, AA currently looks at some major types of federal research funding but does not claim to capture all external funding; thus, state contracts or grants or awards from foundations are not included in the AA analysis. According to Dr. Edwards, this is well understood by Deans and others who might want to use AA data, so AA data would not be the only thing a decision maker would look at or consider.

Dr. Edwards is quite familiar with this limitation from his time as Dean of the School of Social Work, but he found AA data to be useful because the same limitations applied to the Schools of Social Work at other universities. Edwards also pointed out that AA has an advisory group, made up of faculty and administrators from multiple universities, which makes recommendations to AA for changes that will make its data reports even more useful, such as including book chapters and distinguishing edited books from authored books. Several of these recommendations are currently in beta testing by AA. Edwards also pointed out that AA has greatly increased the number of journal titles it considers compared with just a few years ago.

Question 3: Will Rutgers enable individual faculty members and individual units to access their own data?

CE: Currently, department chairs, deans, associate deans, and graduate program directors have access to the AA data at Rutgers. Dr. Edwards’s office is currently considering how best to enable individual faculty members to access their own AA data; this is likely to happen in the near future. Dr. Edwards said that he wants to make the AA data available to all faculty members. He also noted that the data in AA are based on publicly available sources, such as federal grant data.

Question 4: Will Rutgers enable individual faculty and units to respond to omissions and inaccuracies in AA data?

CE: The administration does not have the resources to verify AA data for each individual faculty member. The Office of Institutional Research, which works with AA, does not have sufficient staff to act as a liaison between AA and individual RU faculty members in correcting missing faculty scholarly productivity data. Dr. Edwards mentioned that, in his preliminary conversations with AA, the company is "interested" in correcting errors or omissions when they are reported to it. Edwards expects that individual faculty members will be able to contact AA directly about omissions and inaccuracies in their personal AA data.

Question 5: What errors and/or omissions have administrators - Dr. Barchi, yourself, others - discovered in their own AA data?

CE: See Dr. Edwards’ response to Question 2 above. Further, Dr. Edwards is not aware of any errors or omissions or inaccuracies in his AA profile, other than the fact that certain types of external funding are not included.

Question 6: What are the consequences for individual faculty members at all career ranks and for units and schools of having a higher or lower productivity index (as per AA's definition) compared with their peers?

CE: Dr. Edwards commented that deans and department chairs might use the AA productivity index when hiring senior faculty members (professor and distinguished professor), but would not be likely to use it at the assistant and associate professor levels. Further, deans and department chairs might find the AA productivity index useful in making decisions about retention of faculty members. According to Dr. Edwards, no decisions about merit raises are made using AA data, including those for administrators. Additionally, the PRC does not have access to AA data on RU-NB faculty going up for tenure and promotion.

In response to a question from an EC member as to why external reviews of programs are not used to rank programs and allocate resources (instead of AA data), Dr. Edwards commented that most external reviews he has seen provide high ratings and recommend that a department or program receive more money, more resources, etc., but fail to provide meaningful comparisons with and to its peers. AA, on the other hand, provides data to compare programs and units with and to those at peer institutions.

On the consequences of low AA scores, see Dr. Edwards’s comment under Question 1. Additionally, Dr. Edwards mentioned that if the AA productivity index for a faculty member or a unit is low, the relevant individual(s) should figure out how to improve it, for example by publishing more or obtaining more grants. Dr. Edwards suggested that the potential for "gaming" the AA system is very low, but commented that such "gaming" might theoretically be possible, just as it might theoretically be possible to “game” other measures of productivity, such as citation indices.

Question 7: According to a recent survey conducted by the NBFC, many faculty members feel that their activities related to teaching, advising, and service should be included in any productivity metrics that RU is using or considering. Does the University plan on developing indices, analogous to AA, on these other activities? If so, what indices, how would they be constructed, and how would they be used? If not, why not?

CE: Dr. Edwards responded that teaching and service are obviously important parts of the whole promotion packet. An individual’s citation index and h-index can be included in the promotion packet, and teaching and service are also considered during promotion decisions. According to Dr. Edwards, metrics are poor at measuring teaching and service effectiveness; however, both service and teaching are examined by the PRC. Dr. Edwards mentioned that, although RU faculty may not believe it, teaching is reviewed by the PRC, and promotion and tenure may be denied on the basis of it. Similarly, the PRC also looks closely at a faculty member’s record of service (Form 1A, for example, has line items for this) when considering promotion and tenure. So RU-NB already considers teaching and service, in addition to research output, for promotion and tenure.


References

1 Information presented here on AA is from http://www.academicanalytics.com/, which was accessed multiple times during February 2016 in the course of preparing this report.

2 The discussion presented here on the source of publication and research grant data used by AA to populate its AAD is available in the following publication, “Faculty Scholarly Productivity (FSP) 2009 Database Methodology,” available at http://academic.fiu.edu/CAC/Memo_Chairs_Academic_Analytics.pdf

3 The source is the following publication, “Improving Measurement of Productivity in Higher Education” by National Research Council of The National Academies, The National Academies Press, available at http://www.nap.edu/catalog.php?record_id=13417

4 According to http://www.academicanalytics.com/Public/AdvisoryCommittee, the advisory committee of AA consists of Julie Carpenter-Hubin (Ohio State University), Lou McClelland (University of Colorado Boulder), Lydia Snover (MIT), and David Jamieson-Drake (Duke University).

5 The 13 federal agencies included in the 2009 FSPI are Department of Defense–Air Force Office of Scientific Research; Department of Defense–Army Research Office; Department of Defense–Office of Naval Research; Department of Health and Human Services; Environmental Protection Agency; Department of Energy; Federal Aviation Administration; National Aeronautics and Space Administration; National Institutes of Health; National Oceanic and Atmospheric Administration; National Science Foundation; National Institute of Food and Agriculture; and United States Department of Education.

6 Vinod Patel, "Special Reports: Productivity Metrics," Chronicle of Higher Education, March 4, 2016.

7 Patel, ibid.

8 Some faculty have been able to obtain their AA data either because they were given access by the Rutgers administration or because they filed OPRA (http://nj.gov/opra/) requests. It is possible, but by no means certain, that additional faculty may be able to obtain their AA data by filing an OPRA request.

9 According to Chancellor Richard Edwards, Academic Analytics is currently testing the inclusion of book chapters as a category.

10 The survey did not collect information on the ranks (assistant, associate, professor, distinguished professor) of the respondents or their disciplines.

11 https://www.insidehighered.com/quicktakes/2015/12/15/rutgers-professors-oppose-academic-analytics and http://chronicle.com/article/CanDataMeasureFaculty/234595.

12 https://issuu.com/targum_editor/docs/dt_01-20_fcc82bccdadb52.

13 http://www.hefce.ac.uk/pubs/rereports/Year/2015/metrictide/Title,104463,en.html

14 Chronicle of Higher Education; http://chronicle.com/article/Is-Pushing-Faculty/134892/.

15 The comments presented here are verbatim from the survey and may include typographical errors.