This document is in support of the BYU Policy on Rank and Status (2008), which states, “Each discipline has its own scholarly traditions and its own channels for communication among scholars, and therefore each department should establish criteria for defining and evaluating scholarship within its discipline” (3.4.3). This document articulates some of the policies of the McKay School of Education regarding the criteria for evaluating the quality of scholarly activities.

In cases of academic promotion, the burden is on the candidate to provide strong evidence of scholarship meeting a high standard in the three criteria outlined below. Potential sources of evidence and methods for interpreting this evidence are determined by each department as appropriate for its own scholarship traditions.

This McKay School of Education (MSE) document provides general guidelines and scholarship criteria that departments can apply in evaluating the quality of research journals in their own disciplines.

Overall Thoughts on the Evaluation of Scholarship

Evaluating scholarship is a multi-faceted endeavor. Consistent with the BYU Policy on Rank and Status (2008), we endorse the use of both qualitative and quantitative evidence in the evaluation of faculty scholarship. Historically, qualitative evidence of scholarship productivity and value in the McKay School has included the following:

  1. External reviews of a faculty member’s scholarship by experts in the field
  2. Peer reviews of a faculty member’s scholarship by colleagues at the university
  3. Professional judgments of the reputation of scholarly presses and journals that have published a faculty member’s work
  4. Indicators of a faculty member’s scholarly influence and reputation, such as
    • appointments to editorial boards,
    • leadership positions in professional organizations,
    • books published by prestigious scholarly presses, and
    • favorable published reviews of scholarly works, national and international scholarly awards, and receipt of external grants.

Traditionally, the McKay School has considered quantitative evidence of scholarship productivity and quality to be primarily the number of peer-reviewed scholarly books, book chapters, journal articles, monographs, technical reports, and textbooks published by a faculty member since coming to BYU. In recent years, with an increased number of quantitative indicators of journal and article influence (e.g., ISI journal impact factor, immediacy index, Eigenfactor, h-index), McKay School faculty have included such evidence to support the quality and influence of their scholarly work.

The remainder of this document describes how the McKay School intends to use qualitative and quantitative approaches to evaluating scholarly influence and quality according to three main criteria: rigor, impact, and prestige.

Caveats Concerning Use of Quantitative Impact Indices in Isolation

Academicians are increasingly inclined to judge scholarship quality primarily on quantitative indices (e.g., journal acceptance rates) and on the ISI impact factor. The McKay School of Education believes that scholarship evaluation should be holistic, not based on isolated pieces of data, whether qualitative or quantitative. Given the recent emphasis on impact factors, some of their limitations when used as sole criteria should be considered.

Variation by Discipline. Impact factors (IFs) vary widely by discipline. For example, only 27% of the education journals indexed by the two main education databases (ERIC and Wilson Web) are also indexed in ISI (Corby, 2001). A 2010 ISI Journal Citation Report in the category of Education & Educational Research returns only a single journal out of 139 with an IF greater than 3.0, and only nine other journals with an IF of 2.0 or higher (7.2% in total). In contrast, over 40% of the journals returned in the category Chemistry, Analytical and nearly 90% of the journals in Developmental Biology have an impact factor of 2.0 or higher (see Table 1). In education, even ISI impact factors of 1.0 are rare.

Due to this wide disparity among fields, many researchers have argued that IF is not a valid measurement for journals in fields such as nursing (Melby, 2005), communication (Levine, 2010), medicine (Barbui, 2006; van Driel, 2007), developmental psychology (D’Odorico, 2001), social work (Furr, 1995), and education (Corby, 2001). Springer, a leading publisher, has stated on its website, “The citation patterns in these disciplines are entirely different; therefore the numerical values of their impact factors also differ significantly, and comparisons would not yield appropriate results.”
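For context, the two-year ISI impact factor is calculated as the number of citations a journal receives in a given year to the items it published in the previous two years, divided by the number of citable items it published in those two years. The following minimal sketch illustrates the calculation; the journal and its numbers are hypothetical:

```python
def two_year_impact_factor(citations, citable_items):
    """Two-year impact factor for year Y: citations received in year Y
    to items the journal published in years Y-1 and Y-2, divided by the
    number of citable items it published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal: 150 citations received in 2010 to the
# 120 articles it published in 2008-2009.
print(round(two_year_impact_factor(150, 120), 2))  # 1.25
```

Because the denominator counts only "citable items" (a classification made by the indexer), two journals with similar citation patterns can receive different impact factors, which is one reason cross-discipline comparisons are unreliable.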

Table 1.

Impact Factors for different categories in the ISI Journal Citation Index

ISI Category Search                        # of total results    # of journals with an IF ≥ 2.0

Education & Educational Research           139                   10

Chemistry, Analytical

Computer Science, Information Systems

Developmental Biology

Professional Paradigms and Subfields. Educational research bridges very different paradigms and subfields of professional activity. Some are oriented toward practitioner use; others are more focused on research. Thus we value different kinds of professional activities, including (1) original data-based research, (2) instrument and methods development, (3) theoretical inquiry and philosophy, and (4) models, curriculum, and instructional designs. Publication in a respected high-impact practitioner outlet can be as valuable to that readership as a high-quality research article is to academics.

In addition, many faculty conduct scholarship in smaller subdomains, such as science education, open educational resources, or measurement theory. Impact in these areas is felt at a niche level rather than at a broader, more general level.

Outlets Not Peer Reviewed. While peer review is highly valued in research publication, greater impact in education may occur through outlets that are not peer reviewed, such as widely read and utilized handbooks, textbooks, and trade magazines. For example, Educational Technology, in the field of Instructional Psychology and Technology, is a non-peer-reviewed magazine found by Holcomb, Bray, and Dorr (2003) to be one of the most read and utilized publications in the field, above most peer-reviewed outlets. Recognized leaders in this field also publish in Educational Technology because of its high prestige and impact.

Open Distribution. Many education scholars increasingly value open knowledge distribution and thus encourage publication in online open-access journals with potentially high impact due to increased availability. To avoid uncritical acceptance or rejection, these journals should be evaluated with the same criteria as print journals.

Variety of Metrics. Many metrics are available for measuring the quality of individual manuscripts and publications (cf. Harzing, 2010), and their relative merits are subject to debate. Furthermore, the algorithms used to calculate these metrics may change without users understanding how such changes should affect interpretation and use.

Non-Mainstream Types of Research. We recognize that an overreliance on quantitative metrics for evaluating scholarly quality may discourage research that challenges dominant theories and paradigms (e.g., naturalistic materialism) or focuses on specific subcultures (Nkomo, 2009; Wicks, 2004). Such research can be difficult to publish in mainstream Tier 1 journals, but it may have great relevance to BYU’s unique mission.

Understanding these challenges and the need to evaluate scholarship through a multi-faceted, context-dependent lens, we welcome the responsibility to provide criteria for evaluating our scholarly activities.

Criteria for Evaluating Scholarship

This section presents three criteria for evaluating scholarly activity and publication outlets: rigor, impact, and prestige. The guidelines for applying these criteria are inclusive enough to allow interpretation of scholarly quality using both well established and novel approaches. The scholar submits evidence for each of these three criteria that can be independently verified by internal (department and college) and external (within the discipline outside the university) review.

Not all publications in a scholar’s application for promotion and tenure are expected to represent high rigor, high prestige, and high impact. However, applying these criteria to specific journals and publications can provide context for understanding the scholar’s overall portfolio and his or her ability to produce high-quality scholarship.

Rigor. This criterion evaluates how rigorous and selective a publication outlet is, reflecting the quality of the work it publishes. This may be a subjective evaluation not easily reduced to simple acceptance rates: journals in different tiers often report similar acceptance rates because of differences in how rates are calculated, in the number of articles published annually, and in the quality of the articles submitted. Despite these challenges, evaluating rigor emphasizes the quality of work represented in a publication outlet.

Possible indicators of rigor (not all types of evidence are of equal value):

  • Acceptance rate
  • Blind peer review
  • Quality of the editorial board
  • Quality level of articles published

Impact. This criterion refers to how well and how widely individual manuscripts and publication outlets are referenced within a field. For this criterion we look for indexed publication outlets with citation impact ratings or high SORTI Esteem or Q Scores (i.e., rankings of journals within specific disciplines). Because these metrics are affected by the quality of indexing, we stress triangulating impact statistics from multiple sources (e.g., “Publish or Perish,” which calculates citations from Google Scholar).

A few non-peer-reviewed outlets can have high impact in ways not captured by traditional impact metrics. For example, the adoption of a product or curriculum by a prestigious organization or by state education departments can indicate high impact. Additionally, publication in a truly seminal book in a field may be particularly impactful. In these situations, the scholar should document his or her justification for ranking this work as having high impact in these nontraditional ways.

Possible indicators of impact (not all types of evidence are of equal value):

  • ISI impact ratings
  • “Publish or Perish” impact statistics
  • Google Analytics for web articles
  • Circulation numbers and other estimates of readership
  • SORTI Esteem or Q Score
  • The publisher’s reach (e.g., a press that is a main publisher in the field)
  • Documented sales, adoption, and implementation of academic books, articles, curricula, or designed instruction
  • Documented review of the scholarship in mass or academic media

Prestige. This qualitative judgment concerns how highly peers regard publication in a particular outlet. Studies surveying academics about which journals they read or value may indicate prestige (e.g., Orey, Jones, & Branch, 2010). When such studies are not available, a scholar or department can solicit other professionals’ perceptions of which publication outlets are most prestigious.

On occasion, the most prestigious publications are not peer-reviewed journal articles. For example, publishing in the seminal handbook in one’s field can be substantially prestigious. Because book chapters vary in prestige, the scholar should provide justification for considering a non-peer-reviewed publication as prestigious.

Possible indicators of prestige (not all types of evidence are of equal value):

  • Studies of what is read, used, and respected in the field
  • Reputation of the editorial board
  • Publication sponsorship by a well-respected organization
  • Recommendations by external professionals qualified to assess which publication outlets are considered prestigious in the discipline

Criteria Application

The aforementioned criteria can be used to decide to which of three tiers a given journal belongs. No single criterion should be considered in isolation. The evidence for each criterion should be defensible, either as objective data (such as acceptance rates and impact factors) or as subjective opinions widely held in the field (verifiable through external review).

Appendix A provides examples of applying the criteria to different educational journals. This appendix only provides models; it does not represent permanent indicators of the evidence needed for each tier. Decisions about whether specific evidence qualifies a publication as Tier 1, 2, or 3 must be considered in relation to the other journals in the discipline.

Publication Fit as an Additional Perspective

The fit of an article or research agenda to the mission of the department, and to the choice of particular publication outlets, is also important when making value judgments about scholarship. An appropriate fit between research and publication outlet allows a specific targeted community to be impacted by the research. Sometimes this targeted impact can be more important than publishing in a journal with a typically higher general impact whose audience would not find the research. For example, a qualitative methods journal may have lower impact ratings than more general journals yet be widely read by methodologists seeking scholarship on their specific modes of inquiry.

Fit between a scholar’s research agenda and the mission of a particular department is also important, as it helps increase the stature of the department within a discipline, benefitting students, faculty, and the university.

Thus McKay School scholars should conduct research that provides an adequate fit to the mission and goals of their department, and they should also attempt to fit their research to appropriate publication outlets for maximum potential impact on a target audience. In choosing among publication outlets that all fit a research agenda, the criteria of rigor, impact, and prestige should guide the decision of where to publish.

Manuscript Quality versus Journal Quality

Efforts to assess the quality of journals and other publication outlets are ultimately efforts to evaluate the impact and quality of individual pieces of scholarship. However, manuscript quality and journal quality are not necessarily equivalent: an article published in a lesser-quality journal could be highly cited and bring the scholar great prestige. Thus a scholar going up for promotion and tenure might also make the case for the quality of a specific piece of scholarship under the same criteria:

  • Rigor could be judged from an outside review of a sample of the scholar’s work. In addition, professional research awards are indications of approval for the rigor (or quality) of one’s work.
  • Impact for an individual can be measured through the h-index and other citation counts from “Publish or Perish” that measure personal productivity and impact.
  • Prestige can be evaluated by whether the individual receives professional recognition and awards, requests to present keynotes and workshops, or other confirmations of prestige. For example, confirmation that one’s work has been assigned as a course reading at another university or high Google Analytics metrics for a scholarly website or online article would be evidence for the scholar’s professional standing.
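As a point of reference for the citation measures above, the h-index is the largest number h such that the scholar has h publications each cited at least h times. The following minimal sketch computes it from a list of citation counts (the counts shown are hypothetical):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has at
    least h papers, each cited at least h times."""
    h = 0
    # Rank papers from most to least cited; h grows while the paper at
    # rank r still has at least r citations.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical scholar with five papers cited 10, 8, 5, 4, and 3 times.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that the h-index rewards sustained output over a single highly cited piece, which is one reason this document recommends triangulating it with other evidence.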

In these situations, the burden is on the individual to document evidence for quality. These kinds of data are more difficult to obtain, and so in the absence of persuasive evidence for individual quality, an evaluation of journal (or publication outlet) quality can be used, per the criteria and guidelines in this document.


Collaboration and Order of Authorship

Authorship in educational journals is listed in order of contribution, although equal contributors are often listed alphabetically. Candidates for promotion who publish collaborative articles must explain the substance and quality of their contribution to the manuscript. Unlike some other professional fields, education encourages co-authoring publications with students (including students as first authors), as this practice shows evidence of mentoring. However, our college expectation is that a candidate for promotion show a good balance between first-authored publications (indicating ability) and co-authored publications (indicating mentoring and collaboration).

Focused Research Agenda vs. Cross-disciplinary Activity

Many disciplines expect faculty to publish primarily within one unified research agenda. In education we frequently collaborate with subject matter experts (e.g., with a historian on research related to history education), with methods experts (e.g., with a psychometrician on an assessment of student self-efficacy), and with policy experts, among others. Cross-disciplinary collaboration is also emphasized (e.g., teachers are asked to teach literacy in all subjects). Thus our scholarship may span multiple content areas and appear in varied kinds of academic publications.

Because of this encouragement for cross-disciplinary and collaborative work, some departments consider it acceptable, even desirable, for an education scholar to publish along different research agendas and in different fields. However, as the purpose of quality scholarly work is to contribute meaningfully to a field of inquiry, the scholar must demonstrate a meaningful and substantial contribution in his or her area of research and be capable of articulating the connecting threads among the various publications to form a few core research trajectories. Thus publishing within a single research agenda is of less concern than demonstrating a significant and consistent contribution in one’s work.


References

Barbui, C., Cipriani, A., Malvini, L., & Tansella, M. (2006). Validity of the impact factor of journals as a measure of randomized controlled trial quality. Journal of Clinical Psychiatry, 67(1), 37-40.

Corby, K. (2001). Method or madness? Educational research and citation prestige. Portal: Libraries and the Academy, 1(3), 279-288. doi: 10.1353/pla.2001.0040

D’Odorico, L. (2001). The citation impact factor in developmental psychology. Cortex: A Journal Devoted to the Study of the Nervous System and Behavior, 37(4), 578-579.

van Driel, M. L., Maier, M., & De Maeseneer, J. (2007). Measuring the impact of family medicine research: Scientific citations or societal impact? Family Practice, 24(5), 401-402.

Furr, L. A. (1995). The relative influence of social work journals: Impact factors vs. core influence. Journal of Social Work Education, 31(1), 38-45.

Harzing, A. W. (2010). The publish or perish book. Melbourne, Australia: Tarma Software Research Pty Ltd.

Holcomb, T. L., Bray, K. E., & Dorr, D. L. (2003). Publications in educational/instructional technology: Perceived values of ed tech professionals. Educational Technology, 43(5).

Levine, T. R. (2010). Rankings and trends in citation patterns of communication journals. Communication Education, 59(1), 41-51.

Melby, C. S. (Ed.). (2005). Examining the future of professional journals. Nursing & Health Sciences, 7(4), 219-220.

Nkomo, S. M. (2009). The seductive power of academic journal rankings: Challenges of searching for the otherwise. Academy of Management Learning and Education, 8(1), 106-112.

Orey, M., Jones, S. A., & Branch, R. M. (Eds.). (2010). Educational media and technology yearbook: Vol. 35. New York, NY: Springer.

Wicks, D. (2004). The institution of tenure: Freedom or discipline. Management Decision, 42(5), 619-627.

Appendix A: Example Publications

The following are some examples of different ways that evidence could be collected and organized to describe a journal in relation to the three criteria.

  1. Journal 1. High rigor due to a low acceptance rate of 8% and rigorous traditions of double-blind review; high impact (indexed, with an ISI impact rating of 1.1 and a “Publish or Perish” average of 19 citations/paper); and high prestige (the official journal of the main international professional organization in the field, edited by leading researchers).
  2. Journal 2. Medium-high rigor with an acceptance rate of 21-30% and a strong tradition of double-blind review. Very high impact with an ISI impact factor of 2.906 and 51.73 citations/paper (P or P). High prestige as an official APA publication with a long-standing tradition of publication in the field.
  3. Journal 3. Medium-high rigor (acceptance rate of 21-30%, double-blind peer reviewed). High impact with an ISI impact factor of 1.341 and 21.31 citations/paper (P or P). High prestige, as the journal is used by scholars in numerous fields and highly respected researchers regularly publish research articles in it.
  4. Journal 4. Medium-high rigor with an acceptance rate of 21-30% and double-blind review. High impact with an average of 24 citations/paper (P or P), even though there is no ISI impact rating; this average is the highest for this field of research. High prestige, as the journal is internationally known and has been the subject of content analysis studies conducted for the main handbook in this field.
  5. Journal 5. Low rigor (neither peer reviewed nor usually data-based; acceptance rate of 15-20%), but medium-high impact (average of 20 citations/paper in P or P) and medium-high prestige (publishes many well-known researchers and is highly regarded as the premier source of new ideas, but is not known as a research outlet). In addition, one study found this publication to be one of the most read in the field.
  6. Journal 6. Medium rigor (acceptance rate of about 25%; double-blind review), medium impact (registered members of the main professional organization are automatically subscribed, and studies show it is highly read and used in classes, but it has a marginal “Publish or Perish” rating of 12 citations/paper), and low prestige (though highly read and used, it is rarely considered a high-quality research outlet).
  7. Journal 7. Low rigor (high acceptance rate of 66%; double-blind reviewed), medium impact (online and freely accessible, but not indexed in major databases; medium-level “Publish or Perish” average of 10 citations/paper), and low prestige (not well known, although it occasionally publishes well-known scholars).
  8. Handbook 1. While not a blind-reviewed journal, this is the seminal handbook in the field. It is sponsored by the main professional organization and edited by distinguished and leading scholars, including editors of the leading journal in the field. It is widely read in graduate classes and by professionals, and it is considered a very prestigious place to publish, with high “Publish or Perish” impact ratings that rival the top journals in the field (30 citations/chapter). It would probably be evaluated as medium rigor (not peer reviewed, but rigorously reviewed by its editors), high prestige (due to its standing in the field and within the main professional organization), and high impact (due to its high citation and readership counts).
  9. Handbook 2. This handbook is not the seminal book in the field, but it is edited by a known scholar. It has low rigor (not blind reviewed), low impact (not highly cited in P or P), and medium-low prestige (some well-known scholars are published in the book, but the book itself is not widely disseminated).

The following table contains a possible representation of the above comparison.

Table 2.

Evidence for Recommended Criteria on a Sample of Educational Journals






Journal 1 (Tier 1)
  Rigor: 8% acceptance rate; double-blind peer review
  Impact: 35.83 cites/paper; h-index 87 (PorP); IF 1.1 (ISI)
  Prestige: Flagship research journal of professional organization

Journal 2 (Tier 1)
  Rigor: 21-30% acceptance rate; double-blind peer review
  Impact: 51.73 cites/paper; IF 2.906 (ISI)
  Prestige: APA-sponsored journal with field’s namesake

Journal 3 (Tier 1)
  Rigor: 21-30% acceptance rate; double-blind peer review
  Impact: 21.31 cites/paper; h-index 67 (PorP); IF 1.341 (ISI)
  Prestige: Sponsored by flagship organization in several fields

Journal 4 (Tier 1)
  Rigor: 21-30% acceptance rate; double-blind review
  Impact: 24 cites/paper (PorP); highest in this field
  Prestige: International journal analyzed as one of the main journals in the field’s seminal handbook

Journal 5 (Tier 2)
  Rigor: 15-20% acceptance rate; editorially reviewed
  Impact: 19.89 cites/paper; h-index 63 (PorP)
  Prestige: #2 most read and used publication (Holcomb, Bray, & Dorr, 2003)

Journal 6 (Tier 3)
  Rigor: 25% acceptance rate
  Impact: 3.3 cites/paper; h-index 22 (PorP)
  Prestige: Widely read (Holcomb, Bray, & Dorr, 2003), but not often considered a top outlet for research

Journal 7 (Tier 3)
  Rigor: High (66%) acceptance rate
  Impact: 9.71 cites/paper; h-index 9 (PorP)
  Prestige: Relatively new and less prestigious

Handbook 1 (Tier 1)
  Rigor: Open call; reviewed by established leaders in the field
  Impact: 34.55 cites/paper; h-index 33 (PorP)
  Prestige: Used in graduate courses and as a reference for scholars; published by main professional organization

Handbook 2 (Tier 3)
  Rigor: Editorial review by a known scholar
  Impact: Cites/paper unknown
  Prestige: Published by Information Science Reference