New Playbook: Incremental Credentialing in Graduate Education

Reflections on Conducting Research in a Changing Credentialing Ecosystem

Kirk Knestis, Founding Principal of Evaluand LLC and Research Manager for Credential As You Go

Holly Zanville, Research Professor and Co-Director of Program on Skills, Credentials & Workforce Policy at George Washington University and Co-lead of Credential As You Go

—————————

Education researchers are raising the yellow flag, warning that traditional research methods may not work well in the dynamic credentialing ecosystem. The yellow flag is on the field at the national Credential As You Go (CAYG) initiative too.

CAYG’s mission is to inform and facilitate the development of a nationally adopted incremental credentialing ecosystem that improves education and employment outcomes for all learners. The vision is clear: formally recognize all learners for what they know and can do as they acquire learning from multiple sources. This means embracing and aligning the growing array of credentials, including those shorter-term than degrees and certificates. It means transforming the current U.S. degree-centric system, in which learning is counted only when a learner completes an associate, bachelor’s, master’s, or doctoral degree, into a system that recognizes academic success in smaller units of learning. Think credit and noncredit skills-to-jobs pathways, microcredentials, industry-recognized certifications, apprenticeships, and other non-degree credentials.

Evaluating an innovation on the scale CAYG envisions, the transformation of an entire postsecondary system, requires measurement. How will we know if the new system is better than the old one? And for whom? If it is better for the students the current system already serves well but not for all Americans, we will have gained little.

Credentialing innovations have long “theories of action”: the series of hypothesized “if-then” causal linkages between where a change is made (an innovation is implemented) and where outcomes are measured. Implementing meaningful changes in instruction (the teaching-learning interface between instructors and learners) requires understanding and establishing the institutional, programmatic, and degree-level conditions conducive to the desired changes for learners.

For CAYG, this means considering two broad questions: (1) At the credential level, where and how is learning achieved and recognition awarded for student success? (2) At the system and institution levels, what are the conditions (e.g., policies, processes, technologies) that are necessary for effective deployment of innovative credentials?

These two levels of inquiry must occur simultaneously, since research and measurement happen while higher education continues as “normal.” Higher education outcome measures such as enrollment, completion, and persistence will continue to focus on progress toward traditional degree credentials. Those well-accepted measures are more useful for reporting rates by institution, school, program, and even degree than for understanding individual, student-level differences resulting from an innovative microcredential option that learners might select. This is in part because those measures are, at the individual level, binary (e.g., a learner is or is not enrolled, has or has not completed, did or did not re-enroll the following academic term).

Further, higher education system data are not typically structured to determine outcomes, even using well-established measures, for learners pursuing non-degree credentials. It may not even be possible to identify in a data system whether a student is pursuing an “incremental credential.” And many systems do not provide ways to track non-credit learning, non-credit-to-credit pathways, or pathways at all. Degree programs and course catalogs are arguably structured for the efficient (think convenient) delivery of teaching-and-learning activities en masse. Recognition of course or degree completion is grounded in an assumption of uniformity: an implicit agreement among educators, learners, and the workforce that all students who have completed a given degree are equally prepared, have demonstrated the same outcomes, and are similarly qualified. Of course, this is not a safe bet.
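
To make the structural gap concrete, here is a minimal sketch, in Python and with entirely hypothetical field names, of the difference between the binary, degree-centric record most systems store and the richer structure that questions about incremental credentials would require.

```python
# Minimal sketch (all field names hypothetical) contrasting the binary,
# degree-centric record a typical system stores with the richer structure
# an incremental-credentialing ecosystem would need to track.
from dataclasses import dataclass, field

@dataclass
class DegreeRecord:
    # What a typical system can answer: yes/no questions per term.
    enrolled: bool      # is / is not enrolled
    completed: bool     # did / did not complete the degree

@dataclass
class IncrementalRecord:
    # What CAYG-style questions would require the system to hold.
    credentials_earned: list[str] = field(default_factory=list)  # microcredentials, certifications, ...
    credit_pathway: list[str] = field(default_factory=list)      # ordered noncredit-to-credit steps
    stacked_toward: str | None = None                            # degree or certificate, if any
```

Nothing in the second record can be derived from the first, which is why the “workaround” data collection discussed later in this piece becomes necessary.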

We also recognize that the U.S. credentialing system, which includes degrees and certificates, is moving toward credentialing processes and requirements that are more diverse, flexible, and tailored to individual learner needs. As this set of innovations becomes more widely institutionalized, research study design and implementation face significant challenges. Four appear front and center even early in the CAYG effort.

  • The treatment being studied is variable by design. Research studies generally expect that everyone in a given group gets the same treatment, so a one-size-fits-all innovation is easy to study rigorously. However, when a credentialing approach allows greater “flexibility” (often tailoring to the needs of an individual learner), the treatment being assessed is by its nature variable. Given the desire to understand what works at scale, education research studies live or die on the assumption that large numbers of students get the same treatment. When the “same treatment” is that every student gets the program of greatest benefit to them individually, that flies in the face of the expectation that such studies assess and manage implementation quality and fidelity. In this way, the priorities of programming and research (that everyone gets the same thing) are in tension with equity (that everyone gets what they need), the latter being a primary aim of innovative credentialing approaches.
  • Analytic power becomes a problem. Postsecondary education programs are innovating at an unprecedented pace, given changing 21st-century workplace demands and calls for recognition of an array of valuable credentials, degree and non-degree alike. The result will be an increasingly wide array of credential offerings with potentially fewer students in each. That is likely to be problematic where analytic (statistical) power is concerned, as quality impact research requires volumes of data large enough to enable satisfactory analyses; the sketch after this list illustrates how quickly the problem grows.
  • Innovation and scale-up are in conflict. Because we are studying credentials in a period of rapid innovation, it is difficult to know when a new credential opportunity is “done” and ready for repeatable testing at greater scale. It is also difficult to scale an intervention that is inherently flexible to the context in which it is offered. And if innovations cannot be replicated at scale, they cannot be rigorously tested to assess whether they “actually work,” the priority long established in education research.
  • If innovative credentials work, traditional measures stop working. The purpose and theory behind any education innovation drive decisions about which outcome measures matter. Traditional credentialing approaches have established a set of measures of higher education success, but as the array of credentials broadens, it is becoming clear that those measures miss crucial aspects of the new options. As we expand the aims of credentialing, we by necessity redefine what is important to measure. Where it has generally been sufficient to track whether a learner completes a degree (and perhaps how long that takes), it now becomes necessary to measure how many different credentials an individual gains, how those credentials are connected, how well they meet educational and employment expectations, and ultimately how they further learners’ goals.
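
On the analytic power point above, a simple calculation shows how fast the problem bites. This is a minimal sketch, with hypothetical cohort sizes, using the standard two-group power solver from the statsmodels Python library to find the smallest effect a study could reliably detect as per-credential enrollments shrink.

```python
# Minimal sketch: how shrinking per-credential cohorts raise the minimum
# detectable effect size (MDES) in a two-group comparison. Cohort sizes
# are hypothetical, chosen only to illustrate the trend.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for n_per_group in (500, 100, 25):
    # Solve for the smallest standardized effect (Cohen's d) detectable
    # with 80% power at alpha = 0.05, given n learners per group.
    mdes = solver.solve_power(effect_size=None, nobs1=n_per_group,
                              alpha=0.05, power=0.80, ratio=1.0)
    print(f"{n_per_group:4d} learners per group -> MDES (Cohen's d) ~ {mdes:.2f}")
```

With 500 learners per group a study can detect modest effects (d near 0.18); with 25 per group, it can detect only effects so large (d near 0.8) that few education interventions ever produce them.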

The current flux in our higher education system is occurring amid pressing and complex questions: “What is working, how and why is it working, and for whom is it working?” Answering these questions as credentials and contexts change is creating tremendous challenges for researchers. We have a bag of tried-and-true methods to study outcomes, but the bag is small and its contents are increasingly inadequate to fully study a complex and changing ecosystem. The stakes are immense: many innovations in credentialing seem sound and necessary, garnering millions of dollars to transform our learn-and-work ecosystem. But we must have evidence of actual outcomes—verifiable results of innovations, particularly for our learners, credential providers, and employers—even as we must also accommodate the complexity of what is being attempted.

Education researchers are by necessity developing new methods to examine outcomes, such as finding ways to match new microcredential offerings to existing systems of degrees and certificates. Such workarounds are necessary until higher education systems collect data on all credential enrollments; develop ways to track progress toward completion rather than simply counting completions; and better link academic outcomes to employment and wage information. For now, the yellow flag should stay on the field while credential providers improve their data collection systems and higher education systems continue to improve the very credentialing systems being studied.
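
As one illustration of such a workaround, here is a minimal sketch, with entirely hypothetical identifiers and credit values, of a crosswalk that maps microcredential offerings onto the degree and certificate program codes an existing student information system already tracks.

```python
# Minimal sketch (all identifiers hypothetical) of a crosswalk mapping new
# microcredential offerings onto the program codes an existing student
# information system already tracks.
MICROCREDENTIAL_CROSSWALK = {
    # microcredential id        -> (existing program code, credits it stacks into)
    "data-analytics-basics":       ("AAS-INFO-TECH", 9),
    "supply-chain-fundamentals":   ("CERT-LOGISTICS", 6),
}

def map_to_program(microcredential_id: str) -> tuple[str, int] | None:
    """Return the existing program (and stackable credits) a microcredential
    feeds into, or None if the offering has no crosswalk entry yet."""
    return MICROCREDENTIAL_CROSSWALK.get(microcredential_id)

print(map_to_program("data-analytics-basics"))  # ('AAS-INFO-TECH', 9)
print(map_to_program("unmapped-badge"))         # None: invisible to the system
```

An offering with no crosswalk entry simply disappears from institutional reporting, which is exactly the visibility gap such workarounds are meant to close.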
