What is key to the successful design and implementation of CBE? Assessment.
If you aren’t starting here, you’re missing a huge piece of the puzzle.
Meaningful assessments are paramount to student success regardless of the learning model, but they are especially powerful when directly tied to the competency statements that transcend a student’s time in the classroom.
Alignment between assessment and competency (or learning outcome, if you are not in a competency-based system) is important, but it is equally important for you and your institution to determine how you will assess, and how you will calibrate to ensure validity of assessment and accuracy in its translation to "grading."
Ah, yes. Grading. Reporting. In an ideal world, we wouldn't need to translate or think about what the report card or transcript will look like. Thankfully, more and more schools, equipped with carefully interrogated data and extensive research, are confidently going gradeless. Hacking Assessment exemplifies the kind of assessment and feedback needed to measure student progress and inform learning without a grade attached. The concept is evolutionary.
Are you describing competency-based assessment in traditional terminology? This may inadvertently limit student learning. Or are you using terminology that is transformative and engages the learner? Using vocabulary that fosters a growth mindset changes the dynamic of learning and feedback for both learner and instructor, and may mean the difference between developing fortitude and perseverance or falling short and giving up.
| Traditional terminology | Transformative terminology |
| --- | --- |
| "What grade did I get?" | "What did I learn?" |
| get good grades | authentic measure of progress |
At College for America, we see a similar use of a binary competency scale that effectively eliminates grades. Students receive feedback on the projects they submit, but the only indication they receive is "mastered" or "not yet."
Nationwide, more colleges and universities every year report accepting students into programs from schools that do not use grades. That is because the grade is not what's important in student learning. What matters most is whether a student has learned, and whether they can demonstrate mastery with a level of proficiency in skills, abilities, and knowledge that will lead to lifelong learning and career success.
Countless scholarly articles have been written on grading and assessment, and several subject-matter experts have emerged in the field with distinction. Thomas Guskey, an education professor at the University of Kentucky, has written and presented extensively on grading, reporting, and student learning. He advocates establishing an institutional understanding of the purpose of grading and reporting, to ensure that the data surfaced is what is actually needed and wanted. Through numerous studies, and through his work with the Center for Innovation, he has demonstrated the importance of data accuracy for alignment in grading. When schools muddy their proverbial data waters with multiple assessment scales, or use a scale with too many degrees of variation, it is difficult, if not impossible, to assure data alignment:
Measurement experts identify precision by calculating the standard error of measurement. This statistic describes the amount by which a measure might vary from one occasion to the next using the same device to measure the same trait. For example, suppose the standard error on a 20-item assessment of student learning is plus or minus two items. That may not seem like much, but using a percentage grading scale, that would be a range of 20 percentage points—a difference in most cases of at least two letter grades. Many educators assume that because the percentage grading scale has 100 classification levels—or categories—it is more precise than a scale with just a few levels (such as Excellent, Average, and Poor). But in the absence of a truly accurate measuring device, adding more gradations to the measurement scale offers only the illusion of precision. When assigning students to grade categories, statistical error relates to the number of misclassifications. Setting more cutoff boundaries (levels or categories) in a distribution of scores means that more cases will be vulnerable to fluctuations across those boundaries and, hence, to more statistical error (Dwyer, 1996). A student is statistically much more likely to be misclassified as performing at the 85-percent level when his true achievement is at the 90-percent level (a difference of five percentage categories) than he is of being misclassified as scoring at an Average level when his true achievement is at an Excellent level. In other words, with more levels, more students are likely to be misclassified in terms of their performance on a particular assessment.*
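Guskey's point about cutoff boundaries can be seen in a quick simulation. The sketch below is an illustration of the statistical argument, not a reproduction of any cited study: it assumes the quote's example of a standard error of two items on a 20-item test (plus or minus 10 points on a 0–100 percentage scale), draws true scores uniformly, adds normally distributed measurement error, and compares misclassification rates on a hypothetical five-band letter-grade scale versus a three-level Poor/Average/Excellent scale. The cutoff values are assumptions chosen for the example.

```python
import random

random.seed(42)

# Assumption from the quoted example: SEM of 2 items on a 20-item test,
# i.e. +/- 10 points on a 0-100 percentage scale.
SEM = 10.0

FINE_CUTOFFS = [60, 70, 80, 90]   # five letter-grade bands (hypothetical)
COARSE_CUTOFFS = [40, 75]         # three levels: Poor / Average / Excellent

def categorize(score, cutoffs):
    """Return the index of the category the score falls into."""
    return sum(score >= c for c in cutoffs)

def misclassification_rate(cutoffs, trials=100_000):
    """Fraction of students whose observed score lands in a
    different category than their true achievement level."""
    wrong = 0
    for _ in range(trials):
        true_score = random.uniform(0, 100)
        observed = true_score + random.gauss(0, SEM)
        if categorize(true_score, cutoffs) != categorize(observed, cutoffs):
            wrong += 1
    return wrong / trials

fine = misclassification_rate(FINE_CUTOFFS)
coarse = misclassification_rate(COARSE_CUTOFFS)
print(f"five-band scale misclassification rate: {fine:.1%}")
print(f"three-level scale misclassification rate: {coarse:.1%}")
```

Running this shows the finer scale misclassifying a noticeably larger share of students: every additional cutoff is another boundary that random measurement error can push a score across, which is exactly the mechanism the quote describes.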
The evidence is clear: Assessment is as important as any other aspect of creating a competency-based learning model, if not the most important piece in allowing schools to authentically measure student progress.
The alignment of assessment to competencies needs to guide your curriculum design (or redesign) and determine the most appropriate and accurate means of measuring student progress. Ensuring the validity of assessment, as well as of grading and reporting, will be your greatest asset in designing and implementing a successful competency-based learning environment.
*Guskey, T. R. (2015). On Your Mark: Challenging the Conventions of Grading and Reporting. Bloomington, IN: Solution Tree.