Tests? What a chore. You need to construct them. You need to administer them. You need to grade them. Yuck. Yuck. And yuck.
Construction: what should be assessed and how? Is that question too hard or too easy? Administration: to what extent can cheating be prevented and how much energy is that going to cost? Grading: do you hate multiple choice or do you hate hours upon hours of grading?
To students, assessments may seem like nothing other than punishment.
Of late, there has been much examination of grading. Anything smacking of "traditional" is guaranteed to be put on blast on Teacher Twitter. Some teachers have gone all in on Standards-Based Grading and are eager to evangelize that movement. Others are all for ungrading and are keen to share The Good News about that with anyone who will listen.
Having given my first assessments in 1986, I grant myself a modicum of "old curmudgeon" license. I don't recall much talk about SBG or ungrading on Twitter back then, mostly because there was no Twitter. Web 1.0 was still in the future (it exploded in 1995). If it was talked about at the conferences (national or local AAPT), those conversations eluded me. Should I have jumped on the SBG or ungrading bandwagons as soon as I heard from proponents? Maybe. My own district was pushing Myron Dueck's Learning Targets hard.
My aversion to going with anything that I deem to be trending may well be a character flaw, and I own it. I was very late to listening to anything by Norah Jones because her album was plastered all over the record stores. (Remember record stores? Yes, I'm that old.) If she was that popular, she wasn't for me. I figured it out eventually.
In any case, here's what I did with Physics unit tests in the vacuum of my own classroom kingdom.
A unit test would be administered at the end of each unit. The formula I eventually settled on was 18 multiple-choice questions (10 from the current unit, 4 from the previous unit, and 4 from older units) plus one problem where work needed to be shown. Each test was worth 100 points: 5 for each multiple-choice item and 10 for the free-response problem. This composition is largely a matter of taste, and you may well have a different formula. Topic for a different thread.
Make-up window. Some students are invariably absent on test day. I was available on various days at lunch and/or after school. The availability schedule was posted on a board at the front of the classroom so it could be seen from the back. The last day to make up a test was usually a week or so after the original test date. Students who failed to make it up within that window could still make it up, but at the end of the semester, with no Test Correction Journal opportunity.
The first Test Correction Journal (TCJ) Day of the semester. When I mark student answer documents (my unpatented "Bairdtrons"), I simply make a slash through incorrect responses. Scores are recorded. For TCJ, each student is given their marked Bairdtron back. There were four versions: P, H, Y, and Z, with the same multiple-choice questions in a different order. The free-response problem was presented on the flatscreen, also in four versions: P, H, Y, and Z. With four versions, no student sits adjacent to anyone with the same form in any direction. Note: TCJ Day is usually the day before the next unit test.
Students are directed to group up by test form: P, H, Y, and Z. In a class of 32, there would be eight students in each group. I would then distribute one copy of the test per table (two students). Each student's task was to write a journal entry for each item they missed. Students who aced the test were there to assist those who did not. Anyone with entries to write needed to learn from neighboring classmates with the same form.
The journal entry consisted of a complete, correct statement of the contents of the question. For a simple example, "Doubling the speed of an object quadruples its kinetic energy" was acceptable while "I picked A but the answer was D" was not acceptable. Numerous "Goofus and Gallant" style acceptable vs. not acceptable examples were supplied, described, and reiterated. It was impossible to over-emphasize the extent to which the multiple choice letter labels were irrelevant to the process.
I was mostly silent during TCJs. I wanted students to engage each other with teaching and learning. I already had my chance.
The other part of each journal entry was a reflection statement. Why did they pick the wrong answer? What went wrong? I assure students that the better their reflection, the less likely they are to miss a question like this in the future.
When finished, students stapled their Bairdtrons to their completed TCJs. Between one TCJ Day and the next, I would review them for ... gird your loins here ... compliance. There—I said it. I am a monster. Students who produced complete and correct TCJs got ... no points whatsoever. I said I was a monster. What they did get was the chance to earn points that they missed back. But that will have to wait until the next TCJ Day.
Subsequent TCJ Days: TCJ Quiz. Ten questions from the unit test (usually the 10 from that test's unit) are reformatted into a short quiz, still in four forms. At the beginning of class, TCJs are returned for a brief review to the students who completed them. Mini Bairdtron answer forms are distributed to those students. Each student then trades their journal for a 10-question TCJ Quiz. No aids (including calculators) are allowed during the quiz. Remember, this is now the third time they've seen each question. Granted, time has passed since the test and the journaling.
While students complete the TCJ-1 quiz, the marked Bairdtrons for Unit Test 2 are handed back. I get my steps in!
After all students are done with TCJ-1, students are directed to regroup by test form and test forms are once again distributed as described above. Students once again collaborate to complete their TCJ-2s.
The needs-dependent reward, or why I never used district grading software. The TCJ quiz is marked. Students could score up to 10/10. Scores are entered into my Excel grading spreadsheet. Now for the points.
The simplest way to describe the points earned is to say students can earn up to half the points they missed on the original unit test. Via TCJ, a 60 can be turned into an 80. A 98 could be turned into a 99. A 40 could be turned into a 70. You get the idea. The more you need, the more you can get.
Excel subtracted the unit test score (say, 60) from 100 (yielding 40), then divided that by 2 (20). It then multiplied that by the quiz score divided by 10 (for example, 8/10 = 0.8). In this case, 20 × 0.8 = 16 points would be added to the original score, turning that student's 60 into a 76.
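For the programming-inclined, the spreadsheet arithmetic can be sketched as a short Python function. The function and variable names are mine; the original lived in an Excel formula, not code:

```python
def tcj_adjusted_score(test_score, quiz_score):
    """Return the unit test score after the TCJ quiz bonus.

    Students can earn back up to half the points they missed on the
    unit test, scaled by their TCJ quiz score (out of 10).
    """
    missed = 100 - test_score       # points missed on the unit test
    recoverable = missed / 2        # at most half can be earned back
    bonus = recoverable * quiz_score / 10
    return test_score + bonus

print(tcj_adjusted_score(60, 8))    # 76.0
print(tcj_adjusted_score(60, 10))   # 80.0 -- a perfect quiz recovers the full half
print(tcj_adjusted_score(98, 10))   # 99.0
```

Note the needs-dependent shape of the reward: the lower the original score, the larger the pool of recoverable points.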
Did every student correctly complete their TCJ for every test? No, they did not. Those who did ended the semester with an A or a B, nearly to a person. Those who did not engage in TCJs ended the semester with Cs, Ds, and Fs.
Students who engaged obviously reaped benefits beyond the points. But for academically motivated students, for better or worse, the points were what they were there for. I know it's awful to think of what we do as tricking students into learning against their will. But I did confess to being a monster.
Funny story in the comments.
These forms may or may not be useful.
Bairdtron student answer document. Not quite a Scantron, but if you print a bunch and cut them as guided by the marks, you can line up the completed forms for quick grading (in parallel?). Used for unit tests.
TCJ Blank. Completely necessary? Probably not. But students find it helpful and it streamlines the compliance review process. Instruction reminders are embedded. Each form holds up to six journal entries.
Bairdtron mini. Again, go at the photocopies with the paper cutter and you're good to go for the TCJ quizzes.