I serve as the School Improvement Chair for my school district. I help coordinate the flow of data among several of our district analytic and assessment tools. I provide coaching around the use of data and assessment to help drive instruction. I oversee the implementation of NWEA’s MAP Growth Assessment. To put it simply, I am up to my eyeballs in data when it comes to a good portion of my professional work. I don’t see it as a bad thing. Assessment provides opportunities for reflection, for measuring proficiency on standards, for determining if the coordinated efforts we put forth in our school systems are having the outcomes we’re seeking. At the very least, it helps guide and give credence to the beliefs we hold when it comes to “how we think we’re doing.”
Data trends confirm whether our instructional interventions are effective; they correct misconceptions about which student demographics cause the most behavioral issues; and they can help identify crucial transitional gaps in learning. If you’ve been involved in K-12 public education in the last 10 years, none of this is new to you. What is new (to me, at least) is a growing concern over assessments of academic achievement. Specifically, the number of assessments we deliver and how we use the results at the K-12 level, especially the assessments we have control over.
There’s an old saying many farmers are fond of when it comes to measuring the fruits of one’s labor: “Weighing the pig doesn’t make it fatter.” It’s not the best metaphor for assessing students’ academic achievement; they certainly aren’t farm animals, despite their messy lockers. But the saying implies that if we are constantly focused on measuring progress, we lose time and opportunity we might otherwise use to actually drive progress toward academic achievement. It’s easy to cry out, “Hear, hear, federally and state-mandated testing must go!” But that’s not the issue as I see it.
K-12 educators have historically used far more instructional time for tests, quizzes, and other assessments than what’s required of us by state and federal departments of education. I was guilty of it myself in the classroom, especially as a newer teacher. Weekly spelling tests; unit, chapter, and section pre- and post-tests; marking period and semester summative tests; pop quizzes, homework quizzes, self-assessments, benchmarks, screeners, regular formative assessments — the list goes on. No single assessment we give students is inherently a bad thing. But when students take benchmark assessments for Math and ELA three to four times a year, sit for double screeners in the same subjects each marking period, and are then subject to teacher-driven tests pulled from curricular materials, Teachers Pay Teachers, or summative assessment banks, it’s no wonder everyone feels like we “test too much.”
When I field questions from teachers about whether “Johnny” can retake a benchmark assessment (one that carries NO weight on any grade or academic progress) because they feel he didn’t do well, we have missed the point of the assessment entirely. We don’t need students to score well on every single assessment, especially ones meant only to monitor growth. If students passed every test with flying colors, the assessments would tell us little about where students need to grow. Likewise, doing poorly on an assessment does not mean the score has to go in the grade book. I have a lot of thoughts about assessment — probably enough to fill a thesis. But to simplify and focus my own thinking, I’d like to start framing my concerns about assessment overload the same way we talk about PLCs. The “four essential questions of right-sizing assessments” could be as simple as:
- What assessments are we absolutely required to deliver?
- What do we want to know about a student’s academic progress?
- What assessments will give us that information if the ones we are required to deliver don’t give us what we need?
- What assessments can we do without because they either duplicate another assessment OR are just used for a grade on a report card?
These questions aren’t perfect, and I would welcome any thoughts, critiques, or resources for improving them. When working with educators, I always find it valuable to have a simple set of questions or talking points to focus the conversation. As we move past COVID and back toward the regular work of continuous improvement and assessment in a relatively “normal” learning environment, I wonder what other districts and teachers are doing to “right-size” their assessment delivery.