At no time is this more evident in a high school than during final exam time. I had to refrain from posting during the exams themselves, lest my outrage at what we traditionally do on final exams result in a post with lots of swearing and reckless emotional statements. But, now that final exams are over, I feel I must speak out about what these final exams really measure.
In our school, teachers who teach the same course (for example, Biology) must all give the same final exam as a common assessment. The content of this exam is, in theory, supposed to be agreed upon by all teachers, and be a representation of what we want students to walk away with--what we want students to remember long after they sat in uncomfortable chairs at wobbly desks in our classrooms.
On the Biology exam, there were many questions pulled from publisher test banks, such as the ones below:
Osmosis is a type of
A. active transport.
B. passive transport.
C. dynamic equilibrium.
D. endocytosis.
A cell will swell when it is placed in a(n)
A. hypotonic solution.
B. hypertonic solution.
C. isotonic solution.
D. None of the above.
The questions above do nothing but promote basic recall of information rather than true understanding. They measure what matters least, what I consider to be facts in isolation--science stuff they will forget as soon as the test is over. All of these questions can be successfully answered by students who still know absolutely nothing of real importance or value. So what if a student knows that osmosis is a type of passive transport? That doesn't mean they understand why it is passive transport, or what passive transport is, or why passive transport is an important process in their cells that aids in keeping them alive on a day-to-day basis, or how passive transport is connected to active transport in key body functions such as the transmission of nerve impulses. So what if they know that a cell will swell if it's placed in a hypotonic solution? That doesn't mean they understand how or why a cell maintains its osmotic balance, and why if they drink a lot of water their animal cells without cell walls don't all burst and die.
As I electronicized this exam into Juno for exam day, that "So what?" question came to mind a lot as I prepared each question for importing. The other question that kept surfacing about what these questions were measuring was, "Who cares?"
This is what I have found about most test bank questions--they don't assess any real or meaningful learning at all. They only test fleeting facts that students have crammed in their heads for that testing period. This means the final exam that we gave for Biology is, for the most part, completely useless as far as drawing inferences and conclusions about student learning.
We did have questions like this on the exam:
You do an experiment that involves yeast, a microscopic unicellular eukaryotic fungus. In this experiment, you put two samples of yeast in two different test tubes. You place one of the tubes in boiling water and boil the sample for 30 seconds. After this, you place 10 drops of a red dye called Congo Red in each test tube. You take a sample from each tube and look at each sample separately under a microscope. You notice that the yeast in the live sample (the one you did not boil) all took up the red dye and are now a lovely red color; however, the yeast in the sample you boiled did not take up the dye, and the yeast are colorless, with the dye still in the solution around them.
Which of the following is the best scientific explanation for your results?
A. The live yeast were making ATP for the active transport of the dye across the membrane.
B. The yeast that were boiled were all dead, so the dye didn't get into the yeast.
C. The yeast that were boiled didn't have enough proteins in their cell membranes to transport the dye because they had a lower surface area-to-volume ratio than the live yeast.
D. The live yeast used diffusion to take up the dye, and, since the boiled yeast died, they couldn't diffuse the dye across.
But there were only 18 of these types of questions on the final exam. Eighteen questions out of 115 that measured more than just surface-level learning. Not that these questions are perfect measures of true understanding, but at least they went beyond simple recall.
I did absolutely no review for this exam. I'm not telling you this as some sort of demented teacher badge of honor; I'm telling you this because I don't see the point of wasting class time reviewing for an exam that I knew wasn't a valid assessment of learning in the first place. I'm not really interested in how students do on what I consider to be a 90-minute guess-and-forget-fest at the end of each semester. Their portfolios were what mattered to me--places where students created, collected, and reflected on evidence of their understanding, which, to me, are better measures of what students were going to walk away with. Not perfect measures, but better. Their portfolios were what I used to determine their final scores, and that's what my students were busy doing when everyone else was reviewing.
I also did no review because I wanted to see how much science stuff my students could remember on their own. If I had set up the learning conditions correctly throughout the semester (i.e., the students did the work of learning), shouldn't they be able to be successful on this exam without review? What's the need for review, other than cramming content into kids' heads just long enough so they can remember it on guess-fest day? To me, reviewing is a bit like stacking the deck--how do you know what is really stuck in students' long-term memory? How can you draw valid inferences about real student learning after a few days spent putting random facts into short-term memory?
My students did just fine on the exam. While there was one area they did poorly on (Photosynthesis & Cell Respiration), this was because different teachers had different interpretations of what mastery of those objectives looks like. Our communication about that will need to improve for next year. Overall, however, my students did pretty well--and not once did I lecture at them, make them take notes, have them do textbook worksheets, or write out definitions to vocabulary words. But I don't place much stock in those exam results--because the test, to me, was invalid in its design in the first place.
Just because you have common assessments doesn't mean the data gathered from them means anything at all. If the construction of the assessment is flawed, the data obtained from the assessment will be flawed as well.
While poorly-constructed exams frustrate me, what really gets me all snarky about them is what they do to students. They reinforce the idea that rote memorization and recall are what learning is about, that how much you remember is what matters, that content is king. And that's why I think final exams--or any exam written so students can do well without actually showing any real learning--are making kids stupid. They teach students not to think.
Looking back on this post, I can see the edges of my outrage peeking through--I apologize for that. But I can't apologize for how I feel about final exams or any other exam that was designed to make a teacher's job easier rather than focus on what's best for students. There are better ways to assess what really matters, authentic ways to measure student thinking and learning that matters--and we need to get out of our comfort zones and use those better assessments. However, in order for these "assessments that matter" to become commonplace, I think what is valued as "learning" by educators needs to undergo some change first.