Assessing Critical Thought – We’re Gonna Need a Bigger Boat

In the movie Jaws, there is an iconic scene where the characters realize their boat isn't big enough to deal with the killer shark, which prompts the famous line, "We're gonna need a bigger boat!"  Assessing students for critical thought has put me, as a classroom teacher, in that same head space over the past few years.  When I ask questions on written exams that attempt to give students an opportunity to express their critical thinking, their responses often leave me with more questions than answers.  This is the proverbial, "I think we need a bigger boat."  The ways I have been assessing students during traditional mathematics instruction simply don't give me much information about their level of critical thinking.  This issue has been the focus of my professional development and research for the past few years, so I would like to share it with you.

Reflecting over the years, I have developed a bit of a complex about genuine, authentic assessment. What I mean is that I can't stop thinking about how to make it fair and how to improve what I have.

The bread and butter of my high school mathematics program is written assessment. Written assessment has been difficult to perfect and, if I am not careful, it often doesn't really help me find out what I want to know.  I can ask open-response questions to eliminate false positives and pay special attention to levels of difficulty or depth of knowledge so that students are able to demonstrate their individual understanding fairly.  But my trouble with written assessment is that, without discussion, I am not able to reframe questions for students. On a daily basis I focus on reframing and questioning; why should assessment be different?  The written assessment tools provided by the administration don't allow me to meet my students where they are, and I knew that changes in my assessment practices would help me relate to them and then discover their understanding.

For my students, group or team assessments are typically thought of as an extension of daily group work. I have observed that such assessments can be more challenging and can involve more critical thought because of the collaboration that occurs. In my experience, team assessments certainly give me an opportunity to ask more difficult questions and to prompt students to collaborate and learn.  I have found them to be a wonderful teaching tool; team assessment days are often the best days of the chapter for everyone in my classroom.  However, they also present real challenges when it comes to grading.  Some teams tend to defer to the perceived "smartest" or "best" student in the group for the final say on a task, an issue of status that is difficult for me to solve.  There are also a few students in each class who sit back and simply write down what the other students think without participating further.  These can be good learning experiences, but if individual assessment for understanding is the goal, then this instructional activity falls short.

So how should I assess for understanding?  Well, I decided that I needed to change and be open to new ideas surrounding assessment.  I needed to do this in a way that didn't create a mountain of work, because in 2022 I simply didn't have the time or resources.

The first step in my journey was choosing learning targets that created a timeline for learning in my classroom.  This directed students to specific targets to be learned and gave them an understanding of what they needed to know.  We also needed to distinguish depth of knowledge levels so students were aware of the expectations.  This is a work in progress for me and my students; it is work that needs to happen over an extended period of time.  I am a fan of rough draft thinking and of the idea that steps toward the end goal may be difficult; these steps don't have to be perfect, they simply need to be better than the first draft.

Here is an example of a first draft of success criteria that I created for a level 1 area and perimeter topic:

Success criteria 1-3 are at depth of knowledge 1-2, while criteria 4-6 are at depth of knowledge 2-3, which I labeled as "non-routine expectations."

My next step was to explore and determine alternate methods of assessment, both formative and summative.  I have become partial to student interviews.  Like all assessment methods, student interviews have definite pros and cons, but in my school district, with my students, they work really well.  The idea is to have students interview each other formatively, which creates a genuine peer assessment opportunity focused on the quality of the mathematics.  This is somewhat time consuming because the students need a script; they are not yet content experts.  It is worth the effort, though, because I have found the discussions to be really effective.  Finally, I interview each student individually.  The interview centers on one or two learning targets from the grading period, and the goal is to assess critical thinking.  The interview also builds transparency of thinking and a genuine student-teacher relationship.  The obvious drawbacks are time and supervision.  I have found that each of these problems requires brainstorming; it also helps to have colleagues who share a similar vision.

Here is an example of a peer interview handout:

This peer interview ends with a challenge that led to a lot of deep discussion in class, and it was linked to the 4th and 6th success criteria in the document above.

The final but continuous step is to collaborate to make the learning targets, success criteria, and assessments better.  The only way to solve this problem in my classroom is to keep trying to improve on the model that I am using.  I am so encouraged by the improved thinking that interviews have created in my room.  They have become a natural extension of the questioning that students participate in on a daily basis, and I find that they encourage deeper critical thinking.

In TRC 9.0, my team is working on the development of success criteria; we are currently researching when the optimum time is to involve students in self-reflection on the success criteria. This research is refocusing us on the first step of this journey.  We are certainly early in the process of solving the problem, but I can see clearly that "we're gonna need a bigger boat."
