We are now midway through our first year of instructional coaching. Our student-centered coaching model, based on Diane Sweeney’s work, has at its heart the collection of evidence of learning so that we can respond to specific student needs. Data. You might think that would make it easy to measure the impact of our coaching. I did, yet I’m not sure it’s so clear-cut.

First, the challenges we’ve faced. The coaches found they were still learning to interpret the coaching model, specifically within the context of our school. Teachers were setting coaching-cycle goals that were not explicitly about student learning; their goals were more teacher-centered. For example, one teacher wanted the coach to time how much the teacher talked versus how much the students did. This is certainly valuable feedback for a teacher, but it’s not a specific look at student learning. We want to support teachers, but we also want to stay faithful to the model and focus on student learning. Some coaches were more successful than others at steering the teacher back toward a goal explicitly about student learning. Some coaching cycles became mentoring cycles, in particular with new teachers.

Within the coaching cycles, there have been great successes. Some were successful in staying faithful to the model, others in their positive impact on teacher practice, which should in turn lead to a positive impact on student learning. Others successfully demonstrated student learning, which is the main goal of the coaching model.

So far, we have attempted 12 cycles. I deliberately did not call them coaching cycles because, again, some might be better described as mentoring cycles. Of the 12, 3 cycles were either not completed or did not yield adequate data, for various reasons beyond the coaches’ control. Across the remaining 9 cycles, we collected data on 11 specific goals; some cycles had more than one goal.

Each goal and its related data was, obviously, quite different, which made us wonder whether we’d be able to combine and analyze the resulting data. However, in each cycle, coaches and teachers attempted to collect data using four achievement bands (Emerging, Developing, Meeting, Exceeding) in both pre- and post-assessments. These bands may not have been labeled exactly the same in every cycle, but the spirit of each is the same. Still, not all cycles used all four bands; some used only three. Because of this, for the sake of analyzing the data together as a view of our impact, we combined the four bands into two: Emerging/Developing (ED) and Meeting/Exceeding (MX). We then calculated the percentage change from ED in the pre-assessments to MX in the post-assessments.

This, of course, has its flaws. For example, an improvement from Emerging to Developing isn’t visible in our analysis, and neither is an improvement from Meeting to Exceeding. Still, as a start, it seemed the right way to go. Plus, we can easily go back and break the data down further.
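To make the mechanics concrete, here is a minimal sketch of that calculation in Python with pandas. The counts and column names are invented, and it assumes "growth" means the change in the share of students at Meeting/Exceeding from pre- to post-assessment; our actual spreadsheets look different, so treat this as an illustration rather than the exact method.

```python
# Minimal sketch: collapse four bands into ED/MX and compute growth per goal.
# All counts and column names here are hypothetical.
import pandas as pd

goals = pd.DataFrame({
    "goal": ["Goal 1", "Goal 2", "Goal 3"],
    # student counts per band, pre-assessment
    "pre_emerging": [6, 4, 10], "pre_developing": [8, 6, 5],
    "pre_meeting": [4, 8, 3], "pre_exceeding": [2, 2, 2],
    # student counts per band, post-assessment
    "post_emerging": [2, 1, 4], "post_developing": [5, 4, 4],
    "post_meeting": [9, 10, 8], "post_exceeding": [4, 5, 4],
})

# Total students per goal, taken from the pre-assessment counts.
total = goals[[c for c in goals.columns if c.startswith("pre_")]].sum(axis=1)

# Collapse the four bands into two: ED (Emerging/Developing) and MX (Meeting/Exceeding).
pre_mx = goals["pre_meeting"] + goals["pre_exceeding"]
post_mx = goals["post_meeting"] + goals["post_exceeding"]

# Growth = change in the share of students at Meeting/Exceeding, in percentage points.
goals["growth_pct"] = (post_mx - pre_mx) / total * 100

print(goals[["goal", "growth_pct"]])
print("Average growth across goals:", round(goals["growth_pct"].mean(), 1), "%")
```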

Looking at it this way, the average growth from Emerging/Developing to Meeting/Exceeding across all the cycles is 44%. We also looked at this by division: Elementary, Middle, and High School. Great! I think. Maybe. Hmph.

So what does this actually mean? Are these good numbers? Does this actually measure our coaching impact? It’s definitely not showing us causation. Is this really more a measure of the effectiveness of the learning experiences that the coach and teacher devised? It’s our first semester, so maybe we should think of this as a baseline to grow from. On the other hand, each learner and group of learners is different, so should we expect this to vary accordingly? Then again, the point of the cycle is to address the individual needs of each student, so theoretically we should be getting better at identifying and supporting those needs, which should show up in our numbers. For now, I’m not sure how to interpret this. If you have some thoughts, I’d love to see them in the comments.

We also collected qualitative data. Both teachers and coaches were asked a number of questions to prompt reflection on the value of the coaching cycle. Reading through these provided great insight, and a nice pat on the back, as the teachers were quite positive about the experience. Here are a couple of highlights.

I will continue to implement a lot of the skills and ideas we used in this unit; pre/post testing, documentation of individual conferences with students, thinking through how I can allow students to demonstrate their understanding in creative, non-prescriptive ways, and thinking through my assessments and if they can incorporate more student choice.

This was one of my favorite reflections, as it’s exactly what the coaching cycle should provide: support for identifying and reacting to a student’s particular need, although I hope the teacher is thinking not just about students struggling with the material, but also about those who need extension:

It was nice to have another adult in the room to help with small-group discussions, to help me identify students who seemed to be struggling, and who needed some extra support.

We also wanted a visual representation of the qualitative data, and a word cloud seemed like the easiest thing to do. Again, I’d love to hear more ideas for visualizing this. It’s promising to see the common ideas that popped up, with students literally in the middle.
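If you want to try the same thing, here’s a rough sketch using the Python wordcloud package, assuming the teacher and coach reflections have been combined into a single plain-text file (the filename is made up):

```python
# Rough sketch: build a word cloud from the combined reflection text.
# "reflections.txt" is a hypothetical file holding all the reflection responses.
from wordcloud import WordCloud

with open("reflections.txt", encoding="utf-8") as f:
    reflections_text = f.read()

cloud = WordCloud(width=800, height=400, background_color="white").generate(reflections_text)
cloud.to_file("coaching_reflections_cloud.png")
```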

With that, I’d say we’re off to a good start. To really understand our impact, though, we’ll need a few more data points. For now, the feedback we’ve received from principals and teachers, and the content of the teacher reflections, is positive. I think I’d better find a Data Science MOOC to get in on!

Is anyone else using Diane Sweeney’s Student Centered Coaching model, or another model? How are you measuring impact?
