Each year, we conduct our end-of-year assessments. Usually, we use this data to group our students one last time before they move on to the next grade level. And, in talking to other literacy coaches, I've found that most of us use the data solely for that purpose. This year, however, I tried something different. We are actually going to use our data to compare how much impact we had on student achievement and to reflect on the practices that led to the most growth.
The first thing I did was go through the entire process by myself. I did this because I wanted to know what questions to ask and how to guide my teachers in using the data not only to reflect on but to inspect their own practices this year. After I completed the process with each teacher's data, I reflected on my own. According to our data, our sight word initiatives were very effective in the primary grades, and in the intermediate grades, our phonics intervention really made a difference for our most struggling readers. I was also able to see clearly what our focus will be for next year: fluency. One other area of note for our school was vocabulary, where students made huge gains. I also learned that our intermediate grades need to home in on specific comprehension strategies next year. Going through this process myself opened my eyes to what I needed to zoom in on for next year and where I needed to tweak my own practices as a coach.
The next step was to conduct the data chats with my teachers. I made a conscious effort to talk as little as possible at each data chat; I wanted the data to do the talking for me. My job was to guide the teachers through each of the tasks. The first thing I asked the teachers to do was write down each of their students' scores on our state's assessment (FAIR); you can use whatever baseline test your school gives. Next to each baseline score, they wrote the score from our last assessment period (AP3). Then they marked a + or a - to indicate whether each student made progress or regressed. I used 10 percentage points as the gauge: if a student's scores did not differ by at least 10 percentage points, I considered it no change. And finally, on the same sheet, they wrote down the exact percentage-point difference.
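For coaches who like to let a spreadsheet or a quick script do the arithmetic, here is a rough sketch in Python of the +/- check described above. The student names and scores are made up, and the "FAIR" and "AP3" labels simply stand in for whatever baseline and final assessments your school uses.

```python
# A rough sketch of the +/- progress check, assuming scores are percentages.
# Replace the sample data with your own students' baseline and final scores.

THRESHOLD = 10  # percentage points needed before we count it as real change

students = {
    "Student A": {"FAIR": 45, "AP3": 62},
    "Student B": {"FAIR": 70, "AP3": 73},
    "Student C": {"FAIR": 55, "AP3": 40},
}

for name, scores in students.items():
    diff = scores["AP3"] - scores["FAIR"]
    if diff >= THRESHOLD:
        mark = "+"          # made progress
    elif diff <= -THRESHOLD:
        mark = "-"          # regressed
    else:
        mark = "no change"  # moved fewer than 10 points either way
    print(f"{name}: {scores['FAIR']} -> {scores['AP3']} ({diff:+d} points, {mark})")
```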
Teachers completed one chart for each period they taught. Afterward, each teacher was given a chart I call the Student Achievement Pie Chart. It's a very simple chart: teachers are asked to look closely at their data and analyze its various parts. We looked at proficiency levels, reading comprehension percentile levels, and the number of students who progressed versus regressed.
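If it helps to see the idea, here is a small sketch of how the marks from the progress sheet could be tallied into the slices of a pie chart. The list of marks is invented for illustration; in practice it would come straight from each teacher's sheet.

```python
# A simple sketch: count the +, -, and no-change marks and turn them
# into percentages for a "Student Achievement Pie Chart."

from collections import Counter

marks = ["+", "+", "no change", "-", "+", "no change", "+"]  # sample data

counts = Counter(marks)
total = len(marks)

for category in ("+", "no change", "-"):
    share = 100 * counts[category] / total
    print(f"{category}: {counts[category]} students ({share:.0f}% of the pie)")
```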
This step is very revealing and helps the teachers see just how much progress their students made during the year. It also opens up some great discussions. Notice what an amazing job this teacher did with her Reading Comprehension Percentiles!
After, we worked on the Best Practices Chart. Teachers scored their own effectiveness with each of the best practices listed. They did not turn any of this in; it was solely for their own reflection. Once they wrote down their scores, we discussed and ranked the best practices by how much impact we believed each had on student achievement. The ranking changed with each group. The rankings also helped us discuss where our major emphasis was this year and whether we should change it for next year. This was a great vision-casting session. Then we brainstormed different ways to improve in the areas where we felt we were not performing at our best.
It really was a reflective process. You can get the entire data chat protocol from my TPT store: http://www.teacherspayteachers.com/Product/Reflective-Data-Chat-Protocols-for-Literacy-Coaches-734514
Or, just leave me a comment with your email address, and I'll email you a copy.