Thursday 26 May 2016


This week my syndicate leader talked us through PROBE assessments, as we have to do them for the mid-year reports. I have never done a PROBE assessment myself, and have only seen it done a couple of times. PROBE is an assessment of reading comprehension: you give students a passage to read, and they answer questions about it to show they understand what they have read. It is scaled by reading age, from roughly 7 to 15 years old, and the passages get harder as the reading level increases. The questions test a combination of inference, vocabulary, reaction and evaluation skills. Once the test has been marked, teachers decide whether students are 'at' that level of comprehension, or whether they need to be moved up or down a reading age level and re-tested.

Talking through the assessment booklet was very beneficial, as I found there are multiple ways to administer the test (picture 1), and hence different ways to mark it depending on how it was administered (picture 2). If it weren't for the PD session, I wouldn't have known that.
As I read through each option, I was thinking of particular students in my class who would be best suited to each type. For example, I might try listening comprehension for my student who struggles to read because of a lack of decoding skill, but who, when he listens to audio versions of our school journal texts, can answer questions about what happened and so shows some comprehension. Or I might use silent reading comprehension for my very competent readers, to test whether they have the same level of comprehension as when they read aloud.

Reflecting on the PD afterwards, I realised that one of the times I had seen PROBE testing done, it was not done in the right way, so all the data collected from it was incorrect and reflected negatively on students. It struck me what a ripple effect such a small thing can have. Even if you do the assessment, if you don't do it correctly your data is skewed, and hence all reflections on student achievement are skewed. So what was the point in doing it in the first place?

The other thing that struck me was the last page in the booklet...

Teachers, even when they are assessing in the right way, going 'by the book' and doing their best, can still negatively impact the way students are assessed.
There are so many little things that can go wrong. We just need to minimise the initial drop in the water as much as possible, so we can minimise the ripple effect it has.

I will do my PROBE assessments with my class this week, and hopefully I will remember to reflect on how I think it went.
