Close reading is a strategy that allows us to interpret a text for a specific purpose. It is a method, not an outcome. Therefore I believe (and this is my opinion alone) that those trying to sell close reading rubrics might as well be selling snake oil.
You do not measure close reading. That would be like measuring a specific tweak to a golf swing. In the end you do not care about the frequency and fidelity of the method; you care about the extra yardage. That is your evidence that the intervention worked.
The easiest way to look for evidence of close reading is to model and teach students to annotate text for different purposes. Teachers can easily quantify the frequency and types of annotation, examine how students annotate texts, and have students return to the same text to annotate it for a different purpose.
Another way to check for understanding through close reading is text-based talk. This is harder to assess than text annotation because multiple conversations occur at once in the classroom. As a teacher you are looking for evidence of text-based inferences. Online forums are easier to assess, however, and rubrics could be developed for them. Basically, teachers need to look for evidence that students are returning to the text, using complex vocabulary, and responding to prompts about author's craft.
If you want to assess close reading beyond annotation, you must have students create a product with the information they read.
Short Answer Responses
Short answer responses that look for evidence of the CCSS will work. Students who are better trained in close reading (text annotation on their part) and text-dependent questioning (by the teacher or in small-group forums) should perform better. For this I would simply use sample rubrics from the Smarter Balanced pilot items.
Debates, Argumentation, Informational Writing
Another way to measure whether students have integrated the process of close reading into their reading is to examine their writing products. This is where the content portion of your rubric is critical to your success in argumentative writing. You want teachers to develop criteria so that students who focus on key vocabulary, claims and evidence, and author's craft outperform students who do not.
These are just my thoughts. Close reading, like much of the CCSS, cannot be taught in isolation. It is not a product of learning but the process of reading that college- and career-ready students use. There are some close reading rubrics floating around the web. I wouldn't trust most of them. In fact, many of these rubrics just have students evaluating key ideas, author's craft, etc., within a writing assignment.
I think you are better served by:
Teaching teachers to model and assess text annotation.
Teaching teachers to model and offer guided practice in text-dependent questioning techniques.
Using online forums based on text-dependent prompts (rubrics could be developed for these).
Using building-wide writing rubrics so that students who engage in close reading with sources outperform students who do not.
Encouraging teachers to develop performance assessments that will demonstrate evidence of close reading.
I know this may not be the answer you were looking for. It would have been easy to post links to some of the bad rubrics. Yet I believe rubrics are for measuring products, and close reading is a process that leads students to develop better products.