![](https://static.wixstatic.com/media/06b54e_ebce0f64064e4fe79dfdf6f07c0f1bd9~mv2.jpg/v1/fill/w_980,h_654,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/06b54e_ebce0f64064e4fe79dfdf6f07c0f1bd9~mv2.jpg)
Have you experienced problems with your rubrics?
Does your rubric not quite assess what you want it to, or do you find, during marking, that the standards in the rubric do not appropriately address the skills you want to assess?
There is a range of problems that can crop up and impede your progress or, worse, fundamentally undermine the quality of your course.
Here are some common problems that I have seen derail teachers' assessments in spectacular ways.
Firstly, ask yourself if you:
Know what it is you want to assess. It sounds simple, but this understanding only comes from your course learning outcomes, content, assessment tasks and aims - and from the quality of the criteria that are critical in assessing the identified core skills.
I start every consultation on rubrics with the question 'What are you assessing?' And I have frequently had the answer 'I don't really know...' even when the teacher has already written the rubric! It takes specific skill to get this right, mainly because the rubric responds to the skills you are assessing, not the content you are providing. (Having said that, make sure you are assessing something you have actually taught - yes, this happens!)
Use relevant and appropriate criteria - poorly constructed or irrelevant criteria will not help you assess the course in the way you want.
This is not easy - writing good criteria is as difficult as writing a good rubric and is the subject of another article. For now, we will assume the criteria are exactly what you need and want!
Ensure your rubric is appropriate to the level of study and the scope of the task. Simple tasks require simple rubrics - mostly.
However, if you are going to assess a basic task with a complex rubric (it happens), make sure the task is substantial enough to let you assess all the criteria. Quite often, the depth of the task is insufficient, which makes for a very stressful marking process!
Have selected the right type of rubric - HOLISTIC or ANALYTIC?
Now we are getting into dangerous territory. There are proponents of both, usually hotly defended. That's OK.
A simple rule of thumb suggests that holistic rubrics tend to provide a score based on an overall impression of a student's performance on a task.
The same rubric can be used across different tasks, since holistic rubrics are often non-specific.
However, their main disadvantage is that they tend to provide incomplete or inaccurate feedback to the student, rendering the effectiveness of the assessment debatable at best.
They also open the possibility of debate with students (which is a whole heap of fun) around the somewhat subjective approach to assessment since a result is little more than an ‘impression’.
By far the more effective is the analytic rubric.
There are some who would debate this; however, let us assume that we want to assess core skills in key assessment tasks – an analytic rubric is therefore necessary (and avoids the debate).
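To make the distinction concrete, here is a minimal Python sketch. The criterion names, levels, weights and point values below are hypothetical, invented purely for illustration: a holistic rubric yields a single overall impression, while an analytic rubric records a separate judgement per criterion, which can be combined into one mark without losing the per-criterion feedback.

```python
# Illustrative sketch only: criteria, levels, weights and point values
# are invented for this example, not taken from any real rubric.

LEVEL_POINTS = {"Pass": 1, "Credit": 2, "Distinction": 3, "High Distinction": 4}

def holistic_mark(overall_impression: str) -> int:
    """A holistic rubric gives one judgement for the whole task -
    there is no per-criterion detail to feed back to the student."""
    return LEVEL_POINTS[overall_impression]

def analytic_mark(scores: dict[str, str], weights: dict[str, float]) -> float:
    """An analytic rubric keeps a judgement per criterion; here we
    combine them into a weighted average while the per-criterion
    detail remains available as feedback."""
    total = sum(LEVEL_POINTS[scores[c]] * w for c, w in weights.items())
    return total / sum(weights.values())

scores = {"analysis": "Distinction", "evidence": "Credit", "structure": "High Distinction"}
weights = {"analysis": 2.0, "evidence": 1.0, "structure": 1.0}
mark = analytic_mark(scores, weights)  # weighted average on the 1-4 scale
```

The point of the sketch is that `scores` itself is the feedback: the student sees where the Credit came from, which a single holistic impression cannot show.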
Rubric writing is a logical process!
Here are the most common flaws in rubric writing
![](https://static.wixstatic.com/media/06b54e_27ba1fd9e47a49c685c28c1878473f2d~mv2.jpg/v1/fill/w_980,h_1469,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/06b54e_27ba1fd9e47a49c685c28c1878473f2d~mv2.jpg)
1. Validity – does the rubric assess what it says it will assess? What the student is being assessed on needs to be made clear in both the task and the rubric. This should never be a surprise to students!
A rubric should assess the key learning outcomes of the task.
This flaw occurs when there is a lack of ‘line of sight’ between the task and the rubric.
2. Measurability – language in rubrics is vital. For example, if you say you are going to assess ‘understanding’ what does that mean? You would need a second rubric to identify what this actually looks like in the student’s work.
This is considered an error of measurability and is a key principle in rubric writing – one that very few people grasp well.
Other common words such as 'consider' or 'demonstrate' fall into the same category (unless, in the latter case, you are referring to a process or physical task, such as executing a dance move or using a plumbing tool!).
There is a long list of such words that should be avoided to keep your rubric-writing-life simple!
And sometimes the solution is as simple as adding another verb – for example, ‘consider and explain’ or ‘consider and evaluate’. This effectively renders the word ‘consider’ a part of what the student does but not what they are assessed on. They will be assessed on their capacity to ‘explain’ or ‘evaluate’.
The purist might argue, ‘Is it necessary then?’ I leave that up to you to decide.
3. Consistency of Standards – in an analytic rubric you will have a range of descriptors (referred to as standards), usually running from Pass through to High Distinction (five levels) in the tertiary sector. (Three- and four-level rubrics exist, but we will deal with five levels in this example.)
This also depends on clarity of language to make sure that you define what each of these levels will look like.
In doing this, make sure you do not start assessing something else!
When descriptors become difficult to compose (and this happens often), it is very common for the language to shift to another component of assessment, so that your rubric begins to assess something else - for example, 'analysis' at the HD level but 'scope' at the Distinction level.
This is a very common rubric flaw - every level should assess the same thing!
There are many examples evident in rubrics globally - if you would like examples and further analysis of this, ask us in the comments.
4. Language consistency - if ethical ‘issues’ are referred to in a criterion, the standards must also refer to ‘issues’ rather than other versions or synonyms across the levels such as ‘questions’, ‘actions’, ‘problems’ or ‘activities’. Their meanings are different as is the application. It is important to be clear about what is being assessed and what it relates to, at each level.
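This kind of drift is mechanical enough to check. Below is an illustrative Python sketch (the descriptors and the key term are invented examples, not drawn from any real rubric) that flags any standard whose descriptor never mentions the criterion's key term:

```python
# Illustrative sketch: flag standards that drop the criterion's key term
# (here 'issues') in favour of a synonym. Descriptors are invented examples.

def inconsistent_levels(key_term: str, standards: dict[str, str]) -> list[str]:
    """Return the levels whose descriptor never mentions the key term."""
    return [level for level, text in standards.items()
            if key_term.lower() not in text.lower()]

standards = {
    "High Distinction": "Critically evaluates all ethical issues raised.",
    "Distinction": "Evaluates most ethical issues raised.",
    "Credit": "Describes the main ethical problems raised.",  # drifted to a synonym
    "Pass": "Identifies some ethical issues.",
}
flagged = inconsistent_levels("issues", standards)  # flags "Credit"
```

A keyword check like this obviously cannot judge meaning, but it catches the simple synonym drift described above before the rubric reaches students.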
5. Descriptors should always assess the SAME actions at each level. For example, if analysis and relevance are being assessed, the Pass student must be able to complete both. It is not an option for the Pass (or Credit or Distinction) student to simply not respond to ALL parts of the task - that would represent flawed assessment. The Pass response will, however, be at a lower level than the HD response.
![](https://static.wixstatic.com/media/06b54e_269cbb5c79f94540afb3ad15abd54365~mv2.jpg/v1/fill/w_980,h_653,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/06b54e_269cbb5c79f94540afb3ad15abd54365~mv2.jpg)
So, if you have had some difficult assessment experiences with your rubrics, take a critical look and see if you have fallen into any of these traps.
What problems have you encountered in rubric writing?
Let us know…
Be assured, there is always a solution, and it usually brings a significant 'aha!' moment, accompanied by a beatific smile of relief, when teachers begin to think of rubrics in this way.
It is still a tricky transition for most, and even trickier to embed these principles in rubric writing, but once mastered it makes assessment a breeze!
![](https://static.wixstatic.com/media/06b54e_89808daadbbb4a949a91c7578172a1d2~mv2.jpg/v1/fill/w_980,h_653,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/06b54e_89808daadbbb4a949a91c7578172a1d2~mv2.jpg)
Images courtesy Andre Hunter, Amanda Lucati, Ghaith Harstany, Sydney Rae (Unsplash).