Choosing the right marking template for your context

Looking at the learner and facilitator views to help you, as an author, choose the right marking template.

Written by Caitlin Foran

When thinking about marking templates, it helps to understand what both the learner and the facilitator will see. Let’s look at a task from each of those users’ perspectives.

In the examples below, we’ve got (in order):

  • A task that cannot be automarked – e.g. written response, recorded audio, draw on image.

  • A task that can be automarked, i.e. where you can set the correct answer – e.g. multiple choice, matching, cloze.

  • A task where you could set a correct answer, but haven’t.

Learner task list

Here we can see that tasks look roughly the same for learners (we've named the tasks to show how they're marked, but your tasks will have their usual names, like Nebula fill in the blanks).

When learners press submit after completing a task, it changes to Done in their list. It shows as Done regardless of whether a facilitator can or needs to intervene. This means a learner can't really tell which tasks are "supposed" to have facilitator intervention and which aren't. We’ve chosen to do this because we don’t want to set learners up to expect feedback: for some tasks a facilitator may give feedback if they want to, but doesn’t have to.

Facilitator marking list

As a facilitator, you'll see slightly different labels depending on whether the system can mark without you. For instance, with the automarked task – Cloze association w. correct answers – the system can do the marking without you needing to intervene.

For the task below, the learner’s task list showed Done, but for the facilitator it shows as Awaiting Feedback. This lets you, as a facilitator, see whether you have already marked this learner’s task.

When a facilitator opens a task to give feedback or mark, they'll see:

  • The task

  • The learner's response

  • An area where they set the mark (e.g. complete, percentage) and can add feedback (e.g. text, audio, file)

The exact options a facilitator sees in the area on the right depend on the marking template set by the author.

Our recommendations

Now that you’ve seen how things look for both a learner and a facilitator, you’ll need to think about:

  • How facilitators will want to work, i.e. will they want to add feedback to most manually marked tasks or just a few? How much time are they allocated for giving feedback?

  • Where facilitator intervention is absolutely required and where it is optional (and the relative proportion of each). Where will learners most need and benefit from feedback?

Facilitator workload

If facilitator workload is a concern, look especially at the manually marked tasks and ask whether they are all absolutely necessary in their current form. For instance:

  • Some might not be required to scaffold learners toward the assessment; consider deleting them.

  • Some could be converted to a similar automarked task. This is especially true for non-assessed tasks or for courses with lower-level cognitive outcomes (e.g. recall, identify).

  • Some might exist just for learners to capture optional notes or reflections. Consider reminding learners that they can write nothing, or convert some of these into rhetorical questions (e.g. marked up with the pullout style) and suggest learners use social notes if they have something to share.
