Leicester College News

Feeding back to HE students through screencasts: Student perceptions


Within Higher Education (HE), much focus is placed on providing good feedback to students, and the quality of that feedback is continuously assessed through student voice channels (such as the National Student Survey, run by Ipsos MORI) and through quality assurance processes. In recent years, many HE establishments have begun to use screencasts to provide feedback on written submissions, and research has shown that these offer several advantages for both students and assessors (Straub, 2000; Harper, Hannelore & Fernandez-Toro, 2012; Silva, 2012; Stannard, 2013; Vincelette, 2013). This short piece explores the subject and highlights some important implications for all college HE tutors.

There have been a number of studies examining student perceptions of written feedback. Many of these suggest that students may have difficulty deciphering feedback because of poor handwriting or vague comments (Beach & Friedrich, 2006; LaFontana, 1996; Bardine, Bardine & Deegan, 2000). Furthermore, students often look at grades without reading the feedback, or comments may be too detailed to extract clear meaning from, leading to student frustration (Brown & Glover, 2006; Mutch, 2003; Thaiss & Zawacki, 2006). Word-processed feedback does not necessarily solve this issue, since assessors can copy and paste comments across multiple students or areas of an assignment, a criticism that can also apply to Turnitin GradeMark (Blair, Curtis, Goodwin & Shields, 2013). Assessors, for their part, are often frustrated when students do not act upon previous feedback to improve the overall quality of their work.

The results of the National Student Survey have highlighted assessment and feedback as areas that students feel need to improve, a trend that has continued over the last two academic years (Marriott & Teoh, 2012; Unistats, 2016). These themes drove this project to examine student perceptions of the feedback they had received. The main aims of this research were to:

  • Identify how useful students find feedback on a summative piece of work and whether views change when screencasts are used.
  • Explore screencasts as a method of delivering feedback and identify the optimum way for students to receive them.
  • Work with students to develop an enhanced feedback approach that was beneficial to both the student and the assessor.

The research compares student perceptions of written and screencast feedback across two groups of HE students taking a single module of the foundation degree they study within a predominantly further education college environment.

Summary of results

Overall, the results suggested that students generally preferred screencast feedback when received by email, although further exploration is needed of the optimum software to use and the best way to share screencasts with students. Screencasts in this study were made using free software called Jing, but Screencast-o-matic might be a better tool, since it allows the user to download the file in a number of formats, giving greater flexibility compared to Jing. This would enable screencasts to be shared via a virtual learning environment or a platform such as ProMonitor, as was found to be the case in other institutions.

Only in the screencast group did students describe the feedback as consistently personalised; they also reported being more likely to act on it and to reflect on what they had received. Comments from dyslexic students in particular suggested that screencasts were more memorable, as these students could often get lost in detailed written feedback. Students also highlighted that screencasts gave them greater clarity, leaving them less likely to have questions about the assessor's comments.

The views expressed about screencasts were positive from all participants in the study, whereas views about written feedback were much more varied. Some written feedback that students had experienced previously was quite brief, or used generic comments that did not clearly identify how the student could improve a particular aspect of their work. Some students also noted that they would not always read feedback if they had passed a piece of work, but that they would watch a video regardless.

Conclusion

In summary, the results of this research suggest that screencasts should be limited to five minutes in length and that grades should be given at the end of the screencast. This saves marking time for the assessor and encourages students to actively review their feedback. The students in this study likened screencasts to “having a conversation” with their tutor and, as a result, felt more empowered to discuss feedback, which was not always the case with written feedback. This in turn led to greater dialogue between student and assessor, helping to identify gaps in knowledge. There may be wider implications too, not only in helping to enhance student learning, but also in encouraging staff and students to see themselves more as partners in learning.

Cassandra Reeves-Moore is HE Programme Lead, Foundation Degree: Children, Families and Community Health, Leicester College.

References

Bardine, B., Bardine, M., & Deegan, E. (2000). Beyond the red pen: Clarifying our role in the response process. English Journal, 90(1), 94–101.
Beach, R., & Friedrich, T. (2006). Response to writing. In Handbook of Writing Research (pp. 222–234). New York: Guilford Press.
Blair, A., Curtis, S., Goodwin, M., & Shields, S. (2013). What feedback do students want? Politics, 33(1), 66–79.

source: https://www.aoc.co.uk/july-2016-feeding-back-higher-education-students-through-screencasts-student-perceptions-cassandra

 
