Vol. 30 No. 2 (2015)

A Preliminary Evaluation of Using WebPA for Online Peer Assessment of Collaborative Performance by Groups of Online Distance Learners

Jo-Anne Murray
University of Glasgow
Sharon Boyd
University of Edinburgh
Published November 19, 2015
How to Cite
Murray, J.-A., & Boyd, S. (2015). A Preliminary Evaluation of Using WebPA for Online Peer Assessment of Collaborative Performance by Groups of Online Distance Learners. International Journal of E-Learning & Distance Education / Revue Internationale Du E-Learning Et La Formation à Distance, 30(2). Retrieved from https://www.ijede.ca/index.php/jde/article/view/920


Collaborative assessment has well-recognised benefits in higher education, and in online distance learning this type of assessment may be integral to collaborative e-learning and may strongly influence students' relationship with learning. While there are known benefits associated with collaborative assessment, its main drawback is that students perceive that their individual contributions go unrecognised. Several methods can be used to overcome this; for example, the teacher can evaluate each individual's contribution. However, students may deem teacher assessment unreliable, since most group work is not done in the presence of the teacher (Loddington, Pond, Wilkinson, & Wilmot, 2009). Therefore, students' assessment of their own and their peers' performance and contribution to the assessment task, also known as peer moderation, can be a more suitable alternative. A number of tools can be used to facilitate peer moderation online, such as WebPA, a free, open-source, online peer assessment tool developed by Loughborough University. This paper is a preliminary evaluation of online peer assessment of collaborative work undertaken by groups of students studying online at a distance at a large UK university, where WebPA was used to facilitate the process. Students' feedback on the use of WebPA was mixed, although most found the software easy to use, encountered few technical issues, and reported that they would be happy to use it again. The authors found WebPA to be a beneficial peer assessment tool.


  1. Blair, A. & McGinty, S. (2013). Feedback-dialogues: exploring the student perspective. Assessment & Evaluation in Higher Education, 38(4), 466-476.
  2. Bourner, J., Hughes, M., & Bourner, T. (2001). First-year undergraduate experiences of group project work. Assessment and Evaluation in Higher Education, 26(1), 19-39.
  3. Davies, W. M. (2009). Groupwork as a form of assessment: common problems and recommended solutions. Higher Education, 58, 563-584.
  4. Dolmans, D., Wolfhagen, I., van der Vleuten, C., & Wijnen, W. (2001). Solving problems with groupwork in problem-based learning: Hold on to the philosophy. Medical Education, 35(9), 884-889.
  5. Knowles, E., & Kerkman, D. (2007). An investigation of students' attitude and motivation toward online learning. Student Motivation, 2, 70-80.
  6. Loddington, S., Pond, K., Wilkinson, N., & Wilmot, P. (2009). A case study of the development of WebPA: An online peer-moderated marking tool. British Journal of Educational Technology, 40(2), 329-341.
  7. Luxton-Reilly, A. (2009). A systematic review of tools that support peer assessment. Computer Science Education, 19(4), 209-232.
  8. McConnell, D. (2002). The experience of collaborative assessment in e-learning. Studies in Continuing Education, 24(1), 73-92.
  9. Nicol, D., & MacFarlane-Dick, D. (2006). Formative assessment and self-regulated learning. Studies in Higher Education, 31(2), 199-218.
  10. Ogden, J., & Lo, J. (2011). How meaningful are data from Likert scales? An evaluation of how ratings are made and the role of the response shift in the socially disadvantaged. International Journal of Health Psychology, 12, 350-361.
  11. Race, P. (2001). A briefing on self, peer and group assessment. ILTSN Generic Centre. Retrieved January 10, 2013, from http://www.phil-race.com/.
  12. Russell, M., Haritos, G., & Combes, A. (2006). Individualising students' scores using blind and holistic peer assessment. Engineering Education, 1(1), 50-59.
  13. Strong, J. T., & Anderson, R. E. (1990). Free riding in group projects: Control mechanisms and preliminary data. Journal of Marketing Education, 12(2), 61-67.
  14. Wilmot, P., & Crawford, A. (2007). Peer review of team marks using a web-based tool: An evaluation. Engineering Education, 2(2), 59-66.
  15. Yi-Ming Kao, G. (2012). Enhancing the quality of peer review by reducing student "free riding": Peer assessment with positive interdependence. British Journal of Educational Technology, 44(1), 112-124.