A Preliminary Evaluation of Using WebPA for Online Peer Assessment of Collaborative Performance by Groups of Online Distance Learners

Jo-Anne Murray and Sharon Boyd

Vol. 30, No. 2

Abstract

Collaborative assessment has well-recognised benefits in higher education and, in online distance learning, this type of assessment may be integral to collaborative e-learning and may have a strong influence on the student’s relationship with learning. While there are known benefits associated with collaborative assessment, the main drawback is that students perceive that their individual contribution to the assessment is not recognised. Several methods can be used to overcome this; for example, something as simple as the teacher evaluating an individual’s contribution. However, teacher assessment can be deemed unreliable by students, since the majority of group work is not usually done in the presence of the teacher (Loddington, Pond, Wilkinson, & Wilmot, 2009). Therefore, students’ assessment of the performance and contribution of themselves and their peer group in relation to the assessment task, also known as peer moderation, can be a more suitable alternative. A number of tools can be used to facilitate peer moderation online, such as WebPA, a free, open source, online peer assessment tool developed by Loughborough University. This paper is a preliminary evaluation of online peer assessment of collaborative work undertaken by groups of students studying online at a distance at a large UK university, where WebPA was used to facilitate this process. Students’ feedback on the use of WebPA was mixed, although most of the students found the software easy to use, with few technical issues, and the majority reported that they would be happy to use it again. The authors found WebPA to be a beneficial peer assessment tool.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

Introduction

Providing students with opportunities to work collaboratively is of well-known benefit to their learning. In online learning, interaction between students and opportunities for collaborative work appear to be even more important, helping to prevent learners from feeling isolated in their studies. Thus, online collaborative assessment is an emerging form of assessment, particularly in higher education. It is proposed that, used effectively, online collaborative assessment is an efficient way of managing the growth in student numbers in higher education, in particular by reducing the time taken to grade assignments and thereby also enhancing the timeliness of feedback.

Despite the pedagogical advantages of online collaborative learning being well known (McConnell, 2002), it is hampered by problems of unequal participation, and allocating fair and equitable grades for group effort remains a major problem. Whilst some say that in true teamwork individuals should stand or fall by judgement of the team output, others recognise that, no matter how sound the argument for this may be, many students express strong opposition to this approach and insist on individually assigned grades that can be demonstrably associated with the efforts and abilities of each student (Yi-Ming Kao, 2012). In fact, studies have shown that it is not necessarily the complexity and nature of the task, the size of the group, or indeed cultural differences, but recognition of individual effort that most greatly affects an individual’s contribution to the collaborative assessment (Davies, 2009). It has also been reported that the use of peer-moderated grading reduces student complaints associated with group assessments (Loddington et al., 2009). Moreover, peer assessment has been identified as potentially improving a number of transferable skills, including decision making, negotiation, communication, empathy, and delegation (Russell, Haritos, & Combes, 2006).

It has been argued that students should be afforded the opportunity to conduct anonymous peer appraisals of their group members' contributions, with the results modifying the final grade awarded to each individual (Strong & Anderson, 1990). It is often very difficult for an academic to assess each group member’s contribution to a collaborative assessment, and it has been proposed that, when it comes to measuring individuals’ relative contributions to group work, the only people who know what those contributions were are the students themselves (Race, 2001). This is particularly true in online distance learning, where the work typically takes place out of view of the academic. Thus, peer moderation of work is rapidly advancing in higher education, and online assessment systems have been successfully used to facilitate it (Luxton-Reilly, 2009). WebPA is a free, open source, online peer assessment tool developed by Loughborough University (Loddington et al., 2009).

WebPA allows a tutor to grade and provide feedback on group tasks, with the final individual student result calculated by the WebPA algorithm using a peer result ‘weighting’ set by the tutor. Thus, once all group members have entered their grades and the tutor has provided a group grade, WebPA calculates a grade for each individual in the group based on the self and peer grades provided (see http://webpaproject.lboro.ac.uk/ for further information on this process). This provides a unique result for each student based on peer feedback on their activity within the group. With WebPA, students grade their peers anonymously against prescribed criteria set in advance by the course team (e.g., team work skills, research skills). Loughborough students reported that this made them feel involved in the assessment process, encouraged positive team dynamics, and provided a ‘fairer’ grade with more rapid feedback (Loddington et al., 2009). Whilst work has been done to evaluate the use of WebPA with on-campus students (Loddington et al., 2009), very little has been done to investigate its use with online distance education students.
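
To make the calculation concrete, the following minimal Python sketch follows the normalisation approach described on the WebPA project site. It is an illustration only: the use of a single combined score per assessor/assessee pair (rather than per-criterion scores) and all function and variable names are simplifying assumptions made here, not details of the WebPA implementation.

```python
def webpa_scores(ratings):
    """Compute a WebPA-style score for each group member.

    ratings maps each assessor to the points they awarded each member
    (including themselves), e.g. {"ann": {"ann": 4, "ben": 3, "cat": 5}, ...}.
    A score of 1.0 represents an exactly average contribution.
    """
    scores = {}
    for assessor, awarded in ratings.items():
        total = sum(awarded.values())
        for assessee, points in awarded.items():
            # Normalise so each assessor distributes one unit of credit.
            scores[assessee] = scores.get(assessee, 0.0) + points / total
    # If some members did not submit ratings, scale up so that an
    # average contribution still scores 1.0.
    factor = len(scores) / len(ratings)
    return {member: s * factor for member, s in scores.items()}


def final_grade(webpa_score, group_grade, weighting):
    """Blend the peer-moderated mark with the tutor's group mark."""
    return weighting * (webpa_score * group_grade) + (1 - weighting) * group_grade
```

With this blend, a weighting of 0 returns the tutor's group grade unchanged, while a weighting of 1 makes each individual's grade depend entirely on the peer-moderated score.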

This paper describes the use of WebPA with postgraduate students studying online at a distance. Two major research questions guided this study:

  1. In what ways, if at all, do online distance learners find peer assessment beneficial?
  2. How can WebPA support the peer assessment process for online distance learners?

Methodology

Participants

The study population comprised groups of postgraduate students undertaking an online distance education master’s degree at a large UK university open to international students. These students were registered in part-time programs spanning three years, with years one and two comprising students engaging with the taught element of the program and those in year three undertaking the dissertation phase. The program consisted of six 20-credit taught courses and a 60-credit dissertation course. Within the university described in this paper, a course refers to a unit or module within a degree program. There were 60 students in the program. Ethical approval was sought and received from the university’s research ethics committee.

Study Design

Research was carried out using an explanatory sequential mixed methods approach (Creswell, 2014), starting with a quantitative investigation of the tool (a Likert-scale survey), followed by a qualitative analysis of themes arising from the staff-student committee meeting. This approach was a good fit with the collaborative process of course feedback and development in the program, whereby students respond to an end-of-course feedback survey, the results and action points are shared with students, and these actions are discussed and amended in the staff-student committee meeting as part of a feedback dialogue. This participatory process reverses the traditional staff-student relationship, allowing students to take ownership of the development of the course and to develop academic skills in giving effective feedback (Blair & McGinty, 2013). The mixed methods approach not only allowed the authors to explore how the tool can work for the student group but also provided a method of triangulation in exploring the research questions.

Description of Peer Assessment

Students undertook an online collaborative assessment as part of their studies, which involved creating a wiki site discussing a relevant topic of their choice. The assessment was worth 50 percent of the total grade for their course. Students self-allocated into groups of three to four and, after submission of the group assignment, graded each other using WebPA, with a weighting factor of 50 percent. The course team established grading criteria that characterised group working, time management, problem solving, and communication. The grading criteria used for the peer assessment are given in Table 1. These criteria were based around examining group behaviour and dynamics, and measured elements that take place during non-contact time, thus promoting a student-centred approach.

Reasons for Using the WebPA Tool

The teaching team had used WebPA as a mechanism to manage self- and peer-assessment several times previously and was therefore confident about its application in postgraduate online teaching. WebPA was chosen for its simple collection and processing of grades from a student cohort. For these students, peer assessment was seen as a way of contributing to their learning experience. The calculations used in WebPA were considered a fair and efficient way of deriving individual grades for each team member. A further driver was that the peer assessment process was seen as a way of identifying free riders within a group.

Table 1. The grading criteria used with WebPA (score range 1-5) adapted from Loddington, Wilkinson, Glass & Wilmot (2008)


For each criterion, the explanatory text for scores 1 (lowest) to 5 (highest) is as follows:

Ability to generate ideas
  1. Contributed no useful original ideas
  2. Contributed some helpful ideas
  3. Made an average contribution in this respect
  4. Made an above average contribution in this respect
  5. Generated a wealth of realistic ideas throughout

Ability to search for information
  1. Made no effort beyond a simple internet keyword search
  2. Made some effort to carry out research
  3. Made an average contribution in this respect
  4. Searched and shared a reasonable number of resources
  5. Searched a wide variety of sources tenaciously and with success

Ability to share research findings clearly
  1. Contributed no useful information
  2. Contributed some useful information
  3. Made an average contribution in this respect
  4. Shared some above-average research findings
  5. Shared all their research clearly and supported the group activity

Participation
  1. Unreliable, did not attend group meetings or respond to discussions
  2. Responded slowly or demonstrated below average participation
  3. Made an average contribution in this respect
  4. Responded quickly and usually present
  5. Reliable, always present when required by the team unless prevented by illness

Contribution to the assignment
  1. Made only a small contribution to a poor standard
  2. Made a small, but helpful, contribution
  3. Made an average contribution in this respect
  4. Made an above average contribution to a reasonable standard
  5. Completed some of the most challenging sections to a high standard

Good team player
  1. Did not show much interest in activity, did not take part
  2. Showed little interest in activity, took part to some degree
  3. Made an average contribution in this respect
  4. Worked well with all team members and demonstrated some initiative
  5. Worked well with all team members, showed good initiative and time management in the tasks they were assigned within the group

Use of the WebPA Tool

WebPA was integrated with Learn, the virtual learning environment. Before the assessment took place, students were provided with information on the reasons for using WebPA and on how to use it; the course team found that providing this information increased submission rates on WebPA. Students undertook their group assignment using tools such as Skype, Second Life, Google Docs, and Dropbox to coordinate their activities across time zones. They were allowed to self-select their groups via sign-up sheets on Learn. At the closing date, WebPA combined the student grades with the tutor grades and calculated the final individual result based on the ‘weighting’ percentage set by the tutor. A penalty for non-completion of the WebPA grading was implemented, whereby any individual who chose not to provide feedback had their grade reduced by ten percent.
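
As a purely hypothetical worked example of this marking scheme (the marks below are invented for illustration), a group grade of 65, the 50 percent weighting, and the ten percent non-completion penalty combine as follows:

```python
GROUP_GRADE = 65.0  # hypothetical tutor mark for the group wiki
WEIGHTING = 0.50    # peer result 'weighting' set by the tutor
PENALTY = 0.10      # deduction for not completing the WebPA grading

def individual_mark(webpa_score, submitted_peer_grades):
    # Blend the peer-moderated mark with the raw group mark.
    mark = WEIGHTING * (webpa_score * GROUP_GRADE) + (1 - WEIGHTING) * GROUP_GRADE
    if not submitted_peer_grades:
        mark *= 1 - PENALTY  # ten percent reduction for non-completion
    return mark

print(individual_mark(1.15, True))   # strong contributor -> 69.875
print(individual_mark(0.80, False))  # weaker contributor who skipped grading -> ~52.65
```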

Student Survey

A self-completion survey was designed to assess students’ use and perception of WebPA specifically and of peer assessment in general. Data were gathered via an online questionnaire using the Survey Monkey tool. The survey mainly consisted of a series of Likert-scale questions offering a choice among fixed alternatives. There were also some free-text comment sections and a small number of open questions, along with some questions on demographics. The survey questions were based on those reported by Loddington et al. (2009). Likert-scale questions were used since they are generally easily understood by respondents and are an efficient and inexpensive way of obtaining data, especially in an online format; the responses are quantifiable and easily coded for analysis. Quantitative research using Likert scales is, however, not without its limitations, with researchers highlighting a number of psychometric and conceptual issues (Ogden & Lo, 2011). It is therefore important to interpret data generated in this way in the context of participants’ decision-making processes. Pre-testing of the questionnaire was conducted to ensure that respondents understood the questions in the way the study intended, and the questionnaire was revised according to the feedback from pre-testing. Once finalised, the questionnaire was sent to the study participants by email for completion at the end of the online course.

The results of all end-of-course surveys are shared with students in a revised format, using graphs to demonstrate the key themes highlighted, and listing suggested action points resulting from any issues reported. These results and action points are discussed at the staff-student committee meeting.

Staff-student Committee Meeting

The staff-student committee meeting is held twice per academic year, providing students and staff with an opportunity to discuss the courses in light of student feedback, and agree on action points within a mutually agreed timescale for both staff and students as stakeholders in the program. Student representatives poll the student group for issues to discuss, structure the agenda, and chair the meeting. As this meeting is organised early in the academic year, students are familiar with the procedure and aware that they have the opportunity to review the feedback they have provided. Students can expand on comments made in the open-text fields in the course feedback, clarify any comments that they feel need it, or highlight issues that have already been resolved.

The meeting is normally held using the Skype instant messaging (IM) facility (http://www.skype.com/en/), as this is familiar to all students, easy to use, and works well with limited bandwidth or poor connectivity. Skype IM also provides a simple method for producing a transcript at the end of the meeting, as the text comments can be copied and pasted into a text document for sharing with those unable to attend. This provides a useful method for capturing student comments as data and allows the students to review and amend them for clarification, ensuring the statements correctly support their views; this is particularly important when encouraging a participatory approach to student engagement in learning design (Seale, 2009).

The staff-student committee can be viewed as a form of focus group, in that it facilitates discussion with the group on key topics. Where this differs from a standard focus group approach is in the determination of the topics for discussion, as these are set by the student representative in response to peer input, rather than by staff.

Results

Use of WebPA

The course team found WebPA easy to use and time-saving, reducing the amount of grading involved for a cohort of students and allowing prompt and more detailed feedback to be provided. In addition, there was no need to calculate an individual score for each student based on self and peer feedback, as WebPA did this automatically. Being able to recognise individual efforts in the group task was valuable, and WebPA made this a very straightforward process.

Survey Data

Twenty-nine students responded to the survey, a response rate of 48 percent. The majority of the students (68%) either disagreed or strongly disagreed with preferring group work to individual assessment, although 17 out of 29 (58%) stated that they felt there was an advantage to group work (Table 2); from the free-text comments, this appeared to be because students felt that group work develops team-working skills and that a group has a broader knowledge base than an individual.

Table 2. Students’ evaluation of group working (n = 29)


Given the choice, I prefer group assessment to individual assessment
  SA 1; A 3; NSF 5; D 8; SD 11
I think there is an advantage to using group assessment
  SA 8; A 5; NSF 3; D 6; SD 6

SA: strongly agree; A: agree; NSF: no strong feelings; D: disagree; SD: strongly disagree

Table 3 reports students’ evaluation of peer assessment. Some students (41%) preferred peer assessment of group work to tutor-only grading, although others (24%) had no strong feelings on this. When asked if they felt peer assessment was beneficial to their learning, there was an equal split (27%) between those who did and those who did not, with the rest (24%) having no strong feelings. The students were also split in their opinions on peer assessment improving communication within the group, with equal numbers (33%) feeling that communication did and did not improve. About a third of students (35%) reported that peer assessment improved group dynamics, and a similar proportion (34%) reported that peer assessment improved participation within the group. Peer appraisal skills were considered to be improved by the same proportion of students (38%) as had no strong feelings on the subject. When asked if peer assessment improved problem-solving skills, the largest group of students (48%) had no strong feelings; a similar response was seen for team-working skills, where the largest group (31%) had no strong feelings. However, a substantial number of students (45%) did feel that peer assessment improved their self-reflection and appraisal skills.

Table 3. Students’ evaluation of peer assessment (n = 29)


Given the choice, I prefer peer assessment of group work to tutor-only grading of the assessment
  SA 5; A 7; NSF 7; D 4; SD 6
I found the peer assessment process beneficial to my learning
  SA 5; A 6; NSF 7; D 3; SD 8
I found the peer assessment improved my communication within the group
  SA 6; A 4; NSF 9; D 6; SD 4
I found peer assessment improved group dynamics
  SA 2; A 8; NSF 6; D 7; SD 6
I found peer assessment improved participation within the group
  SA 3; A 7; NSF 8; D 6; SD 5
I found peer assessment improved my peer appraisal skills
  SA 2; A 9; NSF 11; D 5; SD 2
I found peer assessment improved my problem solving skills
  SA 2; A 2; NSF 14; D 4; SD 7
I found peer assessment improved my self-reflection/appraisal skills
  SA 2; A 11; NSF 8; D 4; SD 4
I found peer assessment improved my team working skills
  SA 4; A 7; NSF 9; D 5; SD 4

SA: strongly agree; A: agree; NSF: no strong feelings; D: disagree; SD: strongly disagree

Table 4 shows students’ evaluation of WebPA. Over 76% of students reported that they would use WebPA again, and the majority found it secure and easy to navigate. Over 80% of respondents also reported no technical issues surrounding its use.

Table 4. Students’ evaluation of WebPA (n = 29)


WebPA was secure
  SA 17; A 10; NSF 2; D 0; SD 0
Enough information was provided on using WebPA
  SA 15; A 13; NSF 1; D 0; SD 0
WebPA was easy to navigate
  SA 17; A 12; NSF 0; D 0; SD 0
There were no technical issues with using WebPA
  SA 18; A 7; NSF 0; D 4; SD 0
I would be happy to use WebPA in future assessments
  SA 8; A 14; NSF 4; D 1; SD 2

SA: strongly agree; A: agree; NSF: no strong feelings; D: disagree; SD: strongly disagree

Thematic Analysis

A basic thematic analysis was carried out on the transcripts of the staff-student committee meetings and feedback. The core themes outlined in Table 5 mirror those observed in the survey: students focused either on ease of use (WebPA as a tool) or on the impact of peer assessment on grades (higher weighting, value of the algorithm).

Table 5. Selected statements demonstrating WebPA-related discussion themes

Theme: WebPA as a software tool (focus on ease of use and integration)

“I would just like to say how much I have learned from all the different IT platforms etc. we were introduced to throughout the course.”

“A great tool to use, straight forward once technical hitches understood on how to access, also good to use other IT software, another string to the bow so to speak.”

“I had no technical issues. It was easy to complete.”

Theme: WebPA as part of the peer assessment process (focus on the peer assessment activity)

“I like that we can grade our peers. It takes the sting away from having a freeloader in your group. You don't have to get annoyed. You just mark them accordingly. Similarly, you can 'reward' a hard-working, capable star.”

“I prefer to be responsible for my own marks, good or bad. I don't want to have the risk of being downgraded due to poor work from other students and conversely I don't want marks that I would not have obtained by myself due to superior work from other students.”

Difficulties with, or dislike of, group work were clarified in the additional comments, emphasising the wide range of commitments affecting part-time, online distance students’ availability for real-time discussion and group interaction.

I don’t think it’s fair that the peers grade each other as due to individual time constraints some people are not able to be able to be as involved when the others want them to be causing a negative feedback from the other peers.

I'm afraid that I really do not like group work especially online as it relies very heavily on having internet access which is ok until there is a problem.

Discussion

The aim of this paper was to evaluate the use of WebPA with groups of online distance learners. The majority of students reported that they would use WebPA again; however, this does not fit with the smaller number of students who liked having their grades influenced by their peers. This anomaly could be explained by WebPA’s ease of use: students stated they would use it again since there were few technical issues and any problems were easily resolved. Fewer students than expected liked having their grade influenced by others. Certainly, some say that in true teamwork individuals should stand or fall by judgement of the team output; however, others have reported that, no matter how sound the argument for this may be, many students express strong opposition to this approach and insist on individually assigned grades that can be demonstrably associated with the efforts and abilities of each student (Yi-Ming Kao, 2012). Allocating equal grades has been thought to spark fears that “lazy” students may benefit from the efforts of teammates, or that particularly diligent students may have their efforts diluted by less diligent team members (Wilmot & Crawford, 2007). Other studies have stated that it is not necessarily the complexity and nature of the task, the size of the group, or, indeed, cultural differences, but recognition of individual effort that most greatly affects an individual’s contribution to the collaborative assessment (Davies, 2009). This was evidenced by the following comment in the current study:

I like that we can grade our peers. It takes the sting away from having a freeloader in your group. You don't have to get annoyed. You just mark them accordingly. Similarly, you can 'reward' a hard-working, capable star.

In an online setting, this issue of “free-riding”, defined as a non-performing group member benefiting from the accomplishments of the other members with little input of their own (Morris & Hayes, 1997), may be exacerbated by the fact that students may never get the chance to meet face-to-face. Another aspect to consider is the “sucker effect”, whereby a group member may reduce their input to a task when they experience free-riding (Mulvey & Klein, 1998). Indeed, competent students may even choose to fail as a group rather than be a “sucker” (Kerr, 1983). Penalties can also be used to deter free-riding, such as “divorcing” free-riders from groups. Nevertheless, studies have shown that students are less willing to confront free-riders in this way, but are willing to anonymously award poor grades to them (Strong & Anderson, 1990).

Whilst group working has reported benefits for learners studying online at a distance (McConnell, 2002), there has been little work done to evaluate the use of peer assessment with distance learners. Distance learners differ from traditional on-campus students in that they typically study part-time and fit their studies around work and family commitments. Also, unlike traditional campus-based student groups, distance learners cannot always set aside time to study at the same time as other members of their group. Moreover, the flexible nature of distance education means that students are studying from around the globe and, thus, time differences also affect their ability to work synchronously on group assessments. It has been said that distance learners are often required to take more responsibility for their own learning than on-campus students; they cannot simply follow what other students are doing, since they must log into the VLE on their own initiative and interact with fellow students and their tutor of their own accord (Knowles & Kerkman, 2007). It is possible that greater effort is required of distance learners to participate in group work compared to on-campus students. The flexibility afforded in distance education allows learners to come to their learning at a time when they are motivated and ready to learn, which has many benefits for overall learning, but may affect their perceptions of being able to work collaboratively on assessed group projects. Therefore, whilst collaborative work may be essential for distance learners to promote community building, perhaps graded group work is more problematic for this group of students, who have unique study patterns. This was evidenced by the comment:

I don’t think it’s fair that the peers grade each other as due to individual time constraints some people are not able to be able to be as involved when the others want them to be causing a negative feedback from the other peers.

Less than half of the students felt that the group grading was helpful to their learning. Certainly, others have reported group assessments to be an authentic form of assessment in terms of a student's later employability (Bourner, Hughes, & Bourner, 2001), with the skills involved in online collaborative work fostering the formation of lifelong learning skills. However, it is difficult to ascertain from this preliminary study whether the group work itself was helpful to learning, or whether the grading criteria used for the WebPA prompted students to consider their group-working skills so that their learning benefited in that way. It would therefore appear logical to look further at distance learners’ perceptions of group work separately from peer moderation of group work, as students’ perception of one may be affecting their thoughts on the other.

Online collaborative assessment appears to fit well with the seven principles of good feedback practice provided by Nicol and MacFarlane-Dick (2006) and seems well suited to promoting self-regulated learning, which is essential for the distance learner. In particular, it demonstrates the principles relating to teacher and peer dialogue and to self-assessment. WebPA facilitates both peer and self-assessment and allows teachers to foster student autonomy through peer-supported learning and recognition of individuals’ efforts. Other methods used by the authors to facilitate this include wiki-based tasks with activity monitored via the wiki history, but this proved difficult to grade and did not scale easily with increasing numbers of online distance learners. Collaborative online assessment can also be used for summative assessment purposes and is considered to promote a “deep”, “active” approach to learning as opposed to a “passive” one (Davies, 2009). Moreover, collaborative learning is reported to promote experiential learning (McGraw & Tidwell, 2001; Lee et al., 1997), and collaborative assessment can promote the construction of knowledge and enhance problem-based learning among students (Dolmans, Wolfhagen, van der Vleuten, & Wijnen, 2001).

Thus, it would seem that online peer assessment of collaborative work has potential benefits for online distance learners and that WebPA has the ability to facilitate this process.

Conclusion

WebPA offered a simple yet effective approach to promoting student autonomy and engagement with the assessment and feedback process. The authors found WebPA beneficial as a peer assessment tool with a diverse group of students. Student feedback on using WebPA was, on the whole, positive, and the majority of students were happy to use the online system. In particular, students focussed on the ease of use of WebPA and appreciated the sense of fairness fostered by grading against a rubric, echoing the study carried out by Panadero, Romero, and Strijbos (2013).

Further work is required to investigate learners’ perceptions of using WebPA with a larger student cohort. Given the issues raised by the students about the pressures of organising group work at a distance, together with an acknowledgement of the benefits in reducing isolation and fostering a sense of community, working with students to determine the best phrasing of criteria to allow for flexibility in methods of participation may resolve the negative perception of WebPA reported by some students. As stated earlier, by working with students the course team can gain a clearer understanding of which aspects of the grading process are most helpful in supporting their learning.


References

  1. Blair, A., & McGinty, S. (2013). Feedback-dialogues: Exploring the student perspective. Assessment & Evaluation in Higher Education, 38(4), 466-476.
  2. Bourner, J., Hughes, M., & Bourner, T. (2001). First-year undergraduate experiences of group project work. Assessment and Evaluation in Higher Education, 26(1), 19-39.
  3. Davies, W. M. (2009). Groupwork as a form of assessment: Common problems and recommended solutions. Higher Education, 58, 563-584.
  4. Dolmans, D., Wolfhagen, I., van der Vleuten, C., & Wijnen, W. (2001). Solving problems with groupwork in problem-based learning: Hold on to the philosophy. Medical Education, 35(9), 884-889.
  5. Knowles, E., & Kerkman, D. (2007). An investigation of students’ attitude and motivation toward online learning. Student Motivation, 2, 70-80.
  6. Loddington, S., Pond, K., Wilkinson, N., & Wilmot, P. (2009). A case study of the development of WebPA: An online peer-moderated marking tool. British Journal of Educational Technology, 40(2), 329-341.
  7. Luxton-Reilly, A. (2009). A systematic review of tools that support peer assessment. Computer Science Education, 19(4), 209-232.
  8. McConnell, D. (2002). The experience of collaborative assessment in e-learning. Studies in Continuing Education, 24(1), 73-92.
  9. Nicol, D., & MacFarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
  10. Ogden, J., & Lo, J. (2011). How meaningful are data from Likert scales? An evaluation of how ratings are made and the role of the response shift in the socially disadvantaged. International Journal of Health Psychology, 12, 350-361.
  11. Race, P. (2001). A briefing on self, peer and group assessment. LTSN Generic Centre. Retrieved January 10, 2013, from http://www.phil-race.com/
  12. Russell, M., Haritos, G., & Combes, A. (2006). Individualising students' scores using blind and holistic peer assessment. Engineering Education, 1(1), 50-59.
  13. Strong, J. T., & Anderson, R. E. (1990). Free riding in group projects: Control mechanisms and preliminary data. Journal of Marketing Education, 12(2), 61-67.
  14. Wilmot, P., & Crawford, A. (2007). Peer review of team marks using a web-based tool: An evaluation. Engineering Education, 2(2), 59-66.
  15. Yi-Ming Kao, G. (2012). Enhancing the quality of peer review by reducing student "free riding": Peer assessment with positive interdependence. British Journal of Educational Technology, 44(1), 112-124.

Jo-Anne Murray is Associate Dean for Digital Education in the College of Medical, Veterinary and Life Sciences at the University of Glasgow. Jo-Anne has developed and delivered many online distance education programmes and has a keen interest in collaborative online media. Jo-Anne also has experience in running MOOCs and, as such, is very interested in open educational resources. E-mail: Jo-Anne.Murray@glasgow.ac.uk

Sharon Boyd is a lecturer at the Royal (Dick) School of Veterinary Studies (R(D)SVS) with a specific remit to support students working online at a distance. She has a PgCert in University Teaching and an MSc in eLearning, and is currently working towards her doctorate in education at Edinburgh. Her research interests are sustainable and digital education. E-mail: Sharon.Boyd@ed.ac.uk