Gathering and Using Quality-of-Student-Experience Data on Distance Education Courses

Margaret Rangecroft, Peter Gilroy, Tony Tricker, Peter Long

VOL. 17, No. 1, 75-83

Abstract

This article reports on findings from a three-year research project that examined ways of improving student evaluation of distance education courses. The focus is on the perception of the evaluation tool by three distance education course directors and how it might be used to improve the quality of their programs.


Introduction

Among the many different ways of assessing the quality of a course, by far the most common is the tried-and-tested system of asking students variations on the question “How was it for you?” A recent example of what might be called the standard approach is provided by Wall (2001), who sets out three versions of the approach: the university-wide review based on a student survey, the staff-student consultative committee, and informal feedback from students to staff, to which she adds the notion of a student focus group. The information thus produced is usually recorded on paper (although in the case of Wall’s focus group approach it was recorded on tape) and then presented as a report to the course director.

One issue that this approach raises, as Wall (2001) points out, is that “the results cannot be made known quickly” (p. 30). Our concern, however, is with different although related issues that are the responsibility of distance education course teams. How might the teams use student survey information drawn from distance education courses both to understand how their students perceive their course and to identify how the student experience of that course might be improved?

All four members of the Template research group responsible for this article are experienced directors and module tutors (that is, module designers and instructors) on distance education courses. All four have been dissatisfied with the “standard” student survey tools they were using, not least because those tools seemed singularly inappropriate for the types of courses and students with whom they were working. For example, in most universities the results of a university-wide review would inevitably be heavily skewed toward the perceptions of face-to-face students, given the numbers of such students. Neither the staff-student consultative committee nor the focus group approach to student evaluation is well suited to distance students, especially those based overseas, because it is difficult for such students to be properly representative of their courses, leaving aside the question of how, given the distances and time differences involved, they would be able to meet.

The point being labored here is that as distance education tutors, we were not primarily concerned with Wall’s (2001) problem of accessing the results of student surveys quickly. Our problem was in finding an evaluation tool that would be relevant to the unique situation in which distance education students find themselves, a situation their courses are supposed to be designed to accommodate. Relevance, not speed, was our prime concern, although it was of course important that data should be available as quickly as possible before its sell-by date had expired. Moreover, given the gradual increase in distance education courses, and the concomitant interest that the quality mafia was showing in distance courses, it was of some consequence to us that we establish an appropriate evaluation tool for such courses.

We have elsewhere (Tricker, Rangecroft, Gilroy, & Long, 1999; Gilroy, Long, Rangecroft, & Tricker, 2001) provided the details of our current research project. In brief, we have adapted a tool used in the service industry, the Template, to measure the fit between customer expectation and experience (Staughton & Williams, 1994) so that it would apply to the situation that distance students (as customers) find themselves in with their course provider (as service industry). In the context of distance education courses this translates as a gap that might exist between what students look for in their course and what they in fact experience.

The Template adopts a radically different procedure from the more conventional satisfaction survey. A number of aspects of interest are first identified. For each of these aspects, represented by distinct end-points on a scale, the Template offers a range of possibilities. Respondents indicate two positions: one that corresponds to what they look for in their course, and a second that corresponds to what they experience. The scale for a typical aspect is shown in Figure 1.

The aspects themselves were identified by asking students what they regarded as the most important aspects of course provision. We used a variety of methods to establish this, including teleconferenced and face-to-face focus groups and a paper-based questionnaire (Rangecroft, Gilroy, Long, & Tricker, 1999). The resulting aspects were ranked by importance to reduce them to a more manageable number, and end-points were created for each remaining aspect. Care was taken to ensure that the end-points were as far as possible value-neutral.

After the students have marked the two positions (one indicating what they look for, the other what they experience), the distance between the two, in other words the gap, is calculated for each of the aspects. The gaps identified by individual students are then combined with responses from other students at the same stage of the same course and analyzed to produce comparative statistics. The most meaningful of these statistics is the so-called satisfaction gap associated with an aspect of course provision: the average, across the student cohort, of the absolute values of the difference between the two measurements. We take the absolute value because any difference between what is looked for and what is experienced, in either direction, indicates dissatisfaction. The aspects included in the Template are then ranked by this satisfaction gap to establish the order of priority for taking action to close the gaps. To date the data have been collected on paper, but we are currently developing a Web-based version that will be much more automated.
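Expressed more formally, the satisfaction gap for an aspect is the mean of the absolute differences between the position students look for and the position they experience. The short Python sketch below is purely illustrative: the function name, the 1-7 scale positions, and the toy cohort data are our assumptions rather than the Template’s actual implementation, but it shows how the gaps might be computed and ranked.

```python
# Illustrative sketch only: all names and data below are assumptions,
# not the Template's actual implementation.

def satisfaction_gaps(responses):
    """responses: {aspect: [(looked_for, experienced), ...]} as scale positions."""
    gaps = {}
    for aspect, pairs in responses.items():
        # Mean of the absolute differences across the cohort for this aspect.
        gaps[aspect] = sum(abs(experienced - looked_for)
                           for looked_for, experienced in pairs) / len(pairs)
    # Rank aspects with the largest gap first: these are the priorities for action.
    return sorted(gaps.items(), key=lambda item: item[1], reverse=True)

# Hypothetical cohort responses on a 1-7 scale between two end-points.
cohort = {
    "core texts":       [(6, 3), (7, 4), (5, 3)],
    "assignment focus": [(6, 4), (5, 4), (6, 5)],
    "tutor feedback":   [(7, 4), (6, 5), (6, 4)],
    "course materials": [(5, 5), (6, 6), (5, 4)],
}

for aspect, gap in satisfaction_gaps(cohort):
    print(f"{aspect}: satisfaction gap = {gap:.2f}")
```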

To date we have used the Template to evaluate a number of postgraduate distance education courses in several institutions and across a range of subject areas. The data have been collected from students at all stages of their course and while they were studying rather than on completion of units so that their experiences were fresh. Informal feedback from students themselves suggests that they find the Template quick and easy to complete. Certainly response rates have been reassuringly high.

One positive outcome of our approach has been how we have been able to take advantage of information technology to feed information back to the course team with a minimum of delay. In addition, the technology allows us to present information in a variety of ways, such as bar charts, simple tables, or more complex graphical presentations, to facilitate understanding.
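As one illustration of such a presentation, a ranked bar chart of satisfaction gaps could be produced along the following lines. This is only a sketch: matplotlib and the example figures are assumptions, not the reporting tools actually used in the project.

```python
# A minimal, illustrative sketch of one possible feedback presentation:
# a bar chart of ranked satisfaction gaps. The figures below are hypothetical.
import matplotlib.pyplot as plt

# Hypothetical ranked gaps (aspect, mean absolute gap), largest first.
ranked_gaps = [
    ("tutor feedback", 2.0),
    ("core texts", 1.9),
    ("assignment focus", 1.3),
    ("course materials", 0.3),
]

aspects = [aspect for aspect, _ in ranked_gaps]
gaps = [gap for _, gap in ranked_gaps]

plt.barh(aspects, gaps)
plt.xlabel("Mean absolute gap between expectation and experience")
plt.title("Satisfaction gaps by aspect of course provision")
plt.gca().invert_yaxis()  # show the largest gap at the top
plt.tight_layout()
plt.show()
```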

This article reports on how the course teams used the information we provided about the gaps (or lack of them) that students identified between their expectation and experience of the course.

The Expectations of the Distance Education Course Directors

The three directors of the courses the students evaluated were responsible for three distance master’s courses (Applied Statistics, Total Quality Management, and MEd) that were offered in Hong Kong, the United Kingdom, and Singapore respectively. It is fair to say that the directors expected the Template’s findings to be mainly positive in that they thought their publicity material (course guides, handbooks, induction procedures, and so on) would create not only a consistency of student expectations (i.e., all students applying to their program would have much the same expectations), but also that students’ expectations of their course would more or less match their experience of the course. The directors assumed that they had made it clear to students what the program of study would be like, so they were confident that there would be a close match of expectation to experience. They believed that any problems would cluster around the actual experience students had of their course.

It was accepted that differences might arise between the various years of a course (students just beginning their studies might have different experiences and expectations than others close to completion, as the directors believed the expectations of the latter were more realistic than those of the former). The idea of comparing student expectation with experience was itself novel to the directors, but in the main they did not think there would be much to question their faith in the quality of their courses, where quality is defined as providing (within reasonable limits) what students expect from their course. In other words, with a confidence based on “standard” student evaluation procedures, they expected the gaps between student expectation and experience to be small, if they existed at all.

The (Un)Comfortable Fit Between Expectation and Experience

There was no point in showing the directors every detail of the Template’s findings, although they were available if required. Instead we offered them the more manageable Top Line findings. As the directors expected, in many areas little or no gap existed between distance students’ expectations and experience of their course. The results from the Template confirmed a number of aspects where students’ experiences closely matched their expectations.

Course directors found it comforting to know that, in these respects at least, no changes to the design of their courses were necessary in order to meet their students’ expectations. However, the Template results also revealed a number of aspects with significant gaps to which the course directors felt they must respond. We now identify three of the more important of these, with an indication of how the course director concerned intended to respond. This shows the variety of ways in which course directors and their teams can act on evidence derived from the Template so as to close significant gaps between student expectation and experience.

Core Text Issues

Students expected core texts for their course to be specified and provided as part of the service they pay for through their course fee.

Director’s Response

The current reading list only suggests texts, and for many units no core text exists. I will need to alter the course handbook so as to specify core texts (where they exist) and also make it clear to the students that as postgraduates they will need to make use of a number of texts. I will also have to explain how much the fee would be increased if we were to provide core texts as part of their course entitlement. This gap appears to have been created by a communication problem on my part.

Assignment Issues

Students expected more assignments to be linked to their professional experience (i.e., to be more applied than theoretical) and also expected more assignments to examine the application of theory to practice.

Director’s Response

I feel very strongly that at master’s level there has to be a core of theory that underpins the program. This said, there are many assignments that do allow students to apply theory to practice, not least the dissertation, which is intended to relate the course as a whole to their own professional experience. I need to manage student expectation better, especially during the induction sessions and in the course handbook.

Feedback and Tutorial Issues

Students expected a high level of extended feedback, which would include a substantially increased number of tutorials. They expected tutorials to be initiated by the tutor.

Director’s Response

I am very surprised by this response. After all, this is a distance course. I will have to make sure that my team make it clear to students what constitutes appropriate feedback. I will also have to cost the face-to-face tutorials so as to make the point during the induction program that these cannot be increased without a concomitant increase in fee level. This appears to be a case where we need to manage student expectation rather than assume they have the same understandings of distance education as the course team.

Discussion

It is clear that the course directors’ previous understanding of their courses, derived from their experience and their existing “standard” course evaluations, had given them no sense of the fit, or otherwise, between what students were looking for in a course and what they actually experienced; the Template did provide this important information. The directors also recognized that information about a possible gap at this point was critical to the success of their course, especially regarding retention.

Thus the Template allowed for pinpoint accuracy in the identification and subsequent management of student expectation and experience, including areas where no action was either required or possible. It provided information that contradicted the directors’ taken-for-granted assumption that there would be little or no gap between what students expected and what they actually experienced on their course. As one director put it, the findings “give me points which I need to make sure are sustained and promoted, and this is useful.”

It also comes through strongly that course directors need to be more aware of the need to manage student expectations effectively. It is easy to assume that a carefully written course guide will be equally carefully read by students, but this research indicates that such an assumption is not always well founded. Face-to-face induction sessions are necessary to reinforce the important messages that can be found in course guides. The Template identifies which of these messages students perceive as significant and so allows gaps to be closed before they widen into significant problems. At least for the courses under consideration in this article, the Template had shown that there was no need to change elements of the programs themselves (with all the cost and time implications of such alterations). Rather, student expectation needed to be better managed by means of improved communication. In this way, the Template provides a much more efficient and targeted approach to the use of student evaluation.

It is especially the case that in distance learning programs small problems of communication can quickly grow larger without the traditional face-to-face interaction to fall back on. For example, the students’ and tutors’ misapprehensions as to the nature of tutorial support identified above (i.e., students thought this should be initiated by the tutors, the tutors that it should be initiated by the students) could easily lead to a situation where no tutorial support was provided. Similarly, the fact that students expected to be provided with a clearly delimited set of core texts showed a radical misunderstanding of the philosophy underpinning their course.

The implication here is that evaluation through the Template should take place at some point during elements of the program, not, as is more common, at the end of the course; otherwise small misunderstandings (which we are identifying as gaps) can quickly fester into something more substantial. Our view is that such misunderstandings are among the many complex factors that affect retention and that the Template assists substantially in helping “to identify the interrelated facets of the (distance student) experience” (Morgan & Tan, 1999, p. 96).

Conclusion

Thus the Template did provide useful information for directors, answering in considerable detail the key question “How can I improve my course?” The first step in developing the quality of a course is the production of sound and secure information about that course, and this the Template appeared to have done. The directors were then in a position to measure and manage the quality of their courses in evidence-based ways. The next step would be to enhance the quality of the course, providing added value as a result of the directors’ and their course teams’ actions, by acting on the data that the Template produced. In part this would require managing the expectations of students based on knowledge of what those expectations were (it is common, for example, for senior managers in an institution to create expectations in students through statements made at recruitment fairs, such as promising a three-week turnaround of assignments of which lecturing staff might be blissfully unaware); in part it would be a matter of improving the course itself; in part it would be a form of trend analysis, as the Template would provide information about how students’ experience changed throughout the lifetime of a program; and in part it would be a matter of effective communication.

Our contention is that these sorts of results are not easily produced through the “standard” kinds of evaluation processes. The Template provides the means to identify and justify action based on new and important evidence about distance courses so as to improve their quality. Moreover, in doing so it closes the audit loop, because it is the student who has identified the questions that need to be asked, and it is the student who answers those questions, producing responses to the two questions that most exercise the minds of course providers: “How was it for you?” and “What should I do to make it better?”

References

Gilroy, P., Long, P., Rangecroft, M., & Tricker, T. (2001). Evaluation and the invisible student. Quality Assurance in Education, 9(1), 14-21.

Morgan, C.K., & Tan, M. (1999). Unravelling the complexities of distance education student attrition. Distance Education, 20(1), 96-108.

Rangecroft, M., Gilroy, P., Long, P., & Tricker, T. (1999). What is important to distance education students? Open Learning, 14(1), 17-24.

Staughton, R.V.W., & Williams, C.S. (1994). Towards a simple, visual representation of fit in service organisations—The contribution of the service template. International Journal of Operations and Production Management, 14(5), 76-85.

Tricker, T., Rangecroft, M., Gilroy, P., & Long, P. (1999). Evaluating distance education courses: The student perception. Assessment and Evaluation in Education, 26(2), 165-177.

Wall, A.L. (2001). Evaluating an undergraduate unit using a focus group. Quality Assurance in Education, 9(1), 23-31.


Margaret Rangecroft is a senior lecturer at Sheffield Hallam University with particular interests in statistical education (at both school and university levels), distance education, and graphicacy. She was previously a primary schoolteacher in the West Riding of Yorkshire and Sheffield and is currently researching distance education.

Peter Gilroy holds a chair in education at the Manchester Metropolitan Institute of Education and is joint editor of the International Journal of Education for Teaching. He was a schoolteacher in Hertfordshire before moving to Clifton College of Education in Nottingham and then Manchester and Sheffield Universities. He has published widely in the field of teacher education.

Tony Tricker is a professor in applied statistics at the School of Computing and Management Sciences, Sheffield Hallam University in the UK. He has published widely in the field of statistics, the main areas being the precision of measurement and statistical process control. At present he is Head of Applied Statistics and Management Science.

Peter Long is a principal lecturer in operational research and operations management at Sheffield Business School at Sheffield Hallam University. He has worked in industry and local government and has research interests in customer chains in manufacturing and service industries.

ISSN: 0830-0445