The Incorporation of Quality Attributes into Online Course Design in Higher Education

Kathleen Anne Lenert and Diane P. Janes

Vol. 32, No. 1, 2017

Abstract

A survey was designed incorporating questions on 28 attributes, compiled through a literature review, that are considered to be quality features in online academic courses in higher education. This study sought to investigate the ongoing practice of instructional designers and instructors in the United States with respect to their incorporation of these quality best practices into their design process and course content. Although most respondents indicated they included the majority of the quality attributes in courses they designed or taught, a significant number rarely or never included some of the generally accepted features shown to result in positive learner outcomes.

Résumé

Nous avons mené une enquête dont les questions portaient sur les 28 critères de qualité (identifiés dans le cadre d’une revue de littérature) des cours en ligne dans l’enseignement supérieur. Cette étude cherchait à faire ressortir dans quelle mesure les bonnes pratiques définies concernant la qualité sont mises en œuvre par les concepteurs pédagogiques et les enseignants, aux Etats-Unis, dans leur processus de conception des cours et le contenu de ces derniers. Bien que les répondants indiquent, pour la plupart d’entre eux, qu’ils intègrent la majorité des critères de qualité dans les cours qu’ils conçoivent ou enseignent, un nombre significatif d’entre eux n’intègre que rarement voire jamais certains de ces critères communément considérés comme ayant un impact positif sur la réussite des étudiants.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

Introduction

The number of postsecondary students in the United States enrolled in at least one online course for academic credit continues to rise. In 2013, an analysis of the most recent data from the Integrated Postsecondary Education Data System (IPEDS) showed that more than 5.5 million students were enrolled in at least one fully online course in the U.S. (Allen & Seaman, 2015; Poulin & Straut, 2015). In addition, about 70% of all degree-granting institutions of higher learning in the U.S. now have some form of online distance offering, and academic leaders (77% in 2013 and 74% in 2014) have begun to understand that online learner outcomes can be the same as or superior to face-to-face classroom instruction (Allen & Seaman, 2015). Yet, while a strong majority (70%) of academic leaders report that online learning is critical to their long-term strategies, only 28% of Chief Academic Officers say their faculty accept the value and legitimacy of online education (Allen & Seaman, 2015). Online course offerings are being released faster than current faculty are being trained to deliver quality education in a digital age. Faculty are challenged to keep up with the plethora of digital applications and approaches to course delivery now available, to decide how to incorporate technology into the learning environment, and to ensure their choices will result in the highest quality learning experience for the student. A recent survey by the Online Learning Consortium (OLC) showed that a high percentage of faculty had not adopted new learning techniques in their courses, were not familiar with the technologies, or had determined the technologies were not relevant (Allen & Seaman, 2015). This research examined the attributes of quality, including how quality is defined in an online context, as well as the theories that inform instructional design practice and the survey methodologies that underpinned the framework for this study (Cresswell, 2013; Reiser, 2001; Dick, Carey, & Carey, 2004; Reigeluth, 1999).

This study was conducted between May 2015 and December 2015, as part of a series of independent study credits offered to students in the Master of Educational Technology (MET) program at the University of British Columbia in Canada, by a graduate student investigator who is also the first author. It examined various attributes of what constitutes quality measures in an online learning environment. The MET graduate program is delivered online, and the student investigator, living in the United States, elected to include only research participants in the US.

The question now is not whether we should integrate technology into learning but how to use it to personalize learning and create more accessible and equitable digital environments, drawing on the flexibility and creativity that technology affords. The emphasis in the literature now is on ensuring that instructors are informed and trained in how to create quality online and blended learning spaces (U.S. Department of Education, 2010, 2016).

The purpose of this study was to examine online asynchronous academic credit courses in higher education for the presence of quality attributes that are critical to their design and delivery. The research questions were developed as a result of the student investigator's work with faculty in the development of online graduate-level courses at universities in the United States. There were two main questions: What components of an online course in higher education does the literature define as quality? And are instructors and instructional designers incorporating quality features into online courses?

The Literature

The determination of what constitutes measures of quality in online higher education varies widely in the literature. Quality is often measured as student satisfaction with an online course, and quality is generally considered low if the rate of attrition is high (Grace, Weaven, Bodey, Ross, & Weaven, 2012; Patterson, 2007). Quality can also be pursued through the process of course design, against a set of standards created by the instructor, the instructional designer, or an external body. Although Quality Matters (QM), a peer-review process conducted solely by faculty instructors, uses a standardized rubric to measure and certify the quality of online courses, Monroe (2011) concluded that there was a difference between how instructional designers and faculty reviewers rated the quality of a course.

Piña and Bohn (2014) focused on two important points: quality assessment rubrics assume that the individual who created the course is also the one teaching it, and these same rubrics focus only on course design and not on the role of instructor delivery. Additionally, they point out that faculty are often evaluated by students who take courses that the faculty member may not have designed. The concept of quality can be nebulous at best (Mitchell, 2010), or subjective and found, like beauty, in the eye of the beholder (Maringe & Sing, 2014). Bates (2015) offered a definition of quality that was most appropriate as a benchmark for this research: “…teaching methods that successfully help learners develop the knowledge and skills they will require in a digital age” (Chapter 11, p. 1).

Prior to determining what features constitute quality in online learning, it is important to keep in mind principles of quality teaching in general. Chickering and Gamson (1987) outlined seven principles for teaching undergraduates based on their research on good teaching practices. Chickering and Ehrmann (1996) later recognized technology (specifically computers) as a resource for learning that was being incorporated into undergraduate education. Merrill (2001) focused on elements of teaching that include problem-based learning, content that is relevant to and engages learners in solving real-world problems, and the demonstration of new knowledge; in other words, active learning as a strong element of quality teaching. Graham, Cagiltay, Lim, Craner, and Duffy (2001) expanded on the basic principles of Chickering and Ehrmann (1996) to assess the quality of online courses, aligning technologies with those same principles of good teaching. Fundamentally, good practice encourages student-faculty contact, cooperation among students, active learning, prompt feedback, and time on task, as well as high expectations and respect for diverse talents and ways of learning (see Appendix 1).

Puzziferro and Shelton (2009) revisited Chickering and Ehrmann (1996), and although they affirmed the basic principles, they also noted that the landscape of higher education was constantly evolving, as were our values. Their work illustrated how learning is no longer a product that is delivered but one that is experienced by the learner. The role of faculty development (and professional instructional design) is therefore increasingly important in guiding faculty on how to create learning spaces, specifically with the use of technology online, and in helping instructors adapt to the new roles expected of them.

Creating an online course is a complex and iterative process, one in which it is difficult to ascribe any one component as being the major contributor to its success or quality level. Puzziferro and Shelton (2008) support having a “common framework for consistency, design, pedagogy and content” (p. 119), and Simpson and Benson (2013), in support of this, showed that student satisfaction levels increased after an online course went through a systematic review for quality.

Rubrics have been used for many years as a measure of quality: quality of the program, quality of the course, and, of course, quality of the work of the individual student within a course. A large number of studies have used the Quality Matters rubric, which was designed in 2006-2007 to measure quality in online course design (Newhouse, Buckley, Grant, & Idzik, 2013; Parscal & Riemer, 2010; Piña & Bohn, 2014; Roehrs, Wang, & Kendrick, 2013). Later in the decade, Shelton (2010), with the support of the Online Learning Consortium, built an instrument that provided a 360-degree assessment for institutions to measure the quality of their online programs. Shelton emphasized the importance of determining and assuring quality in online programs and further stated that institutions to date have not been attentive enough to implementing methods to audit and measure the quality of programs, or have taken the approach that institutions “recognize quality because—we are researchers, we maintain accreditation, we have multiple resources at our disposal, and we are selective in our admissions process” (Shelton, 2011, p. 7).

Universities are under increasing scrutiny by both governments and consumer-learners regarding the quality of their educational offerings (Grace, 2013; U.S. Department of Education, 2010). In the United States, academic programs in higher education are accredited through a peer-review process and the submission of self-studies examining compliance with the standards set by one of the regional accrediting bodies. The Middle States Commission on Higher Education, for example, has adopted the Interregional Guidelines for the Evaluation of Distance Education (Online Learning), which set out nine hallmarks of quality that promote best practices in the areas of program design and institutional governance and oversight (MSCHE, 2011).

Piña and Bohn (2014) caution against treating course design as the only measure of quality and urge attention to the actions of the instructor during course delivery. Jaggars and Xu (2016) concluded that the connections learners made with the instructors in the course contributed to positive learner outcomes; examples include instructor presence in discussions and synchronous chats, or some other demonstration of caring. Faculty development and mentoring are hallmarks that make a quality online course more likely (Barczyk, Buckenmeyer, Feldman, & Hixon, 2011; Maringe & Sing, 2014).

The availability of training, development, and resources for faculty has been shown to improve the quality of an online course (Herman, 2012). These resources can include mentoring, consultant services, reading material, workshops, and group discussions. However, the majority of faculty teaching online were dissatisfied with the level of institutional support, specifically the faculty development offered by their institutions (Herman, 2012), as many faculty had not previously taught online. Institutional support, including budgetary allocations, is therefore imperative for online initiatives; without it, the quality of the courses may be diminished.

Another primary method of assessing quality is through student perception and satisfaction surveys (Anderson, Tredway, & Calice, 2015; Harrison, Gemmell, & Reed, 2014; Hill, 2014; Marmon, Vanscoder, & Gordesky, 2014). Ciobanu and Ostafe (2014) explored the connection between student satisfaction and learning. Student satisfaction surveys can serve as a formative assessment for course designers, as student preferences change over time; what was a desirable feature a decade ago is replaced by a new method of engagement today. Bloxham (2010) examined the formative assessment role that continuous quality improvement (CQI) played in student perceptions of quality and concluded that students did like to be asked for their feedback and to see changes incorporated into the course. Hill (2014) noted that graduate students and their level of satisfaction with courses are a good measure of quality, as these learners have had years of experience not only as students but also, possibly, as teaching assistants.

There are a number of diverse aspects of the design of an online course, as well as specific features included in the course, that contribute to its overall quality. The role of institutional support cannot be overstated but was not included in this study. Institutions should, however, decide how they will measure quality in their online course offerings independently of traditionally delivered, face-to-face classes (Mitchell, 2010).

Study Design

Between June and December 2015, research was undertaken to examine the notion of quality in online course design. Based on the work of previous researchers (Shelton, 2011; Roehrs, Wang, & Kendrick, 2013; Monroe, 2011), attributes were identified and potential survey questions were culled. Using this collection of attributes, new survey questions were designed that limited the instrument's focus to four domains: participant and course demographics; course design process and team; measuring and improving the course; and course content, including instructor actions. This online survey instrument was used to question instructional designers and instructors (i.e., faculty) who taught online in the United States.

The survey link was sent by email in the fall of 2015 to 2,300 individuals who were members of the Association for Educational Communications and Technology (AECT), 90% of whom lived in the United States. The study was limited to United States-based instructional designers and instructors. In the two weeks the survey was open, 119 AECT members clicked the link agreeing to participate, with 103 continuing past the consent to the survey instrument. Of those 103 participants, 21 were excluded as they did not enter any data at all; two were excluded as they indicated they taught only in a face-to-face course; one was excluded as the participant indicated being a student, not an instructor or designer; and nine were excluded as they taught at institutions outside the United States. Although three participants did not fully complete the survey, their responses were retained as they contained at least partial information. This left a total of 70 participants, for an overall response rate of 3%.

The Survey

Information was collected to determine what role the participants played in the course design or delivery, and to what degree they were able to make changes in the online courses they built or taught. Participants were then asked to indicate on a Likert scale whether the design process or content features were present in the course (1 = always present; 2 = occasionally present; 3 = rarely present; 4 = never present; 5 = do not know). The design process questions explored who was included on the team building the course, how often the course was reviewed and improved, and whether a course quality auditing measure was used and, if so, what type. The survey instrument is included in Appendix 2, and the domains are presented in Tables A1, A2, A3, and A4. The University of British Columbia's Behavioural Research Ethics Board approved this study.

Participant and Course Demographics (Section 1)

The first section of the survey asked about demographics. Of the respondents, 74.3% were female, 24.3% were male, and 1.4% preferred not to answer. The largest group fell into the age range of 36-45, and the largest group by role (47.1%) classified themselves as both instructor and designer of an online course. Participants primarily worked at 4-year public institutions in graduate programs taught entirely online; the median time they had been in their current role was between 4 and 6 years. There were, however, a significant number of participants who identified only as an instructional designer (35.7%), and many who indicated they had been working in their current role between 7 and 15+ years (38%). AECT is a professional organization whose members and activities are directed toward improving instruction through technology, so the assumption was that there would be an awareness of, if not a propensity to support, best practices within higher education and online learning. The participants, overall, were an experienced group of educators in online teaching and learning.

Table 1: Survey Participant Demographics

(1) Instructor (N = 12)
  Gender: F = 8; M = 4
  Age: 18-25 = 0; 26-35 = 2; 36-45 = 3; 46-55 = 3; 56-64 = 3; 65+ = 1
  Length of time in role (years): <1 = 1; 1-3 = 2; 4-6 = 5; 7-10 = 1; 11-15 = 3; >15 = 0
  Type of higher education organization: 2-year public = 2; 2-year for-profit = 7; 4-year public = 0; 4-year private not-for-profit = 0; 4-year for-profit = (not reported); Other = 2
  Course type: NC = 1; UG = 3; G = 8
  Delivery mode: Online = 12; Hybrid = 0

(2) Instructional Designer (N = 25)
  Gender: F = 20; M = 5
  Age: 18-25 = 0; 26-35 = 9; 36-45 = 9; 46-55 = 5; 56-64 = 0; 65+ = 2
  Length of time in role (years): <1 = 3; 1-3 = 5; 4-6 = 9; 7-10 = 6; 11-15 = 0; >15 = 2
  Type of higher education organization: 2-year public = 2; 2-year for-profit = 0; 4-year public = 16; 4-year private not-for-profit = 5; 4-year for-profit = 1; Other = 1
  Course type: NC = 2; UG = 11; G = 12
  Delivery mode: Online = 21; Hybrid = 4

(3) Both Instructor & Designer (N = 33)
  Gender: F = 24; M = 8; Prefer not to answer = 1
  Age: 18-25 = 0; 26-35 = 1; 36-45 = 13; 46-55 = 9; 56-64 = 8; 65+ = 2
  Length of time in role (years): <1 = 3; 1-3 = 10; 4-6 = 5; 7-10 = 6; 11-15 = 4; >15 = 5
  Type of higher education organization: 2-year public = 4; 2-year for-profit = 0; 4-year public = 20; 4-year private not-for-profit = 1; 4-year for-profit = 4; Other = 4
  Course type: NC = 4; UG = 11; G = 18
  Delivery mode: Online = 24; Hybrid = 9

NC: non-credit course; UG: undergraduate course; G: graduate course

Data Analysis

Data analysis was conducted using SPSS 22. The working hypothesis was that the quality attributes incorporated into a course differed depending on the participant's role (instructor, instructional designer, or both instructor and designer) in the course design. A one-way ANOVA was performed for each of the three sections of the survey outlined in Appendix 2 (Tables A2, A3, and A4).
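To make the analysis concrete, the sketch below shows how such a one-way ANOVA by participant role could be reproduced outside SPSS. It is a minimal illustration using hypothetical Likert-coded responses; the column names and values are invented for the example and are not taken from the survey data.

    # Minimal sketch of a one-way ANOVA by participant role (hypothetical data).
    # The study itself used SPSS 22 on the actual survey responses, coded
    # 1 = always present ... 4 = never present, 5 = do not know.
    import pandas as pd
    from scipy import stats

    responses = pd.DataFrame({
        "role": ["instructor"] * 4 + ["designer"] * 4 + ["both"] * 4,
        "reviewed_annually": [1, 2, 1, 3, 4, 3, 5, 2, 1, 1, 2, 2],
    })

    # One group of item scores per role, then the F test across the groups
    groups = [g["reviewed_annually"].values
              for _, g in responses.groupby("role")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

    # A p value below .05 would, as in the study, indicate that the mean
    # rating differs by role; the group means show which role differs.
    print(responses.groupby("role")["reviewed_annually"].mean())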

Course Design Team (Section 2)

The second section of the survey asked about the course design team. The majority of respondents indicated the online course design teams always included the instructor (81.2%) and an instructional designer (68.1%). At the same time, 79.7% indicated a librarian was rarely or never part of the course design team, while 73.9% of respondents indicated a content expert, who was not always the instructor, was always included in course design. Of the participants who categorized themselves solely as the instructor (and not as both instructor and designer), 50% indicated they always or occasionally included an instructional designer in the design of the courses they taught; all but one respondent in this group were teaching a fully online course.

Much of the recent literature has looked at the process of online course design and delivery as contributing to quality. A design method based on collaboration between faculty and instructional designers was likely to result in a quality course (Bates, 2015; Brown, Eaton, Jacobsen, Roy, & Friesen, 2013; Chao, Saj, & Hamilton, 2010). The inclusion of a librarian was also key to ensuring quality, by helping to identify teaching resources and address copyright issues (Bates, 2015) and by guiding the alignment of research assignments with library resources (Mudd, Summey, & Upson, 2015).

Measuring and Improving the Course (Section 3)

Participants were asked if they were able to make changes in the online course; all indicated they were able to do so: 47 (67.1%) said they could change anything they felt was necessary; 16 (22.9%) could change anything as long as it did not deviate from the syllabus; and 4 (5.7%) were able to make only changes that resolved technical issues or grammatical errors.

With regard to measuring course quality, when asked about rubric use in the course, only 56.5% said a rubric was used. The majority indicated they used a rubric designed internally by their organization (48.7%); close behind was the Quality Matters rubric (41%). The least used was the Quality Scorecard provided by the Online Learning Consortium (5.1%). It was encouraging to see that 68% of all participants indicated their course was reviewed and improved annually; however, it was notable that the remaining third (32%) reviewed or improved their course only occasionally, rarely, or never, or did not know how often this happened: 11.6% of respondents to this question indicated they did not know how often the course was reviewed or improved. This question was explored because the use of rubrics as a measurement of quality features is widely supported in the literature (Newhouse et al., 2013; Parscal & Riemer, 2010; Piña & Bohn, 2014; Roehrs et al., 2013; Shelton, 2010).
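For clarity, the aggregate percentages quoted here follow from Table A3 in Appendix 2: the rubric figure combines the "always" and "occasionally" responses, and the annual-review figure combines "always" and "most of the time":

\[
36.2\% + 20.3\% = 56.5\% \ \text{(rubric used)}, \qquad
40.6\% + 27.5\% = 68.1\% \approx 68\% \ \text{(reviewed and improved annually)}.
\]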

An ANOVA was performed to determine if there was a difference between the groups, based on participant role, on whether a rubric was used to measure the quality of the course and on whether the course was reviewed and improved annually. The test indicated a difference between groups only on whether the course was reviewed and improved annually.

Table 2: ANOVA – Course Reviewed or Improved Annually

Course is reviewed and improved annually
  Between groups: Sum of squares = 22.623; df = 2; Mean square = 11.311; F = 3.182; Significance = .048
  Within groups: Sum of squares = 234.595; df = 66; Mean square = 3.554
  Total: Sum of squares = 257.217; df = 68
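As a quick arithmetic check, the F ratio reported in Table 2 follows directly from the sums of squares and degrees of freedom:

\[
F = \frac{MS_{\text{between}}}{MS_{\text{within}}} = \frac{22.623/2}{234.595/66} = \frac{11.311}{3.554} \approx 3.18, \qquad df = (2, 66), \; p = .048.
\]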

A comparison of the means shows that participants who were both the instructor and the designer of a course were the most likely to review and improve the course annually; the least likely to do this were those who identified themselves only as the instructional designer. This may be because an instructional designer is assigned a course during a design cycle but has no responsibility for follow-up revisions after the design is complete; revision is then left largely to the instructor, unless the design team (instructor and instructional designer) is tasked with engaging in future iterations as part of the design process.

Table 3: Comparison of Means by Role – Course Reviewed or Improved Annually

Role                          Mean    N     Std. Dev.   Std. Error of Mean   Variance
Instructor                    2.33    12    1.775       .512                 3.152
Instructional Designer        3.29    24    2.074       .423                 4.303
Both Instructor & Designer    2.03    33    1.776       .309                 3.155
Total                         2.52    69    1.945       .234                 .783

Course Content Features (Section 4)

When asked about the attributes determined by the literature to be quality features in an online course, some attributes clearly had a high percentage of always being present in the courses, while others were rarely or never present. The attributes that participants, regardless of their role in the design, most often reported as always present included regularly scheduled deadlines for completed work (88.1%), clearly outlined learner-instructor communication (86.8%), clear instructions on how learners can meet course objectives (83.6%), and contact information for learning management system help (80.9%) (see Appendix 2, Table A4).

These results are encouraging, as they seem to suggest a propensity among experienced instructors and designers to incorporate literature-based quality features into their online course design and teaching process.

Other important quality attributes had a more uneven uptake, including student peer assessment as formative assessment (20.9% always present), synchronous opportunities (26.5%), samples of excellent work or case studies (26.9%), and learners presenting their course projects to peers (30.9%) (see Appendix 2, Table A4).

These results suggest there is inconsistent incorporation of quality features, especially those that involve constructivist strategies for active learning and engagement. If learners, as an example, are not presenting their projects to peers but only submitting them to instructors, they do not have the opportunity to learn from each other and to practice critical assessment. Only 20.9% of participants indicated their course design always included having learners assess each other's work, yet this is a consistently used method of formative assessment (Romeu Fontanillas, Romero Carbonell, & Guitert Catasús, 2016) that can increase student engagement (Weaver & Esposto, 2012). Synchronous opportunities allow the learner to connect with others in the course as well as the instructor, which may result in higher levels of student satisfaction, retention, and grades (Jaggars & Xu, 2016). Although Jaggars and Xu (2016) concluded that students value interaction with the instructor more than student-student interactions, that may depend on the student demographics (community college versus other levels of higher education, such as graduate studies), a difference that may require further study.

A one-way ANOVA was performed with participant role as the independent variable and the course content features (Appendix 2, Table A4) as the dependent variables. For three of the course content features, the results showed a p value of less than .05, so the null hypothesis was rejected. Results for these three content areas are listed in Table 4.

Table 4: ANOVA – Course Content Feature Incorporation Analysis by Role

Synchronous Opportunities
  Between groups: Sum of squares = 8.059; df = 2; Mean square = 4.029; F = 3.127; Significance = .050
  Within groups: Sum of squares = 83.750; df = 65; Mean square = 1.288
  Total: Sum of squares = 91.809; df = 67

Learners Present Projects
  Between groups: Sum of squares = 21.399; df = 2; Mean square = 10.699; F = 9.697; Significance = .000
  Within groups: Sum of squares = 71.719; df = 65; Mean square = 1.103
  Total: Sum of squares = 93.118; df = 67

Learners Create a Product
  Between groups: Sum of squares = 6.392; df = 2; Mean square = 3.196; F = 4.986; Significance = .010
  Within groups: Sum of squares = 41.667; df = 65; Mean square = .641
  Total: Sum of squares = 48.059; df = 67

The means of the three groups were then compared, as shown in Table 5. For these three quality course content attributes, the data show that the instructional designer was least likely to include the items, whereas the instructor was most likely to include them in the course they taught.

Table 5: Comparison of Means by Role – Synchronous Opportunities, Learners Create a Product, and Learners Present Projects to Class

(1) Instructor (N = 12)
  Synchronous Opportunities: Mean = 2.08; Std. Dev. = .900; Variance = .811; Std. Error of Mean = .260
  Learners Create a Product: Mean = 1.17; Std. Dev. = .389; Variance = .152; Std. Error of Mean = .112
  Learners Present Projects to Class: Mean = 1.42; Std. Dev. = .669; Variance = .447; Std. Error of Mean = .193

(2) Instructional Designer (N = 24)
  Synchronous Opportunities: Mean = 2.83; Std. Dev. = 1.239; Variance = 1.536; Std. Error of Mean = .253
  Learners Create a Product: Mean = 2.00; Std. Dev. = 1.063; Variance = 1.130; Std. Error of Mean = .217
  Learners Present Projects to Class: Mean = 2.92; Std. Dev. = 1.248; Variance = 1.558; Std. Error of Mean = .255

(3) Both Instructor and Designer (N = 32)
  Synchronous Opportunities: Mean = 2.13; Std. Dev. = 1.129; Variance = 1.274; Std. Error of Mean = .200
  Learners Create a Product: Mean = 1.50; Std. Dev. = .672; Variance = .452; Std. Error of Mean = .119
  Learners Present Projects to Class: Mean = 1.97; Std. Dev. = .999; Variance = .999; Std. Error of Mean = .177
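Because the response scale runs from 1 (always present) through 4 (never present), with 5 reserved for "do not know", a lower mean broadly indicates more frequent inclusion. For "Learners Create a Product", for example, the ordering of the means in Table 5 is:

\[
\bar{x}_{\text{Instructor}} = 1.17 \;<\; \bar{x}_{\text{Both}} = 1.50 \;<\; \bar{x}_{\text{Instructional Designer}} = 2.00,
\]

which is why instructors appear the most likely, and instructional designers the least likely, to include this feature.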

Discussion

The data in this study show that many attributes of quality learning are being incorporated into course design and course content. Some items are clearly being used: the design team includes the content expert and the instructor; communication and navigation instructions are outlined; deadlines for completed work are regularly scheduled throughout the course; instructions on how learners can meet course objectives are clearly stated; and contact information is provided for the learning management system and, somewhat less often, the library. Yet even with this level of quality, there remains much room for extending the set of features that a course design team might regularly include in their courses.

One of the more important additions would be to include a librarian on the course design team, an action likely to result in richer learning experiences. Next would be to recommend that courses be measured against a rubric for consistency in quality, and reviewed annually as part of the good practice of updating and improving the course. As online instructors become more knowledgeable and skilled with a broader variety of multimedia software and tools, we would expect to see more of these items included as part of the course design.

This study did produce paradoxical findings, in that the instructional designers were the least likely to include some important features. As professionals in the design of instruction, they would be expected to know and practice best guidelines and to build in opportunities for quality online learning.

There is a critical need for good modeling for future instructors within the quality design process. New instructors, often graduate students learning their craft, need to see quality and understand its value, as they may one day become a more formal part of the academic teaching world.

In addition, college students, and especially graduate students, who are adult learners, require specific course features to enhance learning. Samples of assignment expectations provide scaffolding for adult learners, and case studies relate new knowledge to their personal lives (Knowles, 2005). Beyond this, it can be argued that the adult learner could benefit from the social interaction for learning that is often provided within online discussions and synchronous opportunities (Cercone, 2008).

Some courses will not need all of these attributes. As Mitchell (2010) emphasizes, each institution needs to initiate a discussion about how it will assess quality in its own online course offerings and within other auxiliary supports such as student services. This is determined by many factors, including accreditation requirements, stakeholder expectations, and current ideals, as well as the needs of the faculty teaching and the students learning.

Conclusion

To demonstrate quality in online courses, one must consider the design process, specific course features, and student services, as well as instructor delivery. The set of quality attributes selected for this study, for the most part, supports social constructivist strategies (Bandura, 1986; Fosnot, 1996; Vygotsky, 1978) that encourage active learning and student engagement with the course content. The literature concludes that these strategies are the most effective in increasing student satisfaction and student learning outcomes. The first research question was answered through the literature review, and the survey instrument was built using a selection of these attributes. The study limited its exploration to four areas: the course design team, measuring and improving the course, course content features, and instructor actions.

This study showed that a high percentage of respondents, who are experienced designers or instructors of online education based on the number of years working in their roles, are incorporating quality features into their courses. The results also indicate that a number of those working in online course design still only rarely or never include many of the features. Comparing the roles, there were three areas where the instructor of the online course was the most likely to incorporate these quality attributes. This was a surprising result given the professional nature of the instructional designers involved in many of the designs.

Institutional support was also a critical element in ensuring quality attributes were embedded into the online course design process. While this might primarily take the form of faculty development and training, it is also an important part of supporting the practice of reviewing and improving courses annually.

This study elicited many questions that a future study might explore.

There may be any number of reasons underlying the decision to include or exclude quality attributes in online course design; these need to be explored further so that barriers, real or perceived, can be removed. In the case of instructional designers, there may be constraints on the latitude they have to incorporate innovative technologies.

Although all participants indicated they could make changes in the courses they designed or taught, the extent to which instructional designers can influence the incorporation of course features is recommended for further exploration. In the case of instructors not incorporating more quality features, a qualitative exploration, specifically of instructors of online courses in higher education, could shed more light on the reasons for this.

Appendix 1

Principles of Good Teaching (Graham et al., 2001)

Principle 1: Good Practice Encourages Student-Faculty Contact
Lesson for online instruction: Instructors should provide clear guidelines for interaction with students.

Principle 2: Good Practice Encourages Cooperation Among Students
Lesson for online instruction: Well-designed discussion assignments facilitate meaningful cooperation among students.

Principle 3: Good Practice Encourages Active Learning
Lesson for online instruction: Students should present course projects.

Principle 4: Good Practice Gives Prompt Feedback
Lesson for online instruction: Instructors need to provide two types of feedback: information feedback and acknowledgment feedback.

Principle 5: Good Practice Emphasizes Time on Task
Lesson for online instruction: Online courses need deadlines.

Principle 6: Good Practice Communicates High Expectations
Lesson for online instruction: Challenging tasks, sample cases, and praise for quality work communicate high expectations.

Principle 7: Good Practice Respects Diverse Talents and Ways of Learning
Lesson for online instruction: Allowing students to choose project topics incorporates diverse views into online courses.


Appendix 2: Survey Questionnaire

Table A1: Participant Demographics

1. Role in last course designed or taught: N=70
   1. Instructor: 12 (17.1%)
   2. Instructional Designer: 25 (35.7%)
   3. Both Instructor and Designer: 33 (47.1%)

2. Gender: N=70
   1. Female: 52 (74.3%)
   2. Male: 17 (24.3%)
   3. Prefer not to answer: 1 (1.4%)

3. Age: N=70
   1. 18-25: 0 (0)
   2. 26-35: 13 (18.6%)
   3. 36-45: 24 (34.3%)
   4. 46-55: 17 (24.3%)
   5. 56-64: 11 (15.7%)
   6. > 65: 5 (7.1%)

4. Length of time in current role: N=70
   1. < 1 year: 7 (10%)
   2. 1-3 years: 17 (24.3%)
   3. 4-6 years: 19 (27.1%)
   4. 7-10 years: 13 (18.6%)
   5. 11-15 years: 7 (10%)
   6. > 15 years: 7 (10%)

5. Type of institution: N=70
   1. 2 year public: 8 (11.4%)
   2. 4 year public: 43 (61.4%)
   3. 4 year private not-for-profit: 9 (12.9%)
   4. 4 year for-profit: 3 (4.3%)
   Other (Continuing Education; Graduate Only Programs): 7 (10%)

6. What is the location of the institution of the course you teach/design? N=70
   USA: 70 (100%)

7. Level of course: N=70
   1. Non-credit: 7 (10%)
   2. Undergraduate: 25 (35.7%)
   3. Graduate: 38 (54.3%)

8. Course delivery mode: N=70
   1. Fully online (students not required to meet in person): 55 (78.6%)
   2. Blended or hybrid (partially online and partially in-person): 15 (21.5%)

Table A2: Course Design Team Members

9. Design team includes content expert: N=69
   1. Always present: 51 (73.9%)
   2. Occasionally present: 7 (10.1%)
   3. Rarely present: 6 (8.7%)
   4. Never present: 5 (7.2%)
   5. Do not know: 0

10. Design team includes instructional designer: N=69
   1. Always present: 47 (68.1%)
   2. Occasionally present: 11 (15.9%)
   3. Rarely present: 5 (7.2%)
   4. Never present: 6 (8.7%)
   5. Do not know: (not reported)

11. Design team includes instructor: N=69
   1. Always present: 56 (81.2%)
   2. Occasionally present: 10 (14.5%)
   3. Rarely present: 2 (2.9%)
   4. Never present: 1 (1.4%)
   5. Do not know: 0 (0)

12. Design team includes librarian: N=69
   1. Always present: 2 (2.9%)
   2. Occasionally present: 11 (15.9%)
   3. Rarely present: 16 (23.2%)
   4. Never present: 39 (56.5%)
   5. Do not know: 1 (1.4%)

Table A3: Measuring and Improving the Course

13. Were you able to make edits or changes in the last online course you taught or designed? N=70
   1. Yes: 70 (100%)
   2. No: 0 (0)

14. To what extent were you able to make changes in the course? N=70
   1. Anything I felt was necessary: 47 (67.1%)
   2. Any changes as long as they did not deviate from the syllabus significantly: 16 (22.9%)
   3. Only changes that would resolve technical or grammatical errors: 4 (5.7%)
   4. Other: 3 (4.3%)

15. A rubric is used to measure quality in the online course: N=69
   1. Always present: 25 (36.2%)
   2. Occasionally present: 14 (20.3%)
   3. Rarely present: 15 (21.7%)
   4. Never present: 13 (18.8%)
   5. Do not know: 2 (2.9%)

16. Please indicate which rubric is used: N=39
   1. Quality Scorecard (OLC): 2 (5.1%)
   2. Quality Matters: 16 (41%)
   3. Quality rubric developed internally: 19 (48.7%)
   4. Other rubric (Communities of Inquiry, self-designed): 2 (5.1%)

17. Course is reviewed and improved annually: N=69
   1. Always: 28 (40.6%)
   2. Most of the time: 19 (27.5%)
   3. Occasionally: 6 (8.7%)
   4. Rarely: 6 (8.7%)
   5. Never: 2 (2.9%)
   6. Do not know: 8 (11.6%)

Table A4: Course Content Features

18. Communication between the learner and instructor is specifically outlined at the start of the course: N=68
   1. Always present: 59 (86.8%)
   2. Occasionally present: 8 (11.8%)
   3. Rarely present: 0 (0)
   4. Never present: 0 (0)
   5. Do not know: 1 (1.5%)

19. Navigational instructions make the course easy to understand: N=68
   1. Always present: 51 (75%)
   2. Occasionally present: 14 (20.6%)
   3. Rarely present: 3 (4.4%)
   4. Never present: 0 (0)
   5. Do not know: 0 (0)

20. Course addresses course netiquette or online behavior and communication expectations: N=68
   1. Always present: 43 (63.2%)
   2. Occasionally present: 17 (25%)
   3. Rarely present: 6 (8.8%)
   4. Never present: 1 (1.5%)
   5. Do not know: 1 (1.5%)

21. Contact information for LEARNING MANAGEMENT SYSTEM help is included at the start of the course: N=68
   1. Always present: 55 (80.9%)
   2. Occasionally present: 6 (8.8%)
   3. Rarely present: 4 (5.9%)
   4. Never present: 1 (1.5%)
   5. Do not know: 2 (2.9%)

22. Contact information for the LIBRARY is included at the start of the course: N=68
   1. Always present: 42 (61.8%)
   2. Occasionally present: 11 (16.2%)
   3. Rarely present: 4 (5.9%)
   4. Never present: 9 (13.2%)
   5. Do not know: 2 (2.9%)

23. Learner participation in asynchronous discussions is required: N=68
   1. Always present: 50 (73.5%)
   2. Occasionally present: 9 (13.2%)
   3. Rarely present: 3 (4.4%)
   4. Never present: 4 (5.9%)
   5. Do not know: 2 (2.9%)

24. Learner participation in asynchronous discussion is graded: N=68
   1. Always present: 44 (64.7%)
   2. Occasionally present: 12 (17.6%)
   3. Rarely present: 4 (4.4%)
   4. Never present: 7 (10.3%)
   5. Do not know: 2 (2.9%)

25. Instructor outlines expectations for discussion posts: N=68
   1. Always present: 46 (67.6%)
   2. Occasionally present: 11 (16.2%)
   3. Rarely present: 4 (5.9%)
   4. Never present: 5 (7.4%)
   5. Do not know: 2 (2.9%)

26. Synchronous opportunities are incorporated into the course: N=68
   1. Always present: 18 (26.5%)
   2. Occasionally present: 23 (33.8%)
   3. Rarely present: 15 (22.1%)
   4. Never present: 8 (11.8%)
   5. Do not know: 4 (5.9%)

27. Learners present their course projects to peers: N=68
   1. Always present: 21 (30.9%)
   2. Occasionally present: 28 (41.2%)
   3. Rarely present: 7 (10.3%)
   4. Never present: 8 (11.8%)
   5. Do not know: 4 (5.9%)

28. Learners produce or create a product at the end of the course: N=68
   1. Always present: 38 (55.9%)
   2. Occasionally present: 21 (30.9%)
   3. Rarely present: 7 (10.3%)
   4. Never present: 1 (1.5%)
   5. Do not know: 1 (1.5%)

29. Learners are encouraged to explore content beyond the boundaries of the course: N=68
   1. Always present: 40 (59.7%)
   2. Occasionally present: 18 (26.9%)
   3. Rarely present: 7 (10.4%)
   4. Never present: 0 (0)
   5. Do not know: 2 (3%)

30. Course uses videos, simulations, podcasts or other multi-media to communicate content: N=67
   1. Always present: 41 (61.2%)
   2. Occasionally present: 21 (31.3%)
   3. Rarely present: 4 (6%)
   4. Never present: 1 (1.5%)
   5. Do not know: 0 (0%)

31. Rubrics are used to assess and communicate expectations in course assignments: N=67
   1. Always present: 49 (73.1%)
   2. Occasionally present: 9 (13.4%)
   3. Rarely present: 5 (7.5%)
   4. Never present: 3 (4.5%)
   5. Do not know: 1 (1.5%)

32. Student peer assessment is used in the course as formative assessment: N=67
   1. Always present: 14 (20.9%)
   2. Occasionally present: 21 (31.3%)
   3. Rarely present: 13 (19.4%)
   4. Never present: 13 (19.4%)
   5. Do not know: 3 (4.4%)

33. Deadlines for completion of work are regularly scheduled throughout the course:
   1. Always present: 59 (88.1%)
   2. Occasionally present: 7 (10.4%)
   3. Rarely present: 3 (4.5%)
   4. Never present: 0 (0)
   5. Do not know: 1 (1.5%)

34. Instructions on how learners can meet course objectives are clearly stated: N=67
   1. Always present: 56 (83.6%)
   2. Occasionally present: 7 (10.4%)
   3. Rarely present: 3 (4.5%)
   4. Never present: 0 (0)
   5. Do not know: 1 (1.5%)

35. Samples of excellent work or case studies are used to demonstrate expectations the instructor has of the learners for an assignment: N=67
   1. Always present: 18 (26.9%)
   2. Occasionally present: 27 (40.3%)
   3. Rarely present: 9 (13.4%)
   4. Never present: 11 (16.4%)
   5. Do not know: 2 (3%)

36. Learners are able to submit work in multi-media formats (video, audio, simulation, digital storytelling): N=67
   1. Always present: 30 (44.8%)
   2. Occasionally present: 20 (29.9%)
   3. Rarely present: 10 (14.9%)
   4. Never present: 5 (7.5%)
   5. Do not know: 2 (3%)




References

Allen, I. E., & Seaman, J. (2015). Grade level: Tracking online education in the United States. Retrieved from http://www.onlinelearningsurvey.com/reports/gradelevel.pdf

Anderson, G., Tredway, C., & Calice, C. (2015). A Longitudinal Study of Nursing Students' Perceptions of Online Course Quality. Journal of Interactive Learning Research, 26(1), 5-21.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.

Barczyk, C., Buckenmeyer, J., Feldman, L., & Hixon, E. (2011). Assessment of a University-Based Distance Education Mentoring Program from a Quality Management Perspective. Mentoring & Tutoring: Partnership in Learning, 19(1), 5-24.

Bates, T. (2015). Teaching in a Digital Age: Guidelines for designing teaching and learning. Retrieved from http://opentextbc.ca/teachinginadigitalage/chapter/11-1-what-do-we-mean-by-quality-when-teaching-in-a-digital-age/
 
Bloxham, K. T. (2010). Using formative student feedback: A continuous quality improvement approach for online course development. ProQuest LLC.

Brown, B., Eaton, S. E., Jacobsen, D. M., Roy, S., & Friesen, S. (2013). Instructional Design Collaboration: A Professional Learning and Growth Experience. Journal of Online Learning & Teaching, 9(3), 439-452.

Cercone, K. (2008). Characteristics of Adult Learners with Implications for Online Learning Design. AACE Journal, 16(2), 137-159.

Chao, I. T., Saj, T., & Hamilton, D. (2010). Using Collaborative Course Development to Achieve Online Course Quality Standards. International Review of Research in Open & Distance Learning, 11(3), 106-126.

Chickering, A. W., & Ehrmann, S. C. (1996). Implementing the Seven Principles: Technology as a Lever. American Association for Higher Education Bulletin, 49(2), 3-6.

Chickering, A. W., & Gamson, Z. F. (1987). Seven Principles for Good Practice in Undergraduate Education. AAHE Bulletin, 3-7.

Ciobanu, A., & Ostafe, L. (2014). Student Satisfaction and Its Implications in the Process of Teaching. Acta Didactica Napocensia, 7(4), 31-36.

Dick, W. O., Carey, L., & Carey, J. O. (2004). The systematic design of instruction. Boston: Allyn & Bacon.

Fosnot, C. (1996). Constructivism: Theory, perspectives, and practice. New York: Teachers College Press.

Grace, D., Weaven, S., Bodey, K., Ross, M., & Weaven, K. (2012). Putting student evaluations into perspective: The Course Experience Quality and Satisfaction Model (CEQS). Studies in Educational Evaluation, 38(2), 35-43. doi:10.1016/j.stueduc.2012.05.001

Graham, C., Cagiltay, K., Lim, B.-R., Craner, J., & Duffy, T. M. (2001). Seven principles of effective teaching: A practical lens for evaluating online courses. Technology Source.

Harrison, R., Gemmell, I., & Reed, K. (2014). Student Satisfaction with a Web-Based Dissertation Course: Findings from an International Distance Learning Master's Programme in Public Health. International Review of Research in Open and Distance Learning, 15(1), 182-202.

Herman, J. H. (2012). Faculty Development Programs: The Frequency and Variety of Professional Development Programs Available to Online Instructors. Journal of Asynchronous Learning Networks, 16(5), 87-106.

Hill, L. H. (2014). Graduate Students’ Perspectives on Effective Teaching. Adult Learning, 25(2), 57-65.

Jaggars, S. S., & Xu, D. (2016). How do online course design features influence student performance? Computers & Education, 95, 270-284. doi:10.1016/j.compedu.2016.01.014

Knowles, M. (2005). The adult learner: The definitive classic in adult education and human resource development (6th ed.). London, UK: Butterworth-Heinemann.

Maringe, F., & Sing, N. (2014). Teaching large classes in an increasingly internationalising higher education environment: pedagogical, quality and equity issues. Higher Education, 67(6), 761-782. doi:10.1007/s10734-013-9710-0

Marmon, M., Vanscoder, J., & Gordesky, J. (2014). Online Student Satisfaction: An Examination of Preference, Asynchronous Course Elements and Collaboration among Online Students. Current Issues in Education, 17(3), 1-11.

Merrill, M. D. (2001). First Principles of Instruction. Journal of Structural Learning & Intelligent Systems, 14(4), 459.

Mitchell, R. L. G. (2010). Approaching Common Ground: Defining Quality in Online Education. New Directions for Community Colleges, 2010(150), 89-94. doi:10.1002/cc.408

Monroe, R. M. (2011). Instructional design and online learning: A quality assurance study. ProQuest LLC.

MSCHE. (2011). Distance education programs: Interregional guidelines for the evaluation of distance education (Online learning).

Mudd, A., Summey, T., & Upson, M. (2015). It Takes a Village to Design a Course: Embedding a Librarian in Course Design. Journal of Library & Information Services in Distance Learning, 9(1/2), 69-88. doi:10.1080/1533290X.2014.946349

Newhouse, R., Buckley, K. M., Grant, M., & Idzik, S. (2013). Reconceptualization of a Doctoral EBP Course from In-class to Blended Format: Lessons Learned from a Successful Transition. Journal of Professional Nursing, 29(4), 225-232. doi:10.1016/j.profnurs.2012.05.019

Parscal, T., & Riemer, D. (2010). Assuring Quality in Large-Scale Online Course Development. Online Journal of Distance Learning Administration, 13(2).

Patterson, B. P. (2007). A comparative study of factors related to attrition in online and campus based master's degree programs. East Carolina University.  

Piña, A. A., & Bohn, L. (2014). Assessing Online Faculty: More Than Student Surveys and Design Rubrics. Quarterly Review of Distance Education, 15(3), 25-34.

Poulin, R., & Straut, T. (2015). [Blog post]. Retrieved from https://wcetblog.wordpress.com/2015/03/10/ipedsenrollments/

Puzziferro, M., & Shelton, K. (2008). A Model for Developing High-Quality Online Courses: Integrating a Systems Approach with Learning Theory. Journal of Asynchronous Learning Networks, 12(3-4), 119-136.

Puzziferro, M., & Shelton, K. (2009). Supporting Online Faculty--Revisiting the Seven Principles (A Few Years Later). Online Journal of Distance Learning Administration, 12(3).

Reigeluth, C. M. (Ed.). (1999). Instructional-design theories and models: Vol. 2. A new paradigm of instructional theory. Hillsdale, NJ: Lawrence Erlbaum.

Reiser, R. (2001). A history of instructional design and technology: Part II: A history of instructional design. Educational Technology Research and Development, 49(2), 57-67.

Roehrs, C., Wang, L., & Kendrick, D. (2013). Preparing Faculty to Use the Quality Matters Model for Course Improvement. Journal of Online Learning & Teaching, 9(1), 52-67.

Romeu Fontanillas, T., Romero Carbonell, M., & Guitert Catasús, M. (2016). E-assessment Process: Giving a Voice to Online Learners. International Journal of Educational Technology in Higher Education, 13(1), 1-14. doi:10.1186/s41239-016-0019-9

Shelton, K. (2010). A Quality Scorecard for the Administration of Online Education Programs: A Delphi Study. Journal of Asynchronous Learning Networks, 14(4), 36-62.

Shelton, K. (2011). A quality scorecard for the administration of online education programs: A Delphi study. ProQuest Information & Learning. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2011-99091-186&site=ehost-live&scope=site

Simpson, J. M., & Benson, A. D. (2013). Student Perceptions of Quality and Satisfaction in Online Education. Quarterly Review of Distance Education, 14(4), 221-231.

U.S. Department of Education, Office of Educational Technology. (2010). National Education Technology Plan: Transforming American education. Learning Powered by Technology. Retrieved from: http://files.eric.ed.gov/fulltext/ED512681.pdf

U.S. Department of Education, Office of Educational Technology. (2016). National Education Technology Plan: Future ready learning: Re-imagining the role of technology in education. Retrieved from http://tech.ed.gov/files/2015/12/NETP16.pdf

Vygotsky, L. (1978). Mind in society. Cambridge, MA: Harvard University Press.

Weaver, D., & Esposto, A. (2012). Peer Assessment as a Method of Improving Student Engagement. Assessment & Evaluation in Higher Education, 37(7), 805-816. doi:10.1080/02602938.2011.576309

Kathleen Anne Lenert, MET, is a graduate of the Master of Educational Technology program at the University of British Columbia. She has worked for more than 20 years in research universities in Canada and the US, developing academic programs and integrating technology into graduate medical education programs. Her focus is on training instructors to design digital spaces where learning is built in a distributed environment. She currently works as the Associate Director of Research Learning & Development in the Office of Clinical Research at the Medical University of South Carolina. E-mail: lenertk@musc.edu

Dr. Diane P. Janes, M.Ed., MBA, has held senior teaching and academic positions in educational technology and instructional design in education and business, and now works with the Learning Engagement Office (LEO), Faculty of Extension at the University of Alberta. She has a track record spanning 20+ years in the rapidly changing and dynamic environment of higher education. With vast experience in continuing studies and professional studies programming, teaching, leadership, management, and administration, Dr. Janes has spent the majority of her career within large university settings, with significant experience across lifelong learning units, distance and online learning, new programming, instructional design and development, and program evaluation. She is focused on innovative practices in higher education teaching and management leadership, with a strong emphasis on new media engagement, faculty development, strategic planning, and sustainable teaching practice. E-mail: djanes@ualberta.ca