Learning in Communities of Inquiry: A Review of the Literature

Liam Rourke and Heather Kanuka

VOL. 23, No. 1, 19-48

Abstract

The purpose of this study was to investigate learning in communities of inquiry (CoI) as the terms are defined in Garrison, Anderson, and Archer's (2000) framework. We identified 252 reports from 2000 to 2008 that referenced the framework, and we reviewed them using Ogawa and Malen's (1991) strategy for synthesizing multivocal bodies of literature. Of the 252 reports, 48 collected and analyzed data on one or more aspects of the CoI framework; only five included a measure of student learning. Predominantly, learning was defined as perceived learning and assessed with a single item on a closed-form survey. Concerns about the soundness of such measures pervade the educational measurement community; in addition, we question the validity of the particular items employed in the CoI literature. Bracketing these concerns, the review indicates that it is unlikely that deep and meaningful learning arises in CoI. Students associate the surface learning that does occur with independent activities or didactic instruction, not with sustained communication in critical CoI. We encourage researchers to conduct more, and more substantial, investigations into the central construct of this popular framework for e-learning, and theorists to respond to the mounting body of disconfirming evidence.

Résumé

Le but de cette étude était d’examiner l’apprentissage dans des communautés d’investigation (COI) tel que défini par le dispositif de Garrison et coll. (2000). Nous avons identifié 252 rapports de 2000 à 2008 qui font référence à ce dispositif, et nous les avons passés en revue en utilisant la stratégie d’Ogawa et Malen pour synthétiser des corpus de littérature multivocaux. Des 252 rapports, 48 ont collecté et analysé des données sur un ou plusieurs aspects du dispositif COI; seulement cinq incluaient une mesure de l’apprentissage étudiant. De façon prédominante, l’apprentissage était défini comme l’apprentissage perçu et évalué à l’aide d’un seul item sur un sondage de type fermé. Des inquiétudes quant à la valeur de telles mesures imprègnent la communauté de recherche en évaluation en éducation; de plus, nous remettons en question la validité des items employés dans la littérature COI. Mises à part ces inquiétudes, notre revue des rapports indique qu’il est peu probable qu’un apprentissage profond et significatif se produise en COI. Les étudiantes et étudiants associent l’apprentissage superficiel qui se produit à des activités indépendantes ou à de l’instruction didactique; pas à de la communication soutenue en COI critique. Nous encourageons les chercheurs à conduire des études plus substantielles sur le construit principal de ce dispositif populaire pour le eLearning et les théoriciens à répondre au corpus d’évidences négatives grandissant.

Introduction

In 2000, Garrison, Anderson, and Archer presented a set of suggestions for improving distance higher education with new communication media. In the ensuing years, their framework became the basis for a substantial number of studies: the Proquest Dissertations and Theses database identifies 18 doctoral and master's studies that explore aspects of the framework, and Google Scholar indexes 237 articles citing Garrison et al.'s germinal document. Unfortunately, few studies examine the framework's central claim. That claim concerns deep and meaningful learning, yet researchers have been preoccupied with tangential issues such as student satisfaction with e-learning or techniques for measuring communicative action. Garrison et al.'s framework is not one of student satisfaction, nor is it one of educational measurement. The purpose of this report is to examine learning in communities of inquiry.

Literature Review

The Community of Inquiry Framework

The impetus for Garrison et al.'s (2000) germinal article was the proliferation of computer conferencing throughout higher education at the turn of the 21st century. The technology, invented in the 1970s and introduced into distance education in the mid-1980s, reached a tipping point in popularity in the late 1990s. Today, online, asynchronous, textual forums pervade higher education. However, the contagion brought the technology to practitioners who lacked the technical, experiential, or theoretical background to deploy it productively. Opportunities for interaction and collaboration were new to practitioners of distance education, and opportunities for asynchronous textual communication among students were new to classroom instructors.

Attuned to the ruinous effects of the technological imperative on education (see, for example, Cuban, 1986, 2001; Postman, 1993, 2003), Garrison et al. (2000) sought to inform this transition with wisdom from the educational canon. Synthesizing a wide body of educational research, they identified a parsimonious set of issues for stakeholders to consider. Together, these issues are known widely as the Community of Inquiry (CoI) framework.

The CoI framework comprises three elements: social presence, teaching presence, and cognitive presence. Garrison et al. (2000) presented the following definition of social presence: “The ability of participants in a community of inquiry to project their personal characteristics into the community thereby presenting themselves to others as real people” (p. 89). Operationally, social presence is defined by frequency counts of three types of communicative action in a computer conference: emotional, cohesive, and open.

The responsibilities of the instructor in online or blended learning environments are collectively called teaching presence. They are: a) instructional design, b) discourse facilitation, and c) direct instruction. Garrison et al. (2000) articulate several specific duties for each of these three broad categories.

Paramount in the framework is the construct cognitive presence, which the authors define as “the extent to which the participants in any particular configuration of a community of inquiry are able to construct meaning through sustained communication” (2000, p. 12). Operationally, cognitive presence is identified through frequency counts of four types of discourse: triggering events, exploration, integration, and resolution.

In tandem, the three presences constitute the CoI framework. Garrison has continued to refine the framework (Garrison & Cleveland-Innes, 2005; Garrison, 2003; Ice, Arbaugh, Diaz, Garrison, Richardson, Shea, & Swan, 2007; Rourke, Anderson, Garrison, & Archer, 1999; Vaughan & Garrison, 2005), but the core thesis is unchanged: In an environment that is supportive intellectually and socially, and with the guidance of a knowledgeable instructor, students will engage in meaningful discourse and develop personal and lasting understandings of course topics.

Some elements of this thesis have been explored empirically. In a casual review of this literature, Garrison and Arbaugh (2007) highlighted a few results, and conveyed the following conclusions:

Cognitive Presence

Social Presence

Teaching Presence

These conclusions, though modest, are supportive of Garrison et al.'s (2000) original projections. The triviality of these conclusions arises from two interrelated problems with the CoI as a program of research. First, the studies investigate issues that are peripheral to the CoI conceptual framework, which makes attention to them premature. Studies investigating student satisfaction and its relation to social and teaching presence typify this first problem (Lomicka & Lord, 2007; Nippard & Murphy, 2007; Shea et al., 2006; Shea, Pickett, & Pelt, 2003; Swan & Shih, 2005), as do essays that explore arcane issues of educational measurement (Garrison, Cleveland-Innes, Koole, & Kappelman, 2006; Rourke, Anderson, Garrison, & Archer, 2001). Neither issue is central to the CoI framework, and investigations of them should be postponed until the fundamental claims of the model are established.

Second, few of these issues can be studied meaningfully until the central matters have been settled. Until researchers can identify instances and non-instances of deep and meaningful learning, prescriptions about, say, an instructor's responsibilities in an online forum, an idealized discursive process, or the nature of mediated, affiliative communication are untethered from any tangible criteria. Garrison et al.'s (2000, 2001) concern with new communication media is secondary to their concern with deep and meaningful learning in higher education. In the CoI framework, they position social, teaching, and cognitive presence as predictive of deep and meaningful learning. In the language of experimental design, the presences are the independent variables that determine deep and meaningful learning, the dependent variable. Therefore, deep and meaningful learning is the primary issue to be investigated substantially before other issues become relevant and researchable.

Surprisingly, few of the 200-300 studies that cite the CoI framework demonstrate a concern with establishing the existence of deep and meaningful learning. The closest researchers have come is the measurement of cognitive presence, and a worrisome conclusion emerges from these studies: Students engage only in the lower levels of the practical inquiry process (triggering events and exploration); instances of engagement in the higher levels (integration and resolution) are rare, and examples of groups of students engaging in a full cycle of cognitive presence have not been documented (Garrison, Anderson, & Archer, 2001; Vaughan & Garrison, 2005; Kanuka et al., 2007; Kanuka, 2003). An inability to identify first-hand instances of deep and meaningful learning presents a serious challenge to a prominent model of e-learning. It also makes studies of teaching, social, and cognitive presence inopportune.

Deep and Meaningful Learning

Articles about the CoI framework focus on learning and teaching processes rather than outcomes, and Garrison et al. (2001) are vague about the learning objectives the framework addresses. Nevertheless, the framework tilts toward certain types of outcomes, and locating those outcomes is essential to evaluating the framework and orienting research.

On various occasions, Garrison et al. (2001, 2000) mention two types of educational objectives: critical thinking and deep and meaningful learning. We focus on the latter in this study based on suggestions from the framework's lead author (R. Garrison, personal communication, January 4, 2008).

Meaningful learning. Hay (2007) explains that meaningful and deep, as the terms are used to modify learning, are related concepts. The current discourse of meaningful learning began with Ausubel (1961) and his distinction between meaningful learning and verbal learning. He associated verbal learning with educational activities such as rote and reception learning, and he identified them as forms of teaching in which the entire content of what is to be learned is presented to the learner in its final form. The learners' only responsibility is to internalize the ready-made concepts. Conversely, Ausubel associated meaningful learning with educational approaches such as discovery and problem-based learning. The commonality is the process of discovering the content that must be learned. The responsibility of the learner in this mode is more demanding and involves formulating relationships between substantive aspects of new concepts, and between new concepts and existing ones.

Deep learning. In higher education, the term deep learning is associated strongly with a program of research initiated by Marton and Saljo (1976) who proposed a distinction between surface and deep learning. They characterized surface learning as the uncritical acceptance of new facts and ideas and the attempt to store them as isolated items; conversely, they characterized deep learning as the critical examination of new facts and the effort to make numerous connections with existing knowledge structures. As Hay (2007) points out, there are correspondences between Ausubel's construction of meaningful learning and Marton and Saljo's construction of deep learning.

Deep and meaningful learning. In the CoI framework, the terms are joined, and it is evident throughout the CoI articles (Garrison et al., 2001; 2000) that deep and meaningful learning is consistent with Marton and Saljo's (1976) and Ausubel's (1961) definitional work.

The purpose of this study is to determine the extent to which deep and meaningful learning occurs in communities of inquiry. In this preliminary investigation, we do so by reviewing the empirical studies of Garrison et al.'s framework that investigate learning.

Method

Synthesis Strategy

Research on the CoI framework is a mix of qualitative and quantitative studies. Ogawa and Malen (1991) refer to such bodies of literature as multivocal, and they developed a complementary method for reviews and syntheses based on the principles and procedures of the exploratory case study method. Gall, Gall, and Borg (2006) summarize the seven steps of this method:

  1. Define the focus of the review: The focus of our review is learning as it has been investigated empirically within the framework of Garrison et al.'s CoI.
  2. Search for relevant literature: Our search explored three databases: Google Scholar, the CoI web site (Teaching and Learning Centre, 2007), and Proquest Dissertations and Theses. We began with the first two databases because they allowed us to focus our review specifically on studies that built on the CoI framework: its assumptions, constructs, and instrumentation. This enabled us to distinguish between the large number of reports that deal generally with e-learning, blended learning, and distance education, and the topic that our review focuses on, i.e., the CoI. The progenitors of the CoI framework are diligently cataloguing all of the reports that build on their model, and Google Scholar is useful in a) aggregating peer-reviewed papers, theses, books, abstracts, and articles from academic publishers, professional societies, preprint repositories, universities, and other scholarly organizations; and b) cross-referencing the reports. In Google Scholar, we used the search terms Garrison Anderson Archer to locate the keystone articles (Garrison et al., 2000, 2001) and the 235 articles that referenced them.
  3. Classify the documents: Our classification of documents involved i) reading the abstracts of the documents that were returned in the searches, ii) categorizing the reports into the APA article-type categories of theoretical, review, or empirical, and iii) selecting empirical studies in which data on one or more aspects of the CoI framework had been collected and analyzed. This yielded 48 studies dating from 2000 to 2008.
  4. Create summary databases: For these studies, we created a summary database in which we categorized the details of each study (see Table 1).
  5. Identify constructs and hypothesized causal linkages: The constructs that interested us were i) deep and meaningful learning, which we position as an educational outcome and, therefore, dependent variable based on our reading of Garrison et al.'s corpus (2001, 2003, 2005, 2006), and ii) the three presences (social, teaching, and cognitive), which we position as independent variables. Garrison et al. imply that CoI are an emergent property of the co-occurrence of the three presences, and that deep and meaningful learning is a property of CoI. Within the boundaries of this study, we are concerned only with learning outcomes, not with the learning processes that many CoI researchers have explored. This distinction between processes and outcomes, and their antecedent-subsequent relationship, is consistent with conceptual and empirical work on deep and surface learning (Hay, 2007; Kember, 2000; Entwistle & Tait, 1984) and meaningful and verbal learning (Novak, 1998; Novak & Symington, 1982).
  6. Search for contrary and rival interpretations: In evaluating the reports' results, we compared them with the empirical work on deep and surface learning and with current work in educational assessment.
  7. Use colleagues or informants to corroborate findings: Using a process similar to the peer debriefing suggested by Lincoln and Guba (1985), we discussed our findings and provisional conclusions with the framework's progenitors and with researchers who have studied and published reports on the CoI. We asked them to evaluate the soundness of our conclusions, to recommend relevant reports not included in our literature review, and to suggest alternative interpretations.

Findings and Discussion

Our search of the three databases identified 252 reports that reference Garrison et al.'s (2001, 2000) keystone articles on the CoI framework. Of these, 57 took some element of the CoI as their primary focus:

• cognitive presence (n = 26)
• social presence (n = 24)
• teaching presence (n = 23)
• learning (n = 5)

Categorized into the APA's article types, the reports were:

• empirical (n = 48)
• theoretical (n = 5)
• review (n = 2)

Invariably, learning was operationalized as perceived learning measured through self-reports with survey items (see Table 1). We review those studies here.

Table 1. Summary of empirical studies of the community of inquiry framework, 2000-2008

Study
AT
ML
ACOI
Akyol, Z., & Garrison, R. (2008). The development of a CoI over time in an online course: Understanding the progression and integration of social, cognitive, and teaching presence. Journal of Asynchronous Learning Networks.
E
N
SP
TP
CP
Anderson, T., Rourke, L., Garrison, D. R., & Archer, W. (2001). Assessing teaching presence in a computer conferencing environment. Journal of Asynchronous Learning Networks, 5(2).
E
N
TP
Arbaugh, J.B. (2007). An empirical verification of the Community of Inquiry framework. Journal of Asynchronous Learning Network, 11(1), 73-85.
E
N
SP
TP
CP
Arbaugh, J. B., & Hwang, A. (2006). Does "teaching presence" exist in online MBA courses? The Internet and Higher Education, 9(1), 9-21.
E
N
TP
Garrison, D.R., & Arbaugh, J.B. (2007). Researching the community of Inquiry Framework: Review, issues, and future directions. The Internet and Higher Education, 10(3), 157-172.
R
N
SP
TP
CP
Garrison, D. R., Cleveland-Innes, M., Koole, M., & Kappelman, J. (2006). Revisiting methodological issues in the analysis of transcripts: Negotiated coding and reliability. The Internet and Higher Education, 9(1), 1-8.
Es
N
SP
TP
CP
Cleveland-Innes, M., Garrison, D.R., & Kinsel, E. (2007). Role adjustment for learners in an online community of inquiry: Identifying the challenges of incoming online learners. International Journal of Web-Based Learning and Teaching Technologies, 2(1), 1-16.
E
N
SP
TP
CP
Conrad, D. (2005). Building and maintaining community in cohort-based online learning. Journal of Distance Education, 20(1), 1-2
E
N
SP
Delfino, M., & Manca, S. (2007). The expression of social presence through the use of figurative language in a web-based learning environment. Computers in Human Behavior, 23, 2190-2211.
E
N
SP
Garrison, D.R., Cleveland-Innes, M., & Fung, T. (2004). Student role adjustment in online communities of inquiry: Model and instrument validation. Journal of Asynchronous Learning Network, 8(2), 61-74.
E
N
 
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. Internet and Higher Education, 2(2-3), 87-105.
Es
N
SP
TP
CP
Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1).
E
N
SP
TP
CP
Garrison, D. R. (2003). Cognitive presence for effective asynchronous online learning: The role of reflective inquiry, self-direction and metacognition. In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Practice and direction. Volume 4 in the Sloan C Series. Needham, MA: The Sloan Consortium.
Es
N
SP
TP
CP
Garrison, D. R. & Cleveland-Innes, M. (2005). Facilitating cognitive presence in online learning: interaction is not enough. American Journal of Distance Education, 19(3), 133-148.
E
N
TP
Hall, T. (2005). Critical thinking, self-direction and online learning: A practical inquiry perspective in higher education. (Doctoral Dissertation, University of South Dakota, 2005). Proquest Dissertations and Theses, ATT 3172094.
E
N
 
Heckman, R., & Annabi, H. (2002). A content analytic comparison of FTF and ALN case-study discussions. Paper presented at the 36th Hawaii International Conference on System Sciences, 2002.
E
N
CP
Hensley, R. (2003). Graduate nursing students' perceptions of online classroom environment, teacher use of humor, and course satisfaction. Grambling State University. (Doctoral Dissertation, Grambling State University, 2003). Proquest Dissertations and Theses, ATT 3119005.
E
N
 
Hobgood, B. (2007). Perceptions of motivation, enjoyment, and learning from online discussions by North Carolina high school students in online, Advanced Placement Psychology courses. (Doctoral Dissertation, University of North Carolina, 2007). Proquest Dissertations and Theses, ATT 3263471.
E
PL
 
Kanuka, H. (n.d.). An exploration into facilitating higher levels of learning in a text-based Internet learning environment using diverse instructional strategies.
E
SOLO Taxonomy
 
Kanuka, H., Rourke, L. & Laflamme, E. (2007). The influence of instructional methods on the quality of online discussion. British Journal of Educational Technology, 38(2), 260-271.
E
N
TP
CP
Kanuka, H., Garrison, D.R. (2004). Cognitive Presence in Online Learning. Journal of Computing in Higher Education, 15(2), 30-48.
Es
N
CP
Lomicka, L. & Lord, G. (2007). Social presence in virtual communities of foreign language (FL) teachers. System, 35, 208-228.
E
N
SP
Liang, K. (2006). Promoting social presence: Building connectedness in educational cyberspace. (Doctoral Dissertation, University of British Columbia, 2006). Proquest Dissertations and Theses, ATT NR19894.
E
N
SP
McKlin, T., Harmon, S.W., Evans, W., & Jones, M.G. (2001). Cognitive presence in web-based learning: A content analysis of students' online discussions. American Journal of Distance Education, 15(1), 7-23.
E
N
CP
MacLachlan, D. (2004). Exploring self-direction in an online learning community. (Doctoral Dissertation, University of Calgary, 2004). Proquest Dissertations and Theses, ATT NQ97752.
E
N
 
Maguire, K. (2005). Professional development in a blended e-learning environment for middle school mathematics teachers. (Doctoral Dissertation, University of Toronto, 2005). Proquest Dissertations and Theses, ATT MR07210.
E
N
 
Meyer, K. (2003). Face-to-face versus threaded discussions: The role of time and higher-order thinking. Journal of Asynchronous Learning Networks, 7(3), 55-65.
E
N
CP
Meyer, K. (2004). Evaluating online discussions: Four different frames of analysis. Journal of Asynchronous Learning Networks, 8(2), 101-114.
E
Bloom's Taxonomy
CP
Murphy, E. (2004). Identifying and measuring ill-structured problem formulation and resolution in online asynchronous discussions. Canadian Journal of Learning and Technology, 30(1).
E
N
 
Nippard, E., & Murphy, E. (2007). Social presence in the web-based synchronous secondary classroom. Canadian Journal of Learning and Technology, 33(1).
E
N
SP
Redmond, P., & Lock, J.V. (2006). A flexible Framework for online collaborative learning. The Internet and Higher Education, 9, 267-27
T
N
SP
TP
CP
Rogers, P., & Lea, M. (2005). Social presence in distributed group environments: The role of social identity. Behaviour & Information Technology, 24(2), 151-158.
E
N
SP
Richardson, J., & Swan, K. (2003). Examining social presence in online courses in relation to students' perceived learning and satisfaction. Journal of Asynchronous Learning Networks, 7(1).
E
PL
SP
Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Methodological issues in the content analysis of computer conference transcripts. International Journal of Artificial Intelligence in Education, 12.
R
N
 
Rourke, L., Anderson, T. Garrison, D. R., & Archer, W. (1999). Assessing social presence in asynchronous, text-based computer conferencing. Journal of Distance Education, 14(3), 51-70.
E
N
SP
Rourke, L., & Anderson, T. (2002). Using peer teams to lead online discussion. Journal of Interactive Media in Education, (1). Online [Available]: http://www-jime.open.ac.uk/
E
PL
TP
Rourke, L., & Anderson, T. (2002). Exploring social interaction in computer conferencing. Journal of Interactive Learning Research, 13(3), 257-273.
E
N
SP
Rovai, A.P. (2002). Sense of community, perceived cognitive learning, and persistence in asynchronous learning networks. The Internet and Higher Education, 5(4), 319-332.
E
PL
SP
CP
Shea, P.J. (2006). A study of students' sense of community in online learning environments. Journal of Asynchronous Learning Network, 10(1), 35-44
E
N
SP
TP
CP
Swan, K., & Shih, L.F. (2005). On the nature and development of social presence in online course discussions. Journal of Asynchronous Learning Networks, 9(3), 115-136.
E
N
SP
Pawan, F., Paulus, T., Yalcin, S., & Chang, C. (2003). Online learning: Patterns of engagement and interaction among in-service teachers. Language Learning & Technology, 7(3), 119-140
E
N
TP
Schrire, S. (2004). Interaction and cognition in asynchronous computer conferencing. Instructional Science, 32(6), 475-502.
E
N
CP
Schrire, S. (2005). Knowledge building in asynchronous discussion groups: Going beyond quantitative analysis. Computers & Education, 46(1), 49-70.
E
N
CP
Shea, P., Pickett, A., & Pelt, W. (2003). A follow-up investigation of teaching presence in the SUNY Learning Network. Journal of Asynchronous Learning Networks, 7(2).
E
PL
TP
Shea, P., Li, C. S., & Pickett, A. (2006). A study of teaching presence and student sense of learning community in fully online and web-enhanced college courses. The Internet and Higher Education, 9(3), 175-190.
E
N
TP
Shi, S. (2005). Teacher moderating and student engagement in synchronous computer conferences. (Doctoral Dissertation, Michigan State University, 2005). Proquest Dissertations and Theses, ATT 3189741.
E
N
 
Strachota, E. (2003). Student satisfaction in online courses: An analysis of the impact of learner-content, learner-instructor, learner-learner and learner-technology interaction. (Doctoral Dissertation, The University of Wisconsin-Milwaukee, 2003) Proquest Dissertations and Theses, ATT 3100902.
E
N
 
Stein, D.S., Wanstreet, C.E., Glazer, H.R., Engle, C.L., Harris, R.T., Johnston, S.M., Simons, M.R., & Trinko, L.A. (2007). Creating shared understanding through chats in a community of inquiry. The Internet and Higher Education, 10, 103-115.
E
N
CP
Stein, D., & Wanstreet, C. (2005). Presence in a blended course: implications for communities of inquiry. Paper presented at the 21st Annual on Distance Teaching and Learning, Wisconsin.
E
N
SP
TP
CP
Tung, C. (2007). Perceptions of students and instructors of online and Web-enhanced course effectiveness in community colleges. (Doctoral Dissertation, University of Kansas, 2007). Proquest Dissertations and Theses, ATT 3284232.
E
N
N
Vaughan, N., & Garrison, D. R. (2005). Creating cognitive presence in a blended faculty development community. Internet and Higher Education, 8, 1-12.
E
N
CP
TP
Vaughan, N. (2005). Investigating how a blended learning approach can support an inquiry process within a faculty learning community. (Doctoral Dissertation, University of Calgary). Proquest Dissertations and Theses, ATT NR03893
E
N
 
Wanstreet, C. (2007). The effect of group mode and time in course on frequency of teaching, social, and cognitive presence indicators in a community of inquiry. (Doctoral Dissertation, Ohio State University, 2007). Proquest Dissertations and Theses, ATT 3247933.
E
N
SP
TP
CP
Waterston, R. (2006). Interaction in online interprofessional education case discussions. (Doctoral Dissertation, University of Toronto, 2006). Proquest Dissertations and Theses, ATT NR21836.
E
N
 
Wever, B.D., Schellens, T., Valcke, M. & Keer, H.V. (2006). Content Analysis schemes to analyze transcripts of online asynchronous discussion groups: A review. Computers & Education, 46(1), 6-28.
R
N
SP
TP
CP

AT = Article Type
ML = Measure of Learning
ACOI = Aspect of COI

Bloom = Bloom's Taxonomy
E = Empirical
Es = Essay
I = Integration
N = None
PL = Perceived Learning
R = Review
SOLO = Biggs' SOLO Taxonomy
SP = Social Presence
TP = Teaching Presence
CP = Cognitive Presence

Assessing Learning in CoI

Richardson and Swan (2003) used surveys to assess perceived learning in online courses. The item through which the authors assessed learning was: "My level of learning that took place in this course was of the highest quality." On a scale of one to six, the group's mean score for perceived learning was 4.7. The authors further analyzed the aggregated mean by learning activities and report that perceived learning was highest for written assignments and individual projects and lowest for group projects and class discussions.

In 2006, Shea, Li, and Pickett surveyed a random sample of 6088 students to gauge their sense of classroom community, teaching presence, and learning. Learning was explored with the prompt, "Overall, I learned a great deal in this online course." The authors do not provide a summary of the responses to this question but report correlations between learning and the three constituents of teaching presence (instructional design, facilitating discourse, and direct instruction). The correlations range from .43 to .64.

Hobgood (2007) analyzed self-reports of learning as part of his dissertation. He surveyed senior high school students with eight closed-ended items and three-point response options (agree, don't know, disagree). The majority of his respondents agreed that online discussions "broadened their knowledge of the subject" (62%), but their perceptions were equivocal on the remaining items. Substantial percentages of students responded disagree or don't know to the following statements: "The quality of my learning was improved by online collaborative learning" (60%), "I learned a great deal from my peers" (52%), "My ability to integrate facts was greatly improved" (48%), and "My ability to develop generalizations was improved" (45%) (p. 71).

Akyol and Garrison (2008) also included an item on perceived learning in their survey of a group of graduate students engaged in online learning. Their item read, "I learned much in this course." Like Shea et al. (2006), they were concerned mainly with the relationship between learning and the three presences, and they report a significant correlation between teaching presence and perceived learning (r = .55, p = .03). Descriptive statistics on the students' responses to the perceived learning item are not provided.

Discussion

Taken together, these cursory efforts at assessment provide an unfavorable picture of learning in communities of inquiry. Bracketing, for now, any methodological concerns with the studies, the following interpretations emerge. Students believe they learn a lot in CoI (Akyol & Garrison, 2008; Richardson & Swan, 2003). They believe their learning is of the type we would categorize at the lower levels of Bloom's taxonomy, but they are divided on their perceptions of achieving mid-level objectives (Hobgood, 2007). In addition to these perceptions of the quantity and type of learning, Hobgood's and Richardson and Swan's data provide an indication of how students learn in communities of inquiry. Hobgood's respondents were reluctant to agree with the statement "I learned a great deal from my peers," and Richardson and Swan's group preferentially associated learning with individual projects and written assignments over group work and discussions.

A synthesis of the data on perceived learning contradicts the assertion that students engage in deep and meaningful learning through sustained communication in critical communities of inquiry. According to Garrison et al. (2000, 2001), students should be acquiring the types of knowledge and higher-order skills associated with a university education (critical thinking, epistemic development, deep and meaningful learning), and they should be acquiring these through sustained critical discourse. They are not.

Garrison et al. could provide the best accounting for the gulf between the projections of their model and the data from empirical studies. Here we speculate on three failures of the CoI framework: first, as a program of research; second, as a model of e-learning; and third, as a means to engender deep and meaningful learning.

Failure of the CoI as a Program of Research

In two keystone articles, Garrison et al. (2001, 2000) present several assertions about the optimal configuration of communication technology, instructors, and students in higher education. A progressive and productive program of research would investigate each of the assertions beginning with the central claim about deep and meaningful learning. Our review shows that in the period since the keystone articles were published, there has not been a substantive attempt to investigate learning in CoI. In the following paragraphs, we explain how the existing studies are deficient.

Self-reports of Perceived Learning. In this section, we consider the use of self-reports as a measure of learning. We review the general concerns that have been targeted at this technique, and we discuss particular inadequacies in the assessment of learning as it is constructed in the CoI framework.

In a study of perceived learning and affective communication, Rovai (2002) discusses some of the methodological implications of self-reports as a measure of learning. He suggests that the only alternative to self-reports is course grades, which he argues have restricted ranges, are unreliable, and bear little relationship to student learning. Self-reports of learning, on the other hand, are consistent over time and have convergent validity which is demonstrated by their consistency with other measures.

Gonyea (2005) concedes Rovai's (2002) assertion about the reliability of self-report data but disputes claims of their validity (the extent to which a survey actually measures what it purports to measure). Reviewing the literature, Gonyea concludes that self-reports relate moderately well to attitudes toward courses but not to cognitive performance. He reports that self-perceptions offer very different portrayals of learning or performance than objective measures, and he suggests that personality factors (e.g., self-esteem, attitude, self-awareness) that have nothing to do with the particular construct being assessed strongly influence how people respond to self-report items.

On top of these general problems, there are particular problems with the self-report measures used in the CoI literature. These include problems with construct representativeness, ambiguous language, and a lack of information in the reports about the psychometric properties of the instruments and their administration.

Construct representativeness is the term used by specialists in educational measurement to indicate the extent to which items on a test fully capture the multidimensional nature of the variable under study. Student learning, for instance, is a construct composed of domains (e.g., cognitive, affective, psychomotor, conative), levels (e.g., remembering, understanding, applying), and subjects (e.g., topic one, topic two). Valid measures of learning include items that evoke rich information about each dimension.

Unfortunately, four of the five studies of learning in CoI use a single, closed-form survey item to evoke information about learning. Concerns about content validity arise around studies that make claims about learning based on a single self-report item such as "I learned much in this course" (Akyol & Garrison, 2008), "Overall, I learned a great deal in this online course" (Shea et al., 2006), or "My level of learning that took place in this course was of the highest quality" (Richardson & Swan, 2003). As measures of learning, these items are deficient in three respects. First, the construct under investigation is underrepresented. Second, the phrasing of the items is ambiguous, which permits various readings among the respondents and the researchers. For instance, if respondents agree strongly with the statement "I learned much in this course," it is unclear whether they have had significant changes in their attitude toward the subject matter, are able to perform course-related tasks proficiently, or are able to recall several relevant facts. Third, and most important, these items do not appear to be designed to elicit information about deep and meaningful learning, the learning objective that Garrison et al. (2000) tie to CoI.

Hobgood's (2007) survey addresses some problems of content validity. His instrument comprises eight items, and the items appear to target different types of learning consistent with Bloom's hierarchically ordered taxonomy, including factual knowledge ("I broadened my knowledge of the subject"), understanding ("My ability to integrate facts improved"), and synthesis ("My ability to develop generalizations improved"). Further improvements are discernible in work by Arbaugh (2004; 2005a; 2005b). He has also studied online learning, cast in the CoI framework, and used surveys to collect data on perceived learning. Like Hobgood's (2007), Arbaugh's surveys comprise multiple items, which evoke information on types and levels of learning, and the prompts communicate clearly the sense of 'learning' that is under study.

Moreover, Arbaugh reports reliability and validity information about the survey for each administration (Arbaugh, 2005a; 2005b; 2004). This allows readers to evaluate the interpretations and conclusions he offers.

Arbaugh's (2004, 2005a, 2005b) and Hobgood's (2007) studies are models for assessing student learning through self-report on closed-form surveys. There are several legitimate reasons for using this technique despite its weaknesses. Nevertheless, as research on the CoI framework nears the 10-year mark, it becomes increasingly important to employ robust measures of deep and meaningful learning. In the next section, we describe three such measures.

Substantive Measures of Deep and Meaningful Learning

Because higher-order learning is widely regarded as the hallmark of university education, assessing this outcome is a topic of intensive study. In this section, we describe 1) an instrument specifically designed to measure deep learning, 2) a concept-mapping technique that builds on the deep/surface literature, and 3) the test blueprinting technique.

Structure of Observed Learning Outcomes. Biggs, one of the principal contributors to the deep and surface learning discourse, developed two instruments. One, the Study Process Questionnaire (SPQ) (Biggs, 2002), is for measuring the processes of deep and surface learning (Biggs, 1987; 1999). Process has been investigated extensively in the CoI literature, including studies using the SPQ (Garrison & Cleveland-Innes, 2005), and it is not the focus of this study.

Biggs' other instrument is called the Structure of the Observed Learning Outcomes (SOLO) (Biggs & Collis, 1982). Like Bloom et al.'s taxonomy of educational objectives, the SOLO comprises hierarchically ordered categories of learning outcomes. The five categories, from highest to lowest, are:

  1. Extended Abstract: A coherent whole is generalized or reconceptualized at a higher level of abstraction.
  2. Relational: Aspects of an instructed domain are integrated so that the whole has a coherent structure and meaning.
  3. Multistructural: Several relevant aspects of a domain are presented but they are not integrated.
  4. Unistructural: The learner focuses on the relevant domain and presents one relevant aspect.
  5. Prestructural: The task is not taken up in an appropriate way; the student is distracted or misled into irrelevant aspects of the domain.

The basis of these categories is consistent with the formulation of student learning that emerges in the deep-surface literature and the meaningful-verbal literature. In those discourses, learning is formulated as the effortful integration of new concepts and existing knowledge structures.

Two researchers have used the SOLO to study learning in CoI. Schrire (2006) categorizes students' contributions to online forums in order to study learning processes, which is not the focus of this study. Kanuka (2002) examined the influence of five online, communicative activities on student learning. The activities were webquests, invited guests, brainstorming, debate, and nominal group. After completing the activities, the students composed reflective position papers, and Kanuka classified these into the SOLO categories. She demonstrated that the various learning activities elicited different levels of learning from the students, with webquests prompting the largest number of papers categorized in the highest level of SOLO (extended abstract) and brainstorming the fewest.

Parenthetically, Kanuka's (2002) data also support the conclusion, emerging from the studies of self-reported learning, that deep and meaningful learning does not occur in CoI. Of the 95 student submissions, the majority were classified at the multistructural level (41) or lower (unistructural [8], prestructural [2]). Recall that students' work is classified as multistructural when several aspects of a topic are represented, but connections between the aspects are not made. Kanuka's results are particularly disappointing because they arise in a situation in which a high level of attention was devoted to the instructional design of the online, collaborative learning activities. Though the five activities evoked different levels of learning, each was selected based on existing evidence, both empirical and logical, of its effectiveness. Each activity was educationally more sound than the activity we identified as predominant in online learning in a previous literature review, i.e., the week-long, whole-group, open-ended forum.

Outside the CoI literature, several researchers have used the SOLO to assess student learning in higher education (Biggs, 1979; Prosser & Trigwell, 1991; Trigwell & Prosser, 1991; Ivanitskaya, Clark, Montgomery, & Primeau, 2002; Chan, Tsui, & Chan, 2002; Boulton-Lewis, 1992). Reports indicate that it is a meaningful way to evaluate deep learning across disciplines, but that the process is time-consuming and unreliable. Burnett (1999) and Trigwell and Prosser suggested that the addition of subcategories would make classification more stable, and Chan et al. (2002) report on a successful test of this process.

Concept-Mapping. Since Ausubel introduced the notion of meaningful learning, its assessment has been conducted through concept mapping. Concept maps are graphs that represent the structure of students' declarative knowledge. They consist of nodes (representing concepts) linked by labeled lines (representing knowledge structures) (Novak, 1998). Ruiz-Primo (2000) summarizes much of the key work that has been done on concept maps as a tool for assessing learning.

Hay (2007) synthesizes this work with the research on deep and surface learning. As we noted earlier, he draws out many of the connections between these related literatures, and he presents a method in which maps created by students prior to instruction are compared to the maps they create at the conclusion of instruction. Like Biggs and Collis' (1982) SOLO taxonomy, the maps provide information about the students' efforts to structure information, and the pre-post comparison provides information about the extent to which new concepts are incorporated into existing knowledge frameworks.
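
To make the pre/post comparison concrete, the following is a minimal sketch, not Hay's actual scoring procedure: a concept map is represented as a set of labelled links (propositions), and the post-instruction map is compared with the pre-instruction map to see which concepts and propositions were added and whether the additions integrate prior knowledge. The data structure, function name, and example propositions are illustrative assumptions only.

```python
from typing import Dict, Set, Tuple

# A proposition links two concepts with a labelled relationship.
Link = Tuple[str, str, str]  # (concept, linking phrase, concept)

def map_change(pre_links: Set[Link], post_links: Set[Link]) -> Dict[str, set]:
    """Summarize structural change between pre- and post-instruction maps."""
    pre_nodes = {c for a, _, b in pre_links for c in (a, b)}
    new_links = post_links - pre_links
    return {
        # concepts that appear only after instruction
        "new_concepts": {c for a, _, b in new_links for c in (a, b)} - pre_nodes,
        # propositions added after instruction
        "new_links": new_links,
        # new propositions that tie previously held concepts together (integration)
        "integrative_links": {link for link in new_links
                              if link[0] in pre_nodes and link[2] in pre_nodes},
    }

# Toy example: the post-map adds facts but ties little to prior knowledge,
# the pattern this review would associate with surface rather than deep learning.
pre = {("plants", "produce", "glucose")}
post = pre | {("glucose", "stores", "energy"),
              ("plants", "differ from", "animals")}
print(map_change(pre, post))
```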

Test Blueprinting. Test blueprinting is the final method we will present for assessing deep and meaningful learning in CoI. A test blueprint is a matrix that represents the topics presented in a course of instruction and the level at which they are to be learned. Level is often operationalized in terms of Bloom's taxonomy of educational objectives (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956). The current version of the taxonomy (Anderson & Krathwohl, 2001) has six levels: remember, understand, apply, analyze, evaluate, and create. The levels are organized hierarchically; therefore, the types of outcomes addressed through CoI would focus on the final three or four.
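
As an illustration of how such a blueprint can be used to check whether assessment emphasizes higher-order outcomes, here is a minimal sketch under stated assumptions: the topics, item counts, and the share_of_higher_order helper are hypothetical, not drawn from the studies reviewed here.

```python
# Illustrative test blueprint: rows are course topics, columns are levels of
# the revised Bloom taxonomy, and cells count assessment items per cell.
LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

blueprint = {
    "topic one": {"remember": 2, "understand": 2, "analyze": 1, "evaluate": 1},
    "topic two": {"understand": 1, "apply": 2, "analyze": 2, "create": 1},
}

def share_of_higher_order(bp: dict, cutoff: str = "analyze") -> float:
    """Proportion of items at or above the cutoff level of the taxonomy."""
    start = LEVELS.index(cutoff)
    counts = [(LEVELS.index(level), n)
              for row in bp.values() for level, n in row.items()]
    total = sum(n for _, n in counts)
    higher = sum(n for idx, n in counts if idx >= start)
    return higher / total if total else 0.0

# A course aiming at deep and meaningful learning should weight the final
# three or four levels heavily.
print(f"{share_of_higher_order(blueprint):.0%} of items target analyze or above")
```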

Schrire (2006) and Meyer (2003, 2004) have conducted studies of the CoI framework in which they compare the types of educational outcomes represented in Bloom's taxonomy with the types represented in the SOLO taxonomy and with the phases of cognitive presence (Garrison et al., 2001). When categorizing segments of students' contributions to online forums, Schrire found that the segments classified in the higher levels of Bloom's taxonomy were also classified in the higher levels of Garrison et al.'s and Biggs and Collis' (1982) taxonomies. In a similar study with comparable results, Meyer also argued for equivalencies between deep and meaningful learning (Garrison et al.), structured learning (Biggs & Collis), and hierarchical learning (Bloom et al.). Thus, test blueprinting using Bloom's taxonomy is another useful way to assess deep and meaningful learning in CoI.

In this section, we examined some of the deficiencies of the CoI as a program of research and offered suggestions for subsequent studies. In the next section, we examine its shortcomings as a model of e-learning as we continue our attempt to account for the growing body of disconfirming evidence of CoI as forums for deep and meaningful learning. Specifically, we focus on one problematic element of the CoI framework, cognitive presence.

The specific problem we examine is the ambiguity of cognitive presence as either a description of student activity in online, educational forums, or a prescription for what they should do. Our point is that deep and meaningful learning may not occur because students are not engaged in the constituent processes.

No Cognitive Presence

A major problem for the CoI model is that researchers, including Garrison (Garrison et al., 2001), have not been able to identify clear instances of cognitive presence. As Garrison et al. describe it, cognitive presence is a four-stage process that begins with a triggering event: a dilemma that emerges from the students' experience or, in a formal educational setting, from instructional materials. As a dilemma, it evokes puzzlement, and this spurs students through three subsequent stages: 1) exploration, a wide search for solutions to the dilemma; 2) integration, the construction of meaning from the ideas generated in the previous phase; and 3) resolution, a solution to the original dilemma.

Several authors (Fahy, 2002; Garrison et al., 2001; Kanuka et al., 2007; McKlin, Harmon, Evans, & Jones, 2001; Stein et al., 2007; Schrire, 2004) have analyzed transcripts of students' contributions to online forums. Uniformly, they classify the bulk of students' messages as exploration (41-61%), a smaller fraction as integration (13-33%), and a negligible percentage in the culminating phase, resolution (1-9%) (see Table 2). It is important to note that these percentages are overstated. The conventional process for coding segments of students' contributions involves, first, a gross classification of messages into one of the three presences (social, teaching, and cognitive). Thus, the percentages reported above reflect the proportion of triggering events, exploration, integration, and resolution within the subset of messages that have already been classified as cognitive presence, not the entire set of messages that comprise a conference. Researchers typically report that less than half of the messages in an online forum are classified as cognitive presence (Kanuka, Rourke, & Laflamme, 2007).

Garrison positions this integrated, multi-phase process as the heart of the learning process in CoI. Teaching presence and social presence are subsidiary means to support and promote cognitive presence. Moreover, he explains that a CoI emerges only when all three presences occur. Thus, the lack of cognitive presence is a logical explanation for the absence of deep and meaningful learning in CoI.

Attempts to construct models of students' communicative activity in online forums, whether idealized models such as Garrison et al.'s (2001) or empirical models (e.g., Gunawardena et al., 1997; Kanuka & Anderson, 1998), have been frustrating (Fahy et al., 2000; Henri, 1992; Hillman, 1999; Jonassen & Kwon, 2000; Marttunen, 1997; Murphy, 2000; Salmon, 2000; Zhu, 1996). As we noted in a previous study:

Some models have been used more often than others by researchers in empirical studies; however, each time they are used, they are criticized, modified substantially, and often abandoned. The deployment of these conceptual frameworks across various authentic settings reveals the difficulty in modeling constructs, processes, outcomes, and relationships in teaching and learning online. (Rourke, 2005).

Garrison et al.'s model has not overcome these difficulties.

In the previous sections, we critiqued the CoI as a research program and as a model of e-learning in order to account for the gap between the claims of the framework and the empirical data that have amassed over the last several years. In the next section, we offer one final critique, that of the CoI as a framework for engendering deep and meaningful learning.

Table 2. Percentage of communicative activity in each phase of cognitive presence

Study | TE | E | I | R
Vaughan, N., & Garrison, D. R. (2005). Creating cognitive presence in a blended faculty development community. Internet and Higher Education, 8, 1-12. | 8 | 61 | 16 | 1
Stein, D. et al. (2007). Creating shared understanding through chats in a community of inquiry. Internet and Higher Education, 10(2), 103-115. | 16 | 52 | 28 | 4
Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1). | 8 | 42 | 13 | 4
Schrire, S. (2004). Interaction and cognition in asynchronous computer conferencing. Instructional Science, 32(6), 475-502. | 14 | 41 | 33 | 9
Kanuka, H., Rourke, L., & Laflamme, E. (2007). The influence of instructional methods on the quality of online discussion. British Journal of Educational Technology, 38(2), 260-271. | 11 | 53 | 26 | 10
McKlin, T., Harmon, S.W., Evans, W., & Jones, M.G. (2001). Cognitive presence in web-based learning: A content analysis of students' online discussions. American Journal of Distance Education, 15(1), 7-23. | 3 | 39 | 9 | 1
Fahy, P. (2002). Assessing critical thinking processes in a computer conference. Retrieved April 19, 2008, from http://cde.athabascau.ca/softeval/reports/mag4.pdf | 13 | 63 | 19 | 6
Mean | 10.42 | 50.14 | 20.57 | 5

TE = Triggering Event
E = Exploration
I = Integration
R = Resolution

Weakness of CoI as a Means to Engender Deep and Meaningful Learning

If we attribute the current discourses on meaningful and deep learning to Ausubel (1961) and Marton and Saljo (1976), then they are roughly five and three decades old, respectively. During the ensuing years, much has been learned about how to support these outcomes.

In the following paragraphs, we explain the three main processes for supporting deep learning: 1) assessing students in appropriate ways, 2) reducing content, and 3) confronting students' misconceptions. Then we examine the empirical literature on the existence of these mechanisms in CoIs.

Assessment. Throughout empirical studies of deep and surface learning, three principles emerge. One concerns student assessment. Biggs (1999) stresses the role of assessment in learning by suggesting that students often replace the official course syllabus with their own unofficial one after analyzing the assessment events in their courses.

Based on this observation, Ramsden (2003) and Prosser and Trigwell (2001) argue that assessment tasks must focus on higher-order processes. In this way, students will take deep approaches to learning and develop meaningful knowledge. Forms of student assessment promoted by Biggs (1999), Ramsden, and Prosser and Trigwell include performance, authentic, criterion-based, self, and peer assessment.

Unfortunately, student activity in online forums is rarely assessed by any means other than through the provision of marks for participation. Conventionally, five to twenty-five percent of a course's total grade is reserved for participation in online forums, and students are awarded these marks based on the extent to which they fulfill the quantitative requirements for participation (e.g., posting a message twice a week) (Masters & Oberprieler, 2003).

Such a process for distributing marks seems more like a way to coerce students to engage in an activity than a form of assessment. ITEA (2003) defines student assessment as “the systematic, multi-step process of collecting evidence of student learning, understanding, and ability and using that information to inform instruction and provide feedback to the learner” (p. 22). It is difficult to reconcile this definition with participation grades.

Reducing content. The second suggestion for moving students toward deep approaches to learning and meaningful learning outcomes is to reduce content. In a prominent early study, Entwistle and Ramsden (1983) identified a positive correlation between workload and non-academic, reproducing orientations to course work (i.e., surface learning). Entwistle and Tait (1990) demonstrated this relationship again in a subsequent study, as have several other researchers (Richardson, 2005; Jackling, 2005; Reid, Duvall, & Evans, 2005). “The inevitable result of too much work,” notes Ramsden (2003), “is that students complete their courses with sketchy and confused knowledge” (p. 132). He explains that the overwhelming demands of the course leave little time for students to think about and integrate the content.

Instructors and students involved in online learning often describe the time and effort it demands as onerous. And, as Ramsden (2003) points out, such overwhelming demands interfere with students' ability to reflect on the information they encounter. In a previous case study of computer conferencing, a student told us he found others' comments intriguing but did not acknowledge them because, “I haven't had a chance. In order to make a good response, I'd have to go back and read it again. Eighty percent of the things I'd like to say, I don't have time to actually post” (Rourke & Kanuka, 2007, p. 118).

Confronting misconceptions. The third suggestion for moving students toward deep and meaningful approaches and outcomes is to identify and confront their misconceptions (Biggs, 1989; Ramsden, 1987, 1988). Biggs relates the results of studies in which students were able to describe photosynthesis in accurate detail yet unable to see the difference in how plants and animals gain food, and in which university-level physics students predicted that heavy objects will fall faster than light ones because the former have a bigger force. His explanation is that the students' misconceptions have gone unchallenged throughout their education.

Garrison claims that CoI are especially valuable for exposing and challenging students' misconceptions. A CoI requires students both to articulate their understandings of course topics and to critique the understandings of others. However, the preponderance of data indicates that the extent to which students engage in these dialogical activities is grossly overestimated. In a recent case study of graduate-level humanities students engaged in online discussion, we found few instances in which students challenged each others' opinions. Our results were typical. A review of the literature indicates that the percentage of student communicative activity that is classified as critical discourse, mutual critique, or argumentation, in whatever way it might be operationalized, ranges from 5 to 22% (Davis & Rouzie, 2002; De Laat, 2001; Duphorne & Gunawardena, 2005; Garrison et al., 2001; Gunawardena et al., 1997; Hara, Bonk, & Angeli, 2002; Jones, Scanlon, & Blake, 1998; Kanuka & Anderson, 1998; Marttunen & Laurinen, 2002; McLaughlin & Luca, 2000).

In the preceding paragraphs, we argued that the CoI fails as a model for achieving deep and meaningful learning because the procedures for achieving those outcomes do not materialize. This argument was part of a larger one criticizing the CoI as a program of research and as a model of e-learning.

Conclusion

The purpose of this paper was to investigate learning in CoI. At this preliminary stage, we did so by synthesizing data from existing empirical studies stemming from Garrison et al.'s (2000, 2001) germinal articles. The CoI framework has shaped many studies of e-learning in higher education; however, few authors have demonstrated an interest in assessing learning in communities of inquiry. Over 200 reports cite the framework, but only five measure student learning. In these studies, learning was uniformly operationalized as self-reports elicited through surveys. All but two of the surveys evoked reports of perceived learning with one item. As a measure of learning, self-reports are dubious in the best of circumstances, and their application in the CoI studies does not embody the best of circumstances. Moreover, as measures of deep and meaningful learning, the type of learning Garrison et al. associate with CoI, the techniques are exceedingly deficient. It is difficult to derive any conclusions about learning in communities of inquiry from five studies with methodological weaknesses.

Bracketing the methodological deficiencies of student assessment in the CoI literature, our review indicates that deep and meaningful learning does not arise in CoI. A synthesis of the self-report data produces the following picture: Students believe that they learn a lot in CoI, but the type of learning is lower-level, factual knowledge (we hesitate to characterize the outcomes as surface learning). Respondents believe that the processes and activities through which they gain this knowledge are didactic instruction and independent work.

Our main suggestion applies to subsequent research studies. Briefly, we encourage researchers to conduct more substantial studies of learning in CoI. If we can identify situations in which students are and are not engaged in deep and meaningful learning, we can make evidence-based suggestions about the types and quantities of teaching presence, social presence, and cognitive presence that are related to learning. If not, any suggestions are untethered from evidence. Results from studies of deep and meaningful learning will provide the best foundation from which to construct conceptual frameworks and prescriptions about e-learning and blended learning. Conceptual frameworks of social presence, teaching presence, and cognitive presence (and the corollary prescriptions for instructional designers) that are unconnected to empirical evidence of deep and meaningful learning are, on the face of it, groundless.

References

Akyol, Z., & Garrison, D. R. (2008, April). Community of inquiry: The role of time. Paper presented at the First International Conference of the new Canadian Network for Innovation in Education (CNIE), Banff, AB.

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom's Taxonomy of educational objectives: Complete edition. New York: Longman.

Anderson, T., Rourke, L., Garrison, D. R., & Archer, W. (2001). Assessing teaching presence in a computer conferencing environment. Journal of Asynchronous Learning Networks, 5(2).

Arbaugh, B. (2005a). How much does subject matter matter? A study of disciplinary effects in online MBA courses. Academy of Management Learning and Education, 4(1), 57-73.

Arbaugh, B. (2005b). Is there an optimal design for online MBA courses? Academy of Management Learning and Education, 4(2), 135-149.

Arbaugh, B. (2004). Learning to learn online: A study of perceptual changes between multiple online course experiences. Internet and Higher Education, 7(3), 169-182.

Arbaugh, J., & Benbunan-Fich, R., (2006). Separating the effects of knowledge construction and group collaboration in learning outcomes of web-based courses. Information and Management, 43(6), 778-793.

Arbaugh, J., & Hwang, A. (2006). Does teaching presence exist in online MBA courses? The Internet and Higher Education, 9(1), 9-21.

Ausubel, D. (1961). In defense of verbal learning. Educational Theory, 11, 15-25.

Benbunan-Fich, R., Hiltz, R., & Turoff, M. (2003). A comparative content analysis of face-to-face vs. asynchronous group decision making. Decision Support Systems, 34(4), 457-469.

Biggs, J. (2002). A revision of the Study Process Questionnaire. Higher Education Research and Development, 21(1), 73-92.

Biggs, J. (1999). Teaching for quality learning at university. Buckingham: Society for Research into Higher Education and Open University Press.

Biggs, J. (1987). Student approaches to learning and studying. Melbourne, Australia: Australian Council for Educational Research.

Biggs, J., & Collis, K. (1982). Evaluating the quality of learning: The SOLO taxonomy. New York: Academic Press.

Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: Handbook 1, cognitive domain. London: Longman.

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College Press.

Cuban, L. (2001). Oversold and underused: Computers in the classroom. Cambridge, MA: Harvard University Press.

Educational Resource Information Centre. (2001). Publication types. [Online] Available: http://ericir.syr.edu/Eric/Help/pubtypes.shtml

Entwistle, N. J., & Ramsden, P. (1983). Understanding student learning. London: Croom Helm.

Fahy, P. (2002). Assessing critical thinking processes in a computer conference. Retrieved April 19, 2008, from http://cde.athabascau.ca/softeval/reports/mag4.pdf

Finegold, A., & Cooke, L. (2006). Exploring the attitudes, experiences and dynamics of interaction in online groups. Internet and Higher Education, 9, 201-215.

Gall, M., Gall, J., & Borg, W. (2006). Educational research: An introduction. New York: Allyn & Bacon.

Garrison, D. R. (2003). Cognitive presence for effective asynchronous online learning: The role of reflective inquiry, self-direction and metacognition. In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Practice and direction. Volume 4 in the Sloan C Series. Needham, MA: The Sloan Consortium.

Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues and future directions. The Internet and Higher Education, 10(3).

Garrison, D. R., & Cleveland-Innes, M. (2005). Facilitating cognitive presence in online learning: Interaction is not enough. American Journal of Distance Education, 19(3), 133-148.

Garrison, D. R., Cleveland-Innes, M., Koole, M., & Kappelman, J. (2006). Revisiting methodological issues in the analysis of transcripts: Negotiated coding and reliability. The Internet and Higher Education, 9(1), 1-8.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1).

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. Internet and Higher Education, 2(2-3), 87-105.

Gilbert, P., & Dabbagh, N. (2005). How to structure online discussions for meaningful discourse: A case study. British Journal of Educational Technology, 36(1), 5-18.

Gonyea, R. (2005). Self-reported data in institutional research: Review and recommendations. New Directions for Institutional Research, 127, 73-89.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.

Hay, D. (2007). Using concept mapping to measure deep, surface and non-learning outcomes. Studies in Higher Education, 32(1), 39-57.

Ice, P., Arbaugh, B., Diaz, S., Garrison, D. R., Richardson, J., Shea, P., & Swan, K. (2007, November). Community of inquiry framework: Validation and instrument development. Paper presented at the 13th Annual Sloan-C International Conference on Online Learning, Orlando, FL.

Jackling, B. (2005). Perceptions of the learning context and learning approaches: Implications for quality learning outcomes in accounting. Accounting Education, 14(3), 271-291.

Kanuka, H., Rourke, L., & Laflamme, E. (2007). The influence of instructional methods on the quality of online discussion. British Journal of Educational Technology, 38(2), 260-271.

Kember, D. (2000). Action learning and action research: Improving the quality of teaching and learning. London: Kogan Page.

Lomicka, L., & Lord, G. (2007). Social presence in virtual communities of foreign language (FL) teachers. System, 35, 208-228.

Marton, F., & Säljö, R. (1976). On qualitative differences in learning. I. Outcome and process. British Journal of Educational Psychology, 46, 4-11.

Masters, K., & Oberprieler, G. (2003). Encouraging equitable online participation through curriculum articulation. Computers & Education, 42(4), 319-332.

McKlin, T., Harmon, S., Evans, W., & Jones, M. (2002). Cognitive presence in web-learning: A content analysis of students' online discussions. Retrieved April 16, 2008, from http://it.coe.uga.edu/itforum/paper60/paper60.htm

Meyer, K. (2004). Evaluating online discussions: Four different frames of analysis. Journal of Asynchronous Learning Networks, 8(2), 101-114.

Meyer, K. (2003). Face-to-face versus threaded discussions: The role of time and higher-order thinking. Journal of Asynchronous Learning Networks, 7(3), 55-65.

Nippard, E., & Murphy, E. (2007). Social presence in the web-based synchronous secondary classroom. Canadian Journal of Learning and Technology, 33(1).

Novak, J. (1998). Learning, creating, and using knowledge: Concept maps as facilitative tools in schools and corporations. Mahwah, NJ: Erlbaum.

Novak, J. D., & Symington, D. (1982). Concept mapping for curriculum development. The Victorian Institute of Educational Research, 48, 3-11.

Ogawa, R. T., & Malen, B. (1991). Towards rigor in reviews of multivocal literatures: Applying the exploratory case study method. Review of Educational Research, 61(3), 265-286.

Ramsden, P. (1987). Improving teaching and learning in higher education: The case for a relational perspective. Studies in Higher Education, 12, 275-286.

Ramsden, P. (1988). Studying learning: improving teaching. In P. Ramsden (Ed), Improving learning: New perspectives. London: Kogan Page.

Reid, W., Duvall, E., & Evans, P. (2005). Can we influence medical students' approaches to learning? Medical Teacher, 27(5), 401-407.

Postman, N. (1993). Technopoly: The surrender of culture to technology. New York: Vintage Books.

Postman, N. (2003). Questioning media. In M. S. Pittinsky (Ed.), The wired tower: Perspectives on the impact of the internet on higher education (pp. 181-200). Upper Saddle River, NJ: Prentice Hall.

Richardson, J. (2005). Students' perceptions of academic quality and approaches to studying in distance education. British Educational Research Journal, 31(1), 7-27.

Richardson, J., & Swan, K. (2003). Examining social presence in online courses in relation to students' perceived learning and satisfaction. Journal of Asynchronous Learning Networks, 7(1). Available at: http://sloan-c.org/publications/jaln/v7n1/pdf/v7n1_richardson.pdf

Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Methodological issues in the content analysis of computer conference transcripts. International Journal of Artificial Intelligence in Education, 12.

Rourke, L., Anderson, T. Garrison, D. R., & Archer, W. (1999). Assessing social presence in asynchronous, text-based computer conferencing. Journal of Distance Education, 14(3), 51-70.

Ruiz-Primo, M. (2000). On the use of concept maps as an assessment tool in science: What we have learned so far. Revista Electrónica de Investigación Educativa. Retrieved April 15, 2008, from http://redie.ens.uabc.mx/vol2no1/contenido-ruizpri.html

Schrire, S. (2006). Knowledge building in asynchronous discussion groups: Going beyond quantitative analysis. Computers & Education, 46(1), 49-70.

Shea, P., Pickett, A., & Pelz, W. (2003). A follow-up investigation of teaching presence in the SUNY Learning Network. Journal of Asynchronous Learning Networks, 7(2).

Shea, P., Li, C. S., & Pickett, A. (2006). A study of teaching presence and student sense of learning community in fully online and web-enhanced college courses. The Internet and Higher Education, 9(3), 175-190.

Stein, D., & Wanstreet, C. (2003). Role of social presence, choice of online or face-to-face group format, and satisfaction with perceived knowledge gained in a distance learning environment. Paper presented at the Midwest Research to Practice Conference in Adult, Continuing, and Distance Education.

Swan, K., & Shih, L. F. (2005). On the nature and development of social presence in online course discussions. Journal of Asynchronous Learning Networks, 9(3), 115-136.

Vaughan, N. D. (2004). Technology in support of faculty learning communities. In M. D. Cox & L. Richlin (Eds.), Building faculty learning communities: New directions for teaching and learning, No. 97 (pp. 101-109). San Francisco, CA: Jossey-Bass.

Vaughan, N., & Garrison, D. R. (2005). Creating cognitive presence in a blended faculty development community. Internet and Higher Education, 8, 1-12.

Liam Rourke: lrourke@ucalgary.ca

Heather Kanuka: heather.kanuka@ualberta.ca