Methodology Papers

The following articles, which focus primarily on Community of Inquiry methodology, are listed in descending publication order (most recent first). Where possible, links to full-text versions are included. If full-text copies are not freely available, links to the pertinent journals are included.

Arbaugh, J.B., Cleveland-Innes, M., Diaz, S.R., Garrison, D.R., Ice, P., Richardson, J.C., & Swan, K.P. (2008). Developing a community of inquiry instrument: Testing a measure of the Community of Inquiry framework using a multi-institutional sample. The Internet and Higher Education, 11(3-4), 133-136.

This article reports on the multi-institutional development and validation of an instrument that attempts to operationalize Garrison, Anderson and Archer’s Community of Inquiry (CoI) framework (2000). The results of the study suggest that the instrument is a valid, reliable, and efficient measure of the dimensions of social presence and cognitive presence, thereby providing additional support for the validity of the CoI as a framework for constructing effective online learning environments. While factor analysis supported the idea of teaching presence as a construct, it also suggested that the construct consisted of two factors—one related to course design and organization and the other related to instructor behavior during the course. The article concludes with a discussion of potential implications of further refinement of the CoI measures for researchers, designers, administrators, and instructors.
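
To make the factor-analytic step concrete, the following is a minimal Python sketch of the kind of exploratory factor analysis used to validate survey instruments of this sort. It is not the authors' actual procedure: the respondent data are randomly generated, and the choice of scikit-learn's FactorAnalysis and a three-factor solution (mirroring the three presences) are illustrative assumptions.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical Likert-scale responses: 200 students x 34 items, scored 1-5.
responses = rng.integers(1, 6, size=(200, 34)).astype(float)

# Fit a three-factor model, one factor per hypothesized presence.
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(responses)

# Items that load strongly on the same factor are interpreted as measuring
# the same underlying presence; a split construct (as reported for teaching
# presence) shows up as one item set dividing across two factors.
loadings = fa.components_.T  # shape: (items, factors)
for item, row in enumerate(loadings, start=1):
    dominant = int(np.argmax(np.abs(row)))
    print(f"item {item:2d} -> factor {dominant} (loading {row[dominant]:+.2f})")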

Swan, K., Shea, P., Richardson, J., Ice, P., Garrison, D. R., Cleveland-Innes, M., & Arbaugh, J. B. (2008). Validating a measurement tool of presence in online communities of inquiry. E-Mentor, 2(24), 1-12. http://www.e-mentor.edu.pl/_xml/wydania/24/543.pdf

This article examines work related to the development and validation of a measurement tool for the Community of Inquiry (CoI) framework in online settings. The framework consists of three elements: social presence, teaching presence, and cognitive presence, each of which is integral to the instrument. The 34-item instrument, and thus the framework, was tested after being administered at four institutions in the summer of 2007. The article also includes a discussion of implications for the future use of the CoI survey and the CoI framework itself.

De Wever, B., Schellens, T., Valcke, M., & Van Keer, H. (2006). Content analysis schemes to analyze transcripts of online asynchronous discussion groups: A review. Computers & Education, 46(1), 6-28.

Research in the field of Computer Supported Collaborative Learning (CSCL) is based on a wide variety of methodologies. In this paper, the authors focus on content analysis, a technique often used to analyze transcripts of asynchronous, computer-mediated discussion groups in formal educational settings. Although this research technique is widely used, standards are not yet established. The applied instruments reflect a wide variety of approaches and differ in their level of detail and the type of analysis categories used. Further differences relate to their diverse theoretical bases, the amount of information reported about validity and reliability, and the choice of the unit of analysis.

This article presents an overview of different content analysis instruments, building on a sample of models commonly used in the CSCL literature. The discussion of 15 instruments results in a number of critical conclusions. There are questions about the coherence between each instrument's theoretical base and its operational translation of that theory. The instruments are hardly ever compared or contrasted with one another, and as a consequence the empirical base for their validity is limited. The analysis is especially critical when it comes to the issue of reliability. The authors put forward the need to improve the theoretical and empirical base of the existing instruments in order to raise the overall quality of CSCL research.

Garrison, D. R., Cleveland-Innes, M., Koole, M., & Kappelman, J. (2006). Revisiting methodological issues in the analysis of transcripts: Negotiated coding and reliability. The Internet and Higher Education, 9(1), 1-8.

Transcript analysis is an important methodology to study asynchronous online educational discourse. The purpose of this study is to revisit reliability and validity issues associated with transcript analysis. The goal is to provide researchers with guidance in coding transcripts. For validity reasons, it is suggested that the first step is to select a sound theoretical model and coding scheme. Particular focus is placed on exploring the advantages of the option of a negotiated approach to coding the transcript. It is concluded that researchers need to consider the advantages of negotiation when coders and researchers are not familiar with the coding scheme.
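
As a rough illustration of the reliability check that precedes negotiation, the following Python sketch computes percent agreement and Cohen's kappa for two hypothetical coders and flags the disagreements a negotiated approach would resolve through discussion. The coding categories and data are invented for demonstration, and the use of scikit-learn's cohen_kappa_score is an assumption of convenience, not the authors' tooling.

from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned to ten transcript messages by two independent
# coders (T = triggering event, E = exploration, I = integration).
coder_a = ["T", "E", "E", "I", "T", "E", "I", "I", "E", "T"]
coder_b = ["T", "E", "I", "I", "T", "E", "E", "I", "E", "T"]

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"percent agreement: {agreement:.0%}, Cohen's kappa: {kappa:.2f}")

# In a negotiated approach, the coders discuss each disagreement and settle
# on a single code, rather than only reporting the reliability statistic.
to_negotiate = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
print("messages to negotiate:", to_negotiate)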

Heckman, R., & Annabi, H. (2003). A content analytic comparison of FTF and ALN case-study discussions. Paper presented at the 36th Hawaii International Conference on System Sciences.

While much research has shown that asynchronous learning networks (ALNs) can produce learning equivalent to that in face-to-face (FTF) classrooms, there has been little empirical research that explicitly and rigorously explores similarities and differences between the learning processes that occur in ALN and FTF activities. Transcripts from eight case-study discussions (four FTF, four ALN) were content analyzed. The study used a content analytic framework derived primarily from previous work by Anderson, Archer, Garrison, and Rourke, who developed a model that studies cognitive, social, and teaching processes in ALN discussions. Based on the work of Aviv, the scheme also considers characteristics of the discourse process. The findings provide evidence that ALNs generate high levels of cognitive activity, at least equal to, and in some cases superior to, the cognitive processes in the FTF classroom.

Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Methodological issues in the content analysis of computer conference transcripts. International Journal of Artificial Intelligence in Education, 12(1), 8-22.

This paper discusses the potential and the methodological challenges of analyzing computer conference transcripts using quantitative content analysis. The paper is divided into six sections, which discuss criteria for content analysis, research designs, types of content, units of analysis, ethical issues, and software to aid analysis. The discussion is supported with a survey of 19 commonly referenced studies published during the preceding decade. The paper is designed to assist researchers in using content analysis to further the understanding of teaching and learning through computer conferencing.
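
As a small illustration of the quantitative step these studies share, once a transcript has been segmented into units and each unit coded, the frequencies per category are tallied and compared. The following Python sketch does this tally for invented data; the category labels and counts are assumptions for demonstration only.

from collections import Counter

# Hypothetical category assignments for coded transcript units.
coded_units = ["social", "cognitive", "cognitive", "teaching",
               "social", "cognitive", "teaching", "cognitive"]

counts = Counter(coded_units)
total = len(coded_units)
for category, n in counts.most_common():
    print(f"{category:>9}: {n} units ({n / total:.0%})")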