Issues in Digital Technology in Education/Assessing Online Discussions

There has been a push to embrace collaborative learning in education, accompanied by an increasing demand for the integration of digital technology. As a result, the online discussion group has emerged, allowing participants to engage in collaborative learning within a digital environment. The idea of group collaboration as a means to construct knowledge has become a widely accepted approach to pedagogy. Drawing on Vygotsky’s social development theory, the online discussion group provides a space for participants to engage in dialogue and develop a dynamic learning community that enhances and enriches understanding of content (Kayler and Weller, 2007). Two major forms have come to typify online discussion: the chat and the forum. The chat is a synchronous conversation among multiple participants who are online at the same time, so questions and responses can be exchanged in real time. The forum, by contrast, is asynchronous and does not occur in real time; participants are therefore afforded the flexibility and convenience to contribute reflective, thoughtful responses to an online discussion at their leisure (Wall Williams et al., 2001). The convergence of digital technology and collaborative learning has made the online discussion group increasingly popular in education, but questions have surfaced regarding relevant assessment and evaluation of such student interaction.

Advocates describe online discussions as tools that enrich the learning experience and promote higher-order thinking. In her research, Meyer (2003) observed that students involved in a threaded, asynchronous online discussion tend to exhibit a higher level of thinking than may be seen in the classroom, particularly when they contribute comments that are exploratory in nature. Meyer (2004) also identifies the value of the written record that an online discussion produces, which both student and teacher can revisit and analyze at any time for assessment purposes.

Educators engaging their students through online discussion have discovered that successful participation and positive online collaboration must be treated as a major component of course assessment, and that assessment activities must therefore be integrated into the discussion itself (Goodfellow and Lea, 2005). According to researchers Hara, Bonk and Angeli (2000), early assessment of online discussion relied on a framework of content analysis established by F. Henri that categorized the learning process evident in electronic messages. The specific criteria identified in Henri’s framework included “student participation, interaction patterns, social cues, cognitive skills and depth of processing, and metacognitive skills and knowledge” (Hara, Bonk, & Angeli, 2000, p. 121). Highlighted in this framework are general student participation, the social interactions that occur in collaborative learning, and the quality and depth of contributions to the discussion, all of which have served as a general foundation for the further development of assessment frameworks.
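As a rough illustration, Henri’s five dimensions could be recorded as a simple per-message data structure when carrying out such a content analysis. The sketch below is a minimal Python representation; the class name, field names, and example values are assumptions chosen for readability, not part of Henri’s original instrument.

    from dataclasses import dataclass

    @dataclass
    class MessageCoding:
        """One coder's analysis of a single discussion message, loosely
        following the five dimensions of Henri's content-analysis framework.
        Field names are illustrative, not Henri's own terminology."""
        message_id: str
        participation: bool   # does the message contribute to the discussion?
        interaction: str      # e.g. "direct reply", "indirect reference", "independent"
        social_cues: int      # count of social cues (greetings, humour, self-disclosure)
        cognitive_depth: str  # e.g. "surface" vs. "in-depth" processing
        metacognitive: bool   # evidence of reflection on one's own thinking

    # A hypothetical coded message:
    example = MessageCoding(
        message_id="msg-042",
        participation=True,
        interaction="direct reply",
        social_cues=1,
        cognitive_depth="in-depth",
        metacognitive=False,
    )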

Meyer (2004) compared four different frames of assessment in an effort to determine the most appropriate assessment tools for online discussion. The study examined King and Kitchener’s model of reflective judgement, intended to capture students’ reasoning skills; Perry’s framework of intellectual and ethical development; Garrison’s cognitive-processing model, designed for measuring critical thinking; and Bloom’s taxonomy, selected as a familiar method of classifying student contributions. Meyer found a lack of consistency across the frames, leading her to suggest that each framework measures unique qualities of student learning. She indicated that all four frames had something useful to offer and recommended that using a combination of these frames would be more beneficial for assessing online forums.

Although her study was directed towards higher education, Meyer stated that “one can see that the frames could also capture younger students’ thinking among the lower and middle levels of the frameworks. In other words, these four frames may be suitable for a range of student abilities and ages” (Meyer, 2004, p. 111). Since online discussion groups are quickly being adopted at all levels of education, this may be a valuable notion for all educators.

In a different study, Holmes (2005) used Biggs and Collis’ SOLO (Structure of Observed Learning Outcomes) taxonomy to assess a threaded discussion group. According to Holmes, the SOLO taxonomy is typically used to evaluate the complexity and depth of student learning outcomes. Using the taxonomy, responses in an educational discussion are coded according to how their content relates to the assigned task: a response may contain social or unrelated content (prestructural), address a single, simple aspect of the task (unistructural), mention several task aspects without connecting them (multistructural), integrate relevant content coherently (relational), or reconceptualize aspects of the task in a way that demonstrates advanced thinking (extended abstract) (Holmes, 2005).
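To make the coding scheme concrete, the following is a minimal sketch of how coded posts might be tallied, assuming the posts have already been hand-coded with SOLO labels (the coding itself requires human judgement). The function name and sample data are illustrative, not Holmes’ actual procedure.

    from collections import Counter

    SOLO_LEVELS = [
        "prestructural", "unistructural", "multistructural",
        "relational", "extended abstract",
    ]

    def solo_distribution(coded_posts):
        """Return each SOLO level's percentage share of the coded posts."""
        counts = Counter(coded_posts)
        total = len(coded_posts)
        return {level: round(100 * counts[level] / total, 1)
                for level in SOLO_LEVELS}

    # Hypothetical hand-coded sample of 20 posts:
    posts = (["relational"] * 6 + ["unistructural"] * 5 +
             ["multistructural"] * 4 + ["prestructural"] * 3 +
             ["extended abstract"] * 2)
    print(solo_distribution(posts))
    # {'prestructural': 15.0, 'unistructural': 25.0, 'multistructural': 20.0,
    #  'relational': 30.0, 'extended abstract': 10.0}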

Holmes (2005) found that the majority of the discussion’s posts were classified as either relational (30%) or unistructural (27%), indicating an interesting dichotomy between relevant knowledge construction and simple, one-dimensional exchanges. However, a separate study by McLoughlin and Panko (2002), which also applied the SOLO taxonomy to online discussion, cautioned that the tool is hierarchical and does not account for the collaborative aspects of knowledge construction.

On the surface, one consistent message has been revealed in this research area: unlike many traditional forms of assessment, there is no single, best-fitting tool recommended for evaluating online discussion. As in most other areas of education, it seems to be up to the discretion of the course designer or instructor to select the assessment tool that best suits the specific evaluation criteria or skills the online activities are intended to develop (McLoughlin and Panko, 2002).

According to Goodfellow and Lea (2005), the dilemma with online discussion assessment tools arises because the very nature of the technology has allowed a unique type of literacy to emerge. They argue that although online discussion is predominantly a written collaborative construction, it should be considered a distinct genre of writing that draws on different literacy practices to form knowledge and understanding. As such, varied and equally distinctive formal assessment must be developed, which poses a significant challenge because the rhetorical complexities of this new form of literacy are not yet well understood by students, teachers, or course designers (Goodfellow and Lea, 2005).

As research continues in this area, researchers appear united in their support of the collaborative qualities of online discussion groups and continue to encourage course designers to give this careful consideration when developing online tasks, making them open-ended, unambiguous and engaging (Holmes, 2005). Although assessment tools continue to be weighed against one another, Hara, Bonk and Angeli (2000) note that every online discussion has the potential to be unique, and so evaluation criteria may have to be identified on a case-by-case basis.


Resources:

Goodfellow, R., & Lea, M. R. (2005). Supporting writing for assessment in online learning. Assessment & Evaluation in Higher Education, 30(3), 261-271.

Hara, N., Bonk, C., & Angeli, C. (2000). Content analysis of online discussion in an applied educational psychology course. Instructional Science, 28, 115-152.

Holmes, K. (2005). Analysis of asynchronous online discussion using the SOLO taxonomy. Australian Journal of Educational & Developmental Psychology, 5, 117-127.

Kayler, M., & Weller, K. (2007). Pedagogy, self-assessment, and online discussion groups. Educational Technology & Society, 10(1), 136-147.

McLoughlin, C., & Panko, M. (2002). Multiple perspectives on the evaluation of online discussion. In P. Barker & S. Rebelsky (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2002 (pp. 1281-1286). Chesapeake, VA: AACE.

Meyer, K. (2003). Face-to-face versus threaded discussions: The role of time and higher-order thinking. Journal of Asynchronous Learning Networks, 7(3), 55-65.

Meyer, K. (2004). Evaluating online discussions: Four different frames of analysis. Journal of Asynchronous Learning Networks, 8(2), 101-114.

Wall Williams, S., Watkins, K., Daley, B., Courtenay, B., Davis, M., & Dymock, D. (2001). Facilitating cross-cultural online discussion groups: Implications for practice. Distance Education, 22(1), 151-167.