Scriptapedia/Empirically Testing Scripts

In scripts, the section "Evaluation Criteria" indicates "how would someone using this script for the first time know if they have done the script correctly" (Hovmand, Rouwette, Andersen, Richardson, Calhoun, Rux & Hower, 2011). There are currently many ways to evaluate the outcomes of a script overall, from creating a list of variables to reaching group consensus.

In addition, scripts can be tested in units (e.g. creating a Causal Loop Diagram as a group versus individually). Based on Creswell & Miller (2000) and Andersen, Richardson & Vennix (1997), here are some ways in which scripts and the use of scripts can be tested:

A) Evaluation of Process Interactions

I) Consensus (Within group)

II) Communication (Level of participation within a heterogeneous group)

B) Evaluation of Artefact

I) Diversity (Is the description thick? Is it complex?)

II) Clarity (Is the implicit made explicit?)

III) Validity (Externally/internally validated? Triangulation? Is it refutable?)

IV) Reliability (Replicable?)

C) Evaluation of Effects on Participants

I) Effects of framing (Have the views, attitudes, or feelings about the problem shifted? Were there new insights? Were there endogenous factors that were surprising or initially thought of as exogenous? Do the respondents or participants understand loops?)

II) Satisfaction

III) Efficiency (time spent)

IV) Commitment (Empowerment, subsequent action)

D) Overall

I) Is it cumulative?

II) Were there unexpected results?

Andersen, D. F., Richardson, G. P., & Vennix, J. A. M. (1997). Group model building: Adding more science to the craft. System Dynamics Review, 13(2), 187–201.

Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory Into Practice, 39(3), 124–130.

Hovmand, P., Rouwette, E. A. J. A., Andersen, D. F., Richardson, G. P., Calhoun, A., Rux, K., & Hower, T. (2011). Scriptapedia: a handbook of scripts for developing structured group model building sessions.