On Formative Evaluation issues in the ISD model

Posted on Mar 3, 2010

This week’s readings discuss various issues with formative evaluation, the research on it, and its application in real settings.

Of most interest to me was the article by Dennis D. Gooler, “Formative Evaluation Strategies for Major Instructional Development Projects.” Although the article pertains to large-scale instructional programs, some of its general sentiments are worthwhile regardless of scale. His question about audience – “Who is the audience for formative evaluation?” – inspires some thought. If the interface in an online learning class is badly designed and affects student output, what would be the strategic starting point?

In a situation where a distance learning program produces lower-than-expected results, who are the constituents? The university? The course designers? The students? In my ISD model studies, I would like to examine the user interface to determine whether any aspects of the presentation of the learning material may have affected the integrity of the messaging system. If the interface was flawed in a way that produced inadequate student output (I don’t know how that would be provable – this points to Gooler’s “Acceptable Evidence” issue), what would be the strategy for formative evaluation? Revise the content, or instruct the student in the proper response to the learning material?

I guess the bigger issue, from the viewpoint of my intended ISD model, is this: What is the framework for the relationship between the presentation of instructional material and the digital literacy level of the student? For example, here is a list of questions that, to my mind, precede the implementation of instruction via DL:

  1. Is it reasonable to expect that the student of a DL program has the appropriate level of digital literacy to function within the DL environment by virtue of their willingness to be immersed in it? (I think not – I believe that many new DL students have no idea what they are getting into, and they learn the language of it as they go, like a missionary, stammering in some pidgin English.)
  2. Is it reasonable to expect that an ID professional can design a DL program that takes into consideration various levels of digital literacy and establishes a prerequisite entry point for the program, or is it an undue burden on the institution to pre-qualify students? (We don’t force people to pass a parental licensing test to become parents – we just let them go make babies no matter their dysfunction. Has anyone ever been denied access to an online class because of poor digital literacy skills, or the absence of a certification of some sort?)
  3. Is there a need to discuss with students what McLuhan describes as the “background effect” of a given medium on the context of the instruction? In other words, do we need to tell learners that, although the blog software offers a tiny little space for “comments”, what is expected from them in that space is a well-thought-out reflection? What exactly is a reflection, anyway? Is it like an abbreviated essay? I call for further discussion about the relationship between speech and space, and about definitions that describe those relationships in the DL environment, for the purpose of qualifying learners’ expectations for output.

To Gooler’s point, what recourse does a learner have if he or she finds the experience of a DL program unsatisfactory? If a formative evaluation of the program revealed that the interface was vague, disorienting, or poorly designed by agreed-upon standards, what happens next? We’re not talking here about an argument over a textbook, or about whether a teacher performed poorly. Over the course of a semester, a student might be able to address or mitigate those kinds of grievances. But what dialog does a student have about the online learning interface? It is the single point of failure (SPOF) in an online course, and it appears to me to be mostly out of one’s control to address – so it’s take it or leave it. In a hybrid class, this might not seem terribly critical. In a purely online class, though, it is paramount.

Finally, does the prevailing use of the Internet for various social media applications affect the quality of written student output in online learning? I ask this with respect to the IBM Evaluation Levels guide, which defines the Level 1 standard as student reactions to learning events. If a person has been socialized to function in an environment where short, low-level communication is the normal exchange, will he or she react reliably to the quality of an online learning environment where the expectation is for deeper, crafted written responses? And how should a formative evaluation weigh it if such a person reacts negatively to the program?
