Critique of various articles on Online Instruction – part 1

Posted on Mar 31, 2010

Below are some thoughts about some of the articles in the compilation offered in Module 7 of IDE 632: Analysis of Web-based Developmental Schemes – part 1

“Blueprint to Develop a Great Web Site” – T.H.E. Journal, March 2001

The author touches upon the salient points of good web design well enough. However, there are a few points worth noting since the article is over 9 years old.

  • First, web design need not be beholden to the old rule of 640 x 480 pixel dimensions. It is presumed at this time that enough of the web population is using higher-resolution monitors. This is significant from a design standpoint, since 1024 x 768 pixel dimensions offer more real estate for content.
  • The author does not refer to the importance of making websites compatible with mobile devices, likely because it was not a feasible consideration in 2001.
  • There is no mention of blog design, again presumably because blogs were not as pervasive as they are today. This factor matters because of a key distinction between blogs and traditional websites: the blog introduces interactivity with the visitor. Thus, the visitor needs to operate within the conventional literacy of blog activity, such as understanding CAPTCHA, registering an account, and the etiquette of a properly composed response to a post. Given the popularity of blogs both as a personal communication medium and as a vehicle for conducting instruction, I suggest that the blueprint offered here be expanded significantly.

One of my favorite examples of a well-designed and appropriately formatted blog is Alumni Futures. Mr. Shaindlin, its owner and author, has presented an uncluttered environment for reading, and has mastered the art of composing succinct, informative articles using formatting devices (paragraphing, indentation, boldface, links) in a way that makes them both scannable and readable. Further, he has developed a sophisticated blog “voice” that is both informative and inquisitive, somewhere between journalism and professional diary. His work has served as a premier example for me as I develop my own voice. (I might never be accused of being succinct!) I might also mention that I have known Mr. Shaindlin personally since 1970.

“What Is Web 2.0? Design Patterns and Business Models for the Next Generation of Software” – O’Reilly Media, September 2005

Of central interest to me in this article is the premise of Collective Intelligence. I believe that ID professionals ought to explore its meaning carefully to prevent any misunderstanding about what is meant by this term. I agree with and advocate the use of data to improve the user experience. In a highly context-specific environment such as a dedicated website, this is a good thing. But let us not confuse “intelligence” with broader meanings of “virtue” or “truth”. Intelligence is a subjective interpretation of what is deemed valuable to the society it serves (Jane Braaten, “Towards a Feminist Reassessment of Intellectual Virtue,” Hypatia, Vol. 5, No. 3, 1990). In colloquial terms, “intelligence” is often erroneously equated with being smart, correct, educated, having wisdom, having valid experience in an area of knowledge or skill, and so on.

I feel strongly that collective opinion based on crowdsourced data collection amounts to nothing more than a statistical point of interest. In context, this may be interpreted as “intelligence” in the same way a spy may collect “intelligence” information on what enemies are doing or thinking. But this information does not constitute, by any objective measure, evidence of intellectual virtue, rational thinking, or consideration of viable alternatives. In a “data happy” world, we are inclined to respond reflexively to patterns and trends in information – the so-called emergence phenomenon mentioned by Stephen Downes and Connectivists in general – rather than to the inherent validity of the basis for those trends. For example, how many of us cannot help but notice the comparison of “thumbs up/thumbs down” counts beside certain public comments in current online news articles? Is it not irresistible to generalize about the collective mindset of Americans based on whether 2,962 people “agree” that Obama is a Muslim and a radical Socialist versus 426 who do not?

In essence, I believe there is pragmatic use for crowdsourcing tools that may benefit the development of ISD, such as gauging preferences for certain methods of instruction or for the presentation of information in an online interface. However, I am cynical about “the wisdom of crowds” – detached (anonymous, unaccountable) online crowds in particular – and I resent the notion that anyone would concede the reliability and validity of information simply because crowdsourcing tools are powerful. I propose, instead, that Collective Intelligence be renamed Collective Sentiment, Collective Opinion, or Collective Collection. In other words, intelligence is a byproduct of collecting data, but its “intellectual virtue” should be measured by a more comprehensive set of variables that account for, among other things, the grounds upon which those opinions rest.

Further, we should not equate the function of Collective Intelligence in developing a business model, or in discerning audience preferences, with its function in education. People can be persuasive, influential – and flat-out wrong, or unable to defend their position other than to point to statistics of “what other people think”. (This has been an annoying aspect of polls related to health care reform, where most people seem to express opposition to the bill because it “sounds Socialist”, yet agree with several of the separate principles that comprise it. The poll data is meaningless beyond describing what people think, not what they actually believe.)

Another point of contention is a threshold we may soon reach, where many people attempting to do research will find themselves referring to the exact same online articles or “online sentiments” because those are the resources “crowdsourced” into highest relevance. This may be acceptable for an elementary project on the solar system, but not for aspects of higher education where the student is expected to synthesize research and derive an opinion of their own. I am intrigued by the absence or shallowness of risk assessment in the comments made on this article in its present form (March 31, 2010), other than the author’s rather cursory allusion to Collective Intelligence fostering an “echo chamber effect.”

This phenomenon appears to be addressed in John Seely Brown and Richard P. Adler’s article “Minds on Fire: Open Education, the Long Tail, and Learning 2.0,” EDUCAUSE Review, vol. 43, no. 1 (January/February 2008): 16–32.

The authors advocate the forming of practicums, or “learning to be,” in realistic learning contexts similar to the journeyman experience of traditional crafts. These Learning 2.0 environments are intended to benefit both from the collective building of knowledge through social learning and from the mentorship guidance needed to lead the course of learning towards “intellectual virtue”:

We need to construct shared, distributed, reflective practicums in which experiences are collected, vetted, clustered, commented on, and tried out in new contexts. One might call this “learning about learning,” a bootstrapping operation in which educators, along with students, are learning among and between themselves. This can become a living or dynamic infrastructure—itself a reflective practicum (p. 28).

I am more inclined to operate as an ID professional under these circumstances.

More to come…
