Mark Jones, Comments on the SAPTF Draft Report of March 2013 (8 April 2013)

Comments on the SAPTF-Draft-Report-March-2013, as sent to Senator Chris Moyes, Chair, Senate Academic Planning Task Force (SAPTF), on 8 April 2013; slightly revised.

Through no fault of the task force, this Draft Report appeared at a bad time of year for communal assessment.  The Draft is dense and 86 pages long, plus appendices; two and a half weeks after its appearance only one comment has been posted on the SAPTF website; the town hall was attended by about 20 people, if that; and with the best of intentions and efforts I have been able to get only to p. 68 of the Report by April 8, when comments are due to the SAPTF (I therefore submit this response in unfinished form).  Given all this and the number of issues yet needing to be addressed (as explained below), I feel that the Draft needs a longer review period than has been suggested, and that it should not be considered for approval by Senate before September 2013.

The Draft Report represents a massive amount of work on an issue that is controversial, multi-faceted, and continually changing.  The approach as described on pp. 9-10 seems right to me:  it is surely right to assume that online technologies are “here to stay” and to “ensure that what we do, we do well”; to focus “on the student experience”; and to say that “policies should be based on quality… rather than financial benefits.”  Perhaps the most useful thing the Draft has done is to include descriptions of online practices at Queen’s (pp. 27-34); a simple, practical overview of this kind may bring more clarity to the issue than anything else.  The following comments mainly highlight what appear to me as problems, but that is because they are meant to be practical and constructive.  I mean no disrespect to the Task Force, but quite the contrary; I think of William Blake’s proverb, “opposition is true friendship,” as the best motto for scholars.

1.  Specific Substantive Suggestions:

1.1. Recommendations # 1, 2, 9, 11, and 12 are phrased in the indicative (e.g., “Senate acknowledges”) but should be in the subjunctive (e.g., “That Senate acknowledge”).

1.2. Delete Recommendation # 1.  It is basically a tautology (“active learning” is by definition that which “engages students”), but it slips in the word “traditional” to equate the “traditional” with the “passive,” a tendentious and undemonstrated generalization.

1.3.  As explained at the end of section 3.1, below, the conclusion, “We conclude that there is a great deal of merit in promoting the use of online learning at Queen’s because of its proven effectiveness,” would be better phrased as “We conclude that there is merit in facilitating online learning at Queen’s because of its proven potential.”

1.4.  Recommendation 4, “Senate should promote efforts…” should read “Senate should facilitate efforts…”  (see 2.18, below).

1.5.  Delete Recommendation # 9, which rejects a “notion” it would do better to ignore.  The recommendation is merely attitudinal, and attitudes cannot be legislated by Senate approval.  It also comes across as both defensive and ill-founded—e.g., it assumes that there are presumptions in favour of the traditional and against innovation, which is unfounded and somewhat insulting.  A lot of what the Draft holds out elsewhere as “resistance” is not resistance to online per se but to ways it has been handled and promoted on campus and at the provincial level.

1.6.  Rephrase Recommendation #10 so it is parallel with the others, e.g., “Curriculum Committees should be staffed adequately to assure that…”

The advice in the text below this recommendation (p. 67)—basically that “course variants” should not be marked on transcripts—should be clarified (e.g., explain or quote the SCAP recommendation, or just speak directly to the issue without using SCAP to identify it).  But the SCAP recommendation seems to me to be belied by the problem acknowledged in the Draft’s comments on Recommendation 11, i.e., that grading patterns may differ systematically between on-campus courses and their online variants (p. 67).  If there are such differences, this is one reason why, in the interest of transparency, the difference in the mode of the course should be marked on transcripts.

2.  General Comments and Suggestions:

2.1.  The definition of “Blended Course” on p. 13 elides the fact that Queen’s practical definition of “blended” entails reductions in contact hours. At Queen’s, “blended” means that online components have been added to replace and reduce contact hours:  as the Arts and Science webpage puts it, “While blended courses have the same number of student learning hours as traditional lecture courses, the nature of the learning hours is different and fewer contact hours are involved.”[1]  It also means increases in student-faculty ratios (see 2.5, below).

It would be one thing to recommend that this practice be changed, but simply to use “blended” in a way that ignores the reality of this practice will be misleading.  It makes the applicability of external research on “blended” learning dubious, for instance (see 3.1, below).  At the least, the Draft should acknowledge that its definition of “blended” is at odds with Queen’s usage.[2]

2.2.  The use of “Residential Course” to describe all options except online courses (p. 13) implies that online courses are non-residential, when in fact about 85% of our current distance/online registrations are by students on campus (see 2.19 below).   Elsewhere, the Report rightly acknowledges that “Distance learning as a synonym for fully online courses is confusing” (p. 9).  But this usage of “Residential Courses” reinstitutes the same confusion in a different form.

On p. 55, for instance, “residential versus online courses” and “residential and online courses” are presented as alternatives.  But the fact is that Queen’s online courses are chiefly “residential.”  And p. 56 refers to “online courses and other variants of courses that operate outside of Queen’s residential campus.”  The word “other” in this sentence implies that the “online courses” operate “outside,” when they are about 85% on campus.  Could this problem be avoided by using terms such as “traditional” or “in-class” or “classroom” or “face-to-face” rather than “residential”?

2.3. The Draft does well to focus attention on the positive potential of online methods, but it should seek to do so without making contrastive negative generalizations about “traditional” methods, especially “lecture”—not just because the comparisons are invidious, but because the contrastive generalizations themselves are impressionistic and prejudicial.

“Traditional” learning is not necessarily passive, and “passive” learning is not necessarily traditional,[3] yet the Draft occasionally yokes the two rhetorically, as in Recom. 1.

“Lectures” vary by discipline, instructor, class size, etc., so it is illegitimate to refer to “traditional lectures, where students passively listen to an expert speaker and have little opportunity to interact with one another” (pp. 13-14).  There is a wide range of practices in “traditional” courses designated as “lecture,” and many of them involve significant interaction.  In humanities courses, for instance, it is common for lecture sections to be punctuated with discussion or even broken into discussion groups.  Some “lecture” courses function in ways similar to seminars, despite large numbers of students.  Some “lecture” courses falling under the Draft’s definition for “Traditional” courses use electronic discussion fora to ensure that students arrive in class with questions to ask about their reading.  And so on.  So it is unfair to generalize that “traditional lecture-based courses [. . .] could generally be described as the ‘sage on the stage’ delivering 2-3 h of lecture” (p. 15) (the derogatory cliché is, in any case, gratuitously annoying).

2.4.  As the previous point suggests, Academic Planning must respect disciplinary differences.  The Task Force happens to have predominantly math and sciences representation, with one member from Education and none from Humanities, and, as it acknowledges, “much of the research included in this report emanates from the science disciplines.”  Yet the very sentence that acknowledges this goes on to make claims “regardless of the discipline” (p. 14).  Elsewhere, the Draft cites two studies of students learning physics to generalize that “Online technologies have an important place in teaching because of their ability to incorporate active learning” (p. 17).  Should one generalize on this basis about all teaching?  Wouldn’t it be more precise to say “an important place in physics teaching” and leave more general applications to the reader?

2.5.  Hidden Financial Dimensions:  The Draft distinguishes between small and large classes to argue that use of technologies can be particularly helpful or necessary in the latter (e.g., p. 18).  But the class sizes are not an immutable given, and in practice “blending” has been used to increase student-faculty ratios.  For instance, with the blending of PSYC 100 in 2011-12, the number of sections was decreased from 6 to 4 while the total enrolment was increased from 1600 to 1800,[4] thus facilitating a rise in this course’s student-faculty ratio from 267:1 to 450:1, an increase of 69%.  And a recent FAS memo to faculty concerning “course redesign” stipulates that “Each blended course is expected to accommodate an enrolment increase of between 10 – 15%.”[5]  Is there any real distinction between (a) planning to facilitate teaching in large-enrolment lecture sections and (b) planning to facilitate enlarging enrolments?  The Draft does affirm “that the Senate should be focused on ensuring that academic priorities are well served,” and says it will “not address…financial issues directly” (p. 10).  But because of the linkage of these issues, it needs to address them directly to the extent of cautioning that when online is used to facilitate teaching in mega-sections, the apparent promotion of “academic improvement” may in fact serve a cost-saving agenda.

2.6.  In the section “What are the risks of online learning for students?” (pp. 20-22), every expression of concern or “risk” gets a quick counter-objection.  This is rhetorically annoying, and comes across as defensiveness. Worse, it is not even-handed:  the same is not done in the “What are the benefits?” section (pp. 19-20).  I suggest that these local editorial counter-objections be omitted.

2.7.  The section “The importance of current material” (p. 21) is quick to dismiss the objection that “online courses may be more prone to becoming stale,” but it seems to miss the point of the objection when it notes that “CDS spearheads a formal review,” etc.  A regular formal review process is good—but it is necessary precisely because the tendency to crystallize is intrinsic to the technological mode.  Consider:  if you have a lecture course that you repeat year after year, even supposing you have complete notes for it, you still have to deliver it live, word for word, every year; for most instructors, this necessity entails continuous renovation.  But if a course is “in the can,” whether on paper or on disc or in a series of e-prompts, and if the person responsible for its academic content is not even necessarily involved whenever it is re-run, it is much easier for it to go forward without substantial revision.  This contrast in dynamics is intrinsic to the media, and these concerns are therefore valid.

2.8.  The section from World Wide Learning under “benefits…for faculty” (pp. 22-23) includes two points that are hardly benefits for faculty. Point 1, that online “allows part-time instructors with full-time jobs the ability to perform…at their convenience” (p. 22), actually points to the casualization of the professoriate.  So does point 4, that online learning “makes it possible for more people to teach and earn extra income.”  These might be advantages to the employer and to would-be part-time employees, but not to “faculty” as that term is properly understood.  The whole dynamic by which these factors may undermine the employment of full-time, full-range academic faculty goes unnoticed here, though it is briefly treated on p. 24.  I suggest deleting Points 1 and 4 in this section.

2.9.  The section from U Illinois under “risks…for faculty” (pp. 23-24) does not fit under “risks for faculty”; I suggest deleting all four of these points.  Point 1 is about risks posed by resistant faculty, and is really an argument that faculty should accept and cooperate with online.  Point 2, on financial motivations, is a crude and simplistic formulation of this important problem, and like point 1 it is at bottom a pro-online argument.  Points 3 and 4 are about the need for proper training and competence, not about “risks for faculty.”

2.10.  Points 1, 2, and 3 under “risks for faculty” (p. 24) are good, as are the supplementary concerns (bullets, p. 25).  But a major concern gets left out here, and that is the teaching and assessment of writing (see next point).

2.11.  The teaching and assessment of writing.  The 2011 Academic Plan emphasizes the need for education in disciplinary writing skills.  But some of the dynamics intrinsic to online learning pose challenges to both the teaching and the assessment of writing.  Chief among these is the way that online technology facilitates or encourages an increase in student-teacher ratios (see 2.5 above).  Online methods make many aspects of teaching more efficient, encouraging larger student numbers, which can then be handled in many respects through an increased use of “assistants” or “markers” (here the numbers dynamic intersects with dynamics mentioned in points 2 and 3 on p. 24).  But we cannot fully rely upon markers or teaching assistants to teach and assess disciplinary writing; this is therefore a function that needs to be addressed, and at the very least closely supervised, by instructors.  At least in the humanities, where disciplinary writing is one of the chief skills to be learned as well as a major medium for evaluation, this necessity for instructors to read and interact with students’ writing can be the neck of the bottle, imposing the numerical limit for fully effective teaching.  Much more could be said about this issue—e.g., I am aware that there are online courses that claim to teach writing skills[6] and even online writing courses—but at any rate it is a major issue and needs to be addressed in a significant way in our Academic Planning for online learning.

2.12.  The section on “risks of online learning for institutions” (p. 26) begins with the risk of “problems that will further hinder the expansion of online teaching.”  What is the logic?  Read the subject heading:  this section should forthrightly address the risks of online learning for the institutions rather than begin with the assumption that online learning is the good to be achieved.  The last sentence of the following paragraph (p. 26) has the same problem.

2.13.  The section “Scope of online learning initiatives” (p. 26) treats the Chem. course merely as a PR fiasco for the online initiative.  Again, look at the subject heading for this section.  How about considering this case as an illustration of an academic problem, which it was?

2.14.  The section “One size does not fit all” (p. 27) is welcome, but it does not go far enough.  The SAPTF should address the problem of disciplinary differences as affecting more than just hands-on activities or performance disciplines like Music.  See also 2.4 and 2.11.

2.15.  Another question the SAPTF should weigh in on is whether students should be required to take courses online.  As Leila Notash observed in Senate in May 2012, “The ‘mature student’ category for admission at Queen’s University has recently been changed such that it is now part of the ‘student interest’ admission process. That is, students who formerly would have been admitted as mature students must now take a series of online courses (amounting to 4 full courses) before they will be admitted to on-campus courses” (Senate Minutes, May 2012, p. 7; see also Dean MacLean’s response, pp. 7-8).

2.16.  Some of the material on the BLI is more spin than information, and some of it is irrelevant to this Report.  Concerning claims of “success” (p. 42), the success has not been demonstrated (see 3.3 below).  The sections on “Recruitment” and “Advancement” (p. 42) are not relevant to a Report that claims to ground its recommendations on “academic priorities” rather than on “revenue goals” (p. 10).  The section on “Community of pedagogical innovation” seems to be misleadingly worded:  “Monthly…gatherings attract instructors from 15…departments…”  (p. 42). Do representatives from 15 departments really come each month, as this suggests?  Or is the truth that, over many months they can count representatives from 15 departments?  The section “Failures” (p. 42) dismisses opposition to the BLI as due only to “persistent misunderstanding.”  This view is comprehensible, given that it is (like the rest that is cited here from p. 42) quoted from Dean Brenda Ravenscroft, the person in charge of the BLI.  But it is a partial perspective, and as Marx has said, “our opinion of an individual is not based on what he thinks of himself.”  A Senate report should present a more balanced account (e.g., maybe the problems experienced by the BLI are due to its being correctly understood).  Certainly the Draft itself should not dismiss critiques of the BLI unheard (as it does at the bottom of p. 42) as mere “resistance and hostility” and as “lack of collegial support.”  The BLI has fairly earned some distrust and opposition, e.g., through its secret “business case for growing distance enrolments” and through its misrepresentations of research data in support of “blended” learning.[7]

2.17.  On p. 43, it is said that “There are approximately 8000-9000 individual courses being offered annually as online additional qualifications (AQ) for teachers.”  What is the context and what does this mean—9000 in Ontario, in Canada, worldwide?  And what are counted as “courses”?  Just below, it says:  “In addition to these ‘AQ’ courses the faculty is introducing similar options…” (p. 43).  This implies that “these” 9000 courses are offered by Queen’s Faculty of Education—surely that can’t be right?

2.18. The terms “promote” and “promotion” are used in some places where “facilitate” or “facilitation” would be more appropriate.  “Promotion” has overtones of selling, hyping, or pushing; what the university needs is, rather, facilitation, a willingness to support innovation based on local pedagogical needs and properly balanced with critical assessment.  As noted on p. 34, most of the online success stories at Queen’s “began as grassroots efforts” – i.e., they are not the fruits of top-down planning or promotion, but of faculty initiative and administrative facilitation.

2.19.  The discussion of CDS on pp. 43-45 sidesteps a hugely important fact:  “Of the 661 FTEs enrolled in CDS, 85% are on-campus Queen’s students taking CDS courses because on-campus courses are full, or to resolve scheduling conflicts, or because CDS is the only Arts and Science unit offering courses in the Summer Term.”[8]  This fact is never mentioned in the Draft, and it should be.  One implication:  in keeping with the Report’s recommendation that we cease to use “distance” as a euphemism for on-campus online learning (p. 9), it should also recommend that “Continuing and Distance Studies” be renamed.

2.20.  Also in connection with the section on CDS:  At the bottom of p. 43, the Draft airily dismisses concerns about online learning (a) being promoted for financial benefits and (b) “becoming the dominant mode of teaching in FAS.”  But the Draft never mentions the many administrative and provincial statements, documents, and practices that underlie and support these concerns:  as for financial motivations, the “Business Case to Grow Distance Enrolments in the Faculty of Arts and Science,” developed in August 2011, is very explicit.[9]  Queen’s University Budget Report 2011-12 states: “As part of its planning exercises (in the face of the need to balance the budget), Queen’s has been exploring various revenue-generating ideas. For example, […] offering Queen’s degrees and certificates through distance on-line learning.”[10]  And at the provincial level as well, we constantly see “strategic expansion of online learning” recommended for purposes of “cost reduction.”[11]

As for growth in online learning, that is one of the constant key recommendations from the provincial level. An Ontario policy paper leaked in February 2012 suggested that three out of five university courses be offered online.[12] HEQCO’s most recent report recommends “The coherent development of more online learning opportunities.”[13]   Queen’s Proposed Mandate Statement (Oct. 2012) promises rapid development in “technology-enabled learning” (see p. 2); and as early as Where Next? (Jan. 2010) Principal Woolf offered this vision of an undergraduate’s school year at Queen’s:  “For example: one small (full year) class per year, two larger format classes, one offered virtually (through a combination of real-time and asynchronous discussions and lectures), and one as a research component that could be used to double up a credit?” (p. 8).  In this context (none of which is acknowledged in the Draft), it is hardly sufficient to argue that “Based on the number of online courses that exist, there appears little danger of this becoming the dominant mode of teaching in FAS.”

2.21.  Re: “The lack of evidence-based positions in the arguments against online teaching highlights a greater problem…” (p. 45, under recom. 5).  See Xu and Jaggars, cited in 3.4, below.

2.22.  Re: the advice to “Develop a Queen’s educational technology strategy” (p. 51):  in such a connection it is important to bear in mind the diversity and autonomy of disciplines, personnel, etc., and ensure that the “Queen’s strategy” has to do with facilitation of local objectives rather than with promotion or imposition of system-wide innovations.  The same caveat applies to the next item, which seems to envision using the Business School’s technological development as a model (p. 51).

2.23.  Re: advice to “Develop a business case for investing in classroom and lecture capture technology” (p. 52).  Not a “business case for investing in” (see bullet 3, p. 9, and “Priorities,” p. 10) but a “study of.”  What this section (on p. 52) actually describes sounds fine; “business case” is apparently a misnomer.

2.24.  Re: the commentary on Recommendation 7 (p. 53):  It is unacceptable that “robust discussion and dissent” should be regarded as a problem or as a symptom of “failure.”  This comment as a whole appears to presume the virtues of “promotion,” “advoca[cy],” and “advertising”—but even if one is in favour of online learning, one must value dissent, opposition, and critique as well as supportive research, or the latter quickly becomes bogus.

2.25.  Re: “Online courses more so than traditional courses present to the world an image of Queen’s”:  I understand the logic, but it is overstated and isn’t consistent with the way the Draft minimizes the prevalence of online courses on pp. 43-44.  Queen’s teaching reputation is deep and wide and is based almost entirely on its “traditional” courses.  So some rephrasing may be in order, e.g.: “Online courses have a far greater potential than do traditional courses to be considered out of context as representing Queen’s teaching standards and practice” (this point is made in similar terms somewhere else in the Draft, I’m not sure where).

2.26.  Re Faculty Board and the Curriculum Committee (p. 56, para. 3).  The Draft may possibly refer to something else altogether, but I think this should read:  “In the December meeting of FAS Faculty Board, a motion to require curriculum committee approval for all ‘course variants’ was referred to the Curriculum Committee.”  (See FB Minutes, December 2012, p. 8).  There has been much confusion about this motion and this issue, and I have therefore sought to clarify it elsewhere.[14]

2.27.  On the one hand the Draft acknowledges that “there would be no systematic way to assess whether the variants [i.e., online and classroom courses] are similarly designed” (p. 55, bullet 4).  On the other, it says “we see no reason to distinguish courses based upon the mode of instruction” (p. 57, para. 1; cf. p. 63).  This does not appear to be consistent.  The first statement acknowledges that online courses are or may be differently designed, but the second appears to reduce this difference to one in the “mode of instruction.”  The fact is that an online course is, or should be, considerably different in design from the on-campus equivalent; but whether it differs only in “mode of instruction” is precisely what we do not know.  Though current practice is to give the “equivalents” the same course number and credit, they are only nominally equivalent, for nothing is yet being done to ensure their academic equivalence.  The online course variants are not currently being assessed by curriculum committees.  They should be, and the Draft should recommend that they be.

2.28.  The break-down of course “categories” and the description of their respective development and evaluation processes (pp. 57-59) is confusing in that category 1, “Regular courses,” and category 3, “Online courses offered through…CDS,” are overlapping categories; and it is misleading in that the description thus manages to sideline the key problem that online course variants offered through CDS are not approved by curriculum committees.

As it stands, the Draft says that “Arts and Science courses are first approved by their home department and then sent to a Faculty Board Curriculum Committee” (p. 58, bullet 3).  But most courses offered through CDS are Arts and Science courses, and they are not, at present, sent to the curriculum committee.  This should be noted here; even in the sections on “Courses offered through organized initiatives” (pp. 59, 60), it is not explicitly stated that the CDS “course” is generally a variant of an existing traditional course and that it does not require approval by the Curriculum Committee because it is given a name and number identical to the existing traditional course.

It is important that the Report clarify rather than further muddy the water on this issue.

2.29.  The important middle paragraph on p. 63 argues that online courses “should not suffer a greater administrative burden than a regular course that has not been updated in many years.”  Two points:  first, as it stands, the problem is that the online variants produced by CDS have no requirement for approval by curriculum committee; all that has been proposed is that the “burden” of approval be equivalent, not greater.  Second, it is far from true to say that a traditional course “has not been updated in many years” just because the course description has not been updated or sent back to the curriculum committee.  Because instructors must physically re-iterate a traditional course every time they teach it, there is an intrinsic necessity that it be “reviewed,” at least by the instructor, every time it is taught.  I cannot speak for everyone, but I don’t know anyone who teaches the same course in the same way every year.  But this necessity does not apply to courses or course components that can be technologically re-run, so this may be a reason why online and blended courses would need formal review at intervals that would be excessive for traditional courses (see also 2.7 above).

This distinction may be good news in terms of resources, given the concerns expressed about overworking curriculum committees (pp. 64-65).   It may well make sense to review online courses every 5 to 8 years but to let traditional courses run until someone points out a problem.

2.30.  The claim that “active learning techniques do a better job of engaging students” (68) appears to me to be a tautology.

3. Matters of Evidence, Demonstration, and Advocacy

The Draft rightly expresses respect for “robust, peer-reviewed pedagogical research” and affirms that “policies recommended must be evidence based” (p. 12).  But

(A) pedagogical research has so many variables to deal with that it is difficult to do well and is highly subject to premature generalization and misapplication.   One must therefore take great care in representing it. And

(B) while the Draft cites some research studies in support of online teaching methods, it lapses into unfounded clichés and generalizations when it refers to “traditional” teaching methods:  e.g., that “traditional” learning is student-passive, “the sage on the stage,” etc. (see 2.3, above).

In connection with point (A), consider three cases (3.1 – 3.3) in which the Draft cites research, and a fourth case (3.4) in which it represents claims by advocates and opponents of online learning:

3.1.  U.S. Department of Education Meta-Analysis (“DEMA”).[15]  I find the Draft’s use of DEMA misleading.  DEMA’s executive summary of “key findings” does appear to support online and blended learning, and the SAPTF states that “We place a great deal of emphasis on this study because of its experimental nature and rigorous statistical approach” (p. 14).  The Draft sums up DEMA as having “found that online and blended learning was more successful than face-to-face learning using a robust statistical analysis” (p. 15).  But if the SAPTF is to claim this, it should note two things.  First, DEMA’s usage of the term “blended” differs from usage at Queen’s, and thus its conclusions are not strictly applicable to the support of what is called “blended” at Queen’s; second, the Draft should also quote DEMA’s own “caveats,” since they significantly qualify, or put in doubt, DEMA’s positive conclusions.

As for definition:  DEMA “distinguishes between two purposes for online learning:

  • “Learning conducted totally online as a substitute or alternative to face-to-face learning
  • “Online learning components that are combined or blended [. . .] with face-to-face instruction to provide learning enhancement” (Means et al., p. 9, emphasis in original).

This usage of “blended” may include any simple addition of online learning components to “face-to-face” instruction (cf. DEMA p. 51, as quoted below), so it is no surprise that such a practice should enhance learning.  This usage covers what the SAPTF Draft defines as “traditional” courses,[16] but is at odds with Queen’s usage of “blended learning” (see 2.1 above).  The “blending” of courses at Queen’s entails both decreasing contact-hours and increasing student-teacher ratios (see 2.5 above).  Simply adding online resources will predictably “enhance” learning outcomes, but the same may not be true where contact hours are subtracted and student-teacher ratios are increased.  So DEMA’s conclusions about “blended” learning are not scientifically applicable to what is called “blended learning” at Queen’s.  Yet the Draft cites DEMA as “finding…a statistically significant, beneficial effect seen in blended courses” (p. 38).

DEMA’s “caveats” are as follows:

However, several caveats are in order: Despite what appears to be strong support for blended learning applications, the studies in this meta-analysis do not demonstrate that online learning is superior as a medium. In many of the studies showing an advantage for blended learning, the online and classroom conditions differed in terms of time spent, curriculum and pedagogy. It was the combination of elements in the treatment conditions (which was likely to have included additional learning time and materials as well as additional opportunities for collaboration) that produced the observed learning advantages. At the same time, one should note that online learning is much more conducive to the expansion of learning time than is face-to-face instruction.

In addition, although the types of research designs used by the studies in the meta-analysis were strong (i.e., experimental or controlled quasi-experimental), many of the studies suffered from weaknesses such as small sample sizes; failure to report retention rates for students in the conditions being contrasted; and, in many cases, potential bias stemming from the authors’ dual roles as experimenters and instructors. (Means et al., p. xviii, emphases in original)

It should be noted that these caveats pertain to “the studies in this meta-analysis”—i.e., not to the studies DEMA rejected, but to those it used.  And if the learning experiences compared differed not only in media but “in terms of time spent, curriculum and pedagogy,” and if any improvement in results was due in part to “additional learning time and materials,” this should at the very least be acknowledged by anyone who invokes DEMA to claim that it “found that online and blended learning was more successful than face-to-face learning using a robust analysis” (Draft, p. 15; cf. p. 63).  The caveats point to an apples-and-oranges problem.

And as DEMA states further on, it

should not be construed as demonstrating that online learning is superior as a medium. Rather, it is the combination of elements in the treatment conditions, which are likely to include additional learning time and materials as well as additional opportunities for collaboration, that has proven effective. The meta-analysis findings do not support simply putting an existing course online, but they do support redesigning instruction to incorporate additional learning opportunities online.  (Means et al., p. 51, emphasis added)

I emphasize the word “additional” in the final sentence of this passage since the practice it describes equates with what the SAPTF Draft defines as the “traditional course,” rather than with Queen’s usage of “blended” (see Draft, pp. 12-13).

On p. 34, the Draft cites DEMA again and adds:  “We conclude that there is a great deal of merit in promoting the use of online learning at Queen’s because of its proven effectiveness.”  A great deal is made to hang on DEMA here; and since it’s presented as a conclusion, this statement will inevitably be plucked out of the Report and quoted out of context.  I would therefore urge two cautions.  First, use “facilitating” rather than “promoting” (see 2.18, above); second, while “proven effectiveness” is not wholly unjustified, it’s a categorical and unqualified expression of an approval that DEMA itself makes in a much more qualified way.  Something like “proven potential in certain applications” would be truer to DEMA’s assessment. For these reasons this key sentence might better be phrased as:  “We conclude that there is merit in facilitating online learning at Queen’s because of its proven potential.”

3.2.  Deslauriers, L., E. Schelew, and C. Wieman, “Improved learning in a large-enrollment physics class.”  Science 332 (2011): 862-864. The Draft invokes this study (p. 17) as showing that students taught by “an inexperienced instructor using a teaching approach based on research in cognitive psychology and physics education” had “higher attendance, greater engagement, and more than twice the learning success than those taught by the expert in the field” through “face-to-face lecturing.”

But when examined in detail, the Deslauriers study is not nearly so compelling as this makes it sound.  First, the study represents a tiny sample:  a one-off experiment comparing two sections of a single course over one week (the 12th week in term).[17]  Second, any differences reported might be due to the change in routine (or “the Hawthorne effect”):  for the method was to compare a “control section” which continued attending lectures as it had been doing for 11 weeks, with an “experimental section” (comprising students who had also been attending lectures for 11 weeks).  The latter was taken over for the 12th week “by two instructors who had not previously taught these students,” using different, non-lecture methods (p. 863).[18]  Third, the experimental group had not one but two instructors—a significant difference not noted by the SAPTF.  Fourth, while Deslauriers et al. report “engagement” of 85% in the experimental group as against 45% in the control group, their means of measurement are impressionistic.[19]  Actual “engagement” in a lecture consists of mentally following and understanding it, which does not correlate well with physical signification such as “gesturing” or “nodding in response to comment by instructor.”  On the other hand, the activities of the experimental group, such as “student-student discussion” and “small-group active learning tasks” are more apt to manifest “engagement” physically.  Fifth, while Deslauriers et al. report a very striking improvement in test scores by the “students in the interactive section,” they also note that “both sections, interactive and traditional, showed similar retention of learning 6 to 18 months later” (p. 862).  Sixth, this case concerns an introductory physics course taught in sections of close to 300 students; effects achieved in this single-study case may not be at all applicable to other disciplines, such as humanities, or to smaller lecture courses.

I go into much detail in these cases to make the point that the results of pedagogical research are exceedingly easy to exaggerate or misrepresent in the reporting—particularly when the results are what one could wish for.  I have examined the SAPTF’s citation of the Deslauriers study merely as an example; I acknowledge that it is a minor item in the Draft’s battery of evidence, but the Draft does report its flashy results without pointing out its weaknesses.

3.3.  “Initial analysis of data from CLASSE student surveys shows a statistically significant increase in student engagement in the blended version…” (p. 42).  Where is the evidence?  How is one to assess such claims?  This seems to reiterate a claim made in an Arts and Science memo of 24 Jan. 2013.  In that case the evidence was not yet available; when I asked Brenda Ravenscroft about it, she responded that “The results are preliminary, as I indicated in the call for proposals. Once we have had time to discuss and review the results and to compile a report it will be shared, as appropriate” (personal email of 27 Jan. 2013).  As I responded to her, one should never publish claims for research results before one can cite the research.  Certainly we should not be expected to credit claims on grounds to be disclosed at some future date.  On 27 January I asked Dean Ravenscroft to let me know when this “report” should become publicly available, but so far I have heard no more about it.

It is worth noting that claims to measure “student engagement” are dodgy in any case, and especially in contrastive situations (e.g., in lectures vs. in discussion groups).  See 3.2, above, re the “engagement measurement” used by Deslauriers et al., which counts things like “nodding in agreement.”

3.4.  World Wide Learn vs. University of Illinois.  In order to represent both pro- and anti-online views, the Draft inserts passages from World Wide Learn (“an advocacy group”) and a U. Illinois website (pp. 19-21).  The comparison is rather biased, however:  over a page is given to the former, while a short paragraph is selected from the latter (the same website does list further objections).   And while there is no criticism of the former for its impressionistic and unfounded claims (e.g., points 7, 8, 11), the latter is criticized for impressionism when it warns that online education “is an inappropriate learning environment for more dependent learners.”  A footnote objects:  “We know of no novice-expert difference studies to support this, in fact, online learning is often more structured than lectures where you can attend or not and do what you want…”  The footnote itself is impressionistic.

Moreover, there is an important recent study that appears to support the U. Illinois caution:  see Di Xu and Shanna Smith Jaggars, “Adaptability to Online Learning: Differences Across Types of Students and Academic Subject Areas.”  Community College Research Center, Teachers College, Columbia University.  February 2013.  The abstract states:

Using a dataset containing nearly 500,000 courses taken by over 40,000 community and technical college students in Washington State, this study examines how well students adapt to the online environment in terms of their ability to persist and earn strong grades in online courses relative to their ability to do so in face-to-face courses. While all types of students in the study suffered decrements in performance in online courses, some struggled more than others to adapt: males, younger students, Black students, and students with lower grade point averages. In particular, students struggled in subject areas such as English and social science, which was due in part to negative peer effects in these online courses.

See also the discussion of this study by Jake New, “Online Courses Could Widen Achievement Gaps Among Students.” Chronicle of Higher Education, 21 Feb. 2013.

Since the SAPTF cites and quotes pro-online research, it should also cite some of the anti-online research.

 4.  Typos, grammar, style, and other minor suggestions for revision:

  • [This list is omitted here.]

5.  Clearer expression needed:

  • p. 12, last sentence before the 2nd list of bullets (“The goal of these…”)
  • p. 16, 2nd full para, last sentence (“In seeing…”)
  • p. 49, 1st full para, 1st sentence (“Apart from…”)
  • p. 54, 2nd para, last sentence (“A common theme…”)
  • p. 55, 1st bullet, line 5 (“Few online courses…”) (“Few online courses” should probably be “Few instructors of online courses,” but it is not clear how the “specifically” clause fits logically or grammatically or what it means)


[1] Faculty of Arts and Science, “Blended Learning,” 31 Mar. 2013.  This definition has been widely used in FAS’s promotion of blended course development.  The memo “New Initiatives in Online Learning in Arts and Science: Information for Departments and Instructors” (February 2011) states:  “The online component of a blended course–frequently basic content delivery–is primary, not supplemental . . . . contact hours are usually reduced in comparison to fully in-class courses, and there may be savings in teaching resources. The Faculty believes this is a cost-effective way of managing enrolment while enhancing the quality of teaching and learning.”  Another Arts and Science memo of May 2011 states that the “blended model generally includes online materials that replace lecture-style delivery of fundamental content, fewer contact hours…”  A Queen’s Journal article of March 2012, based on an interview with Brenda Ravenscroft, states: “Aimed at large first-year courses, the [blended learning] program emphasizes online readings and assignments in conjunction with decreasing the number of lectures and the number of hired faculty needed to teach courses.”

[2] Only much later does the Draft mention that Queen’s “blended” courses have “fewer classroom hours (to balance the additional student workload taking place online)” (p. 40), or that “Enrolment capacity has increased by 10-20% in each blended course” (p. 41).  Even here the representation is partial:  why don’t lecture classes have fewer classroom hours to balance the additional student workload taking place in outside reading?  And given that 6 sections of PSYC 100 were collapsed into 4 sections of the “blended” course (see sec. 2.5, below), the 10-20% rise in enrolment is only part of the story.  This issue is also mentioned, but dismissed, at p. 61. In any case, these matters should be highlighted in connection with the (re)definition of “blended” on p. 13.

[3] “Traditional” learning includes seminars, labs, and the writing of interpretive or research papers, which are clearly not examples of passive learning.  Conversely, a student may be passive in viewing lecture-capture and other online materials, which are not “traditional.”

[5] Later in the Draft, Dean Ravenscroft gives this figure as 10-20% (p. 42), so it appears to be rising.  In MUSC P52, lecture-capture and “marking assistance” have facilitated increasing enrolment from 75 to 200, according to the Draft, pp. 28-29.

[6] E.g., FILM 260, as described in the Draft, p. 28.

[8] Brenda Ravenscroft et al., “Business Case to Grow Distance Enrolments in the Faculty of Arts and Science,” Aug. 2011, p. 37.

[10] Queen’s University Budget Report, 2011-12 (n.d.; ca. 2011), p. 7.

[11] HEQCO, “Quality: Shifting the Focus.”  April 2013.  p. 8.

[13] HEQCO, “Quality: Shifting the Focus.”  April 2013.  p. 18.

[15] Barbara Means et al., “Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies.”  U.S. Department of Education, Revised Sept. 2010.   See Draft, pp. 14-15, 34.

[16] “In a Traditional Course students attend class sessions in an assigned face-to-face environment and complete reading, practice and review in unstructured private time outside class. Such a course may use online technologies for simple support purposes, such as email exchanges with students, student notifications, and posting of course notes. Technology may also be used as a supplement to engage the students with the curriculum and learning process (optional discussion boards, electronic repository of readings, lecture slides, etc.).” (SAPTF Draft, pp. 12-13, italics added)

[17] This raises the possibility that its results are a fluke; the difference in test scores yields an effect size of 2.5 standard deviations, which sounds too good to be true; the authors themselves note that “The test score distributions are not normal” (p. 863) and that “other science and engineering classroom studies report effect sizes less than 1.0” (p. 864).
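For readers unfamiliar with the measure, a standardized effect size of the kind cited here is, in its usual Cohen’s d form (which I assume, though the authors do not say so explicitly, is what Deslauriers et al. report), the difference between the two groups’ mean scores scaled by their pooled standard deviation:

```latex
% Standardized effect size (Cohen's d), assuming this is the measure
% reported by Deslauriers et al.:
d = \frac{\bar{x}_{\mathrm{experimental}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

On this scale, d = 0.5 is conventionally a “medium” effect and d = 0.8 a “large” one, which is why a reported d of 2.5, far above the “less than 1.0” typical of comparable classroom studies, invites scrutiny.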

[18] The authors include a comment dismissing the “Hawthorne effect,” but “only because this is such a frequent question raised” in response to their study (Deslauriers et al., “Supporting Online Material,” sec. 6).

[19] “The engagement measurement is as follows. Sitting in pairs in the front and back sections of the lecture theatre, the trained observers would randomly select groups of 10-15 students that could be suitably observed. At five minute intervals, the observers would classify each student’s behavior according to a list of engaged or disengaged behaviors (e.g. gesturing related to material, nodding in response to comment by instructor, text messaging, surfing web, reading unrelated book). If a student’s behavior did not match one of the criteria, they were not counted, but this was a small fraction of the time. Measurements were not taken when students were voting on clicker questions because for some students this engagement could be too superficial to be meaningful as they were simply voting to get credit for responding to the question. Measurements were taken while students worked on the clicker questions when voting wasn’t underway. This protocol has been shown by E. Lane and coworkers to have a high degree of inter-rater reliability after the brief training session of the observers.” (Deslauriers et al., “Supporting Online Material,” sec. 1)

