
By Gary Jason | Solid data show that online learning is effective. One of the most popular proposals for reforming America’s system of higher education—as well as its system of primary and secondary public education—is interactive learning online (ILO). It is also called “Internet-based education,” “distance learning,” or “digital learning.” The dream is to provide low-cost, high-quality courses to all students online.

Some have waxed enthusiastic about ILO, from columnist/commentator Juan Williams to cultural critic/computer scientist David Gelernter. Others, however, have expressed skepticism, such as Darryl Tippens (provost of Pepperdine University) and James Patterson.

The argument on both sides tends to be concept- and anecdote-driven, rather than data-driven. That is why a recent report by William Bowen, Matthew Chingos, Kelly Lack, and Thomas Nygren of Ithaka S+R (a non-profit organization devoted to furthering online education) is a welcome addition to the discussion.

The report, “Interactive Learning Online at Public Universities: Evidence from Randomized Trials,” summarizes the results of an experiment the team conducted to rigorously compare ILO with traditional classroom-based learning (CBL).

Specifically, Bowen et al. randomly assigned students who wanted to take an introductory statistics course to two groups. The first (the control group) enrolled in a traditional classroom-based course. The second (the experimental or “treatment” group) took a prototype online course developed at Carnegie Mellon University, and also met face-to-face with an instructor once a week, in sessions where students could ask specific questions and get personal assistance. All students were subsequently tested on how well they had mastered the material.

The study found that there was no statistically significant difference in outcomes for students in their sample overall, and none for any particular subgroup (by gender, ethnicity, language spoken at home, year in college, or income level). It appears that students learned statistics just as well online as in the classroom.
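The headline result rests on a standard two-sample significance test: compare mean exam scores across the randomly assigned groups and check whether the gap exceeds what sampling noise alone would produce. A minimal sketch of that comparison in Python, using Welch’s t statistic on simulated scores (the group sizes and score distributions here are hypothetical illustrations, not the study’s data):

```python
import random
import statistics

random.seed(0)

# Hypothetical exam scores (0-100 scale) for two randomly assigned arms,
# roughly mirroring a design with ~300 students per group.
control = [random.gauss(72, 10) for _ in range(300)]    # classroom-based
treatment = [random.gauss(72, 10) for _ in range(305)]  # hybrid online

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / (var_a / len(a) + var_b / len(b)) ** 0.5

t = welch_t(treatment, control)
# With samples this large, |t| < 1.96 corresponds roughly to p > 0.05,
# i.e. "no statistically significant difference."
print(f"t = {t:.2f}; significant at the 5% level: {abs(t) > 1.96}")
```

A t statistic below the critical value says only that the observed gap is consistent with chance; that is the precise sense in which the study found “no statistically significant difference.”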

When you consider how inexpensive online learning is (or is likely to be in the future) compared to classroom-based learning, this is—if confirmed—a major finding. For if true, it means that online learning is likely to be far more productive (though the study concedes that quantifying just how much more productive is not an easy matter).

This study seems to support the findings of a massive meta-study published by the U.S. Department of Education in 2009. That study analyzed more than a thousand empirical studies of online learning, identifying 51 with standards rigorous and informative enough to be worth focusing on. The authors concluded that “on average students in online learning conditions performed better than those receiving face-to-face instruction.”

This study had its critics, and its authors qualified their assessment by noting that some of the positive effect of online programs may have been due to the extra time given to students in programs that combined online instruction with additional face-to-face interaction with instructors. Still, the result is suggestive: online education certainly can be effective.

I’m impressed that Bowen and his group are using rigorous tests to discover what actually works in higher education. While randomized control group experiments are the norm in medical research, they are not as common in educational research.

One of the chief reasons I tend to be skeptical of “great new innovations” in education is that they are rarely rigorously tested. On the college level, I’ve seen the “audio-visual age” come and go, followed by the “computer-assisted instruction age,” which likewise came and went. And over the decades I’ve dealt in my own classes with the results of educational “reforms” such as the New Math, the New New Math, Bilingual Education, and the “whole language” approach to reading.

It is clear these ideas were seldom if ever tested in large-scale control-group experiments. The results of those innovations have been uniformly disappointing and in some cases disastrous.

But I have some issues with the Bowen group study.

  • The report itself observes that “Levels of educational attainment in this country have been stagnant for almost three decades, while many other countries have been making great progress in educating larger numbers of their citizens.” True enough—but have any of the countries that have shown progress used ILO to achieve it? If not, perhaps our focus should be on adopting what actually works in other countries first.
  • This study only looks at an introductory statistics course—a basic math course. It would be nice to see similar experiments with arts, humanities and lab science classes to see if online approaches work there as well.
  • The report concedes that the term “online learning” covers a wide range of teaching experiences, ranging from “uploading material such as syllabi…to the internet, all the way to highly sophisticated interactive systems that use cognitive tutors and…multiple feedback loops.” What isn’t clear in this study is whether these various kinds of teaching are disaggregated. For example, did any of the classroom-based courses incorporate digital methods as well? If so, I missed it.
  • The study’s sample size was not large—605 students in total, to be exact. That is larger than is common in much educational research, but that is hardly a high standard.
  • The study did not include data from community colleges. But community college students are generally weaker than students at four-year colleges, so it might be the case that online education works only (or at least better) for students who are fairly good to begin with.
  • The study admits that it “could not randomize instructors in either group and thus could not control for differences in teacher quality.” Conceivably, the quality of the Carnegie Mellon teachers could have made up for some unseen drawbacks of ILO.
  • The participants in the study are all self-selected, which could mean that distance learning is only or mainly effective with highly motivated students.
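The sample-size concern can be made concrete with a power calculation: with roughly 600 students, a finding of “no significant difference” only rules out moderately large effects. A quick simulation under assumed conditions (normally distributed scores, about 300 students per arm; every parameter here is hypothetical) shows how detectable various effect sizes would be:

```python
import random
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / (var_a / len(a) + var_b / len(b)) ** 0.5

def power(n_a, n_b, effect_d, sims=2000, sd=10.0):
    """Fraction of simulated trials that reject at the 5% level when the
    true group difference equals effect_d standard deviations."""
    hits = 0
    for _ in range(sims):
        a = [random.gauss(0.0, sd) for _ in range(n_a)]
        b = [random.gauss(effect_d * sd, sd) for _ in range(n_b)]
        if abs(welch_t(a, b)) > 1.96:
            hits += 1
    return hits / sims

random.seed(1)
for d in (0.1, 0.2, 0.5):
    print(f"effect size d={d}: estimated power = {power(300, 305, d):.2f}")
```

Under these assumptions, a study of this size reliably detects large effects but can easily miss small ones, so a null result still leaves room for modest real differences between ILO and classroom teaching.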

In short, this study is intriguing, but is only a modest contribution towards really testing the usefulness of online learning.

The authors, to their credit, recognize this. As they state, “We do not mean to suggest—because we do not believe—that ILO systems are some kind of panacea for this country’s deep-seated educational problems, which are rooted in fiscal dilemmas and changing national priorities as well as historical practices. Many claims about online learning (especially about simpler variants in their present stage of development) are likely to be exaggerated. But it is important not to go to the other extreme and accept equally unfounded assertions that adoption of online systems invariably leads to inferior learning outcomes and puts students at risk.”

In sum, while the few rigorous studies that have been done on distance learning show that it seems to work about as well as traditional methods, more research needs to be done to say that with confidence.

Of course, besides scientific testing, there is another method for empirically determining whether some particular practice works: the free market. If ILO continues growing in popularity with students and employers, its worth will be established by the free choices of consumers.