
By Lewis M. Andrews (Source) 

Three recent developments have raised the question of whether social media platforms should be able to censor views that differ from expert opinion.

The first involves YouTube’s decision to take down a widely circulated video by the co-owners of a Bakersfield, California, “Urgent Care” clinic, Dr. Dan Erickson and Dr. Artin Massahi. Their presentation, based on their own research, suggests that widespread lockdowns are not necessary to combat the virus. 

The second is The Atlantic’s publication of an article by two law professors, Harvard’s Jack Goldsmith and the University of Arizona’s Andrew Keane Woods, in which they argue that some censorship of the internet “can do enormous public good.” This, they say, is because today’s technological sophistication, combined with data centralization and government-industry cooperation, makes it possible for social media to reliably screen out all unscientific ideas.

“Significant monitoring and speech control are inevitable components of a mature and flourishing internet,” Goldsmith and Woods say. “And governments must play a large role in these practices to ensure that the internet is compatible with a society’s norms and values.”

The final development is a joint statement on COVID-19 by America’s biggest internet companies, which easily could have been written by the aforementioned law professors. Facebook, Google, LinkedIn, Microsoft, Twitter, Reddit, and YouTube declare that, from now on, they will collaborate to combat “fraud and misinformation” by systematically “elevating authoritative content on our platforms.”

There is, it must be admitted, a seemingly valid case for allowing expert opinion to guide the results of online searches—especially where public health and safety are concerned. It goes without saying that credentialed authorities generally know more about their respective fields than laypeople. And conversely, more than a few people online all too readily accept poorly documented answers to complex and difficult problems, even when doing so puts them and their neighbors in jeopardy.

But beneath these arguments is a false, if widely accepted, assumption about the knowledge these experts claim to possess. Much of what is accepted as experimentally true in the social sciences, education, medicine, and even biology cannot, in fact, be replicated. 

Although they have not received the journalistic coverage they should, concerns about what academics call “experimental irreproducibility” have been surfacing for nearly a decade. In 2012, scientists at the biotech firm Amgen found they could confirm the results of only six of 53 supposedly landmark cancer studies published in prominent journals. Four years later, Nature conducted an online survey of scientists, 70 percent of whom said they’d tried and failed to reproduce their colleagues’ published findings.

In science, the ability to duplicate the results of a study is the ultimate test of its validity. And by this gold standard, the deference we have been taught to show medical and social science researchers turns out to be completely unjustified. John Ioannidis, co-director of Stanford University’s Meta-Research Innovation Center, believes that up to half the information published in peer-reviewed journals is wrong, an opinion he shares with The Lancet’s respected editor-in-chief, Richard Charles Horton. National Association of Scholars President Peter Wood says that many of the regulations, laws, and programs routinely passed by Congress are based on little more than research flukes and, in many cases, outright “statistical manipulation.”
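
Ioannidis’s estimate is not mere pessimism; it falls out of simple arithmetic about how hypothesis testing works. Below is a minimal sketch of the standard positive-predictive-value calculation behind his argument, in Python; the prior, power, and significance threshold are illustrative assumptions chosen for the example, not figures from his work.

```python
# Back-of-the-envelope check of the claim that up to half of published
# findings are wrong, using the positive-predictive-value arithmetic
# Ioannidis made famous. All input numbers are illustrative assumptions.

def ppv(prior: float, power: float, alpha: float) -> float:
    """Fraction of statistically significant results that are actually true.

    prior: probability a tested hypothesis is true before the study runs
    power: probability the study detects a real effect (1 - beta)
    alpha: significance threshold (false-positive rate under the null)
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# Suppose 1 in 10 tested hypotheses is true, studies have 50 percent power,
# and journals publish whatever clears p < 0.05.
print(f"Share of published positives that are true: {ppv(0.10, 0.50, 0.05):.0%}")
# -> 53%: nearly half of published "findings" would be false even before
#    publication bias or statistical manipulation enters the picture.
```

If anything, the sketch is optimistic, since it ignores the selective reporting and “statistical manipulation” that push the share of true findings lower still.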

If the findings of today’s irreproducible studies were wrong in some random, unpredictable way, then we could at least credit the researchers with trying their best. We could be generous enough to say that they are circling the truth and, with luck, some future measurement technique or statistical formula might get them even closer. 

But while today’s medical and social science experiments do not appear to be overwhelmingly biased in favor of a single theory, policy, or, in the case of medicine, cure, neither are they completely random. For no matter what these investigations conclude, they all somehow manage to shine a favorable light on their sponsors: the federal government, state governments, or government-affiliated nonprofits.

Whatever knowledge is actually gleaned from the annual $40 billion investment that Congress and state legislatures make in scientific research, the subsequent study write-ups always leave two impressions. The first is that the results are so promising as to justify “further investigation.” In other words, more public funding next year. The second is that those outside the academy who have the education, experience, and authority to implement what has supposedly been learned thus far—in other words, the government officials who subsidize the research—are, as a result, even wiser and more trustworthy than before.

These impressions are reinforced by an army of college and university administrators, which grew by over 60 percent between 1993 and 2009, and which, not coincidentally, is supported by overhead fees collected from government research grants. (Overhead is charged on top of a grant’s direct research costs: the base rate averages 52 percent nationwide, so a grant with $1 million in direct costs sends an additional $520,000 to the institution. At elite institutions like Yale, the rate goes up to 67.5 percent.) As New York Times columnist Ross Douthat has observed, when college presidents are confronted with the fact that modern scientific research is not nearly as advanced as government grants to their schools would suggest, they all fall back on the same answer: “Give us even more money.”

The idea of a tacit alliance between scientists and government bureaucrats is not new. As far back as the early 1970s, the late cultural philosopher Irving Kristol coined the phrase “New Class” to describe the rising number of economists, environmentalists, health care workers, policy analysts, and other professionals with a common need for increased public funding. 

Kristol was not suggesting anything as blatantly corrupt as a deliberate conspiracy. He was simply acknowledging the natural human tendency of federal and state officials to see the most value in those studies that promise to validate their authority. 

More recently, at least a few academics have gone public with their concerns about slanted research, even at their own universities. In theory, faculty have an objective “truth finding” and “truth telling” role, says University of California, Berkeley law professor Stephen D. Sugarman. But “because of their own financial interests, [they] may be dishonest in what they say they have discovered and/or how they describe the state of knowledge in their field.” Musa al-Gharbi, a Paul F. Lazarsfeld fellow in sociology at Columbia and a research associate at Heterodox Academy, has similarly suggested that nearly every academic paper touching on “how society should be best arranged” has likely been subject to “prejudicial design” based on how the outcome would end up profiting the author.

Nowhere has the opportunistic embellishment of study findings been more blatant in recent years than with cancer research, funded with $6 billion annually from the federal National Cancer Institute and with even more from other government agencies and related nonprofits. Indeed, not a day goes by without a breathless headline announcing that some investment has led to “incredible progress” and millions of “cancer deaths averted.” A 2016 report by Kaiser Health News found that cancer experts often describe questionable treatments with terms such as “breakthrough,” “game changer,” “miracle,” “cure,” “home run,” “revolutionary,” “transformative,” “life saver,” “groundbreaking,” and “marvel.”

Unfortunately, writes John Horgan, director of the Center for Science Writing at Stevens Institute of Technology, in a recent issue of Scientific American, the reality of what’s been achieved is far less impressive. While there have been some improved therapies for childhood cancers and cancers of the blood, bone marrow, and lymph systems, Horgan reports that most highly publicized advances are statistically unimpressive. Indeed, “the current age-adjusted mortality rate for all cancers in the U.S., 152.4 deaths per 100,000 people, is just under what it was in 1930, according to a recent analysis.”

Without condemning every academic researcher—some of whom are undoubtedly committed to the advancement of knowledge—it is not an exaggeration to say that a good percentage of contemporary scientists have, consciously or unconsciously, allowed short-term self-interest to distort their pursuit of truth. Nor is it an exaggeration to say that their actions have been enabled by equally self-interested experts in government, nonprofit organizations, and professional associations. 

Perhaps this situation will be corrected in the wake of the growing scandal over experimental irreproducibility. But clearly we have no reason to give credentialed elites a veto over what is available on the internet, even in their own fields. In the end, every argument about online censorship boils down to a debate over which is more likely to benefit society: trust in the collective integrity of the average end-user or in the collective interests of credentialed experts. 

Nothing we know about scientific research gives us any reason to opt for the latter. 

—— 

For this third party post in its full context, please go to: 

The Subtle Tyranny Of The ‘Expert’ Class 

© 2020 The American Conservative.