Any time “scientists” at a company purport to have studied that very company, the public has good reason to be suspicious of the reported conclusions. Were the people running the company genuinely intent on providing credible information, they would commission independent scholars (i.e., scholars not compensated by the company). A management whose desire to inform the public was that strong would want to obviate even the appearance of a conflict of interest. Facebook’s management did not do so, which suggests it may not have been very invested in giving the public an answer to the question: how much influence do users actually have over the content in their feeds? In May 2015, three “Facebook data scientists” published a peer-reviewed study in Science on how often Facebook users had been “exposed to political views different from their own.”[1]
The “scientists” concluded that if users “mostly see news and updates from
friends who support their own political ideology, it’s primarily because of
their own choices—not the company’s
algorithm.”[2]
Academic scholars criticized the study’s methodology and cautioned that the
risk of polarized “echo chambers” on Facebook was nonetheless significant.[3]
I was in academia long enough to know that methodological criticism from more than one scholar is enough to put an empirical study’s findings in doubt. These days, however, I am more interested in the broader implications of the “echo-chamber” criticism. The entire essay is at “Beyond Facebook’s Impact.”
[1] Alexander B. Howard, “Facebook Study Says Users Control What They See, But Critics Disagree,” The Huffington Post, May 12, 2015.
[2] Ibid. I put the quotation marks around “scientists” to signal that the conflict of interest makes the label itself contestable as applied to the study’s investigators.
[3] See, for example, Christian Sandvig, “The Facebook ‘It’s Not Our Fault’ Study,” Multicast, Harvard Law School Blogs, May 7, 2015.