Back in April of this year I watched with great interest as Mark Zuckerberg appeared before Congress in the wake of the Cambridge Analytica scandal, amid growing concerns over Facebook’s commandeering of user data and overall privacy considerations. While I found the collision of these two worlds fascinating to watch unfold, of particular interest to me were some of the threads that emerged regarding the role that Facebook now plays in political elections around the world. This exchange with Senator Dianne Feinstein was perhaps the most salient example of Zuckerberg’s reckoning on this point—and his precisely calculated balance of remorse for the past and optimism for the future.
With all the partisan noise that has been generated over Russian interference in the 2016 election, it’s been hard to pin down exactly what to make of it all, and in particular to determine to what degree Facebook is at fault. The ideologically weighted commentary provided by Left- or Right-leaning publications and networks doesn’t help either. Democratic voices seem hellbent on legitimizing any and all accusations of Russian meddling; Republicans seem allergic to nearly every suggestion that our friends in Moscow had any part to play whatsoever. Listening to this chaotic game of ping pong, one begins to realize that both sides seem equally right and equally wrong here. Republicans are right to be skeptical of the accusations that are tirelessly drummed up, but in their incredulity they seem unable to acknowledge the all too present proof of Russian efforts to interfere in at least some instances. Conversely, Democratic commentators are right to call attention to the concerted effort that took place leading up to the election to disseminate targeted media content to specific demographics online, while erroneously ascribing almost no agency to those who received such content—if they saw a fake news piece about Hillary, it’s inevitable that they believed it, right?
In my estimation, this point is precisely where I begin to see problems in the logic driving much of the rhetoric on the Left, although it’s hard to say in our technological age to what degree their premise is mistaken. While the obstinacy on the Right is fairly easy to identify and poke holes in, this assumption by many on the Left smells of a non sequitur to me; although it also contains hints of truth, and is therefore the point I want to briefly home in on.
In other words, telling Fox News to pull its head out of the sand is boring and clichéd. However, the question of how much social, economic, moral, and political autonomy people can expect to maintain a grip on in an age where we sip our curated stream of information through the straw provided us by our technological overlords is just slightly more interesting.
These questions are especially prominent in my mind after reading the recently published (and quite lengthy) New Yorker profile on Mr. Zuckerberg by Evan Osnos, in which the issue of election integrity—both past and future—is naturally one of the central threads throughout the piece. While the title itself is somewhat clickbaity and alerts the reader to a framing which assumes a significant degree of election shenanigans (“Can Mark Zuckerberg Fix Facebook Before It Breaks Democracy?”), the meat of the piece actually does a fairly good job of assessing the contours of this issue, at least from a pragmatic and tech standpoint.
In this vein, one of the main portions that caught my eye, particularly because it seems to contradict the answer Zuckerberg provided to Feinstein while testifying before Congress, was when Osnos recounts the aftermath of the election and the position Zuckerberg still seems to be firmly settled on:
“After the election, Facebook executives fretted that the company would be blamed for the spread of fake news. Zuckerberg’s staff gave him statistics showing that the vast majority of election information on the platform was legitimate. At a tech conference a few days later, Zuckerberg was defensive. ‘The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea,’ he said. To some at Facebook, Zuckerberg’s defensiveness was alarming. A former executive told Wired, ‘We had to really flip him on that. We realized that if we didn’t, the company was going to start heading down this pariah path.’
When I asked Zuckerberg about his ‘pretty crazy’ comment, he said that he was wrong to have been ‘glib.’ He told me, ‘Nobody wants any amount of fake news. It is an issue on an ongoing basis, and we need to take that seriously.’ But he still bristles at the implication that Facebook may have distorted voter behavior. ‘I find the notion that people would only vote some way because they were tricked to be almost viscerally offensive,’ he said. ‘Because it goes against the whole notion that you should trust people and that individuals are smart and can understand their own experience and can make their own assessments about what direction they want their community to go in.'”
A few paragraphs later we find Zuckerberg doubling down on this stance:
“He insists that fake news is less common than people imagine: ‘The average person might perceive, from how much we and others talk about it, that there is more than ten times as much misinformation or hoax content on Facebook than the academic measures that we’ve seen so far suggest.’ He is still not convinced that the spread of misinformation had an impact on the election. ‘I actually don’t consider that a closed thing,’ he said. ‘I still think that’s the kind of thing that needs to be studied.'”
The first thing worth noting here is the inescapable political hysteria that flooded in almost immediately after the results of the election. All possibility of deliberate and rational analysis vanished once approximately half of the country realized that Orange Hitler had just become the President of the United States. To everyone not living under a rock—especially the above-mentioned Facebook executive—it was obvious that the masses were looking for blood. A deeply contrite course had to be tacked immediately, and it certainly had to be good enough to placate the sentiments of the raging Twitter masses. Zuckerberg’s supposedly “glib” comment absolving Facebook of any real responsibility in the matter was simply unacceptable, apparently because it failed to properly emit the required contrition and acknowledgment of his complicity in electing such a monster. The Facebook executive’s use of the word “pariah” seems to confirm my reading of his commentary, and of the overall philosophy driving the company’s response to the criticisms: Zuckerberg had to be “flipped” not necessarily because he was factually errant (whether or not he was is beside the point here), but because it just looked really bad. Public relations considerations were obviously the dominant factor at play, over and above any effort to exercise robust analysis from a multitude of angles. I highlight this simply to demonstrate the difficulty of approaching this question with anything resembling a productive and helpful framework.
That aside, it’s interesting to note Zuckerberg’s continued adherence to such a position. These many months into the discussion, it is well documented (by none other than Facebook themselves) that there was indeed a widespread and concerted effort to spread politically curated content to various voter groups. Niall Ferguson, in his recent conversation with Sam Harris, even cites the Russian strategy of creating events on Facebook which people actually showed up to, thereby mobilizing political momentum far greater than that generated by sharing a fake news article. If anyone on this planet is privy to the full weight of this type of evidence, it is none other than Mr. Zuckerberg. So then, why such an obstinate resolve on this point?
The more I reflect on this, the more of a complex puzzle this all becomes. Partisan politics aside, what we seem to be dealing with here is the incoherence generated by the clashing of two vastly different ethical, social, and rhetorical ecosystems. And in a morbidly fascinating way, Zuckerberg himself is serving as a microcosm of this battlefield. It’s almost as if we can watch the fall of the old world and the rise of a new one happening in the thoughts, statements, and actions—both individual and corporate—of a single man. The classical systems of debate, oratory, and in-depth reflection are crumbling each day, along with all the other features of the more rigorous mode of discourse that accompanied our traditional means of communication. They are too slow, too arcane. In their place are erected new frameworks to support the furious speed and the massive volume of data now flooding the marketplace of communication. In this new world, speed is an unadulterated good. Or, as the Facebook motto ran in the company’s embryonic years, “Move fast and break things.” Even delayed pronouncements by public figures or companies on quite literally anything are usually taken as signs of equivocation and lack of moral fortitude.
So, when Zuckerberg recites his incredulity over Facebook’s supposed complicity in the voting outcome of 2016, he is undoubtedly appealing to the old way of doing things. Furthermore, his notion of individuals coming together and determining “what direction they want their community to go in” assumes a premise that is tenuous at best: that there is a large enough swathe of the population which hasn’t already handed over the reins of human progress to Silicon Valley in hopes of arriving at some sort of technologically utopian society. The great irony which one cannot escape here is that while Zuckerberg is still operating in the classical realm of discourse and deliberation, he has simultaneously erected and fortified over the past decade and a half an entire empire whose logic is altogether antithetical to that system and those principles. There is no such thing as real reflection and debate on his platform, or on any other prominent social media platform for that matter. Earlier in the piece, Osnos wonderfully illustrates this addictive and coercive logic built into Facebook’s systematic approach:
“New hires learned that a crucial measure of the company’s performance was how many people had logged in to Facebook on six of the previous seven days, a measurement known as L6/7. ‘You could say it’s how many people love this service so much they use it six out of seven days,’ Parakilas, who left the company in 2012, said. ‘But, if your job is to get that number up, at some point you run out of good, purely positive ways. You start thinking about ‘Well, what are the dark patterns that I can use to get people to log back in?’ ‘
Facebook engineers became a new breed of behaviorists, tweaking levers of vanity and passion and susceptibility. The real-world effects were striking. In 2012, when Chan was in medical school, she and Zuckerberg discussed a critical shortage of organs for transplant, inspiring Zuckerberg to add a small, powerful nudge on Facebook: if people indicated that they were organ donors, it triggered a notification to friends, and, in turn, a cascade of social pressure. Researchers later found that, on the first day the feature appeared, it increased official organ-donor enrollment more than twentyfold nationwide.
Sean Parker later described the company’s expertise as ‘exploiting a vulnerability in human psychology.’ The goal: ‘How do we consume as much of your time and conscious attention as possible?’
Facebook engineers discovered that people find it nearly impossible not to log in after receiving an e-mail saying that someone has uploaded a picture of them. Facebook also discovered its power to affect people’s political behavior. Researchers found that, during the 2010 midterm elections, Facebook was able to prod users to vote simply by feeding them pictures of friends who had already voted, and by giving them the option to click on an ‘I Voted’ button. The technique boosted turnout by three hundred and forty thousand people—more than four times the number of votes separating Trump and Clinton in key states in the 2016 race. It became a running joke among employees that Facebook could tilt an election just by choosing where to deploy its ‘I Voted’ button.”
So perhaps, amidst all the hemming and hawing from Zuckerberg, what he really meant to say was something like: “I can’t accept that Russian interference could have influenced voters to such a degree. That doesn’t account for people’s ability to think for themselves. Only myself and the team at Facebook are capable of wielding such power over our users.”
Now to be clear, my point here is not that there is a one-to-one correlation between viewing a fake news article and casting a ballot. I still maintain that such an argument easily drifts into the realm of hysteria, and goes too far in robbing social media users of whatever skeletal version of true autonomy they still possess online. Instead, I’m more interested in drawing attention to the inherent limitations and embedded pathologies that accompany the use of social media as a realm for conducting real political discourse. The more we rely on these shallow technologies to do the legwork for us in these serious and deliberative realms, the more we will be subjected to their shortcomings in increasingly painful and disastrous ways. As we have increased our mindless consumption in arenas like Facebook, we have simultaneously eroded our capacity for the rigorous intellectual exercises of discernment and wise judgment. We have truly become like ships tossed around at sea in a storm of data, at the mercy of the raging currents and unable to chart a course towards land.
So perhaps this is what we can say about how much Facebook can be held accountable for the 2016 outcome: they are indeed responsible, but not in the way we would imagine. Donald Trump taking up residence in the White House cannot really be chalked up directly to evil Russian bots posting articles about Hillary Clinton’s involvement in a child trafficking ring. Positing something of this nature is quite honestly absurd. What seems a more plausible and sobering assessment is that our political discourse is transpiring in a manner increasingly devoid of the substance and gravity which it demands. It is playing out less and less in contexts that allow for robust cross-examination in close quarters. Arenas such as Facebook or Twitter—to which our society is turning more and more—are those which possess the least ability to foster the kind of deep consideration that Zuckerberg was claiming still reigns supreme. It’s bad enough that politics have always been conducted with the specter of flawed human passions looming large, but perhaps they’ve never been so utterly enslaved to their every whimsical demand as they are now.
What I think we often forget is how utterly new all of these technologies are. Facebook hasn’t emerged from the deep and misty past of our civilization; it is actually barely older than most middle schoolers running around today. And yet, most of us can’t imagine a day without it—or Instagram or Twitter—in our pockets, let alone picture a world devoid of its influence. And the more we adopt these mediums of communication, the more we take on board the inevitable logic upon which they operate. This has driven us further and further away from the type of rational deliberation to which Zuckerberg was appealing, and, to the extent that we continue to traffic in this shallow world of social media nurtured by none other than himself, certainly shows no signs of trending back in that direction.
Michael Sacasas, writing over at his blog The Frailest Thing, recently reflected on the logic inherent to the world that Facebook has wrought for us. His insight here seems a fitting way to conclude this conversation:
“It is not a matter of stupidity or education, formally understood, or any kind of personal turpitude. Indeed, by most accounts, Zuckerberg is both earnest and, in his own way, thoughtful. Rather it is the case that one’s intelligence and one’s education, even if it were deeply humanistic, and one’s moral outlook, otherwise exemplary and decent, are framed by something more fundamental: a distinctive way of perceiving the world. This way of seeing the world, including the human being, as a field of problems to be solved by the application of tools and techniques, bends all of our faculties to its own ends. The solution is the truth, the solution is the good, the solution the beautiful. Nothing that is given is valued.
The trouble with this way of seeing the world is that it cannot quite imagine the possibility that some problems are not susceptible to merely technical solutions or, much less, that some problems are best abided. It is also plagued by hubris—often of the worst sort, the hubris of the powerful and well-intentioned—and, consequently, it is incapable of perceiving its own limits. As in the Greek tragedies, hubris generates blindness, a blindness born precisely out of one’s distinctive way of seeing. And that’s not the worst of it. That worst of it is that we are all, to some degree, now tempted and prone to see the world in just this way too.”