The Google Question of Evil

Google’s slogan, famously, is “Don’t be evil.” Cute, right? Except when Google starts, for example, integrating your name and picture into advertisements without permission; then things get uncomfortable. After all, that certainly feels creepy, dishonest, and socially harmful—even, perhaps, a bit evil.

Over at the Atlantic last week, Ian Bogost asked the important, obvious questions: what the hell does Google mean by evil? And how will Googlers know when Google has overstepped Google’s Google-imposed boundaries?

Silicon Valley is not known for excessive concern about moral dilemmas (Bogost, a writer and video game designer, lives in the South, where moral angst is de rigueur). Still, Google has tried to answer these Big Questions. In a passage that will be illuminating for the religiously inclined, Bogost discusses how Google chairman Eric Schmidt grapples with the problem of evil:

In an NPR interview earlier this year…Schmidt admits that he thought it was “the stupidest rule ever” upon his arrival at the company, “because there’s no book about evil except maybe, you know, the Bible or something.” The contrast between the holy scripture and the engineer’s fiat is almost allegorical: in place of a broadly construed set of sociocultural values, Google relies instead on the edict of the engineer….Even back in the pre-IPO salad days of 2003, Schmidt explained “Don’t be evil” via its founders’ whim: “Evil is what Sergey says is evil.”

I’m no philosopher, but I’m pretty sure there are books about evil other than the Bible. And I’m not a right-wing religious pundit, but I’m also pretty sure that Google has finally confirmed the religious right’s worst fears about secularization: that, in the absence of some guiding moral authority, we’ll all fall back on a self-imposed set of moral regulations that will gradually draw us into a morass of oppression and debauchery. Google may not be the Whore of Babylon, but Schmidt isn’t exactly inspiring confidence in its capacities for moral self-policing.

Still, it’s worth asking: what could the Bible say about topics such as user privacy? Sure, the Book of Job might help us understand the relationship between incomprehensibly huge authorities and the individuals who must engage with them. And Leviticus might remind us that, sometimes, you just have to ban things. But, with all due respect to Ecclesiastes, digital technology is something new under the sun. Laws about stealing, and lying, and loving one’s neighbor aren’t so easy to apply to issues of global connectedness and digital privacy.

Back in June, writing about the NSA surveillance scandal, Daniel Schultz pointed out in Christian Century that:

there has been minimal reaction by religious groups. A quick survey of eight denominations found that only one—the Presbyterian Church (U.S.A.)—had a statement on government surveillance, dating from 2006. [A subsequent correction found one more statement, from the United Methodist Church].

Schultz wasn’t especially surprised. These are new, slippery issues. But he was concerned, understandably, by the prospect of religious groups being unequipped to respond, in any substantive way, to an issue of obvious moral import.

I’m not saying that Google should add some priests to its board and start requiring employees to read the Sermon on the Mount. Nor am I saying that you need God to be moral. But Bogost is right: the moral “edict of the [Google] engineer” may not be enough regulation for what is, arguably, the world’s most influential corporation. Google’s slogan does a service, in that it reminds us that digital technology is not neutral—that it has the capacity to become morally charged. To this, let’s hope that Google doesn’t add a corollary lesson: that moral self-policing, without something or someone to keep you accountable, will always become corrupt.