#algorithmic-evil

What do Google's new privacy policy and Facebook's ever-changing privacy policy have in common?
 
They are both harbingers of evil.
 
Not evil in the (dualistic) sense of being the opposite of good; but evil because they represent an arguably Faustian exchange of something essential for convenience.
 
Let me explain.
 
Ayn Rand defines evil in The Virtue of Selfishness as follows:
 
"Since reason is man’s basic means of survival, that which is proper to the life of a rational being is the good; that which negates, opposes or destroys it is the evil." [1]
 
So, according to Ayn Rand, evil is anything that impairs our ability to apply reason.
 
Rand further elaborates this perspective in Atlas Shrugged:
 
"Thinking is man's only basic virtue, from which all the others proceed. And his basic vice, the source of all his evils, is that nameless act which all of you practice, but struggle never to admit: the act of blanking out, the willful suspension of one's consciousness, the refusal to think—not blindness, but the refusal to see; not ignorance, but the refusal to know. It is the act of unfocusing your mind and inducing an inner fog to escape the responsibility of judgment—on the unstated premise that a thing will not exist if only you refuse to identify it, that A will not be A so long as you do not pronounce the verdict 'It is.' Non-thinking is an act of annihilation, a wish to negate existence, an attempt to wipe out reality. But existence exists; reality is not to be wiped out, it will merely wipe out the wiper. By refusing to say 'It is,' you are refusing to say 'I am.' By suspending your judgment, you are negating your person. When a man declares: 'Who am I to know?'—he is declaring: 'Who am I to live?'
"This, in every hour and every issue, is your basic moral choice: thinking or non-thinking, existence or non-existence, A or non-A, entity or zero."[2]
 
Thus, according to Rand, evil is anything that impairs our ability to reason, think for ourselves and apply our own judgment.
 
I like this definition.
 
It follows, then, that Google and Facebook are harbingers of evil because they threaten to seduce us into accepting algorithmic reasoning, thinking and judgment as substitutes for our own.
 
While we may not be experiencing the depth of denial articulated by John Galt, in my opinion we are collectively primed to reach it, and we are not that far away.
 
These two companies are as pervasive as the air we breathe. With our complicity they have insinuated themselves as mediators of our most cherished activities: how we make sense of the world around us, and how we interact with the people in our lives.
 
They were not always evil. At least not when they were simply connectors responding to *our* desires. Back then they helped us to reason, think, judge and interact with information and individuals by simply connecting us to information that we were *seeking* and to individuals that we *selected*. That was good.
 
Then somewhere along the way the roles gradually changed until the server became the master. We are now regularly consuming what is being fed to us, not what we seek. We are being led, directed and shaped by algorithms - we are no longer being served by them.
 
It was subtle at first. We were advised to create simple profiles so that Google and Facebook could serve us with better information and people search results. Then the profiles became more detailed, personal and intimate -- allowing Google to offer us more "relevant" search results, enabling Facebook to connect us with people we forgot, lost track of or perhaps didn't realize that we had shared interests or connections with, while also, not surprisingly, permitting both to serve us "better" advertisements.
 
In hindsight, we now know that we were gradually drawn, perhaps coerced, into a process that traded these conveniences for a new and undervalued (by us) currency: our preferences, our relationships, and our secrets. We soon recognized the value of this new, very personal, *social currency* and ultimately required Google and Facebook to publish privacy policies in order to keep us comfortable with sharing such intimate information about ourselves.
 
Eventually product innovation was reduced to (a) increasing user convenience through more "sophisticated" algorithmically determined recommendations (server vs. user directed requests and results) and (b) privacy policy innovation that enabled both companies to achieve greater monetization of their products (you, me and the hundreds of millions of other users like ourselves).
 
Now, we find ourselves locked into a cycle of trading (relinquishing?) more and more of our social currency (ourselves) in exchange for incremental increases in convenience. However, I am convinced that the more we accept and rely on these purported conveniences, the less we are required to reason, think and apply our own judgment. By Ayn Rand's definition, that is evil.
 
So, just how real and widespread is this? Read the following and make up your own mind:
 
http://www.guardian.co.uk/news/datablog/2012/mar/09/big-data-theory
http://www.theatlantic.com/technology/archive/2012/03/i-didnt-tell-facebook-im-engaged-so-why-is-it-asking-about-my-fianc/254479/
http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/
 
What's even more concerning is that this is just the tip of the proverbial iceberg. We can see this much only because we are permitted to grant permission for certain things, preserving the appearance that we are still in control. Surely there are occasions where we are not given this privilege. Currently, we have no way of knowing whether or not this is occurring; but I am willing to bet that it is -- because Google and Facebook possess the equivalent of our digital essence via our profiles. And this is extremely valuable to those who see advantage in predicting or influencing our behavior. So what if company X wants to buy your profile so that it can interrogate it directly and use its own algorithms to "discover" how best to serve you? Do you believe that Google and Facebook will decline the opportunity? I don't.
 
Google and Facebook have the motivation, and will likely engineer the means, to sell your profile and mine to those who value them. This is occurring indirectly today through the ads we see on search listings, Gmail and Facebook pages. What's to stop them from selling the profile itself? Do you think they will ask your permission to sell your profile to Company X? Or Company Y? Or Company Z? I don't.
 
The problem is that these profiles are wholly inadequate representations of you and me. Imagine a world where your online and offline experiences are the result of a series of calculations made from your Google or Facebook profiles. Is that comforting? Imagine further that this information is shared and used via direct computer-to-computer transactions -- without human oversight or the application of human judgment in interpreting algorithmic conjecture and conclusions. Imagine further that these algorithmic conjectures and conclusions get served up as (or resembling) fact to individuals or other computers that accept them as such. Are you still comfortable?
 
Some might argue that this is nothing new. We are simply applying new technology to old practices. That we should become more knowledgeable about how this new system works and adjust our online behavior and interactions accordingly. Caveat emptor, they say. But I believe this *is* different. Computers are pervasive, and computer mediation of our online and offline activities is so widespread that it's become almost invisible -- and we are approaching a point where it will be practically invisible to most people. So we need to erect conscious safeguards to ensure that these systems do not become intrinsically and irredeemably evil.
 
To do that we need an algorithmic moral code.
 
#how-not-to-be-evil
 
In my opinion, Google and Facebook can avoid being evil by just being a good search engine and a good social network and nothing more. That's because those are the essential utilities for the users of Google's and Facebook's applications. Everything else -- including the cool features that use socially-derived inferences to amaze or creep you out -- is there to serve Google and Facebook's economic interests. These features allow Google and Facebook to transform our social currency into hard currency.
 
I don't blame them for doing this. They are businesses and they exist to make money -- either directly from their users or from those who are willing to pay for their users. Google and Facebook have built the latter system, and we have either knowingly or unknowingly opted into it. Those of us who are comfortable with this need do nothing. The rest of us need to articulate or outline a reformed or alternative system. Then demand it.
 
Here are my thoughts on some basic dos and don'ts of such a system:

  1. Do answer the questions I ask; but don't offer information or answers to questions I haven't asked.
  2. Don't substitute your judgment for mine without giving me the choice on a case-by-case basis -- not as some global profile setting.
  3. Don't try to figure me out by mining my emails, search history, video view history, contacts/friends, phone calls, "likes", pictures, relationships, or whatever else you're tracking. 
  4. Don't sell my profile information or share it with anyone or anything I haven't explicitly given you permission to sell or share it with. You don't own my profile or any other facsimile of me. It exists solely to help you serve me -- not to help you serve yourself or to help your partners to serve me. And you only have a right to use it while I have a relationship with you.
  5. Do offer me ads if that's how you need to make money, but do not violate #1-#4 in doing so.
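For concreteness, here is a minimal sketch of what these precepts might look like if encoded as explicit permission checks. Every name in it (`Profile`, `may_share`, and so on) is hypothetical, invented purely for illustration -- no real Google or Facebook API works this way:

```python
# A toy model of the five precepts above as explicit permission checks.
# All names here are hypothetical, for illustration only.

class Profile:
    def __init__(self):
        # Precept 4: only parties the user has explicitly named may receive data.
        self.explicit_share_grants = set()
        # Precept 4: the profile is usable only while the relationship lasts.
        self.active_relationship = True

def may_answer(user_asked: bool) -> bool:
    # Precept 1: answer only the questions the user actually asked.
    return user_asked

def may_substitute_judgment(per_request_consent: bool) -> bool:
    # Precept 2: consent must be given case by case, never as a global setting.
    return per_request_consent

def may_mine(data_source: str) -> bool:
    # Precept 3: no mining of emails, search history, likes, etc. Ever.
    return False

def may_share(profile: Profile, recipient: str) -> bool:
    # Precept 4: share only with explicitly named parties, and only
    # while the relationship exists.
    return profile.active_relationship and recipient in profile.explicit_share_grants

def may_serve_ad(profile: Profile, targeting_mines_data: bool) -> bool:
    # Precept 5: ads are allowed, but not if serving them violates precepts 1-4.
    return profile.active_relationship and not targeting_mines_data
```

The point of the sketch is the default: every check fails unless the user has explicitly opened the door for that specific request -- the opposite of a global "privacy settings" page.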

 
These precepts will probably be rejected by Google, Facebook and their ilk because they effectively roll back the "innovations" these companies have blessed us with over the past decade; but that is exactly the point: these companies became evil when they unilaterally redefined their roles to be our advisors instead of just good connectors. This rollback in ambition makes their work less sexy and interesting, but more importantly it also rolls back their overreach and encroachment on our cognitive sovereignty. Finally, these precepts may or may not be adequate to solve the perceived problem; but perhaps they are fodder for a conversation that might contribute to a solution.
 
#end-game
 
Where is this likely to lead? I don't know, but I'm uneasy with an algorithmic moral arc that bends towards more of this form of evil. I am also uncomfortable with the hubris of those who believe that we can or should trust algorithms to reason, think or make judgments about us and for us -- especially those with the genius and resources to pursue such outcomes.
 
There is not much that I can do to stop this; but I won't sit idle waiting for things to play out. To paraphrase Ayn Rand: I refuse to blank out, to willfully suspend my consciousness, to refuse to see, to unfocus my mind and withhold judgment. On the contrary, I will pronounce the verdict: 'It is evil.'
 
And I refuse to be complicit with such a system. So, I am personally opting out -- starting with Google and Facebook.
 
What about you?
----------------------
[1] http://en.wikipedia.org/wiki/Evil
[2] John Galt's speech, Atlas Shrugged
