I admit I was quite amused, and even surprised, when I read those results; I had to turn to Kohlberg’s stages of moral development to make sense of them. I’d claim that Kohlberg is sometimes an optimist, sometimes a pessimist: democracy is a form of government tied to Stage 5 of moral development, yet most individuals are at Stages 2 and 3. Put another way, while the founders of democracy understood that a better society is one built on the notion of a social contract (people behave to benefit the group as a whole by adhering to a standard set of rules they agree with), we assume that most people only follow those same rules because they fear repercussions. The interesting result observed by the Freakonomics authors showed otherwise. Even without enforcement, most people will observe, most of the time, a code of conduct that benefits the group as a whole. By “most” I mean a relatively high percentage, say 85 to 96% of all people. As gregarious animals, we shouldn’t be surprised: although we are highly individualistic, very few of us can really survive outside all group structures, and we recognise the need to conform to the group’s rules in order to maximise our benefits. Only a very small percentage is able to “tweak the system” to gain direct benefits at the expense of the rest of the group, and such people exist throughout most of the so-called civilised world (very understandably, when we suddenly realise that this small group of egotists is made up of the very people most often elected as representatives, we are disgusted with the corruption of a democratic nation).
If we bring the example to Second Life, for instance, we’ll see that the vast majority of residents could be completely anonymous if they wished. They could easily get an email address on Google Mail, register an alt, and grief the whole grid. There is no deterrent, no enforcement. So why aren’t we all griefers, if we don’t fear retribution or consequences? Surprisingly, the number of griefers is actually quite low; and similarly, even though content can be freely copied, the number of content pirates is quite low as well, almost insignificant compared to the overall resident population (just note that while the number of such people, in absolute or relative terms, may be very small, the impact of their actions can be tremendous!). This is, and will always be, a very startling result, since our “common sense” makes us believe otherwise. In effect, we usually think: “if I have the freedom to grief, and fear no repercussions, I could be griefing the whole of SL; and everybody else is thinking the same as I am, so we could all be griefing without fear”. Ironically, we never ask ourselves the question: “well, why don’t I become a griefer, then?” If we do ask, the answer is that it simply doesn’t feel right, even in the absence of deterrents. It’s not purely altruistic behaviour, i.e. deliberately seeking group approval by naturally following the rules “because they are right, and I’m a righteous person”. In fact, most people don’t think that way. They follow the principle of the “intelligent egoist”: pissing everybody else off is a bad strategy, since it means being shunned by others. By contrast, benefiting others will make them more likely to benefit you in return, because they’re happy with your behaviour. Ironically, this “intelligent egoist” (a hypocrite, if you wish) is more common than we imagine, and it’s the foundation of most communities.
We refrain from “behaving badly” not because we feel that we’re very good people with hearts of gold, but simply because that strategy leads to having the whole group very happy with us, which in turn allows us to extract some benefits from the group. It’s an egotistical approach to adherence to social norms: “I’ll be politically correct so as to get a good name inside this group, and that way I’ll be better able to manipulate others into giving me what I wish of them”. The strategy works well. It has created whole stable civilisations, where criminality, although it exists, remains small (in relative numbers).
But what is this behaviour but the mere establishment of reputation? In the digital world, where “actions” are not directly perceived, behaving as the group expects you to behave will allow you to extract the most benefits, as the group spreads the meme “this person is ok, this person conforms to the rules”. Nothing shows this better than the SL-friendly social microblogging site Plurk. Plurk’s owners had a dilemma. They had seen how Facebook and Twitter operate, and watched the rise of social networking spamming. The principle is simple: most people are used to reciprocating friendship offers. This means that all a viral marketeer has to do is add friends at random. Most will feel socially compelled to return the favour. That way, you get a list of people who have “signed up” for your private spamming list. They can’t even complain that they’re getting spammed: after all, they did add the “friend” in the first place.
Plurk added an intriguing concept, “karma”, which follows some rules. If you add friends (expanding your social network), you get more positive “karma”. If you send them messages (thus exchanging information and making Plurk more content-rich), you get more “karma”. If you stop sending messages for a while, your “karma” drops (thus incentivising you to stay signed in to Plurk and keep adding messages to your network). Clearly this would suggest that a viral marketeer adding everybody in sight and spamming them very aggressively would make a huge impact and skyrocket their “karma” (which is a measure of “trust” based on perceived reputation). But that’s not what happens. Why? Because when someone drops you as a friend, your “karma” falls, and very dramatically so.
Viral marketeers thus have to be very careful. If they send too much spam to their “friends”, those friends will cancel the friendship, making the marketeer’s “karma” drop so much that it’s highly unlikely they’ll ever accept an invite from that marketeer again. But a marketeer who sends no spam at all has wasted their whole purpose in using Plurk. This artificial measure of someone’s reputation actually works quite well. Plurk, to the best of my knowledge, doesn’t have anti-spamming filters, yet it has a surprisingly low spam ratio. You can try to game the system, artificially inflating your “karma” by creating thousands of different accounts and sending them fake messages, so as to look “reputable” when hunting for more friends. But a single spam message that makes one of those friends drop you from their list will affect your “karma” so negatively that, from then onwards, people might simply never connect to that marketeer’s profile again.
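The asymmetry described above, where small rewards for adding friends and posting are dwarfed by a heavy penalty for being unfriended, can be sketched as a toy model. To be clear, everything here is invented for illustration: the class name, the method names, and especially the weights are my own assumptions, not Plurk’s actual (and undisclosed) formula.

```python
# Toy model of Plurk-style "karma" dynamics. All weights are hypothetical;
# the point is only the asymmetry between small gains and a large penalty.

class KarmaAccount:
    def __init__(self):
        self.karma = 0.0

    def add_friend(self):
        self.karma += 1.0          # growing your network raises karma a little

    def post_message(self):
        self.karma += 0.5          # activity raises karma a little

    def idle_decay(self, days):
        self.karma -= 0.2 * days   # inactivity slowly erodes karma

    def dropped_by_friend(self):
        self.karma -= 10.0         # being unfriended is heavily penalised


# A spammer gains karma quickly by mass-adding friends...
spammer = KarmaAccount()
for _ in range(20):
    spammer.add_friend()
print(spammer.karma)               # 20.0

# ...but just three annoyed "friends" unfriending wipes out all of it.
for _ in range(3):
    spammer.dropped_by_friend()
print(spammer.karma)               # -10.0
```

With weights shaped like these, mass-adding is only profitable while nobody objects; a small fraction of recipients reacting to spam is enough to push the account’s score below zero, which matches the behaviour the paragraph above describes.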
Plurk might not be the best choice out there, but their system has shown how reputation in the digital world can actually be self-filtering. Plurk doesn’t need to rely on any other mechanism to weed out viral marketeers, spammers, scammers, or members with unacceptable behaviour. All it needs to do is allow people to blackball someone.
Alas, of course the system is not perfect: ostracism can be practised quite effectively. If someone “influential” on the network (meaning they have high “karma” and tons of friends) suddenly gets angry with someone else, who might be completely innocent, they can simply ask all their friends to drop the connection with that person, blackballing them by lowering their “karma” so much that it becomes highly unlikely they’ll ever sign back in again. In practice, however, this is not that easy to do: it requires a lot of charisma to persuade a large enough number of friends to ostracise an enemy. Not everybody has that degree of charisma; in fact, I’m prepared to admit that the number who do is tiny, possibly much smaller than the number of individuals in a given group who are fundamentally dishonest.
Other digital environments use different mechanisms and strategies to record people’s reputation. eBay and Amazon.com allow users to rate merchants or leave comments about them. PayPal users get an internal rating depending on the successful transactions they complete. Although I think these systems are quite ineffective (and are gamed as well, of course), there is still the ability to warn others outside the system, i.e. by writing blog posts or messages on public forums that have nothing to do with the company providing the service. Thanks to Google Search (or Bing!), searching for someone’s reputation online becomes easier and easier. Sometimes it becomes too easy for an ordinary, private user to track people down (thanks to Soft Linden for the link!).
With over 2 billion human beings on the Internet, this is now a global reputation marketplace 🙂 Of course I’m not claiming that the system is “perfect”; no system ever is. However, it has become quite simple to locate some information on a specific individual (or company) and read what people know about them. As most people are quite loose with what they reveal online about themselves (at the very least, they publish their list of friends and a few groups of interest), it’s easy to validate “digital reputation” very quickly and effortlessly. A lot of that information might be rumour, some of it might be false, and some might be exaggerated; often, the most vocal voices are not representative of the majority; but as a rule of thumb, digital reputation matters. How much it actually matters is very apparent in two major services: eBay and Second Life.