I have been a fan of Nassim Taleb ever since I read his book Fooled by Randomness. In the article below, Mr. Taleb discusses the incentives and disincentives that business and government leaders face in performing their jobs. The long and the short of it is that there is commonly no downside risk: if you fail at your job, you don't lose it or take a cut in pay. Therefore, your only incentive is to worry about the upside, and that skews behavior.
The article is in italics and the bold emphasis is mine. From Project Syndicate.
If you are interested in reading more thought-provoking articles on economics and other social issues, I strongly recommend Project Syndicate.
If you want to read some of Nassim Taleb's books try this Amazon site.
New Year's Resolution - Read More Stuff by Nassim Taleb.
Those who have the upside are not necessarily those who incur the downside. For example, bankers and corporate managers get bonuses for “performance,” but not reverse bonuses for negative performance, and they have an incentive to bury risks in the tails of the distribution – in other words, to delay blowups.
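To make the "bury risks in the tails" point concrete, here is a minimal sketch (my illustration, not from the article) of a hypothetical strategy that earns a small gain in most years but suffers a rare, large blowup. The probabilities, payoffs, and bonus rate are made-up parameters; the point is only that an agent paid a bonus on good years and charged nothing in the blowup year comes out ahead even when the strategy itself loses money.

```python
# Illustrative sketch: upside-only bonuses on a strategy with hidden tail risk.
# All numbers are assumptions chosen for illustration.
import random

random.seed(42)

def simulate(years=10_000, p_blowup=0.02, gain=1.0, loss=-60.0, bonus_rate=0.2):
    agent_take, owner_pnl = 0.0, 0.0
    for _ in range(years):
        outcome = loss if random.random() < p_blowup else gain
        owner_pnl += outcome                           # the owner bears the blowups
        agent_take += bonus_rate * max(outcome, 0.0)   # the agent is paid on upside only
    return agent_take, owner_pnl

agent, owner = simulate()
print(f"agent's cumulative bonuses: {agent:,.0f}")
print(f"owner's cumulative P&L:     {owner:,.0f}")
```

With these assumed numbers the expected yearly outcome is negative (0.98 × 1 − 0.02 × 60), yet the agent's expected take is positive every year, which is exactly the asymmetry Taleb describes.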
The ancients were fully aware of this incentive to hide risks, and implemented very simple but potent heuristics. About 3,800 years ago, the Code of Hammurabi specified that if a house collapses and causes the death of its owner, the house’s builder shall be put to death.
This simple tenet is at the origin of “an eye for an eye” and the Golden Rule in ethics (“Do unto others as you would have them do unto you”). But, beyond ethics, this was simply the best risk-management rule ever.
The ancients understood that the builder always knows more about the risks than the client, and can hide sources of fragility and improve his profitability by cutting corners. The foundation is the best place to hide risk. The builder can also fool the inspector; the person hiding risk has a large informational advantage over the one who has to find it.
Why do I believe that a certain class of people has an incentive to “look good” rather than “do good”? The reason is simply the absence of personal risk. And the problems and remedies are as follows:
First, consider policymakers and politicians. In a decentralized system – say, municipalities – these people are checked by a feeling of shame upon harming others with their mistakes. In a large centralized system, by contrast, the source of errors is not so visible, and a spreadsheet does not make one feel shame. This penalty, shame, in addition to other arguments, is a case for decentralization.
Second, we misunderstand corporate managers’ incentive structure. Contrary to public perception, corporate managers are not entrepreneurs. They are not what one could call agents of capitalism. Since 2000, in the United States, the stock market has lost – depending on how one measures it – up to $2 trillion for investors (compared to returns had they left their funds in cash or treasury bills).
So, one would be inclined to think that since managers’ pay is based on performance incentives, they would be incurring losses. Not at all: there is an asymmetry. Money-losing managers do not have negative compensation. There is a built-in optionality in the compensation of corporate managers that can be removed only by forcing them to eat some of the losses. Because of the embedded option, while shareholders have lost, managers have earned more than a half-trillion dollars for themselves.
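The "embedded option" Taleb mentions can be read as a call-option-like payoff: the manager's bonus is floored at zero. A small sketch, assuming a purely hypothetical 10% bonus rate, contrasts that payoff with a clawback scheme in which the manager is forced to eat some of the losses.

```python
# Illustrative comparison (my sketch, not Taleb's): option-like bonus vs. clawback.
def bonus_without_clawback(performance, rate=0.1):
    """Manager keeps a share of gains and pays nothing on losses (call-option payoff)."""
    return rate * max(performance, 0.0)

def bonus_with_clawback(performance, rate=0.1):
    """Manager shares symmetrically in gains and losses ('skin in the game')."""
    return rate * performance

for perf in (+100, 0, -100):
    print(perf, bonus_without_clawback(perf), bonus_with_clawback(perf))
```

The asymmetric scheme pays 10 on a gain of 100 and 0 on a loss of 100; the clawback scheme pays 10 and −10 respectively, removing the optionality.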
Third, there is a problem with academic economists, quantitative modelers, and policy wonks. The reason why economic models do not fit reality is that economists have no disincentive, and are never penalized for their errors. So long as they please the editors of academic journals, their work is considered fine.
As a result, we use models such as portfolio theory and similar methods without the remotest empirical reason. The solution is to prevent economists from teaching practitioners. Again, this highlights the case for decentralization: a system in which policy is decided at a local level by smaller units – and thus is not in need of economists.
Fourth, predictions in socioeconomic domains do not work, but predictors are rarely harmed by their forecasts. Yet we know that people take more risks after they see a numerical prediction. The solution is to ask – and only take into account – what the predictor has done, or will do in the future.
I tell people what I have in my portfolio, not what I predict; that way, I will be the first to be harmed. It is not ethical to drag people into these exposures without incurring the risk of losses. In my book Antifragile, I tell people what I do, not what they should do, to the great irritation of the literary critics. I do so not for autobiographical reasons, but only because the other approach would not be ethical.
Finally, there are warmongers. To deal with them, the onetime consumer advocate and former US presidential candidate Ralph Nader has proposed that those who vote in favor of war should place themselves or a descendant into military service.
One can only hope that something will be done in 2013 to implement some skin-in-the-game heuristics. A safe and just society demands nothing less.