A Critic's Meta Review: 4/5
Nassim Nicholas Taleb’s book Skin in the Game (2018) argues that balancing risk and reward is essential to creating a fair society, and that a lack of such balance creates a world fraught with unnecessary complexity and risk.
Understanding the balance or imbalance of risk and reward in social systems helps to explain how the world works. When people subject others to situations involving risk—as in a financial trade, for example—they should share the consequences of failure just as they would share the reward of a positive outcome. In other words, people who have power over others should have skin in the game. Too often, this balance between risk and reward does not exist. Ensuring that people share in the consequences of their actions narrows the gap between talk and action, and between the abstract and the concrete. Only in this way can balance and symmetry be restored to the world.
Examples of imbalance exist throughout the modern world. When visiting a doctor for treatment of a serious heart ailment, for instance, patients can be assured that the doctor has a stake in their well-being. If the doctor misdiagnoses the problem or treats it incorrectly, patients can sue for malpractice. This threat of a lawsuit ensures the relationship between doctor and patient is a trusting one. Hospital administrators and pharmaceutical companies, however, do not have a similar stake in patients' well-being. They have a reason to brush aside ailments to reduce their own costs. Because they have no stake beyond extracting profit from the systems they control and influence, hospital administrators and pharmaceutical companies are a major reason health care is so bad in the United States.
Another familiar example involves worker–employer relationships. At any company, employees are more reliable than contractors because they have a stake in the well-being of the company where they work. Companies can try to write stiff penalties into their agreements with outside contractors, ensuring some consequences for them if they fail to perform. But even with these measures, employees who do the same work will always have more skin in the game and perform better for longer. Not only does a full-time employee's entire income depend on the job; employees often stake their identity on it as well. This relationship between work and well-being makes an employee more committed than a contractor to performing the job effectively.
By evaluating different social systems and relationships with an eye toward who shares in the consequences and rewards—that is, who has skin in the game and who does not—people can see the world as a system built on interdependencies. Where people do not share the consequences of their risks, social systems are destined to have problems.
When two parties enter into an arrangement involving some risk, they should share both risk and reward.
Abuse of power occurs when an individual enters a situation or agreement where risk and opportunity are inherent but then takes measures to limit his or her exposure to risk at the expense of another. Relationships that involve risk and reward must be symmetrical or else they are unfair and ultimately bad for society.
Finance is one area of the US economy that many Americans believe to be imbalanced in terms of risk and reward. After billions of dollars in shoddy investments at the major banks nearly brought down the world economy in 2008, some bankers at these institutions received large bonuses, causing public outrage. Disgruntled politicians and voters argued that banks should not be allowed to give out large bonuses when their investments fail, because doing so encourages reckless behavior. Knowing they would be paid handsomely whether or not their risky bets paid off, bankers would only continue to act carelessly with their clients' money.
Curbing bonuses might reduce excessively risky behavior in the finance sector. For a 2017 working paper, Anya Kleymenova and A. İrem Tuna studied banks in the United Kingdom, where the government had introduced bonus caps in the wake of the financial crisis. They found that British banks took on less risk and that the likelihood of bank failure fell after the caps were introduced. Bank shareholders reacted positively to the bonus caps, apparently anticipating that the banks would become more risk averse. However, the researchers found that the legislation had some unintended consequences; for example, the compensation schemes laid out in CEO contracts became more complex following the law's introduction, making it harder for regulators to keep up with how these executives were paid.
People can really only care about those in their immediate vicinity.
In small towns where everyone knows practically everyone else, people’s relationships are real and immediate because everyone has a stake in the lives of everyone else. But people can’t have a stake in the lives of those they never interact with, such as people who live in another, faraway country.
In his book Grooming, Gossip, and the Evolution of Language (1998), anthropologist Robin Dunbar put a number on the limits of meaningful relationships. Humans, he said, could maintain stable relationships with a maximum of 150 people. Dunbar estimated this was the number of people with whom a given individual could have a personal relationship involving mutual understanding and respect. According to Dunbar, the figure of 150 matched social groupings throughout history. From the time of the Thirty Years' War (1618–1648), when the structure of modern armies began to take shape, the company became the standard unit of organization. Under Sweden's King Gustavus Adolphus, the size of a company was set at 106 troops. Despite huge advances in technology and methods of warfare since then, most modern armies around the world have companies that are rarely larger than 150 troops. Apparently, successive generations of military planners have realized that with companies of this size, troops can have meaningful social relationships with each other. Trying to have larger numbers of troops cooperate with one another in the same way is ineffective. When organizing larger groups of soldiers, such as a battalion or an entire army, military leaders rely on formal systems of hierarchy instead of social relationships to disseminate their orders. To inspire people, they have to invoke abstract concepts like God, the Empire, or the Queen. While individual soldiers undertake acts of bravery to protect or save the lives of comrades in their companies, it is harder to convince them to put their lives on the line for distant or abstract objectives.
Society rewards people for avoiding negative consequences, promoting them into positions where they have power over situations in which they have no stake.
Too often, people are promoted for professing expertise in subject areas they know nothing about. This causes problems for everyone, because people shouldn't have influence over situations where they have no understanding of the consequences, let alone a stake in the outcome. In 2012, Thomas E. Ricks, a journalist who covers the defense sector, said the US military had in recent years become bloated with officers who were promoted into powerful positions despite glaring failures and a lack of appropriate experience. While the American public has largely blamed the nation's political leadership for the failures in the Iraq and Afghanistan wars, Ricks believes the weakness of the officer class is equally to blame. Officers are rewarded and promoted for staying on the job, not for taking risks and winning battles. As a result, US military leaders tend to settle for mediocrity, avoiding the kind of confrontational tactics that are typically necessary to win wars.
Regulations created by people who do not have a stake in their outcome are worthless.
Regulations written by people who have no stake in whatever it is they are regulating typically fail to solve any problems. Worse still, they deprive people of their freedom. As regulations pile on top of each other, they become more complicated and more detached from whatever behavior they are nominally meant to influence. As a result, the only people who benefit from regulations are the people who write them and can understand their complexities.
In his book The Rise and Decline of Nations (1982), economist Mancur Olson wrote that groups with niche interests tend to stymie a government’s ability to function over the long haul. When businesses and other interest groups ask for and receive favors, tax breaks, and exemptions, a system of government intended to serve everyone warps into a patchwork of tailored rules that apply differently to different people and industries. Since governments change laws primarily by adding new ones instead of repealing old ones, bad regulations simply pile on top of each other.
Sometimes legislators must be reminded of the practicality or impracticality of what they propose in regulations. In 2013, Congress debated cuts to the Supplemental Nutrition Assistance Program (SNAP), commonly known as "food stamps." Several members, led by Congresswoman Barbara Lee of California, publicly committed to limiting their food budget to $4.50 per day—the average SNAP allocation for a single-person household—for a week. They challenged proponents of the cuts to do the same. Critics argued that the so-called "SNAP challenge" was a political stunt, and that for most recipients, food stamps were not a family's sole source of food money. Lee countered that she knew firsthand that many people relied entirely on food stamps to feed themselves. As a single mother in the 1970s, Lee said, she had relied only on food stamps to feed her children. By taking the SNAP challenge, she argued, members of Congress could demonstrate to their peers how difficult it is to survive on food stamps alone. Nevertheless, SNAP cuts totaling $5 billion went into effect that November, affecting 47 million Americans.
Although legislators tend not to experience the consequences of their actions themselves, voters have occasionally chosen to hold them accountable anyway. In 2010, after the state legislature repeatedly failed to pass a balanced budget on time, California voters approved a proposition to make passing a balanced budget a condition of legislators’ pay. In 2011, then-State Controller John Chiang invoked the new law and docked pay for state legislators after he deemed they had failed to pass a balanced budget on time. The move infuriated lawmakers, but it endeared Chiang to voters who were glad to see legislators suffer for their inaction. In 2018, Chiang, by then California’s treasurer, announced he would run for governor, citing his holding legislators accountable as part of his experience.
Laws are for correcting an imbalance of risk and consequence.
Since antiquity, laws have been created to ensure people share risks and consequences. So-called "eye for an eye" justice still exists in some corners of the world, even though many advocacy groups argue it is an inhumane approach to correcting social problems. In Saudi Arabia, for example, courts have allowed people who have lost an eye at another person's hand to gouge out the offender's eye. Human Rights Watch condemned one such Saudi sentence, saying it amounted to "torture masquerading as justice."
Although the legal systems in most developed countries do not allow people to maim one another when they are injured, they carry on the tradition of reciprocal damage by allowing people who have been wronged to take a sum of money from those who have wronged them, in proportion to the loss or damage caused. In the United States, for instance, people who feel wronged can sue each other under the tort system. Tort law is the body of rules that describes when and how people can sue each other.
In his book Tort Law for Paralegals (2015), lawyer and teacher Neal R. Bevans describes how "eye for an eye" justice eventually became the modern tort system, which allows people to demand financial compensation for their injuries. The tort system grew out of the Industrial Revolution in the United States and England, as heavy machinery became ubiquitous and people were frequently injured by the negligence of machine operators. Railroads were particularly dangerous, leading to many injuries among both workers and passengers. Initially, courts sided with the railroads. But over time, they developed new theories of negligence, under which companies that transported large numbers of people could be held liable for neglecting passengers' safety. Slowly, courts developed a system that awards damages to people for their injuries, leading to the tort system people rely on today to correct wrongdoing with financial compensation.
What people do is more important than what they think or say.
The only advice worth considering is that of people who take risks based on their own advice. People who talk as if they understand a phenomenon but do not risk anything themselves cannot be trusted. Their opinions are worthless.
Some analysts have traced the rampant abuse of power at the investment bank Goldman Sachs to the shifting of risk away from the people managing the company. For most of its history, Goldman Sachs was organized as a partnership. To join, partners had to contribute a substantial portion of their personal wealth and could not withdraw their investment until they retired. Once they joined, partners were personally liable for the bank's failed investments, meaning they had to spend their own money, and even sell their cars or houses, to recoup losses if necessary.
In 1999, however, the partnership was dissolved, and Goldman Sachs became a public company. Instead of concentrating risk in the hands of the people who made the investments, the change spread risk among shareholders. Knowing they could make risky bets without suffering the consequences themselves, traders behaved more recklessly. After the financial crisis of 2008, when the failed bets of Goldman Sachs put the entire world economy in jeopardy, the federal government purchased a large share of its bad investments, effectively spreading risk to an even larger group of people: American taxpayers and holders of American debt.
In a 2012 opinion piece published in The New York Times, Greg Smith, a departing Goldman Sachs executive, said that Goldman’s bankers no longer cared about the interests or goals of their clients. Instead of carefully considering clients’ needs, and how to meet them, the company’s bankers were only interested in making money for themselves, even if it meant betraying clients and acting recklessly with their money.
Values are not universal.
Since humans can only really understand something when they have a stake in it, rules and values pertain to an individual’s sphere of experience. Nevertheless, people often rely on high-minded ideals about how others should behave and try to apply those rules everywhere. This is foolish. Abstract ideals and practical rules are not the same thing. When people try to make rules based on ideals alone, they cause problems and create useless rules.
In an essay titled "The Case Against Human Rights," University of Chicago law professor Eric Posner argues that efforts to codify an internationally applicable set of human rights were always doomed to fail. Following the Second World War, members of the newly formed United Nations began writing a list of human rights. Negotiators split into two camps. Nations siding with the Soviet Union wanted to define human rights as "economic rights," such as health care and education, while nations aligning with the United States wanted to define them as "political rights," such as the right to self-expression and the right to vote. As a result, the UN's approach to human rights proceeded in two, at times contradictory, directions.
When political leaders have tried to let human rights inform their policies, the results have often been hypocritical. In the late 1970s, following the indiscriminate killing of civilians during the Vietnam War, US President Jimmy Carter differentiated himself from his predecessors by invoking the term “human rights” in foreign policy proclamations. But while he claimed to support universal standards, the political realities of the day meant that Carter’s emphasis on rights was always just talk. Although he lambasted the Soviet Union for its poor human rights record, he remained an unwavering supporter of highly repressive regimes in Iran and Saudi Arabia. Following Carter, US presidents continued to use the term “human rights” when laying out foreign policy that included support of repressive governments. When presidents talk about “human rights,” Posner says, they are either speaking in the abstract or providing a morally palatable cover for their strategic ambitions.
Living an honorable life is the truest form of success.
People who willfully share risks with those they interact with are more honorable than people who do not. People who do not share risk may enjoy greater material success at times, but honor is more important.
Writing in Forbes, angel investor Amy Rees Anderson said that, if she could impart only one piece of advice to other people, it would be the understanding that success can be temporary, but “integrity is forever.” Having integrity, she said, means always doing the right thing, no matter the consequences, even when no one is watching. The benefit of having a reputation for integrity is having a large network of relationships based on mutual trust. But while developing a reputation for integrity can take years, it can be lost in a single careless moment.