Conceptual Progress

It is easy to take values like freedom and democracy for granted, and that speaks volumes about how good we have it (at least in some parts of the modern world). For the overwhelming majority of human history, across almost every society, ideas like individual liberty, human rights, and equality were not even conceived of, let alone practiced.

In the approximately 200,000 years that Homo sapiens have existed, only in the last three thousand or so did such concepts even emerge, and even then they were quaint ideas limited in scope and degree: the ancient republics of Athens and Rome still practiced slavery and disenfranchised women, as did the early republics of the United States and France.

We are fortunate to live in a time when we have higher aspirations and ideals to live up to. People speak of realism versus idealism, but at least better values and principles now exist to be attained, even if only conceptually. It was not that long ago that the very ideas that slavery was morally monstrous, that women were fully human, that children warranted rights, and that people should have a say in their governance simply did not exist in the minds of even the most enlightened intellectuals, let alone the largely impoverished and illiterate masses.

We have come a very long way as a species, even if we have an even longer way to go.

Who, or What, is to Blame for Inequality?

Over at the Washington Post, columnist Matt O’Brien reveals how inequality has less to do with a small class of super-wealthy elites, and more to do with the structure and culture of many big U.S. companies:

The easiest way to think about this is to think about the different types of inequality. There isn’t just inequality between everyone, but also between everyone at a single company. Why does this matter? Well, if CEOs really are gobbling up a bigger and bigger slice of the profit pie, then inequality within society at large should have increased because inequality within companies increased. But that’s not what happened. The research team of Jae Song of the Social Security Administration, Fatih Guvenen of the University of Minnesota, and David Price and Nicholas Bloom of Stanford were able to look at what had previously been private earnings data for every company between 1978 and 2012—the best data we have so far—and found that the pay gap between executives and their own workers had barely changed during this time. What had changed, though, was the pay gap between every worker at the highest-paid firms and everyone else. In other words, inequality exploded because the top 1 percent of companies were making more and paying all their employees more. This was true across the country and across industries.

It is not entirely clear why this is the case, but one hypothesis is that technological innovation has made every industry “winner-take-all,” meaning it is easier than ever for the most ruthless and resourceful companies to dominate a particular market. This explains the rise of global behemoths like Google, Amazon, Apple, and Facebook, all of which lack any true competitors in their respective industries.

The Rise of Megacities

For thousands of years, cities have been at the center of human experience, social organization, and innovation. Even though the vast majority of humanity throughout history has, until very recently, lived in rural areas, it was in cities that rulers governed, goods and services were traded, and ideas were born and disseminated.

Given that precedent, it is no surprise that today’s cities — bigger and more sophisticated than ever — have begun to rival whole nations, including the very ones in which they are located, as centers of culture, economic activity, scientific research, and political influence.

Writing in Quartz, Parag Khanna discusses the emergence and future of “megacities” — metropolises numbering tens of millions of citizens and accounting for anywhere from a third to even half of a nation’s economic output. Spanning every continent, but most especially Asia and Africa, these massive urban conurbations will reshape our species’ development in every sphere, from economy to culture.



As can plainly be seen, the developing world — once largely rural — will lead the way in the formation of megacities, albeit not by design; most megacities have formed organically, driven by heady economic growth and an influx of migrants from rural areas and smaller cities. The process has often been as rapid and haphazard as the political, social, and economic changes sweeping the cities’ nations.

Within many emerging markets, such as Brazil, Turkey, Russia, and Indonesia, the leading commercial hub or financial center accounts for at least one-third of national GDP. In the U.K., London accounts for almost half of Britain’s GDP. And in America, the Boston-New York-Washington corridor and greater Los Angeles together account for about one-third of the country’s GDP.

By 2025, there will be at least 40 such megacities. The population of the greater Mexico City region is larger than that of Australia, as is that of Chongqing, a collection of connected urban enclaves in China spanning an area the size of Austria. Cities that were once hundreds of kilometers apart have now effectively fused into massive urban archipelagos, the largest of which is Japan’s Taiheiyo Belt that encompasses two-thirds of Japan’s population in the Tokyo-Nagoya-Osaka megalopolis.

China’s Pearl River delta, Greater São Paulo, and Mumbai-Pune are also becoming more integrated through infrastructure. At least a dozen such megacity corridors have emerged already. China is in the process of reorganizing itself around two dozen giant megacity clusters of up to 100 million citizens each. And yet by 2030, the second-largest city in the world behind Tokyo is expected not to be in China, but Manila in the Philippines.

For its part, the United States, the world’s third most populous nation and one expected to grow steadily over the next century, is seeing the rise of several megaregions of its own: the Northeast Megalopolis, which runs from Washington, D.C. through New York City to Boston; the Northern California Megaregion, which runs from San Francisco to San Jose; and the Texas Triangle, which includes Dallas-Fort Worth, Houston, Austin, and San Antonio. Though not as large as their counterparts in the developing world, they will be formidable economic and cultural centers in their own right, and are already economically larger than some medium-sized countries.


Khanna goes on to note that the sheer size and influence of these megacities, in conjunction with the rapid pace of globalization, will make them as much a part of the world as of the nations in which they are located.

Great and connected cities, Saskia Sassen argues, belong as much to global networks as to the country of their political geography. Today the world’s top 20 richest cities have forged a super-circuit driven by capital, talent, and services: they are home to more than 75% of the largest companies, which in turn invest in expanding across those cities and adding more to expand the intercity network. Indeed, global cities have forged a league of their own, in many ways as denationalized as Formula One racing teams, drawing talent from around the world and amassing capital to spend on themselves while they compete on the same circuit.

Megacities will also redefine the relationship between the developed and developing worlds, as well as between themselves and the rest of their countries. They will be polities of tremendous influence, to be reckoned with in their own right.

The rise of emerging market megacities as magnets for regional wealth and talent has been the most significant contributor to shifting the world’s focal point of economic activity. McKinsey Global Institute research suggests that from now until 2025, one-third of world growth will come from the key Western capitals and emerging market megacities, one-third from the heavily populous middle-weight cities of emerging markets, and one-third from small cities and rural areas in developing countries.

There are far more functional cities in the world today than there are viable states. Indeed, cities are often the islands of governance and order in far weaker states where they extract whatever rents they can from the surrounding country while also being indifferent to it. This is how Lagos views Nigeria, Karachi views Pakistan, and Mumbai views India: the less interference from the capital, the better.

Needless to say, megacities will pose as many challenges as opportunities: urban planning, social organization, resource management, law and order, and infrastructure will all require considerable investment and re-imagining. Political tensions will no doubt emerge between certain megacities and their smaller peers, as well as with their national governments.

Khanna concludes that these issues, along with the sheer potential and influence of megacities, should change the way we map the world — metropolitan areas should be given as much attention as the 200 or so countries that make up the world. It is an interesting argument, and one that I think bears some consideration. I look forward to exploring the topic further in Khanna’s new book Connectography.

What are your thoughts?


How Cicero’s Political Campaign is Still Relevant Today

What does it say about the nature of human political life that analysis and advice dating from the first century B.C.E. are still applicable today? Stripped of its cultural and historical context, the Commentariolum Petitionis, or “Little Handbook on Electioneering”, which was ostensibly written for the great Roman orator and statesman Cicero by his younger brother, Quintus, can just as well describe contemporary American politics.

For example, it starts by outlining the importance of connections and patronage networks — especially among the wealthy and elites of society — for political advancement.

The World Has Never Been More Peaceful

Over at Slate, Steven Pinker and Andrew Mack, two leading chroniclers of humanity’s moral progress, make their provocative case that the world is far safer and less violent than ever before.

First, they explain why the vast majority of people think the world is in a historically worse state than it actually is. A lot of it comes down to human psychology.

News is about things that happen, not things that don’t happen. We never see a reporter saying to the camera, “Here we are, live from a country where a war has not broken out”—or a city that has not been bombed, or a school that has not been shot up. As long as violence has not vanished from the world, there will always be enough incidents to fill the evening news. And since the human mind estimates probability by the ease with which it can recall examples, newsreaders will always perceive that they live in dangerous times. All the more so when billions of smartphones turn a fifth of the world’s population into crime reporters and war correspondents.

We also have to avoid being fooled by randomness. Cohen laments the “annexations, beheadings, [and] pestilence” of the past year, but surely this collection of calamities is a mere coincidence. Entropy, pathogens, and human folly are a backdrop to life, and it is statistically certain that the lurking disasters will not space themselves evenly in time but will frequently overlap. To read significance into these clusters is to succumb to primitive thinking, a world of evil eyes and cosmic conspiracies.

Finally, we need to be mindful of orders of magnitude. Some categories of violence, like rampage shootings and terrorist attacks, are riveting dramas but (outside war zones) kill relatively small numbers of people. Every day ordinary homicides claim one and a half times as many Americans as the number who died in the Sandy Hook massacre. And as the political scientist John Mueller points out, in most years bee stings, deer collisions, ignition of nightwear, and other mundane accidents kill more Americans than terrorist attacks.

The only sound way to appraise the state of the world is to count. How many violent acts has the world seen compared with the number of opportunities? And is that number going up or down? As Bill Clinton likes to say, “Follow the trend lines, not the headlines.” We will see that the trend lines are more encouraging than a news junkie would guess.

The rest of the article lays out a comprehensive, case-by-case explanation for why violence has generally declined in every form, from large-scale conflict to homicide to child abuse. It is far more data than I can spare the space to go over here, but I will highlight some key points.

Homicide. Worldwide, about five to 10 times as many people die in police-blotter homicides as die in wars. And in most of the world, the rate of homicide has been sinking. The Great American Crime Decline of the 1990s, which flattened out at the start of the new century, resumed in 2006, and, defying the conventional wisdom that hard times lead to violence, proceeded right through the recession of 2008 and up to the present.

England, Canada, and most other industrialized countries have also seen their homicide rates fall in the past decade. Among the 88 countries with reliable data, 67 have seen a decline in the past 15 years. Though numbers for the entire world exist only for this millennium and include heroic guesstimates for countries that are data deserts, the trend appears to be downward, from 7.1 homicides per 100,000 people in 2003 to 6.2 in 2012.

The global average, to be sure, conceals many regions with horrific rates of killing, particularly in Latin America and sub-Saharan Africa. But even in those hot zones, it’s easy for the headlines to mislead. The gory drug-fueled killings in parts of Mexico, for example, can create an impression that the country has spiraled into Hobbesian lawlessness. But the trend line belies the impression in two ways.

One is that the 21st-century spike has not undone a massive reduction in homicide that Mexico has enjoyed since 1940, comparable to the reductions that Europe and the United States underwent in earlier centuries. The other is that what goes up often comes down. The rate of Mexican homicide has declined in each of the past two years (including an almost 90 percent drop in Juárez from 2010 to 2012), and many other notoriously dangerous regions have experienced significant turnarounds, including Bogotá, Colombia (a fivefold decline in two decades), Medellín, Colombia (down 85 percent in two decades), São Paolo (down 70 percent in a decade), the favelas of Rio de Janeiro (an almost two-thirds reduction in four years), Russia (down 46 percent in six years), and South Africa (a halving from 1995 to 2011). Many criminologists believe that a reduction of global violence by 50 percent in the next three decades is a feasible target for the next round of Millennium Development Goals.

In short, murder is a rarity in a large proportion of societies and is rapidly declining in most of the rest of the world. The few places with relatively high murder rates by today’s already lower standards are generally doing better than they have historically, with the long-term trend continuing downward.

What about violence toward women, who for much of human history, and in most societies, fared poorly in every sphere — politically, economically, and socially? The writers admit that the data are harder to come by, but they point to an encouraging and historically unprecedented global trend.

In 1993 the U.N. General Assembly adopted a Declaration on the Elimination of Violence Against Women, and polling data show widespread support for women’s rights, even in countries with the most benighted practices. Many countries have implemented laws and public awareness campaigns to reduce rape, forced marriage, genital mutilation, honor killings, domestic violence, and wartime atrocities. Though some of these measures are toothless, and the effectiveness of others has yet to be established, there are grounds for optimism over the long term. Global shaming campaigns, even when they start out as purely aspirational, have led in the past to dramatic reductions of practices such as slavery, dueling, whaling, foot binding, piracy, privateering, chemical warfare, apartheid, and atmospheric nuclear testing.

To be sure, women still have a long way to go before they are accorded equal rights, dignity, and sociopolitical standing. But at least the world seems to be moving in that direction, and today’s seemingly idealistic advocacy campaigns may be tomorrow’s momentous paradigm shifts, if historical precedent holds.

Violence Against Children. A similar story can be told about children. The incessant media reports of school shootings, abductions, bullying, cyberbullying, sexting, date rape, and sexual and physical abuse make it seem as if children are living in increasingly perilous times. But the data say otherwise: Kids are undoubtedly safer than they were in the past. In a review of the literature on violence against children in the United States published earlier this year, the sociologist David Finkelhor and his colleagues reported, “Of 50 trends in exposure examined, there were 27 significant declines and no significant increases between 2003 and 2011. Declines were particularly large for assault victimization, bullying, and sexual victimization.”

Similar trends are seen in other industrialized countries, and international declarations have made the reduction of violence against children a global concern.

Nowadays, we take it as a given that children are innocent and vulnerable members of society who must be protected at all costs. But in many societies throughout history, children were regarded as inherently degenerate and treated accordingly: corporal punishment and harsh exploitation were the norm. Today, in the developed world and much of the developing world, children enjoy both greater rights and more social support.

Democratization. In 1975, Daniel Patrick Moynihan lamented that “liberal democracy on the American model increasingly tends to the condition of monarchy in the 19th century: a holdover form of government, one which persists in isolated or peculiar places here and there … but which has simply no relevance to the future.” Moynihan was a social scientist, and his pessimism was backed by the numbers of his day: A growing majority of countries were led by communist, fascist, military, or strongman dictators. But the pessimism turned out to be premature, belied by a wave of democratization that began not long after the ink had dried on his eulogy. The pessimists of today who insist that the future belongs to the authoritarian capitalism of Russia and China show no such numeracy. Data from the Polity IV Project on the degree of democracy and autocracy among the world’s countries show that the democracy craze has decelerated of late but shows no signs of going into reverse.

Democracy has proved to be more robust than its eulogizers realize. A majority of the world’s countries today are democratic, and not just the wealthy monocultures of Europe, North America, and East Asia. Governments that are more democratic than not (scoring 6 or higher on the Polity IV Project’s scale from minus 10 to 10) are entrenched (albeit with nerve-wracking ups and downs) in most of Latin America, in floridly multiethnic India, in Islamic Turkey, Malaysia, and Indonesia, and in 14 countries in sub-Saharan Africa. Even the autocracies of Russia and China, which show few signs of liberalizing anytime soon, are incomparably less repressive than the regimes of Stalin, Brezhnev, and Mao.

To be sure, democracies of all shades and degrees are not without their problems; state violence and repression can and do still persist at all levels and forms, albeit to varying extents. But once again, it is all about the relative and historical picture, and by that token most denizens of the world are immeasurably freer and less oppressed than ever, even if that is still a tenuous gain. Indeed, the very concepts of consent of the governed, human rights, civil liberties, etc. were practically nonexistent in most of human history.

Genocide and Other Mass Killings of Civilians. The recent atrocities against non-Islamic minorities at the hands of ISIS, together with the ongoing killing of civilians in Syria, Iraq, and central Africa, have fed a narrative in which the world has learned nothing from the Holocaust and genocides continue unabated. But even the most horrific events of the present must be put into historical perspective, if only to identify and eliminate the forces that lead to mass killing. Though the meaning of the word genocide is too fuzzy to support objective analysis, all genocides fall into the more inclusive category of “one-sided violence” or “mass killing of noncombatant civilians,” and several historians and social scientists have estimated their trajectory over time. The numbers are imprecise and often contested, but the overall trends are clear and consistent across datasets.

By any standard, the world is nowhere near as genocidal as it was during its peak in the 1940s, when Nazi, Soviet, and Japanese mass murders, together with the targeting of civilians by all sides in World War II, resulted in a civilian death rate in the vicinity of 350 per 100,000 per year. Stalin and Mao kept the global rate between 75 and 150 through the early 1960s, and it has been falling ever since, though punctuated by spikes of dying in Biafra (1966–1970, 200,000  deaths), Sudan (1983–2002, 1 million), Afghanistan (1978–2002, 1 million), Indonesia (1965–1966, 500,000), Angola (1975–2002, 1 million), Rwanda (1994, 500,000), and Bosnia (1992–1995, 200,000). (All of these estimates are from the Center for Systemic Peace.) These numbers must be kept in mind when we read of the current horrors in Iraq (2003–2014, 150,000 deaths) and Syria (2011–2014, 150,000) and interpret them as signs of a dark new era. Nor, tragically, are the beheadings and crucifixions of the Islamic State historically unusual. Many postwar genocides were accompanied by splurges of ghastly torture and mutilation. The main difference is that they were not broadcasted on social media.

The trend lines for genocide and other civilian killings, fortunately, point sharply downward. After a steady rise during the Cold War until 1992, the proportion of states perpetrating or enabling mass killings of civilians has plummeted, though with a small recent bounce we will examine shortly.

Granted, any number of people killed in warfare, especially noncombatants, is a travesty. But as morbid, not to mention methodologically difficult, as historical comparisons of death rates may be, fewer deaths point to smaller and less brutal conflicts, and to less overall suffering than there would otherwise be. Civilians are literally several thousand times less likely to be targeted in today’s wars than they would have been in the mid-20th century.

And thankfully, the wars that usually form the backdrop to such mass killings are increasingly rarer and less deadly than ever:

War. Researchers who track war and peace distinguish “armed conflicts,” which kill as few as 25 soldiers and civilians caught in the line of fire in a year, from “wars,” which kill more than a thousand. They also distinguish “interstate” conflicts, which pit the armed forces of two or more states against each other, from “intrastate” or “civil” conflicts, which pit a state against an insurgency or separatist force, sometimes with the armed intervention of an external state. (Conflicts in which the armed forces of a state are not directly involved, such as the one-sided violence perpetrated by a militia against noncombatants, and intercommunal violence between militias, are counted separately.)

In a historically unprecedented development, the number of interstate wars has plummeted since 1945, and the most destructive kind of war, in which great powers or developed states fight each other, has vanished altogether. (The last one was the Korean War). Today the world rarely sees a major naval battle, or masses of tanks and heavy artillery shelling each other across a battlefield.


Though the recent increase in civil wars and battle deaths is real and worrisome, it must be kept in perspective. It has undone the progress of the last dozen years, but the rates of violence are still well below those of the 1990s, and nowhere near the levels of the 1940s, 1950s, 1960s, 1970s, or 1980s.

The authors conclude that, overall, every kind of violence has declined in most of the world, and that political and economic freedom is steadily, if tenuously, advancing. Again, this is all based on general global trends and comparisons to humanity’s depressingly poor precedent in these areas.

None of this is to say that the multitude of grave problems humanity still faces should not be taken seriously and addressed accordingly. Far too many people continue to suffer and die at the hands of other people in all sorts of ways, often beyond clear-cut violence; consider economic exploitation, for example, or the costs of environmental degradation.

But to deny that humanity has nonetheless made some measurable progress is both empirically unfounded and morally counterproductive. The more we see, acknowledge, and learn from our progress, the more we can keep it going. If we remain mired in fear, cynicism, misanthropy, and despair, it will be much harder to improve our condition and that of our fellow humans.

Let us celebrate how far we have come as a species without being complacent. Let us see our incredible potential for moral progress and continue pushing the boundaries further. For all our flaws and problems, we have come too far to give up now.

What are your thoughts?

The World Goes a Little Less Hungry

For most of us in the developed world, hunger is no worse than a nuisance, easily rectified by the abundance of options offered by restaurants, fast food joints, convenience stores, and supermarkets. So it is mercifully easy to forget the horrific toll that malnutrition and chronic hunger continue to take across vast swathes of humanity.

A person who is chronically hungry would feel more than just hunger pangs. The body produces less energy and develops a daily sense of weakness. “They feel tired, they don’t feel like they can perform their work optimally,” says Rafael Perez-Escamilla, a chronic disease epidemiologist at Yale University. “They feel fatigued and a sense of apathy.” He adds that the hunger can become so severe that a person barely has the ability to get up from bed.

The lack of nutrients is especially detrimental for children under 5, for whom hunger is the leading cause of death. Each year, hunger kills some 3.1 million children under 5, accounting for 45 percent of child mortality within that age group. Those who survive suffer a lack of physical and mental development. Roughly 100 million are underweight, and 1 in 4 children are stunted, meaning their height is below the fifth percentile for their age.

… And To The Brain

Perez-Escamilla warns that the physical consequences are only part of the problem. “The vast majority of people facing chronic hunger cannot concentrate very well,” he says. “You start having a headache and getting into a bad mood, and you can’t concentrate on your work.”

Now, he says, imagine that happening every day. Add the distress of not being able to provide for your family. He recalls a study in which he asked people what hunger meant. “People talked about how hunger is the worst form of violence against human beings,” he says. “It’s the worst thing that can happen to the dignity of a human being.”

Given such grim details, it is all the more gratifying to see that this scourge has been declining at an impressive speed: according to the most recent figures, published last summer, 795 million people were hungry as of 2014 (the most recent year for which there is reliable data). While that is still a terribly high number, it is over 200 million fewer than in 1990, when more than 1 billion people — one out of five — were hungry, compared to one in nine today.

Also keep in mind that the world’s population has grown by another 2 billion, making this achievement even more impressive.

To top it all off, the rate of hunger has nearly halved among developing countries, from 23.3 percent in 1990 to a little less than 13 percent today.
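As a rough sanity check (my own arithmetic, using only the figures cited above; the implied world populations are approximations), the numbers hang together:

```python
# Hunger figures cited above: ~1 billion hungry (one in five) in 1990,
# 795 million hungry (one in nine) as of 2014.
hungry_1990 = 1.0e9
hungry_2014 = 795e6

# World populations implied by the "one in five" / "one in nine" ratios
pop_1990 = hungry_1990 * 5   # ~5.0 billion
pop_2014 = hungry_2014 * 9   # ~7.2 billion

# Implied growth, roughly matching the ~2 billion mentioned above
growth = pop_2014 - pop_1990
print(f"Implied population growth: {growth / 1e9:.1f} billion")  # ~2.2 billion

# The hunger rate in developing countries fell from 23.3% to ~13%
decline = 1 - 13 / 23.3
print(f"Relative decline in hunger rate: {decline:.0%}")  # ~44%, i.e. nearly halved
```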


Countries in green have either halved the proportion of people who are malnourished, or reduced it to less than 5 percent; those in yellow have made slow progress, while red indicates no progress.


As the map shows, much of the progress was led by East Asia, Latin America, and the Caribbean. China halved its malnourished population, while Vietnam and Korea lifted millions out of hunger. The number of underweight children dropped dramatically in Brazil, Chile, Guyana, Nicaragua, Peru, and Uruguay, with only Guatemala seeing its undernourished population grow.

What accounts for such incredible progress? As you might imagine with an issue of this magnitude, quite a lot of strategies have been involved: improvements in infrastructure and communications, which ensure that more quality food makes it to more tables; public and private investments in agriculture, particularly to boost yields and grow more nutrient-dense food; government programs to provide greater food access for the poor; and a decline in abject poverty.

Clearly, a lot of work remains in reducing chronic hunger in this world of plenty. But given the incredible progress thus far, even the challenges posed by climate change might be overcome if we continue to apply solutions across the political, economic, and technological spheres.

Sources: NPR, National Geographic

Universal Basic Income is About to Get Its Biggest Test Yet

After decades of discussion and support in academic circles, as well as a few small-scale but promising experiments, universal basic income is about to get its biggest and possibly most decisive trial yet. Writing for Slate, the co-founders of one of the world’s leading nonprofit charities lay out their plan to give this innovative idea its due:

The organization that we founded, GiveDirectly, has decided to try to permanently end extreme poverty across dozens of villages and thousands of people in Kenya by guaranteeing them an ongoing income high enough to meet their basic needs—a universal basic income, or basic income guarantee. We’ve spent much of the last decade delivering cash transfers to the extremely poor through GiveDirectly, but have never structured the transfers exactly this way: universal, long-term, and sufficient to meet basic needs. And that’s the point—nobody has and we think now is the time to try.


We’re planning to provide at least 6,000 Kenyans with a basic income for 10 to 15 years. These recipients are some of the most vulnerable people in the world, living on the U.S. equivalent of less than a dollar. And we’re going to work with leading academic researchers, including Abhijit Banerjee of MIT, to rigorously test the impacts.

By “rigorous” we mean a few things. First, the test must be experimental, so that we generate unbiased and transparent estimates of impact. Second, the guarantee must be a long-term commitment. We already know quite a bit about the beneficial effects of giving people money for a few years; the key question is how the knowledge that your livelihood is secured for more than a decade affects your behavior now. Do you take more risk? Get more schooling? Look for a better job? Third, the guarantee needs to be universal within well-defined communities, since the goal is as much to understand social dynamics as individual behaviors. While various other basic income pilots have been conducted in the past, none so far have met all three of these criteria.

The group estimates that this evaluation will cost roughly $30 million, 90 percent of which will go directly to the poorest households, and the remainder to the staff, offices, and payment fees needed to deliver it. Because it is being tried in a developing country, the costs of meeting people’s needs will be lower, thereby making the project more affordable. It will also complement similar experiments being planned in Finland, Canada, and elsewhere.
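For a sense of scale, here is a rough back-of-envelope calculation on the figures above — a sketch under the simplifying assumption that the household share is split evenly among recipients, which the plan itself does not specify:

```python
# Back-of-envelope on the GiveDirectly experiment's stated figures,
# assuming the 90% earmarked for households is split evenly among
# ~6,000 recipients (a simplification; actual transfer sizes may differ).
total_budget = 30_000_000      # USD, estimated project cost
to_households = 0.90 * total_budget
recipients = 6_000

per_person_total = to_households / recipients   # total transfer per person
for years in (10, 15):
    per_day = per_person_total / (years * 365)
    print(f"{years}-year horizon: ${per_person_total / years:,.0f}/year, ${per_day:.2f}/day")
```

That works out to roughly $0.82 to $1.23 per person per day, which squares with the stated goal of lifting people who live on less than a dollar a day up to their basic needs.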

And even though it will provide some much-needed empirical evidence, on the largest scale yet attempted, the authors point out that there is plenty of data already available from prior attempts:

As it turns out, that assumption was wrong. Across many contexts and continents, experimental tests show that the poor don’t stop trying when they are given money, and they don’t get drunk. Instead, they make productive use of the funds, feeding their families, sending their children to school, and investing in businesses and their own futures. Even a short-term infusion of capital has been shown to significantly improve long-term living standards, improve psychological well-being, and even add one year of life.

On the other hand, well-intentioned social programs have often fallen short. A recent World Bank study concludes that “skills training and microfinance have shown little impact on poverty or stability, especially relative to program cost”. Moreover, this paternalistic approach is often for naught: Jesse Cunha, for example, finds no differences in health and nutritional outcomes between providing basic foods and providing an equally sized cash program. Most importantly, though, the poor prefer the freedom, dignity, and flexibility of cash transfers—more than 80 percent of the poor in a study in Bihar, India, were willing to sell their food vouchers for cash, many at a 25 to 75 percent discount.

Whatever the results might be, I am definitely looking forward to seeing what we can learn from this approach. Favored by economists and political scientists across the political spectrum, UBI promises a streamlined, transparent, and affordable way to alleviate poverty, stimulate economic activity, and adapt to a possible future of mass automation and scarce jobs. Or so we will see in due time, hopefully!

Can and Should Governments Promote Happiness?

Back in February, the United Arab Emirates announced the creation of a “minister of state for happiness” who would “align and drive government policy to create social good and satisfaction”, whatever that means. (The same statement announced the creation of a minister for tolerance, perhaps in response to the country’s rapidly growing multicultural population.)

Needless to say, the idea of a “happiness minister” was met with a lot of confusion and amusement, both from within the country and beyond. What does it mean to promote happiness on a policy level? What would this entail? And should governments even take it upon themselves to worry about this?

As The Washington Post points out, the U.A.E. is only the latest of several countries to go this route. Both Ecuador and Venezuela announced similar initiatives in 2013 — a state secretary of “good living / well-being” and a “vice ministry of supreme social happiness”, respectively — and the small Himalayan nation of Bhutan pioneered the concept back in 1972 with its “gross national happiness” (GNH) index.

In theory, these ministries work to try to improve the levels of happiness in the countries through a variety of policies. David Smilde, a senior fellow at the Washington Office on Latin America, says that despite its grandiose name, Venezuela’s ministry actually has a “pretty reasonable mandate” – measuring the effectiveness of the government’s various social welfare programs. In Ecuador, Ehlers has implemented or plans to implement a variety of policies that included both labeling foods based on their health values and meditation classes for schoolchildren, the Miami Herald reported last year.

Bhutan’s GNH measure is especially interesting, as it was devised to shift public policy focus away from economic concerns — as signified by the near-universal interest in gross domestic product (GDP) — and towards promoting several components of happiness, such as mental and physical health, leisure time, and standard of living.

While there is no minister directly responsible for happiness in the tiny Himalayan nation, the Gross National Happiness Commission is tasked with surveying the levels of happiness in the nation. The information they gather is then used by the government to make decisions.

Bhutan’s big idea has since proven popular, and a variety of countries around the world – including Thailand and the United Kingdom – have begun measuring happiness with an aim to using it to devise policy. Dubai actually announced plans for its own Happiness Index in 2014, with Hussain Lootah, director general of the municipality, telling the National newspaper that it would be designed to “create an excellent city that provides the essence of success and comfort of sustainable living.”

Interestingly, despite leading the way in prioritizing social well-being as government policy, Bhutan’s performance has been mixed at best: according to the 2015 edition of the U.N. World Happiness Report, which was inspired by the GNH idea, the country ranked only 79th out of 158, not terrible but not all that great. Bhutan has also faced issues such as pervasive poverty and discrimination against non-Buddhist minorities.

Believe it or not, the U.N. report ranked Venezuela a respectable 44th in 2016, a significant drop from 19th in 2012, when the Orwellian-sounding “vice ministry of supreme social happiness” was created. Given the country’s plethora of social, economic, and political problems — ranging from food shortages to high crime — this decline is unsurprising, though perhaps not as steep as one would expect.

In the 2016 edition, Bhutan ranked 84th, the U.A.E. 28th, and Ecuador 51st. (Wikipedia has a great breakdown of the report’s results and methodology here.) As The Post points out, the top ten countries — all of them northern European states or small Anglophone nations — had another thing in common besides being wealthy liberal democracies:

None of the top 10 countries rated “happiest” in the U.N. report have a government ministry devoted to happiness – although given the rarity of such ministries, it’d be very surprising if they did. There’s certainly little doubt that government policies can influence levels of happiness, but whether an entire ministry is needed is not so certain. Generally, when it comes to improving levels of happiness, “what matters is how things are done across government as a whole,” says John Helliwell, a co-editor of the World Happiness Report and a senior fellow at the Canadian Institute for Advanced Research. And Carol Graham, a fellow at the Brookings Institution who has studied attempts to measure well-being, says that the creation of ministries for happiness can be a “diversion” and may even “border on the government telling people how to be happy or that they should be happy.”

And while most of the top nations were indeed highly developed, broadly prosperous states, there was a smattering of poorer or middle-income countries, such as Costa Rica (14th place), Puerto Rico (15th), Mexico (21st), Chile (24th), and Panama (25th). It goes to show that, as with individuals, there is no magic bullet when it comes to well-being and life satisfaction.

Granted, it seems to be the general rule that financial wealth, stability, and freedom — both on an individual and societal level — correlate with happiness. But values, community life, leisure time, and culture count for a lot, too; people or places that are lacking in some factors, but excel in others, might still end up happier on the whole.

In my view, the best thing governments can do is create the proper conditions within which happiness can thrive — effective rule of law that safeguards personal safety and stability, less intrusion into civil liberties, more public spaces for leisure and community engagement, and so on. In other words, cultivate a physical and social environment that maximizes the individual’s ability to improve their own well-being. More proactive measures, such as making healthcare and education more accessible, would certainly help, too, but these could be politically unpalatable in places wary of government intrusion, like the U.S. (which, by the way, ranked a solid 13th place in the U.N. happiness report).

What are your thoughts?


Philosopher Convicts

One of the nation’s finest debate teams lost to a group of New York inmates. It reads like something from a feel-good movie, but it happened back in October, and I had only recently heard the news. According to The Guardian:

The inmates were asked to argue that public schools should be allowed to deny enrollment to undocumented students, a position the team opposed.

One of the judges, Mary Nugent, told the Wall Street Journal that the Bard team effectively made the case that the schools which serve undocumented children often underperformed. The debaters proposed that if these so-called dropout factories refuse to enroll the children, then nonprofits and wealthier schools might intercede, offering the students better educations. She told the paper that Harvard’s debaters did not respond to all aspects of the argument.

The Harvard team directed requests for comment to a post on its Facebook page that commended the prison team for its achievements and complimented the work done by the Bard initiative.

“There are few teams we are prouder of having lost a debate to than the phenomenally intelligent and articulate team we faced this weekend, and we are incredibly thankful to Bard and the Eastern New York Correctional Facility for the work they do and for organizing this event,” the debate team wrote days after their loss.

Aside from Harvard, the team has beaten rivals from West Point and the University of Vermont.

Launched in 2001, the Bard Prison Initiative is a privately funded program run by nearby Bard College that offers inmates over sixty courses in the liberal arts; it has since expanded to six prisons. Anyone with a GED or equivalent can apply, and there is so much interest that each available spot draws almost ten applicants.

While in prison, students are encouraged to “make the most of every opportunity”, Kenner said.

Carlos Polanco, a 31-year-old from Queens and a member of Bard’s winning debate team, is among the roughly 15% of inmates at the correctional facility in Napanoch who has taken advantage of the education program.

“We have been graced with opportunity”, Polanco, who is in prison for manslaughter, told the Wall Street Journal after the debate. “They make us believe in ourselves”.

Indeed, only 2 percent of Bard’s graduates return to prison within three years (the usual assessment period) — compared to 40 percent statewide. It is amazing what an education, particularly in the humanities, can do for the human spirit. Here’s hoping more programs like this emerge around the country.

Lessons From Ikea on the Merits of Better Pay

While many American employers regard higher wages as anathema to business success, Ikea, the world’s largest furniture retailer, is thriving in the U.S. in large part because of its generous compensation. As HuffPo reported:

Under the system that the ready-to-assemble furniture maker first established in January [2015], the starting wage for any given store in the U.S. reflects the cost of living in that particular area as determined by the MIT Living Wage Calculator, which takes into account the local cost of rent, food, transportation and the like. After the second round of raises, which is slated for this coming January, all of the company’s U.S. stores will be paying at least $10 per hour, and the average minimum wage across all locations will be $11.87 — a 10.3 percent increase over the previous year, according to the company.

Rob Olson, chief financial officer for Ikea U.S., told The Huffington Post that the company is already reaping dividends from its decision to hike the wage floor and to factor in the local cost of living in doing so.

“We’re very pleased so far,” Olson said.

So what types of benefits has Ikea seen?

For one, less turnover. Although it’s only been six months since the raises went into effect, Olson said Ikea is on pace to reduce turnover by 5 percent or better this fiscal year. Holding onto employees longer means the company is spending less on recruiting and training new replacements.

Ikea is also attracting more qualified job seekers to work at its stores, according to Olson. Pay for retail sales workers in the U.S. is generally very low, with an average industry wage of just $12.38 per hour, according to the Bureau of Labor Statistics. But Ikea’s average store wage is heading north of $15. After its living wage announcement [in 2014], the company opened two new locations — one in Merriam, Kansas, and another in Miami — and the higher wages (and attendant publicity) likely helped the company lure more candidates.

“At both of those stores, the applicant pool was fantastic,” Olson said.

The Swedish multinational is seeing the same benefits that just about every company that pays its workers well enjoys: lower turnover (and thus savings from training and recruitment); higher morale and productivity; and the attraction and retention of talented, quality employees. It should be intuitive that when people are given a stake in a company — through better wages, benefits, and overall treatment — they will feel invested enough to stick around and work hard, thus giving back to their employers. That has always been the logic behind paying executives and higher managers so well, so why shouldn’t it apply to everyone else?
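The quoted figures can also be cross-checked with a little arithmetic — a sketch, assuming the 10.3 percent increase applies to the average minimum wage across locations:

```python
# Quick consistency check on the HuffPo figures quoted above, assuming
# the 10.3% increase applies to the average minimum wage across stores.
new_avg_min = 11.87        # USD/hour, average minimum across U.S. stores
increase = 0.103           # 10.3% year-over-year increase
prev_avg_min = new_avg_min / (1 + increase)

retail_avg = 12.38         # USD/hour, BLS average for retail sales workers
print(f"Implied previous average minimum: ${prev_avg_min:.2f}/hour")
print(f"New Ikea minimum vs. retail average: ${new_avg_min - retail_avg:+.2f}/hour")
```

Note the distinction the article draws: the $11.87 figure is Ikea’s average *minimum* wage across locations, while the $12.38 industry figure and Ikea’s $15-plus figure are *average* wages — different measures, so the comparison flatters Ikea less than it first appears.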

While I am not keen on giving the government too much power, I am not so sure companies are any more rational or knowledgeable about the matter either. Ikea is hardly the first example showing that paying and treating workers better generally results in higher productivity and thus better long-term performance. Indeed, that makes intuitive sense, whatever the empirical evidence, and it is precisely this logic that companies use to justify the high payouts to executives (though rarely to average workers who contributed to said profits as well).

And yet so many companies — the majority, in fact — continue to sit on their high profits, or allocate the lion’s share to the top of their corporate structure, all the while bemoaning the high costs of turnover, low productivity, etc. Mere ignorance, short-term thinking, and pressure from greedy shareholders interested only in immediate (and ever-higher and unsustainable) dividends are among the reasons for this problem (to say nothing of a culture that values profits and ruthless commerce at all costs). Running a business does not entail knowing about, let alone caring about, the psychological and economic factors involved in corporate policy.

Personally, I am of the view that if the only way you can run your business is to knowingly beggar your employees — for example, paying far less than what can reasonably sustain basic necessities like shelter and healthcare — despite being able to well afford better pay by simply allocating a little less to yourself and your shareholders, then you do not deserve to run a business. I know that is an idealistic and moralizing approach, but why shouldn’t it be, given how integral business policies are (in the aggregate) to societal well-being? If we do not want government calling the shots, then it is incumbent on all citizens — especially those with the most power and resources — to do what they can to behave ethically and in a socially responsible manner (indeed, this is what libertarians and many conservatives argue: that it is on the private sector, not the public sector, to create a more just and prosperous society).

My short answer: leave it to workers themselves, via unions, co-ops, or other empowering structures, to contribute to the decision-making process of corporate policy. It is hardly a perfect system — what institution is? — but it seems to be the best and most practical alternative to hierarchical and often out-of-touch corporate models, as well as government fiat (which of course is typically no less hierarchical and aloof). Otherwise, we would have to hope that low-wage employers come around to Ikea’s position, especially as it is as much in their interests as those of their workers.

What are your thoughts?