The Untold Story of Buddhism’s Struggle in America

Buddhism’s presence in the United States is often seen as a fairly recent, even trendy, phenomenon, one that became most visible in the 1960s and ’70s. But like other minority religions, Buddhism has been here far longer than our public consciousness suggests, and its history in America has not always been a pleasant one.

A recent article in The Atlantic discusses the tribulations of Buddhists in the context of Japanese internment during World War II. Because a large number of early American Buddhists were of Japanese ancestry, the legal and social problems faced by adherents were inextricably tied to what Japanese citizens and residents faced as a whole.

73 years ago this week … President Franklin D. Roosevelt signed Executive Order 9066, authorizing the evacuation of all of those of Japanese descent from the West Coast to ten war relocation centers—often called “concentration camps” before that term came to have other connotations.

For the most part, the wartime fears that led to the relocation of Japanese-born immigrants and their American-born children were justified on racial rather than religious grounds. Those forced to leave behind homes, farms, and businesses in states bordering the Pacific were not of a single faith. There were Buddhists among them, and many maintained Shinto rituals that provided spiritual connections to their homeland, but there were also Christians of various denominations, as well as those with no particular affiliation.

Religion was not ignored, however. When the FBI set about compiling its list of suspect individuals after the attack on Pearl Harbor, they naturally included members of various American Nazi parties and groups with political ties to Japan. Yet they also paid particular attention to Buddhist priests.

J. Edgar Hoover’s Custodial Detention List used a classification system designating the supposed risk of individuals and groups on an A-B-C scale, with an “A” ranking assigned to those deserving greatest scrutiny. Ordained Buddhists like Reverend Fujimura were designated “A-1,” those whose apprehension was considered a matter of urgent concern.

The priests became the first of a relocation effort that would soon detain more than 110,000. Many within this larger group, having heard of the sudden arrests and harsh interrogations endured by Buddhist community leaders, sought refuge in Christianity, hoping—in vain, it turned out—that church membership might shield them from such treatment.

Those who did not go this route were called “Buddhaheads,” an epithet often applied to the Japanese Americans of Hawaii, but more broadly used to suggest a resistance to assimilation. Within the Japanese community, Buddhists were more likely than Christians to maintain their native language, as well as the customs and rituals performed in that language. They were also more likely than Christians to read publications concerned with Japanese political affairs. Subscription rolls of such publications provided the FBI with a natural starting point for building its “A” list of suspects.

Because of the connections and the traditional knowledge Buddhist temples helped maintain, to be a Japanese Buddhist in America during the 1940s was to be considered a greater risk to the nation.

I recommend reading the rest of this piece, which conveys the struggles of Buddhists and Japanese Americans through the experiences of Reverend Fujimura, and looks at a little-known fight to get Buddhist troops due recognition of their faith on their memorials. It is a very informative look at one of the many neglected chapters of American history.

Apple Ends (Sort Of) Indentured Servitude — In 2015

It is the 21st century, and the world’s most valuable company has finally ended a practice akin to slavery, up to a point. As the Washington Post reported:

The process works like this: Employment agencies recruit workers. They then charge them placement fees for jobs, often in foreign countries. Those fees end up putting workers in debt to the agency. If that wasn’t bad enough, according to Apple’s own audits, some agencies held the passports of bonded workers in safes until their debts were paid off.

That’s right, no passports. That probably means no form of identification, and it certainly means that they can’t go home.

It’s pretty close to what some might call indentured servitude. And that’s what Apple — the tech company that has taken a lot of heat and also offers the most information about its factory conditions — has only just stopped. (It did previously ban factories from using employment agencies that charged more than a month’s wages in fees.)

This is where we are in 2015.

And before any back-patting commences, it’s worth noting that even this step is just a small one, said Scott Nova of the Economic Policy Institute, who co-authored a paper raising questions about Apple’s auditing process. Nova noted that the policy only applies to those who travel across borders to work at Apple supplier factories —  not to the Chinese workers at Chinese suppliers, many of whom also use recruiting agencies.

As the article notes, Apple is hardly unique in this and other abusive practices, as labor exploitation is pretty much the norm among tech companies (and, for that matter, in just about every industry). Even if this particular practice were fully eradicated, many other problems remain:

While Apple has made inroads in some areas, it actually saw compliance with overtime rules fall from the previous year. Last year, 92 percent of workers of factories that the company audited kept to a 60-hour work week, a decline from 2013 when it was 95 percent. That’s not nearly as bad as levels in 2007, when it was roughly 70 or 80 percent, but it is a dip. Not to mention the 60-hour work week, which many of us would balk at, is also 10 hours more than China’s poorly-enforced law limiting the work week to 50 hours. (Technically, Apple contracts with companies such as Foxconn to manufacture its electronics and does not directly employ those workers).

Recall that most of this data comes from self-reporting on Apple’s part; the picture would no doubt be just as grim among every other major manufacturer in the world. When this sort of thing is so normal and acceptable that a minor tweak in policy is considered a newsworthy step, something is certainly amiss. Consider this proposed solution for speeding up reform:

So what could Apple, or any tech company, do to speed things up? Nova suggests a model recently struck with garment workers in Bangladesh, following the horrific factory fires in 2012. In that country, he said, 200 brands and retailers fashioned an agreement with groups that directly represent workers. The deal calls for independent audits of factory conditions and promises by the retailers to put up the money to renovate dangerous facilities.

That will cost money, of course, which would eat into the relatively high profit margins that tech companies — and Apple in particular — enjoy. Improving worker conditions would also likely mean that consumers would have to be okay with slower delivery rates, Nova said. Getting swamped with orders for the new iPhone 6 and iPhone 6 Plus, for example, could have been a reason that Apple’s overtime hours went up this past year.

Apple, currently valued at over $700 billion — more than the GDP of most countries — had total revenue of $182 billion in 2014. Taiwan-based supplier Foxconn, the world’s largest electronics contractor, ended 2013 with total revenue of $131.8 billion (data for 2014 remain unavailable). I am pretty sure that a mere fraction of either company’s revenue would be enough to give workers decent treatment and pay.

I will never understand how highly profitable companies — whose executives and shareholders enjoy billions in compensation and dividends, respectively — can claim that customers must pay more in exchange for treating workers like human beings. The average corporate investor or upper manager could still remain fabulously wealthy — if, heaven forbid, slightly less so — while giving consumers and producers alike a better and more ethical deal.

Even if consumers should pay — and let’s grant that in some cases they should — most of the time it will cost no more than a few cents or dollars per item, a literally small price to pay for our fellow humans to live better lives. (This applies as much to major domestic employers like Walmart and McDonald’s as it does to manufacturers with global supply chains.)

Fighting Climate Change Can Be Cheap and Easy — If We Ever Get To It

Well, it is easy conceptually, at least. While advanced “negative emissions technologies” (NETs) like carbon-absorbing towers and light-reflecting clouds are touted as solutions for mitigating climate change, the best approaches may actually be the simplest and most low-tech: planting trees and improving soil quality.

That is the conclusion of a recent Oxford study reported in The Atlantic:

Both techniques, said the report, are “no regrets.” They’ll help the atmosphere no matter what, they’re comparatively low-cost, and they carry little additional risk. Specifically, the two techniques it recommends are afforestation—planting trees where there were none before—and biochar—improving the soil by burying a layer of dense charcoal.

Between now and 2050, trees and charcoal are the “most promising” technologies out there, it said.

Charcoal here refers specifically to the production of biochar, an ancient practice whereby agricultural waste (food scraps, decaying leaves, and the like) is smoldered and then covered with dirt. This not only makes the soil richer, but it also locks away carbon that would otherwise be released as CO2, while reducing the need to clear forest for more arable farmland.

As the article notes, these low-cost methods have a long and proven track record:

Forest management is one of the oldest ways that humans have shaped their environment. Before the arrival of Europeans, Native communities in the Americas had been burning forest fires for millennia to support the growth of desirable plants like blueberries and to manage ecosystems. British communities have long practiced coppicing, a tree-cutting technique that keeps forests full of younger trees.

In other words, humanity has been “geoengineering” with trees for a very long time. The authors of the Oxford report add that afforestation will need global support in order to be successful.

“It is clear that attaining negative emissions is in no sense an easier option than reducing current emissions,” it says (emphasis mine). “To remove CO2 on a comparable scale to the rate it is being emitted inevitably requires effort and infrastructure on a comparable scale to global energy or agricultural systems.”

It is interesting that the authors also cautioned against viewing NETs as a “deus ex machina that will ‘save the day,’” viewing them instead as just some of the many ways to avoid the worst of the climate change yet to come. That said, reforestation and soil enrichment alone will not solve the problem either; reducing emissions in the first place, in conjunction with these and other methods, is still our best bet.

This is confirmed by two recent reports by the National Research Council, an arm of the United States National Academies. As National Geographic reports:

An NRC committee of experts from across disciplines was asked by several U.S. government science and intelligence agencies to evaluate geoengineering proposals. The ideas range from anodyne (planting trees to capture CO₂) to potentially alarming (injecting sulfate particles or other aerosols into the atmosphere to reflect sunlight and cool the planet).

Committee members were blunt in their first recommendation: The world should focus first and foremost on curbing fossil fuel emissions rather than on any kind of geoengineering.

“I think it’s going to be easier and cheaper to avoid making a mess than it will be to make a mess and then try to clean it up later,” said committee member Ken Caldeira, a climate scientist at Stanford University’s Carnegie Institution for Science. “If we end up having to build a fix that’s on the scale of our energy system, why not just retool our energy system?”

….

The first, CO₂ removal, the committee characterized as worthy and “almost inevitable.” The second, using aerosols or other means to reflect solar radiation, would be “irrational and irresponsible” if done as anything but a last-ditch effort to prevent a global famine or other emergency.

The Royal Society of the United Kingdom and the Intergovernmental Panel on Climate Change (IPCC) have similarly put an emphasis on reducing emissions first and foremost, with other strategies being auxiliary or complementary.

We know the solutions, and we have ample resources and capital to draw upon — we just need the political and public will to make it all happen. If merely planting trees, enriching soil, and cutting back on fossil fuel emissions are enough to largely avert an existential threat to humanity, then the continued worsening of climate change is a damning condemnation of our species’ foolishness and shortsightedness.

America’s Muslim Heritage

Although widely seen as a new — and in some circles, invasive — presence in the United States, Islam has been a part of the nation’s history since colonial days, if not earlier. The New York Times highlights just a few of the known examples:

In 1528, a Moroccan slave called Estevanico was shipwrecked along with a band of Spanish explorers near the future city of Galveston, Tex. The city of Azemmour, in which he was raised, had been a Muslim stronghold against European invasion until it fell during his youth. While given a Christian name after his enslavement, he eventually escaped his Christian captors and set off on his own through much of the Southwest.

Two hundred years later, plantation owners in Louisiana made it a point to add enslaved Muslims to their labor force, relying on their experience with the cultivation of indigo and rice. Scholars have noted Muslim names and Islamic religious titles in the colony’s slave inventories and death records.

The best known Muslim to pass through the port at New Orleans was Abdul-Rahman Ibrahim ibn Sori, a prince in his homeland whose plight drew wide attention. As one newspaper account noted, he had read the Bible and admired its precepts, but added, “His principal objections are that Christians do not follow them.”

Among the enslaved Muslims in North Carolina was a religious teacher named Omar ibn Said. Recaptured in 1810 after running away from a cruel master he called a kafir (an infidel), he became known for inscribing the walls of his jail cell with Arabic script. He wrote an account of his life in 1831, describing how in freedom he had loved to read the Quran, but in slavery his owners had converted him to Christianity.

Apple Nation

The rise of Apple Inc. as a dominant global force in the 21st century could not be more pronounced than with the news that it is currently worth an incredible $700 billion, with roughly $178 billion in cash on hand as of its latest quarter. If the company were a country, it would be the 55th richest in the world, on par with New Zealand and bigger than Vietnam, Morocco, and Ecuador (nations that together have well over 120 million people).

The Atlantic, my source for this news, puts this into perspective:

These sort of comparisons help underscore just how much money Apple has, but they’re not entirely nuanced. For instance, although Apple has made history with its earnings, there have been countries just as rich—and richer—once you adjust for inflation. In that case, Apple still hasn’t hit Microsoft’s high-water mark in 1999. (Microsoft was worth $620 billion then, which would exceed $870 billion in today’s dollars.) But what does that tell us, really, other than how quickly tech fortunes can change? After all, Apple today is worth more than twice as much as Microsoft ($349 billion). In 1999, though, Apple was perhaps notable for making colorful iMacs that dotted high-school computer labs, but not much else.

It is astounding that a private company could accrue enough wealth to rival entire nations, or to give every American $556. I see it as a troubling development, insofar as it is indicative of a wider trend of global inequality, with more wealth becoming concentrated among a small cohort of the population — mostly the shareholders, executives, and financiers involved in the trading and investing of these big companies.
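
As a rough aside, the $556 figure presumably comes from dividing Apple’s reported cash pile by the U.S. population; that is my assumption, and the numbers in the quick sketch below are approximations used purely for illustration:

```python
# Back-of-envelope check of the "$556 per American" figure.
# Assumes (my reading) that it divides Apple's ~$178 billion in cash
# by a U.S. population of roughly 320 million; both are approximate.
apple_cash = 178e9       # USD, Apple's reported cash holdings (approx.)
us_population = 320e6    # early-2015 U.S. population (approx.)
print(f"${apple_cash / us_population:,.0f} per person")  # prints: $556 per person
```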

Apple’s beleaguered workers, namely those subcontracted in the developing world, certainly deserve some sort of bonus for helping to make these record-breaking profits possible. If such compensation is a way to reward success and to encourage productivity and performance, as businesspeople allege, then does that logic not apply to average people? Or are only the rich who helm these companies entitled to (or apparently in need of) such motivation?

It is scary to think that this is hardly an unprecedented development either:

No modern tech company has approached the value of trading companies of the 1700s, though, and the Dutch East India Company trumps them all. The shipping juggernaut was the world’s first publicly traded company. At its height, according to several estimates, it was worth the equivalent of more than $7 trillion in 2015 dollars. That’s a seven with 12 zeros after it—or Apple’s valuation today 10 times over.

The Dutch East India Company was ultimately undone, at least in part, by the weight of corruption and venality among its personnel. It seems a common fate for institutions that become swept up in unchecked growth and the subsequent expectations for more and more. This is not a sustainable model for any organized system, whether it is a government or a tech firm.

Reflecting On The Killing Of Three Muslim Students

I rarely post about current events or news stories, but I have a rare bit of time, and this event merits attention and reflection.

Last night, three Muslim students — Deah Barakat, 23; his wife, Yusor Mohammad Abu-Salha, 21; and her sister, Razan Mohammad Abu-Salha, 19 — were shot dead at a housing complex near the University of North Carolina at Chapel Hill. The perpetrator was Craig Stephen Hicks, 46, who turned himself in to the police afterward. News is still unfolding as of this post, and the motive remains unclear, though some reports cite a dispute over parking — of all things to kill over.

The natural question that comes to mind (or that should) is whether this incident was motivated by anti-Islam bigotry. It would certainly fit the pattern of post-9/11 attacks and harassment toward Muslims or those perceived to be Muslim (such as Sikhs). Opposition to Islam, ranging from criticism of the religion to out-and-out bigotry, has definitely seen an uptick in recent months following high-profile incidents involving Islamic extremists, such as the Charlie Hebdo shootings and the barbarism of Boko Haram and IS.

Given the present lack of information, it is difficult to determine why Hicks killed these people, although some sources have pointed to his open condemnation and mockery of organized religion on social media, as well as his association with atheist groups (albeit mainstream ones like Atheists for Equality that, to my knowledge, do not advocate violence or discrimination against religious people).

Ultimately, whether or not the perpetrator’s dislike of religion played a role in his decision to escalate a dispute into a murderous assault, it remains true that his atheism did not prevent him from committing such an immoral crime.

This tragic incident reaffirms why I much prefer the label of secular humanist over plain atheist, precisely because mere disbelief in a deity or the supernatural says nothing about one’s morality or character. Atheism denotes what you do not have — religious beliefs — but not what you have chosen as an ethical foundation in their place. Hence why atheists run the gamut from humanists like Albert Einstein to monsters like Joseph Stalin.

It goes without saying that a humanist framework is one that precludes violence against other humans, regardless of their beliefs, religious or otherwise. Of course, people will always harm and kill one another regardless of whatever authority or precept they claim to follow or associate with, whether it is secular or religious in nature. But this fact of human nature — that bad actions stem from all sorts of factors outside professed belief — does not preclude the creation of a comprehensive and authoritative moral and ethical framework.

Moreover, it is worth pointing out the distinction between being critical of religion as an idea and an institution — all while still recognizing the humanity of its adherents — and hating religiously identifying people on the visceral level the perpetrator allegedly did. I myself am highly critical of religion as a whole, but I certainly do not view religious people as a faceless Other without personality, hopes, dreams, feelings, and humanity. Atheist or not, there is a difference between disliking or criticizing beliefs and ideas, and taking the next step of hating or killing innocent people who hold such beliefs without harming anyone else.

That said, it is important to remind fellow atheists to distinguish themselves (and their atheist leaders) as religious skeptics from the religious bigots who incite such attacks or (in thankfully rare cases) directly perpetrate them. I am not trying to make this tragedy about me or the atheist movement, but to highlight the inherent danger of proclaiming moral superiority by virtue of casting off religion while ignoring that one can still be a bad person, morally or behaviorally, regardless of what one believes.

If we are going to promote a skeptical view of religion, and opposition to its more harmful effects (both institutional and ideological), then we must do so alongside the propagation of a humanist ethic. By all means, critique religion and seek to minimize its harm, as I certainly do, but also recognize and fight the harms of non-religious origin, and more importantly see the humanity of the billions of fellow humans who, like it or not, hold religious views of some form or another.

All that said, I do not mean to read into this senseless act the larger issues of bigotry, lack of empathy, and the like; while those are likely factors, the details once again remain uncertain. Nor is it my intention to exploit a tragedy as an opportunity to get on a soapbox for my own purposes and movement.

Rather, I am just tired of seeing people kill each other in such a wanton manner for one reason or another: ideological, religious, anti-religious, opportunistic, etc. While I know this horror is a fact of human existence (at least for the foreseeable future — I cling to a kernel of utopianism), that does not mean I want to be indifferent to the larger psychological, social, and ideological factors underpinning so much of the killing and harming that goes on every day somewhere in the world.

Given what little help I can lend to these unfortunate victims, the very least I can do — and in fact feel obligated to do — is use the opportunity to reflect upon my own moral foundations and those of my fellow humans, both secular and religious. Maybe it is my way of trying to make sense of the senseless, or trying to derive meaning from sheer tragedy, but it is all I can do. I like to think that if enough of us continuously reflect on why we do the awful things we do, and what we can do about it, such barbarous acts will become rarer, if not extinct.

One can still dream. In the meantime, my heart goes out to the victims and their loved ones. From what reports show, these young people were not only bright and talented, but socially conscious and humanitarian. By all accounts, they were, in other words, what humanists should aspire to be.

Interesting Read: Decline of Democracy?

From Aeon:

Neo-liberalism, which was supposed to replace grubby politics with efficient, market-based competition, has led not to the triumph of the free market but to the birth of new and horrid chimeras. The traditional firm, based on stable relations between employer, workers and customers, has spun itself out into a complicated and ever-shifting network of supply relationships and contractual forms. The owners remain the same but their relationship to their employees and customers is very different. For one thing, they cannot easily be held to account. As the American labour lawyer Thomas Geoghegan and others have shown, US firms have systematically divested themselves of inconvenient pension obligations to their employees, by farming them out to subsidiaries and spin-offs. Walmart has used hands-off subcontracting relationships to take advantage of unsafe working conditions in the developing world, while actively blocking efforts to improve industry safety standards until 112 garment workers died in a Bangladesh factory fire in November last year. Amazon uses subcontractors to employ warehouse employees in what can be unsafe and miserable working conditions, while minimising damage to its own brand.

Instead of clamping down on such abuses, the state has actually tried to ape these more flexible and apparently more efficient arrangements, either by putting many of its core activities out to private tender through complex contracting arrangements or by requiring its internal units to behave as if they were competing firms. As one looks from business to state and from state to business again, it is increasingly difficult to say which is which. The result is a complex web of relationships that are subject neither to market discipline nor democratic control. Businesses become entangled with the state as both customer and as regulator. States grow increasingly reliant on business, to the point where they no longer know what to do without its advice. Responsibility and accountability evanesce into an endlessly proliferating maze of contracts and subcontracts. As Crouch describes it, government is no more responsible for the delivery of services than Nike is for making the shoes that it brands. The realm of real democracy — political choices that are responsive to voters’ needs — shrinks ever further.

Politicians, meanwhile, have floated away, drifting beyond the reach of the parties that nominally chose them and the voters who elected them. They simply don’t need us as much as they used to. These days, it is far easier to ask business for money and expertise in exchange for political favours than to figure out the needs of a voting public that is increasingly fragmented and difficult to understand anyway. Both the traditional right, which always had strong connections to business, and the new left, which has woven new ties in a hurry, now rely on the private sector more than on voters or party activists. As left and right grow ever more disconnected from the public and ever closer to one another, elections become exercises in branding rather than substantive choice.

Interesting Read: Is the U.S. Constitution Too Sacred?

It is a debate as old as the United States itself: how should the Constitution, our seminal governing document, be treated? Should it be more flexible and easy to change — a “living document” — or should it remain fixed and difficult to alter, in favor of an “originalist” interpretation? An article in Aeon weighs in:

In effect, the amending clause contained in Article V says that any change, no matter how minor, must be approved by two-thirds of each house of Congress plus three-fourths of the states. This is daunting, certainly. But growing population disparities render it even more so since the three-fourths rule means that 13 states representing as little as 4.4 per cent of the population can veto any change sought by the remaining 95.6 percent of the population.

As a result, Americans have succeeded in modifying the Constitution only 17 times since ratification of the Bill of Rights in 1791. Since amendments tend to come in clumps during periods of exceptional turmoil, this means that decades can race by without any change at all. For instance, the US was constitutionally frozen for nearly 60 years prior to the Civil War, and then spent another 40 years in a constitutional deep-freeze during the Gilded Age that followed. Only one amendment, the 27th, concerning the scheduling of Congressional pay raises, has been approved since the civil-rights revolution of the 1960s and early ’70s, and that one was drafted in 1789 and then gathered dust in various state legislatures for more than two centuries. Excepting this unusual amendment, the present constitutional ice age could wind up outlasting the first.

Arguably, this Constitutional paralysis is the real source of American exceptionalism – not America’s military or economic clout, but its basic political structure, so unlike that of just about any other country on Earth. It’s certainly the source of its exceptional political psychology. One might think that Americans would be impatient with a Constitution that frustrates any and all efforts at reform, yet the response has been the opposite: instead of growing angry, people have reassured themselves over the years that immobility is all to the good because anything they do to change things can only make them worse. In effect, they’ve taken the old adage, ‘If it ain’t broke, don’t fix it,’ and turned it around. Since a fix is impossible due to the system’s deep-seated resistance to change, then it must not be broken at all. In fact, it must be perfect and therefore divinely inspired. And if the Constitution is divinely inspired, can the US be anything other than divinely inspired as well?

The Constitution is perfect because it’s impervious to change and vice versa. This is exceptional all right, as well as more than a bit odd. After all, cars, washing machines, and vacuum cleaners all run down from time to time, so why not the US machinery of government? Why should it be spared the usual wear and tear? This would seem to be the case especially given the news out of Washington these days about gridlock, high-wire negotiations, and government shutdowns. Surely, a government that periodically shuts its doors due to budget disputes between the executive and legislative branch can’t be said to be functioning up to snuff? In fact, it seems more and more dysfunctional. Yet everyone says it’s the greatest system on Earth. How can that be?

As you can tell from the excerpt, the article clearly takes a skeptical view of the Constitution’s “sacredness”, but I feel it is a thought-provoking read, and it does highlight some troubling trends that stem, at least to some degree, from the way the Constitution is applied and interpreted:

Since economic polarisation is a global phenomenon, a sclerotic 18th century Constitution can’t be entirely to blame. But an increasingly unrepresentative system obviously doesn’t help. Thanks to a Senate that gives equal representation to all 50 states even though the largest (California) is now some 65 times more populous than the smallest (Wyoming), U.S. government is arguably more undemocratic now than it was even in the 19th century.

In the 114th US Congress, 67.8 million people voted for senators who caucus with the Democratic Party, while 47.1 million voted for senators who caucus with the Republican Party. Yet those 67.8 million votes elected 46 senators while the 47.1 million votes elected 54 senators. Call this what you will, but representative it’s not. Thanks to a bizarre filibuster system that allows 41 senators (representing as little as 11 per cent of the population) to prevent any bill from reaching the floor, it has never been more unfair. Yet a fix is impossible. The results in the economic realm are all too obvious. While other countries have succeeded to a degree in bucking the trend toward financial oligopoly, the U.S. has given it free rein. The system continues tottering forward because no one is able to come up with a viable alternative.

Throughout my many college courses in political science and law, I came across a consistent theme: that the U.S. Constitution was deliberately designed to promote deadlock and create a high bar for laws to pass. The logic was that this would prevent the government from being swayed by one populist whim after another, while representatives would be forced to appeal to their higher nature by coming together rather than allow gridlock to transpire (incidentally, partisan politics — and for that matter actual political parties — were virtually nonexistent at the time of the nation’s founding).

But given the present circumstances, namely how much media, politics, and the wider world have changed, is this approach too dated, if not naive? Is it feasible to retain the Constitution’s strict approach to change? Has politics become so cynical and oligarchic as to render the status quo in law and elections abusive? Maybe the problem isn’t the Constitution, but the politicians, or perhaps the public…or perhaps all of the above?

I encourage you to read the rest of the article and share what you think. How should this document — and by extension American government and law — be treated?

The Countries Most Threatened By Climate Change

It goes without saying that climate change will have a severe impact on humanity. But some areas will be harder hit than others, and the countries most likely to be heavily impacted are also the least equipped to handle the subsequent social, economic, and political consequences.

Indeed, as the following infographics show, nearly all the world’s wealthiest nations will get by relatively unscathed (at least initially), while the greatest burden will fall on those states that are already strained by poverty, underdevelopment, environmental degradation, and political instability — factors that will exacerbate, and be exacerbated by, the effects of climate change.

Business Insider notes some important details to keep in mind:

While the maps provide a great zoomed-out perspective of what’s going to happen globally as the earth warms, there are a few caveats to keep in mind when checking it out:

First, these maps are based on country rankings, not comprehensive evaluations of each country. In other words, the best-ranked countries are only as great as they seem compared to the countries that are performing less well.

Additionally, the ranking looks only at the level of entire countries. All of the state-specific, region-specific, or city-specific data gets somewhat lost in this zoomed out perspective.

While many in the developed world, particularly the United States, remain unresponsive or slow to act (if not in open denial of the problem), humanity’s most vulnerable people — already suffering enough as it is — will bear the brunt of the consequences of inaction. It is worth pointing out that a large proportion of the world’s population lives in the “global south,” where climate change will hit hardest, meaning the human toll will be of an appalling scale.

Of course, in our heavily globalized world, even the initially best-off countries will eventually be negatively impacted. World food supplies will be disrupted, tens of millions of refugees will flee starvation and social breakdown to wherever they can, and international conflict over strained resources (or unwanted migration) will become more likely. So while some places may be relatively better off than others, all of us will be affected in some way or another: there is currently no way to escape our planet and its increasingly erratic climate.

While the precise sociopolitical effects are speculative (to varying degrees of likelihood), climate change itself is not. The evidence is mounting and the impact is already being felt and documented in both ecosystems and the world’s poorest countries (and even in the U.S., which recently endured record drought throughout most of the country). Ultimately, we will all suffer together, and the only way to do anything about it is to develop an appropriately global response. This is both an existential and moral issue.

Three Big Historical Anniversaries Today

In 1943, the Soviet Red Army won the Battle of Stalingrad, turning the tide of the Second World War. One of history’s bloodiest and most decisive battles, the five-month siege involved over 1 million troops on each side. The Axis suffered a total of 850,000 casualties (wounded, killed, or captured) and the Soviets over 1.1 million, of whom over 478,000 were killed.

To understand the scale of the battle, consider that the U.S. and U.K. suffered a total of 405,399 and 383,800 combat deaths, respectively, in the entire war. (Ultimately, by the end of the war, the Soviet Union had lost 20-28 million people, of whom roughly 7-12 million were military personnel and the rest civilians; nearly a quarter of its population had been killed, wounded, or directly affected by the conflict in some way.)

Soviet soldier waving the Red Banner over the central plaza of Stalingrad in 1943. 

You can read a quick rundown of the battle here.

In 1848, the Mexican–American War ended with the signing of the Treaty of Guadalupe Hidalgo, in which Mexico was forced to give up 530,000 square miles of territory to the United States for $15 million. Along with the prior annexation of Texas, this amounted to 55 percent of Mexico’s pre-war territory, and it today comprises about 15 percent of U.S. territory.

The cession included all of California, Nevada, and Utah, most of Arizona, large chunks of Colorado and New Mexico, and part of Wyoming.

In 1990, South African President F. W. de Klerk announced the beginning of the end of apartheid, a system of intense segregation and racial oppression, following mounting domestic and international opposition; the process culminated in negotiations between the government and resistance groups (namely the African National Congress, from which Nelson Mandela emerged as the nation’s first freely elected leader).

De Klerk and Mandela at the World Economic Forum in Davos, 1992; the latter would be elected president two years later.

All photos courtesy of Wikipedia.