How Machines Will Conquer The Economy

From Zeynep Tufekci over at the New York Times:

But computers do not just replace humans in the workplace. They shift the balance of power even more in favor of employers. Our normal response to technological innovation that threatens jobs is to encourage workers to acquire more skills, or to trust that the nuances of the human mind or human attention will always be superior in crucial ways. But when machines of this capacity enter the equation, employers have even more leverage, and our standard response is not sufficient for the looming crisis.

Machines aren’t used because they perform some tasks that much better than humans, but because, in many cases, they do a “good enough” job while also being cheaper, more predictable and easier to control than quirky, pesky humans. Technology in the workplace is as much about power and control as it is about productivity and efficiency.

This is the way technology is being used in many workplaces: to reduce the power of humans, and employers’ dependency on them, whether by replacing, displacing or surveilling them. Many technological developments contribute to this shift in power: advanced diagnostic systems that can do medical or legal analysis; the ability to outsource labor to the lowest-paid workers, measure employee tasks to the minute and “optimize” worker schedules in a way that devastates ordinary lives. Indeed, regardless of whether unemployment has gone up or down, real wages have been stagnant or declining in the United States for decades. Most people no longer have the leverage to bargain.

I can think of no better justification for implementing a guaranteed basic income than this trend. How much longer until we run out of sustainable employment to support our population? Already, in the United States and elsewhere, most fast-growing sectors are low-paying service jobs like fast food and retail; even the professions that should ostensibly pay well, such as those requiring degrees or experience, increasingly do not.

Most people are already running out of alternatives for liveable, meaningful work — and now mechanization and automation threaten to undermine what comparatively little remains. I think this says a lot more about the social, economic, and moral failings of our society than it does about technology.

Why should everything be hyper-efficient at the expense of workers — who are also consumers and thus drivers of the economy? Why should we have a business culture, or indeed an economic and social structure, whereby those at the top must ruthlessly undercut the leverage and well-being of everyone else, whom they nonetheless depend on? If we want to optimize production and cost-effectiveness, which are of course not bad aims, then why not do so while providing some alternative means of survival for those who get displaced?

How we respond to this trend will speak volumes about our values, priorities, and moral grounding.

Study: Sixty Is Now Middle Age

As humans live longer than ever, it is not surprising that what we define as middle age is also shifting upwards.

A study conducted by researchers from the International Institute for Applied Systems Analysis (IIASA) and Stony Brook University confirmed this, as HuffPo reports:

The researchers used projections of Europe’s population until the year 2050 to look at how an increasing life expectancy changes the definition of “old.” They used different rates of increases, ranging from a stagnant life expectancy to one which grew 1.4 years per decade, to look at the portion of the population who was considered to be old. They looked at both the conventional definition, which considers people over age 65 old, and a new measure, which advances the threshold for old age as overall life expectancy grows.

The findings, published in the journal PLOS ONE, say that as the life expectancy increased with the new measure of old age, the proportion of older people in the population continually fell. The researchers say that we must adjust the threshold we use to determine old age, otherwise the proportion of older people will grow as life expectancy increases.

“What we think of as old has changed over time, and it will need to continue changing in the future as people live longer, healthier lives,” Scherbov said.

It is amazing to live in a time when humans are pushing the limits of both longevity and quality of life: not only are we adding more years to our time on Earth, but we are increasingly managing to maintain relatively healthy mental and physical faculties (especially in Blue Zones and other places where centenarians are increasingly common).

Given the rapid gains in medicine, nutrition, and public health, who knows how much longer and better people will continue living over the course of the 21st century.

When Mega-Cities Rule the World

The United States has always stood out among developed nations for its sheer size, in terms of territory, population, and urban centers. So perhaps it’s no surprise that we’ve seen the organic emergence of “mega-regions,” sprawling urban centers that span across multiple countries, states, and municipalities, often for hundreds of miles. Needless to say, these megalopolises dominate (or even completely consume) their respective regions, and together they drive the nation’s economic, cultural, social, and political direction.

The following is a map created by the Regional Plan Association, an urban research institute in New York, identifying the eleven main ‘mega-regions’ that are transcending both conventional cities and possibly even states.

To reiterate, the areas are Cascadia, Northern and Southern California, the Arizona Sun Corridor, the Front Range, the Texas Triangle, the Gulf Coast, the Great Lakes, the Northeast, Piedmont Atlantic, and peninsular Florida, my home state (and the only one that is almost entirely consumed by its own distinct mega-region).

Also note how some of these mega-regions spill over into neighboring Mexico and Canada, a transnational blending of urban regions that can be seen in many other developed countries (most notably in Europe, and the E.U. specifically). I’d be curious to see a similar map for other parts of the world, especially since developing countries such as China, India, and Brazil are leading the global trend of mass urbanization.

This intriguing map is part of the Regional Plan Association’s America 2050 project, which proposes that we stop viewing urban areas as distinct metropolitan entities and instead see them as interconnected “megaregions” that act as distinct economic, social, and infrastructural units in their own right.

These are the areas in which residents and policymakers are most likely to have common interests and policy goals, and would benefit most from cooperating with one another. This is especially important because, as the Regional Plan Association notes, “Our competitors in Asia and Europe are creating Global Integration Zones by linking specialized economic functions across vast geographic areas and national boundaries with high-speed rail and separated goods movement systems.”

By concentrating investment in these regions and linking them with improved infrastructure, such megaregions enjoy competitive advantages such as efficiency, time savings, and mobility.

The U.S., however, has long focused on individual metro areas, and the result has been a “limited capacity” to move goods quickly — a major liability threatening long-term economic goals. And while U.S. commuters are opting to drive less, public transportation isn’t even close to meeting commuters’ needs.

The Regional Plan Association proposes aggressive efforts to promote new construction, and finds that even existing lines are in desperate need of large-scale repairs or updates to improve service. In particular, they say the emerging megaregions need transportation modes that work at distances of 200 to 500 miles, such as high-speed rail.

While this makes sense, what are the consequences of having such potent sub-national entities emerging separately from already-established state and city limits? Should we, or will we, have to re-draw the map? Will these megaregions become the new powerhouses that influence the political and economic systems of the country at the expense of current representative structures? Will they coalesce into distinct interests that have their own separate political demands from the individual local and state governments that are wholly or partly covered by them?

Interesting questions to consider, especially in light of this being an accelerating global trend with little sign of stopping, let alone reversing. I’m reminded of Parag Khanna’s article, “When Cities Rule the World,” which argued that urban regions will come to dominate the world, ahead of — and often at the expense of — nation-states:

In this century, it will be the city—not the state—that becomes the nexus of economic and political power. Already, the world’s most important cities generate their own wealth and shape national politics as much as the reverse. The rise of global hubs in Asia is a much more important factor in the rebalancing of global power between West and East than the growth of Asian military power, which has been much slower. In terms of economic might, consider that just forty city-regions are responsible for over two-thirds of the total world economy and most of its innovation. To fuel further growth, an estimated $53 trillion will be invested in urban infrastructure in the coming two decades.

Given what we’ve seen with America’s megaregions, the prescient Mr. Khanna (who wrote this article three years ago) has a point. Here are some of his highlights regarding this trend and its implications:

Mega-cities have become global drivers because they are better understood as countries unto themselves. 20 million is no longer a superlative figure; now we need to get used to the nearly 100 million people clustered around Mumbai. Across India, it’s estimated that more than 275 million people will move into India’s teeming cities over the next two decades, a population equivalent to the U.S. Cairo’s urban development has stretched so far from the city’s core that it now encroaches directly on the pyramids, making them and the Sphinx commensurately less exotic. We should use the term “gross metropolitan product” to measure their output and appreciate the inequality they generate with respect to the rest of the country. They are markets in their own right, particularly when it comes to the “bottom of the pyramid,” which holds such enormous growth potential.

As cities rise in power, their mayors become ever more important in world politics. In countries where one city completely dominates the national economy, to be mayor of the capital is just one step below being head of state—and more figures make this leap than is commonly appreciated. From Willy Brandt to Jacques Chirac to Mahmoud Ahmadinejad, mayors have gone on to make their imprint on the world stage. In America, New York’s former mayor Rudy Giuliani made it to the final cut among Republican presidential candidates, and Michael Bloomberg is rumored to be considering a similar run once his unprecedented third term as Giuliani’s successor expires. In Brazil, José Serra, the governor of the São Paulo municipal region, lost the 2010 presidential election in a runoff vote. Serra rose to prominence in the early 1980s as the planning and economy minister of the state of São Paulo, and made his urban credentials the pillar of his candidacy.

It is too easy to claim, as many city critics do, that the present state of disrepair and pollution caused by many cities means suburbs will be the winner in the never-ending race to create suitable habitats for the world’s billions. In fact, it is urban centers—without which suburbs would have nothing to be “sub” to—where our leading experiments are taking place in zero-emissions public transport and buildings, and where the co-location of resources and ideas creates countless important and positive spillover effects. Perhaps most importantly, cities are a major population control mechanism: families living in cities have far fewer children. The enterprising research surrounding urban best practices is also a source of hope for the future of cities. Organizations like the New Cities Foundation, headquartered in Geneva, connect cities by way of convening and sharing knowledge related to sustainability, wealth creation, infrastructure finance, sanitation, smart grids, and healthcare. As this process advances and deepens, cities themselves become nodes in our global brain.

While most visions of the future imagine mega-corporations to be the entities that transcend nations and challenge them for supremacy, it may be these mega-regions or mega-cities that will be the true powerhouses of the world. In fact, we may even see something of a three-way struggle between all of these globalizing behemoths, as many nation-states also begin to band together to form more powerful blocs.

One thing is certain: the future will be an interesting experiment in testing humanity’s organizational and technological prowess, especially in the midst of worsening environmental conditions and strained national resources, which such mega-regions will no doubt need to overcome. What are your thoughts?

Hat tip to my friend Will for sharing this article with me.


National Intelligence Council Foresees Transhumanist Future

In the new report, the NIC describes how implants, prosthetics, and powered exoskeletons will become regular fixtures of human life — which could result in substantial improvements to innate human capacities. By 2030, the authors predict, prosthetics should reach the point where they’re just as good as — or even better than — organic limbs. By this stage, the military will increasingly rely on exoskeletons to help soldiers carry heavy loads. Servicemen will also be administered psychostimulants to help them remain active for longer periods.

Many of these same technologies will also be used by the elderly, both as a way to maintain more youthful levels of strength and energy, and as a part of their life extension strategies.

Brain implants will also allow for advanced neural interface devices — which will bridge the gap between minds and machines. These technologies will allow for brain-controlled prosthetics, some of which may be able to provide “superhuman” abilities like enhanced strength and speed — and completely new functionality altogether.

Other mods will include retinal implants that enable night vision and access to previously invisible parts of the light spectrum. Advanced neuropharmaceuticals will allow for vastly improved working memory, attention, and speed of thought.

“Augmented reality systems can provide enhanced experiences of real-world situations,” the report notes. “Combined with advances in robotics, avatars could provide feedback in the form of sensors providing touch and smell as well as aural and visual information to the operator.”

But as with any technological development, there is a caveat:

But as the report notes, many of these technologies will only be available to those who are able to afford them. The authors warn that it could result in a two-tiered society comprising enhanced and nonenhanced persons, a dynamic that would likely require government oversight and regulation.

Smartly, the report also cautions that these technologies will need to be secure. Developers will be increasingly challenged to prevent hackers from interfering with these devices.

Lastly, other technologies and scientific disciplines will have to keep pace to make much of this work. For example, longer-lasting batteries will improve the practicality of exoskeletons. Progress in the neurosciences will be critical for the development of future brain-machine interfaces. And advances in flexible biocompatible electronics will enable improved integration with cybernetic implants.

Read the entire report here


The Venus Project

From the BBC comes a brief but interesting look into the Venus Project, an organization that aims to restructure society around an economic and infrastructural system in which goods, services, and information are free within the context of resource sustainability and availability (i.e., a “resource-based economy”).

This utopian idea was begun in 1980 by the self-educated structural engineer, industrial designer, and futurist Jacque Fresco. He has been variously described as an eccentric, an idealist, a visionary, a crackpot, and a charlatan, and his ideas have received as much praise as criticism. The BBC article linked above pretty much sums it up this way:

Is it possible to create a radically different society? One where material possessions are unnecessary, where buildings are created in factories, where mundane jobs are automated?

Would you want to live in a city where the main aim of daily life is to improve personal knowledge, enjoy hobbies, or solve problems that could be common to all people in order to improve the standard of living for everyone?

Some may think it is idealistic, but 97-year-old architect Jacque Fresco is convinced his vision of the future is far better than how we live today.

I agree that his vision of the future is far better than what we have now. But so have many other such hypothetical concepts throughout history. Is his idea credible? Can it really be implemented in our lifetimes, if at all? What are your thoughts?

Are Fears of Information Overload Overblown?

Nowadays, it seems that the only thing more ubiquitous than information is the subsequent anxiety about whether the human mind can handle it all. From casual conversations to numerous scholarly articles, debate on the subject is hard to miss. I’ve discussed it myself before, reaching an ambivalent conclusion (as I’m wont to do – it’s a bad habit, I know).

Unsurprisingly, I’m not alone in this experience; BBC columnist ____ shared similar reflections about whether all this concern is merited. He begins by offering some much-needed historical perspective:

A 1971 essay in The Futurist magazine opened with some alarming numbers. The average city, it said, now had six television channels. But, the author warned, there was already one city planning 42 channels and in the future, there could even be places that support 80, 100 or 200 channels. Where will it end, the essay asked.

Just four decades on, in a world of instant-on, hyper-connected reality, the numbers sound almost laughable. But, it seems, every generation believes that it has reached information overload. Look back through history, and whether it was the arrival of the book or the arrival of the internet, everyone from scholarly monks to rambunctious politicians has been willing to pronounce that we can take no more; humanity has reached its capacity. Television, radio, apps, e-books, the internet – it is causing so much anxiety and stress in our lives that we no longer have control. The machines have won.

Or have they?

Every generation invariably fears the changes that emerge within its lifetime, and why not? Change is scary and, by definition, unfamiliar. We don’t know what to expect, even while it transpires before us, so we start debating the implications, the unseen effects, and the long-term consequences.

People once thought that novels would corrupt the minds of the youth or distract them from reality (sound familiar?), or that writing in place of oral traditions would cause a decline in memory. And as the 1971 reference to The Futurist shows, we’ve fretted about technology overloading our minds since before it was even fully utilized. The entire issue was arguably begun, or at least accelerated, by one man in particular:

If we want to understand the modern way we think about so-called “information overload” the best place to start is the 1970 book Future Shock by author and futurist Alvin Toffler. In it, he said future shock is, “the dizzying disorientation brought on by the premature arrival of the future. It may well be the most important disease of tomorrow”.

There is no denying Toffler’s international influence on the way we think about the future. I have seen Future Shock in virtually every used bookstore I have visited from Portland, Oregon to Cartagena, Colombia. With over six million copies sold, it clearly struck a nerve in 1970 and beyond.

Toffler explained in his book that, “just as the body cracks under the strain of environmental overstimulation, the ‘mind’ and its decision processes behave erratically when overloaded.”

In a radio interview shortly after the release of his book, Toffler warned that the exhaustion he saw throughout the world was tied to his new future shock theory. “I think there’s a tremendous undercurrent of dissatisfaction in America; people saying I want out, it’s moving too fast, it’s moving away from me; a sense of panic; a sense that things are slipping out of control and I don’t think that there’s much we can do in our personal lives to counteract that,” he said.

Toffler’s assumption was that the future is something that happens to us, rather than with us. It is something out of our control that will inevitably overwhelm us.

Toffler’s statements are almost word-for-word what we hear and read today: the world’s moving too quickly for us to adjust; we’re coming under the mercy of technology forces that we barely understand, let alone control; and all this subsequent anxiety from modernity is making us more depressed, worried, and cynical.

So the psychological strain we’re enduring is just the latter stage of a decades-long process, brought about by a “future” that we didn’t anticipate coming so quickly, and thus couldn’t prepare for. Even the most radical developments – be they technological or otherwise – take generations before their effects are truly felt or learned about.

Whilst some will take comfort in Toffler’s words, some of the notions seem rather quaint forty years later. Just as people today throw around the number of tweets sent per second or the amount of video watched online, in the early 1970s Toffler followers and techno-reactionaries liked to scatter their own figures to show the magnitude of the problem.

In the same Futurist essay that decried the rise of the number of TV channels, the author Ben Bagdikian goes on to overwhelm readers with even more daunting numbers, explaining that computers will soon be able to store information at a rate of 12 million words a minute, whilst printers will be able to pump out 180,000 words a minute; something that will collide violently with humanity’s ability to process information, he said.

“The disparity between the capacity of machines and the capacity of the human nervous system is not a small matter in the future of communications,” he wrote. “It has individual and social consequences that are already causing us problems, and will cause even more in the future.

“The human being of the near future probably will need as much sleep as he does today. He will spend more time absorbing abstract information than he does today, continuing the trend of past generations. But there is a limit.”

It is a warning that we still hear today in many contexts. For example, author Jonathan Franzen, an opponent of electronic books, argues that traditional paper tomes give humanity some much needed stability in a world rocked by change. He fears that this rapid pace is hurting us. “Seriously, the world is changing so quickly that if you had any more than 80 years of change I don’t see how you could stand it psychologically,” he said.

Are we really any more stressed out or overwhelmed today than we’ve been in the past? Hasn’t every society of every generation had something to fret about? I’m curious when, exactly, the world ever experienced changes that didn’t cause some sort of disruption to the status quo (as changes, by their nature, invariably do). Either we deal with the scary and disorderly unknowns of change, or we wallow in the familiar misery of stagnation.

To be fair, I don’t think Franzen, Bagdikian, or their contemporaries are opposed to change in principle. Their argument seems to be that a certain kind of change, and/or a certain speed of change, is what is unfavorable. That may be a fair point, but what exactly is the solution? Changes of any sort are rarely organized, deliberate affairs – they arise from a random assortment of dynamics and circumstances. Even people who invent new things or come up with new ideas don’t always have the intent, or the means, to see them through, nor can they ever really anticipate what may come of their creations.

In the end, all any of us can do is adapt. Human society will always be too complex, diverse, and disorderly to predict its development or chart an organized course through future obstacles. ____ seems to agree:

Yet history seems to suggest we ride these waves of change. I am typing this on a 15-hour flight over the Pacific Ocean. In that time, I watched two movies, three TV episodes and read half of a (deadtree) book. No one was forcing me to consume this media, nor even write these words. I made a conscious choice that this is how I wished to spend my time. I would also argue that most people reading an essay about the concept of information overload on the internet have some choice in the matter.

Toffler, Bagdikian and Franzen are not necessarily wrong or even alarmist in their concerns that we should seek to control our own technological destinies. But futility should not win the argument. Your consumption of media is largely within your control. We have a choice in the matter. We can change the channel, turn off the TV, or close the laptop lid. These are our choices, and it is hard to see how any of them are irrational or happening to us rather than with us.

Victor Cohn, in his 1956 book, 1999: Our Hopeful Future, might have put it most reasonably. Cohn was a pragmatist and understood that we could not run from the future, but that by embracing change we might do some good: “Reject change, and we will be enslaved by it. Others will accept the worst of it and dictate to us. Accept change, and we may control it.”

Sooner or later, the future catches up with us all. But it need not swallow us whole.

That isn’t to say that we shouldn’t keep debating this issue. The naysayers or alarmists raise decent points that deserve serious consideration. We must never shut out well-founded concerns.

It could very well be that we’ll eventually reach a point in our technological development where we’re negatively altered by the forces we unknowingly unleashed. As I said, change is unpredictable and nebulous, so who’s to say that such alterations to our social, political, and economic fabric won’t morph into something we’ll be unable to adapt to?

Personally, I’m an optimist in this regard. I think every development will have its pros and cons. No element of human progress has ever been devoid of caveats. But given that this will always be the case, it seems that the best – if not only – thing we can do is try to understand the forces at work and adjust ad hoc. It’s not a graceful or sophisticated approach, but such is the nature of human beings. We’re constrained by our cognitive limitations, and can only see so much of the bigger picture for so long. Improvements in technology and methodology are helping us, but they can only get us so far – unless some unanticipated radical change fixes that too!

What are your thoughts?

Are We Getting Dumber?

The New York Times is hosting an online debate on the state of human intelligence in the modern world and whether it will decline or improve in the coming generations. All debaters make interesting points, and the commentary from readers can be just as insightful. I encourage you all to share your views (here or there), or at least reflect upon the arguments made. The fates of this planet and our species are dependent upon our own intellectual and innovative capital.

For the record, I don’t believe we’re devolving intellectually – relative to historical levels, a larger number of people are more knowledgeable than ever. It’s just that the standards have changed, and the vast abundance of knowledge – as well as its increased availability – raises our expectations for intelligence. I do, however, fear that the growing technological convenience of the modern world may pose a risk to our cognitive development, as it may create a disincentive to learn new things or acquire new skills.

Alas, I’m heading to bed, so I can’t expound on my point as much as I’d like. But I will certainly be revisiting this later. As always, I welcome your own input.

Technology and Economics

The other day, I was reflecting on a two-month-old article about a planned factory in Taiwan that will produce advanced robots for manufacturing. It was pretty brief, so I’ll share its entire contents below:

Taipei, Oct. 29 (CNA) Hon Hai Precision Industry Co. Chairman Terry Gou signed a letter of intent with Taichung Mayor Jason Hu Saturday, confirming the company’s determination to build an intelligent robotics and automation equipment manufacturing hub in the Central Taiwan Science Park.

President Ma Ying-jeou attended the signing ceremony to show his appreciation for Hon Hai’s commitment to expand investment in Taiwan.

In addition to signing the pact, Gou also presided over a ground-breaking ceremony for an intelligent robotics technology research facility and the inauguration of a factory of Foxnum Technology Co., a Hon Hai subsidiary dedicated to development and production of automation equipment, at the central Taiwan high-tech park.

Gou said Hon Hai will build an “intelligent robotics kingdom” in the park in the coming few years. The project is expected to generate an estimated NT$120 billion (US$4 billion) in production value in the next three to five years and create about 2,000 jobs, he added.

An age-old concern among futurists and technoprogressives immediately came to my mind: what are the consequences of relying too much on technology to carry out our economic activity? While building and maintaining these machines will produce much-needed jobs, how will that compare to the net human employment lost by using them? How many people would otherwise have been employed if it weren’t for the utilization of mechanized labor?

Of course, these worries are hardly the purview of science fiction buffs or economists – average people have long held similar concerns, going back all the way to the mid-18th century, around the beginning of the industrial age (read up on the Luddites, and their modern-day successors, for a start). New inventions, starting most prominently with the cotton gin, were beginning to reduce the need for as much human labor as before, raising concerns about mass unemployment and the consequent socioeconomic troubles that would follow.

Indeed, just about every technological achievement that has ever been made with the capacity to improve productivity – and reduce the need for as much, if any, manpower – is met with the same apprehension. Such innovations and inventions are double-edged swords: they make our lives easier, and our products cheaper, but they can also lead to problems of their own. Plus, while some people may benefit, those who lose out to technological efficiency don’t fare so well, and dispute its worth. These debates and mixed reactions are cyclical, and computers and robots are only the most recent example.

Of course, the cotton gin and all the machines and equipment that followed didn’t lead to systemic long-term unemployment. In many cases, they created new jobs, such as in their maintenance, design, and construction. Automobiles may have put horse-drawn carriage drivers out of work, but they were soon replaced by taxi drivers, and a plethora of businesses – from parts suppliers to gas stations to car repair shops – emerged, providing work that wouldn’t otherwise have been around. Advancements in farming equipment and techniques, along with these newly created industries, drew surplus ex-farmers into cities, swelling the labor pool and creating a market for commerce. Ostensibly, a virtuous cycle of supply and demand ensued, continuing to this very day.

Besides, as I learned in economics, the Luddites were committing the lump of labor fallacy – assuming there is a fixed amount of work to be divided among a fixed number of workers. But jobs come and go all the time, as do the people who seek them. Most importantly, just because someone finds a better or faster way of doing something doesn’t mean they’ll stop there: why not build more cotton gins, which will increase the need for transporters, retailers, marketers, and operators? “Destroying” jobs in one area merely creates more elsewhere, with the bonus of delivering newer and better goods faster and cheaper. Capitalism and free markets at their finest.

But what if this conventional wisdom, validated (though disputed) for so long, doesn’t hold true forever? What if we reach a watershed where new technological developments disrupt the economy and workforce on a greater scale than any before? What if technology becomes so efficient that even the alternative businesses it generates won’t be enough to make up for the disruption?

In most of the developed world, the manufacturing jobs that were once the bedrock of industrialized economies have largely been marginalized, like the agricultural ones before them. The majority of people in wealthy societies work in services, ranging from sales to banking, largely because there is little else left to do. Many of these jobs don’t pay as well as manufacturing did, especially the majority of occupations in retail, tourism, and dining (which together represent a good chunk of the service industry). This partly explains why real wages have been stagnant for most Americans since the mid-1970s, when a combination of outsourcing and new technology began to eliminate blue-collar work.

But it’s no longer just the old, low-skilled industries that are threatened. The familiar trifecta of outsourcing, technology, and administrative innovation is continuing to erode the pool of jobs that remain. Several banking, paralegal, and clerical occupations – relatively high-end jobs – are being rendered obsolete. The internet has quickly become a popular venue for shopping for all sorts of goods and services, leaving me to question whether even person-to-person retail will eventually be outmoded. I wonder which jobs will no longer exist by the time I have kids old enough to join the labor force. And of those that remain, how many people will they really need?

Of course, I am only entertaining these speculations. I make no predictions, much less raise any alarm. But I think the issue is worth serious consideration given the continued influx of advanced technology, especially in light of what may be an unprecedented (and perhaps interrelated) global economic slowdown.

Assuming the hypothetical holds true, what should we do in response, if anything? Should such technology be restricted in order to preserve jobs? Would that be an ethical course of action? Should we change the way we view low-paying but (so far) non-automated tasks like lawn maintenance or domestic service, making them better paying and more professional? Do share your views on the matter – or, if not, at least give it some thought.

The Empathic Civilization

Hello everyone. As per my habit lately, given my time constraints and a bit of writer’s block, I’ve decided to keep my post brief – by letting someone else do the talking.

This video is part of a wonderful series that shares all sorts of informative videos on a number of diverse topics: society, technology, economics, progress, science, and so on. This particular video is by Jeremy Rifkin, an economist who focuses on society and ethics. It validates a major underlying theme in my life philosophy: that empathy and interconnection are the foundation of human progress, and that they are only strengthening with time.

I hope you all enjoy it. Expect me to share more videos like this, though I encourage you to check out the rest of the series yourselves. They’re as thought-provoking as they are inspiring.