How Screens Negatively Impact Health

Thanks to the boom in mobile technology — particularly smartphones and tablets — screens have become ubiquitous in modern society. It is almost impossible for most people to avoid exposing their eyes to some sort of screen for hours at a time, whether they are texting on their phones, bingeing shows and movies on Netflix, or playing video games.

In fact, it was the introduction of electricity that first began disrupting the three billion years of cyclical sunlight governing the functions of life. What has been the effect of increasingly undermining this cycle, by which humans have long been shaped?

Wired explores some of the troubling research emerging on whether, and how, ever-greater light exposure is negatively impacting us:

Researchers now know that increased nighttime light exposure tracks with increased rates of breast cancer, obesity and depression. Correlation isn’t causation, of course, and it’s easy to imagine all the ways researchers might mistake those findings. The easy availability of electric lighting almost certainly tracks with various disease-causing factors: bad diets, sedentary lifestyles, exposure to the array of chemicals that come along with modernity. Oil refineries and aluminum smelters, to be hyperbolic, also blaze with light at night.

Yet biology at least supports some of the correlations. The circadian system synchronizes physiological function—from digestion to body temperature, cell repair and immune system activity—with a 24-hour cycle of light and dark. Even photosynthetic bacteria thought to resemble Earth’s earliest life forms have circadian rhythms. Despite its ubiquity, though, scientists discovered only in the last decade what triggers circadian activity in mammals: specialized cells in the retina, the light-sensing part of the eye, rather than conveying visual detail from eye to brain, simply signal the presence or absence of light. Activity in these cells sets off a reaction that calibrates clocks in every cell and tissue in a body. Now, these cells are especially sensitive to blue wavelengths—like those in a daytime sky.

But artificial lights, particularly LCDs, some LEDs, and fluorescent bulbs, also favor the blue side of the spectrum. So even a brief exposure to dim artificial light can trick a night-subdued circadian system into behaving as though day has arrived. Circadian disruption in turn produces a wealth of downstream effects, including dysregulation of key hormones. “Circadian rhythm is being tied to so many important functions,” says Joseph Takahashi, a neurobiologist at the University of Texas Southwestern. “We’re just beginning to discover all the molecular pathways that this gene network regulates. It’s not just the sleep-wake cycle. There are system-wide, drastic changes.” His lab has found that tweaking a key circadian clock gene in mice gives them diabetes. And a tour-de-force 2009 study put human volunteers on a 28-hour day-night cycle, then measured what happened to their endocrine, metabolic and cardiovascular systems.

As the article later notes, it will take a lot more research to confirm a causal link between circadian disruption and this range of mental and physical problems. Anecdotal evidence suggests that, in the long term, too much exposure to screen light causes problems for many (though not all) people. But given the many other features of modern society that are just as culpable — long hours of work, constant overstimulation, sedentary living — identifying which aspects of the 21st-century lifestyle are responsible is difficult enough, let alone remedying them.

An Illuminating Interview About Philosophy and Science

Marx was not entirely wrong in arguing, in the Communist Manifesto, that “the history of all hitherto existing society is the history of class struggles”, but I am not convinced he identified the most profound struggle, which is actually between different ways of making sense of our life and giving meaning to it. We can bear almost anything, but not meaninglessness. Philosophy has withdrawn from the task of providing meaningful narratives, and this has left plenty of space to fundamentalists of all kind. We need philosophy to be intellectually engaged again, to shape the human project.

– Luciano Floridi, in an interview with Sincere Kirabo at OldPiano.org.

I recommend you click the hyperlink and check out the rest of the discussion. It is a very informative look at the intersection of philosophy and science, and at what lies ahead for both fields as they play an increasingly vital role in our fast-changing and troubled world.

Screentime Around The World

The following graph shows how much time people around the world spend looking at screens (television, PC, smartphone, and tablet).

Courtesy of Gizmodo, KPCB, and Quartz

Interestingly, developing countries make up most of the top viewers; the U.S. comes in sixth place overall but ranks highest in the developed world (the second-highest industrialized country, the U.K., is in fifteenth place overall). Japan, France, and Italy rank lowest among the surveyed countries, developed or otherwise.

The preferred medium of viewing also varies a lot from country to country: Americans and Britons watch more television than any other nation on the list, but Indonesians and Filipinos have them beat in terms of smartphone usage. Tablets seem to be far more popular in Asia (with the notable exceptions of Japan and South Korea) than in the rest of the world.

Pretty interesting stuff.

Source: Gizmodo

Amazing Scientific Achievements We’ll See Within A Decade

From StumbleUpon comes an exciting collection of twenty-three incredible technological developments to look forward to. While not all of them are guaranteed to be available or implemented by their projected dates, they’re all a lot more likely to happen in our lifetimes than we previously thought. Plus, it never hurts to hope!

2012

Ultrabooks – The last two years have been all about the tablet. Laptops, with their “untouchable” screens, have yet to match any tablet’s featherweight portability and zippy response times. However, by next year, ultraportable notebooks — Ultrabooks — will finally be available for under $1000, bringing a complete computing experience into areas of life that, until now, have only been partially served by smaller technologies such as tablets and smartphones. They weigh around three pounds, measure less than an inch thick, and use flash-based drives with no moving parts, delivering zippy-quick startups and load times.

The Mars Science Laboratory – By August 2012, the next mission to Mars will reach the Martian surface with a new rover named Curiosity focusing on whether Mars could ever have supported life, and whether it might be able to in the future. Curiosity will be more than 5 times larger than the previous Mars rover, and the mission will cost around $2.3 billion — or just about one and a half New Yankee Stadiums.

The paralyzed will walk. But perhaps not in the way that you’d imagine. Using a machine-brain interface, researchers are making it possible for otherwise paralyzed humans to control neuroprostheses — essentially mechanical limbs that respond to human thought — allowing them to walk and regain bodily control. The same systems are also being developed for the military, which one can only assume means this project won’t flounder due to a lack of funding.

2013

The Rise of Electronic Paper – Right now, e-paper is pretty much only used in e-readers like the Kindle, but it’s something researchers everywhere are eager to expand upon. Full-color video integration is the obvious next step, and as tablet prices fall, it’s likely that newspapers in their current form will soon be eradicated. The good news: less deforestation, and more user control over your sources.

4G will be the new standard in cell phone networks. What this means: your phone will download data about as fast as your home computer can. While you’ve probably seen lots of 4G banter from the big cell providers, it’s not yet widely available on most phones. However, both Verizon and the EU intend to do away with 3G entirely by 2013, which will essentially bring broadband-level speeds to wireless devices on cell networks. It won’t do away with standard internet providers, but it will bring “worldwide WiFi” capabilities to anyone with a 4G data plan.

The Eye of Gaia, a billion-pixel telescope, will be sent into space this year to begin photographing and mapping the universe on a scale that was until recently impossible. With the human eye, one can see several thousand stars on a clear night; Gaia will observe more than a billion over the course of its mission — about 1% of all the stars in the Milky Way. It will also look far beyond our own galaxy, even as far as the end of the (observable) universe.

2014

A 1 Terabyte SD Memory Card probably seems like an impossibly unnecessary technological investment. Many computers still don’t come with that much storage, much less the SD memory cards that fit in your digital camera. Yet thanks to Moore’s Law, we can expect the 1TB SD card to become commonplace in 2014, and increasingly necessary given the ever-larger swaths of data and information we exchange every day (thanks to technologies like memristors and our ever-increasing connectedness). The only disruptive factor here could be the rise of cloud computing, but as data volumes and transfer speeds continue to rise, it’s inevitable that we’ll need a physical place to store our digital stuff.
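
To make the arithmetic behind that prediction concrete, here is a minimal sketch of an exponential-doubling model in Python. The 128 GB starting point in 2011 and the doubling periods are illustrative assumptions, not quoted figures; notably, the 2014 date only works out under the aggressive assumption that capacity doubles every year rather than every two.

```python
# Minimal sketch: projecting storage capacity under steady doubling.
# The starting capacity (128 GB in 2011) and the doubling periods are
# assumptions for illustration, not quoted specs.

def projected_capacity_gb(start_gb: float, start_year: int,
                          target_year: int, doubling_years: float) -> float:
    """Capacity after (target_year - start_year) years of steady doubling."""
    doublings = (target_year - start_year) / doubling_years
    return start_gb * 2 ** doublings

for doubling_years in (1.0, 2.0):
    cap = projected_capacity_gb(128, 2011, 2014, doubling_years)
    print(f"Doubling every {doubling_years:.0f} yr: {cap:,.0f} GB by 2014")
# Doubling every 1 yr: 1,024 GB by 2014  (the 1TB card arrives on schedule)
# Doubling every 2 yr: 362 GB by 2014    (it does not)
```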

The first around-the-world flight by a solar-powered plane will be accomplished by now, bringing truly clean energy to air transportation for the first time. Consumer models are still far down the road, but you don’t need to let your imagination wander too far to figure out that this is definitely a game-changer. Consider this: it took humans quite a few millennia to figure out how to fly, and only a fraction of that time to do it with solar power.

The Solar Impulse, to be flown around the world. Photo by Stephanie Booth

The world’s most advanced polar icebreaker is currently being developed as part of the EU’s scientific development goals and is scheduled to launch in 2014. As global average temperatures continue to climb, understanding of and attention to the polar regions will be essential for monitoring their rapidly changing climates — and this icebreaker will be up to the task.

$100 personal DNA sequencing is what’s being promised by a company called BioNanomatrix, whose founder, Han Cao, has made it possible through his invention of the ‘nanofluidic chip.’ What this means: with an individual genome cheap to sequence, a doctor could biopsy a tumor, sequence its DNA, and use that information to determine a prognosis and prescribe treatment for less than the cost of a modern-day x-ray. And by inspecting the cancer’s DNA specifically, treatment can be applied with far more specific — and effective — accuracy.

2015

The world’s first zero-carbon, sustainable city, in the form of Masdar City, will be initially completed just outside of Abu Dhabi. The city will derive power solely from solar and other renewable resources and offer homes to more than 50,000 people.

Personal 3D Printing is currently reserved for those with extremely large bank accounts or an equally large understanding of 3D printing; but by 2015, printing in three dimensions (essentially personal manufacturing) will become common practice in the household and in schools. Current affordable solutions include do-it-yourself kits like MakerBot, but in four years it should look more like a compact version of the uPrint. Eventually, this technology could lead to things like nano-fabricators and matter replicators — but not for at least a few decades.

2016

Space tourism will hit the mainstream. Well, sorta. Right now it costs around $20-30 million to blast off and chill at the International Space Station, or $200,000 for a sub-orbital spaceflight from Virgin Galactic. But the market is growing faster than most realize: within five years, companies like Space Island, Galactic Suite, and Orbital Technologies may realize their company missions, with space tourism packages ranging from $10,000 up-and-backs to $1 million five-night stays in an orbiting hotel suite.

The sunscreen pill will hit the market, protecting the skin as well as the eyes from UV rays. By reverse-engineering the way coral reefs shield themselves from the sun, scientists are very optimistic about the possibility, much to the dismay of sunscreen producers everywhere.

A Woolly Mammoth will be reborn among other now-extinct animals in 2016, assuming all goes according to the current plans of Japan’s Riken Center for Developmental Biology. If they can pull it off, expect long lines at Animal Kingdom.

2017

Portable laser pens that can seal wounds – Imagine you’re hiking fifty miles from the nearest human, and you slip, busting your knee wide open, gushing blood. Today, you’d face a real risk of serious blood loss — but in less than a decade you might be carrying a portable laser pen capable of sealing you back up, Wolverine-style.

2018

Light Peak technology, a method of super-high-speed data transfer, will enable more than 100 gigabits per second — and eventually whole terabits per second — within everyday consumer electronics. This would allow entire hard drives to be copied in minutes, eventually seconds, although by this time the standard hard drive will probably be well over 2TB.
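
For a sense of scale, here is the back-of-the-envelope arithmetic, assuming the rates above are measured in gigabits per second (in line with Light Peak’s published roadmap) and ignoring protocol overhead and drive read/write speeds, which would dominate in practice:

```python
# Back-of-the-envelope transfer time for a full-drive copy over an
# ideal link; real copies are limited by protocol overhead and, above
# all, by how fast the drives themselves can read and write.

def transfer_seconds(drive_terabytes: float, link_gigabits_per_s: float) -> float:
    drive_bits = drive_terabytes * 1e12 * 8        # terabytes -> bits
    return drive_bits / (link_gigabits_per_s * 1e9)

for rate in (10, 100, 1000):                       # link speed in Gbit/s
    print(f"2 TB drive over {rate:>4} Gbit/s: {transfer_seconds(2, rate):,.0f} s")
# 2 TB drive over   10 Gbit/s: 1,600 s (about 27 minutes)
# 2 TB drive over  100 Gbit/s: 160 s (under 3 minutes)
# 2 TB drive over 1000 Gbit/s: 16 s
```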

Insect-sized robot spies aren’t far off from becoming a reality, with the military currently hard at work to bring Mission Impossible-style tech to the espionage playground. Secret weapon: immune to bug spray.

2019

The average PC has the power of the human brain. According to Ray Kurzweil, who has a better grip on the future than probably anyone else, the Law of Accelerating Returns will usher in an exponentially greater amount of computing power than ever before.

Web 3.0 – What will it look like? Is it already here? It’s always difficult to tell just where we stand in terms of technological chronology. But if we assume that Web 1.0 was based only upon hyperlinks, and Web 2.0 is based on the social, person-to-person sharing of links, then Web 3.0 uses a combination of socially-sourced information, curated by a highly refined, personalizable algorithm (“they” call it the Semantic Web). We’re already in the midst of it, but it’s still far from its full potential.

Energy from a fusion reactor has always seemed just out of reach. Fusion essentially produces enormous amounts of energy from a tiny amount of fuel, but it requires a machine that can contain a reaction occurring at over 125,000,000 degrees. However, right now in southern France, the fusion reactor of the future is being built to power up by 2019, with estimates of full-scale fusion power available by 2030.

2020

Crash-proof cars have been promised by Volvo, to be made possible by using radar, sonar, and driver alert systems. Considering automobile crashes kill over 30,000 people in the U.S. per year, this is definitely a welcome technology.

2021

So, what should we expect in 2021? Well, 10 years ago, what did you expect to see now? Did you expect the word “Friend” to become a verb? Did you expect your twelve-year-old brother to stay up texting until 2am? Did you expect 140-character messaging systems enabling widespread revolutions against decades-old dictatorial regimes?

The next 10 years will be an era of unprecedented connectivity; this much we know. It will build upon the social networks, both real and virtual, that we’ve all played a role in constructing, bringing together ideas that would otherwise have remained distant, unknown strangers. Without Twitter and a steady drip of mainstream media, would we have ever so strongly felt the presence of the Arab Spring? What laughs, gasps, or loves, however fleeting, would have been lost if not for Chatroulette? Keep in mind that as our connections grow wider and more intimate, so too will the frequency of our connectedness — and as such, your own understanding of just what kinds of relationships are possible will be stretched and revolutionized as much as any piece of hardware.

Truly, the biggest changes we’ll face will not come in the form of any visible technology; the changes that matter most, as they always have, will occur in those places we know best but can never quite see: our own hearts and minds.

The last three paragraphs are the most salient to me. Whatever fantastic developments the future holds, we can all agree that much of it will be unexpected, no matter how hard we try to prepare and predict. That’s neither a good nor a bad thing; it just is.

Happy 25th Birthday World Wide Web!

March 12 was the 25th anniversary of the World Wide Web, otherwise known simply as the Web, a system of interlinked hypertext documents accessed via the Internet. On that day in 1989, Tim Berners-Lee, a British computer scientist and engineer at CERN, wrote a proposal to his administrators for developing an effective communication system to be used by the organization’s members.

He eventually realized the wider applications of this concept, and teamed up with Belgian computer scientist Robert Cailliau in 1990 to further refine the concept of a hypertext system that would “link and access information of various kinds as a web of nodes in which the user can browse at will”. Hypertext is simply text displayed on a computer with references to other text via “hyperlinks”. Berners-Lee finished the first website in December of that year, which you can still see here (for information on the first image ever uploaded, which was a GIF, click here). 

It’s amazing how far the web has come since that humble page, and one can only wonder where it will be another 25 years from now. Berners-Lee shares his thoughts on the future of the Internet in general here, and I recommend you give it a read.

Note that despite being used interchangeably, the Internet and the Web are two distinct things: the former is a massive networking infrastructure that connects millions of computers together globally — a network of networks, so to speak. Information that travels over the Internet does so via a variety of languages known as protocols.

The Web, on the other hand, is a way of accessing that information using the HTTP protocol, which is only one of many languages used on the Internet to transmit data. Email, for example, relies on the SMTP protocol, and therefore isn’t technically part of the Web.
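
For the technically curious, here is a minimal Python sketch of that distinction: both snippets travel over the same Internet, but only the first speaks HTTP, the Web’s protocol. The mail-server address is a placeholder, so treat the second half as purely illustrative.

```python
from urllib.request import urlopen  # speaks HTTP, the Web's protocol
import smtplib                      # speaks SMTP, email's protocol

# The Web: fetch a hypertext document over HTTP.
# (info.cern.ch hosts the first website mentioned above.)
with urlopen("http://info.cern.ch/") as response:
    print(response.status, response.headers.get_content_type())

# Email: the same Internet, a different protocol entirely, and thus
# not part of the Web. Assumes a mail server listening on localhost.
with smtplib.SMTP("localhost") as server:
    server.sendmail("alice@example.com", "bob@example.com",
                    "Subject: Hello\r\n\r\nSent over SMTP, not HTTP.")
```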

Clive Thompson on the Death of the Phone Call

It goes without saying that the way we communicate is changing: according to Nielsen, both the number of mobile phone calls and their length are dropping every year; in 2005, calls averaged three minutes in length, dropping to almost half that as of 2013.

Needless to say, this development has caused a lot of apprehension and concern — particularly among older generations — who fret about a decline in social skills, friendships, and overall quality of life as people become isolated. But is this really the case? Does changing our medium of communication necessarily mean a decline in the quality or value of communication?

Clive Thompson of Wired magazine presents a rarely seen, somewhat positive view of this change: it is neither necessarily good nor bad; it simply is.

This generation doesn’t make phone calls, because everyone is in constant, lightweight contact in so many other ways: texting, chatting, and social-network messaging. And we don’t just have more options than we used to. We have better ones: These new forms of communication have exposed the fact that the voice call is badly designed. It deserves to die.

Consider: If I suddenly decide I want to dial you up, I have no way of knowing whether you’re busy, and you have no idea why I’m calling. We have to open Schrödinger’s box every time, having a conversation to figure out whether it’s OK to have a conversation. Plus, voice calls are emotionally high-bandwidth, which is why it’s so weirdly exhausting to be interrupted by one. (We apparently find voicemail even more excruciating: Studies show that more than a fifth of all voice messages are never listened to.)

The telephone, in other words, doesn’t provide any information about status, so we are constantly interrupting one another. The other tools at our disposal are more polite. Instant messaging lets us detect whether our friends are busy without our bugging them, and texting lets us ping one another asynchronously. (Plus, we can spend more time thinking about what we want to say.) For all the hue and cry about becoming an “always on” society, we’re actually moving away from the demand that everyone be available immediately.

In fact, the newfangled media that’s currently supplanting the phone call might be the only thing that helps preserve it. Most people I know coordinate important calls in advance using email, text messaging, or chat (r u busy?). An unscheduled call that rings on my phone fails the conversational Turing test: It’s almost certainly junk, so I ignore it. (Unless it’s you, Mom!)

What do you think of this assessment?

Iran’s Surena 2 Robot

Surena II

The Surena 2 robot was designed by engineers at the University of Tehran and unveiled in 2010. The Institute of Electrical and Electronics Engineers (IEEE) has ranked it among the five most prominent robots in the world in terms of performance and advancement.

Convenience vs. Freedom

The right to privacy has been one of the defining issues of the modern world, especially since the advent of the internet, which has facilitated the vast and rapid exchange of information — often beyond our apparent individual control. The Babbage technology column in The Economist highlights one of the complex dynamics of this issue: the trade-off between the conveniences the web brings — many of them free services, from social media to search — and the need to give up a certain measure of privacy as a payment of sorts.

It has been said many times, but the fact remains that anything users share over the internet will inevitably be bought and sold and, sooner or later, used against them in some way. That is the price people tacitly accept for the convenience of using popular web services free of charge.

The corollary, of course, is that if individuals are not paying for some online product, they are the product. And collecting information about the product (users) enhances its value for the service’s actual customers (advertisers, corporate clients and government agencies) who pay the bills. That is how the business model works. Those who do not like it can choose not to use such free services and find paid alternatives instead that promise greater privacy. Though limited, they do exist.

Granted, the internet is an inherently social force driven by networks and trends, so most people tend to gravitate to the services that most other people are already using. Seeking out alternatives is thus easier said than done, since those who do so will likely find themselves alone or out of the loop.

Perhaps the fact that most people have (thus far) chosen to use the services that mine their data says something about what we value more, or about how benign the whole process ultimately seems. Indeed, it’s pretty much become a given that this is how the internet works:

Along with other internet companies, Google mines the data it collects from users for two purposes. One is to improve the user experience, making its various online services more personal, useful and rewarding for the individual—and thereby increasing their popularity. The other purpose is to provide better targeted information for advertisers.

Like other firms offering free services, Google makes its living out of matching the right kind of advertising to the specific interests of its individual users. To do so, it needs to know their likes and purchases as well as their identifiers and demographics, including name, sex, age, address, current location and income bracket.

If truth be told, no-one needs to eavesdrop to discover such things. People willingly volunteer all manner of facts about themselves when registering or subscribing to various online services. Scraping such information off social networks and combining it with data drawn from sites for searching, shopping, downloading, streaming or whatever lets social marketers infer all they need to know about most individuals.

That is fine for the vast majority of internet users, who are happy to trade a measure of privacy for the convenience of using popular sites like Google, Facebook, Twitter, Flickr and YouTube. That such convenience comes free of charge makes the trade an even better deal. But where to draw the line?

There’s already growing concern about whether we’ve gone too far in willingly giving up our information to internet companies (and whether those firms themselves have been crossing the line). But as the article notes, most of what we share is already public knowledge: the stuff we want to buy, the movies we like, the personal views we hold, and so on. Isn’t it unreasonable to expect companies offering a free service not to, at the very least, utilize this innocuous data to sustain their operations? (After all, there are overhead costs to cover.) Well, that’s where it gets a little complicated:

It is one thing to reveal personal preferences such as favourite films, TV shows, dishes, books or music tracks. However, most people (though not all) stop short of blurting out more intimate details about their private lives. Even so, all those innocuous bits of self-revelation can be pieced together, jig-saw fashion, by intelligent algorithms. Throw in the digital paper-trails stashed in Google searches and Amazon purchases, and things can begin to get a little scary.

Babbage’s teenage daughter, for instance, uses his Amazon account and credit card to buy everything from romantic novels to cosmetics and underwear. As a result, he gets bombarded by e-mails recommending other female items he might like to purchase. Anyone leaning over his shoulder could easily label him a pervert or worse.

So is the onus on companies to refrain — or be legally forced to refrain — from prying into our more personal tastes and habits? Or is this once again a small price to pay for the convenience that such “personalized” services offer? Maybe we’re the ones who need to take action, as the Babbage columnist suggests:

But with the convenience of using free online services, even those offered by major brands, comes the responsibility to be personally vigilant, to watch out for oneself—and to be willing to pay for services that offer higher levels of security, freedom from advertising, or simply a better quality of service all round. One of Babbage’s colleagues says he would happily pay for Twitter if it provided proper analytics. He would pay for Facebook, too, if it did not compress his photographs so much.

Ultimately, though, Babbage is more concerned about identity theft than with Google selling his likes and dislikes to advertisers. This is one of the fastest growing white-collar crimes around, with an identity being stolen somewhere at least once every four seconds (see “Your life in their hands”, March 23rd 2007). The average cost of restoring a stolen identity is reckoned to be $8,000, and victims spend typically 600 hours dealing with the nightmare—plus many years more restoring their good name and credit record.

Personally, I side with Babbage in that I see little conceivable harm in the mining of already-public or innocuous data, but do feel that the potential for identity theft is a growing and real threat, one made easier by all the data flowing around on the web. And what about private messages or email? What would happen if someone were to hack into our Facebook, Google, or smartphone accounts and expose our personal conversations? Are we to just avoid such exchanges on the web?

And will our tacit acceptance of this arrangement lead to a blurring of what’s public and private? As people become more willing and able to share more about themselves online, do we risk undermining our own sense of personal space? Maybe it doesn’t even matter and we’re happy to share these things; after all, humans have always been keen on self-expression, and the rate of doing so has always increased alongside advances in media (consider the then-unprecedented outflow of opinions and data once the printing press was invented).

Anyway, what are your thoughts?

Link

Africa’s Futuristic “New Cities”

Africa rarely comes to mind as a hub of technological and infrastructural innovation (indeed, most people I know seem to forget that it even has modern cities to begin with).

But despite its immense social, economic, and political challenges, the increasingly vibrant continent is brimming with potential and economic growth — and therefore optimism. Hence some of these ambitious urban plans, some of which rival science fiction in their sleekness and engineering brilliance.

Konza – Kenya

Dubbed “Africa’s Silicon Savannah,” Konza Techno City is the Kenyan government’s flagship mega-project, designed to foster the growth of the country’s technology industry.

The multi-billion dollar city, located on a 5,000-acre plot of land some 60 kilometers southeast of the capital Nairobi, aims to create nearly 100,000 jobs by 2030.

It will feature a central business district, a university campus, urban parks and housing to accommodate some 185,000 people.

Appolonia, King City — Ghana

Designed by Rendeavour, the urban development branch of Moscow-based Renaissance Group, Appolonia and King City will be located in Greater Accra and Western Ghana respectively.

The mixed-use satellite cities are expected to accommodate more than 160,000 residents on land developed for housing properties, retail and commercial centers, as well as schools, healthcare and other social amenities.

Rendeavour says that all baseline studies, master plans and detailed designs have been completed and approved, while basic infrastructure work in Appolonia is expected to begin in the third quarter of 2013.

Eko Atlantic — Nigeria

Eko Atlantic is a multi-billion dollar residential and business development that will be located on Victoria Island in Lagos, along its upmarket Bar Beach coastline.

Read this: Lagos of the future

The ambitious project is being built on 10 square kilometers of land reclaimed from the Atlantic Ocean.

Eko Atlantic is expected to provide upscale accommodation for 250,000 people and employment opportunities for a further 150,000.

Tatu City – Kenya

Also being developed by Rendeavour, Tatu City will span 1,035 hectares of land some 15 kilometers from Nairobi.

It is designed to create a new decentralized urban center to the north of the bustling Kenyan capital.

Construction work began last May and the whole project is projected to be completed in 10 phases by 2022. When finalized, the mixed-use satellite city is expected to be home to 77,000 residents.

La Cite du Fleuve — Democratic Republic of Congo

La Cite du Fleuve is a luxurious housing project planned for two islands on the Congo River in Kinshasa, the capital of the Democratic Republic of Congo and one of Africa’s fastest growing cities.

Developer Hawkwood Properties plans to reclaim about 375 hectares of sandbanks and swamps to build thousands of riverside villas, offices and shopping centers over the next 10 years.

It says that more than 20 hectares of land have already been reclaimed.

Hope City — Ghana

Hope City is a $10 billion high-tech hub that will be built outside Accra, aiming to turn Ghana into a major ICT player.

The planned hub, which is hoped will house 25,000 residents and create jobs for 50,000 people, will be made up of six towers of different dimensions, including a 75-story, 270 meter-high building that is expected to be the highest in Africa.

Ghanaian company RLG Communications is financing 30% of the project, while the remainder will be funded by a wide array of investors and through a stock-buying scheme. Its sustainable facilities will include an assembly plant for various tech products, business offices, an IT university and a hospital, as well as restaurants, theaters and sports centers.

Kigali — Rwanda

The capital and biggest city of Rwanda has launched an ambitious urban development plan to transform itself into the “center of urban excellence in Africa.” The bold and radical 2020 Kigali Conceptual Master Plan includes all the hallmarks of a regional hub for business, trade and tourism. It envisages Singapore-like commercial and shopping districts boasting glass-box skyscrapers and modern hotels, as well as green-themed parks and entertainment facilities.

Of course, given the many issues these nations face, there is still plenty of understandable skepticism:

Critics warn that many of these new developments will only serve a tiny elite, exacerbating an already deep divide between the haves and have-nots.

“They are essentially designed for people with money,” says Vanessa Watson, professor of city planning at the University of Cape Town. She describes many of the plans as unsustainable “urban fantasies” that ignore the reality of African cities, where most people are still poor and live informally.

“What many of these new cities are doing will result in the exclusion and the forced removal of those kind of informal areas, which quite often are on well-located land,” says Watson. In some cases, entire settlements have been relocated and large plots of land have been cleared to make way for the proposed projects.

Critics also bemoan a lack of adequate research to gauge the impact of some new developments on the local environment and economies.

They point out the “ghost town” of Kilamba in Angola, a grandiose project often labeled as a white elephant. Built afresh outside the capital Luanda, Kilamba was designed to accommodate hundreds of thousands of people but remains largely empty due to its expensive housing and unfavorable location.

I can still dream at least.