
Taking Environmental Sustainability to the Next Level

It is one thing to design buildings that minimize impact on the environment. But what about creating structures that play a regenerative role, contributing positively and directly to surrounding ecosystems? CityLab explores this intriguing and recent concept:

The idea is not to be satisfied with efficiency for its own sake. Regenerative design aspires to an active participation in ecosystems all around. A green roof is pleasant for humans and reduces energy consumption in the building underneath; a regenerative green roof not only does that but is intentionally designed to support butterflies or birds that have otherwise vacated an urban area.

Capturing rainwater, recycling graywater, and treating wastewater on-site are all great for reducing overall water consumption. But in regenerative design, these strategies are only optimal if they recharge the local aquifer as well.

Similarly, building materials shouldn’t only be viewed in the context of minimizing damage and the consumption of resources; they should be put to work for the planet. The use of wood thus becomes at its core a carbon sequestration strategy. The carbon soaked up by older trees—harvested in sustainable forestry practices, cutting them down before they fall and rot and release emissions back into the atmosphere—gets taken out of the cycle, permanently tucked away as beams and pillars and walls.

Given the novelty of the idea, there are no working models just yet. The article highlights the closest example of a regenerative system: the VanDusen Botanical Garden Visitor Centre in Vancouver (pictured).

Waste from the toilets is harvested to be mixed with food waste composting, while the water is separated out and purified for use in irrigation. Rammed-earth building blocks were formed by dredging ponds on the site, and the deeper water in turn led to a healthier ecosystem. The equivalent of staircases encourages all kinds of critters to get up to the green roof and feed; coyotes have been spotted up there.

Given the twin trends of rapid urbanization and equally rapid environmental degradation, this is definitely an idea well worth exploring and investing in further. What are your thoughts?


This wind turbine generates power without blades

Eupraxsophy:

It is cheaper to build and operate than conventional models, and though not as energy-efficient individually, these turbines can collectively cover more ground to make up for it. Sounds like something well worth investing in.

And of course, this is coming out of Spain, a world leader in wind power.

Originally posted on Quartz:

Growing interest in alternative energy sources has made the three-pronged white metal wind turbines dotted across open landscapes a familiar sight.

But thanks to a Spanish energy startup known as Vortex Bladeless, there’s a new type of turbine in town with a rather different look—and the potential to be cheaper and more reliable. Vortex’s generator resembles a giant straw in the ground and harnesses wind energy without the need for rotating windmill blades. It’s designed to vibrate in the wind as much as possible, like a guitar string; those vibrations are then converted into stored energy.

According to the company’s website, the Vortex turbines are 53% cheaper to manufacture and 51% cheaper to operate than traditional wind turbines. This is in part due to their lack of moving parts—there just aren’t that many components to break. Their current model, the 41-foot Vortex Mini tube, captures around 30% less energy than a traditional wind…


How Machines Will Conquer The Economy

From Zeynep Tufekci over at the New York Times:

But computers do not just replace humans in the workplace. They shift the balance of power even more in favor of employers. Our normal response to technological innovation that threatens jobs is to encourage workers to acquire more skills, or to trust that the nuances of the human mind or human attention will always be superior in crucial ways. But when machines of this capacity enter the equation, employers have even more leverage, and our standard response is not sufficient for the looming crisis.

Machines aren’t used because they perform some tasks that much better than humans, but because, in many cases, they do a “good enough” job while also being cheaper, more predictable and easier to control than quirky, pesky humans. Technology in the workplace is as much about power and control as it is about productivity and efficiency.

This is the way technology is being used in many workplaces: to reduce the power of humans, and employers’ dependency on them, whether by replacing, displacing or surveilling them. Many technological developments contribute to this shift in power: advanced diagnostic systems that can do medical or legal analysis; the ability to outsource labor to the lowest-paid workers, measure employee tasks to the minute and “optimize” worker schedules in a way that devastates ordinary lives. Indeed, regardless of whether unemployment has gone up or down, real wages have been stagnant or declining in the United States for decades. Most people no longer have the leverage to bargain.

I can think of no better justification for implementing a guaranteed basic income than this trend. How much longer until we run out of sustainable employment to support our population? Already, in the United States and elsewhere, most fast-growing sectors are low-paying service jobs like fast food and retail; even the professions that should ostensibly pay well, such as those requiring degrees or experience, increasingly do not.

Most people are already running out of alternatives for liveable, meaningful work — and now mechanization and automation threaten to undermine what comparatively little remains. I think this says a lot more about the social, economic, and moral failings of our society than it does about technology.

Why should everything be hyper-efficient at the expense of workers — who are also consumers and thus drivers of the economy? Why should we have a business culture, or indeed an economic and social structure, whereby those at the top must ruthlessly undercut the leverage and well-being of everyone else, whom they nonetheless depend on? If we want to optimize production and cost-effectiveness, which are of course not bad aims, then why not do so while providing some alternative means of survival for those who get displaced?

How we respond to this trend will speak volumes about our values, priorities, and moral grounding.

Skyscraper Farms

Despite being one of the most densely populated countries in the world, the Netherlands manages to have one of the most efficient and productive agricultural sectors, second only to the United States (a far bigger country) in value of exports.

In light of that, perhaps it is fitting that a Dutch company should lead the way in the new concept of “high-rise farming”. As Mic.com reports:

PlantLab, a Dutch agriculture firm, wants to construct “plant production units,” spaces made for growing plants and vegetables. Each unit is customizable, able to adjust and control anything from the amount and kind of light received, a major variable for photosynthesis, to how large the space needs to be — anything from a garden the size of a microwave to a skyscraper.

By either constructing buildings, or, potentially more sustainably, retrofitting existing, unused buildings, PlantLab believes they can construct spaces where plants will grow faster and more efficiently.

This means the entirety of California’s almond-growing operation could be put into something the size of a Best Western hotel, while also cutting out pesticides, producing three to five times more almonds and using 90% less water thanks to smarter hydration — all without tweaking the almond’s genetics.

Here is a proof of concept of sorts from the company’s official YouTube:

The implications of this idea are vast. Suddenly, regions of the world lacking resources or an appropriate climate can grow any number of crops to suit local needs. So much space can be freed, and environments spared, while giving immediate access to food. It is also a great way to make use of otherwise derelict buildings — imagine how many decaying cities and suburbs could be turned into thriving agricultural centers.

PlantLab claims that with this approach, it will only need space equal to about one third of the U.S. state of Hawaii to feed the world’s population. A part of me is skeptical of this, but with some analysts projecting a global food shortage by 2050, I want to be hopeful.

The company’s TED Talk in Brainport, Netherlands, is certainly intriguing.

Granted, the world already produces enough food to feed its inhabitants. Most global hunger is attributed to the inequities and inefficiencies of the global food market, as well as various shortcomings in infrastructure, investment, and transportation. None of this means that we should give up on finding solutions to improve food production; rather it is just one component of a very complicated problem.

How Screens Negatively Impact Health

Thanks to the boom in mobile technology — particularly smartphones and tablets — screens have become ubiquitous in modern society. It is almost impossible for most people to avoid exposing their eyes to some sort of screen for hours at a time, whether it is texting on your phone, bingeing shows and movies on Netflix, or playing video games.

In fact, the introduction of electricity is what first began to disrupt the cycle of sunlight that has governed the functions of life for 3 billion years. What has been the effect of increasingly undermining this cycle, which has long shaped humanity?

Wired explores some of the troubling research emerging on whether and how ever-greater light exposure is negatively impacting us:

Researchers now know that increased nighttime light exposure tracks with increased rates of breast cancer, obesity and depression. Correlation isn’t causation, of course, and it’s easy to imagine all the ways researchers might mistake those findings. The easy availability of electric lighting almost certainly tracks with various disease-causing factors: bad diets, sedentary lifestyles, exposure to the array of chemicals that come along with modernity. Oil refineries and aluminum smelters, to be hyperbolic, also blaze with light at night.

Yet biology at least supports some of the correlations. The circadian system synchronizes physiological function—from digestion to body temperature, cell repair and immune system activity—with a 24-hour cycle of light and dark. Even photosynthetic bacteria thought to resemble Earth’s earliest life forms have circadian rhythms. Despite its ubiquity, though, scientists discovered only in the last decade what triggers circadian activity in mammals: specialized cells in the retina, the light-sensing part of the eye, rather than conveying visual detail from eye to brain, simply signal the presence or absence of light. Activity in these cells sets off a reaction that calibrates clocks in every cell and tissue in a body. Now, these cells are especially sensitive to blue wavelengths—like those in a daytime sky.

But artificial lights, particularly LCDs, some LEDs, and fluorescent bulbs, also favor the blue side of the spectrum. So even a brief exposure to dim artificial light can trick a night-subdued circadian system into behaving as though day has arrived. Circadian disruption in turn produces a wealth of downstream effects, including dysregulation of key hormones. “Circadian rhythm is being tied to so many important functions”, says Joseph Takahashi, a neurobiologist at the University of Texas Southwestern. “We’re just beginning to discover all the molecular pathways that this gene network regulates. It’s not just the sleep-wake cycle. There are system-wide, drastic changes”. His lab has found that tweaking a key circadian clock gene in mice gives them diabetes. And a tour-de-force 2009 study put human volunteers on a 28-hour day-night cycle, then measured what happened to their endocrine, metabolic and cardiovascular systems.

As the article later notes, it will take a lot more research to confirm a causal link between disrupting the circadian rhythm and suffering a range of mental and physical problems. Anecdotal evidence suggests that in the long term, for many (though not all) people, too much exposure to screen light can cause problems. But given the many other features of modern society that are just as culpable — long hours of work, constant overstimulation, sedentary living — identifying which aspects of the 21st-century lifestyle are responsible can be difficult, let alone resolving them.

An Illuminating Interview About Philosophy and Science

Marx was not entirely wrong in arguing, in the Communist Manifesto, that “the history of all hitherto existing society is the history of class struggles”, but I am not convinced he identified the most profound struggle, which is actually between different ways of making sense of our life and giving meaning to it. We can bear almost anything, but not meaninglessness. Philosophy has withdrawn from the task of providing meaningful narratives, and this has left plenty of space to fundamentalists of all kind. We need philosophy to be intellectually engaged again, to shape the human project.

— Luciano Floridi, in an interview with Sincere Kirabo at OldPiano.org.

I recommend you click the hyperlink and check out the rest of the discussion. It is a very informative look at the intersection between philosophy and science, and what lies ahead for both fields as they play an increasingly vital role in our fast-changing and troubled world.

Screentime Around The World

The following graph looks at how much time the world spends looking at screens (television, PC, smartphone, and tablet).

Courtesy of Gizmodo, KPCB, and Quartz


Interestingly, developing countries make up most of the top viewers, with the U.S. coming in sixth place overall but ranking the highest in the developed world (the second highest industrialized country, the U.K., is in fifteenth place overall). Japan, France, and Italy rank the lowest among surveyed countries and the developed world.

The preferred medium of viewing varied a lot from country to country as well: Americans and Britons watch the most television of any nation on the list, but Indonesians and Filipinos have them beat in terms of smartphone usage. Tablets seem to be far more popular in Asia (with the notable exceptions of Japan and South Korea) than in the rest of the world.

Pretty interesting stuff.

Source: Gizmodo

Amazing Scientific Achievements We’ll See Within A Decade

From StumbleUpon comes an exciting collection of twenty-three incredible technological developments to look forward to. While not all of these are guaranteed to be available or implemented by their projected dates, they are all a lot more likely to happen in our lifetimes than we previously thought. Plus, it never hurts to hope!

2012

Ultrabooks – The last two years have been all about the tablet. Laptops, with their “untouchable” screens, have yet to match any tablet’s featherweight portability and zippy response times. However, by next year, ultraportable notebooks — Ultrabooks — will finally be available for under $1000, bringing a complete computing experience into areas of life which, until now, have only been partially filled by smaller technologies such as tablets and smartphones. They weigh around three pounds, measure less than an inch thick, and the hard drives are flash-based, which means they’ll have no moving parts, delivering zippy-quick startups and load times.

The Mars Science Laboratory – By August 2012, the next mission to Mars will reach the Martian surface with a new rover named Curiosity focusing on whether Mars could ever have supported life, and whether it might be able to in the future. Curiosity will be more than 5 times larger than the previous Mars rover, and the mission will cost around $2.3 billion — or just about one and a half New Yankee Stadiums.

The paralyzed will walk. But perhaps not in the way that you’d imagine. Using a machine-brain interface, researchers are making it possible for otherwise paralyzed humans to control neuroprostheses — essentially mechanical limbs that respond to human thought — allowing them to walk and regain bodily control. The same systems are also being developed for the military, which one can only assume means this project won’t flounder due to a lack of funding.

2013

The Rise of Electronic Paper – Right now, e-paper is pretty much only used in e-readers like the Kindle, but it’s something researchers everywhere are eager to expand upon. Full-color video integration is the obvious next step, and as tablet prices fall, it’s likely newspapers in their current form will soon be eradicated. The good news: less deforestation, and more user control over your sources.

4G will be the new standard in cell phone networks. What this means: your phone will download data about as fast as your home computer can. While you’ve probably seen lots of 4G banter from the big cell providers, it’s not very widely available in most phones. However, both Verizon and the EU intend to do away with 3G entirely by 2013, which will essentially bring broadband-level speeds to wireless devices on cell networks. It won’t do away with standard internet providers, but it will bring “worldwide WiFi” capabilities to anyone with a 4G data plan.

The Eye of Gaia, a billion-pixel telescope, will be sent into space this year to begin photographing and mapping the universe on a scale that was until recently impossible. With the human eye, one can see several thousand stars on a clear night; Gaia will observe more than a billion over the course of its mission — about 1% of all the stars in the Milky Way. As well, it will look far beyond our own galaxy, even as far as the end of the (observable) universe.

2014

A 1 Terabyte SD Memory Card probably seems like an impossibly unnecessary technological investment. Many computers still don’t come with that much memory, much less SD memory cards that fit in your digital camera. Yet thanks to Moore’s Law we can expect that the 1TB SD card will become commonplace in 2014, and increasingly necessary given the much larger swaths of data and information that we’re constantly exchanging every day (thanks to technologies like memristors and our increasing ever-connectedness). The only disruptive factor here could be the rise of cloud-computing, but as data and transfer speeds continue to rise, it’s inevitable that we’ll need a physical place to store our digital stuff.
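As a back-of-the-envelope sketch of that Moore’s Law arithmetic (the 64 GB baseline in 2010 and the one-year doubling period below are my own illustrative assumptions, not figures from the article):

```python
# Rough Moore's-Law-style projection for flash card capacity.
# Assumptions (hypothetical): a 64 GB card in 2010, with capacity
# doubling roughly every year, as consumer flash often did in the 2000s.
def projected_capacity_gb(year, base_year=2010, base_gb=64, doubling_years=1.0):
    """Project card capacity (in GB) for a given year."""
    return base_gb * 2 ** ((year - base_year) / doubling_years)

for year in range(2010, 2015):
    print(year, int(projected_capacity_gb(year)), "GB")
# On these assumptions, capacity reaches 1 TB (1024 GB) by 2014.
```

Stretch the doubling period to two years instead and the 1 TB crossing slips to around 2018 — one reason timelines like these are so easy to get wrong.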

The first around-the-world flight by a solar-powered plane will be accomplished by now, bringing truly clean energy to air transportation for the first time. Consumer models are still far down the road, but you don’t need to let your imagination wander too far to figure out that this is definitely a game-changer. Consider this: it took humans quite a few millennia to figure out how to fly, and only a fraction of that time to do it with solar power.

The Solar Impulse, to be flown around the world. Photo by Stephanie Booth

The world’s most advanced polar icebreaker is currently being developed as a part of the EU’s scientific development goals and is scheduled to launch in 2014. As global average temperatures continue to climb, an understanding of and diligence toward the polar regions will be essential to monitoring their rapidly changing climates — and this icebreaker will be up to the task.

$100 personal DNA sequencing is what’s being promised by a company called BioNanomatrix, which the company founder Han Cao has made possible through his invention of the ‘nanofluidic chip.’ What this means: by being able to cheaply sequence your individual genome, a doctor could biopsy a tumor, sequence the DNA, and use that information to determine a prognosis and prescribe treatment for less than the cost of a modern-day x-ray. And by specifically inspecting the cancer’s DNA, treatment can be applied with far more specific — and effective — accuracy.

2015

The world’s first zero-carbon, sustainable city, in the form of Masdar City, will be initially completed just outside of Abu Dhabi. The city will derive power solely from solar and other renewable resources and offer homes to more than 50,000 people.

Personal 3D Printing is currently reserved for those with extremely large bank accounts or equally large understandings about 3D printing; but by 2015, printing in three dimensions (essentially personal manufacturing) will become a common practice in the household and in schools. Current affordable solutions include do-it-yourself kits like Makerbot, but in four years it should look more like a compact version of the uPrint. Eventually, this technology could lead to technologies such as nano-fabricators and matter replicators — but not for at least a few decades.

2016

Space tourism will hit the mainstream. Well, sorta. Right now it costs around $20-30 million to blast off and chill at the International Space Station, or $200,000 for a sub-orbital spaceflight from Virgin Galactic. But the market is growing faster than most realize: within five years, companies like Space Island, Galactic Suite, and Orbital Technologies may realize their company missions, with space tourism packages ranging from $10,000 up-and-backs to $1 million five-night stays in an orbiting hotel suite.

The sunscreen pill will hit the market, protecting the skin as well as the eyes from UV rays. By reverse-engineering the way coral reefs shield themselves from the sun, scientists are very optimistic about the possibility, much to the dismay of sunscreen producers everywhere.

A Woolly Mammoth will be reborn among other now-extinct animals in 2016, assuming all goes according to the current plans of Japan’s Riken Center for Developmental Biology. If they can pull it off, expect long lines at Animal Kingdom.

2017

Portable laser pens that can seal wounds – Imagine you’re hiking fifty miles from the nearest human, and you slip, busting your knee wide open, gushing blood. Today, you might stand a chance of some serious blood loss — but in less than a decade you might be carrying a portable laser pen capable of sealing you back up Wolverine-style.

2018

Light Peak technology, a method of super-high-data-transfer, will enable more than 100 Gigabytes per second — and eventually whole terabytes per second — within everyday consumer electronics. This enables the copying of entire hard drives in a matter of seconds, although by this time the standard hard drive is probably well over 2TB.

Insect-sized robot spies aren’t far off from becoming a reality, with the military currently hard at work to bring Mission Impossible-sized tech to the espionage playground. Secret weapon: immune to bug spray.

2019

The average PC has the power of the human brain. According to Ray Kurzweil, who has a better grip on the future than probably anyone else, the Law of Accelerating Returns will usher in an exponentially greater amount of computing power than ever before.

Web 3.0 – What will it look like? Is it already here? It’s always difficult to tell just where we stand in terms of technological chronology. But if we assume that Web 1.0 was based only upon hyperlinks, and Web 2.0 is based on the social, person-to-person sharing of links, then Web 3.0 uses a combination of socially-sourced information, curated by a highly refined, personalizable algorithm (“they” call it the Semantic Web). We’re already in the midst of it, but it’s still far from its full potential.

Energy from a fusion reactor has always seemed just out of reach. It’s essentially the process of producing infinite energy from a tiny amount of resources, but it requires a machine that can contain a reaction that occurs at over 125,000,000 degrees. However, right now in southern France, the fusion reactor of the future is being built to power up by 2019, with estimates of full-scale fusion power available by 2030.

2020

Crash-proof cars have been promised by Volvo, to be made possible by using radar, sonar, and driver alert systems. Considering automobile crashes kill over 30,000 people in the U.S. per year, this is definitely a welcome technology.

2021

So, what should we expect in 2021? Well, 10 years ago, what did you expect to see now? Did you expect the word “Friend” to become a verb? Did you expect your twelve-year-old brother to stay up texting until 2am? Did you expect 140-character messaging systems enabling widespread revolutions against decades-old dictatorial regimes?

The next 10 years will be an era of unprecedented connectivity; this much we know. It will build upon the social networks, both real and virtual, that we’ve all played a role in constructing, bringing together ideas that would have otherwise remained distant, unknown strangers. Without Twitter and a steady drip of mainstream media, would we have ever so strongly felt the presence of the Arab Spring? What laughs, gasps, or loves, however fleeting, would have been lost if not for Chatroulette? Keep in mind that as our connections grow wider and more intimate, so too will the frequency of our connectedness, and as such, your own understanding of just what kinds of relationships are possible will be stretched and revolutionized as much as any piece of hardware.

Truly, the biggest changes we’ll face will not come in the form of any visible technology; the changes that matter most, as they always have, will occur in those places we know best but can never quite see: our own hearts and minds.

The last three paragraphs are the most salient to me. Whatever fantastic developments the future holds, we can all agree that much of it will be unexpected, no matter how hard we try to prepare and predict. That’s neither a good nor a bad thing; it just is.

Happy 25th Birthday World Wide Web!

March 12 was the 25th anniversary of the World Wide Web, otherwise known simply as the Web, a system of interlinked hypertext documents accessed via the Internet. On that day in 1989, Tim Berners-Lee, a British computer scientist and engineer at CERN, wrote a proposal to his administrators for developing an effective communication system to be used by the organization’s members.

He eventually realized the wider applications of this concept, and teamed up with Belgian computer scientist Robert Cailliau in 1990 to further refine the concept of a hypertext system that would “link and access information of various kinds as a web of nodes in which the user can browse at will”. Hypertext is simply text displayed on a computer with references to other text via “hyperlinks”. Berners-Lee finished the first website in December of that year, which you can still see here (for information on the first image ever uploaded, which was a GIF, click here). 

It’s amazing how far it’s come since that humble page, and exciting to imagine where the web will be another 25 years from now. Berners-Lee actually shares his thoughts on the future of the Internet in general here, and I recommend you give it a read.

Note that despite being used interchangeably, the Internet and the Web are two distinct things: the former is a massive networking infrastructure that connects millions of computers together globally — a network of networks, so to speak. Information that travels over the Internet does so via a variety of languages known as protocols.

The Web, on the other hand, is a way of accessing that information using the HTTP protocol, which is only one of many languages used on the Internet to transmit data. Email, for example, relies on the SMTP protocol, and therefore isn’t technically part of the Web.
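One concrete way to see that distinction is in the scheme at the front of an address, which names the protocol a client should speak. A minimal sketch using Python’s standard library (the CERN URL is the address of Berners-Lee’s first website mentioned above; the email address is a made-up example):

```python
from urllib.parse import urlparse

# A Web address: the "http" scheme tells the client to speak HTTP.
web_url = urlparse("http://info.cern.ch/hypertext/WWW/TheProject.html")
print(web_url.scheme)   # http   -> part of the Web
print(web_url.netloc)   # info.cern.ch
print(web_url.path)     # /hypertext/WWW/TheProject.html

# An email address uses a different scheme entirely; the message itself
# travels over SMTP, so it rides the Internet without being on the Web.
mail_url = urlparse("mailto:someone@example.com")
print(mail_url.scheme)  # mailto
```

In other words, the Web is just one scheme among many; the Internet carries them all.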