Over at Big Think, Teodora Zareva introduces a revolutionary new car that will give wheelchair users much needed mobility and independence: the Kenguru, designed by a Hungarian company of the same name and manufactured by the Austin, Texas-based Community Cars. This clever vehicle is the first of its kind, the product of an international partnership between Texas lawyer and wheelchair user Stacy Zoern, and Kenguru chief executive Istvan Kissaroslaki.
Well, speaking for myself anyway. The scientific enterprise in question is ITER, an international megaproject based in southeastern France that seeks to create nuclear fusion, a potent source of energy that occurs naturally only in the sun and other stars.
As one would imagine, trying to recreate what only these massive celestial bodies are known to do takes quite a lot of technological and scientific effort — hence the over $20 billion price tag, and the involvement of over thirty-five nations, including all the member states of the European Union, India, Japan, China, Russia, South Korea and the United States.
Unlike fission, fusion does not happen spontaneously: atomic nuclei are positively charged and must overcome their huge electrostatic repulsion before they can get close enough together that the strong nuclear force, which binds nuclei together, can kick in.
In nature, the immense gravitational force of stars is strong enough that the temperature, density and volume of the star’s core is enough for atomic nuclei to fuse through “quantum tunnelling” of this electrostatic barrier. In the laboratory, quantum tunnelling rates are far too low, and so the barrier can only be overcome by making the fuel nuclei incredibly hot – six to seven times hotter than the Sun’s core.
Even the easiest fusion reaction to initiate – the combination of the hydrogen isotopes deuterium and tritium, to form helium and an energetic neutron – requires a temperature of about 120 million C. At such extreme temperatures, the fuel atoms are ruptured into their component electrons and nuclei, forming a superheated plasma.
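As a rough illustration (mine, not from the article), the energy released per D-T reaction can be estimated from the mass defect: the products weigh slightly less than the reactants, and the difference is released as energy via E = mc².

```python
# Energy released by the D-T fusion reaction, estimated from the mass defect.
# Atomic masses in unified atomic mass units (u), from standard tables.
M_DEUTERIUM = 2.014102
M_TRITIUM   = 3.016049
M_HELIUM4   = 4.002602
M_NEUTRON   = 1.008665
U_TO_MEV    = 931.494  # energy equivalent of 1 u, in MeV (E = mc^2)

# Mass of reactants minus mass of products: the "missing" mass becomes energy.
mass_defect = (M_DEUTERIUM + M_TRITIUM) - (M_HELIUM4 + M_NEUTRON)
energy_mev = mass_defect * U_TO_MEV
print(f"D + T -> He-4 + n releases about {energy_mev:.1f} MeV")  # ~17.6 MeV
```

That ~17.6 MeV per reaction, shared between the helium nucleus and the neutron, is what makes the fuel so potent despite the extreme temperatures needed to start the reaction.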
Keeping this plasma in one place long enough for the nuclei to fuse together is no mean feat. In the laboratory, the plasma is confined using strong magnetic fields, generated by coils of electrical superconductors which create a donut-shaped “magnetic bottle” in which the plasma is trapped.
Today’s plasma experiments such as the Joint European Torus can confine plasmas at the required temperatures for net power gain, but the plasma density and energy confinement time (a measure of the cooling time of the plasma) are too low for the plasma to be self-heated.
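The interplay of temperature, density and confinement time is often summarized by the Lawson "triple product", which must exceed a threshold for a D-T plasma to self-heat. A minimal sketch below, using round illustrative numbers of my own (not measurements from JET or the article):

```python
# Illustrative check of the Lawson triple product n * T * tau_E, which must
# exceed roughly 3e21 keV·s/m^3 for a self-heated ("ignited") D-T plasma.
IGNITION_TRIPLE_PRODUCT = 3e21  # keV * s / m^3, approximate D-T threshold

def triple_product(density_per_m3, temp_kev, confinement_s):
    """Return n * T * tau_E for a plasma with the given parameters."""
    return density_per_m3 * temp_kev * confinement_s

# Round JET-class numbers (assumed for illustration): the temperature is
# right, but density and confinement time fall short of self-heating.
jet_like = triple_product(4e19, 10, 1.0)
print(jet_like < IGNITION_TRIPLE_PRODUCT)  # True: not yet self-heated
```

Raising any one of the three factors helps, which is why ITER's strategy of a much larger plasma volume (improving confinement time) pays off.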
But progress is being made – today’s experiments have fusion performance 1,000 times better, in terms of temperature, plasma density and confinement time, than the experiments of 40 years ago. And we already have a fair idea of how to move things to the next step.
Building on this progress, ITER aims to demonstrate the scientific and technological feasibility of using fusion power for electricity generation.
The ITER reactor, now under construction at Cadarache in the south of France, will explore the “burning plasma regime”, where the plasma heating from the confined products of fusion reactions exceeds the external heating power. ITER’s fusion power output will be more than five times the external heating power in near-continuous operation, and will approach 10-30 times for short durations.
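This ratio is the fusion gain, Q = fusion power / external heating power. A small sketch with ITER's often-quoted design point (500 MW of fusion power from 50 MW of heating, a figure I am supplying here, not one stated in the article):

```python
# Fusion gain Q: ratio of fusion power produced to external heating power.
# Q >= 5 means the plasma produces at least five times the power put in;
# "breakeven" is Q = 1, and a fully self-sustaining burn needs no heating.
def fusion_gain(p_fusion_mw, p_heating_mw):
    """Return the dimensionless gain Q for the given powers in megawatts."""
    return p_fusion_mw / p_heating_mw

# ITER's commonly cited design point (an assumption for illustration):
q = fusion_gain(500.0, 50.0)
print(q)  # 10.0
```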
The engineering and physical challenge is immense. ITER will have a magnetic field strength of 5 Tesla (100,000 times the Earth’s magnetic field) and a device radius of 6 m, confining 840 cubic metres of plasma (one-third of an Olympic swimming pool). It will weigh 23,000 tonnes and contain 100,000 km of niobium tin superconducting strands. Niobium tin is superconducting at 4.5K (about minus-269C), and so the entire machine will be immersed in a refrigerator cooled by liquid helium to keep the superconducting strands just a few degrees above absolute zero.
ITER is expected to start generating its first plasmas in 2020. But the burning plasma experiments aren’t set to begin until 2027. One of the huge challenges will be to see whether these self-sustaining plasmas can indeed be created and maintained without damaging the plasma-facing wall or the high heat flux “divertor” target.
Again, this is very complex stuff: hence the fair amount of skepticism and criticism regarding the feasibility and technical challenges of recreating this process. Nevertheless, the first prototypes should be up and running by the 2030s, offering models for creating more commercially viable and environmentally sustainable power plants.
Given what is at stake — a reliable and efficient source of power for a fast-growing world with depleting fossil fuels — these hurdles are well worth pushing through.
It is one thing to design buildings that can minimize impact on the environment. But what about creating structures that can play a regenerative role, contributing positively and directly to surrounding ecosystems? CityLab explores this intriguing and recent concept:
The idea is not to be satisfied with efficiency for its own sake. Regenerative design aspires to an active participation in ecosystems all around. A green roof is pleasant for humans and reduces energy consumption in the building underneath; a regenerative green roof not only does that but is intentionally designed to support butterflies or birds that have otherwise vacated an urban area.
Capturing rainwater, recycling graywater, and treating wastewater on-site are all great for reducing overall water consumption. But in regenerative design, these strategies are only optimal if they recharge the local aquifer as well.
Similarly, building materials shouldn’t only be viewed in the context of minimizing damage and the consumption of resources; they should be put to work for the planet. The use of wood thus becomes at its core a carbon sequestration strategy. The carbon soaked up by older trees—harvested in sustainable forestry practices, cutting them down before they fall and rot and release emissions back into the atmosphere—gets taken out of the cycle, permanently tucked away as beams and pillars and walls.
Given the novelty of the idea, there are no working models just yet. The article highlights the closest example of a regenerative system: the VanDusen Botanical Garden Visitor Centre in Vancouver (pictured).
Waste from the toilets is harvested to be mixed with food waste composting, while the water is separated out and purified for use in irrigation. Rammed-earth building blocks were formed by dredging ponds on the site, and the deeper water in turn led to a healthier ecosystem. The equivalent of staircases encourages all kinds of critters to get up to the green roof and feed; coyotes have been spotted up there.
Given the twin trends of rapid urbanization and equally rapid environmental degradation, this is definitely an idea well worth exploring and investing in further. What are your thoughts?
It is cheaper to build and operate than conventional models, and though not as energy-efficient individually, these turbines can collectively cover more ground to make up for it. Sounds like something well worth investing in.
And of course, this is coming out of Spain, a world leader in wind power.
Originally posted on Quartz:
Growing interest in alternative energy sources has made the three-pronged white metal wind turbines dotted across open landscapes a familiar sight.
But thanks to a Spanish energy startup known as Vortex Bladeless, there’s a new type of turbine in town with a rather different look—and the potential to be cheaper and more reliable. Vortex’s generator resembles a giant straw in the ground and harnesses wind energy without the need for rotating windmill blades. It’s designed to vibrate in the wind as much as possible, like a guitar string; those vibrations are then converted into stored energy.
According to the company’s website, the Vortex turbines are 53% cheaper to manufacture and 51% cheaper to operate than traditional wind turbines. This is in part due to their lack of moving parts—there just aren’t that many components to break. Their current model, the 41-foot Vortex Mini tube, captures around 30% less energy than a traditional wind…
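The "guitar string" behavior described above relies on vortex shedding: wind flowing past a bluff cylinder peels off vortices at a regular frequency, given by the Strouhal relation f = St·v/d. A minimal sketch with illustrative numbers of my own (St ≈ 0.2 is typical for a circular cylinder; the mast diameter is my assumption, not a Vortex specification):

```python
# Vortex shedding frequency behind a bluff cylinder: f = St * v / d.
# St ~ 0.2 holds for a circular cylinder across a wide range of wind speeds.
STROUHAL = 0.2

def shedding_frequency(wind_speed_ms, diameter_m):
    """Frequency (Hz) at which vortices peel off alternate sides of the mast."""
    return STROUHAL * wind_speed_ms / diameter_m

# Illustrative numbers: a 5 m/s breeze past a 0.25 m diameter mast.
f = shedding_frequency(5.0, 0.25)
print(f"{f:.1f} Hz")  # 4.0 Hz
```

The design goal is for this shedding frequency to match the mast's natural resonant frequency, so the structure oscillates strongly and the motion can be converted into electricity.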
From Zeynep Tufekci over at the New York Times:
But computers do not just replace humans in the workplace. They shift the balance of power even more in favor of employers. Our normal response to technological innovation that threatens jobs is to encourage workers to acquire more skills, or to trust that the nuances of the human mind or human attention will always be superior in crucial ways. But when machines of this capacity enter the equation, employers have even more leverage, and our standard response is not sufficient for the looming crisis.
Machines aren’t used because they perform some tasks that much better than humans, but because, in many cases, they do a “good enough” job while also being cheaper, more predictable and easier to control than quirky, pesky humans. Technology in the workplace is as much about power and control as it is about productivity and efficiency.
This is the way technology is being used in many workplaces: to reduce the power of humans, and employers’ dependency on them, whether by replacing, displacing or surveilling them. Many technological developments contribute to this shift in power: advanced diagnostic systems that can do medical or legal analysis; the ability to outsource labor to the lowest-paid workers, measure employee tasks to the minute and “optimize” worker schedules in a way that devastates ordinary lives. Indeed, regardless of whether unemployment has gone up or down, real wages have been stagnant or declining in the United States for decades. Most people no longer have the leverage to bargain.
I can think of no better justification for implementing a guaranteed basic income than this trend. How much longer until we run out of sustainable employment to support our population? Already, in the United States and elsewhere, most fast-growing sectors are low-paying service jobs like fast-food and retail; even the professions that should ostensibly pay well, such as those requiring degrees or experience, increasingly do not.
Most people are already running out of alternatives for liveable, meaningful work — and now mechanization and automation threaten to undermine what comparatively little remains. I think this says a lot more about the social, economic, and moral failings of our society than it does about technology.
Why should everything be hyper-efficient at the expense of workers — who are also consumers and thus drivers of the economy? Why should we have a business culture, or indeed an economic and social structure, whereby those at the top must ruthlessly undercut the leverage and well-being of everyone else, whom they nonetheless depend on? If we want to optimize production and cost-effectiveness, which are of course not bad aims, then why not do so while providing some alternative means of survival for those who get displaced?
How we respond to this trend will speak volumes about our values, priorities, and moral grounding.
Despite being one of the most densely populated countries in the world, the Netherlands manages to have one of the most efficient and productive agricultural sectors, second only to the United States (a far bigger country) in value of exports.
In light of that, perhaps it is fitting that a Dutch company should lead the way in the new concept of “high-rise farming”. As Mic.com reports:
PlantLab, a Dutch agriculture firm, wants to construct “plant production units,” spaces made for growing plants and vegetables. Each unit is customizable, able to adjust and control anything from the amount and kind of light received, a key variable for photosynthesis, to how large the space needs to be — anything from a garden the size of a microwave to a skyscraper.
By either constructing buildings, or, potentially more sustainably, retrofitting existing, unused buildings, PlantLab believes they can construct spaces where plants will grow faster and more efficiently.
This means the entirety of California’s almond-growing operation could be put into something the size of a Best Western hotel, while also cutting out pesticides, producing three to five times more almonds and using 90% less water thanks to smarter hydration — all without tweaking the almond’s genetics.
Here is a proof of concept of sorts from the company’s official YouTube channel:
The implications of this idea are vast. Suddenly, regions of the world lacking resources or an appropriate climate can grow any number of crops to suit local needs. So much space can be freed, and environments spared, while giving immediate access to food. It is also a great way to make use of otherwise derelict buildings — imagine how many decaying cities and suburbs could be turned into thriving agricultural centers?
PlantLab claims that with this approach, it will only need space equal to about one third of the U.S. state of Hawaii to feed the world’s population. A part of me is skeptical of this, but with some analysts projecting a global food shortage by 2050, I want to be hopeful.
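As a quick back-of-envelope sanity check on that claim (my own arithmetic, with rough figures: Hawaii's total area is about 28,000 km², and world population around 2015 was roughly 7.3 billion):

```python
# Back-of-envelope check of PlantLab's claim: one third of Hawaii's area
# of growing space to feed the world. Figures are rough approximations.
HAWAII_AREA_KM2 = 28_000   # approximate total area of the state
WORLD_POPULATION = 7.3e9   # approximate, circa 2015

farm_area_m2 = (HAWAII_AREA_KM2 / 3) * 1e6  # km^2 -> m^2
per_person_m2 = farm_area_m2 / WORLD_POPULATION
print(f"about {per_person_m2:.1f} m^2 of growing space per person")
```

That works out to a little over one square metre per person, which gives a sense of just how large a yield-per-area improvement the claim implies over conventional farming.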
The company’s TedTalk in Brainport, Netherlands is certainly intriguing.
Granted, the world already produces enough food to feed its inhabitants. Most global hunger is attributed to the inequities and inefficiencies of the global food market, as well as various shortcomings in infrastructure, investment, and transportation. None of this means that we should give up on finding solutions to improve food production; rather it is just one component of a very complicated problem.
Thanks to the boom in mobile technology — particularly smartphones and tablets — screens have become ubiquitous in modern society. It is almost impossible for most people to avoid exposing their eyes to some sort of screen for hours at a time, whether it is texting on your phone, bingeing shows and movies on Netflix, or playing video games.
In fact, the introduction of electricity is what first began the disruption of 3 billion years of cyclical sunlight governing the functions of life. What has been the effect of increasingly undermining this cycle, which humans have long been shaped by?
Wired explores some of the troubling research emerging on whether and how ever-greater light exposure is negatively impacting us:
Researchers now know that increased nighttime light exposure tracks with increased rates of breast cancer, obesity and depression. Correlation isn’t causation, of course, and it’s easy to imagine all the ways researchers might mistake those findings. The easy availability of electric lighting almost certainly tracks with various disease-causing factors: bad diets, sedentary lifestyles, exposure to the array of chemicals that come along with modernity. Oil refineries and aluminum smelters, to be hyperbolic, also blaze with light at night.
Yet biology at least supports some of the correlations. The circadian system synchronizes physiological function—from digestion to body temperature, cell repair and immune system activity—with a 24-hour cycle of light and dark. Even photosynthetic bacteria thought to resemble Earth’s earliest life forms have circadian rhythms. Despite its ubiquity, though, scientists discovered only in the last decade what triggers circadian activity in mammals: specialized cells in the retina, the light-sensing part of the eye, rather than conveying visual detail from eye to brain, simply signal the presence or absence of light. Activity in these cells sets off a reaction that calibrates clocks in every cell and tissue in a body. Now, these cells are especially sensitive to blue wavelengths—like those in a daytime sky.
But artificial lights, particularly LCDs, some LEDs, and fluorescent bulbs, also favor the blue side of the spectrum. So even a brief exposure to dim artificial light can trick a night-subdued circadian system into behaving as though day has arrived. Circadian disruption in turn produces a wealth of downstream effects, including dysregulation of key hormones. “Circadian rhythm is being tied to so many important functions”, says Joseph Takahashi, a neurobiologist at the University of Texas Southwestern. “We’re just beginning to discover all the molecular pathways that this gene network regulates. It’s not just the sleep-wake cycle. There are system-wide, drastic changes”. His lab has found that tweaking a key circadian clock gene in mice gives them diabetes. And a tour-de-force 2009 study put human volunteers on a 28-hour day-night cycle, then measured what happened to their endocrine, metabolic and cardiovascular systems.
As the article later notes, it will take a lot more research to confirm a causal link between disrupting the circadian rhythm and suffering a range of mental and physical problems. Anecdotal evidence would suggest that in the long term, for many (though not all) people, too much exposure to screen light can cause problems. But given the many other features of modern society that are just as culpable — long hours of work, constant overstimulation, sedentary living — identifying which aspects of the 21st-century lifestyle are responsible can be difficult, let alone resolving them.
Marx was not entirely wrong in arguing, in the Communist Manifesto, that “the history of all hitherto existing society is the history of class struggles”, but I am not convinced he identified the most profound struggle, which is actually between different ways of making sense of our life and giving meaning to it. We can bear almost anything, but not meaninglessness. Philosophy has withdrawn from the task of providing meaningful narratives, and this has left plenty of space to fundamentalists of all kind. We need philosophy to be intellectually engaged again, to shape the human project.
— Luciano Floridi, in an interview with Sincere Kirabo at OldPiano.org.
I recommend you click the hyperlink and check out the rest of the discussion. It is a very informative look at the intersection between philosophy and science, and what lies ahead for both fields as they play an increasingly vital role in our fast-changing and troubled world.
The excitement of a novel technology (or anything, really) has been replaced—or at least dampened—by the anguish of knowing its future burden.
— Ian Bogost, “Future Ennui”, The Atlantic
The following graph looks at how much time the world spends looking at screens (television, PC, smartphone, and tablet).
Interestingly, developing countries make up most of the top viewers, with the U.S. coming in sixth place overall but ranking the highest in the developed world (the second highest industrialized country, the U.K., is in fifteenth place overall). Japan, France, and Italy rank the lowest among surveyed countries and the developed world.
The preferred medium of viewing varied a lot from country to country as well: Americans and Britons watch the most television of any nation on the list, but Indonesians and Filipinos have them beat in terms of smartphone usage. Tablets seem to be far more popular in Asia (with the notable exceptions of Japan and South Korea) than in the rest of the world.
Pretty interesting stuff.