Saturday, August 31, 2013

INTERVIEW: In Victory for Activists, Entergy to Close Vermont Yankee Nuclear Plant; Will More Follow?

by Democracy Now.org: http://www.democracynow.org/2013/8/28/in_victory_for_activists_entergy_to#

The Vermont Yankee Nuclear Power Plant (Wikipedia)
One of the country’s oldest and most controversial nuclear plants has announced it will close late next year.

Citing financial reasons, the nuclear plant operator Entergy said Tuesday it will decommission the Vermont Yankee nuclear power station in Vernon, Vermont.

The site has been the target of protests for decades and has had a series of radioactive tritium leaks.

In 2010, the Vermont State Senate voted against a measure that would have authorized a state board to grant Vermont Yankee a permit to operate for an additional 20 years.

Its closure leaves the United States with 99 operating nuclear reactors, and our guest, former nuclear executive Arnie Gundersen, says he expects more to follow in the aftermath of Japan’s ongoing nuclear disaster at the Fukushima nuclear plant.

"These small single-unit nuclear plants - especially the ones that are like Fukushima Daiichi - are prone to more closures in the future because it just makes no economic sense to run an aging nuclear plant that’s almost 43 years old, and to invest hundreds of millions of dollars more to meet the modifications related to Fukushima," Gundersen says.

Transcript

This is a rush transcript. Copy may not be in its final form.

NERMEEN SHAIKH: One of the country’s oldest and most controversial nuclear plants has announced it will close late next year. On Tuesday, the nuclear plant operator Entergy said it plans to decommission the Vermont Yankee nuclear power station in Vernon, Vermont.

The site has been the target of protests for decades and has had a series of radioactive tritium leaks. In 2010, the Vermont Senate voted against a measure that would have authorized a state board to grant Vermont Yankee a permit to operate for an additional 20 years.

Vermont Governor Peter Shumlin welcomed news of Vermont Yankee’s decision to close. This is Governor Shumlin speaking to Vermont Public Radio on Tuesday.
GOV. PETER SHUMLIN: They’re not economically viable. You know, I spoke with both Bill Mohl, who’s the president of Entergy Nuclear, and the new CEO, Leo Denault, of Entergy Louisiana, and, you know, they’ve made the right decision. They’ve made the right decision for Vermont. They’ve made the right decision for Entergy. And what I said to them in those conversations was that, you know, we’ve obviously had very strong disagreements in the past about the future of the plant, but our job now is to work together with Entergy, with the other governors that are impacted by this. I also spoke this morning with Governor Maggie Hassan of New Hampshire and with Governor Deval Patrick of Massachusetts. Let’s remember that of the 650 hard-working employees in Vernon, roughly 35 percent live in the state of Vermont, and the rest live in either - most of them live in either New Hampshire or Massachusetts. And we’re going to all pledge to work together to get our rapid response teams into the plant immediately - Entergy has invited us to do that - from all three states and find a good economic future for the hard-working employees. That’s whom my heart goes out to, and I know the rest of Vermonters join me.
AMY GOODMAN: The Vermont Yankee plant has been the site of scores of anti-nuclear protests since its opening in 1972. The closure leaves the United States with 99 operating reactors.

For more, we go to Arnie Gundersen, former nuclear industry executive, who has coordinated projects at 70 nuclear power plants around the country.

He provides independent testimony on nuclear and radiation issues to the Nuclear Regulatory Commission, the NRC, congressional and state legislatures, and government agencies and officials in the U.S. and abroad. He’s chief engineer at Fairewinds Associates.

Arnie Gundersen, welcome to Democracy Now! This is a tremendous victory, well, for the governor himself, who actually as a state legislator was opposed to the nuclear plant in his own district, as well as the thousands of people who have been protesting this nuclear power plant. 

Can you talk about the significance, how this was finally shut down?

ARNIE GUNDERSEN: You know, it certainly is a victory for the Legislature in Vermont. You’ll remember that vote back in 2010 was 26 to four. It was pretty darn near unanimous to shut the plant down. 

Now, it took three years, but it was citizen pressure that got the state Senate to such a position, so my hat’s off to the citizens of Vermont for applying pressure to the Legislature for years, that culminated in this 26-to-4 vote.

The straw that broke the camel’s back is economics. You know, five nuclear plants have been shut down this year. We came into the year with 104, and now we’re at 99, and the year isn’t even over yet. 

These small single-unit nuclear plants, especially the ones that are like Fukushima Daiichi, are prone to more closures in the future, because it just makes no economic sense to run an aging nuclear plant that’s almost 43 years old and to invest hundreds of millions of dollars more to meet the modifications related to Fukushima Daiichi.

NERMEEN SHAIKH: So you think that the closure of Vermont Yankee might lead to subsequent closures of the 99 remaining plants?

ARNIE GUNDERSEN: Well, there’s a paper out by Dr. Mark Cooper at the Vermont Law School, and he predicts that as many as 30 nuclear plants are on the cusp of shutting down because of economic considerations. 

You know, a nuclear plant has 650 employees, as Governor Shumlin said, but the real comparison is with a comparable fossil-fuel plant, which would have a hundred people. So the cost to keep a nuclear plant running is extraordinarily high.

The nuclear fuel is not as expensive as coal or gas, but, in comparison, all the other costs are extraordinarily high. 

So there’s a lot of downward pressure on plants like Pilgrim, plants like Hope Creek and those in New Jersey that - Oyster Creek, that was hit by Sandy just six months ago. 

There’s a lot of cost pressures that likely will shut down, you know, another dozen nuclear plants before - before this all shakes out.

AMY GOODMAN: Arnie Gundersen, before we move on to Japan, I wanted to ask you about not - this not only being a victory for the people who have been opposed to nuclear power in Vermont, but a real defeat for Entergy and what it tried to do, how it tried to circumvent the people’s will, the Vermont Legislature. Can you explain what it was doing and why the court was so significant in this?

ARNIE GUNDERSEN: Well, after the Legislature voted to - not to grant a license to continue until after 2012, Entergy had promised to reapply for a license to continue for the next 20 years. The Legislature, in that 26-to-4 vote, said, "No, we’re not going to allow you to reapply. It’s over. You know, a deal’s a deal. We had a 40-year deal." 

Well, Entergy went to first the federal court here in Vermont and won, and then went to an appeals court in New York City and won again on the right - on the issue, as they framed it, that states have no authority to regulate safety. And they successfully argued that. But the position of the state was never about safety. 

You know, I was involved in the evaluations back in '09 and 2010, and when we found safety problems on the panel that I was on, we immediately notified the Nuclear Regulatory Commission. 

Our goal was to look at the cost of Vermont Yankee and the reliability of Vermont Yankee as an aging plant. That got muddled up in the legal arguments, and Entergy prevailed. 

But I think by closing the plant, you know, ultimately Vermont prevailed anyway. It's likely that that won’t get appealed to the Supreme Court, because when Entergy pulled the plug, the entire legal process was rendered moot.

Friday, August 30, 2013

We Set the Fuel for the Rim Fire, Climate Change Lit the Match

Fire Yosemite (Photo credit: Rennett Stowe)
by Matthew Hurteau, Pennsylvania State University

The frequency of large wildfires in the western US has been increasing over the past several decades.

The Rim Fire, currently threatening the Hetch Hetchy reservoir in Yosemite National Park, is an example of how two factors are contributing to this increase.

Dense forests with substantial amounts of needle cast and branches on the forest floor (referred to as fuels by forest managers) coupled with drought are the major contributors to many of these large wildfires.

Fire is no stranger to California’s Sierra Nevada. In fact, much of the forest in the Sierra was historically maintained by fire. The major difference between historical natural fires and today’s large wildfires is the severity with which these modern wildfires burn.

Prior to the implementation of US Government policy to put wildfires out, fires on average burned through the ponderosa pine and mixed-conifer forests of the Sierra Nevada every 8-35 years.

This regular natural event consumed the fuel - dry wood and pine needles - on the forest floor and kept tree density much lower than it is today. Low tree density and a lack of fuel build-up kept fires relegated primarily to the forest floor.

Bringing in a policy of fire suppression reduced and sometimes even eliminated this ecological process for nearly a century.

As a result, the build-up of fuel and the increase in the density of trees have created conditions today in which a fire can spread from the forest floor into the forest canopy by climbing the fuel ladder provided by smaller trees with branches nearer ground level.

What were once rare, stand-replacing fires (in which all canopy trees are killed) have become more common.

The other part of the equation that contributes to fires such as this is drought. California’s climate is Mediterranean, and in the Sierra the majority of the precipitation comes as snow during the winter.

The Sierra has experienced two dry winters in a row, which has left plants water-stressed and the forest floor’s potential fuel supply tinder dry. Policies may have caused the fuel build-up, but drought is what primes the pump by creating conditions for large wildfires.

However, drought is not the only factor. Across the Western US, increasing temperature is causing earlier snowmelt and leading to longer fire seasons because fuels are drier for longer.

Climate projections for increased warming and drying in California suggest we should expect a further increase in the frequency of large wildfires.

The fires in California, Colorado, and New Mexico this year all provide examples of how these large, uncharacteristic wildfires directly threaten society.

Forests and the climate are linked through the carbon cycle, because as trees grow they take in carbon dioxide from the atmosphere and store the carbon in wood.

When high-severity wildfires burn through forests, they release that carbon back into the atmosphere, directly through combustion and indirectly as trees decompose over time.

Emissions from burning forests can be substantial, accounting for 4-6% of human-caused emissions in the US, and severely burned forests can remain a major source of carbon emissions for decades.

However, while big wildfires contribute to the problem, restoring the process of occasional natural fires can help solve it. Low severity fires can reduce emissions by as much as 60% in some forest types.

It is often risky to light fires in forests that are dense with trees and carry high fuel loads, so mechanically thinning the forest to reduce tree density first, followed by prescribed burning, can reduce the risk of large, severe wildfires.

Subsequent burns can restore ecosystem function and have been found to increase plant diversity in the Sierra. However, these thinning and burning treatments cannot avoid the carbon cost - cutting trees and implementing prescribed burns releases carbon from the forest.

Yet, the benefit is reduced emissions from larger wildfires and increased stability of carbon locked up in forests because fewer trees are killed by fire. Furthermore, thinning a forest reduces competition for water among trees, making them more resilient to drought in the first place.

However well-intentioned, modern fire suppression policy has resulted in overgrown, fuel-loaded forests. Increasing frequency of drought and rising temperatures have turned these forests into tinderboxes.

When an ignition occurs, the resulting wildfire can have substantial consequences for both ecosystems and society. Decades of research in these dry forests clearly show that restoring fire as a natural process is the solution to dealing with this issue.

As recent research suggests, in a changing climate, reducing the risk of large, severe wildfires through forest restoration can also help buffer the system against more frequent drought.

Matthew Hurteau receives funding from the US Department of Agriculture, the US Department of Defense, the USDA Forest Service, and the Joint Fire Science Program.
The Conversation

This article was originally published at The Conversation. Read the original article.

Thursday, August 29, 2013

Logging, Palm Oil Plantations, and Indonesia’s Summer Of Smoke

by Brihannala Morgan, Earth Island Journal: http://www.earthisland.org/journal/index.php/elist/eListRead/logging_palm_oil_plantations_and_indonesias_summer_of_smoke

Brihannala Morgan is director of The Borneo Project, an Earth Island Institute-sponsored project that brings international attention and support to community-led efforts to defend forests, sustainable livelihoods, and human rights on the island of Borneo.

Forest fire in Indonesia: research by the World Resources Institute has shown that half of the fires in Indonesia occur in areas that have been set aside for pulp and paper and palm oil plantations.

With wealthy communities in neighboring Singapore and Malaysia feeling the impact of forest fires, the time is ripe for action.

Over the last few weeks and months, I have followed the spread of the haze from the fires in Sumatra, which has inundated Singapore and peninsular Malaysia, with a mixture of sadness and deja vu.

I lived in Indonesian Borneo during the infamous fires of the late 1990s, when the sky was not visible for months, flights were canceled, and people stayed inside if at all possible. The air - even in the middle of the city - smelled like a bonfire.

Then, like now, the fires were primarily caused by the burning of forests and peat swamps for palm oil plantations.

In order to grow oil palms on the often infertile and acidic soils of the tropics, growers must drain and burn the land. When the weather is dry, and when the forest is degraded, it is easy for the fires to spread past the intended burn area and consume otherwise intact forests.

This is the reason that Indonesia is the third largest emitter of greenhouse gases in the world, after the USA and China. Over 75 percent of Indonesia’s emissions come from deforestation.

Research by the World Resources Institute has shown that half of the fires occur in areas that have been set aside for pulp and paper and palm oil plantations.

Despite commitments made by both the companies (many of which have a “no burn” policy on paper) and the government (which has deemed most burning to be illegal), Indonesia’s lax enforcement makes it easy for companies to do what they have always done and prepare land by burning.

Of course, the problem is far more insidious than that. Long-term selective logging has meant that many of the remaining forests in Sumatra are much drier and burn more readily.

One effect of climate change has been a shift in weather patterns that leaves forests less healthy and, again, more flammable.

One of the most tragic parts of this disaster has been the (predictable) blame game around who is responsible. The government blames the small farmers, and (sometimes) the companies. The companies blame the small farmers. The small farmers, who are generally very poor, land-based communities with no access to the mainstream media, just get to breathe in the smoke.

The fact is that small farmers do use swidden agriculture, and have done so for generations. In the past, there was enough forest that they could leave used farmland fallow for up to 20 years, rebuilding the soil fertility and having a limited impact on the environment.

As the total amount of forest has decreased, rotation cycles have had to be shortened, and the impact on the remaining forest has intensified.

Coupled with changes to the forest from selective logging and climate change (as mentioned above), fires set by small farmers have had a larger impact on overall forest burning than they have in the past.

The fact is the only real solution lies in a system-wide change. Until the Indonesian government enforces the laws (don’t hold your breath … although you may need to with all the smoke), companies will continue to use the cheapest and most expedient ways to prepare land for lucrative palm oil plantations.

Land-based communities have no option other than to plant subsistence crops wherever they are able to plant.

Until they are guaranteed rights to land - enough land to make small-scale swidden agriculture sustainable - communities will continue to use whatever land is available for their farms and will disregard the now impossible practices that made swidden farming sustainable.

Now is an incredible opportunity for action. Wealthy communities in Singapore and Malaysia are feeling firsthand the impact of destruction for palm oil plantations. Tourists and business people from around the world are breathing in the smoke of Indonesia’s forests.

It’s time for people around the world to come together and demand that the Indonesian government enforce its laws, and that communities be given legal rights to their land. Until that happens, all we can expect is more summers of smoke.

Wednesday, August 28, 2013

Politicians Have Forgotten the ‘Dry’ in Dry Tropics and the Change in the Climate

by Elspeth Oppermann, Charles Darwin University and Chris O'Brien, Charles Darwin University

Northern futures, northern voices: It seems everyone has ideas about how Australia’s north could be better, but most of those ideas come from the south. 

In this six-part weekly series, developed by the Northern Research Futures Collaborative Research Network and The Conversation, northern researchers lay out their own plans for a feasible, sustainable future.

Wet Season storm at night, Darwin (Wikipedia)

The most terrifying thing about political visions for northern Australia is the complete absence of any consideration of climate change. The contrast between the Coalition’s 2030 vision and the climate change projections for 2030 couldn’t be more striking.

One number underpins plans to develop northern Australia. It is precise. It is astronomical. It is the annual average rainfall north of the Tropic of Capricorn.

But this number is so abstract and meaningless that citing it again here is pointless. The climate of northern Australia is far too variable across space and time for ideas like those of “northern Australian rainfall” to have any coherence. They should have no relevance to thoughtful policy.

Climate is commonly reduced to 30-year averages of certain weather elements: rainfall, temperature, and so on. This definition does capture some of the important dimensions of climate.

Even a cursory glance at average monthly rainfall for Darwin (Jan 424mm; Apr 101mm; Jul 1mm; Oct 70mm), Broome (Jan 179mm; Apr 26mm; Jul 7mm; Oct 1mm), Kununurra (Jan 199mm; Apr 30mm; Jul 2mm; Oct 24mm), Cooktown (Jan 306mm; Apr 166mm; Jul 30mm; Oct 26mm), Cloncurry (Jan 182mm; Apr 20mm; Jul 2mm; Oct 21mm) and Weipa (Jan 484mm; Apr 101mm; Jul 2mm; Oct 24mm) reveals how climate varies across the north.

However, climate itself isn’t quite what we think it is. True climate also includes the extremes, frequencies and patterns of weather over several decades.

For example, a brief history of Darwin rainfall shows that one year can vary markedly from another; one decade can scarcely resemble the next.

During the middle of the year little or no rain falls in the north. However, slicing time into Octobers, Januaries and Aprils (proceeding through The Wet), we see that between 1870 and 1942 rain volumes varied enormously from year to year.

During these 72 years October rainfall ranged from 0mm (on three occasions) to 213mm, with fewer than 10mm falling on 10 occasions and over 100mm on 10 other occasions.

The January extremes ranged from 68mm, in 1906, to 711mm, in 1895. Volumes exceeded 500mm 20 times and 600mm 8 times; yet failed to reach 250mm 14 times and 200mm on 10 occasions.

During the driest April - 1897 - 1.3mm was recorded whereas 603mm were registered in 1891. There were 17 Aprils when less than 25mm fell and 11 with over 200mm falling.


Figure 1: Extremes of rainfall, Darwin Post Office, 1870-1942 (Bureau of Meteorology)

Timing is also integral to variability. Twice during these 72 years rain fell every calendar month of the year in Darwin. In another six years rain fell during 11 of the calendar months. Yet, in 1896, no rain was recorded from April 24 till November 15.

The year 1925 saw one of the few other six-month periods without rain. Darwin experienced two “dry seasons” in 1926: one from May 10 till September 10, another from late September till early November.

Such variability demands nuanced thinking and planning. The vision to build large numbers of dams appears to solve the problem of rainfall variability, but assumes our current levels of rainfall will continue. Climate change projections strongly indicate the contrary is likely to be the case.

The core “vision” for both parties is investing in long-term, large-scale infrastructure. Yet doing this without factoring in the range of potential climate change impacts amounts to gross negligence that harms the financial interests and resilience of the nation rather than helping it.

With decreased rainfall and increased evaporation as a result of atmospheric warming, the ratio of horticultural output to dollars spent on dams and irrigation will be vastly reduced.

We also need to factor in whether, for how long and under what conditions cattle will survive the heat before we know if such an investment makes sense.

The other side of the northern investment plan is mineral resource extraction and associated booms in mine services, construction and populations. The dangerous combination of sustained high temperatures and extreme humidity already plagues these industries.

Between 1982 and 2006, 20% of the year saw heat stress at “extreme” levels, where exposure has very serious to life-threatening health consequences.

Heat illness is common during the hottest period (September-November). Anecdotal evidence points to productivity declines of 20% in this period. Some exposed businesses traditionally roster-off staff or shut down entirely to avoid heat-stress related spikes in aggression and accidents.

Yet the number of dangerously hot days is set to increase. The number of “very hot” days above 35C is projected to rise from the current number – 11 per year – to 69 by 2030 and 308 by 2070. This measure doesn’t even take into account the significant effects of humidity.

For Darwin, this means that, at some point between 2030 and 2070, almost every day of the year will be in the most extreme thermal-“comfort” range, radically reducing productivity, health and well-being.


Figure 2: The number of days over 35°C. ‘Very hot’ days would increase in number if the thermal stress effects of high humidity were also factored in.

As temperatures continue to rise, serious adaptation measures will be needed. But there remains a dangerous absence of leadership on adaptation.

There are very limited impact projections for the Northern Territory, no NT adaptation strategy, and only very broad-brush local strategies. Adaptation reviews sit around collecting dust.

At Territory and Federal levels, Liberal and Labor are engaging in short-term idealism that is spelling medium- and long-term disaster for businesses in agriculture, mining and construction.

Northern Australia needs investment, but it needs smart investment that takes into account the best scientific advice on climate change impacts already available. It needs to back its own findings with real leadership in developing adaptation plans.

Let’s trade in “ideological delusion 2030” and “failure-to-adapt nightmare 2030” for a real vision, because 2030 is scarily close.

Chris O'Brien is affiliated with The Centre for Environmental History, ANU

Elspeth Oppermann does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.
The Conversation

This article was originally published at The Conversation. Read the original article.

Tuesday, August 27, 2013

The Next 'Black Gold' Could be Green

by Chris Greenwell, Durham University

Algae harvester made in San Jose, California (Wikipedia)
Leave a glass with nutrient-rich pond water on a sunny window sill and within a day or two it will have turned a very vibrant, verdant green.

This apparent alchemy has less to do with chemistry and more to do with biology: the green is microalgae - microscopic, free-floating, single cell, plant-like organisms.

Given water rich in fertiliser (washed off from fields) or organic waste, sunlight and carbon dioxide, these organisms grow rapidly, multiplying at an astounding rate by turning the nutrients and carbon dioxide into biomass.

Importantly, algae don’t store energy as starch like most plants, but instead are full of vegetable oils - which makes them the equivalent of green gold.

Recent decades have witnessed several cycles of interest in being able to exploit these organisms.

Research has examined the use of algae for remediating waste water or nutrient-overloaded coastal seas, absorbing carbon dioxide from the atmosphere or directly from industrial chimneys, in beneficial products such as proteins and antioxidants, and to produce oil and liquid fuels.

Given its indisputable virtues, why has microalgae not been put to use in these ways?

Like many great ideas, the concept is simple but the execution is complex. You might expect to find algae farms springing up everywhere, judging by some reports in the media.

But despite billions of dollars in investment from government, industry and venture capital funds, microalgae has still not been cultivated on a large scale.

Some colleagues and I looked at some of the technological hurdles to microalgae biofuel production in a 2010 paper published in the Journal of the Royal Society Interface.

In order to convert the oils from microalgae into biodiesel (known as fatty acid methyl esters, or FAME), the first hurdle is to separate the algae from the water.

Algae are present in the water at around 0.1% of the mass, so a tonne (1 m³) of water can produce about 1 kg of microalgae. That means 999 kg of water must be removed.

To process enough algae to be worthwhile, this has to be done rapidly, but unfortunately it is not a trivial operation. The algae are microscopic and about the same density as the water, which makes them hard to separate by conventional filtration or in a centrifuge.

Flocculation with chemicals, a common method in water treatment works, uses highly charged polymers or metals to attract the algae together so they sink. It’s effective, but adds expense and complexity.

What’s more, microalgae are about 30-40% oil by dry weight, and the 1 kg above is wet weight. Dried, that 1 kg of microalgae might weigh 100 g, so it would yield perhaps 30-40 g of oil.
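
To make the arithmetic above explicit, here is a minimal back-of-the-envelope sketch in Python. The 0.1% algae concentration, roughly 10% dry-matter fraction and 30-40% oil content come from the figures quoted above; everything else is an illustrative assumption rather than a measured value.

```python
# Back-of-the-envelope yield estimate for microalgae biodiesel feedstock,
# using the figures quoted in the text above (illustrative only).

WATER_MASS_KG = 1000.0              # one tonne (~1 m^3) of culture water
ALGAE_FRACTION = 0.001              # algae make up ~0.1% of the mass (wet)
DRY_MATTER_FRACTION = 0.10          # 1 kg of wet algae dries to ~100 g
OIL_FRACTION_RANGE = (0.30, 0.40)   # 30-40% oil by dry weight

wet_algae_kg = WATER_MASS_KG * ALGAE_FRACTION        # ~1 kg of wet algae
water_to_remove_kg = WATER_MASS_KG - wet_algae_kg    # ~999 kg of water
dry_algae_kg = wet_algae_kg * DRY_MATTER_FRACTION    # ~0.1 kg dry biomass
oil_low_g, oil_high_g = (dry_algae_kg * f * 1000 for f in OIL_FRACTION_RANGE)

print(f"Wet algae harvested : {wet_algae_kg:.2f} kg")
print(f"Water to remove     : {water_to_remove_kg:.0f} kg")
print(f"Oil yield           : {oil_low_g:.0f}-{oil_high_g:.0f} g per tonne of water")
```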

The oils may be extracted using solvents or by pressing, and then converted into biodiesel. However, the chemistry required to do this means not all the oil can be converted, further lowering the potential yield.

Genetic modification of algae could be used to improve various characteristics, such as ease of harvest, cell wall rupture, or oil yield. But the impact of a GM microalgae accidentally released into the wild needs careful consideration.

However, a major benefit of microalgae is that it can be cultivated on marginal coastal land that is not used for food crops, and can use seawater rather than fresh water. This alleviates the main ethical accusations levelled at biofuels, that they affect food production.

Algae have even been demonstrated growing integrated into the fabric of buildings, with building fascia panels used to cultivate microalgae on wastewater and carbon dioxide emissions.

Processing still represents a challenge, but advances in microfiltration and physical flocculation methods have led to significant improvements. Other means of converting microalgae oils to biodiesel have been trialed.

For example, hydrothermal liquefaction - where wet, whole microalgae biomass is heated under pressure - and pyrolysis - where whole microalgae are heated rapidly in an oxygen-free environment - are two methods that don’t need the oil to be extracted before converting to biodiesel.

The first stages of microalgae fuel development were characterised by intense hype. This has passed, and the challenges are now well understood. The path forward requires considerable improvements in economics for large-scale cultivation for biodiesel.

As such, greater success may come from integrating microalgae cultivation into waste-water treatment and carbon dioxide emission reduction programmes, where the fuel is a beneficial side effect.

Alternatively, success may come from a model that focuses on creating a high-value product - health supplements, for example - with the biofuel oils as a secondary revenue stream.

So while large-scale cultivation is undoubtedly possible, removing the water at low cost remains a hard nut to crack. Matching algae that have high cell densities and rapid growth rates with low-cost water-removal methods will be a big leap forward for microalgae biofuels.

But as ever, technology is but part of the challenge - the societal, political and legislative framework will also need to be in place.

Chris Greenwell does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.
The Conversation

This article was originally published at The Conversation. Read the original article.

Monday, August 26, 2013

Look to the Trees for Truly Green Technology

Woods (Photo credit: @Doug88888)
by Cris Brack, Australian National University

Green alternatives such as wind and solar may be touted as the solution to our environmental problems such as climate change, but how green are they really?

Wind and solar rely on technologically sophisticated industries and infrastructure, including rare earth batteries, highly processed composite building materials, computer-controlled switching and balancing programs and continuous maintenance.

There are natural alternatives to such technologies that are arguably “greener”. So, why aren’t we looking to make our technologies truly green?

Wind, solar … wood

Fire is probably the greatest discovery of humankind, if not the discovery that set us on the path to becoming civilised and social.

Wood still fuels the energy needs of millions in Africa, China and India. Perhaps surprisingly, it also fuels the energy needs of many thousands in Europe, Canada, the US and even Australia. Why do we in the developed world seem to have forgotten its power?

Wood fuel has numerous advantages over wind or solar. Wood can be grown right where it is needed - even along the boundaries of residential properties, around commercial enterprises or even in urban and peri-urban parks.

While it is growing, trees look good and provide a temporary home for birds and other wildlife - certainly not something that can be said for every wind farm.

A continuous supply of winter home heating can be produced by selecting relevant tree species (or group of species) and progressively planting them around a “quarter acre” residential block.

Each year, one seventh of the boundary could be planted and after seven years the owner could begin harvesting, drying, burning and replanting the oldest trees.
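
As a rough illustration of how such a staggered scheme delivers a continuous supply, the toy simulation below plants one seventh of the boundary each year and, once all seven sevenths are established, harvests and replants the oldest seventh annually. Only the seven-year rotation comes from the text; the rest is an illustrative assumption (no species, yields or growth rates are modelled).

```python
# Toy simulation of the staggered firewood rotation described above:
# plant one seventh of the block boundary each year; once all seven
# sevenths exist, harvest and replant the oldest seventh every year.
# Purely illustrative - species, yields and drying times are not modelled.

ROTATION_YEARS = 7

def simulate(years: int) -> None:
    plot_ages = []  # age in years of the trees on each planted seventh
    for year in range(1, years + 1):
        plot_ages = [age + 1 for age in plot_ages]   # existing plantings grow
        if len(plot_ages) == ROTATION_YEARS:         # all sevenths established
            plot_ages.remove(max(plot_ages))         # harvest the oldest seventh
            action = "harvest oldest seventh, then replant it"
        else:
            action = "plant a new seventh"
        plot_ages.append(0)                          # the replanted (or new) seventh
        print(f"Year {year:2d}: {action}; plot ages = {sorted(plot_ages, reverse=True)}")

simulate(10)
```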

Varying the tree species and the harvesting rotation lengths could allow co-production of products such as honey or flowers without ultimately endangering fuel reserves. Such a system would, however, require some management.

Neighbourhood groups could coordinate their individual plantings and use of the trees to encourage community projects, including planting in parks, that benefit from trees at different stages of their life or allow longer life spans for selected trees.

Such a system could continue pretty much indefinitely and may rightly be classified as sustainable yield: renewable energy with very little need for unnatural elements or practices.

But somehow the use of wood as a fuel source is specifically excluded from a range of renewable energy and environmental improvement schemes, despite its advantages.

Timber!

The timber industry could benefit from similar rethinking. Plantations are gaining a reputation as the “green” option for the production of solid timber for use in construction or high-value products.

The management required in plantations includes ploughing, ripping, spraying and fertilising for preparation, followed by more spraying and fertilising over time. Exotic species are used to avoid losses from local pests and diseases.

This intensive management is designed to ensure that final harvest revenues don’t happen so far into the future that the “time cost of money” erodes the net profit.

While not as intensive or invasive as agriculture, and orders of magnitude less intensive than the industries associated with plastic, steel or concrete products, plantations are nevertheless more intensive and less natural than native forest management.

In native forests, local or endemic species are kept even though growth is slower. Fertiliser is not applied, partially because its cost cannot be justified but also because the local species are commonly adapted to local soil fertility. Similarly, weedicide application is rare.

Producing wood products in such a forest is slower, and to produce the same amount requires a larger area. One hectare of intensively managed plantation can produce the same amount of solid wood product as 30-to-50 hectares of native eucalypt forest.

But the managed native forest will have a greater diversity of tree sizes and stages, and only relatively small areas of disturbance. The vast majority of the forest simply grows and changes in a natural way, which is orders of magnitude better for birds and animals.

There is a strong branch of forest management in Europe called “nature-based forestry” or “near-natural silviculture” that attempts to make human-induced disturbances during harvesting or regeneration as close to natural conditions as possible.

Visitors need special training to detect the difference between the human-induced changes and the natural ones.

But, like high-technology systems, plantations are seen as the “green” alternative to low-technology native forest management.

Green values

The “green” alternatives market has been captured by systems that require high levels of technology, energy inputs and processing.

Is the ultimate green goal to leave nature altogether, replacing nature-based solutions with technological ones - perhaps ultimately living in space stations powered by solar cells measured in kilometres?

Machines could make our air, water and nutrients out of raw mineral stocks mined from asteroid belts without impinging on natural earth at all. A “green” but precarious future totally reliant on sophisticated technology.

To be green and natural, we must re-engage with nature. Recall battles over battery chickens.

The battle against that industry could not have commenced until the connection between the product (the egg) and the system (chickens in backyards or battery farms created by us) was re-established. Many urban children have never seen a farm or even touched a chicken.

Similarly, a battle for green and natural alternatives can only commence once the natural systems that produce goods and services are appreciated and compared with the unnatural, energy-demanding systems that have replaced them.

Cris Brack does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.
The Conversation

This article was originally published at The Conversation. Read the original article.

Friday, August 23, 2013

Australian Endangered Species: Gulbaru Gecko

by Conrad Hoskin, James Cook University

A Gulbaru Gecko trying to be a rock (Conrad Hoskin)
You may not have heard of the Gulbaru Gecko but you’d love it if you met it. Ancient and spectacular, this endangered gecko has one of the smallest distributions of any Australian animal.

Australia is a global centre of gecko diversity, with a remarkable 140 species at last count. Australia’s geckos fall into three families: Diplodactylidae, Gekkonidae and Carphodactylidae.

The last of these, Carphodactylidae, is a uniquely Australian group; in fact it’s the only lizard family endemic to Australia.

It is also, arguably, the most impressive family of geckos in Australia, including the leaf-tailed geckos, knob-tailed geckos, thick-tailed geckos and chameleon gecko.

These are all large, spectacular geckos with flamboyant or bizarre tails; very different to the “typical” gecko most would have seen on a house wall.

Among the genera within Carphodactylidae is Phyllurus, which means “leaf tail”. There are nine species, restricted to coastal eastern Australia. The most southerly is found in the sandstones of the Sydney region.

The remaining eight species are all restricted to tiny distributions along the Queensland coast, in many cases to a single mountain or range. All these species are found in rocky rainforest.

No two species are found in the same patch and their distribution reflects the gradual contraction and fragmentation of rainforest in eastern Australia to small isolated pockets over millions of years.

The most northerly species is the Gulbaru Gecko (Phyllurus gulbaru). The Gulbaru Gecko occurs approximately 35 km west of Townsville, in Patterson’s Gorge at the southern end of the Paluma Range.

Despite being highly distinctive and close to a city, the Gulbaru Gecko was only discovered in 2001 and named in 2003.

This large gecko, growing to 18 cm, is restricted to rocky rainforest, generally dominated by Hoop Pines (Araucaria cunninghamii).

It hides in rock cracks during the day and emerges at night to hunt invertebrates on the rock surfaces. It is slow moving and highly camouflaged. Females lay two eggs, which develop slowly.

Status

The Gulbaru Gecko is listed internationally as Critically Endangered, and is listed at the state level as Endangered.

It is one of Australia’s most narrowly-distributed species. The total distribution is extremely small and almost certainly fragmented into two areas.

The larger area of suitable habitat is approximately 10 km²; the other patch is about 4 km². In the larger patch the gecko is reasonably common; in the smaller patch it is rare.

Between the two patches there is a narrow band of unsuitable habitat, which almost certainly separates the geckos. The total population size is not known, and we have to be careful not to overestimate the abundance of the gecko.

Even within suitable habitat their distribution is patchy.

The larger population is protected within Mt Cataract Forest Reserve and Paluma Range National Park.

Threats

Because of the small size of the Gulbaru Gecko’s habitat and population, it is vulnerable to anything that reduces or degrades the rainforest.

The primary threat is unmanaged burning, particularly late dry season fires that encroach into the rainforest from nearby open forest and pastoral areas.

Fires are a natural part of the landscape in this region but intense burning can chip away at the rainforest edge. This has happened over the last decade at one of the Gulbaru sites I’ve been visiting.

Invasive grasses growing at the rainforest boundary provide a thick, highly flammable fuel load that can exacerbate these effects.

Restriction of the Gulbaru Gecko to rocky areas such as gully lines affords the species some protection from fire, but it depends on the surrounding rainforest vegetation, which is vulnerable. Even small incursions from fire could further fragment populations.

Climate change is a potential threat to the species, for example if it leads to drier conditions and greater potential for fire. An unlikely but obvious direct threat to the species is quarrying, an activity that doesn’t occur within the distribution but does occur nearby in the region.

Strategy

Recently the larger fragment of the gecko’s habitat was protected under the state reserve system. The smaller fragment is not formally protected, but the leaseholders are aware of the species and its habitat requirements.

Further surveys are required to determine the fine-scale distribution of the Gulbaru Gecko and to estimate population size. An active program to reduce late dry season hot fires should be implemented.

Conclusion

The Gulbaru Gecko is a spectacular reptile that persists in a tiny area. It has clearly done so for a long time and with a little management to protect its rainforest habitat it will continue to do so.

Conrad Hoskin receives funding from the Australian Biological Resources Study (ABRS), the Australian Research Council (ARC) and the National Environmental Research Program (NERP). He is affiliated with the School of Marine & Tropical Biology, James Cook University (Townsville).
The Conversation

This article was originally published at The Conversation. Read the original article.

Thursday, August 22, 2013

Let's Put Threatened Species on the Election Agenda

by Stephen Garnett, Charles Darwin University; Hugh Possingham, University of Queensland; and John Woinarski, Charles Darwin University

Bennett's Wallaby (Macropus rufogriseus rufogriseus) juvenile, Maria Island, Tasmania, Australia (Photo credit: Wikipedia)
The Coalition will instate a Commissioner for Threatened Species should it form government, according to shadow environment minister Greg Hunt.

Hunt says that, while management plans for threatened species exist, they are not being enacted thoroughly enough.

For many the announcement is the first sign of relief in a campaign, from both major parties, that has been almost devoid of positive environmental policies.

Most Australians do not want more of our species to become extinct, even if it does mean some constraints on development.

So, what needs to change if we’re to look after our threatened species properly?

The Coalition’s announcement also responds to messages underlying recommendations from a senate report on threatened species released last week - although a Commissioner was not explicitly recommended.

Out of the report’s 44 recommendations, five stand out.

The first is to bring the “official” roll-call of threatened species lists up-to-date.

Review after review of the Environment Protection and Biodiversity Conservation Act (EPBC Act) has recommended that the lists be updated regularly. Over 80% of the species on the list were simply adopted from an old list prepared in the 1990s.

Some should not be there at all - in fact there is one bird on the list, the Roper River Scrub-robin, that never existed; it was almost certainly a fraud.

Other species on the list are now known to be common, not threatened, and drive both government regulators and industry to distraction. Because they are on the list, conditions insisting on conservation work for species that need no protection have to be imposed on development proposals.

Worse still are the species that should be on the lists but aren’t. The pace of threatened species bureaucracy is not keeping up with our growing knowledge about threatened species.

Such unlisted threatened species live now in an administrative limbo in which they have no protection from development (or other factors).

While the existing vetting body (the Threatened Species Scientific Committee) does its level best to keep up with public nominations to get on the list, the process is hopelessly under-resourced and woefully slow.

The senate inquiry recommends that the process of adding or deleting species from the official list of threatened species should be expedited using the pool of talent and goodwill present in the wider community of experts.

The second major recommendation is for dedicated threatened species funding. A few years ago the Commonwealth began to emphasise a landscape approach to biodiversity conservation rather than funding many individual programs for particular threatened species.

This policy change was hoping to focus on the causes and broader picture of landscape dysfunction rather than on the symptoms (individual threatened species).

But the change led to abandonment of the essential management of individual threatened species and their threats, and has had some catastrophic consequences. As recognised by the senate inquiry, there needs to be a balanced portfolio of investment in both landscapes and species.

Our research suggests that A$10 million a year should secure all Australian birds from extinction. We estimate that dedicated funding of about A$100 million a year could prevent further extinctions of just about all Australian species.

The amounts are not unreasonable. Importantly most of the money would go into creating jobs in rural and remote areas where the threatened species live, strengthening local community economies and giving value to lands that are often useless for farming or other commercial use.

Also, threatened species investments are highly effective. A submission to the inquiry from BirdLife Australia listed a string of extraordinary successes in threatened species management in Australia.

We can turn things around and secure Australia’s natural heritage for the price of two cappuccinos per Australian per year. The latest success is the extraordinarily well-conceived and executed removal of rabbits and rodents from Macquarie Island, but there have been many others over the years.

The senate also recommends that long-term funding be committed. AusAID now makes commitments to fund programs for eight years, with the potential to extend after reviews at four years.

Threatened species funding should adopt the same approach. It is inconceivable that the deep-rooted problems affecting many Australian threatened species can be remedied in the one to three-year projects typical of conservation grants.

The inevitable consequence of such ephemeral funding is chaotic project management, failure, frustration, waste and concern from auditors about the poor return on investment. The management of threatened species is a long-term commitment.

The fourth key recommendation is to spend the money efficiently. Scattering the money to squeaky lobby groups and needy electorates will squander it. Australia leads the world in research on how best to allocate conservation funds.

The Commonwealth, through the National Environment Research Program, funds a centre led by one of us for this very purpose.

Research on cost-effective conservation allocation is already being followed by Tasmania, New South Wales and New Zealand. It is time for the Commonwealth to act on the findings of its own far-sighted research investments.

Finally threatened species conservation needs proper research and planning. All successful conservation programs to date have been built on a good knowledge base. That then feeds into good planning. A random sample of recovery plans suggested that they fail as often as they succeed.

But the good ones, such as the South Coast Threatened Bird Recovery Plan in Western Australia, have proved critical to threatened species management, bringing together diverse teams with a common purpose to retain one or more species for our descendants to enjoy.

The Senate committee recommends that a collaborative recovery planning approach, bringing together all key actors likely to be involved in recovery, be supported. We concur, provided such teams are managed properly.

There is still time for the major parties to commit to retaining all species in Australia. Such a commitment would be good value at twice the price.

But it will require more than wishful thinking and platitudes: this recent senate inquiry provides the strategic approach on which enduring success can be built.

Stephen Garnett receives funding from the Australian Research Council and serves on committees advising BirdLife Australia and the Northern Territory Government.

Hugh Possingham receives his primary funding from The Australian Research Council, DSEWPaC National Environmental Research Program and international science-based non-government organisations, WWF Australia and small grants from other organisations. He is affiliated with The Wentworth Group of Concerned Scientists, The Nature Conservation Society of Australia and Birds Australia.

John Woinarski receives funding from the National Environmental Research Program for research on declining native mammals, but this piece is not directly related to that support.
The Conversation

This article was originally published at The Conversation. Read the original article.

Wednesday, August 21, 2013

Fukushima

by Ingolf Eide, Online Opinion: http://www.onlineopinion.com.au/view.asp?article=15377&page=0

Spent fuel pool (Wikipedia)
A recent Reuters article ("After disaster, the deadliest part of Japan's nuclear clean-up") proved something of a rabbit hole.

Having ventured down into this unfamiliar terrain, I found that new tunnels kept opening up.

The 2011 earthquake and tsunami left 400 tons of "highly irradiated spent fuel" more or less hanging in the sky 30 metres up in Reactor Building No 4.

Its roof, and much else, was pulverised by a hydrogen explosion so there's no containment structure left.

Only desperate efforts in the immediate aftermath, when all power sources were knocked out, kept the pool in which the fuel rods are stored covered with water.

It wasn't alone in suffering severe damage.

Reactor Nos 1, 2 and 3 (which were all online when disaster struck) are now in permanent shutdown with their reactor cores largely or entirely melted down and sitting in intensely hot lumps at the bottom of their containment chambers.

Vast quantities of water keep their temperature within tolerable bounds but much of it is leaking into the groundwater and, eventually, the Pacific.

Three things set No 4 apart. First, it has far more spent fuel in its cooling pond than any of the others, because for maintenance purposes the entire fuel contents of its reactor had been transferred to the pond only four months previously.

Second, because of that transfer, some 550 of the 1231 used fuel rod assemblies were much more radioactive than normal.

And, finally, the building itself is structurally unsound. Tokyo Electric Power Co (TEPCO) have done some shoring up, but it wouldn't take too much of a shake to crack it, or maybe even tip it over.

D-Day for TEPCO's plan to move this spent fuel to a safer location is nigh. Since the infrastructure to handle spent fuel was destroyed, they've had to recreate that capacity from scratch. Handling fuel rod assemblies is a delicate business and no one can know if they'll succeed.

The plan is to start in November and finish within a year. It's just one (particularly important) piece of the wind-down of Fukushima, estimated by a spokesman "to take about 40 years and cost $11 billion." The total cost for Japan may range up to $100 billion.

There were some alarming scenarios raised in the article.

No one knows how bad it can get, but independent consultants Mycle Schneider and Antony Froggatt said recently in their World Nuclear Industry Status Report 2013: "Full release from the Unit-4 spent fuel pool, without any containment or control, could cause by far the most serious radiological disaster to date."

And Arnie Gundersen talked about a few ways that sort of release could happen.

"There is a risk of an inadvertent criticality if the bundles are distorted and get too close to each other," Gundersen said. He was referring to an atomic chain reaction that left unchecked could result in a large release of radiation and heat that the fuel pool cooling system isn't designed to absorb. "The problem with a fuel pool criticality is that you can't stop it. There are no control rods to control it," Gundersen said. "The spent fuel pool cooling system is designed only to remove decay heat, not heat from an ongoing nuclear reaction." The rods are also vulnerable to fire should they be exposed to air, Gundersen said.

Fascinating, so much so that I badly wanted to get a better handle on the processes at work. Was all this unduly alarmist, or not? It seems not. Let me share some of the fruits of my trip down the rabbit hole.

When reactor fuel is used up, no longer useful for fission purposes, it's replaced.

The spent fuel is still intensely radioactive, however, which simply means some of the materials created through the fission process are unstable and constantly emit a stream of particles and gamma rays until they attain a more stable structure.

It must therefore be stored under water for at least a year and usually much longer.

The water in the spent fuel pool does two things: through a heat exchange mechanism, it keeps the fuel assemblies at reasonable temperatures; and, it captures the radiation streaming out of the fuel rods.

There's a lot of this spent fuel about, some 260,000 tons, most still in storage ponds and growing by 8-10,000 tons per year (the US has some 65,000 tons, Japan about 19,000 tons). It's seriously nasty stuff.

Spent fuel from light water reactors (like Fukushima's) is composed of 93.4% uranium (with only ~0.8% U-235, the fissile isotope), 5.2% fission products, 1.2% plutonium and 0.2% other transuranic elements. It's the middle two that are the potential killers.

Some of the fission products break down very quickly to more stable (i.e. less harmful) elements, but two hang around for a long time and are particularly dangerous: strontium-90 and caesium-137. Both have half-lives of about 30 years, and they mimic calcium and potassium respectively.

They're therefore rapidly absorbed into the food chain and become concentrated in higher-order creatures, like us. Iodine-131 is similarly lethal; it mimics iodine and concentrates quickly in the thyroid.

The good news is that its half-life is only eight days, and so it's only a factor in nuclear explosions or reactor accidents.

All this radiation also generates heat. Once the spent fuel is removed from the reactor, the radiation (and heat generation) tails off dramatically; after one year, it's down by a factor of 10 and by 10 years it's reduced to about 1% of its starting level.

For much of the first 100 years, the radioactivity comes principally from the fission products with strontium-90 and caesium-137 dominant after the first 10 years. After a few hundred years only the transuranics are still going strong: plutonium, americium, neptunium and curium.
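
To see why those two isotopes dominate for so long, here is a minimal sketch of simple exponential decay using their roughly 30-year half-lives. The half-life values are standard approximations; the sketch ignores shorter-lived fission products, daughter isotopes and the transuranics, so it is indicative only.

```python
# Indicative decay of Cs-137 and Sr-90 activity, using their ~30-year
# half-lives. Real spent fuel contains many other isotopes, so this is
# only a rough sketch of the long-lived fission-product tail.

HALF_LIVES_YEARS = {"Cs-137": 30.2, "Sr-90": 28.8}  # approximate values

def remaining_fraction(half_life: float, years: float) -> float:
    """Fraction of the original activity left after `years`."""
    return 0.5 ** (years / half_life)

for isotope, half_life in HALF_LIVES_YEARS.items():
    for t in (10, 30, 100, 300):
        frac = remaining_fraction(half_life, t)
        print(f"{isotope}: after {t:3d} years, {frac:6.1%} of the activity remains")
```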

Still, the radiotoxicity of spent fuels remains a potentially lethal hazard for hundreds of thousands of years. And, given the plutonium content, the fuel also of course has to be safeguarded for all that time.

What about the heat generation, though? This was a question I wanted to get to the bottom of. With radiation tailing off so quickly, maybe after a year or two even exposure of spent fuel rods to the air wouldn't be catastrophic, in which case Fukushima might be through the most dangerous phase.

Not so, unfortunately. Gundersen and co were not exaggerating. In "Technical Study of Spent Fuel Pool Accident Risk" (2001), the US Nuclear Regulatory Commission wrote:
In summary, 60 days after reactor shutdown for boildown type events, there is considerable time (> 100 hours) to take action to preclude the fission product release or zirconium fire before uncovering the top of the fuel. However, if the fuel is uncovered, heatup to the zirconium ignition temperature [900°C] during the first years after shutdown would take less than 10 hours even with unobstructed air flow. After five years, the heat up would take at least 24 hours even with obstructed air flow cases. [PWR is a Pressurised Water Reactor; BWR a Boiling Water Reactor. Both are light water reactors]

So, however one gets there (whether by leakage, a crack in the pool, or, heaven forbid, another earthquake, or even a slow boiling off if the heat exchange mechanisms were to fail), even five years after removal from the reactor, uncovered fuel assemblies could still heat up and burn.

Before such a fire starts, the cladding around the fuel rods would almost certainly swell and burst, releasing "radioactive gases present in the gap between the fuel and clad." It's the next stage that's truly catastrophic.
If the fuel continues to heat up, the zirconium clad will reach the point of rapid oxidation in air. This reaction of the zirconium and air, or zirconium and steam is exothermic (i.e. produces heat). The energy released from the reaction, combined with the fuel's decay energy, can cause the reaction to become self-sustaining and ignite the zirconium. The increase in heat from the oxidation reaction can also raise the temperature in adjacent fuel assemblies and propagate the oxidation reaction. The zirconium fire would result in a significant release of the spent fuel fission products which would be dispersed from the reactor site in the thermal plume from the zirconium fire.
Unlikely as it is, this is the nightmare scenario at Fukushima.

Lethal radioactive particles, notably strontium-90 and caesium-137, would stream into the atmosphere and be dispersed according to the vagaries of the weather, winds and currents.

Even worse, were this process to unfold at Reactor No 4 (or at any of the others, of course; it's just that No 4 is the most vulnerable), it might become impossible to prevent similar processes eventually unfolding in all the other spent fuel repositories at the site (substantial as it is, the load at No 4 is little more than 10% of the total).
It has been a standard practice in the nuclear industry to avoid consideration of all of these possibilities, based on the assumption that there will be "lots of time" to react to any emergency involving the spent fuel pool, as it will normally take days for the spent fuel to reach the melting point and it will be a "simple matter" to refill the pools with water if necessary. This ignores the fact that major structural damage may make it impossible to approach the spent fuel pool due to the lethal levels of gamma radiation emanating from the spent fuel once the protective shielding of the water is gone.
To judge by various estimates found here and there on the net, the damage from such an ultimately uncontrolled spent fuel fire hardly bears thinking about.
Based on U.S. Energy Department data, a total of 11,138 spent fuel assemblies are being stored at the Dai-Ichi site, nearly all of them in pools. They contain roughly 336 million curies (~1.2E+19 Bq) of long-lived radioactivity. About 134 million curies is Cesium-137 - roughly 85 times the amount of Cs-137 released at the Chernobyl accident as estimated by the U.S. National Council on Radiation Protection (NCRP). [author's emphasis]
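
For what it's worth, the unit conversion in that estimate holds up: a curie is defined as 3.7 × 10^10 becquerels, so 336 million curies does come to roughly 1.2 × 10^19 Bq. A quick sanity check (in Python; my own arithmetic, not part of the quoted material):

    BQ_PER_CURIE = 3.7e10  # 1 curie = 3.7e10 becquerels, by definition

    total_ci = 336e6   # quoted long-lived inventory at Dai-Ichi, in curies
    cs137_ci = 134e6   # quoted caesium-137 share, in curies

    print(f"Total inventory: {total_ci * BQ_PER_CURIE:.2e} Bq")  # ~1.24e+19 Bq
    print(f"Cs-137 alone:    {cs137_ci * BQ_PER_CURIE:.2e} Bq")  # ~4.96e+18 Bq
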
Viewed in this light, the light of what might have been, March 2011 starts to look like a win. For days (perhaps weeks) after the 11th, things teetered on the brink.

Had just one spent fuel pool seriously cracked, had some pumps failed for an hour or two too long, it might have been all over; there may then have been no way to indefinitely keep the fuel assemblies covered with water and the whole thing would have fed on itself.

As it was, all that escaped into the air was a comparative smidgen. Enough, mind you, to cause the evacuation of 160,000 people and create a 20 km exclusion zone which is still mostly in force.

Thankfully, the evacuations were ordered early enough to avoid any significant radiation damage to residents surrounding the plant.

The curious thing is, in one sense radioactive materials are more manageable than I'd thought. Particle emissions, for example, are for the most part easily blocked: alpha particles by as little as a sheet of paper, beta particles by a few millimetres of aluminium.

Gamma rays are altogether more vicious, although even they can be shielded against with sufficient depth of water, concrete, lead or steel. The real trouble starts when radioactive particles get out into the environment.

Then, they very quickly end up in living creatures where they wreak their damage directly. Once they've escaped and been scattered by wind, water and rain, the deed is done, much of it irrevocable.

And the effects, well, they go on, and on. Thousands, tens of thousands, hundreds of thousands of years. Much further into the future than Neanderthal man lies in the past. Have we taken leave of our senses, perhaps, to take such risks, however slight, for so little gain?

This article was first published on Conversation at Stanley Park.

Friday, August 16, 2013

Australian Endangered Species: Leatherback Turtle

by Mark Hamann, James Cook University and Kimberly Riskas, James Cook University
 
Don’t be fooled, this little guy will grow to be the largest turtle in the world. Flickr/Jennie - My Travels

Leatherback turtles (Dermochelys coriacea) are the largest, oldest and most widely distributed of the world’s marine turtles.

Its appearance alone distinguishes the leatherback from its relatives: shell-less and bluish black in colour, with seven fleshy ridges along its back, and dappled all over with white spots.

It is the only extant member of the ancient Dermochelyidae family, which first appeared around 100 million years ago.

Adult leatherbacks can grow to lengths of up to two metres and weigh as much as 700 kg. In addition to its great size, the leatherback also undertakes the longest migration of any marine turtle, swimming on average 6,000 km between feeding and nesting grounds.

Their diet consists almost entirely of jellyfish but also includes tunicates (relatives of sea squirts) and other soft-bodied invertebrates.

In pursuing prey, leatherbacks can dive to depths exceeding 1,000m - a part of the ocean beyond the physiological limits of all other diving animals except beaked whales and sperm whales.

Like these mammals, leatherbacks have adaptations to survive the lower temperatures and crushing pressures of these deep dives.

Leatherbacks can be found in all of the world’s tropical and temperate oceans, and have been recorded in frigid sub-polar waters far outside the ranges of other marine turtles.

In Australia, leatherbacks forage in coastal waters around much of the country. Regular sightings occur in Western Australia, the Gulf of Carpentaria, eastern Australia and the cooler waters of southern Australia.

Leatherbacks nest sporadically in Australia, particularly in the Northern Territory. No nesting has been recorded in eastern Australia since 1996.

There are no significant Australian rookeries, as nesting in the western Pacific region is concentrated in neighbouring countries such as Indonesia, Papua New Guinea and the Solomon Islands.

Most of the world’s leatherback turtles live in the Atlantic Ocean. The world’s largest nesting population is in Gabon, in western Africa.

Status

There are seven distinct populations of leatherback turtles, identified by genetic and migratory studies. Each of these varies in size, range, status and trend. There are large populations in the Atlantic Ocean, and the north-west Atlantic population is increasing.

But the Pacific Ocean populations have declined by over 80% in the past 30 years. In an extreme case of prolonged egg harvest and poor management of turtle bycatch, the number of nesting females at Terengganu, Malaysia, declined from over 3,000 in 1968 to just two in 1993. This population has not recovered and is not expected to do so.

The IUCN is currently reviewing the global status of the leatherback turtle. In Australia, the EPBC Act classifies the leatherback as endangered.

Threats

The leatherback faces myriad threats throughout its range. Decades of consumptive use, such as the collection of eggs for food and for use as aphrodisiacs, have nearly wiped out leatherbacks in Indonesia, Mexico and Costa Rica.

Large numbers of leatherbacks are also captured incidentally in commercial fisheries. Leatherback foraging habitat often overlaps with that of valuable pelagic fish species, such as tuna and swordfish.

As a result, high levels of adult capture and mortality in these fisheries pose a grave threat to populations. Entanglement in discarded nets and lines, or “ghost fishing”, is a significant but understudied problem.

Plastic pollution of the world’s oceans is a ubiquitous and pervasive threat to leatherback turtles, which mistake floating plastic bags and other debris for jellyfish.

In 2009, a scientific study found plastic debris in one third of leatherback turtle necropsies across the globe. Leatherbacks are especially at risk of eating plastic, which is carried by ocean currents to the locations where the turtles feed.

Further threats to leatherbacks include loss of nesting beaches to coastal development, light pollution, nest predation by feral animals and continued illegal egg harvest.

Strategy

Conserving leatherbacks will require action from many different nations across the globe. Thankfully, their situation has not gone unnoticed and efforts are already underway in several countries.

For example, in Costa Rica, the Leatherback Trust is a non-profit foundation that works to engage the community and protect nests at crucial beaches in Las Baulas National Park. Since the Trust's inception, egg poaching within the park has been eliminated.

In 2012, the US National Marine Fisheries Service designated nearly 44,000 square km of critical habitat area for foraging leatherbacks off the west coast of the United States.

Although this population nests in the western Pacific, the adults feed off the US coast, and are often caught as bycatch in gillnet and longline fisheries.

In the early 2000s, governments and NGOs helped establish community-based conservation in Papua New Guinea, West Papua in Indonesia, Solomon Islands and Vanuatu.

These projects have gathered data that are essential for managing leatherbacks in their key Pacific nesting grounds. These projects need ongoing support.

In Australia, leatherbacks are protected under state and Commonwealth legislation, and commercial fisheries are working to reduce leatherback bycatch by implementing bycatch and discarding workplans.

All commercial fishermen are legally obligated to report any interaction between leatherbacks and fishing gear to the relevant management authority.

Conclusion

The status of the leatherback turtle varies from population to population across the globe, from healthy in the Atlantic to seriously declining in the Pacific. There is hope in the knowledge that key governments in the Pacific take the issue seriously.

Perhaps most notably, the involvement of local communities in conservation efforts augurs a better future for the leatherback. 

The Conversation is running a series on Australian endangered species. See it here

Mark Hamann receives funding from the National Environmental Research Program and has previously received funding from the ARC, industry, and government. He is Co-vice Chair of the IUCN Marine Turtles Specialist Group (Australasia).

Kimberly Riskas does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

This article was originally published at The Conversation. Read the original article.

Sunday, August 11, 2013

Senate Inquiry on Extreme Weather Won't Help Australia Prepare

by Jean Palutikof, Griffith University and Sarah Boulter, Griffith University

The Australian Senate inquiry on preparedness for extreme weather events was, according to Greens Senator Christine Milne, an opportunity to bring “urgency and ambition” to the issue.

The final report was released this week, and Australia has once again missed the opportunity to address the challenge of future extreme weather events.

The inquiry’s mandate was to report on recent trends in extreme weather events, and on what we know about the future occurrence of extreme events under climate change. It was also asked to assess Australia’s preparedness: in key sectors, at all levels of government and within emergency services.

Most importantly, it was to report on progress in developing effective national coordination of climate change response and risk management, and any gaps in Australia’s Climate Change Adaptation Framework.

So, how did it do? Did it improve Australia’s preparedness for climate change? Did it deliver a comprehensive review of the state-of-play? Did it make recommendations that would form a firm foundation for Australian climate change response policy?

Sadly, the answer on all counts has to be no.

The report is sound in its review of the science, but there is nothing new. The timing of the report is unfortunate in this respect - the Intergovernmental Panel on Climate Change’s fifth assessment of the physical science will be released next month and will contain the definitive international scientific position on climate change, including future occurrences of extremes.

Until then, fifth assessment findings are embargoed, meaning that the report has had to rely on the 2011 IPCC special report on extremes.

The report comes into its own in its review of the work being done in climate change science in Australia, on the impacts (especially the costs) of past events, and on responses. It cannot be faulted for thoroughness and even-handedness.

But, essentially, it is telling us what we already know. We have heard it many times before from equally authoritative sources - some extremes are becoming more common, are costly to the Australian economy and people, and are likely to continue this way in future.

The real question is, what are we going to do about it? And there the report falters.

It makes a number of recommendations that are piecemeal and uncoordinated. It recommends that disincentives to insurance should be removed and that authorities should work with community service organisations in planning for and responding to extremes.

Flood mapping should be prioritised, and building codes adjusted. Facilities caring for vulnerable groups - such as hospitals and aged care homes - should have emergency management plans in place, and emergency services should be better coordinated.

There’s a reason these sound familiar - last year’s Productivity Commission report Barriers to effective climate change adaptation made some very similar recommendations.

Indeed, the final recommendation of the Senate Report is to continue to implement the recommendations of the Productivity Commission report.

The report’s recommendations fail to hang together to deliver a basis for comprehensive and coherent policy. Their piecemeal nature means they cannot address the challenge of a world in which many extremes in many places are likely to become more common and more severe.

Australia is no further along in meeting the challenge of climate change.

This can be contrasted to the recently released report from the UK government on Making the country resilient to a changing climate. This is comprehensive, insightful and, most importantly, commensurate to the climate change challenge.

Why the difference? Purely and simply, because there is a statutory obligation in the UK to deliver to parliament, at specified intervals, climate change risk assessments for the nation.

These must then be followed up by plans to evaluate the immediacy and size of the risk, and the actions that need to be taken. This statutory obligation is set out in the Climate Change Act 2008.

What will it take for Australia to bring into law this level of reporting requirement? It will require politicians and public servants who are sufficiently versed in the science to accept the reality of climate change.

They will have to recognise the need for action now and, finally, be emboldened by the prospect of Australia taking a global leadership role in addressing climate change.

At the moment, we don’t meet even the first of these requirements. The Senate report is the product of that failure.

Jean Palutikof is Director of the National Climate Change Adaptation Research Facility, based at Griffith University, which received financial support from the Australian government through the Department of Climate Change and Energy Efficiency.

Sarah Boulter is affiliated with the National Climate Change Adaptation Research Facility hosted by Griffith University.

This article was originally published at The Conversation. Read the original article.