Our 20th Century Space Legacy – Part 5: America Goes to Venus

The 1960s were not only the era of Apollo and the race to the Moon; they were also a time for experimenting with technology that could get us to our closest planetary neighbours, Venus and Mars. The Soviets with the Luna program, and the United States with Ranger, Lunar Orbiter and Surveyor, developed spacecraft systems for missions beyond Earth orbit. Their destination was the Moon. But the opportunity to extend the reach of robotic spacecraft to the neighbouring planets led to ambitious programs with multiple robotic probes and orbiters capable of going beyond the Moon.

In this blog we look at the technology and missions that unlocked the mysteries of Venus. Before humans sent unmanned spacecraft there, Venus was a total mystery. The remarkable scientific discoveries made by robotic spacecraft starting in the 1960s changed our understanding of our nearest planetary neighbour. But we also gained in many other ways. We developed new materials capable of withstanding the extreme temperatures and acidity of Venus’ surface environment. We learned about the extreme effect of a runaway greenhouse on a planet, contributing to a better understanding of the impact of rising greenhouse gas levels here on Earth. We applied the laws of physics to develop new celestial navigation techniques that enhance spacecraft performance. We perfected remote sensing and radar technology that we use here on Earth today. Who would have thought that in going to Venus we would be able to exploit the technologies we used to explore that planet to help us find new sources of mineral wealth, oil and gas here on Planet Earth? It’s a great story and one in which the United States played a leading role.

Mariner Goes to Venus

The American Mariner program launched Earth’s first interplanetary spacecraft. Mariner spacecraft used the proven designs of the Moon-bound Ranger missions. Each spacecraft incorporated within its magnesium casing a panel of science experiments, communications, data encoding and computing, timers, attitude control for navigation, a power supply, a battery and a rocket motor. Mariner-2, the first to reach Venus, housed a 1,000 watt-hour silver-zinc battery recharged by two deployed solar panels. A directional dish antenna with a 3-watt transmitter extended from the side and below the craft, relaying data to and receiving commands from Earth ground stations.

The program’s launch vehicle was the Atlas, the American workhorse for medium payloads and the Mercury program, topped by an Agena second stage to escape Earth orbit. During its December 1962 flyby, Mariner-2 passed within approximately 35,000 kilometers (about 21,600 miles) of the planet’s surface. Mariner-2 made several discoveries, including Venus’ slow retrograde rotation, its high surface temperatures and atmospheric pressure, its atmospheric composition (mostly CO2), its continuous cloud cover rising up to 60 kilometers (37 miles) above the surface, and its lack of a detectable magnetic field.

The Mariner program was not exclusively focused on Venus; Mars was its other destination. So it wasn’t until Mariner-5, a surplus backup spacecraft from a Mars mission, that the Americans returned to Venus, launching in June 1967 and arriving in October, when the spacecraft passed within 4,000 kilometers (approximately 2,500 miles) of Venus’ surface while taking magnetic readings and studying the ultraviolet emissions of its atmosphere. With most of the remaining Mariners dedicated to Mars, it was 1973 before the United States launched the next Mariner to visit Venus. This time, however, the mission had two destinations, Venus and Mercury. The Mariner-10 mission flew within 4,200 kilometers (2,600 miles) of Venus’ surface before using the planet’s gravity to help navigate on to Mercury. This type of celestial navigation, the gravity assist, had never been tried before and has since become standard practice in planetary missions.
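
A gravity assist trades nothing but geometry for speed: in the planet’s frame of reference the flyby speed is unchanged, but rotating the velocity vector changes the spacecraft’s speed relative to the Sun. Here is a minimal patched-conic sketch of the idea; the periapsis radius and approach speed below are illustrative assumptions, not Mariner-10’s actual trajectory data.

```python
import math

# Venus gravitational parameter, km^3/s^2
mu = 3.2486e5
# Assumed periapsis radius: ~4,200 km altitude plus Venus' ~6,051 km radius
r_p = 10251.0
# Assumed hyperbolic excess (approach) speed, km/s
v_inf = 5.0

# Turning angle of the hyperbolic flyby: delta = 2*asin(1 / (1 + r_p*v_inf^2/mu))
delta = 2.0 * math.asin(1.0 / (1.0 + r_p * v_inf**2 / mu))

# In the planet's frame only the direction changes, so the magnitude of the
# heliocentric velocity change is |dv| = 2 * v_inf * sin(delta/2).
dv = 2.0 * v_inf * math.sin(delta / 2.0)

print(f"turning angle: {math.degrees(delta):.1f} deg, heliocentric |dv|: {dv:.2f} km/s")
```

With these assumed numbers the flyby bends the trajectory by roughly 68 degrees and changes the Sun-relative speed by several kilometers per second, all without burning propellant, which is exactly why the technique became standard.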

Mariner-10 provided the first high-quality pictures of the two innermost planets of the Solar System. Venus appears on the left and Mercury on the right. Source: NASA

America’s Next Venus Visitors

After Mariner-10 came Pioneer Venus. This ambitious research program combined two spacecraft, an orbiter and a multiprobe, both launched in 1978 on Atlas-Centaur rockets. Pioneer Venus-1, the orbiter, brought radar mapping to planetary exploration. Known as range-Doppler radar, it could determine surface features, elevations and other characteristics hidden under the planet’s clouds. Although the formal mission of Pioneer Venus-1 ended in 1982, most of its instruments continued to operate for another decade, until the spacecraft’s orbit decayed and it burned up in the atmosphere in 1992. The map it compiled, seen in the image below, gave us the first comprehensive image of the planet’s entire surface, showing plateaus and mountains as high as or higher than those on Earth. In addition to the radar imager, the spacecraft carried 12 experiments dedicated to studying the gravity, magnetic properties and atmosphere of the planet.

Pioneer Venus-1 used Range-Doppler radar to give us our first comprehensive map of the surface of Earth's closest neighbour. Source: NASA

Pioneer Venus-1’s companion, Pioneer Venus-2, demonstrated remarkable robotic skills on its approach to Venus, releasing one large and three smaller probes, each designed to collect data during a controlled descent into the atmosphere (the large probe slowed by parachute). This too was an American first. The probes produced interesting data indicating little atmospheric haze below 30 kilometers (approximately 19 miles). Two of the three small probes survived impact with the surface and continued to transmit, one sending telemetry for more than an hour before succumbing to the high surface temperatures.

The Technical Achievements of Magellan

A decade later Magellan, a second orbiter carrying new radar technology, arrived in 1990 and settled into a polar orbit around Venus. Launched from one of the American Space Shuttles, Magellan carried radar with high-resolution imaging capability. This technology, called Synthetic Aperture Radar, or SAR for short, uses the spacecraft’s own motion along its orbit to mimic a much larger antenna, digitally processing the returned echoes into high-resolution images. Magellan mapped 98% of the planet, creating images like the one seen below.
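
Why does synthesizing a large aperture help? Two rules of thumb from radar theory make it concrete: slant-range resolution depends on the pulse bandwidth, while a focused strip-map SAR achieves an azimuth (along-track) resolution of about half the physical antenna length, independent of range. The antenna length and bandwidth below are assumed values for illustration, not Magellan’s published specifications.

```python
# Speed of light, m/s
c = 3.0e8
# Assumed physical antenna length, m
antenna_length = 3.7
# Assumed radar chirp bandwidth, Hz
bandwidth = 2.26e6

# Slant-range resolution is set by the pulse bandwidth: c / (2B)
range_res = c / (2 * bandwidth)

# For a focused strip-map SAR, azimuth resolution is ~L/2, independent of range:
# the synthetic aperture grows with distance exactly fast enough to compensate.
azimuth_res = antenna_length / 2

print(f"range resolution ~{range_res:.0f} m, azimuth resolution ~{azimuth_res:.2f} m")
```

The counterintuitive result is the second line: a smaller physical antenna actually gives finer azimuth resolution, because it illuminates each surface point for longer and so builds a longer synthetic aperture.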

The comprehensive picture of Venus that Magellan relayed to Earth gave scientists an understanding of the planet’s geological record. Venus’ surface, unlike Earth’s, appeared to be very young, most of it only 500 million years old and created during a planet-wide period of active vulcanism. From the data Magellan collected, scientists concluded that the planet’s environment had remained consistent since that volcanic episode. And unlike Earth, Venus showed no moving crustal plates and no continental drift at work.

The evolution of radar imaging technology is demonstrated when you compare Pioneer Venus-1 mission results with those of Magellan seen in this picture. The image is colour-enhanced to bring out details. Source: Jet Propulsion Laboratory, NASA

Magellan’s mission ended in 1994 with the firing of its onboard rockets, sending it into Venus’ atmosphere to burn up. Before that, however, the spacecraft not only provided spectacular radar maps showing details as small as 100 meters (330 feet), but also global gravity field maps. During the Magellan mission, controllers were also able to test for the first time a new maneuvering technique called aerobraking, which uses a planet’s atmosphere to slow and steer a spacecraft. This was to prove useful in later interplanetary flights.

Since Magellan – Not a Lot Happening with Venus

Venus has largely been forgotten since the Magellan mission. The planet has been visited by spacecraft going to other planets in the Solar System using Venus as a gravity-assist for acceleration, deceleration and course correction. The Galileo spacecraft used Venus this way in 1989 on its way to study Jupiter. Cassini-Huygens did two gravity-assist flybys of Venus in 1998 and 1999 before heading to Saturn. And most recently, Messenger used Venus in two flybys in 2006 and 2007 to adjust its course and speed on its mission to Mercury.

In our next blog on the subject of space we look at the Soviet Venus missions and their contribution to the advancement of technology and our scientific understanding.

Bioengineering Update – If the Climate Changes Wouldn’t it be Easier to Change Us?

Re-engineering the planet may be tougher than re-engineering humanity, argues S. Matthew Liao of New York University in an article, “Human Engineering and Climate Change,” published in Ethics, Policy & Environment. With the impact of greenhouse gases and rising atmospheric temperatures, and with the human population expected to exceed 9 billion by mid-century, Liao and his co-authors, Anders Sandberg and Rebecca Roache of Oxford University, make the case for altering our species to better adapt to a changing world.

The arguments for this approach include:

  1. Human engineering may be potentially less risky than geo-engineering the planet to mitigate climate change.
  2. Human engineering could decrease human-induced climate change.
  3. Human engineering could solve many of our social and health problems and contribute to a more sustainable planetary footprint.
  4. Biomedical modification is already within sight with the mapping of the human genome and our ability to insert genetic information into the DNA of individuals suffering from genetically induced diseases.

Why would re-engineering us be better than re-engineering the planet?

  1. Geo-engineering is a very imperfect science and climate scientists are still trying to perfect models to explain the interaction between atmosphere, solar radiation, our oceans and land masses.
  2. Geo-engineering requires all nations to commit to a common strategy for reducing our carbon footprint. Humans have shown a disinclination to achieve any kind of consensus on a common approach to date.
  3. Geo-engineering could have unforeseen consequences that further damage the environment globally. We could create a runaway cold event or induce more rapid heating if we choose the wrong technological solution.

What kinds of changes would we consider making to humans?

  1. Make humans smaller so they need less to eat and use fewer resources.
  2. Make all humans vegans by genetically modifying them to not tolerate meat and thus free up land currently used for the meat industry and repurpose it for crops.
  3. Make humans less prolific in reproduction to reduce population.
  4. Make humans more sympathetic and altruistic to reduce war and conflict.

I recently read Margaret Atwood’s dystopian novel of the future, Oryx and Crake. If you are not familiar with it, the novel centres on bioengineering, with outcomes far darker than those suggested by the proposals of Liao et al. The authors freely admit that human engineering solutions may be considered preposterous, but if we are to survive as a species on a habitable Earth, it is an option worthy of debate.

Agriculture – Part 4: The Impact of Climate Change in the 21st Century

What will be the impact on agricultural production of increases in carbon dioxide and other greenhouse gases?

The American Society of Agronomy, Crop Science Society of America and Soil Science Society of America recently issued a position statement on climate change. In that statement the three societies stated “a comprehensive body of scientific evidence indicates beyond reasonable doubt that global climate change is now occurring and that its manifestations threaten the stability of societies as well as natural and managed ecosystems.”  The statement indicated that “changes in climate are already affecting the sustainability of agricultural systems.”

Agriculture’s Impact on Climate

What part is our increasing need to produce food and our pattern of land use playing as a contributor to  climate change?

  1. Deforestation removes one of the planet’s most effective natural carbon sinks. Forests are great absorbers of CO2. By clear-cutting boreal forests in temperate-zone countries and by burning rainforests in tropical countries to clear land for sugarcane and other cash crops, we contribute to the rise of atmospheric CO2.
  2. Animal husbandry on an industrial scale creates methane (CH4), a gas with roughly 20 times the heat-trapping capacity of CO2. CH4 sources include not just the gases belched and eliminated by cattle, sheep, goats and other ruminants, but also gases derived from farm animal waste.
  3. Wetland agriculture such as rice paddy flooding results in the release of CH4.
  4. Animal waste decomposition contributes to the release of nitrous oxide (N2O), with roughly 290 times the heat-trapping capacity of CO2. Excessive application of nitrogen-based fertilizers also generates N2O.

Together these three greenhouse gases, CO2, CH4 and N2O, are significant contributors to atmospheric warming, and an estimated 10-15% of total greenhouse gas emissions can be attributed to agricultural activity.
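
These per-gas multipliers are what allow emissions of different gases to be added up on a common scale, as CO2-equivalents. A quick sketch using the multipliers quoted above and a purely hypothetical farm’s annual emissions (illustrative numbers, not measured data):

```python
# Warming multipliers relative to CO2, as quoted in the text above
gwp = {"CO2": 1, "CH4": 20, "N2O": 290}

# Hypothetical annual emissions in tonnes -- illustrative only
emissions_t = {"CO2": 100.0, "CH4": 15.0, "N2O": 1.0}

# CO2-equivalent total: each gas weighted by its warming multiplier
co2e = sum(emissions_t[gas] * gwp[gas] for gas in emissions_t)
print(f"total: {co2e:.0f} tonnes CO2-equivalent per year")
```

Note how the weighting changes the picture: in this hypothetical, the 15 tonnes of methane and the single tonne of nitrous oxide each outweigh the 100 tonnes of CO2 once converted to CO2-equivalents.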

Climate Change’s Impact on Agriculture

Climate change leads to aberrant weather conditions, temperatures, precipitation, and storms and winds that are destructive to traditional agricultural land use. Six notable climate change impacts are:

  1. Prolonged higher temperatures affecting early plant growth, flowering, pollination and subsequently the quality and quantity of  harvests.
  2. Precipitation changes leading to either prolonged dry periods, or excessive wet ones contributing to soil erosion, soil saturation and plant desiccation.
  3. Increased CO2 positively impacting some crops while harming others.
  4. The combination of higher temperatures, precipitation and increased CO2 interacting with other environmental factors such as ozone leading to reduced yields.
  5. Modification of soils, driven by all of the above, causing loss of fertility, decomposition of organic matter, increased salinity, reduced water capacity, altered biological composition, and increased erosion.
  6. Changes to local ecosystems introducing insect pests and weeds to areas where they normally were unseen.

How Agricultural Practices Can Change to Mitigate Contributions to Climate Change

In food crop production, the way soil, seeding, cultivation and harvesting are managed can contribute dramatically to reductions in greenhouse gas emissions. Agricultural best practices to increase the sequestering of carbon and other contributors to atmospheric warming include:

  1. Reducing tillage and minimizing the baring of soil between plantings.
  2. Ending monoculture by rotating the crops that farmers plant and harvest.
  3. Better management of irrigation so that water doesn’t end up in soil where there is no crop cultivation, and excessive water doesn’t cause soil saturation.
  4. Managing nitrogen-based fertilizer application so that it is appropriately used within the crop life cycle and not excessively applied.
  5. Planting pulse crops that get their nitrogen from the air and soil bacteria. Such crops include peas, lentils, chickpeas, faba beans, soybean and lupin to name a few.
  6. Altering cultural practices in rice-growing regions of the world to plant dry rice varieties to eliminate standing water in rice paddies.

According to “The New Atlas of Planet Management,” authored by Norman Myers and Jennifer Kent (University of California Press, 2005), domesticated animals outnumber humans on this planet by a factor of 4. There are 3 times as many chickens as people. The remainder consists of ruminants, pigs and domesticated pets. The predominant ruminants include cattle, sheep, goats, buffalo, camels and llamas, of which there are approximately 3 billion. Ruminants are the principal animal contributors of methane gas.

What changes do farmers have to implement in animal husbandry to mitigate greenhouse gas production?

  1. Changes to the types of grass grown for consumption by cattle.
  2. Changes to the mix and types of feed given to pigs and ruminants. For example, by feeding pigs reduced dietary protein, and by changing the grain mix to corn and soybean rather than barley and canola, CH4 emissions can be reduced by 20 to 40%.
  3. Better management of animal waste such as applying manure immediately to dry soil rather than storing it, adding straw or allowing it to remain wet. If stored in water farmers can trap CH4 using impermeable covers and use the vented trapped gas as a fuel for generating heat and electricity.

How Agriculture Can Adapt to Climate Change

As the atmosphere warms and precipitation and wind patterns alter, agriculture will feel the impact. The methods for mitigating agriculture’s contributions to climate change are much the same as those needed to adapt to it. Farmers will have to:

  1. Change the type of crops they grow, particularly if climate zones shift, transforming regions, for example, from wet to semi-arid, and from semi-arid to arid.
  2. Apply soil conservancy techniques to mitigate against wind erosion and other climate change variables.
  3. Apply drip irrigation to conserve water as farming areas dry out.
  4. Experiment with a wider variety of crops, choosing those most suited to the changing climate.
  5. Develop new strategies to manage new and unfamiliar insect and weed pests introduced by changes to the local ecosystem.
  6. Develop closed systems of agriculture using greenhouse technologies such as those described in my previous blog about rethinking the farm.

Geoengineering – Part 1: Reworking Our Planet’s Atmosphere in the 21st Century

The idea that climate is changing as a result of human activity on Earth has gained more and more credibility as we track rising global temperatures, ozone depletion, vanishing polar ice, shrinking alpine glaciers, and extreme weather systems that depart from recorded meteorological history. Humanity has several choices. We can stay the course, continuing to pump out atmospheric-warming pollution and see what happens; we can try to change humanity’s consumption of fossil fuels; or we can look at ways of re-engineering our environment to mitigate the greenhouse effect. Never before has humanity had to experiment on a global scale to address such a far-reaching problem. In 2009, Douglas Fisher wrote an article that appeared in Scientific American entitled “Engineering the Planet to Dodge Global Warming.” In it he wrote, “The idea of tinkering with planetary controls is not for the faint of heart. Even advocates acknowledge that any attempt to set the Earth’s thermostat is full of hubris and laden with risk.”

Hubris and risk… absolutely. But humans have been tinkering with climate for years. In 1946, an American, Dr. Vincent Schaefer, tried to create artificial precipitation by seeding clouds with dry ice; his colleague Bernard Vonnegut soon showed that silver iodide crystals worked as well. There is no certainty that cloud seeding actually works, but enough anecdotal evidence has accumulated to make many people around the world attempt it. Probably the most famous recent experiment occurred at the 2008 Olympic Games in Beijing, China, where the government deployed 32,000 people working with light aircraft, rockets and shells to spread silver iodide crystals or dry ice in clouds 50 km upwind of Beijing. The goal was to prevent rain from interrupting the August 8 opening ceremonies, because historical records indicated a 41% chance of precipitation on that date. China spent a lot of money in this effort, setting up 26 control stations reporting every 10 minutes on the status of local weather after each seeding event. According to the Beijing Municipal Meteorological Bureau, 1,104 rain dispersal rockets were fired from 21 sites in the city between 4 p.m. and 11:39 p.m. on the day of the opening ceremonies, intercepting a band of rain clouds before it could reach the stadium. Heavy rains were recorded around Beijing, but not during the opening ceremonies.

In 1990, John Firor wrote “The Changing Atmosphere,” a book that described what we were doing to the atmosphere through our own neglect. He described acid rain, ozone depletion, increases in greenhouse gases, and other atmospheric pollutants and what they were doing to degrade the atmosphere. Firor foresaw the need for a coordinated strategy among all nations to tackle this problem. In his book he stated that it was almost impossible to halt the continued pollution of our atmosphere but suggested steps to slow the process.

For Firor one of the most important steps in changing the atmospheric equation was stabilizing human population growth. We, however, continue to propagate the species at an alarming rate. In 2011 the world’s human population will surpass 7 billion. By mid-century it is projected that we will surpass 9.2 billion. If we as a species can slow the current birth rate to zero growth, then the population should stabilize at around that number through the balance of the century. If we don’t, at present growth rates human population could exceed 14 billion according to a recent U.N. study. This sustained surge will be due in part to longer lifespans, with life expectancy reaching 97 by 2100 and 106 by 2300. Human population growth will change our atmosphere, but that is not the re-engineering we want to describe here. We’ll deal with human population and the planet’s carrying capacity in a future blog. Short of all members of the human species stopping breathing, there are many ways we can begin to re-engineer the atmosphere.

So let’s begin by describing the current atmospheric challenges we face and the technologies that we need to deploy to reverse the effects of fossil fuel addiction and industrial resource consumption.

What gases and pollutants are we talking about?

Carbon dioxide (CO2) is the primary gas that climatologists point to when talking about atmospheric temperature changes. Carbon dioxide concentrations today are higher than at any time in the last half-million years. Since the start of the Industrial Revolution carbon dioxide has grown from 280 parts per million (ppm) to 382 ppm in 2006, a rise of 36 percent. Since 2006 CO2 has continued to rise at a rate of about 1.9 ppm per year.
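
As a sanity check on these numbers, a few lines of arithmetic reproduce the 36 percent figure and extrapolate the post-2006 trend. The extrapolation is deliberately naive, assuming a constant 1.9 ppm per year, when in reality the growth rate itself has been rising.

```python
pre_industrial = 280.0  # ppm, pre-Industrial Revolution baseline
ppm_2006 = 382.0        # ppm measured in 2006
rate = 1.9              # ppm per year since 2006, assumed constant

# Percentage rise from the pre-industrial baseline to 2006
rise_pct = (ppm_2006 - pre_industrial) / pre_industrial * 100
print(f"rise by 2006: {rise_pct:.0f}%")

def projected_ppm(year):
    """Naive linear extrapolation from the 2006 baseline."""
    return ppm_2006 + rate * (year - 2006)

print(f"projected 2020 level: {projected_ppm(2020):.0f} ppm")
```

The first print confirms the roughly 36 percent rise quoted above; the projection gives a rough lower bound on future concentrations, since any acceleration in emissions pushes the real number higher.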

Methane (CH4) is the second greenhouse gas that has seen a sharp increase of 148% from pre-industrial levels to today.

Nitrous oxide (N2O) showed little variance over the 11,500 years before the Industrial Revolution, but in the last few decades of the 20th century it increased by 18%.

Aerosols can affect cloud formation as well as the amount of solar radiation that strikes the earth’s surface. Aerosols can also impact atmospheric temperature. Typical aerosols come from the burning of fossil fuels. Coal-fired power plants produce sulfates that reflect solar radiation and cool the atmosphere; as coal-fired plants are reduced, sulfates in the atmosphere have started to decrease. Another aerosol is soot, again a byproduct of fossil fuel burning as well as of the burning of forests for land clearance. Soot, also known as black carbon, tends to be a local atmospheric phenomenon affecting atmospheric temperature and cloud formation. Open-pit mining, salt pans and mineral precipitate operations can also contribute organic carbon aerosols to the atmosphere. These particulates affect air quality quite dramatically and can contribute to global atmospheric changes, including cooling and increased cloud formation.

Airborne particulates and gases are not the only contributors to atmospheric alterations. Land use plays a significant part in changing the reflective capability of our planet and, as a result, the amount of radiation-generated heat that gets trapped in the atmosphere. The fraction of solar radiation reflected by a surface or object, often expressed as a percentage, is called albedo. Snow has a high albedo. Forests have a low albedo. The oceans have a low albedo. The growth of cities, deforestation and desertification are playing an increasing role in changing our atmosphere.
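
The role of albedo can be made concrete with the textbook zero-dimensional energy-balance model, in which a planet’s equilibrium temperature is T = (S(1 - a) / 4σ)^(1/4). This sketch deliberately ignores the greenhouse effect, which is why the result comes out well below Earth’s actual mean surface temperature.

```python
S = 1361.0       # solar constant at Earth's distance, W/m^2
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temp(albedo):
    """Equilibrium temperature of an airless gray planet, in kelvin."""
    return (S * (1 - albedo) / (4 * sigma)) ** 0.25

# Present-day Earth (~0.30) vs. a hypothetically brighter Earth (0.35)
for a in (0.30, 0.35):
    print(f"albedo {a:.2f}: {equilibrium_temp(a) - 273.15:.1f} C")
```

Even a five-point rise in albedo drops the equilibrium temperature by several degrees, which is why land-use changes that darken or brighten the surface matter to the climate equation.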

How can we rework the atmosphere to stabilize it and reverse the impact that greenhouse gases, aerosols, and land-use changes have wrought?

1. First and foremost is ending our dependence on fossil fuels and immediately reducing the burning of these fuels. One third of our fossil fuel consumption comes from burning oil, natural gas and coal to generate electricity. Every power plant can disperse millions of tons of carbon dioxide into the atmosphere annually. We must also stop the burning of forests to clear them for agricultural use.

2. Decrease carbon emissions from other industrial processes. In making cement and refining metals, chemicals and fossil fuels we generate many more millions of tons of carbon dioxide. Capturing that carbon and reusing it as an industrial resource can even play a profitable part in the manufacturing process.

3. An increasing number of carbon capture and storage technologies are being demonstrated today. The most common is deployed at power plants and manufacturing facilities, where the flue gas stream is captured; this is a post-combustion process. Two other processes capture carbon dioxide at the pre-combustion and combustion phases within power plants. These currently deployed technologies either capture only a fraction of the carbon dioxide stream emanating from plants or cannot be deployed because the retrofits would be prohibitively expensive. This makes reducing carbon dioxide emissions in an economically sustainable way a significant industry problem that governments, through subsidies and tax incentives, may be able to address.

4. Carbon sequestration poses a variety of challenges. Nature’s way of capturing carbon is through photosynthesis. Plants are natural carbon sinks: they take in carbon dioxide and expel oxygen. If only sequestration could be so kind. Through sequestration we capture carbon dioxide and store it either in the ground or in water.

Today underground sequestration is used by oil companies as a means of getting additional oil from depleted fields. Pumping carbon dioxide under pressure into oil reservoirs is good business. Because oil reservoirs are porous rock formations overlain by harder caprock, we can find similar geological characteristics in sandstone, shale and other sedimentary rock, as well as in volcanic rock such as basalt, and use these porous formations for sequestration. An interesting consequence of injecting carbon dioxide into basalt formations is the alteration of the rock, with the CO2 mineralizing into carbonates such as limestone.

Sequestering carbon dioxide in water has potential ecological implications. The deep ocean has been considered the ideal place for carbon storage. Liquefied carbon dioxide injected into the deep ocean (below 3,500 meters) should, in theory, stay permanently trapped: carbon dioxide in liquid form, subjected to deep-ocean pressures, turns into clathrate hydrate, an icy substance that in theory cannot be absorbed by ocean water. Experiments in deep-ocean carbon sequestration to date have shown mixed success. Sometimes the carbon dioxide stabilizes and sometimes it breaks up in the salt water, with potential implications for ocean life. More recently it has been suggested that liquid carbon dioxide could be stored in large polymer containers placed on the bottom of the ocean. The target area is in the depths of the Pacific, an area called the abyssal plain. The liquid carbon dioxide would be pumped through pipelines to polymer bags holding up to 160 million metric tons, equivalent to two days of global human output.
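
The bag figure above implies a striking scale, which a quick sanity check makes explicit: if one bag holds 160 million tonnes and that equals roughly two days of global CO2 output, then annual emissions are on the order of 29 billion tonnes, and storing a full year would take nearly two hundred bags.

```python
bag_capacity_mt = 160.0  # million metric tons of CO2 per bag, as quoted above
days_per_bag = 2.0       # one bag ~ two days of global human CO2 output

# Implied global annual output, in gigatonnes (billions of tonnes)
implied_annual_gt = bag_capacity_mt / days_per_bag * 365 / 1000

# Number of such bags needed to hold one year of emissions
bags_per_year = 365 / days_per_bag

print(f"implied global output: ~{implied_annual_gt:.1f} Gt CO2/yr "
      f"({bags_per_year:.1f} bags per year)")
```

Arithmetic like this is a useful reality check on any sequestration proposal: the storage problem is not one container but a continuous, industrial-scale stream of them.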

Another solution for sequestration has a potential happy ending. Since fossil fuels are a byproduct of carbon sequestration over geological time, we may be able to synthesize fossil fuel production by injecting carbon dioxide deep into the earth’s crust and thereby reproduce the very forces that created fossil fuels naturally. Using gravity we could artificially create a carbon cycle that would give us a continuous supply of fossil fuels for as long as we needed them. Should we be able to develop such technical skill then all the sequestered carbon that we pump into the oceans and underground may prove to be a valuable resource.

Energy in the 21st Century: Part 3 – Synthetic Fuels from Bitumen

Easily accessible oil and gas finds, usually referred to in the industry as conventional hydrocarbon reserves, are a thing of the past. Instead resource developers are increasingly turning to unconventional alternatives such as deep ocean exploration and extraction, the mining of bitumen deposits and oil shales, and the creation of oil products from biomass and garbage.

In 2013 Canadians will celebrate the one hundredth anniversary of the first commercial extraction of oil from what has been labeled the Athabasca Tar Sands, the oil sands of Northern Alberta and Saskatchewan. That first attempt to exploit these saturated sand deposits used hot water to separate the bitumen. To date this remains the way bitumen is extracted from oil sands.

What is bitumen? If you think of asphalt then you have a sense of what this material is like. When extracted from the sand to which it is attached it has the consistency of cold molasses and is black in colour. Bitumen is not transportable in its extracted state, nor can conventional refineries use it to create oil and oil byproducts. It needs to undergo significant change to give it the viscosity to travel through a pipeline. Upgraded bitumen from Western Canada consists of naphtha and heavy and light gas oils.

The cost of upgrading, the environment in which oil sands are located, and the remoteness of the deposits made bitumen commercially unfeasible for much of the 20th century. But as oil prices rose and geopolitical considerations began to impact more easily extracted oil resources, oil sands operations became much more attractive to investors. In 1967 the Great Canadian Oil Sands project finally came on stream, extracting bitumen from the sands using hot water processes similar to those employed back in 1913. Since 1967 many other companies have started bitumen extraction operations in Western Canada. Today there are an estimated 1.7 trillion barrels of bitumen locked up in the oil sands, with 175 to 315 billion barrels of recoverable hydrocarbon product using present and developing technologies. This makes these deposits the single largest hydrocarbon source on the planet.

Is bitumen only found in Northern Alberta and Saskatchewan? No, in fact bitumen oil sand deposits can be found in countries all around the world including the United States, Venezuela, Russia, Cuba, Indonesia, Brazil, Trinidad & Tobago, Jordan, Madagascar, Colombia, Albania, Romania, Spain, Portugal, Nigeria and Argentina.

Currently the oil sands of Western Canada produce almost half a billion barrels of oil annually. At current extraction rates the sands could remain commercially active for the next three centuries. Even if annual production were to double or triple, the oil sands would remain viable as a petroleum resource for the remainder of the 21st century. Having said that, there is a fly in the ointment, and it has to do with the planetary impact of current oil sands production technology.

The biggest knock against bitumen is its extraction and environmental impact. When the first commercial operations in Western Canada started, the extraction process used large volumes of water to flush the oil from the sands in which it was embedded, between 2 and 4.5 cubic metres of water for every cubic metre of oil. In Northern Alberta and Saskatchewan the major water source for these oil sands projects was one river, the Athabasca, a tributary of the Mackenzie River, which flows into the Arctic Ocean. What happened to the water during the process raised further alarms. With only 10% being returned to the river and the rest held in storage ponds containing significant pollutants, it was only a matter of time before drawing water from the Athabasca became a public relations nightmare.

A study released at a UN climate conference in Kenya in 2006 stated that oil sands extraction projects were threatening both the quality and quantity of water in the Mackenzie River system, of which the Athabasca is a part. The study, by the Sage Centre and World Wildlife Fund-Canada, pointed out that water drawn for the oil sands had contributed to a 20% drop in water flow at Fort McMurray, according to data records from 1958 to 2003. It further stated that oil sands projects were drawing 359 million cubic metres of water from the Athabasca annually, twice the water used by Calgary, Alberta's largest urban centre with over 1 million people. The study forecast a 50% increase in Athabasca water extraction as new projects came online. This growth in water usage was deemed unsustainable, with significant impacts on agriculture, cities and the environment of the river system. The recommendation was to halt any new oil sands projects until the water problem was solved.
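The scale of those withdrawals can be put in perspective with a little arithmetic on the study's own figures (359 million cubic metres drawn annually, said to be twice Calgary's usage, with a forecast 50% increase):

```python
# Scaling the 2006 study's Athabasca withdrawal figures.
current_draw = 359e6            # cubic metres per year drawn for oil sands
calgary_use = current_draw / 2  # the study says this is twice Calgary's usage
forecast_draw = current_draw * 1.5  # forecast 50% increase from new projects

print(f"Calgary's implied annual use: {calgary_use / 1e6:.1f} million cubic metres")
print(f"Forecast annual draw: {forecast_draw / 1e6:.1f} million cubic metres")
```

That forecast puts the draw at well over half a billion cubic metres per year, which is why the study's authors considered the trend unsustainable.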

The biggest use of water in oil sands production today is to generate steam for underground injection. Some oil sands projects now draw on underground aquifers rather than the Athabasca, and in many cases the aquifer water is not potable, having a high saline content. One project, Imperial Oil's Cold Lake facility, has been able to decrease fresh water use from 3.5 barrels for every barrel of bitumen in 1985 to half a barrel per barrel of bitumen produced today.
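The Cold Lake improvement is worth quantifying. A small sketch using the two figures above; the 100,000 barrel-per-day operation is a hypothetical scale chosen purely for illustration:

```python
# Fresh-water intensity at Cold Lake, per the figures in the text.
water_1985 = 3.5   # barrels of fresh water per barrel of bitumen, 1985
water_today = 0.5  # barrels of fresh water per barrel of bitumen, today

reduction = (water_1985 - water_today) / water_1985
print(f"Fresh water use cut by about {reduction:.0%} per barrel of bitumen")

# For a hypothetical 100,000 barrel-per-day operation, the daily saving:
daily_saving = (water_1985 - water_today) * 100_000
print(f"Roughly {daily_saving:,.0f} barrels of water saved per day")
```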

The oil sands projects are also generators of greenhouse gases. The energy to process bitumen comes from burning natural gas, which contains methane, carbon dioxide, nitrogen and hydrogen sulfide. Every barrel of synthetic crude requires an amount of natural gas equivalent to heating one and a half homes for a day. Natural gas is abundant and burns fairly cleanly, but the carbon dioxide generated still contributes to rising greenhouse gas levels.

The equipment needed to dig up and transport the oil sands contributes further greenhouse gases. Broken down, 13% of the emissions come from the extraction process, 30% from the upgrading process, and the balance from the burning of natural gas. The total is a staggering 29.5 megatons, that is, 29.5 million tons of greenhouse gases, and as more projects come online that number may grow even higher.
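The stated shares can be apportioned across the 29.5 megaton total. A sketch assuming the percentages above, with natural gas taking the unstated balance (57%):

```python
# Apportioning the 29.5 megaton total across the stated emission sources.
total_mt = 29.5  # megatons of greenhouse gases

shares = {"extraction": 0.13, "upgrading": 0.30}
shares["natural gas"] = 1.0 - sum(shares.values())  # the balance, 57%

for source, share in shares.items():
    print(f"{source}: {share:.0%} of total, about {total_mt * share:.1f} Mt")
```

By this breakdown, burning natural gas accounts for nearly 17 of the 29.5 megatons, which is why the industry's energy source draws so much attention.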

The industry is looking at solutions such as carbon sequestration, that is, pumping carbon dioxide underground into stable rock formations where it can be permanently captured. The challenges of sequestration go beyond cost. Carbon sequestration science is relatively new, and if forecasters are right we will need reservoirs capable of containing a trillion tons of carbon dioxide by the end of the 21st century. Sedimentary rock formations saturated with salt water are currently considered the best bet: formations more than 800 metres deep, so that they do not affect potable water and so that the pressure at depth can keep the carbon dioxide in a high-density state. But no one knows whether storage of this type is sustainable for hundreds, if not thousands, of years. How can you account for seismic activity that could fracture the rocks, creating seams through which the carbon dioxide can escape?

Scientists and engineers are also looking at sequestering carbon beneath the ocean floor, theorizing that the pressure from the ocean water and sediments would keep the carbon dioxide permanently locked up and incapable of entering the sea water.

The 21st century may also produce new methods for sequestration. Some scientists cite historic atmospheric concentrations of carbon dioxide as evidence that the Earth can naturally deal with the excess. They theorize that when atmospheric concentrations of carbon dioxide became high, the gas was absorbed by ocean water and combined with calcium ions to form limestone.

So, as long as our modern economies and those of the developing world continue to rely on oil, the oil sands will remain both a significant source of production and an environmental challenge.