Global land ecosystems are becoming less efficient at absorbing carbon dioxide

An update from NASA on a global warming trend.

Land ecosystems currently play a key role in mitigating climate change. The more carbon dioxide (CO2) plants and trees absorb during photosynthesis, the process they use to make food, the less CO2 remains trapped in the atmosphere, where it can cause temperatures to rise. But scientists have identified an unsettling trend – as levels of CO2 in the atmosphere increase, 86% of land ecosystems globally are becoming progressively less efficient at absorbing it.

Because CO2 is a main "ingredient" that plants need to grow, elevated concentrations of it cause an increase in photosynthesis, and consequently, plant growth – a phenomenon aptly referred to as the CO2 fertilization effect, or CFE. CFE is considered a key factor in the response of vegetation to rising atmospheric CO2 as well as an important mechanism for removing this potent greenhouse gas from our atmosphere – but that may be changing.

For a new study published Dec. 10 in Science, researchers analyzed multiple field, satellite-derived and model-based datasets to better understand what effect increasing levels of CO2 may be having on CFE. Their findings have important implications for the role plants can be expected to play in offsetting climate change in the years to come.

“In this study, by analyzing the best available long-term data from remote sensing and state-of-the-art land-surface models, we have found that since 1982, the global average CFE has decreased steadily from 21% to 12% per 100 ppm of CO2 in the atmosphere,” said Ben Poulter, study co-author and scientist at NASA’s Goddard Space Flight Center. “In other words, terrestrial ecosystems are becoming less reliable as a temporary climate change mitigator.”
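Poulter's figures imply a roughly steady decline between the two quoted values. A minimal sketch (illustrative only: it interpolates between the two endpoint numbers quoted above, and is not the study's data or method):

```python
# Illustrative linear interpolation of the CO2 fertilization effect (CFE),
# using only the two endpoint values quoted in the article:
# 21% per 100 ppm in 1982, declining to 12% per 100 ppm by publication (2020).
# These endpoints are from the quote; the yearly values are NOT the study's data.

def cfe_linear(year, y0=1982, cfe0=21.0, y1=2020, cfe1=12.0):
    """Linearly interpolate CFE (% per 100 ppm CO2) between the quoted endpoints."""
    slope = (cfe1 - cfe0) / (y1 - y0)  # change in CFE per year
    return cfe0 + slope * (year - y0)

print(round(cfe_linear(1982), 2))  # 21.0
print(round(cfe_linear(2020), 2))  # 12.0
print(round(cfe_linear(2001), 2))  # 16.5 at the midpoint
```

The slope works out to roughly a quarter of a percentage point of fertilization efficiency lost per year, which is what makes the trend alarming over multi-decade carbon-budget horizons.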

What’s Causing It?

Without this feedback between photosynthesis and elevated atmospheric CO2, Poulter said, we would have seen climate change occurring at a much more rapid rate. But scientists have been concerned about how long the CO2 fertilization effect could be sustained before other limitations on plant growth kick in. For instance, while an abundance of CO2 won't limit growth, a lack of water, nutrients, or sunlight – the other necessary components of photosynthesis – will. To determine why the CFE has been decreasing, the study team took the availability of these other elements into account.

“According to our data, what appears to be happening is that there’s both a moisture limitation as well as a nutrient limitation coming into play,” Poulter said. “In the tropics, there’s often just not enough nitrogen or phosphorus to sustain photosynthesis, and in the high-latitude temperate and boreal regions, soil moisture is now more limiting than air temperature because of recent warming.”

In effect, climate change is weakening plants’ ability to mitigate further climate change over large areas of the planet.

Next Steps:

The international science team found that when remote-sensing observations were taken into account – including vegetation index data from NASA's Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments – the decline in CFE is more substantial than current land-surface models have shown. Poulter says this is because modelers have struggled to account for nutrient feedbacks and soil moisture limitations – due, in part, to a lack of global observations of them.

“By combining decades of remote sensing data like we have done here, we’re able to see these limitations on plant growth. As such, the study shows a clear way forward for model development, especially with new remote sensing observations of vegetation traits expected in coming years,” he said. “These observations will help advance models to incorporate ecosystem processes, climate and CO2 feedbacks more realistically.”

The results of the study also highlight the importance of the role of ecosystems in the global carbon cycle. According to Poulter, going forward, the decreasing carbon-uptake efficiency of land ecosystems means that more of the CO2 emitted by fossil fuel burning and deforestation may remain in the atmosphere, shrinking the remaining carbon budget.

“What this means is that to avoid 1.5 or 2 °C warming and the associated climate impacts, we need to adjust the remaining carbon budget to account for the weakening of the plant CO2 Fertilization Effect,” he said. “And because of this weakening, land ecosystems will not be as reliable for climate mitigation in the coming decades.”

*Courtesy of NASA/Photo credit: NASA


American democracy is being tested; political stability comes first, and sustainability in all else depends on it.

Steven Cohen.

It may seem paradoxical, but political stability is a prerequisite for the change we need to transition our economy to one that is environmentally sustainable. The beating heart of America’s economic wealth and power is the politically stable system of law that we Americans take for granted. Investors around the world know that a dollar loaned to the United States or invested in American corporations will not disappear or be stolen by a corrupt, lawless regime. Last week, we saw both the fragility and the resilience of our political system. Its stability was attacked by an aspiring autocrat and his deluded followers as they ransacked the U.S. Capitol building. Its resilience was demonstrated as determined legislators worked all night to complete the certification of the duly elected President of the United States.

It was shocking, but sadly, not surprising. And it is far from over. Inauguration Day and the days leading up to January 20 will see more political violence. Hopefully this time our police and military will be better prepared to resist it.

President-elect Biden and his team have an enormous task ahead of them. They must vaccinate the nation, restore the economy, promote equity, address racism, combat climate change and reinforce and solidify our democracy. That work requires a stable, functioning political process. In my view, there is a mistaken belief that autocracies are stable, and democracies are not. The American experience has been the opposite: a democracy governed by a system of law and built on the consent of the governed provides the highest probability of political stability. The consent of the governed, not the “muscle” of the autocrat, is the source of genuine political stability. But as we have learned over the past four years, our political system is more fragile than we thought. Trump’s attack on the electoral system before and after the election was relentless. Fortunately, it was met with determined, bipartisan resistance. Democracy does not run on autopilot. It requires people to place principle over power. We saw that with the Republicans in charge of Georgia’s elections. We saw it in Congress before the assault on the Capitol building, and it only grew stronger after Trump’s mob was expelled from the building. A belief in the Constitution and the rule of law dominated the discussion of our elected leaders.

But it was far from unanimous. Fear of the Trump base led elected officials all over the nation to parrot the president’s disinformation about the presidential election. Millions of misled voters all over the nation fell victim to the campaign of lies and fantasy perpetuated by the insecure and vain man who simply could not accept his electoral loss. We are fortunate that Trump is an incompetent aspiring autocrat. A more skilled operator might have had greater success in attacking our political institutions.

Since the initial attack was quickly repelled, the resilience of our institutions should provide some assurance that political stability will be maintained. Disinformation-fueled extremism will continue, but it will no longer be led by the most powerful elected official in the world. The images of the Capitol building desecrated by a mob should serve to delegitimize this form of political extremism, as will the calm, principled and moderating voice of President-elect Joe Biden.

And we need calm voices and political stability to take on the climate crisis and the challenge of creating an economy that provides economic opportunity without destroying the planet. The type of economic transformation we need will require massive long-term investment of capital. Government needs to invest in green infrastructure to decarbonize our economy and private capital must be attracted to investments in renewable energy, electric vehicles, and the production systems and supply chains of the circular economy. Long term investments require government incentives, and we require a stable government to assure that these long-term investments will eventually pay off.

Political stability is not simply a set of laws but is based on belief in the sanctity of those laws. It is a social construct: a dominant social paradigm about how the political world works. A great challenge to that set of beliefs is the ability of social and mass media to create a universe of alternative facts. The recent election is a visible case in point. Scores of challenges to the election were filed in court and dismissed over and over by judges all over America. But the mob attacking the Capitol continued to repeat the same fiction in scores of interviews and on social media. Clearly, some of the intensity stemmed from the president repeating these falsehoods relentlessly. Removing the presidency from the equation and separating Trump from his 80 million-plus Twitter followers should help, but political stability and the capacity for constructive political and economic change require a shared consensus about reality.

Attacks on the electoral process, the seriousness of COVID-19 and the science of climate change have been part of the political landscape of the Trump administration for years. The result has been a massive pandemic impact, a steadily warming planet, and a Congress hunkered down in the basement while mobs ran amok above. These impacts are closely connected and a direct result of our incompetent aspiring autocrat-president seeking to retain his hold on political power.

We Americans are fortunate that most of us have never lived under conditions of political instability. While racism and xenophobia make America less free than it should be, and too many fear they will be attacked for their appearance or accent, there remains a calm predictability in our daily lives that people in Syria, Afghanistan, Sudan, Iraq and other places in the world long for. That calm predictability is why wealthy people from all over the world purchase real estate in the United States and try to ensure that some of their wealth is invested here. Coupled with our vast military power, America possesses the wealth and stability that is needed to invest in the renewable resource-based economy. I know that sounds like a logical contradiction since the lust for economic power is what created the crisis of environmental sustainability. But we need organizational competence, financial capital and political power for a peaceful transition to an environmentally sustainable economy. The transition will be a high wire act, maintaining a productive economy while eliminating its destructive environmental impacts. We need to repair the airplane and fly it at the same time. An economic crash would slow and possibly end the transition to sustainability. Our methods of production and consumption must be transformed rather than reduced. A stable political system inspiring economic confidence is a prerequisite to a successful transition to sustainability.

Our military might, global reach, and vast power have both costs and benefits, but this nation’s vast power makes the goals of the Green New Deal feasible. The transition we need requires America’s leadership and without that leadership, it is hard to see how the climate crisis and the interconnected crisis of environmental sustainability can ever be addressed. We have spent the past four years relying on corporations, non-governmental organizations, cities, states and civil society to lead the renewable resource transition in America. Although we’ve made progress, it’s clear that the job requires federal leadership and that leadership requires political stability and a shared factual understanding of how the world works. While last week was wrenching, the electoral results in Georgia, the courage of many Republican elected officials, and the silencing of Trump’s Twitter account give us reason to hope that better days lie ahead.

*Courtesy of Earth Institute, Columbia University.

Photo credit: AFT


Is the African continent more vulnerable to climate change than other regions?

 

Photo credit:AFP

Richard Washington*

The African continent will be hardest hit by climate change.

There are four key reasons for this:

  • First, African society is very closely coupled with the climate system; hundreds of millions of people depend on rainfall to grow their food
  • Second, the African climate system is controlled by an extremely complex mix of large-scale weather systems, many from distant parts of the planet and, in comparison with almost all other inhabited regions, is vastly understudied. It is therefore capable of all sorts of surprises
  • Third, the degree of expected climate change is large. The two most extensive land-based end-of-century projected decreases in rainfall anywhere on the planet occur over Africa; one over North Africa and the other over southern Africa
  • Finally, the capacity for adaptation to climate change is low; poverty equates to reduced choice at the individual level while governance generally fails to prioritise and act on climate change

Is Africa sleepwalking into a potential catastrophe?

The African climate system is replete with complexity and marvels. The Sahara is the world's largest hot desert, with the deepest layer of intense heating anywhere on Earth.

In June and July the most extensive and most intense dust storms found anywhere on the planet fill the air with fine particles that interfere with climate in ways we don't quite understand.

The region is almost completely devoid of weather measurements yet it is a key driver of the West African monsoon system, which brings three months of rain that interrupts the nine-month long dry season across the Sahel region, south of the desert.

In the decades following the 1960s, rainfall across the Sahel declined by some 30%, with the drought peaking in 1984; it led to famine, the deaths of hundreds of thousands of people and the displacement of many millions.

No other region has documented such a long and spatially extensive drought.

Evidence points to Western industrial aerosol pollution, which cooled parts of the global ocean, thereby altering the monsoon system, as a cause.

The currently observed recovery of the rains is projected to continue through the 21st Century, particularly over the central and eastern Sahel.

Africa's capacity to adapt to climate change is low - this year saw landslides in Kenya

But that change seems to depend on exactly where future heating in the central Sahara peaks, emphasising cruelly the region we least understand.

In southern Africa we are seeing a delay in the onset and a drying of early summer rains, which is predicted to worsen in forthcoming decades.

Temperatures there are predicted to rise by five degrees or more, particularly in the parts of Namibia, Botswana and Zambia that are already intolerably hot.

 

The East African paradox

Meanwhile over Kenya and Tanzania, the long rains from March to May start later and end sooner - leading to an overall decrease in rainfall.

This observed change sits uncomfortably next to predictions of a wetter future in the same season - a problem scientists have termed the East African Climate Paradox.

Central Africa, one of three regions on the planet where thunderstorms drive the rest of the planet's tropical and sub-tropical weather systems, lives perilously close to the rainfall minimum needed to support the world's second largest rainforest system.

Even a little less rainfall in the future could endanger the forest and its massive carbon store.

We know remarkably little about that climate system - it is scarcely even monitored - there are more reporting rain gauges in the UK county of Oxfordshire than in the entire Congo Basin.

Africa's complex climate system is, unusually, influenced by the three main global ocean basins.

Emerging from one of those rapidly warming oceans, tropical cyclones Idai and Kenneth in March and April 2019 destroyed parts of Mozambique, Zimbabwe and Malawi, with Kenneth following a particularly unusual path over Tanzania.

Scientific breakthrough

But on the scientific front there is hope. In collaborative efforts we are working intensely hard to improve climate prediction.

More than 1,000 people died after Cyclone Idai hit Mozambique and Zimbabwe

Projections of climate change depend on climate models of which there are dozens, each as complicated to understand as the real world.

Through efforts such as the ongoing Future Climate for Africa (FCFA), a programme funded by the UK's Department for International Development and Natural Environment Research Council, the experience and insights of African climate scientists have led to a discernible jump in our ability to understand and model African climate.

We have new insights brought through that scientific ingenuity.

Each region and sub-region of Africa is changing differently but an emerging commonality is a shift towards more intense rainfall - even where there is observed and projected future drying.

The rainfall arrives in shorter bursts, causing more runoff and longer dry-spells in between.

New models, developed as part of FCFA, are now run at extremely high resolution with grid spacing of around 4km (2.5 miles) for the entire continent.

Understanding thunderstorms

The results point unambiguously to an increase in both rainfall intensity and the length of dry spells, and we have strong reason to believe them.

Central to that rainfall change is the behaviour of thunderstorms, which deliver around 70% of African rain.

Standard global climate models can only represent these key systems indirectly, but the new models are capable of representing thunderstorm systems adequately for the first time.

This is part of the approach we are adopting - to find out exactly how the models simulate the changing weather.

From an extremely modestly resourced lab in Cameroon, for example, Wilfried Pokam and his team of researchers are exposing the way that the central African climate system and southern Africa are linked, thereby breaking the mould of our stubbornly piecemeal, regional view of the continent's climate system.

African governments have generally failed to prioritise climate change

Such breakthroughs are improbable when you consider that these researchers download massive data sets through cheap Sim cards in their mobile phones and analyse the output overnight.

By day, they keep the first Lidar system in central Africa running. The Lidar measures winds in the lowest few kilometres of the atmosphere, helping to fill the vast data void in central Africa.

They are part of a set of young scientists joining the race to set adaptation to climate change in motion before Africa is overwhelmed. It is a matter of social justice that we succeed. Africa will be hardest hit by climate change, but has contributed the least to causing that change.

*Richard Washington is a professor of climate science at the School of Geography and the Environment at Oxford University in the UK. Courtesy of BBC.


Inhabitants of Africa need only look to Taiwan to learn how to flatten their Covid-19 infection rates

*Photo credit: AP News

Steven O. Kimbrough (Wharton, University of Pennsylvania)

Christine Chou, (National Dong Hwa University, Taiwan)

When the novel coronavirus and its disease, COVID-19, first spread in China, Taiwan was regarded as the next country most likely to be affected, due to its close geographic and economic ties with China. However, by mid-July 2020, after more than six months of rapidly growing COVID-19 cases around the world, Taiwan still counted substantially fewer cases than most countries. The worldwide news media have noted Taiwan’s initial success story, attributing it to Taiwan’s resilience, pervasive national health system, central command structure, rapid medical equipment build up, early prevention and transparent information sharing, as well as other factors. While these factors surely have played important roles in contributing to this initial success, it is too soon to tell whether that success will continue.

The purpose of our case study is to describe the work by a special group of people to assist in the pandemic response in Taiwan. That work has culminated, so its story can now be told. Our case study is based on 3,060 online community messages, 32 online shared interviews and information from several personal contacts. See “Not All Heroes Wear Capes: The Contributors Behind the Battle Against the Coronavirus Outbreak in Taiwan” for a fuller version of the study, including a detailed timeline.

The basic facts of the case are the following: Rationing of face masks began in Taiwan on January 28, shortly after the coronavirus appeared. This was partly in response to panic buying, but problems persisted, with long lines at all convenience stores that were originally designated to sell masks. There was also much agitation and anxiety among the public. At this point the idea of a name-based rationing system — tied to the national health system records — for buying face masks in the pharmacies was proposed. Under the system, beginning February 6, each citizen or foreigner with a valid alien resident certificate could purchase two masks within a seven-day period using their identification card.

Once news of the forthcoming arrangement was released on February 4, a novel collaboration among the public, private and civic sectors began to emerge spontaneously. More than 1,000 software developers joined in the task of providing apps and other tools to identify in real time where face masks were available, sparing the public wasted time and anxiety. By the beginning of March, 59 map systems, 21 line applications, three chat bots, 23 mask sales location search systems, 22 apps, five audio systems, two information sharing systems, and one online mask reservation system were launched. Several applications have attracted more than 2 million users. The tools have been very effective, easing public anxiety and preventing a black market from emerging. As Microsoft executives Jaron Lanier and E. Glen Weyl wrote, “These tools showed where masks were available, but they did more than that. Citizens were able to reallocate rations through intertemporal trades and donations to those who most needed them, which helped prevent the rise of a black market.” In the end, democracy and social capital in Taiwan were strengthened.

“In the end, democracy and social capital in Taiwan were strengthened.”
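At their core, the mask-availability tools answered one question: given a user's location, which nearby pharmacies still have stock? Here is a minimal sketch of that lookup. The field names (`name`, `lat`, `lon`, `adult_masks`) and the in-memory records are hypothetical, not the ministry's actual data schema or any g0v project's code:

```python
import math

# Hypothetical pharmacy inventory records; the real ministry feed's schema differed.
pharmacies = [
    {"name": "Pharmacy A", "lat": 25.04, "lon": 121.51, "adult_masks": 120},
    {"name": "Pharmacy B", "lat": 25.05, "lon": 121.52, "adult_masks": 0},
    {"name": "Pharmacy C", "lat": 25.03, "lon": 121.56, "adult_masks": 45},
]

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_with_stock(lat, lon, records):
    """Return pharmacies that still hold masks, sorted nearest first."""
    in_stock = [p for p in records if p["adult_masks"] > 0]
    return sorted(in_stock, key=lambda p: distance_km(lat, lon, p["lat"], p["lon"]))

for p in nearest_with_stock(25.04, 121.51, pharmacies):
    print(p["name"], p["adult_masks"])
```

The real systems layered maps, chat bots and reservation flows on top, but this filter-and-sort against a frequently refreshed government feed is the essential mechanism that spared people the wasted trips described above.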

The rationing system and the searching tools fully met expectations until late April, when the government was able to produce ample numbers of masks domestically. The government began to donate masks to various countries in need in early April and, by late April, was able to accumulate uncollected masks to donate still more. There are several important lessons to be learned from this case:

  1. An Existing Platform

The software community coalesced on the g0v.tw platform, which is “an online community that pushes for greater information transparency [and] that focuses on developing an information platform and effective related tools for citizens to participate in society.” This platform was first set up in 2012 when a group of engineers was not satisfied with the government’s stance towards data availability.

  2. Persistent Key Members

The channels used by the g0v community (#general and then #covid19) had persistent, attentive members. In the development of the mask-searching system, several leaders responded to the community multiple times every day. The top three members’ IDs were kiang, minexo79 and tnstiger, all of whom steadfastly replied to channel members’ messages, and continue to do so.

  3. Openness of Government Data

Thanks to the universal national health system, the Ministry of Health and Welfare had complete data available on pharmacies around the country. That data included pharmacies' store codes, locations, business hours, mask inventory, and ways of issuing numbers to distribute the masks. This data was made available to the g0v developers after they requested it.

  4. Emotion Sharing

The g0v community shared frustrations among its members as the project got underway and elation as successes were achieved. Emotion sharing was a key element in binding the community together and serving its higher purposes.

Stepping back, we should see the generous behavior of the g0v community in a larger context. It is an example of people spontaneously coming together in the spirit of community service during a public disaster. Rebecca Solnit documented multiple such examples in her 2009 book, A Paradise Built in Hell. Her chronicle begins with the San Francisco earthquake of 1906 and the acute firsthand observations of the philosopher William James, who noted the initial effective and peaceable self-organization of response by the diverse citizenry. Sadly, that period of comity and effectiveness soon ended because authorities imposed force where it was hardly needed and, in fact, detrimental. Solnit found this to be a recurring pattern up to the present day.

“In a time of immense challenge, each contributor became a hero in his or her own way.”

Happily, Taiwan so far has been an exception. In this case, the government responded with welcome and alacrity to the pro-social impulses of the g0v community. Audrey Tang, the government’s digital minister, has been a linchpin. With expert skills and knowledge in information technology, and the political skills that come with holding an important position in the government — a rare combination — she actively supported the projects and served as a crucial go-between for the multiple stakeholders present. An exceptional talent, she is also openly a transgender woman who, remarkably for Taiwan’s historical culture, has achieved the highest levels of access and influence in government and society. Her voice is eagerly sought and listened to. See “How digital innovation can fight pandemics and strengthen democracy” for Tang’s broader take on the situation.

The success of the name-based mask projects was enabled by an unusual combination of elements including: outstanding leadership and commitment (the digital minister, the leaders of the g0v collective); trust and residual goodwill (among the g0v community, and between the citizens generally and the government, which had recently obtained a very strong electoral mandate); deep preparation and steady, highly competent, informed leadership by the government that welcomed the g0v contributions; a well-educated, highly skilled group of techies with the freedom and capacity to contribute without being paid for their services; and a general, creative openness to diverse people and ways of thinking (including enlisting volunteers to visit pharmacies in person and collect additional data).

Above all, the success of this case relied on many volunteers willing to contribute large amounts of their time and effort. As writer Andrea Randall put it so well, “Heroes don’t always wear capes, badges or uniforms. Sometimes, they support those who do.” In a time of immense challenge, each contributor became a hero in his or her own way. Not only did they help solve the problem, but they also warmed everyone’s hearts with their effort and generosity. The gratitude they merit is perhaps even more for the lasting value and example they created for the future, and what this means for Taiwan’s social capital, than for their fine achievement of the day.

*The authors acknowledge the help and authorization from CC BY 4.0 by g0v Contributors and helpful comments from Finjon Kiang. 

*Courtesy of Wharton School.

 


Climate change is killing coffee production; East African coffee farmers know this too well

*Photo credit: Emily Garthwaite

Elizabeth Shapiro-Garza (Duke University)

Michael Hoffman (Cornell University)

Editorial commentary: What is discussed here has particular relevance to East African countries such as Kenya and Ethiopia where coffee production accounts for a significant share of gross domestic product.

A rich cup of coffee is one of life’s little pleasures, but it will become more difficult and expensive to obtain in the near future. Coffee is among the crops under threat from climate change. An extensive study published in January found that 60% of wild coffee species — or 75 of 124 plants — are at risk of extinction.

Global warming, deforestation, disease and pests are contributing to the decline, and scientists warn that without conservation, monitoring and seed preservation measures, one of the world’s most popular drinks could become a thing of the past. Beyond the environmental implications, coffee is a $70-billion-a-year industry that is supplied mostly by small-scale farms in parts of Africa and Latin America. Not only is the supply chain in danger, but so are the livelihoods of the estimated 25 million farmers who sustain themselves by growing coffee. In addition, countries that rely on coffee as a major sector of the economy could see a significant decrease in their gross domestic product numbers year after year. “Make no mistake,” former Starbucks CEO Howard Schultz told Time magazine last year, “climate change is going to play a bigger role in affecting the quality and integrity of coffee.”

The Knowledge@Wharton radio show on Sirius XM invited two experts to discuss what is happening in the coffee industry, how companies are responding and what consumers could be facing down the road. Elizabeth Shapiro-Garza is associate professor of the practice of environmental policy and management at Duke University’s Nicholas School of the Environment. Michael Hoffman is an entomology professor at Cornell University and executive director of the Cornell Institute for Climate Smart Solutions. The following are key points from their conversation.

Climate Change Affects Quality and Quantity

Hoffman and Shapiro-Garza don’t downplay the issues with coffee production or the ripple effects on the environment and economy. The problems are quite serious, they said, and action is needed now to ensure coffee is enjoyed by future generations.

“It is an incredible threat,” Shapiro-Garza said. “And I think that it’s really important that we’re starting to talk about some of the solutions.”

Although the industry is dominated by two bean varieties — high-quality arabica and low-quality robusta — wild species are needed to boost the quality of commercial plants. Those wild plants serve as a genetic library, enabling scientists to cross-breed them to create plants that are more drought or disease resistant, for example.

“There’s a fungal disease that just loves the new warmer conditions and higher humidity, and that’s a real serious pest,” Hoffman said. “There’s also something called the coffee borer, which is spreading worldwide, and that is also a very serious pest and one that’s really difficult to control. So, there’s a whole suite of challenges facing the small coffee producers worldwide.”

In Central America, a disease known as stem rust cut coffee production by 15% in 2012-2013, pushing up prices per pound by 33% in the United States, according to Time.

In other coffee growing regions, changes in rainfall can affect production. Too much rain can cause mold or interfere with harvesting; too little can result in substandard fruit.

“What’s really tough is that climate is changing in different ways across the landscape, and it’s really hard to predict how it will change,” Shapiro-Garza said. “These impacts are being felt every place where coffee is grown, but in very different ways and in ways in which it’s difficult to predict how the climate change will progress. So, it’s difficult to plan for how to adapt if you don’t know what your climate is going to be like 10 years from now.”

Small-scale Growers Are Hit Hardest

About 70% of the world’s coffee comes from smallholder farms of two hectares or less, Shapiro-Garza said. One reason those small farms are so prevalent is that arabica beans need high elevation to grow, which means farmers are planting in mountainous areas where large-scale production would be impossible.

“That means that as coffee markets go down, as production goes down, as we get further impacts on climate change, such as increases in pests and diseases and other hits to their production, those are the people who are incredibly susceptible to those kinds of economic hits,” she said. “It is affecting overall GDP of these countries, but it’s also affecting some of the most vulnerable people in those populations.”

Shapiro-Garza has conducted research on smallholder farms in Latin America and said the solution isn’t as simple as moving to a different plot.

“You don’t have the resources to buy new land,” she said. New coffee plants can take up to five years to bear fruit, “so if you think about having to move your crops someplace else, plant new bushes and wait five years to get any production, that’s a huge risk.”

Hoffman agreed, saying small-scale farmers have limited capacity. They can’t afford to invest in irrigation or make radical changes. Even if they could move “up-slope,” they don’t own that land, and doing so could lead to further deforestation.

“At that local level, on that farm level, the challenges are pretty severe,” he said.

Given those challenges, it would seem that large-scale farms would fare better. But the experts pointed out that high-quality beans need elevation, which can’t be found in expansive tracts.

Consumers Will Notice the Decline

While the research released last month painted a dire picture for the future of coffee, consumers aren’t yet feeling the widespread effects of supply chain problems. Java seems as abundant as ever, with endless varieties in stock on supermarket shelves and corner cafes popping up all the time.

But the experts said coffee lovers eventually will feel the impact. Prices will go up, quality will go down, and premium beans will be harder to find.

“The production worldwide is such right now that the consumer is not yet feeling that,” Shapiro-Garza said. “But as time goes on, it might mean that you go to your favorite coffee shop or the grocery store to buy a bag of specialty coffee, and the quality just won’t be the same, or you can’t get the same types of coffee that you’re used to. The other thing that will be hit over time is actual overall production, which could lead to price increases as well.”

Hoffman noted that coffee prices have gone up in the short term, but not enough to change consumer purchasing behavior. That will change long term, he said.

“And some of our choices may just disappear,” he said. “Some of the particular specialty coffees will just no longer be on the market.”

Coffee isn’t the only commodity affected by climate change. In his book, Our Changing Menu: What Climate Change Means to the Foods We Need and Love, Hoffman explains how human activity is threatening a number of food staples around the globe. Heat, carbon emissions, water quality and other environmental factors are lowering the quantity and nutritional quality of wheat, rice, corn, cacao and other crops.

“In one way or another, everything on the menu is changing,” he said.

What’s Being Done to Save Coffee

From small-scale farmers to big producers, those involved in the coffee supply chain are taking steps to save the vital crop. In areas that are heating up, some farmers are planting larger trees to shade the smaller coffee plants underneath them, Hoffman said. In Latin America, governments that are dependent on coffee are investing in research to make more resilient plants, Shapiro-Garza said.

Sellers are responding, too. Starbucks, for example, is working with farmers to help provide seeds, monitor production and develop different strategies. Starbucks said it’s sharing the information it collects about adaptive farming techniques with other coffee farmers around the globe.

“It may be hard for people to understand why we are sharing all this information,” Schultz told Time. “If we don’t, there’s going to be tremendous adverse pressure on the coffee industry.”

The experts applaud the efforts underway to keep the java flowing, but they remain concerned about the crop’s long-term success.

“There are a lot of different initiatives that are moving forward within the industry to support coffee farmers in changing their practices in adapting to climate change, to looking to other areas where they could produce coffee as another strategy,” Shapiro-Garza said. “But it’s a tough problem for a lot of reasons.”

 


African States Can No Longer Be Bothered With Covid-19, But A New Variant Has Arrived.

John O. Ifediora.

As the Covid-19 pandemic conveyed a sense of existential urgency in sub-Saharan African countries in March 2020, African governments, with the aid of international organizations and rich Western countries, surprised many with swift implementation of the public safety measures recommended by the World Health Organization (WHO). These measures included mandatory mask wearing, social distancing, the use of hand sanitizers, and, as a measure of last resort, border closures and restrictions on domestic travel. For a sustained period, these measures, while not fully observed by the public, limited the spread of Covid-19.

How successful these safety measures were in checking the spread of the pandemic is hard to quantify, primarily because the supply of test kits was inadequate and record keeping of positive test results was unreliable at best. But to the extent that morgues and hospital ICUs were not overflowing with the unburied remains of Covid-19 victims, a reasonable extrapolation may be made that the safety measures in place at the emergence of the pandemic were helpful. But that was then; at present, all safety measures are strictly pro forma as governments turn their attention to other pressing ‘needs,’ and the public resumes its search for economic sustenance and culturally prescribed social gatherings, alas minus the distancing.

The laxity shown by governments and the public in recent months has reversed the benefits of early strict enforcement of safety measures. Infection and death rates are now on the rise across the continent, and given the inadequate medical infrastructure that defines the lot of constituent countries, a nasty viral onslaught is projected to decimate populations and economies far more than anticipated. This is especially worrisome as Covid-19 mutates and unleashes replicas of itself that are far more infectious, and perhaps deadlier; one such replica, labeled B.1.1.7, has emerged in Britain and is under study by scientists to determine its characteristics and genetic makeup.

SARS-CoV-2, the virus that causes Covid-19, is behaving as expected of viruses, which is to mutate into more or less deadly versions of itself until an effective vaccine blunts its transmission and the health complications associated with it. The B.1.1.7 variant has now been isolated in Denmark, Australia, Iceland, Britain and the Netherlands. In Britain, its transmission rate is 50% higher than that of earlier strains, and it comes with a higher viral load detectable in the nose and throat, but whether it causes more complications and deaths is yet to be determined. In South Africa, another variant with a much faster transmission rate is now in play, and under study.

These developments do not bode well for African countries. Unless governments revert to their early state of vigilance and enforcement of safety measures put in place at the outset of the Covid-19 pandemic, things could get nastier very fast.

 

 


Religion As A Source of Social Neurosis In Times of Collective Stress

 

John O. Ifediora.

In all nations, the quality and relevance of countervailing social institutions matter. That this is the case is particularly of import since institutions are rules that govern individual and collective behavior in any society. In this regard, reference is made to primary and enabling rules and observances that inform and guide conduct, specifically religious, political, and economic institutions. In nations where these social institutions have evolved to the point where individual rights and freedom of choice are accorded universal cognizance with appropriate checks and protection, the polity is reasonably well-adjusted. Under this state of affairs, malfunctions in any of the constituent institutions are unlikely to have lasting effects, and minimal corrective measures are needed to restore normalcy; this sentiment enjoys durable currency in advanced democracies such as the United Kingdom, France, Japan, and the United States, where abnormalities are generally reflections of discontent, and may pose no serious danger to established norms unless left unattended. It is thus presumed that advanced democracies have built-in mechanisms that inexorably return them to long-run equilibrium in the event of temporary malfunctions in any of their institutions. Events within the last decade, however, have made this presumption less serviceable.

Religion-inspired violence is of ancient origin, and has found expression in many established faiths. In the normal run of things, malfunctions in religious and political institutions are always and everywhere responsible for the forms of societal neurosis that afflict a nation’s psyche in times of stress and uncertainty. That individuals, in extreme cases, are willing to kill the innocent in order to advance religious and political goals attests to the potency of deranged and malfunctioning institutions that guide and inform collective action. Suicide bombers readily come to mind; whether society acknowledges it or not, these suicide bombers, once well-functioning members of society, were mentally deranged. No well-adjusted person wants to die; only the neurotic chooses to die. And to a large extent, they are victims of distorted religious and political institutions that cut across nations at various stages of socio-political development. The ancient relationship between Christians and Jews provides an excellent context for this narrative, and will be used as a case in point in this discourse.

The Essence of Religion

Religion encompasses much; but chief among its defining features are rituals, symbols, practices, and a body of beliefs that afford interpretations of the meaning and purpose of human existence. Everything else associated with religion is only meaningful within the context of this defining belief system, for it provides the rationale by which rituals and symbols are reasonably apprehended. It is in this sense that theology is regarded as the foundation of faith; it is also in this regard that the search for the roots of violence inspired by religion must necessarily begin with the foundational theology and doctrines that inform any religion’s answer to the question of salvation.

Certain types of theologies define precise and constrained bounds within which individual practitioners of the faith are accepted as true believers, and are thus deemed religiously legitimate. Such a religious perspective is often accompanied by a strong belief in exclusive ownership of the ‘true’ meaning and purpose of human existence in relation to the Divine; but almost invariably this belief system implicates self-righteousness and exclusivity, both of which, under the right circumstances, are conducive to fanaticism. The intolerance of other faiths generated by such an exclusive claim to the ‘truth’ has been the source of unimaginable inhumanity visited upon individuals, groups, and communities throughout history.

Of particular relevance to the discourse of theologically induced intolerance is what Glock and Stark referred to as ‘religious particularism.’ By this they mean a doctrinal claim that redemption or salvation is available only to certain individuals who meet specific criteria. More specifically, “religious particularism is the belief that only one’s own religion is legitimate. To the particularistic mind there are no faiths, but one true faith” (Glock and Stark, 1966). The ardent believer thus sees himself as one of the select few that comprise the chosen, ‘the salt of the earth, the light of the world, a prince disguised in meekness who is destined to inherit this earth and the Kingdom of heaven’ (Hoffer, 1951).

But in modern societies that accommodate pluralistic views, particularism may be liberally or conservatively expressed. A liberal expression is more likely to accept all religious faiths as legitimate so long as they subscribe to the existence of one God; whereas a conservative strain of particularism may insist that religious legitimacy resides in only one faith, and delegitimize all other expressions of religiosity. It is in this regard that Coleridge writes, “He who begins by loving Christianity better than the truth, will proceed by loving his own sect or church better than Christianity, and end in loving himself better than all” (Mailer, 1963). Whether liberally or conservatively expressed, particularism delegitimizes all religions that lie outside the confines of what is deemed the proper faith. The practical implications of the breadth of particularism are substantive, for they implicate exclusivity, and the potential for conflict amongst people of faith.

So far, the impression of particularism is that one who holds such a view is very likely to regard his religious status as superior to others’, and to engage in invidious self-righteous judgment of the legitimacy of other faiths. But it is one thing to hold such a view and another altogether to act on it. It is when both are combined that particularism is especially potent and dangerous. The implication here is that some people of faith are perfectly capable of harboring particularistic views without acting on them; while this restraint is atypical, it offers a powerful means by which people of differing religious status can reach a common understanding and acceptance.

But of import to this discourse is the situation where particularism leads to hostility, specifically its manifestation in Christendom dating from early antiquity. In this regard, the troubling questions are: how did Christian particularism lead to antisemitism, and what factors made it possible? In the Christian tradition an overarching issue involves the matter of salvation, and what practitioners need to do to be redeemed, and ultimately saved. Thus in order to uncover the factors that led to Christian particularism, it is necessary to look at its criteria for salvation, and what existing social conditions would enable its implementation. Thus, to generate and sustain particularism, the controlling Christian theology must first sow the seeds of particularistic ideas that consist of a generality of doctrinal claims informed by a body of beliefs that are proclaimed to be universally true, and contain the only ‘truth’ that is exclusive of those held by other faiths. Existing social factors and conditions would then limit the extent to which such particularistic ideas are implemented.

Once universality of a body of beliefs is claimed, the extent to which Christian theology may engender particularism is determined by the degree of specificity of its theological tenets. Thus, the more elaborate and detailed the tenets are, the greater the specificity of the theology. But specificity alone is not a sufficient condition for Christian particularism; there must be, in addition, a clear conception of people who do not meet the requirements of Christian religiousness as articulated by the tenets of its theology. It is this last step that makes Christian particularism partially whole. But being particularistic does not automatically lead to hostility towards the Jewish faith or Islam. The history of paganism and cultism shows that both belief systems were sufficiently detailed in their claims and doctrinal values of their gods; this is especially true of African, Roman, and Greek gods but none claimed universality of their beliefs. And as such, they were all able to co-exist peacefully. Only when the Christian theology was imbued with the aura of universality did it become fully particularistic.

Once the Christian church had developed its particularistic sensibilities, there remained a question of how to implement it. The missing ingredient in this regard was power – the ability to impose Christian ideology on non-believers, in this case the Jews. It must, however, be noted here that the question of how a particularistic body wields power in society is a function of its numerical status. A majority with a particularistic idea has the potential to be both violent and vicious in the face of resistance from bodies with opposing beliefs; indeed a majority need not have a particularistic theology to be violently repressive. On the other hand, when a minority is particularistic, it risks hostile confrontation with the majority that may oppose such imposition on several grounds, one of which is a perceived threat to existing ways of life informed by socio-cultural institutions. A historical case in point is the attempt to impose Judaism on the classical world. Olson explains:

“The ancient Jews having spread colonies throughout the Mediterranean world, armed with their particularistic view of a true and only god, embarked upon a campaign of active proselytization although in a minority status. The antagonistic response of the classical society followed. Even Rome, with its permissive, and eclectic, and somewhat instrumental approach to religion, the Rome which boasted of raising temples to the gods of every conquered nation, found itself unable easily to accommodate a religion that claimed not merely to be true, but to be singularly true” (Olson, 1962).

Emperor Constantine’s conversion to Christianity gave the Church the requisite instrumentality for the enforcement of its particularistic theology; for it then rose in triumph over the remains of the Roman Empire, which, by all accounts, was one of the most militaristic and efficiently organized pre-modern societies in human history. The outcome was a vicious and brutal imposition of the Christian version of the means to salvation. The hierarchical organization of the Church facilitated the concentration of power in a Supreme Pontiff, which enabled both a monolithic expression of the Christian ideology and effective suppression of internal dissent (internal dissent appeared much later). The Church was now a combination of potent social factors: a particularistic theology, a majority status, centralized internal power, and derived external power. History shows that the early Church did not wield these powers with care or restraint.

With the historic Jews in a minority status, two dominant issues defined their relationship with Christians. One pertained to the mutual claim to the Old Testament by both Jews and Christians as their common heritage. But in order for Christians to claim this heritage, they would also have to claim descent from Judaism, the faith of Moses, and by extension recognize that ancient Jewry enjoyed a unique religious status that pre-dates Jesus, and had exclusive claim on the Old Testament. The second issue was the assertion that the Jews had fallen out of favor with God, thus leaving Christians as the sole inheritors of God’s grace and favor. This claim was based on interpretations of events in the Old Testament as the means through which the Jews forfeited their special relationship with God. On these disquieting issues, Hilberg elaborates:

“A crucial issue in the theological disputes between Jews and Christians during the first three centuries C.E. concerned legitimate succession from the Old Testament faith. Having emerged from its initial status as a Jewish sect, when Paul won Peter and his followers to the doctrine that gentiles could come into the faith without adopting Mosaic Law, Christianity, nevertheless, was irrevocably committed to the Old Testament as a prophetic basis for New Testament fulfillment. The proclamation of the divinity of Jesus was not to be taken as raising up a new god; rather Jesus was claimed to be the Son of the Old and eternal Yahweh, and Christianity the final resolution of an established religious tradition” (Hilberg, 1961).

The Jews’ rejection of the claim made by the Church would not settle the matter, nor was it the answer the mighty Christian majority expected from the Jews; the Church needed legitimacy, but more importantly, it needed the Jews’ approbation of its theological formulation to affirm its claim as the legitimate heir to God’s favor. Something had to be done, but the extension of a friendly hand of persuasion was not among the devised means; meanwhile more rejections of other aspects of Christian orthodoxy were to come, and the consequences would be unimaginable. The grounds for sustained repression of the Jews were now being laid.

Christianity’s Regime of Intolerance And Brutality in Early Antiquity

The Old Testament is essentially the history of the Jews, and without equivocation it proclaims the Jews the Chosen People of God. But the same Hebrew texts are indispensable to Christianity; thus acceptance of the primary thrust of the Old Testament would threaten the legitimacy of Christianity, a non-Jewish faith. The question to be resolved by the early Church was how to reconcile its non-Jewish status with the doctrine of the Chosen People. The theology developed by the Church as a solution to this unsettling issue was simple enough: Jesus was God’s revelation to the world, fulfilling the prophecies of the Old Testament and marking the beginning of a new set of rules for God’s relationship with humanity. The death of Jesus of Nazareth was atonement for sins committed in the past, and now only through the Son of God, Christ, and his acceptance as the Messiah would one qualify for the Kingdom of God. Given that the Jews had rejected the Messianic status of Jesus, they were hence unredeemed, and until such time as they saw fit to accept Christ as the Saviour, they would remain fallen from grace and out of God’s favor. Legitimate succession had now passed from Jews to Christians, and the Old Testament had been fulfilled in the New Testament, thus preserving its continuity. The Church thus staked its claim, and justified its existence.

But the Church was not done; the most powerful charge against the Jews was yet to be incorporated into its orthodoxy: the charge of deicide. For more than two millennia the crucial issue that strained Jewish-Christian relations was the presumed collective role played by Jews in the crucifixion of Jesus. Early Church writers strongly believed that the Jews were responsible for this act and should be held accountable. The ensuing vitriol and persecution were brutal and bloody, and culminated in the most unspeakable horrors visited upon Jews both in medieval times and in recent history. The charge of deicide undergirded antisemitic actions against Jews by early Christians, and continues to inspire modern-day antisemitism. The Council of Nicaea’s 325 C.E. creedal proclamation of Jesus as the “Very God of Very God” and “of one substance with the Father” provided strong justification for the early Christian belief that Jesus, being the Son of God, was an extension of Divinity. Killing him was akin to killing the Divine (Gager, 1983). The statement by Stephen in the Book of Acts did not help matters:

“You stiff-necked people, uncircumcised in heart and ears, you are forever opposing the Holy Spirit, just as your ancestors used to do. Which of the prophets did your ancestors not persecute? They killed those who foretold the coming of the Righteous One, and now you have become his betrayers and murderers”(Acts 7:51-52, NRSV).

The canonization of the New Testament Scriptures at the urging of the Councils of Laodicea in 363 C.E., Hippo in 393 C.E., and Carthage in 397 C.E. further exacerbated the inexorable deterioration of Jewish-Christian relations (Grosser, 1983). The individual Christian who played the most conspicuous role in damaging Jewish-Christian relations, and in the consequent attacks on Jews, was John Chrysostom (ca. 345-407). While in Antioch, he produced a series of eight sermons directed against Jews or Judaizers. His first Homily reads as follows:

“Do not be surprised if I call the Jews wretched. They are truly wretched and miserable for they have received many good things from God yet they have spurned them and violently cast them away – The Jews were branches of the holy root, but they were lopped off” (Chrysostom, quoted in Littell, 1975).

The theological antisemitism inspired by the Church, in due course, became the inspiration for secular antisemitism. Beginning in the fourth century, Church leaders began to put in place restrictive measures against Jews; for if the conversion of the Jews would take longer than desired, they felt it wise to prevent Jews from ‘contaminating’ Christians. Some of the more draconian measures precluded intermarriage, sexual intercourse, eating together, and all significant social contact between Jews and Christians. The Third Synod of Orleans, in the sixth century, banned Jews from employing Christian servants, and prohibited their presence on the streets during Passion Week. The Talmud, the central text of Rabbinic Judaism, was ordered burned in the seventh century, and in about the same period the Synod of Clermont prohibited Jews from holding public office (Rothman, 1982). These edicts from the early Church formed the basis of cultural traditions of discrimination against Jews, and ultimately provided sustenance for Nazi atrocities in the mid-20th century. The following edicts are illustrative:

“We decree and order that from now on, and for all time, Christians shall not eat or drink with Jews, nor admit them to feasts, nor cohabit with them, nor bathe with them. Christians shall not allow Jews to hold civic honors over Christians, or to exercise public office in the state.”

– Pope Eugenius IV, Decree, 1442.

 

  1. Marriages between Jews and citizens of Germany or kindred stock shall be prohibited. Marriages concluded despite the law shall be considered void even when they were concluded abroad.
  2. Nonmarital sexual intercourse between Jews and citizens of Germany or kindred stock shall be prohibited.
  3. Jews shall not employ in their households female citizens of German or kindred stock under 45 years of age.

 

– German law enacted September 15, 1935.

 


Antisemitism has been, and would remain in the foreseeable future, the ‘elephant in the room’ in any serious discussion of Jewish-Christian relations; recent neo-Nazi activities in Europe and the United States of America furnish grounds for such pessimism. The source of Christian antisemitism may still be found, with relative facility, in the body of beliefs that inform Christianity, and its traditional orthodoxy.

The point of this historical narrative is to dispel any presumption that religion-inspired intolerance and violence is of recent origin. What the world is now witnessing is a different strain of particularism; this time it is an extremist interpretation of Islamic theology in contradistinction to other faiths and to Western cultural and social sensibilities. The relevant questions now are what inspired it and how it will end. The perception of unfair treatment by its adherents may not be unreasonable.

 

Bibliography

Olson, Bernhard E. Faith and Prejudice. New Haven: Yale University Press, 1962.

Beare, F.W. The Earliest Records of Jesus. New York: Abingdon Press, 1962.

Eakin, Frank. What Price Prejudice? Christian Antisemitism in America. New York: Paulist Press, 1998.

Gager, John. The Origins of Anti-Semitism: Attitudes toward Judaism in Pagan and Christian Antiquity. New York: Oxford University Press, 1983.

Glock, Charles Y., and Rodney Stark. Christian Beliefs and Anti-Semitism. New York: Harper & Row, 1966.

Grosser, Paul and Egwin Halperin. Anti-Semitism: Causes and Effects. New York: Philosophical Library, 1983.

Hilberg, Raul. The Destruction of the European Jews. Chicago: Quadrangle Books, 1961.

Hoffer, Eric. The True Believer. New York: Mentor Books, 1951.

Ifediora, John O., “The Blood Libel Legend: Its Longevity and Popularity.” Position Paper, University of Cambridge Program on JCR (2013), pp. 1-14.

Langmuir, Gavin. Toward a Definition of Antisemitism. Los Angeles: University of California Press, 1990.

Lieu, Judith. “The Parting of the Ways: Theological Construct or Historical Reality?” Journal for the Study of the New Testament 56 (1994): 101-119.

Littell, Franklin H. The Crucifixion of the Jews: The Failure of Christians to Understand the Jewish Experience. New York: Harper & Row, Publishers, 1975.

Mailer, Norman. The Presidential Papers. New York: Putnam, 1963.

Prager, Dennis and Joseph Telushkin. Why The Jews? The Reason for Antisemitism. New York: Simon & Schuster, 2003.

Rothman, Stanley, and Robert Lichter. Roots of Radicalism: Jews, Christians, and the New Left. New York: Oxford University Press, 1982.

Ruether, Rosemary. Faith and Fratricide: The Theological Roots of Anti-Semitism. New York: The Seabury Press, 1974.

 

 


The Dutch Disease Syndrome and Choice of Governance: The Nigerian Experience.

John O. Ifediora.

If country-specific data and the statistical analyses based on them are good surrogates of what they represent, then figures from the World Bank indicate, by any reasonable standard, that Nigeria’s economic performance since independence in 1960 has been abysmal. Using the most reliable survey of the country to date, the figures show that in 1970 the per capita GDP for the country was US$1,113, but by 2000 it had fallen to US$1,084. Between 1970 and 2000, the poverty and income distribution indices show similar deterioration. The poverty rate, measured as the share of the population living on less than US$1 per day, rose from 36% to approximately 70%; in raw numbers, this means that the number of people living in extreme poverty rose from 19 million in 1970 to 90 million in 2000. The income distribution figures are no less discouraging, for they show that in 1970 the top 2% and the bottom 17% of the population had an equal percentage share of the national income; but in 2000, the top 2% and the bottom 55% had about the same share of the national income (World Bank, 2005). This, in spite of the fact that between 1965 and 2000 Nigeria derived a total net revenue of US$350 billion from oil exports. Similar surveys of many sub-Saharan African countries reveal equally disquieting trends (World Bank, 2007); but why? Why has economic development proved so elusive in Nigeria? Is the absence of sustained economic growth, despite massive assistance from donor states and windfalls from oil revenues, a result of native factors that are inhospitable to economic development? World Bank staff economists took the lead in seeking answers to these questions, and as a result numerous economic studies have since been commissioned and undertaken (Van de Walle, 2001). The results and policy recommendations vary.
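
The poverty headcounts and poverty rates quoted above jointly imply a total population for each year, which offers a quick internal-consistency check on the figures. The short sketch below performs this back-of-the-envelope arithmetic; the helper `implied_population` is illustrative and not part of the World Bank data.

```python
# Consistency check on the cited World Bank figures: a poverty headcount
# and a poverty rate together imply a total population (headcount / rate).
# All numbers below come from the text; nothing here is an external source.

def implied_population(poor_millions: float, poverty_rate: float) -> float:
    """Total population (millions) implied by a poverty headcount and rate."""
    return poor_millions / poverty_rate

# 1970: 19 million poor at a 36% poverty rate
pop_1970 = implied_population(19, 0.36)   # ≈ 52.8 million
# 2000: 90 million poor at a 70% poverty rate
pop_2000 = implied_population(90, 0.70)   # ≈ 128.6 million

print(f"Implied population, 1970: {pop_1970:.1f} million")
print(f"Implied population, 2000: {pop_2000:.1f} million")
```

The implied totals, roughly 53 million and 129 million, are broadly in line with Nigeria’s population over the period, so the quoted headcounts and rates are mutually coherent.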

The Dutch disease syndrome, or natural resource curse, provides, among many others, a plausible explanation; but the explanation it offers can be profitably utilized only when understood within the context of the political realities that inform the governance structure of Nigeria. And since state actions and policy initiatives put in place by the ruling elites have both social and economic consequences, the problem of economic development cannot be treated as a strictly economic phenomenon; the controlling regime type matters. Since poor economic performance and regime type are both central to Nigeria’s underdevelopment, suggested remedies must be holistic. In this paper, I take the position that industrialization, regulated international trade, and targeted foreign direct investment constitute the appropriate path to economic growth and enduring democracy.

Introduction.
The level of economic development and the path taken to sustain it are invariably determinative of chosen forms of governance, and of the political realities that inform them. The literature on political economy is near unanimous on this claim, especially as it pertains to the experiences of the developing world --- Africa, the Middle East, and parts of Asia. The relationship between economic development and choice of governance is indeed intricate, for it touches all the mechanisms by which a society deploys its resources to meet defined goals and expectations, and by which it defines and distinguishes itself from others. It is to this relationship that one must appeal in order to understand not only why countries in sub-Saharan Africa remain economically underdeveloped, but also the forms of government that exist in the sub-continent.

With the sole exception of South Africa, all sub-Saharan African countries depend largely on extractive activities and foreign aid to sustain their faltering economies. Some countries, like Nigeria, Equatorial Guinea, and Gabon, depend on oil revenues; most of the rest depend on revenues from other mineral extracts; but all, in spite of the revenues derived from natural resources, depend to varying degrees on foreign aid. These two predominant sources of sustenance for the sub-continent --- natural resources and foreign aid --- are powerful determinants of the path to economic development taken so far, and of the choice of political governance that defines its constituent countries.

In African states, especially those endowed with abundant natural resources, there is strong evidence that ‘rents’ from these resources have shaped the allocation of political power, as political elites use the derived economic benefits to sustain their privileged positions in government. That authoritarianism and one-party political dominance were fashionable in most of these states less than a decade ago is traceable to natural resource rents; that modern African states with multi-party political structures are today more often than not dominated by a single political party is equally explainable, with relative facility, by the same phenomenon. It is this link that I intend to explore in this paper: specifically, the link between the path taken to economic development and the choice of government in sub-Saharan Africa in general, with particular emphasis on Nigeria. I also intend to use the explanatory properties of the Dutch disease syndrome to explore this link, and then to recommend solutions. The ensuing analysis and solutions are based on the understanding, in the economics and political science literatures, that economic growth and rising personal income are generally conducive to democracy, while slow growth rates and low income levels tend to encourage authoritarianism.

1. The Dutch Disease or Natural Resource Curse Paradigm

In its popular apprehension, the Dutch disease is the observed correlation between the discovery of marketable natural resources within the territorial competence of a state and a subsequent decline in the state’s rate of economic growth. While this observed correlation had been the primary preserve of development economics, it has now been profitably used by political scientists to explain how this economic phenomenon defines, to a large extent, the political realities in a nation-state, both in terms of political arrangements and in the relationship between the state and the governed.

The Dutch disease syndrome, or natural resource curse, has been serviceable in attempts to explain why countries that unexpectedly discover highly marketable natural resources end up with poorly performing economies or actually experience de-industrialization. While this was the original comprehension of the concept, it has since been used to explain similar experiences in aid-recipient countries. Typically, the phenomenon manifests itself in one of two variants. In a fixed nominal exchange rate regime, an influx of foreign exchange income from any source (income from natural resource exports, foreign aid, etc.) tends to affect domestic relative prices. Since the prices of internationally traded goods are set by world markets, the influx inflates the prices of goods produced and consumed only domestically, giving rise to a reallocation of resources toward that non-traded sector. This causes a reduction in investment in export goods, and a subsequent decline in the amount produced. Domestic production of all goods may actually fall if the inflow of foreign exchange income makes imported goods cheaper and more attractive. This is one variant of the Dutch disease effect.

The other variant arises when the influx of foreign exchange income occurs under a flexible or floating nominal exchange rate regime; in this instance, the influx induces an appreciation of the domestic currency, making exports more expensive while cheapening imports. This has the unwanted effect of reducing domestic production in favor of imports. At the extreme, it discourages industrialization, or induces de-industrialization, as the country relies ever more heavily on imported goods at the expense of domestic production and of the experience and technology that come with learning by doing. This is the classic sense of the Dutch disease syndrome, and it affords reasonable guidance to the Nigerian experience and to those of countries in central Africa. As an explanatory tool, it helps explain why resource-rich countries such as Nigeria, the Congo, and Libya have been outperformed economically by resource-poor countries such as South Korea, and why each of these resource-rich countries remains governed by authoritarian or quasi-democratic regimes, while their relatively resource-poor counterparts are either established democracies or seriously engaged in democratization.
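The floating-rate mechanism can be made concrete with a toy calculation. All figures below, and the simple linear appreciation rule, are hypothetical and chosen only to show the direction of the effect, not to model any actual exchange rate:

```python
# Toy sketch of the Dutch disease under a floating exchange rate.
# All numbers and the linear appreciation rule are hypothetical.

def appreciated_rate(base_rate, inflow_share, elasticity=0.5):
    """Domestic-currency units per US$ after a foreign-exchange windfall.
    A larger inflow strengthens the currency, so the rate falls."""
    return base_rate * (1 - elasticity * inflow_share)

base = 100.0                       # e.g. 100 naira per US$ before the windfall
new = appreciated_rate(base, 0.4)  # windfall worth 40% of prior forex earnings

# An export item costing 1,000 naira to produce:
usd_price_before = 1000 / base     # $10.00 on the world market
usd_price_after = 1000 / new       # $12.50 -> exports are now less competitive
```

The same arithmetic shows why imports become more attractive: a US$10 import that cost 1,000 naira before the windfall now costs only 800, undercutting domestic producers.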

2. The State of Economic Development in Africa
Despite the influx of development aid, both in-kind and financial, from donor nations and international financial institutions, Africa’s economy remains abysmally weak, accounting for less than 1.2 percent of the world’s GDP (World Bank, 2006) even though more than 10 percent of the world’s population calls the continent home (UNDP, 1996). Beginning in the early 1970s, the World Bank and the IMF used different combinations of monetary and fiscal policy instruments, including loans and technical assistance, to help spur development in the region, but to no avail. By the late 1990s it had become clear to all concerned that reform efforts in Africa had failed (Meredith, 2005); a new approach was needed.

In the main, economists tended to equate development with economic growth, and their growth models essentially regressed a measure of Gross Domestic Product on a standard array of endogenous and exogenous variables believed to influence national income (Barro, 1991). The regression results invariably lend support to the usual expectations: the stock of physical capital, the level of human capital development, openness to international trade, political competition, the level of inflation, and government expenditure are all presumed to have remarkable effects on economic growth (Klein and Luu, 2003). However, in most of these studies the dummy variable that captures everything outside the standard vector of regressors was consistently found to be significant (Van de Walle, 2001), giving empirical sustenance to the suspicion that there are certain African characteristics that are, perhaps, not amenable to the Western notion of development.
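A cross-country growth regression of this kind can be sketched on synthetic data. Everything below (the variable names, the coefficients, and the size of the dummy effect) is invented purely to illustrate how such a regional dummy picks up an unexplained residual effect; it does not reproduce any published estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic regressors standing in for the usual growth determinants
capital   = rng.normal(size=n)     # stock of physical capital (standardized)
schooling = rng.normal(size=n)     # human capital proxy
openness  = rng.normal(size=n)     # openness to international trade
africa    = rng.integers(0, 2, n)  # dummy: 1 for a hypothetical African country

# "True" data-generating process: the dummy carries a -1.5 point drag on growth
growth = (2.0 + 0.8 * capital + 0.5 * schooling + 0.3 * openness
          - 1.5 * africa + rng.normal(scale=0.3, size=n))

# Ordinary least squares
X = np.column_stack([np.ones(n), capital, schooling, openness, africa])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)

# beta[4] recovers roughly -1.5: a significant "everything else" effect of the
# kind the studies cited above attribute to unmeasured African characteristics
```

The point of the sketch is only that a significant dummy coefficient tells the analyst *that* something systematic is unexplained, not *what* it is, which is why its interpretation was contested.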

What the ‘dummy variable’ captured has been given different interpretations. Hagen (2002) argued that geographical (landlocked or not) and geopolitical circumstances are to blame for Africa’s poor performance. Chen and Feng (1996) pointed to political and social instability as the main culprits, while Lal (1988) posits that harmful cultural norms are partly responsible for Africa’s underdevelopment. Landes (1998) concurred with the findings of Lal, and emphasized the importance of certain cultural features that ‘spring from a society’s most deeply held ethical principles’ in moving a country toward the path of sustainable development. The flaw in these suppositions is that they are presented as ‘reasons’ why sustained development has failed to take hold in the sub-continent; yet a cursory look at the history of the economically developed nations of Western Europe, North America, and parts of Asia reveals that these same ‘reasons’ and circumstances, except for climatic conditions, were very much present in those countries, which nonetheless managed, with time and the judicious use of their resources, to achieve economic development.

3. A Brief Literature Review on Development Efforts in Africa
Beginning in the early 1970s, African leaders and ‘well-meaning’ international agencies grappled with the forces that have, almost interminably, subdued socio-economic development on the continent despite generous foreign aid, progressive moves toward Western-style democracy, and significant independence in resource management (UNDP Rep., 2000). But in spite of these indicators, which should in the normal run of things conduce to sustainable development, Africa remains economically backward, with the highest incidence of poverty and illiteracy in the world, while being ravaged by devastating civil wars and intractable diseases (World Bank, 2001).

Up until the mid-1970s, development experts from the IMF and the World Bank advised African leaders that development was synonymous with economic productivity and growth (Killick, 1998). The leaders concurred, in part because they needed financial assistance to sustain economies that past and present administrators had essentially bankrupted, and in part because they had no pragmatic alternative description of what development in Africa entailed.

But this interpretation of the essence of development by foreign experts was not merely advisory; African leaders could not simply take it or leave it, for it formed the basis of the aid and loan programs received from the IMF, the World Bank, and members of the Paris Club of international lenders (Klitgaard, 1990). Since the assumed goal was to help African countries develop economically, it was deemed essential that domestic production be shifted toward more export-oriented goods. For these goods to be competitive in the world market, however, domestic currencies had to be devalued; moreover, currency devaluation served the additional purposes of dampening the appetite for excessive imports and compelling more internal consumption of domestically produced goods. Currency devaluation and ‘structural adjustment’ of domestic programs thus became the primary prerequisites for loans from both the IMF and the World Bank, and became part of the broader conditionality clause in subsequent loan agreements.

Evidence now shows that both programs failed miserably, and that their combined effects were devastating to African economies: currency devaluation meant that these countries could not afford to import advanced technologies from developed countries, and the structural adjustment programs insisted upon by World Bank experts in the early 1970s discouraged investment in human capital, i.e. healthcare and educational systems (Klitgaard, 1990). The precipitous decline in both sectors in sub-Saharan African countries is a direct consequence of these policies, for they enabled venal African leaders to redirect resources to ‘white elephant’ projects that were more conducive to the misappropriation of financial resources than to the production of goods and services needed to enhance industrial capacity and infrastructure development (Jeffries, 1993).

This economistic view of development, which held sway from the early 1960s to the mid-1970s, is essentially one that requires a shift from an agrarian economy to one dominated by export-based manufacturing and the provision of modern services in the secondary and tertiary sectors, the aim being to stimulate faster growth of the Gross Domestic Product (GDP), encourage more export of domestically produced goods, and attract foreign investment, aid, and loans for infrastructure enhancement and capacity-building (Emizet, 1998). Thus, to the economist, development in Africa is reducible to certain quantifiable indicators that reveal a trend --- the economy is either moving in the right direction or it is not; if GDP is rising and the rise is sustainable over a certain period, then mission accomplished: the country is developing. Other social factors that should complement a developing society were essentially ignored, e.g. the level of literacy, the availability of adequate healthcare services, the educational system, political stability, housing, and cultural observances.

The literature on development is immense, and it continues to evolve to reflect the insights of specialized fields and disciplines once considered outside the scope of development studies. There is now near unanimity among academics and practitioners that development is driven by innovation (Knack and Keefer, 1995), good governance (Osborn, 2004), responsive institutions (North, 1990), and other as yet unidentified variables. These views are particularly relevant to the forty-eight countries that constitute sub-Saharan Africa, where all efforts of the last three decades to nudge them toward sustainable development have failed (Williamson, 1994).

The problems of underdevelopment in Africa remain severe and unabating. A number of reasons have been cited for this conundrum, including, in many instances, the persistent contradiction between private and public roles (North, 1990). In this regard, it is emphasized that most African nations have yet to fully emerge from the patrimonial mode of the post-colonial era, wherein transplanted ideas and domestically informed notions of integrity compete and clash with the goals of personal enrichment and group enhancement made possible by the state apparatus (Meredith, 2005). Thus, ‘the legacy of colonial legality, with its suppression of indigenous economic and political competition against the state’ has encouraged and enabled African elites to dominate and misappropriate resources by means of the state rather than allow transparency and accountability (Goldman, 1980). This outcome, argues Berg (1993), is the essence of bad governance, which perverts the norms of legitimacy and the laws and conventions embodied in the domestic institutions designed to administer the affairs of a society.

These institutions, in a very important sense, are rules; rules that dictate, regulate, and constrain civil activities and behavior, and consequently play critical roles in the economic development and the wellbeing of society. On this basis, Caiden (1992) theorized that the difference between developed Western countries and poor African nations has less to do with productive capacity than with prevailing domestic institutions that affect individual incentives for innovation, and the deployment of factors of production. Furthermore, because developed economies have institutions that support contracts and property rights that are essential for effective market transactions and complex commercial agreements, the system is able to enhance predictability of actions by restraining opportunism and arbitrary influences by the elites. To this effect Douglass North (1990) wrote, “The inability of societies to develop effective low cost enforcement of contracts is the most important source of both historical and contemporary underdevelopment in the Third World.”

To many disinterested observers, however, the principal cause of Africa’s vicious and debilitating cycle of underdevelopment is bureaucratic corruption (Klitgaard, 1990; Mauro, 1995). While bureaucratic corruption can be found in both developed and developing nations, its consequences --- its distortionary effects on resource allocation --- are particularly baleful to nations in Africa with weak socio-political institutions and inadequate economic infrastructure (Ifediora, 2005). For in these countries, important policy decisions are often guided not by sound public policy but by personal interests; the outcome is usually a thoroughly compromised and debased economy (Ifediora, 2005).

4. Application and Analysis of the Dutch Disease Model
There is strong empirical evidence of a long and reasonably durable causal relationship between a natural-resource-dependent state and authoritarianism. Studies by Wantchekon (1999) and Ross (2000), for example, provide substantive grounds for this observation. Less studied, however, is whether the same causal relationship exists between similarly situated states and democracy or any of its variants; or is it the case that states whose primary revenue derives from natural resources are, at least in the immediate future, condemned to autocratic regimes, dictatorships, and failed attempts at democratization? Studies by Auty (1990), Gelb (1989), and, more recently, Sachs and Warner (1997) seem to suggest the latter by linking natural resource dependence to slow economic growth, which, while not by itself definitive, conduces to non-democratic impulses and regimes. Political scientists have also traced this outcome to the vulnerability of weak social institutions to natural resource dependence, in the sense that such dependence encourages states to rely on a system of patronage, which invariably defeats the development, in the short run, of democratic regimes underpinned by a competitive electoral system, transparency, and functional institutions.

Dependence on natural resources to spur economic growth is not by itself harmful, and most definitely not a curse. Natural resources are rather endowments that should, when properly utilized, generate both domestic benefits and useful externalities for other states. The harm stems instead from how states and their agents manage the rents derived from these resources. In economic parlance, rents are returns to effort in excess of incurred cost and the normal return to investment. This surplus over cost and normal profit is more remarkable in oil than in other mineral resources for the simple fact that the cost of extraction is lower in oil production. The abundance of such rents makes the extraction of tax revenue unnecessary, hence the absence of accountability and transparency in how derived revenues are spent by government officials.
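This definition of rent reduces to simple arithmetic. The sketch below uses invented figures (revenue, cost, and a 10% ‘normal’ return on capital) solely to show why a low extraction cost makes oil rents so much larger than rents from other minerals:

```python
def economic_rent(revenue, cost, invested_capital, normal_return=0.10):
    """Rent = revenue minus incurred cost minus a normal return on capital.
    All figures here are hypothetical, for illustration only."""
    return revenue - cost - normal_return * invested_capital

# Same revenue and invested capital, but oil's extraction cost is much lower:
oil_rent     = economic_rent(revenue=100.0, cost=20.0, invested_capital=200.0)
mineral_rent = economic_rent(revenue=100.0, cost=60.0, invested_capital=200.0)
# oil_rent = 60.0, mineral_rent = 20.0
```

With identical revenue, the low-cost producer captures three times the surplus; it is this surplus that accrues to the state without any need for a taxing apparatus.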

It is the manner in which derived revenue, or rent, is spent that gives rise to the damaging effects of natural resource abundance; if rents were spent wisely, the natural resource curse, or Dutch disease effect, would be kept at bay. But since rent does not expend itself, it follows that the choices made by people in government about how and where to allocate it are shaped by the nature of the governance system in which they operate. Thus the state, aptly referred to as a ‘rentier state’, becomes the focus of analysis in comparative and international politics in relation to the political economy of natural resources. Okruhlik puts this perspective quite succinctly:

“Neither Weberian nor Marxist conceptions of statehood adequately account for development in oil states. The Weberian emphasis on extraction does not apply because oil states have been largely relieved of that function. Thus, a defining exchange between the state and society is absent. The Marxist emphasis on class does not apply because people do not identify themselves by their relation to the means of production. The more salient identities are based on family, tribe, religion and region. Since the rentier state depend on an enormous expatriate labor force, the primary class distinction is that between indigenous citizens and foreign resident workers.” (Okruhlik, 1999)

More recent studies of the effects of natural resources on states and regimes have centered on political stability, domestic conflict, and state capacity (Collier and Hoeffler, 2005; Snyder and Bhavanani, 2005). The mechanisms or channels by which any of these effects may be realized typically involve some combination of three explanatory models: the rentier state thesis, the repression approach, and the rent-seeking hypothesis. These models may also be profitably used to explain the link between natural-resource-driven growth and the choice of governance.

The rentier state thesis presumes that as revenue from oil, for instance, becomes progressively dominant in a state’s revenue function, the state gradually moves from being an extractive state to being a distributive one. Through this transformation, the state abandons its traditional means of raising revenue (taxation and fees) and relies almost exclusively on revenue derived from the export of oil. Once the state is relieved of its dependence on tax revenue, the crucial bond between the state and the governed is broken: there is no need for accountability over how derived revenue is spent, because the governed are no longer burdened by taxation, and, by implication, the state sheds its obligation to represent their interests. But the loss is not one-sided, for by not having a robust taxing apparatus the state loses a vital aspect of its capacity as a governing entity --- the ability to collect and document the information a modern state needs to govern effectively. The transformation into a distributive state then enables the governing elites to use derived rent to appease select social groups and buy political influence.

The rentier state model adequately explains the Nigerian political experience since the oil boom in the late 1970s. The military elites were able to buy off potential opposition to their various regimes with rent derived from oil; and for all practical purposes, the civilian regimes that succeeded the military dictators seem to have perfected this process in spite of all outward pretences to democratic principles and observances. The regimes in Libya, Egypt, and that of the Pahlavi in Iran are readily explainable by a variant of the rentier state thesis that predicts prolonged authoritarian regimes made possible by rents (Beblawi and Luciani, 1987). Despite the inherent weaknesses of a rentier state, Chaudhry (1997) argues that in periods of high rents, rulers are able to sustain their rule with larger payments to social groups essential for the survival of their regimes, and as such are able to retain power longer than they would have in the absence of rents.

The repression thesis, as recently studied by Bellin (2002) and Ross (2001), formalizes what is commonly observed in developing nations of Africa and Latin America, where oil revenue is used to finance the purchase of antiquated military hardware that dictators use to repress the governed. In the absence of functional and effective social institutions, oil wealth enables authoritarianism through investment in repressive instrumentalities designed to suppress opposition and preserve the existing distribution of political and economic power. This was certainly the case in Nigeria under the military regimes that lasted from 1983 to 1998, when a disproportionate percentage of Gross Domestic Product was spent to equip and sustain military personnel in total disregard of the crumbling educational and social institutions. The ravages inflicted on the country in that era still resonate in the form of dysfunctional schools, decrepit road and transportation networks, and a suffocating civil bureaucracy. The strategic value of oil to the developed world also makes oil-rich states immune to international pressure; Western countries, in deciding how to deal with authoritarian regimes that hold oil reserves, defer to market-driven principles rather than to the protection of human rights and good governance, and in so doing help prolong oppressive regimes. It is in this regard that oil wealth renders regimes less receptive to external pressure to reform or liberalize.

The rent-seeking model is closely tied to the ‘patron-client’ paradigm, which focuses on how resource abundance generates rents that accrue to the ruling elites, who then use them to generate political support from those seeking access to such rents. In oil-rich countries such as Nigeria, the risk of distortionary activities is more remarkable than in other resource-led growth countries for the simple fact that oil has a higher return-to-cost ratio than other primary commodities. This higher level of rent provides added incentive for the governing elites to loot the public treasury to preserve or enhance their economic and political interests. Nor is this confined to the elites; with time, the non-privileged populace soon realize that rent-seeking rewards talent better than entrepreneurial effort. In Nigeria, for instance, rent-seeking has led very talented young men to invest little in formal education and instead enlist in the military, where access to government contracts is achieved with relative facility. The outcome is a debased and misdirected market economy with an unskilled and ill-educated labor force. It is for this reason that “resource-rich countries like Nigeria, Argentina and Venezuela have been outperformed by resource-poor countries such as Korea and Taiwan. In particular, despite huge oil windfall, Venezuela has suffered a decline in per capita output of 28% from 1970 to 1990, and Nigeria experienced an output contraction of 4.4% from 1980 to 1990” (Lam and Wantchekon, 2003). With perverted incentives comes income inequality, which in turn creates the necessary conditions for patronage politics.

In Nigeria, more than 55% of oil rent is retained by the federal government, which then distributes 34% of this sum among 36 state governments that must subsequently contend with the ethnic and regional competition for oil revenue that has so far defined Nigeria’s institutionalized patronage system (Bienen, 1995). From this sum, the governing elites in power benefit their ‘clients’ through the apparatus of federal and state government by way of contracts or the direct disbursement of funds to political operatives and deputies. By these means the ruling party guarantees its perennial dominance over all other political parties, and consequently imposes on the governed a de facto one-party government. In Libya and Equatorial Guinea, the same phenomenon may be responsible for the observed authoritarian regimes. The almost certain outcome of such rent-seeking behavior is the marginalization of a country’s electoral system, by making fair and free political competition impossible, and the establishment of quasi-democracy or authoritarianism as the default political regime.
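The fiscal arithmetic implied by these figures is worth making explicit. Normalizing total oil rent to 1, and using only the 55% and 34% shares and the 36 states cited above (everything else in the sketch is my own illustrative framing), each state’s direct share turns out to be tiny:

```python
total_rent = 1.0                   # normalize total oil rent to 1
federal_share = 0.55 * total_rent  # the share retained by the federal government
states_pool = 0.34 * federal_share # the portion distributed to the states
per_state = states_pool / 36       # split among 36 state governments

# per_state ~= 0.0052: each state receives roughly half a percent of total
# rent, leaving the great bulk of the rent under federal control
```

Even before any discretionary disbursement, each of the 36 states commands less than 0.6% of total rent, which helps explain the intensity of the competition for federal patronage described above.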

All three theses stem from the Dutch disease syndrome, and collectively they help explain why Nigeria remains economically underdeveloped and has been governed, since her independence in 1960, by a variety of regimes, none of which may reasonably be defined by the tenets of democracy or its underlying principles. But these realities, I propose, would change if and when the ruling elites put in place sound industrial and trade policies. To these issues I now turn.

5. The need for Industrialization
Since the late 1970s, Nigeria’s industrial policy has been guided by policy recommendations from the Washington Consensus (the IMF and the World Bank), in the main because the country is heavily indebted to these institutions and is thus obliged to implement prescribed economic programs aimed at creating the ability to repay borrowed funds. While these programs may be useful in meeting the short-run goal of debt reduction, they are at odds with the long-term objective of sustained economic growth.

The current national industrial policy is one that advocates export-substitution in near total disregard of domestic-based manufacturing, the point being: why expend borrowed and needed resources re-inventing the ‘wheel’ when all essential manufactured goods can easily be imported from industrialized countries? Under this policy of export-substitution, no emphasis is placed on developing a domestic industrial base with manufacturing capabilities. The country thus went from an agrarian economy to a service- and trade-based one, and in the process bypassed the crucial stage of industrialization, a process that no industrialized country has failed to undertake and one that is crucial for sustained economic growth. This strategy of export-substitution as a means to economic growth is not confined to Nigeria; it is the controlling strategy in almost all sub-Saharan African countries indebted to the IMF and the World Bank, as part of the conditionality requirements for loans.

I contend here that the absence of industrialization in Nigeria, occasioned by the strategy of export-substitution, is a major cause of underdevelopment in the country, for the simple reason that its absence denies the country the ability to mechanize the relevant sectors of its economy. The ability to manufacture products domestically is essential to economic growth through the process of ‘learning by doing’, which in turn enhances the opportunities for the technological innovation necessary for sustained development. But how exactly does industrialization conduce to economic development, and through what channels does the process improve the domestic economy? Economic development, in its popular apprehension, entails a continuous process by which a society is structurally transformed from one that is primarily agrarian to one that is predominantly industrial and urban, with the attendant characteristics of rising collective wealth, a meaningful diversity of choices, and stable social institutions (Mellor, 1998). This transformative process, by the testimony of historical precedent, is achieved in four progressive stages (Bromley, 1995). First, a major institutional shift occurs, requiring investment in modern technology, the establishment of new industries, the creation of new markets, and the strengthening of existing infrastructure. Second, technology derived from the newly created industries is made readily available to the agricultural sector, mechanizing and engendering economies of scale in craft, farm, and animal husbandry activities, while local and regional factor markets are energized by incentives made possible by developing product markets in the broader economy. Third, the agricultural sector, now fully mechanized, becomes part of the industrial economy; the growing industrial base invariably leads to urbanization, and with increased efficiency in agriculture, urban dwellers spend less of their income on food and more on manufactures and services. In the fourth and final stage, agricultural activities become less significant in the overall productivity of the domestic economy as industrial productivity and tertiary services rise in importance. This has been the customary path to economic development taken by the Western European economies and, for that matter, by the rest of the industrialized world.

But as stressed by Bromley (1995), this received interpretation of economic development, and the process by which it is achieved, while admittedly teleological and descriptive of the path taken by countries in temperate climates (Western Europe and North America), may be ill-suited for countries in sub-Saharan Africa. This view has gained remarkable currency in the literature and will be addressed in due course; but for the matter under immediate consideration, the traditional interpretation of economic development, and the channels by which it is achieved, is presumed serviceable and remains largely relevant to Nigeria, albeit with significant modifications to account for lived experiences, cultural sensibilities, and traditional observances. Given that all known industrialized countries are also economically developed, and that all underdeveloped countries are either agrarian or largely dependent on their extractive industries for sustenance, industrialization and its accompanying beneficial externalities are imperative to Nigeria's economic development. It is in this sense that Nigeria must emulate the industrial strategies adopted by economically developed countries if the goal is sustained economic growth with less emphasis on its agricultural and extractive sectors.

Creating an Industrial Base, and the Attendant Benefits
Countries that have benefited from industrialization invariably have in place certain minimal prerequisites: a reasonably reliable supply of electricity; a functional educational system responsible, in the first instance, for training the workforce and for the acquisition of practical and theoretical knowledge; a dependable communication and transportation network; and responsive civil institutions that enforce rules and private contracts. Nigeria is severely deficient in these areas, and must have these minimal requirements in place in order to implement any industrialization regime profitably. Once these requirements are in place, the country should target specific sectors of the economy that, given its productive and technological stage and its needs, would benefit most, and relatively sooner than others, from modern technologies acquired from industrialized countries. Firms in these sectors would typically be densely concentrated in particular geographic areas in order to take advantage of both direct benefits and relevant spillovers in the form of human capital, advanced knowledge, and institutional development (Greenwald and Stiglitz, 2006). It is also through these spillovers, or externalities, that the broader economy benefits from industrialization.

To David Hume, "the best way to improve agriculture is through the roundabout way of first improving the manufacturing industry, and we now have a millennium of historical data to back up Hume" (Reinert, 2007). Unlike the agricultural sector, where production units are usually dispersed and small in size, industrial production usually requires large and stable firms that have the ability to absorb novel ideas, and thus serve as reliable sources of innovation. Innovative ideas, practical knowledge acquired through 'learning by doing', and the accumulation of human capital are some of the direct benefits a society derives from industrialization; these and the accompanying spillover effects are transmitted to the domestic economy in the form of economic growth. The channels of transmission are relatively well established; for instance, a manufacturer of automotive equipment acquires modern technology from firms in industrialized countries and subsequently makes the technology available to related domestic firms; from these firms the technology trickles down to the agricultural sector, and finally becomes part of the accumulated productive knowledge that stimulates further economic productivity and growth. Greenwald and Stiglitz put it quite succinctly: "For the developing country, there is further reason for promoting the industrial sector: it is the window to the world, the channel through which more advanced knowledge gets transmitted to the developing country for both industry and agriculture" (2006). It is also a window through which the world may, with time, derive other forms of useful knowledge from the developing country.

6. The Harmful Effects of 'Free' International Trade on Nigeria's Economy
A major contributing factor to underdevelopment in Nigeria is the belief that free international trade based on the concept of comparative advantage would lead to economic growth. In this regard, Nigeria was encouraged, again by the IMF and the World Bank, to devote its resources to the export-based production of agricultural and extractive primary goods, with a view to attracting 'hard currency' into its economy. The outcome of this trade policy is that it encouraged Nigeria to specialize in productive activities subject to diminishing returns to effort, and to 'perfect competition'. The economic history of the world's industrialized countries makes it abundantly clear that countries that remain largely at the extractive and agricultural stage of economic productivity remain poor for two reasons: first, these activities depend on a fixed input, land, and as such their productivity is subject to diminishing returns; second, the products are universally 'similar' and thus trade in a world market where producers have no effective control over the prices of their products, and face intense competition from more efficient producers in industrialized countries. The combined effect is a restraint on income for producers, and by extension, for the country. Thus Nigeria, in order to generate higher aggregate income, must undergo a structural shift that favors manufacturing and industrial activity; not only would this transformation enhance its agricultural capacity in the intervening period, it would also create the industrial base necessary for sustained economic growth. Only when such a structural shift in the domestic economy is successful would free international trade be beneficial.
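The diminishing-returns claim above can be stated formally. The following is a minimal illustration, assuming a standard Cobb-Douglas technology; the functional form and symbols are mine, not the author's:

```latex
% Output Q from fixed land T and variable labor L, with 0 < \alpha < 1:
\[ Q = A\,T^{\alpha}L^{1-\alpha}, \qquad T = \bar{T} \ \text{(land is fixed)}. \]
% The marginal product of labor falls as L grows, since the second
% derivative of Q with respect to L is strictly negative:
\[ \frac{\partial Q}{\partial L} = (1-\alpha)\,A\,\bar{T}^{\alpha}L^{-\alpha},
   \qquad
   \frac{\partial^{2} Q}{\partial L^{2}}
   = -\alpha(1-\alpha)\,A\,\bar{T}^{\alpha}L^{-\alpha-1} < 0. \]
```

Each additional unit of effort applied to the fixed stock of land thus yields a smaller increment of output, which is the sense in which purely land-based production caps a country's income.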
The point to be emphasized here is that developing countries in sub-Saharan Africa are essentially poor because their productive activities are "either devoid of learning potential and/or the fruits of learning (rather than producing local wealth) are passed on to their customers in rich countries in the form of lower prices. From this perspective, what we call 'development' is essentially knowledge- and technology-based rent that often is reinforced, rather than reduced, by free trade between nations at very different levels of development" (Reinert, 2007). The Nigerian experience is no different.

Experience has long shown that free international trade among countries that are far apart in their stages of industrialization harms the least developed ones. It therefore follows that free trade on the basis of comparative advantage is most advantageous to countries near parity in industrial strength, and that countries like Nigeria, without an established domestic industrial base, are better off economically protecting their agricultural sectors from the ravages of international competition until they first industrialize. But how should Nigeria engage the outside world in matters concerning trade, finance, and the acquisition of modern technologies? The concept of 'managed' international trade will suffice; by this I mean purposeful engagement that targets specific domestic sectors for development through the judicious use of indigenous resources and imported technology. It also means restricting Foreign Direct Investment to select industries.

7. Should Foreign Direct Investment Be Part of Nigeria’s Development Strategy?
Capital, like all productive economic inputs, flows to where it can obtain the highest possible returns. It is in this context that one must understand what motivates foreign investors to put their financial resources at risk in a developing country. Countries that provide investors with the proper social infrastructure, a pool of relevantly skilled workers, a safe environment, and a potentially strong market for their products and services will attract foreign investment. This means that a country must either show the potential for, or have already experienced, sustainable economic growth before it gets the attention of foreign investors. It is in this sense that policy makers should not regard Foreign Direct Investment (FDI) as a cause of economic growth; rather, FDI has the potential to contribute to economic expansion only after such expansion has begun. It is also for this reason that developed countries, as a collectivity, receive more than 85% of all FDI made and received annually (Chang, 2008).

Foreign Direct Investment is an important component of a country's capital account, and consists primarily of investments made in the host country by foreign firms and other private investors. It is also a reasonably stable source of extra capital (in 1997 net FDI inflows into developing countries were $169 billion; between 1998 and 2002 they averaged $172 billion per year; World Bank, 2004); it contributes immensely to a country's external balance while enhancing domestic productivity through technology transfer and managerial expertise. This is because foreign investors generally seek direct influence over, or control of, the firm or enterprise they are funding. But there is a catch: when the host country has an unregulated and open capital market, FDI can be monetized and sent out of the country on very short notice. Foreign firms can use their local assets in the developing country as security for internal loans and transfer the funds out of the country, thus creating a negative impact on the country's foreign exchange position (Chang, 2008). Thus, while FDI continues to be serviceable and indispensable to economic growth and development, it can be problematic, and has the potential to retard long-term growth if left unregulated. This occurs through its ability to suppress current and potential domestic competitors in the developing country: with foreign firms able to provide superior products and services at competitive prices, the level of productive capacity in the developing country is effectively compromised in the long run if indigenous firms are killed off through competitive pressure.

In order for a developing country like Nigeria to experience the maximum benefit of FDI, it must regulate its entry and direct it to targeted industries where it is most beneficial. China, for example, severely restricts FDI but still manages to attract 10% of total FDI in the world (World Bank, 2006). This is principally because it offers the potential for rapid growth, together with the requisite social infrastructure that investors and transnational firms find attractive. The same holds true for South Korea and India; they too restrict the level of FDI that flows into their economies, and only allow investments that are needed in specific sectors. This strategic approach to development has served these countries well; China used a 30% tariff to protect its industrial base, and India used one above 30% to achieve the same objective while imposing severe restrictions on FDI (Chang, 2008).

It is on these grounds that I urge a managed international trade regime for Nigeria, one that includes a significantly limited import-substitution strategy and that targets and protects a few relevant industries essential to the establishment of a domestic industrial base. This policy would afford import protection to these industries for a fixed period; the protection would then be gradually lifted to expose them to foreign competition once they are reasonably established. By systematically lifting the initial protection mechanisms, firms in these industries would be compelled by competitive market forces to learn, innovate, and adopt efficient production practices.

Concluding Remarks
Any serious effort to engage the development problems of Nigeria must begin by taking notice of the reality that socio-economic development in the country may be attained, and sustained, only if the processes engaged toward these ends are properly mindful of the cultural and social experiences of Nigerians. This means looking at things from the point of view of those whose welfare one seeks to improve; for only when the life experiences of the indigenous people are clearly understood is it possible to work within the context of their cultural and traditional observances to establish the accommodative social, political, and economic institutions necessary for sustained development. This approach is what I have termed 'contextual development': a process that requires a balanced integration of indigenous cultures, religious beliefs, prevailing social arrangements, and new ideas from developed nations into a unique development strategy that suits a particular nation-state. Contextual development thus requires a good understanding of the needs of the people, of how to design and implement programs that take advantage of the peculiarities of the society, of collective expectations, and of the form of political governance freely chosen by the governed. It also requires, as an imperative, that anyone who embarks on development programs in Nigeria be acquainted with the cultural belief system of the country, the role religion plays, the level of literacy, the availability of skilled labor, the traditional roles of the sexes, prevailing social arrangements, and most importantly, what development means to the people.

The novelty of this approach to development lies not so much in the idea as in its implementation; for experts in development studies are now very much aware that the old policy of imposing change from without has not produced the desired results, but has instead made matters worse despite decades of development assistance to Africa (Easterly, 2001). This strategy necessarily rejects the old one-size-fits-all development model that takes social and political institutions as given, and then proceeds to impose pre-packaged solutions that lack relevance to local realities, practices, and climatic conditions. It is in this very important sense that Bromley (1995) is particularly relevant: there may be more than one path to economic development; the path taken by Western countries was accommodative of the lived experiences and circumstances of the West, and the African path will have to be accommodative of African realities.

References

Acemoglu, Daron, and Robinson, James, 2001, "A Theory of Political Transitions," American Economic Review, 91(4); pp. 938-963.

Acemoglu, D., Johnson, S., Robinson, J., 2001, "The Colonial Origins of Comparative Development: An Empirical Investigation," American Economic Review, Vol. 91, No. 5; pp. 1369-1401.

Auty, Richard, 1990, Resource-Based Industrialization: Sowing the Oil in Eight Developing Countries; Oxford, UK: Clarendon Press.

Barro, R. J., 1991, "Economic Growth in a Cross-Section of Countries," Quarterly Journal of Economics, 106; pp. 407-43.

Beblawi, Hazem, and Luciani, Giacomo, Eds., 1987, Nation, State and Integration in the Arab World, Volume 2: The Rentier State; London, UK: Croom Helm.

Bienen, Henry, 1983, "Oil Revenues and Policy Choice in Nigeria," World Bank Staff Working Paper.

Brautigam, Deborah, 1994, "What Can Africa Learn from Taiwan?," Journal of Modern African Studies, 32; pp. 111-36.

Bromley, Daniel, W., 1995, “Development reconsidered: The African challenge”, Food Policy, Vol. 20, No.5; pp. 425-438.

Caiden, G., 1992, "Dealing with Administrative Corruption," Working Paper Series of the Center for International Reform and the Informal Sector, University of Maryland at College Park; pp. 23-45.

Chang, Ha-Joon, 2008, Bad Samaritans; New York: Bloomsbury Press.

Collier, P., Hoeffler, A., 2005, "Resource Rents, Governance, and Conflict," The Journal of Conflict Resolution, Vol. 49, No. 4; pp. 625-633.

Easterly, W., 1993, "Good Policy or Good Luck? Country Growth Performance and Temporary Shocks," Journal of Monetary Economics, 32; pp. 459-483.

Easterly, W., 2001, "The Lost Decades: Developing Countries' Stagnation in Spite of Policy Reform 1980-1998," Journal of Economic Growth, 6; pp. 135-57.

Emizet, Kisangani, 1998, "Confronting Leaders at the Apex of the State: The Growth of the Unofficial Economy of the Congo," African Studies Review, 41; pp. 99-137.

Fielding, D., 2007, "Aid and Dutch Disease in the South Pacific," Research Paper No. 2007/50; United Nations University.

Gelb, Alan, 1988, Oil Windfalls: Blessing or Curse?; Oxford, UK: Oxford University Press.

Greenwald, B., Stiglitz, J., 2006, "Helping Infant Economies Grow: Foundations of Trade Policies for Developing Countries," American Economic Review, Vol. 96, No. 2; pp. 141-146.

Ifediora, John, 2005, "The Effects of Bureaucratic Corruption on Economic Development: The Case of Sub-Saharan Africa," Economic Indicators, NESG, Vol. 11, No. 2; pp. 8-20.

Jeffries, Richard, 1993, "The State, Structural Adjustment and Good Governance in Africa," Journal of Commonwealth and Comparative Politics, 31; pp. 20-35.

Klein, P., Luu, H., 2003, "Politics and Productivity," Economic Inquiry, 41; pp. 433-47.

Landes, D., 1998, The Wealth and Poverty of Nations: Why Some Are So Rich and Some So Poor; New York: W.W. Norton.

Killick, Tony, 1998, Aid and the Political Economy of Policy Change; London: Routledge, p. 30.

Klitgaard, Robert, 1990, Tropical Gangsters; New York: Basic Books, pp. 7-237.

Krugman, P., Taylor, L., 1978, "Contractionary Effects of Devaluation," Journal of Development Economics, 8; pp. 445-56.

Lewis, D., Rodgers, D., Woolcock, M., 2005, "The Fiction of Development: Knowledge, Authority and Representation," Working Paper Series, London School of Economics and Political Science, No. 05-61; pp. 1-8.

Masters, W., McMillan, M., 2000, "Climate and Scale in Economic Growth," WPS/2000-13, June.

Mauro, P., 1995, "Corruption and Growth," The Quarterly Journal of Economics, CX; pp. 681-712.

Meredith, Martin, 2005, The Fate of Africa; New York: Public Affairs, pp. 414-560.

Nordhaus, W., 2005, “Geography and macroeconomics: New data and new findings,” Inaugural article, National Academy of Sciences.

North, D., 1990, Institutions, Institutional Change, and Economic Performance; Cambridge: Cambridge University Press, pp. 17-38.

Okruhlik, Gwenn, 1999, "Rentier States, Unruly Law, and the Rise of Opposition: The Political Economy of Oil States," Comparative Politics, 31(3); pp. 295-315.

Osborne, Evan, 2004, "Measuring Bad Governance," The Cato Journal, 23, No. 3; pp. 403-22.

Reinert, Erik, 2007, How Rich Countries Got Rich and Why Poor Countries Stay Poor; New York: Public Affairs.

Rose-Ackerman, S., 1978, Corruption: A Study in Political Economy; New York: Academic Press.

Ross, Michael, 2001, "Does Oil Hinder Democracy?," World Politics, 53; pp. 325-361.

Sachs, J., Warner, A., 1997, "Sources of Slow Growth in African Economies," Journal of African Economies, 6; pp. 335-76.

Scott, James, 1972, "Patron-Client Politics and Political Change in Southeast Asia," American Political Science Review, 66(1); pp. 91-113.

Torvik, Ragnar, 2002, "Natural Resources, Rent Seeking and Welfare," Journal of Development Economics, Vol. 67, Issue 2; pp. 455-470.

United Nations, 2002, Millennium Development Goals, Data, and Trends; New York: United Nations.

United Nations Development Program, 1999, The Human Development Report; New York: Oxford University Press, p. 171.

Van de Walle, Nicolas, 2001, African Economies and the Politics of Permanent Crisis; Cambridge: Cambridge University Press, pp. 5-35.

Wantchekon, Leonard, and Jensen, Nathan, 2000, "Resource Wealth and Political Regimes in Africa," Yale University Center of African Studies Working Paper.

World Bank, 2006, World Development Indicators; Washington, DC: World Bank, p. 45.

World Bank, 1998/99, Global Economic Prospects and the Developing Countries; Washington, DC: World Bank, p. 53.

Williamson, Oliver, 1994, “The Institutions and Governance of Economic Reform,” Proceedings of the World Bank Annual Conference on Development Economics; pp. 171-197.


The Petrolization of Nigeria's Economy and Her Current Discontent

In 1908 the first major oil exploration in Nigeria was conducted by the Nigerian Bitumen Corporation, a German company. The search was unsuccessful. It took another twenty-nine years, until 1937, for a serious effort to discover oil to be undertaken; this time the effort was led by a Dutch and a British company, Shell and British Petroleum (BP). The companies formed a consortium known as the Shell-BP Petroleum Development Company, and in 1939 received an Oil Exploration License from the federal government of Nigeria to drill for oil. In 1956 Shell-BP drilled the first productive oil well in Oloibiri, a town in the delta region of the country, and by 1958 the company had successfully pumped and exported five thousand barrels of crude oil. Production levels have steadily increased ever since, albeit with both minor and major disruptions due to conflicts and bureaucratic inefficiencies. By 1968 every oil company in Nigeria was owned and managed by foreign firms; but in 1969 the federal government enacted Decree No. 51, which enabled it to gain full control of the oil industry. This decree, upon implementation, vested in the federal government ownership of all petroleum discovered within the territorial competence of the country. With ownership thus secured, Nigeria joined OPEC in 1971.

[Image: Lagos, Nigeria]

By the mid-1970s the oil sector, still relatively in its infancy, remained completely in the hands of foreign firms, notably Shell, with a 42% share of total crude production; others, in order of importance, were SAFRAP (French) with 21% and Gulf Oil (USA) with 15 percent. During this period increasing oil production and the accompanying oil boom in the international market made it possible for Nigeria to embark on numerous state-sponsored infrastructure projects. But as the federal government pursued its social agenda, the percentage of oil revenue that went to oil-producing states steadily declined, from 50% in 1970 to 20% between 1975 and 1979; between 1992 and 1999 it had declined to 3 percent (UNDP, 2006:15). However, in 1999 a revised national constitution increased the share of oil revenue going to oil-producing states to 13%; unfortunately, the bulk of this revenue never went to improve the welfare of individual states or their inhabitants; instead it was misappropriated at the state and local government levels (Collier, 2001).

Rising oil revenues in the early 1970s made attempts at industrialization possible; this period also witnessed improvements in the health and education sectors. But because the projects embarked upon were capital intensive and poorly designed, they did not have the desired development effects and did not lead to the expected diversification of the economy. The squandering of realized oil revenue led to higher borrowing and a remarkably high debt burden as international interest rates rose in the late 1980s and oil revenue dropped. In a quick-fix response the federal government introduced structural reform policies in 1986, and predicated their implementation on a sharp devaluation of the national currency, which subsequently caused domestic prices to rise, with devastating impact on the middle class; the prevailing level of poverty fared no better. In due course, partly because of the failings of the central government and an entrenched patronage system of resource allocation, social infrastructure began to deteriorate; the healthcare and education sectors bore the heaviest burden through drastic cuts in funding. The percentage of the population living in extreme poverty rose from 35% in 1970 to 70% by the mid-1990s (World Bank, 2001). According to recent surveys, more than 50% of the population now lives on less than $1 per day, and more than 80% lives on less than $2 per day (UNDP, 2008:35).

In the late 1980s it was clear to all concerned, but especially to development experts, that Nigeria was quickly becoming a prime example of the 'curse' that natural resources can bring. Paul Collier (2009) was persuasive on this:

“Indeed 50 years of substantial oil production have not resulted in sustainable socioeconomic development in the country. The poverty rate is extremely high, with 50% of the population living on less than $1 per day; in fact, the current poverty rate exceeds that of the period before the first oil boom of the 1970s, which was 35%. The social and transport infrastructure is in a desolate condition, and the country is marked by chronic internal instability and periodic flare-ups of violent conflict.”

The absence of quality drinking water, limited access to healthcare, and an erratic electricity supply make for serious development problems. Widespread unemployment is also a significant symptom of the economic distortions that accompany oil-dependent economies. The oil industry in general is capital intensive and relies little on human labor for its operations. In the case of Nigeria, this reality is exacerbated by the fact that crude oil from the country is shipped to foreign-based refineries, thus depriving the economy of an important avenue for the additional employment of labor and capital. As a consequence, the entire oil industry employs only about 35,000 workers in a country of over 130 million people.

Since the formation of OPEC in 1960 under the auspices of Juan Pablo Pérez Alfonzo of Venezuela, developed economies, as well as oil-rich but yet-to-be-developed ones, have witnessed an unprecedented transfer of wealth made possible by the cartelization of oil-producing countries. Under the cartel's guidance, oil prices in the international market quadrupled in 1973; this ushered in the first oil boom, and the revenue windfall forever changed both the material well-being of recipient states and the geopolitics of oil. To the developed economies of the world, dependent on oil as a major source of energy, the spectacular rise in crude prices represented an external 'shock' to their respective economies; to the oil-exporting states it was a bonanza that held the potential for economic growth and prosperity. But within a decade of this unprecedented wealth transfer, all oil-exporting countries within OPEC suffered a similar fate: bureaucratic inefficiencies, capital flight, production bottlenecks, crippling graft and corruption, overvalued currencies, and declines in gross domestic product; their aspirations for a development trajectory fueled by industrialization and made possible by oil revenues were shattered. The effect of this bonanza on oil-producing states has been the subject of debate across academic disciplines (Karl, 1999).

In the case of Nigeria, economic under-performance may be readily traced to these culprits: the multiplication of states in a short time span without the requirement that each state be economically viable; bad leadership and compromised social institutions; bureaucratic corruption; reliance on one natural resource (oil) to develop the economy; the absence of an effective industrial policy; and a policy of 'free' and unregulated international trade and exchange. The fact that the country is composed of ethnic groups with remarkably different histories and social institutions, and no common unifying factor such as language, culture, or religion, cannot be ignored as a contributor to its current economic and political state of affairs. For the concept of a nation implies the integration of acquired or native divergences: the formation of a sense of belonging that becomes the basis of nationality and ultimately suppresses sub-national loyalties. Indeed, for many inhabitants of the newly created states there is no identification with the state as a source of collective identity. This absence of a unifying symbol becomes acute when elevated to the national level, creating a shifting, vacuous political community that lacks the anchor and the united sense of direction requisite for sustained development.

The discovery of oil in commercial quantities fueled the impetus to create more states, on the mistaken assumption that doing so would enable the country to allocate its resources efficiently across its different regions, taking advantage of each region's natural and technical endowments to propel robust and comprehensive economic growth. This assumption was predicated on the belief that each ethnic group, by advancing its own social and economic interests, would use resources derived from the central government more efficiently. Moreover, creating and giving more autonomy to individual states would serve the beneficent goal of decentralizing political and economic power, shifting power from the federal government to individual states that know best how to serve the needs of their citizenry. This assumption, while reasonable, was severely misplaced; the requisite social institutions were wanting.

The problem with this model of development is that it failed to recognize the real possibility of dependency on the center for continuous fiscal support; for economic dependency tends to erode the will for self-sufficiency. That this is the current reality is no surprise: the states, realizing that the federal government is a reliable and guaranteed source of financial aid, did not see the need to become self-sufficient. Worse, individual states allowed industries that had existed in their respective regions before the sub-division to fall into disuse and ultimately cease to exist. Northern states, known for their proficiency in the production of groundnuts, hides, and skins, simply abandoned the sector; southern states, where palm oil, palm nuts, and rubber were formerly produced in abundance and sustained both subsistence and commercial productivity, found it unprofitable to expend windfall resources on these endeavors; it was simply less stressful to collect needed revenue from the federal government. In a very real sense, oil revenue has now effectively displaced all other sources of national income. Essential agricultural and primary commodities that once sustained the newly independent country came to be imported; the economy was now fully petrolized.

The damaging consequence of this chosen path to development is the disincentive for self-reliance. The policy of state creation did not include, as a pre-condition for statehood, proof of economic viability. As a result, almost thirty of the thirty-six states that now exist cannot support important state functions without financial support from the federal government; more importantly, the limited revenue from oil is used to duplicate state functions, i.e., to pay for the services of more governors and their administrative staffs, more commissioners, and associated agencies. In the end, these duplicative services dissipate the limited revenue from oil and fail to meet the stated objective of targeted investment and development; they also provide more avenues for bureaucratic corruption and the outright theft of state resources. How to end this vicious cycle will be the subject of future articles.