
Playing Politics with Energy Security: How the Latest Congressional Budget Deal Raids the Strategic Petroleum Reserve

Looking to finally reach a longer-term agreement and avoid an extended federal government shutdown last week, Congress struck a bipartisan deal in the early morning of February 9 that would fund the government for the next two years. As the details of the deal get combed over there is plenty to digest, even just within energy-related topics (such as the inclusion of climate-related policy), but one notable part of the budget agreement was the mandate to sell 100 million barrels of oil from the Strategic Petroleum Reserve (SPR). The stated goal of this move was to help pay for tax cuts and budgetary items elsewhere in the deal, but will that goal be realized, or is Congress paying lip service to the idea of fiscal responsibility at the expense of future energy security?



Purpose and typical operation of the SPR

In a previous post, I covered more extensively the background and purpose of the SPR. In short, the SPR is the largest reserve supply of crude oil in the world and is operated by the U.S. Department of Energy (DOE). The SPR was established in the wake of the oil crisis of the early 1970s with the goal of providing a strategic fail-safe for the country’s energy sector– ensuring that oil is reliably available in times of emergency, protecting against foreign threats to cut off trade, and minimizing the effect on the U.S. economy that drastic oil price fluctuations might cause.

In general, decisions regarding SPR withdrawals are made by the President when he or she 1) “has found drawdown and sale are required by a severe energy supply interruption or by obligations of the United States under the international energy program,” 2) determines that an emergency has significantly reduced the worldwide oil supply available and increased the market price of oil in such a way that would cause a “major adverse impact on the national economy,” 3) sees the need to resolve internal U.S. disruptions without the need to declare “a severe energy supply interruption,” or 4) sees it as a suitable way to comply with international energy agreements. These drawdowns, following the intended purpose of the SPR, are limited to a maximum of 30 million barrels at a time.

Outside of these standard withdrawals, the Secretary of Energy can also direct test sales of up to 5 million barrels, SPR oil can be loaned out to companies attempting to respond to small supply disruptions, or Congress can enact laws authorizing non-emergency SPR sales intended to respond to small supply disruptions and/or raise funds for the government. This last type of sale is what Congress authorized with the passing of the budget deal (see the previous article on the SPR to read more about how the SPR oil actually gets sold).


While selling SPR oil to raise funds is legislatively permitted, this announced sale of 100 million barrels (15% of the balance of the SPR) is an unprecedented amount– the biggest non-emergency sale in history, according to ClearView Energy Partners. More concerning than the amount of oil to be sold, though, is the ambiguity behind what exactly the sale of SPR oil will fund. Historically, an unwritten and bipartisan rule was that the SPR was not to be used as a ‘piggy bank’ to fund political measures. However, that resistance to using the SPR as a convenient way to raise money (for causes like infrastructure or medical research) has waned as Congress has faced perennial opposition to raising taxes and the need for new sources of income.

Lisa Murkowski, Chairwoman of the Senate Energy and Natural Resources Committee, has echoed these frustrations about how the funds from the SPR sell-off will be used. When asked how Congress would spend the money, she simply replied that it would be spent on “whatever they want. That’s why I get annoyed.” Despite the history of the SPR serving as an insurance policy for the U.S. energy sector and economy against threats of embargo from foreign nations, natural disasters, and unexpected and drastic changes in the market, the inclusion of SPR sales in this budget is just a further indication of Congress trading away energy security to buy into other priorities. Taking the issue a step further, once the oil from the SPR is sold off, it likely becomes that much harder to convince Congress in the future to find the money to rebuild stocks should additional oil become necessary– both because oil prices are projected to climb over the long term, making replacement oil progressively more expensive, and because getting Congressional approval for new spending will always be more difficult politically than ‘doing nothing’ and keeping SPR stocks at their current levels.

But is this selling of the SPR oil really in the name of deficit reduction and fiscal responsibility? Will the sale of this oil make an appreciable difference and help balance out the budget that Congress agreed to at (or, rather, past) the eleventh hour?

Crunching the numbers

Ignoring the previously authorized SPR sales, this budget deal alone included a directive for DOE to sell 100 million barrels of oil from the SPR. What level of funds would this actually raise, and would it be enough to make a dent in the deficit? At current crude oil prices, which have hovered in the $60 per barrel (b) range, the sale would translate to about $6 billion– but the actual number depends on the price at which the oil gets sold, an uncertain figure because the oil is being sold over the next 10 years and oil prices are notoriously variable.

We can make rough estimates based on the outlook for crude oil prices going forward (acknowledging at the outset the significant uncertainty that any forecast inherently carries, especially in oil markets that are affected by outside factors like government policy and geopolitical relations). To get a rough idea, though, we can look at the recently released 2018 Annual Energy Outlook (AEO2018) from the Energy Information Administration (EIA), which projects energy production, consumption, and prices under a variety of different scenarios (such as high vs. low investment in oil and gas technology, high vs. low oil prices, and high vs. low economic growth).

[Chart: AEO2018 crude oil price projections by scenario]

Brent crude oil (representative of oil on the European markets) starts at about $53/b in 2018 and rises to about $89/b by 2027 in the ‘reference case’ (going from $27/b to $36/b in the low oil price scenario and $80/b to $174/b in the high oil price scenario). Similarly, West Texas Intermediate (WTI) oil (representative of the U.S. markets) starts at about $50/b in 2018 and rises to $85/b in 2027 in the ‘reference case’ ($24/b to $33/b in the low oil price scenario and $48/b to $168/b in the high oil price scenario). These figures present a pretty wide range of possibilities, but that is unfortunately the nature of oil prices in today’s climate. Further, EIA unofficially considers these ranges to be akin to a 95% confidence interval within which the actual prices are very likely to fall, so we can still find value in these prices as the ‘best’ and ‘worst’ case scenarios.

For simplicity’s sake, we can assume the 100 million barrels will be sold in equal chunks of 10 million barrels per year from 2018 to 2027 (the actual sale certainly will not follow this neat schedule, but the assumption gets us in the approximate range). In the below charts, see the amount of funds raised from this SPR sale assuming the actual sale price is the average of Brent and WTI prices in the AEO2018 reference case, compared with using the price of Brent in the high oil price scenario (the highest oil price in any side case) and the price of WTI in the low oil price scenario (the lowest oil price in all of the side cases). The top chart tracks the amount of money raised in each of the 10 years, while the bottom chart shows the cumulative money raised in these three scenarios over the course of the decade.

[Charts: annual and cumulative funds raised from the SPR sale under the three AEO2018 price scenarios]

As shown, the low oil price scenario raises between $226 million and $326 million every year for a decade, totaling just shy of $3 billion in funds. In the high price scenario, the annual amount brought in is between $800 million and $1.7 billion per year, totaling about $14 billion in funds. In the reference case, the one that is most likely (though not at all assured) to be representative, each year the selling of SPR oil would bring in between $512 million and $868 million for a total of $7.5 billion in funds.
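For anyone who wants to recreate the rough arithmetic behind these totals, below is a minimal Python sketch. It assumes the even 10-million-barrel-per-year schedule described above and a simple straight-line price path between the 2018 and 2027 endpoint prices quoted from AEO2018; the actual AEO price series are not linear, so the outputs approximate the chart values rather than reproduce them exactly.

```python
# Rough sketch of the SPR sale revenue scenarios described above.
# Assumes 10 million barrels sold each year, 2018-2027, and a simple
# straight-line price path between the AEO2018 endpoint prices quoted
# in the text (the real AEO series are not linear -- this is illustrative).

BARRELS_PER_YEAR = 10_000_000
YEARS = list(range(2018, 2028))  # 10 sale years

# (2018 price, 2027 price) in $/barrel, taken from the text above
scenarios = {
    "low (WTI, low oil price case)": (24, 33),
    "reference (avg of Brent and WTI)": ((53 + 50) / 2, (89 + 85) / 2),
    "high (Brent, high oil price case)": (80, 174),
}

def linear_path(start, end, n):
    """Interpolate n evenly spaced prices from start to end."""
    step = (end - start) / (n - 1)
    return [start + step * i for i in range(n)]

for name, (p0, p1) in scenarios.items():
    prices = linear_path(p0, p1, len(YEARS))
    annual = [p * BARRELS_PER_YEAR for p in prices]
    total = sum(annual)
    print(f"{name}: first year ${annual[0] / 1e6:.0f}M, "
          f"last year ${annual[-1] / 1e6:.0f}M, total ${total / 1e9:.1f}B")
```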

Now let’s be clear about one thing–raising somewhere between $3 billion and $14 billion is a lot of money. But in the context of this budget that was passed and the rising deficit of the federal government, how much of a dent will this fundraising through the sale of SPR oil really make?

The budget deal will add $320 billion to deficits over the next decade, which is almost $420 billion when factoring in interest according to the Congressional Budget Office. That massive increase in spending, an average of $42 billion per year, makes the funds from the SPR sale look like pocket change:

[Chart: SPR sale funds compared with the budget deal’s deficit increase]

Both the sale of SPR oil and the impact of this budget will be felt over the next 10 years, meaning these dollar figures make for an apt comparison. At the end of the decade, the high oil price scenario shows that SPR oil sales would account for only 3.4% of the deficit increase, while the reference case would account for 1.8% and the low oil price scenario for just 0.7%. Since the deficit would increase over the course of 10 years, another way to think of it is that the selling of SPR oil would cover 124 days of the deficit increase in the high oil price scenario, 65 days in the reference case, and 26 days in the low oil price scenario.

Outside of the increase to the deficit, the increases to discretionary spending in the budget deal come to $296 billion over the next two years (not including money given immediately to disaster relief, healthcare, and tax cuts). The SPR oil sale translates to between 1.0 and 4.8% of that discretionary spending increase, or 7 to 35 days of the two years’ worth of spending increases.

Lastly, after accounting for this latest Congressional budget agreement, the CBO projects the federal deficit will increase to $1.2 trillion in 2019. If the sale of SPR oil is pushed as a gesture of fiscal responsibility in the wake of this budget deal, it is worth noting that the authorized sale would only account for 1.2% of the total federal deficit in the best case scenario of high oil prices (0.2% in the low oil price scenario)– a metaphorical drop in the bucket (though for those curious, it’s actually significantly more than a literal drop in the bucket!).
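Pulling those comparisons into one place, here is a small back-of-the-envelope sketch using the rough scenario totals and CBO figures quoted above; the inputs are rounded, so the outputs will differ slightly from the percentages in the text.

```python
# Back-of-the-envelope comparison of SPR sale revenue to the budget numbers
# quoted above: CBO's ~$420B ten-year deficit increase (with interest), the
# $296B two-year discretionary spending increase, and the projected $1.2T
# deficit for 2019. Scenario totals are the rough figures from the text.

spr_totals = {"low": 3e9, "reference": 7.5e9, "high": 14e9}

deficit_increase_10yr = 420e9       # dollars over 10 years (~3,650 days)
discretionary_increase_2yr = 296e9  # dollars over 2 years (~730 days)
deficit_2019 = 1.2e12

for name, total in spr_totals.items():
    share_decade = total / deficit_increase_10yr
    share_discretionary = total / discretionary_increase_2yr
    print(f"{name}: {share_decade:.1%} of the decade's deficit increase "
          f"(~{share_decade * 3650:.0f} days), "
          f"{share_discretionary:.1%} of the 2-year discretionary increase "
          f"(~{share_discretionary * 730:.0f} days), "
          f"{total / deficit_2019:.1%} of the projected 2019 deficit")
```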

What’s it all mean?

Buckets do get filled drop by drop, and it inherently takes many drops to fill a bucket. So in this metaphor, each drop need not be disparaged for not being larger and doing more to fill the bucket, as it is the aggregate effect we should care about. Despite that truth, it is still fair to ask whether the sacrifices required to gather that ‘drop’ were worthwhile. Going back to the origin and history of the SPR, selling off large portions of its oil stocks to fund ambiguous budgetary measures was never the intent.

This 100 million barrels to be sold should also not be considered without the context of the sales already authorized by Congress last year that will also become reality in the next decade. Combined with those previously mandated sales, after this budget deal the SPR will be left with just over 300 million barrels of oil— about half of what it had been. So the negative side of this is that Congress appears ready and willing to gut the SPR. However, the other side is that, because of the U.S. shale oil boom and other factors, net imports of oil and oil products to the United States have been dropping significantly. In the context of decreasing net imports, the amount of SPR stock measured in terms of ‘days of supply of total petroleum net imports’ has seen a comparable rise. What this means is that because the United States has become less dependent on foreign oil, less oil needs to be stored in the SPR to provide the same amount of import coverage.
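That ‘days of supply’ metric is simply the SPR stock divided by daily net imports, so coverage improves as imports fall even when the stock itself is untouched. The sketch below uses round placeholder numbers (not the actual EIA series) just to show the mechanics.

```python
# Illustrative only: SPR import coverage = stock / daily net imports.
# The barrel figures below are round placeholder numbers, not EIA data.

def days_of_import_coverage(spr_stock_bbl, net_imports_bbl_per_day):
    """How many days of net petroleum imports the SPR could replace."""
    return spr_stock_bbl / net_imports_bbl_per_day

# Same stock, falling net imports -> rising coverage
stock = 665_000_000  # barrels (placeholder)
for imports in (10_000_000, 5_000_000, 2_500_000):  # barrels/day (placeholder)
    print(f"net imports {imports / 1e6:.1f} Mbbl/d -> "
          f"{days_of_import_coverage(stock, imports):.0f} days of coverage")
```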

[Chart: U.S. SPR stocks measured as days of supply of total petroleum net imports]

In the wake of this budget passing and the previously announced SPR oil sales, many energy analysts came out to call these moves short-sighted at best, for many of the reasons outlined above.

Because the budget that was passed was over 600 pages long and was voted on before most people (or anyone) realistically had a chance to read it, it’s not yet clear what part of the budget will cause the most noise. But in terms of this surprising move by Congress with respect to the SPR, the questions to wrestle with become the following: Is it wise to sell off our oil insurance policy, which might be needed in future tough times, just because things are looking good for the present U.S. oil market? Is the financial benefit of reducing SPR oil stocks by such a significant amount worth paying off a couple of weeks to a couple of months of the increased deficit, or is it possible that such a sale merely pays lip service to fiscal responsibility, allowing politicians to point to an impressive-sounding source of funds (up to $14 billion!) when in reality it doesn’t move the needle much (a maximum of about 3% of the increase in the deficit)?

Sources and additional reading

2018 Annual Energy Outlook: Energy Information Administration

America’s (not so) Strategic Petroleum Reserve: The Hill

Budget deal envisions largest stockpile sale in history: The Hill

CBO Finds Budget Deal Will Cost $320 Billion: Congressional Budget Office

DOE in Focus: Strategic Petroleum Reserve

Harvey, Irma show value of Strategic Petroleum Reserve, energy experts say: Chron

Petroleum reserve sell-off sparks pushback: E&E Daily

U.S. Looks To Sell 15% Of Strategic Petroleum Reserve: OilPrice.com

U.S. SPR Stocks as Days of Supply of Total Petroleum Net Imports: Energy Information Administration

Weekly U.S. Ending Stocks of Crude Oil in SPR: Energy Information Administration

Why the U.S. Shouldn’t Sell Off the Strategic Petroleum Reserve: Wall Street Journal

 

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Debunking Trump’s Claim of “War on Beautiful, Clean Coal” Using Graphs

In President Trump’s first State of the Union Address last week, a wide range of topics in the Administration’s agenda were covered extensively while energy was largely pushed to the side. Trump did include two sentences on his self-described push for “American Energy Dominance,” and these two sentences sent wonks in the energy industry into a frenzy on social media:

“We have ended the war on American energy. And we have ended the war on beautiful, clean coal.”

My Twitter feed lit up with various energy journalists and market watchers noting how impressive it was that just 18 words over two sentences could contain so many misleading, or outright false, claims.


As one of those who immediately took to Twitter with my frustration, I thought I would follow up on those statements from last week with arguments for why the claims of ‘clean coal’ and the supposed ‘war’ on it do not reflect the reality the Trump Administration would have you believe– and I’ll do so with just a handful of graphs.



What is ‘clean coal’?

As a pure fuel, coal is indisputably the ‘dirtiest’ energy source in common use in the power sector, accounting for about 100 kilograms (kg) of carbon dioxide (CO2) per million British thermal units (MMBtu) of energy content. This figure is notably larger than those of other major energy sources, including natural gas (about 50 kg/MMBtu) and petroleum products like propane and gasoline (about 60 to 70 kg/MMBtu), while nuclear, hydroelectric, wind, and solar emit no fuel-related CO2 at all. In the face of the scientific consensus on CO2’s contributions to climate change, many have noted that one of the best actions that can be taken in the energy industry is to shift away from coal to fuels that emit less CO2— which has definitively given coal a dirty reputation.

The premise of ‘clean coal’ is largely a PR push (the term was literally invented by an advertising agency in 2008)– an ingenious piece of marketing, but one that does not have much in the way of legs. When you hear politicians talking about ‘clean coal,’ they are usually referring to one or more of the following suite of technologies:

  • Washing coal before it’s burned to remove soil and rock and thus reduce ash and weight of the coal;
  • Using wet scrubbers on the gas generated from burning coal to remove the sulfur dioxide from being released;
  • Various carbon capture and storage (CCS) technologies for new or existing coal plants that intervene in the coal burning process (either pre-combustion or post-combustion) to capture up to 90% of the CO2 produced and then send it miles underground for permanent storage instead of releasing it into the atmosphere; or
  • Anything done to the coal-fired power plant to increase the efficiency of the entire process of generating electricity (e.g., the 700 Megawatt supercritical coal plant in West Virginia that is so efficient it reportedly releases 20% less CO2 than older coal plants) and reduce the overall emissions.


When most in the energy industry discuss ‘clean coal’ technology, they are typically referring to CCS. However, it should be noted that Trump did not mention CCS by name in this (or any) speech. Some analysts have noted that the White House’s attempts to cut CCS funding and send the Secretaries of the Department of Energy (DOE) and Environmental Protection Agency (EPA) to supercritical coal plants are not-so-subtle hints that the Trump Administration’s preferred type of ‘clean coal’ is improving the efficiency of coal-fired generation. Even Bob Murray, the influential coal magnate, has written to the President to indicate his contempt for CCS, calling it a ‘pseudonym for no coal,’ echoing the concerns of many proponents of coal that CCS is being pushed as the only ‘clean coal’ option so that if/when it fails (due to economic impracticalities) it would be the death knell of coal-fired generation altogether.

So regardless of which ‘clean coal’ technology the Trump Administration supports, issues remain. With regard to wet scrubbers, coal washing, and general plant efficiency improvements, the reductions in CO2 emissions are not nearly enough to compete with cleaner fuels. Even if all coal plants could be made 20% more efficient (and thus reduce CO2 emissions by about 20%) like the West Virginia supercritical plant, which would be a massive undertaking, coal generation would still be among the dirtiest energy in the country.

With regard to CCS, not only is cost one of the biggest issues (which will be looked at in more detail later), but it also does not remove all the pollutants from burning coal. Even with the most effective CCS capturing 90% of CO2 emissions, that leaves 10% of the CO2 making its way into the atmosphere along with the other notable pollutants in coal flue gas (including mercury, nitrogen oxides, and other poisonous contaminants). When compared with the carbon-free energy sources increasingly gaining ground in the United States, coal plants with CCS still hardly seem clean.

Again, the Energy Information Administration’s (EIA) listing of carbon dioxide emissions coefficients shows the CO2 emissions associated with different fuel types when burned. As previously noted, coal is far and away the leader in CO2 emissions coefficients as a pure fuel. In DOE’s analysis of future-built generation (an analysis that focuses on the costs and values of different types of power plants to be built in the future, which will come up again in more detail later), the only type of coal generation even considered is coal with either 30% or 90% carbon sequestration, with 90% being the technological ceiling and 30% being the minimum for new coal-fired generation to remain compliant with the Clean Air Act. The below graph, our first in demonstrating the issues with claims of a ‘war on beautiful, clean coal,’ plots the CO2 coefficients of major fuel sources in the U.S. power sector, including coal using no CCS, 30% CCS, or 90% CCS. Existing power plants do not face the same requirements under the Clean Air Act, so they might still be producing CO2 at the far right of the ‘coal’ bar (indeed, last year almost 70% of U.S. coal was delivered to power plants that are at least 38 years old, meaning they are likely far from the most efficient coal plants out there). Coal plants that are touted as ‘clean’ because of their up to 20% increases in efficiency would still find themselves in the same (or greater) range of emissions as 30% CCS coal plants, while 90% CCS coal plants appear to be the only ones that can compete with other fuels environmentally (though at a potentially prohibitive cost, which will show up in a later graph).

Note that the data for these CO2 emission coefficients come from this EIA listing. The lines for 30%/90% CCS are not simply drawn 30%/90% lower; rather, they account for the fact that CCS requires additional energy and thus causes a dip in plant efficiency– this graph uses the rough efficiency drop assumed for CCS plants in this International Energy Agency report.
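As a rough illustration of that adjustment: if CCS captures a given fraction of the CO2 but also consumes part of the plant’s output (the energy penalty), the effective coefficient is the uncaptured share scaled up by that efficiency loss. The capture rates below come from the discussion above, while the 25% energy-penalty figure is simply an assumed stand-in for the IEA report’s values.

```python
# Sketch of the 'effective' CO2 coefficient for coal with CCS.
# Base coefficient ~100 kg CO2/MMBtu of fuel (from the text); the energy
# penalty is an assumed value (CCS equipment consumes part of the plant's
# output, so more fuel is burned per unit of useful energy delivered).

COAL_KG_CO2_PER_MMBTU = 100.0

def effective_coefficient(capture_rate, energy_penalty):
    """kg CO2 emitted per MMBtu of *useful* output with CCS applied."""
    uncaptured = COAL_KG_CO2_PER_MMBTU * (1 - capture_rate)
    # If CCS consumes e.g. 25% of output, roughly 1 / (1 - 0.25) times as
    # much fuel must be burned for the same delivered energy.
    return uncaptured / (1 - energy_penalty)

for capture in (0.30, 0.90):
    print(f"{capture:.0%} capture -> "
          f"{effective_coefficient(capture, energy_penalty=0.25):.0f} kg CO2/MMBtu")
```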

These numbers paint a scary picture of coal and are what cause many energy prognosticators to scoff at the utterance of ‘beautiful, clean coal,’ though it is important to be clear that these numbers don’t tell the whole story. While nuclear and renewable energy sources do not emit any fuel-related CO2, they are not completely carbon neutral over their lifetimes, as the building, operation, and maintenance of nuclear and renewable generation plants (as with any utility-scale generation source) all have their own non-zero effect on the environment. However, since fuel makes up the vast majority of carbon output in the electricity generation sector, any discussion of clean vs. dirty energy must return to these numbers.

Further, the separation of dispatchable vs. non-dispatchable technologies (i.e., energy sources whose output can be varied to follow demand vs. those that are tied to the availability of an intermittent resource) shown in the above graph is important. Until batteries and other energy-storage technologies advance technologically and economically to the point where they can help renewable (non-dispatchable) energy sources fill in the times when the energy resource is unavailable, dispatchable technologies will always be necessary to plug the gaps. So regardless of what drawbacks might exist for each of the dispatchable technologies, CO2 emissions and overall costs included, at least some dispatchable energy will still be critical in the coming decades.

Who is orchestrating the ‘war on coal’?

Even with the knowledge that coal will never truly be ‘clean,’ the question then becomes: why haven’t the advancements in coal energy that make it cleaner and more efficient than traditional coal-fired plants become more prominent in the face of climate and environmental concerns? The common talking point from the Trump Administration is that a biased war on coal is being orchestrated, and that the President’s actions to roll back regulation are the only way to fight back against this unjust onslaught the coal industry is facing. But again, from where is this onslaught coming?

The answer to this question is actually pretty easy– it’s not regulation that is causing coal to lose its place as the king of the U.S. power sector, it’s competition from more affordable energy sources (that also happen to be cleaner). The two charts below demonstrate this pointedly, with the left graph showing the fuel makeup of the U.S. electric power sector since 1990 along with the relative carbon intensity of the major CO2-emitting fuel sources, while the right graph shows what has happened to the price of each major fuel type over the past decade. The carbon intensity shown on the left graph is even more indicative than the first graph above in detailing the actual degree to which each fuel is ‘clean,’ as it factors in the efficiency of plants using the fuel and indicates the direct CO2 emissions relative to electricity delivered to customers.

[Charts: U.S. electric power generation mix and carbon intensity since 1990 (left); cost of generation by fuel type, 2006 to 2016 (right)]

Note that the costs are taken from this EIA chart, with coal taken from fossil steam, natural gas taken from gas turbine and small scale, and wind/solar taken as the gas turbine and small scale price after removing the cost of fuel. Electric power generation and carbon emission data taken from this EIA source

Just from analyzing these two graphs, a number of key observations and conclusions can be made about the electric power sector and coal’s evolving place in it:

  • In 1990, coal accounted for almost 1.6 million Gigawatt-hours (GWh) of power generation, representing 52% of the sector. By 2016, that figure dropped to 1.2 million GWh or 30% of U.S. power generation.
  • Over that same time period, natural gas went from less than 400,000 GWh (12%) to almost 1.4 million GWh (34%); nuclear went from less than 600,000 GWh (19%) to over 800,000 GWh (20%); and combined wind and solar went from 3,000 GWh (0.1%) to over 260,000 GWh (6%).
  • While the coal sector’s carbon intensity hovered around 1.0 kg of CO2 per kilowatt-hour (kWh) of electricity produced from 1990 to 2016 (even as CCS and other ‘clean coal’ technologies began to break into the market), natural gas dropped from 0.6 kg CO2/kWh to less than 0.5 kg CO2/kWh, while nuclear, wind, and solar do not have any emissions associated with their generation (again noting that there are some emissions associated with the operation and maintenance of these technologies, but they are negligible compared with fossil fuel-related emissions). The drop in natural gas carbon intensity, combined with coal losing ground to natural gas, nuclear, and renewable energy, led the electric power sector’s overall average carbon intensity to drop from over 0.6 kg CO2/kWh to less than 0.5 kg CO2/kWh (a rough sketch of this weighted-average arithmetic appears just after this list).
  • While the narrative some would prefer to push is that coal is getting replaced because of a regulatory ‘war on coal,’ the real answer comes from the right graph, where the cost to generate a kWh of electricity from coal increased notably from 2006 to 2016. Meanwhile, natural gas (which started the decade more expensive than coal) experienced a drastic drop in price to become cheaper than coal (thanks to advances in natural gas production technologies), while the low cost of nuclear fuel and the ‘free’ fuel for wind and solar allowed those energy sources to start and remain well below the total cost of coal generation. This natural, free-market competition from other energy sources, thanks to increasingly widespread availability and ever decreasing prices, is what put pressure on coal and ultimately led to natural gas dethroning coal as the predominant energy source in the U.S. power sector.
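To make that averaging concrete, here is a minimal sketch of the generation-weighted arithmetic. The shares and per-fuel intensities are rounded approximations of the figures quoted in the bullets above (with a rough placeholder for petroleum and other fossil generation), not the underlying EIA data.

```python
# Sector-average carbon intensity = generation-weighted average of each
# fuel's intensity (kg CO2 per kWh). Shares and intensities below are
# rounded approximations of the values quoted above, for illustration only.

def average_intensity(mix):
    """mix: {fuel: (share_of_generation, kg_CO2_per_kWh)}"""
    return sum(share * intensity for share, intensity in mix.values())

mix_1990 = {
    "coal":              (0.52, 1.0),
    "natural gas":       (0.12, 0.6),
    "other fossil":      (0.06, 0.8),   # rough placeholder for oil/other
    "nuclear":           (0.19, 0.0),
    "hydro + renewables": (0.11, 0.0),
}

mix_2016 = {
    "coal":              (0.30, 1.0),
    "natural gas":       (0.34, 0.5),
    "other fossil":      (0.02, 0.8),
    "nuclear":           (0.20, 0.0),
    "hydro + renewables": (0.14, 0.0),
}

print(f"1990: ~{average_intensity(mix_1990):.2f} kg CO2/kWh")
print(f"2016: ~{average_intensity(mix_2016):.2f} kg CO2/kWh")
```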

What these two graphs show is that the energy market is naturally evolving; there is no conspiratorial ‘war’ on coal. The technologies behind solar and wind are improving, getting cheaper, and becoming more prolific for economic, environmental, and accessibility reasons. Nuclear power is holding strong in its corner of the electricity market. Natural gas, more than any other, is getting cheaper and much more prominent in the U.S. power sector (while having the benefit of about half the CO2 emissions of coal), which is what has made it the natural ‘enemy’ of coal over the past decade or two. All that’s to say, the only ‘war on coal’ in recent memory is a capitalistic, free-market war that will naturally play out when new energy sources are available at cheaper prices and contribute significantly less to climate change.

Will Trump policies reverse the course of coal in the United States?

Going back to the statement from Trump’s State of the Union Address, he claimed that his Administration had ended the war on clean coal. As stated previously, there was never an outward war on coal that was hindering the fuel. Even still, the main policy change from the Trump Administration with regard to coal was to repeal the Clean Power Plan (CPP), which aimed to cut carbon emissions from power generation. However, many analysts predicted that this would not change the current trends, as repealing the CPP does nothing to reverse the pricing pattern of the fuels. Indeed, this week EIA released its Annual Energy Outlook for 2018 and confirmed the tough future that coal generation faces compared with natural gas and renewables– both with and without the CPP. While the CPP reduces the projections of coal generation, it doesn’t move the needle all that much, and natural gas and renewables are still shown to surpass coal.

[Chart: AEO2018 projections of generation by fuel, with and without the Clean Power Plan]

So the major policy decision of the Trump Administration with respect to coal generation doesn’t appear to reverse the course of coal’s future. Again, this conclusion isn’t terribly surprising considering the economics of coal compared with other fuels. EIA projects the Levelized Cost of Electricity (LCOE) for different types of new power generation (assumed to be added in 2022), which serves to show the relative costs of installing new power generation. In the same analysis, EIA projects the Levelized Avoided Cost of Electricity (LACE), which can be thought of as the ‘value’ of the new generation to the grid (for a more detailed description of the calculations and uses of these measures, read through the full report). When the LACE is equal to or greater than the LCOE, that is an indication of a financially viable type of power to build (evaluated over the lifetime of the plant). So by looking at the relative costs (LCOE) of each power type and whether or not they are exceeded by their values (LACE), we can get a clear picture of what fuel types are going to be built in the coming years (and to continue the focus on whether coal or other fuels are ‘clean,’ let’s put the economics graph side-by-side with the CO2 emissions coefficients):

[Charts: LCOE vs. LACE for new generation added in 2022 (left); CO2 emissions coefficients by fuel type (right)]

Note that the source of the data in the left graph is the EIA Levelized Cost of Electricity analysis, with the ends of the boxes representing the minimum and maximum values and the line in the middle representing the average– the spread in possible values comes from variations among power plants, such as geographic differences in availability and cost of fuel. Also note that, counter-intuitively, EIA’s assumed costs for 30% CCS are actually greater than for 90% CCS, because the 30% CCS coal plants would ‘still be considered a high emitter relative to other new sources and thus may continue to face potential financial risk if carbon emissions controls are further strengthened.’ Again, the data for the right graph take the CO2 emission coefficients from this EIA listing by fuel type.
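As a concrete illustration of that screening rule (a technology looks financially attractive when its LACE meets or exceeds its LCOE), here is a small sketch. The dollar figures are hypothetical placeholders chosen only to mirror the qualitative ordering discussed below, not values pulled from the EIA tables.

```python
# Screening rule from the text: a new plant type is financially attractive
# (over its lifetime) when its value to the grid (LACE) meets or exceeds
# its cost (LCOE). The $/MWh numbers below are hypothetical placeholders,
# not EIA's actual AEO figures.

candidates = {
    # technology: (LCOE $/MWh, LACE $/MWh) -- placeholder values
    "natural gas combined cycle": (55, 60),
    "onshore wind":               (50, 55),
    "solar PV":                   (58, 60),
    "coal with 90% CCS":          (120, 60),
}

for tech, (lcoe, lace) in candidates.items():
    verdict = "viable" if lace >= lcoe else "not viable"
    print(f"{tech}: LCOE ${lcoe}/MWh vs LACE ${lace}/MWh -> {verdict} "
          f"(net value {lace - lcoe:+d} $/MWh)")
```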

Looking at these graphs, we can see that the cost of new coal generation (regardless of CCS level) not only exceeds the value it would bring to the grid, but also largely exceeds the cost of natural gas, nuclear, geothermal, biomass, onshore wind, solar photovoltaic (PV), and hydroelectric power (all of which emit less CO2 than coal). Thus even in the scenario where 90% of carbon is captured by CCS (which allows it to be ‘cleaner’ than natural gas and biomass), it still comes at a significant cost premium compared with most of the other fuel types. These are the facts that are putting the hurt on the coal industry, not any policy-based ‘war on coal.’ Even the existing tax credits that are given to renewable energy generation are minor when looking at the big picture, as the below graph (which repeats the above graph but removes the renewable tax credits from the equation) shows. Even if these tax credits are allowed to expire, the renewable technology would still outperform coal both economically and environmentally.

[Chart: LCOE vs. LACE for new generation with renewable tax credits removed]

The last graphical rebuttal to President Trump’s statement on energy and coal during the State of the Union that I’ll cite comes from Tyler Norris, a DOE adviser under President Obama:

[Graph shared by Tyler Norris]

As pointed out by Norris and other energy journalists chiming in during the State of the Union address, if the goal were to expand ‘clean coal,’ then the Trump Administration’s budget is doing the opposite by taking money away from DOE programs that support the research and development of the technology. In fact, at the end of last week a leaked White House budget proposal indicated even further slashes to the DOE budget that would further hamper the ability of the government to give a leg up to the development of ‘clean coal’ technology. Any war on energy is coming from the Trump Administration, and any battle that coal is fighting is coming from the free market of cheaper and cleaner fuels.

Sources and additional reading

20 Years of Carbon Capture and Storage: International Energy Agency

Annual Energy Outlook 2018: Energy Information Administration

Average Power Plant Operating Expenses for Major U.S. Investor-Owned Electric Utilities, 2006 through 2016: Energy Information Administration

Carbon Dioxide Emissions Coefficients: Energy Information Administration

Did Trump End the War on Clean Coal? Fact-Checking the President’s State of the Union Claim: Newsweek

How Does Clean Coal Work? Popular Mechanics

How much carbon dioxide is produced per kilowatthour when generating electricity with fossil fuels? Energy Information Administration

Is There Really Such a Thing as Clean Coal? Big Think

Levelized Cost and Levelized Avoided Cost of New Generation Resources in the Annual Energy Outlook 2017: Energy Information Administration

Trump touts end of ‘war on beautiful, clean coal’ in State of the Union: Utility Dive

Trump’s Deceptive Energy Policy: New York Times

What is clean coal technology: How Stuff Works

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Energy for Future Presidents: The Science Behind the Headlines

I had come across Energy for Future Presidents: The Science Behind the Headlines by Richard A. Muller in a bookstore about a year and a half ago and immediately put it on my to-read list. Assuming I would be able to pick it up the next time I was in the store, I did not buy it that day and ended up not finding it in any bookstore I went to for the next year. However, the concept of the book– giving an overview of every type of energy technology and policy that might be relevant in the coming years to future leaders with non-scientific backgrounds– is so important to me that I finally ended up caving and buying it off Amazon.

All in all, this book provides an excellent overview of the landscape of the energy industry and associated public policies, doing so in a way that is accessible and easy to grasp for people who are completely unfamiliar with the topics, but that also goes deep enough to provide useful and new insights to those who are immersed in the energy world. If there’s one main gripe I have with Energy for Future Presidents, it’s that it was published in 2012, and thus a number of its analyses and conclusions are based on data and technology from even before that year. Obviously that’s not Muller’s fault, and it only got exacerbated by my own delay in finally reading the book, but it’s worth bringing up for anyone who is seeking the latest and most up-to-date information.

Overall, any energy-related nitpicks I have with information in the book are minor compared with the overall success I think Muller found in covering a wide variety of topics for a curious but not-scientifically-trained audience– from climate change to the Fukushima disaster, solar energy to synfuels, and electric vehicles to energy productivity. To frame the book on its titular goal of educating ‘future Presidents’ on energy, Muller highlights that it’s not enough for the President to be scientifically literate on energy topics; he or she must also know the science well enough to explain it to the public and Congress to inform the decisions that are ultimately made. Not only that, but often a President’s scientific advisers might disagree, and the President will need to know the basics well enough to make the best decisions. In that respect, Muller spends a majority of the book providing the data and facts, but he also can’t help himself from providing his own opinion as a scientific adviser. This gives the reader a fun opportunity to try out that exact role of the President– take in what the adviser is suggesting and, knowing the facts behind it all, determine whether he or she agrees with the subsequent advice. I know I didn’t agree with every piece of advice that Muller gave, but that’s to be expected when discussing such hotly debated topics, and it certainly did not take away from my enjoyment of the book.



Highlights

  • Energy Disasters: Regarding energy-related disasters, such as Fukushima and the Gulf oil spill, Muller suggests that the safe, conservative action of politicians in the immediate aftermath is to declare the incidents as extremely severe emergencies, as downplaying them and later being proven wrong would be a political and PR disaster. However, he goes through a number of these incidents and shows how, after the data is crunched, the real effects of the disasters are often much less significant than what they are initially made out to be. Not only does this do a disservice in diverting resources where they weren’t needed, but the panic caused by such grandiose declarations could end up doing more harm than good (e.g., unnecessary evacuations disrupting communities or overreactions to potential environmental effects harming tourism when it’s not warranted). His detailing of what ‘conservative’ estimates regarding disasters really entail, and how such estimates inherently harbor the biases of those making them, was particularly interesting and showed why a President should demand ‘best’ estimates in lieu of ‘conservative’ ones.



  • Radiation Risks: Specifically regarding the risk for disasters at nuclear power plants and subsequent radiation, Muller details how the city of Denver already has a natural dose of radiation (0.3 rem per year) and suggests using this natural dosage of radiation as a tent pole where any nuclear incident that is found to cause this much radiation or less should not be one to cause panic or action (as has happened in previous nuclear incidents where the panicked reaction came from not understanding this type of natural radiation). Again, it’s important not only for the President to understand this but also to be able to educate and lead the public on the topic.
  • Climate Change: I appreciated Muller’s careful attention to climate change, stressing the idea that an individual cannot sense the temperature variations attributed to climate change on their own because of the difference between weather and climate, and how the part that actually matters is the subtle rise of global average temperatures (a basic distinction that frustratingly gets misunderstood and is often cited by those claiming climate change isn’t happening because it was particularly cold or snowy on a given day in a given location). Further, Muller’s detailing of how he was once labeled as a climate change skeptic was eye-opening, when in reality he did not find himself on one side or the other– rather he was just pushing for certain aspects of the data and science to be strengthened before any conclusions were made. The stories of this time in climate research illuminated just how committed he was to the science behind any policy, regardless of how it was labeled by the media or by his peers, and stresses how important it is that basic science take the lead and not any particular policy or conclusion that we might hope to be correct. Ultimately, Muller adds his voice to those scientists who have concluded that humans are causing catastrophic climate change and certain actions must be taken before it’s too late.
  • Emissions in Developing vs. Developed Countries: In terms of political solutions for climate change, Muller highlights how global an issue it is (the United States cutting its emissions in half won’t mean much if the rest of the world doesn’t follow suit as well) and points out how a dollar spent in China can reduce carbon dioxide emissions much more than that same dollar spent in America. As a result, Muller suggests subsidizing China’s efforts– an interesting and data-backed idea, though I would be curious to see how a President would be able to sell the public on such a strategy in today’s political environment. Further, when Muller laid out the economics of certain energy technologies and how they worked in the United States compared with a developing nation like India or China, I was surprised to learn that the cheaper cost of labor in the emerging economies actually flips the script on what solutions are viable (e.g., cheaper labor for solar panel manufacturing and installation means that solar energy can be much more competitive with natural gas in China, whereas the higher labor costs in the United States do not allow solar such an advantage).
  • Energy Conservation: Knowing just how much information Muller was trying to cram into this book without making it too dense and cumbersome, I especially appreciated the attention he gave to topics like recycled energy and conservation. In particular, Muller details how much economic sense it makes, on an individual basis and on a macro basis, to grab the low-hanging fruit of energy efficiency– even to the point that it makes financial sense for a power company to subsidize the public’s energy efficiency measures in order to get the best return on investment (ROI) for its money. The part of the story I was previously unaware of (showing my young age) was how President Carter’s attempts to promote energy conservation during the 1979 oil crisis gave most of the public a bad taste regarding energy conservation, as they equated it with a decrease in comfort and quality of life. Once the crisis was over, Americans turned their thermostats back up almost in defiance of the false choice the government had inadvertently presented between energy conservation and comfort. Changing people’s preconceptions of energy conservation, and showing how it can be a personal money-maker while not affecting quality of life at all, is one of the most important tasks Muller assigns to the future Presidents reading this book. He does so himself by showing how something simple (but unfortunately not flashy) like installing insulation in attics across America would have a 17.8% annual ROI, while switching light bulbs to CFLs would have a 209% ROI.


  • Natural Gas: The sections on natural gas were among the most immediately relevant and critical of Energy for Future Presidents, notably Muller’s discussions of the U.S. shale gas boom, its coming supplanting of coal as the largest fuel source for the power generating sector (which hadn’t yet occurred in 2012, but has now happened), the importance of natural gas as a middle-ground fuel that emits about 50% less CO2 than coal, and the challenges and optimism surrounding natural gas vehicles. Regarding the environmental concerns of shale gas drilling, the sentence that stuck with me as a guiding principle was the following: “companies have a financial incentive not to spend money unless their competition also has to spend money; that means the solution to fracking pollution is regulation.”

 

Nitpicks

  • Extreme Weather Events from Climate Change: Going back to the journey Muller undertook with regard to climate science, one aspect he still resists is the linking of climate change to extreme weather events like hurricanes or wildfires. Muller says none of these phenomena are evidence of human-caused climate change, and linking them with climate change is only harmful to the cause because it’s too easy for skeptics to debunk these connections and undercuts the rest of the science that’s sound (caution that’s no doubt learned from the ‘Climate-gate‘ incident of scientists hiding discordant data and the 2007 IPCC report that incorrectly stated that Himalayan glaciers might melt from global warming). Muller’s position is we should simply rest on the temperature data as evidence, since those data are solid. The issue I take with this is that the effects of climate change on extreme weather events are important to know and consider when looking at the full gamut of motivations to stop climate change. While it is true that you cannot yet link specific hurricanes or other weather events to climate change, the science behind climate change driving an increase in extreme weather events has been growing in recent years. Ignoring these impacts, as Muller is doing, does a disservice to the entirety of climate science and the efforts to contain these extreme events.


  • Oil Prices: Towards the end of the section on liquid oil products, Muller asks “how high can the price of oil go? In the long term, it should not be able to stay above the synfuel price of $60 per barrel…That period of limbo is where we are now and the Saudis are worried.” This was an interesting point given that I was reading it six years after the book was published and could look at what has happened to oil prices since then. In the short term, it does show that Muller was right to question whether oil would be able to stay above $60 per barrel, as by 2015 the prices of both West Texas Intermediate and Brent crude oil fell below $60 per barrel again. So in that respect, Muller appeared prescient. However, it’s the idea that oil prices would just continue unhampered in that trend that I had to nitpick, as Muller didn’t include any consideration of the collective action of the Organization of Petroleum Exporting Countries (OPEC). OPEC largely operates as a cartel of countries who depend on the high prices of oil and attempt to control the supply of oil in order to control the prices. As Muller noted, the Saudis were worried, and so they (with the rest of OPEC) took action. In November 2016, OPEC agreed on a quota of oil production among its members and a couple of non-member nations, with that agreement being extended at this point to last through the end of 2018. The collective action of these oil producing nations, as well as the response of countries outside of OPEC (namely the United States), will have a significant impact on the future of oil prices in the coming years and decades. Any assumptions about energy prices that don’t consider the power that OPEC wields don’t tell the entire picture.

  • Electric Vehicles: The nitpick I found with this book that was the most vexing, which Muller himself identified as the part of the book likely to ruffle the most feathers, was his outlook on electric vehicles and how important (or rather, not terribly important) reducing emissions from the transportation sector is. The point Muller kept circling back to was the assertion that U.S. automobiles have contributed about 1/40 of a degree Celsius to global warming and that in the next 50 years the United States would likely be able to keep the additional warming to another 1/40 of a degree Celsius with reasonable efficiency standards. What I found frustrating about Muller’s take on what he called the ‘fad’ of electric cars is that he seemed so dismissive of their potential impact. First, discussing the climate change impact of vehicles in the United States alone seems intentionally narrow, as U.S. car sales only accounted for about 19% of global car sales (the below chart shows the top eight countries in terms of percentage of vehicle fleet made up of electric vehicles). While U.S. policy regarding vehicle efficiency would only impact the cars that can and cannot be sold domestically, the advancement of electric vehicles worldwide (particularly in China, India, and Europe, where the desire for long-range electric vehicles is less important to consumers than it is in America) will have an even more significant climate impact. Policies that help companies develop the technology will boost electric vehicle sales worldwide and will have much more of a climate impact than the 1/40 of a degree Celsius that Muller predicts. Further, his rundown of the costs of electric cars vs. traditional internal combustion engine vehicles seems overly pessimistic about the technology and how costs will drop as mass production increases and as battery technology exceeds its current capabilities. I agree with Muller that hybrid-electric vehicles are going to be immensely important in the nearer term, but dismissing electric cars in the long term seems shortsighted.

[Chart: top eight countries by share of vehicle fleet made up of electric vehicles]

Rating

  • Content- 4/5: This book serves as a great primer on a satisfactorily wide swath of energy topics, while providing useful new insights for people who are already familiar with the basics. You will certainly come away having learned something that surprises and interests you. However, the nitpicks that I previously listed are too strong for me to assign a 5/5 for the content– but the highlights are all great enough that no less than a 4/5 felt appropriate.
  • Readability- 5/5: Muller goes out of his way to explain the various topics to an audience that might not be technically literate in a way that makes reading and learning from the book a breeze. Each individual chapter and section isn’t terribly long, so not only do you feel accomplished as you make your way through, but it also serves to be a useful reference later on if you want to brush-up on any specific topic.
  • Authority- 4/5: As noted earlier, one of the difficulties I had with this book is not the fault of the author at all– simply that it was published six years ago. The landscape of energy technologies and markets is rapidly evolving, so while the basics all still apply, there were issues here and there that appeared to be caused simply by it not being the most up-to-date book. But on the technologies and the politics, Muller commands strong authority from his background as a physicist and his work in climate science.
  • FINAL RATING- 4.3/5: If you’re seeking a single book to give you a broad background on energy technologies, policies, and markets to inform your reading of the headlines of the day, this book is a terrific one to pick up. As Muller advises in the book, everybody comes to the table with their own set of biases– and the only criticism I find with this book is that sometimes Muller’s own biases become apparent (though surely that’s also just me reading the book with my own biases as well!). Energy for Future Presidents can serve both as a thorough read or as a type of reference for various technologies, so for that reason it’s a worthy book to add to your personal library.

If you’re interested in following what else I’m reading, even outside of energy-related topics, feel free to follow me on Goodreads. Should this review compel you to pick up Energy for Future Presidents by Richard A. Muller, please consider buying on Amazon through the links on this page. I’m also going to run a giveaway for this book– if you want to enter for a chance to receive a copy of this book, there are two ways: 1) Subscribe to this blog and leave a comment on this page and 2) go to my Twitter account and retweet the tweet that links to this review. Feel free to enter both ways in order to double your chances of winning! The winner will be contacted by the end of February. 

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

DOE Spotlight: Federal Energy Regulatory Commission

The Federal Energy Regulatory Commission, or FERC, is an independent agency charged with specific regulatory oversight of the energy industry, specifically the interstate trade of energy (i.e., natural gas, oil, and electricity) and the review of proposals for certain energy infrastructure, including liquefied natural gas (LNG) terminals, interstate natural gas pipelines, and hydropower projects. Since its inception, FERC has played a critical role in the regulation (and deregulation) of the energy industry, though its public profile has varied from being somewhat in the background, where only those in the industry ever paid it attention, to being a notable political presence in the news when the issues at hand became more mainstream.

Recently, FERC has made headlines after being tasked by the Trump Administration to investigate grid reliability concerns and whether coal and nuclear plants should be propped up monetarily for their ability to store fuel on site. While that proposal was ultimately rejected (as will be discussed later), it did bring FERC to the forefront of many headlines and debates, while also illuminating how little FERC is really understood in the mainstream.



With that in mind, what follows is a primer on what you need to know about FERC to understand its history and role in energy markets for the next time the Commission pops up in a front page news article.

History of FERC

In its current form, FERC was established in 1977, the same year as the Department of Energy (DOE). However, the Commission traces its lineage back to the 1920s with the establishment of the Federal Power Commission (FPC). The federal government established the FPC as an independent commission to oversee hydropower dams that were located on federally owned land or affected federal waters. Hydropower, which had been around in the form of rudimentary water wheels for over 2,000 years, had started to become more industrialized and critical in the United States with the increased demand for wartime electricity. The FPC was thus the first federal regulatory agency for energy in the United States, seeking to encourage hydropower projects while protecting federal lands, waterways, and water sources.

In the next decade, President Franklin Roosevelt took on the cause of dismantling the monopolies of the electric companies. With that goal, Congress passed the Federal Power Act in 1935. This legislation expanded the power of the FPC, originally composed of the Secretaries of War, Agriculture, and Interior, to set wholesale electricity prices at levels it deemed ‘just and reasonable.’ President Roosevelt’s next legislative push was the 1938 Natural Gas Act, which gave the FPC the additional authority to regulate the sale and transport of natural gas.


FDR’s initial plan to expand the regulatory power of the FPC and neutralize the monopolies in the electricity sector continued to play out over the following decades, with the FPC’s role gradually expanding to include regulation of natural gas facilities, transmission of power over state lines, and more. The next instance of drastic change came in the wake of the oil crisis of 1973, which highlighted the need to consolidate the energy functions of government, which were at that time spread across more than 30 different agencies, under one umbrella. That umbrella was the U.S. Department of Energy, formed in 1977 by the Department of Energy Organization Act. Included in this establishment of DOE was the founding of the Federal Energy Regulatory Commission to replace the FPC. The mission of FERC was very similar to the mission that had evolved at the FPC: to ensure the wholesale prices being paid for electricity were fair.

Following in the footsteps of its predecessor agency, FERC continued to gather new responsibilities over the years:

  • The Public Utility Regulatory Policies Act of 1978 (PURPA) tasked FERC with the responsibility of managing a program to develop new co-generation and small power production, as well as regulating wellhead gas sales;
  • In the 1980s, FERC began to deregulate the natural gas markets;
  • The Energy Policy Act of 1992 attempted to liberalize the electricity market and gave FERC the ability to oversee wholesale competition in the newly open energy markets; and
  • The Energy Policy Act of 2005 also expanded FERC’s responsibilities for regulating the interstate commerce of energy, i.e., the transmission of electricity through power lines and the movement of oil and gas across state lines via pipeline.

As energy markets have become more and more deregulated since the late 20th century, FERC’s powers and responsibilities to oversee those markets have grown to meet their additional complexities. This gradual evolution of FERC’s responsibilities explains why the Commission has increasingly found itself and its decisions a topic of debate in the public sphere, where initially its work was niche and mundane enough that it did not cause many waves.

Purpose

As stated on FERC’s website, the mission of FERC is to ‘assist consumers in obtaining reliable, efficient and sustainable energy services at a reasonable cost through appropriate regulatory and market means.’ This mission is achieved through the guiding principles of organizational excellence, due process and transparency, regulatory certainty, stakeholder involvement, and timeliness.

FERC, after decades of evolution, has come to have a litany of responsibilities working towards that main mission. However, FERC does not simply have carte blanche for all energy and electricity oversight in the United States. The Commission instead gradually gained certain powers, while others were intentionally left to the states or to the open market. As a guide, the below table identifies what FERC does and what FERC does not do:


How FERC works

Given the variety of responsibilities that fall under FERC, understanding how the Commission actually works is critically important to understanding its place in the energy industry. In terms of makeup, FERC is composed of up to five Commissioners who are all appointed by the President, with one of the Commissioners serving as the Chairman (also designated by the President). Of the five Commissioners, no more than three may belong to the same political party, and each Commissioner is appointed for a five-year term (the Commissioners’ terms are staggered so they don’t all need to be replaced at once). Each Commissioner of FERC has an equal vote on regulatory issues, with the Chairman being the one to break any ties.

Despite being organized under the DOE umbrella, FERC operates independently and its decisions are not subject to further review by DOE– a vital component of the Commission functioning as intended. The requirement that no more than three Commissioners come from one party is meant to insulate it from partisan politics. And although the Commissioners are nominated by the President and confirmed by the Senate, FERC operates free from the influence of the Executive and Legislative Branches, as the courts are the only entities that can review FERC decisions.

Beyond the five Commissioners, FERC is a large operation with over 1,200 employees and an agency budget of over $300 million. These figures may sound like a lot, but the operation appears remarkably efficient considering FERC is responsible for overseeing an electricity industry worth $250 billion and regulating the electricity used by 73% of Americans.

Source

FERC’s regulatory review can be kicked into gear in a couple of different ways. For issues with lots of stakeholders and public impact, FERC will use the rulemaking process to ensure the ability to gather information, comments, and other input before making a ruling. The notices of these rulemakings are posted publicly in the Federal Register so the question at hand and the intended pathway are in the public record for all to read, comment on, and follow. These rulemaking processes can be initiated by a petition from the energy industry, specific companies, stakeholders, or anyone in the public. DOE can even initiate a FERC rulemaking, as it did recently with the grid resilience Notice of Proposed Rulemaking (NOPR), but FERC comes to the conclusion of that rulemaking independently, without being subject to DOE review.

For more specific topics undertaken by FERC, such as the licensing of a hydropower project, FERC will also post notices of the activity in the Federal Register (in fact, this type of licensing proposal is among the most common notices FERC, or DOE as a whole, will post in the Federal Register– see graphic below). These actions are initiated by the entity looking for a license or other approval that FERC is authorized to give. These notices also provide an opportunity for any stakeholders to review the action and participate by protesting or filing a complaint.

Outside of a rulemaking requested by an outside entity, FERC also continually reviews the aspects of the energy industry over which it has oversight, such as interstate electricity transmission and wholesale electricity sales, and can initiate investigation and action against any utility found to be in violation of any regulations. In the event of a violation, FERC has the authority to impose fines and other punitive measures. While these violations can also be flagged by outside entities (e.g., states, customers, companies), FERC alone has the authority to determine fault and punishment, subject to review only by the courts.

FERC in the news today

As previously noted, FERC oversees an electricity industry worth hundreds of billions of dollars, and as the energy industry increasingly becomes the focus of politicians and large corporations, so too do the collective actions of FERC. Below are several of the higher profile incidents that brought FERC to the front page of newspapers in recent years.

California utilities overcharging customers

In 2001, California began scrutinizing its power prices, which had recently skyrocketed after the state electric grid was deregulated and opened up to competition. The state accused wholesalers of overcharging customers by $6.2 billion for electricity sold during acute power shortages, and California filed charges with FERC. As a result, FERC ordered refunds, though for only $124 million. The issue did not end there, with California then accusing FERC of stripping billions of dollars from potential refunds and failing to properly ensure that prices set in California were ‘just and reasonable.’ Much has been written about this event, deemed California’s energy crisis– read about the entire timeline and actions surrounding the crisis here. While FERC faced criticism for potentially not doing enough, a 2016 federal court decision upheld FERC’s findings and actions.

Role in approving pipelines

A recurrent theme that brings FERC into the thick of controversy is its role in approving certain pipelines, as these projects are typically protested and strongly opposed by environmental groups. All major natural gas pipelines FERC has approved are listed on the Commission’s website (remember that while FERC regulates interstate commerce of gas and oil through pipelines, it only approves the siting and construction of natural gas pipelines and not oil pipelines). Such involvement in the approval of pipelines makes FERC a lightning rod for criticism by pipeline opponents over any environmental incidents and accidents that may occur. When FERC is debating the approval of specific pipelines, it sees numerous protests from citizens who oppose the building of pipelines in their regions (such as the Transco pipeline in New Jersey, the Millennium Pipeline in New York, and the Marc 1 Hub Pipeline in Pennsylvania, just to name a few). Opponents of gas pipeline projects accuse FERC of approving too many pipelines, issuing approvals too easily without enough environmental analysis, and not taking the opposition of locals seriously enough. On the other hand, those supporting natural gas infrastructure point out that FERC is required to allow developers to build gas pipelines as long as they comply with laws and regulations, and even stress that ‘it’s harder to build a pipeline today than it was 10 years ago…it takes more time and it’s more expensive.’

Source

These types of projects inspire passion on both sides, but assuming FERC works as intended, the Commission remains independent of partisan causes and political leanings. Instead, FERC accounts for all public comments and stakeholder concerns and ensures its rulings are based on existing laws, regulations, and stipulations.

Trump’s FERC without quorum

When President Trump took office in January 2017, he expressed a desire for Cheryl A. LaFleur (a sitting Commissioner and former Chairman of FERC) to be elevated to Chairman. However, this snub of the sitting Chairman, Norman Bay, led to Bay’s resignation from the Commission altogether. As FERC already had two vacant seats at this time, the resulting third vacancy left FERC with only two Commissioners and thus a lack of a quorum with which to take any action. For an administration that had promised to be a friend to the oil and gas pipeline industry, this sudden non-quorum meant that all pipeline projects that needed approval from FERC remained at a standstill until a quorum of Commissioners could be nominated and approved. While the three Commissioners Trump nominated were awaiting Senate confirmation, a fourth Commissioner announced her imminent departure and left FERC with just one sitting Commissioner.

The lack of a FERC quorum lasted six months, ending in August with the swearing in of two newly confirmed Commissioners. Those six months left various infrastructure and energy projects in limbo, the first time FERC had been without a quorum in its history. Eventually all of President Trump’s nominees were confirmed, and the five-person FERC now consists of Kevin J. McIntyre (the Chairman), Cheryl A. LaFleur, Neil Chatterjee, Robert F. Powelson, and Richard Glick.

DOE Grid Resilience Proposal

In September 2017, DOE formally proposed that FERC take action to implement reforms that would provide a financial boost to power providers that kept a 90-day fuel supply on site. This proposal was intended to give an edge to coal and nuclear generation facilities to provide a baseline degree of resilience and reliability to the electrical grid, as those are the only generation sources that can readily store such a fuel supply on site.

This proposal was met with intense opposition from providers of renewable energy and natural gas, as well as from grid operators and former FERC Commissioners from both political parties. Those opposed accused DOE of unjustly trying to pick winners and prop up coal and nuclear, citing authorities like the North American Electric Reliability Corporation (NERC) that have found that the reliability of the bulk power system is not at risk due to the recent closures of coal and nuclear plants.

FERC ultimately decided in January 2018, with a unanimous vote, that the actions DOE proposed failed to meet the requirement that such actions be just, reasonable, and not preferential toward specific fuel types. FERC explained its decision by noting that the proposal was not shown to ‘not be unduly discriminatory or preferential,’ and that the 90-day fuel supply requirement would ‘appear to permit only certain resources to be eligible for the rate, thereby excluding other resources that may have resilience attributes.’ The decision was celebrated by many in the energy industry as demonstrating the independence of FERC and the process working as it should, with the Commissioners not simply voting along party lines and implementing whatever the Executive Branch (through the President and DOE) wanted– no doubt an important reminder in the increasingly partisan environment of U.S. policy-making.

 

 

These are just some of the recent highlights, as FERC always has its plate full with issues that bring passionate debate from multiple sides. For a list of some more controversial issues FERC has been tasked with addressing, see the ‘Controversies’ section of this article on FERC.

Sources and additional reading

About FERC: FERC.gov

An Overview of the Federal Energy Regulatory Commission and Federal Regulation of Public Utilities in the United States: FERC

Federal Energy Regulatory Commission (FERC): AllGov

Hydropower Regulatory History: U.S. Fish & Wildlife Service

What FERC Is and Why It Matters: Huffington Post

What is FERC? PBS

 

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Federal Government Shutdown: Analyzing Electricity Demand When Government Workers Get Furloughed in Washington DC

In a dance that’s become a bit too commonplace in the federal government, threats of a government shutdown over political differences and budget issues are looming once again. After multiple continuing resolutions agreed to between Democrats and Republicans, the latest deadline for appropriation bills to fund the government is fast approaching. While a potential government shutdown would put my 9-5 job on hold until a resolution was reached, a frustrating prospect for all families who rely upon paychecks from their government jobs, there’s not much to do for those of us outside of the White House and Congress. What I can do with that nervousness, though, is ask energy-related questions!

The fact that energy and electricity use changes at regular intervals throughout the day and week is well established, and these trends are reliably correlated with the day of the week, time of the day, and weather. Knowing this led me to the question of how a government shutdown would affect the electricity demand in the Washington DC area, where over 14% of the workforce is made up of federal employees. Would a government shutdown lead to an electricity demand closer to a typical weekend day than a weekday because of the large number of people who would no longer be reporting for work? Would the overall electricity demand go up or down? Is any of this even noticeable, given that about 86% of the workforce would be going to work as normal? We are only a little over four years removed from the last federal government shutdown, so looking at the electricity demand surrounding the 2013 shutdown can provide some insight as to what might happen if there is a shutdown this time around.



Background

The 2013 federal government shutdown lasted from October 1 through October 16, with President Obama signing a bill to reopen the government shortly after midnight on October 17. The political football at stake in 2013 was the Affordable Care Act, as Republicans in Congress sought to defund the program while the Democrats refused to pass funding bills that would do so. As a result, nearly 800,000 non-essential federal employees across the country were out of work without pay, while about 1.3 million essential employees reported to work as normal (though they saw their paychecks delayed). At the heart of the potential 2018 shutdown is the political debate surrounding immigration policy, though the effects on government workers would likely be largely the same as in 2013.

Source

While these numbers account for the vast number of federal employees furloughed outside of Washington DC (such as employees in National Parks across the country), they still include a large number of DC residents. Further, employees of government contractors were reportedly sent home and furloughed without pay as well, though the data on exactly how many contractor positions were affected is unclear. So while there are other metropolitan areas that have a larger percentage of their workforce employed by the federal government, the prominence of federal contractor workers in DC still makes it an obvious choice for examining how electricity demand changed in the wake of the 2013 federal government shutdown. More importantly, though, this analysis will focus on Washington DC because the data from the power companies is available in a sufficiently granular way for the region. The Potomac Electric Power Company, or PEPCO, is the electric power company that serves the entire city of Washington DC, as well as the surrounding communities in Maryland, so looking at PEPCO’s data over the shutdown dates will enable insights into the effect of the shutdown. Federal workers in other regions are typically served by much larger power companies (such as Dominion Energy, which serves many of the Northern Virginia communities of federal workers in addition to the rest of Virginia and parts of North Carolina), making any effect of the shutdown on those companies’ power delivery data less significant on a relative scale.

Data and graphics

PJM, the regional transmission organization that coordinates the movement of wholesale electricity in 13 states and DC, makes available PEPCO’s metered electricity load data on an hourly basis. This type of data is available for most U.S. power companies, and it is fun to play with to get an idea of how Americans behave during certain events like holidays, the Super Bowl, or any other large-scale event. In order to get a baseline of what the weekly electricity distributed by PEPCO looks like, we can first look at the two weeks leading up to the government shutdown of 2013:


A couple of trends become clear looking at these two seemingly normal weeks. First, the weekends (with Saturday and Sunday graphed using a dashed line instead of the solid line for weekdays) appear to have less electricity demand compared with weekdays. This trend is noted everywhere, not just DC, as weekends are when typical commerce activity drops. Additionally, there are clearly patterns of high and low electricity use by time of day, regardless of weekend or weekday. Demand appears to be at its lowest late at night and early in the morning when most people are sleeping, ramps up in the morning as people wake up to begin their day, and peaks around 5 PM when people are coming back home, making dinner, turning on the TV, putting laundry in the washing machine, etc. But did any of these trends change during the 2013 federal government shutdown? Here is the same data for the three calendar weeks during which the government was shut down:


When comparing these graphs with the two weeks prior, there do seem to be some noticeable differences– though the differences vary between the three weeks the shutdown was in effect:

First Week

  • To start, the peak and cumulative power use appear to have increased significantly during the first week of the shutdown– though that could always be caused by the weather and a need to increase air conditioning or heating in a home. Indeed, looking at the temperature (discussed more later), the average temperature during the week climbed from about 66 degrees Fahrenheit the week before to about 73 degrees Fahrenheit. A possible explanation is the higher power use coming from people turning on their AC for the first time in a while due to unseasonably warm temperatures.
  • The overall ‘shape’ of the curves remain constant, so the furloughed employees and contractors did not appear to change their daily patterns enough to shift the timing of peak and minimum electricity loads.
  • Also interesting to note is that the Sunday before the shutdown (Sep. 29) stays lower than the weekdays, as was noted to be typical of weekend days, but the first Saturday after the shutdown began (Oct 5) then shifts to be among the days with the greatest electricity demand. I wasn’t expecting the furloughing of employees to have much of an effect on the weekend electricity demand, as most of the furloughed federal employees presumably did not typically work on weekends, but the answer can likely be attributed to weather, as the weekend of Oct 5-6 had the warmest temperatures (79 and 80 degrees Fahrenheit, respectively) of the whole analysis period.

Second Week

  • The second week is the most anomalous of the three, with the shape of the curves for Sunday and Monday significantly affected and with much higher peaks than the rest of the week (whereas the first week increased the peaks more comparably among the days of the week). In terms of why Sunday might have shifted so significantly, a search of what might have happened in Washington DC to cause this change on October 6, 2013 turned up an article about an explosion accident on the Metro. Perhaps the emergency response to this incident caused significant effects to the electricity demand?
  • Outside of Sunday and Monday, the peaks and shapes of the demand curves were back to being comparable to pre-shutdown levels. As will be shown shortly, though, this trend looks to be attributable to temperatures returning to an average of 65 degrees Fahrenheit.

Third Week

  • By the time of the third and final week of the shutdown, the electricity demand curve looks to be mostly back to normal. The last Sunday of the shutdown and the first Saturday after the shutdown look like normal weekend days, while the weekday curves look normal all week, even though the furloughed government employees and contractors did not head back to work until Thursday.

Just to be complete and ensure the trends we saw before and during the 2013 federal government shutdown were not just random week-to-week variations, below are the same graphs for the two weeks following the shutdown:

These two weeks show somewhat the same general trends we saw prior to the shutdown, with the main changes being that the peak demand for each day appears to shift to first thing in the morning when people are waking up, and that the morning of Saturday Oct 26 shows a higher peak than is typically expected of a weekend day. The peak electricity demand shifting to the morning likely comes from the weather getting colder (down to average temperatures of 53 and 59 degrees Fahrenheit for the two weeks, respectively). The early peak on Saturday Oct 26 might have been caused by a rally protesting mass surveillance that attracted thousands of people to Washington DC (though it too is likely in part due to the fact that it was the first day of the season where the average daily temperature dipped to 46 degrees Fahrenheit and people cranked the heat up when they woke up shivering that Saturday morning).

In addition to the demand curves, it’s important to look at the total daily electricity consumed by day over these previously discussed weeks, while also comparing these totals to the average daily temperatures in DC as I’ve done throughout the previous analysis:

As these two graphics demonstrate, the total electricity demand mostly moves in step with the daily weather regardless of whether or not the federal government is open. If it gets too warm or too cold, that is when you see the spikes in electricity demand– and that will always be the most significant factor.
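(To put a rough number on that relationship, here’s a minimal sketch of the daily-total comparison. The file and column names– ‘pepco_hourly_load.csv’ with ‘datetime’ and ‘load_mw’, and ‘dc_daily_temps.csv’ with ‘date’ and ‘avg_temp_f’– are placeholders you would swap for your own exports of the PJM load data and a weather source. Since demand rises when it’s either too hot or too cold, the sketch correlates daily load with how far the temperature strays from a comfortable 65 degrees Fahrenheit rather than with the raw temperature.)

```python
# Minimal sketch: daily total PEPCO load vs. average daily temperature in DC.
# Input files and column names are placeholders -- adjust to your own data exports.
import pandas as pd

load = pd.read_csv("pepco_hourly_load.csv", parse_dates=["datetime"])
load["date"] = load["datetime"].dt.date
# Summing hourly MW readings gives an approximate daily total in MWh.
daily_load = load.groupby("date")["load_mw"].sum().rename("daily_mwh")

temps = pd.read_csv("dc_daily_temps.csv", parse_dates=["date"])
temps["date"] = temps["date"].dt.date

merged = pd.merge(daily_load.reset_index(), temps, on="date")
# Demand spikes when it is either too warm or too cold, so compare against
# the distance from a "comfortable" 65 degrees F instead of raw temperature.
merged["deg_from_65"] = (merged["avg_temp_f"] - 65).abs()
print(merged[["daily_mwh", "deg_from_65"]].corr())
```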

Conclusions

In the end, there does not appear to be a significant effect on Washington DC’s electricity demand during a federal government shutdown. While having thousands of employees and contractors stay at home is certainly not trivial, there are still even more government employees who would be deemed ‘essential’ and would be in the federal buildings (which would still be operating their heating/cooling systems). Beyond that, a vast majority of PEPCO customers are not in the federal workforce, so the change in daily habits of the unfortunately furloughed employees does not move the needle in a noticeable manner in terms of electricity demand. What’s more important to consider is the weather, and perhaps any daily events such as the Metro accident or the anti-surveillance rally. So while no one, especially in DC, is rooting for a federal government shutdown this week (the 2013 shutdown cost the country $24 billion and disrupted the delivery of Veterans Affairs benefits), we can take incredibly small solace in the fact that it won’t disrupt the expected electricity demand. Despite liquor sales increasing during the 2013 shutdown, the change in daily routine of the thousands of workers who would find themselves temporarily out of work would not threaten the electrical grid’s behavior.

If this type of data is of interest to you, by the way, the Energy Information Administration has an amazing tool that allows you to track electrical demand across the country in real-time. Are there any other events you think would be interesting to investigate for their effect on electricity demand? Let me know in the comments!
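(And if you’d rather work with the raw numbers than an interactive tool, here’s a minimal sketch of how the hourly load curves earlier in this post could be recreated, assuming PJM’s hourly metered load for the PEPCO zone has been exported to a CSV with columns named ‘datetime’ and ‘load_mw’– placeholder names for illustration, so adjust them to whatever the actual download uses.)

```python
# Minimal sketch: plot PEPCO hourly load curves by day, dashed lines for weekends.
# Assumes an export of PJM's hourly metered load for the PEPCO zone saved as a CSV
# with columns 'datetime' and 'load_mw' (placeholder names -- adjust to your file).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("pepco_hourly_load.csv", parse_dates=["datetime"])
df["date"] = df["datetime"].dt.date
df["hour"] = df["datetime"].dt.hour
df["weekend"] = df["datetime"].dt.dayofweek >= 5  # Saturday=5, Sunday=6

# Limit to the two weeks before the 2013 shutdown (Sep 17 - Sep 30).
pre = df[(df["datetime"] >= "2013-09-17") & (df["datetime"] < "2013-10-01")]

fig, ax = plt.subplots()
for date, day in pre.groupby("date"):
    style = "--" if day["weekend"].iloc[0] else "-"   # dashed for weekend days
    ax.plot(day["hour"], day["load_mw"], style, label=str(date))

ax.set_xlabel("Hour of day")
ax.set_ylabel("PEPCO metered load (MW)")
ax.legend(fontsize="x-small")
plt.show()
```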

Sources and additional reading

Absolutely everything you need to know about how the government shutdown will work: Washington Post

Customer Base Line: When do you use the most electricity? Search for Energy

Demand for electricity changes through the day: Energy Information Administration

Democrats face make-or-break moment on shutdown, Dreamers: Politico

Electricity demand patterns matter for valuing electricity supply resources: Energy Information Administration

Electricity supply and demand for beginners

Everything You Need to Know About the Government-Shutdown Fight: New York Magazine

Here’s What Happened the Last Time the Government Shut Down: ABC News

How Many Federal Government Employees Are in Alexandria? Patch

Metered Load Data: PJM

U.S. Government Shutdown Looms Amid Immigration Battle: Reuters

Which Metro Area Has the Highest Share of Federal Employees? Hint: Not Washington: Government Executive

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Drilling in the Alaskan Arctic National Wildlife Reserve vs. Renewable Energy: The Drilling Debate, Economic and Environmental Effects, and How Solar and Wind Energy Investment Would Compare

In a first for this blog, the focus of this post comes directly from a reader request– so I’ll let this person’s words speak for themselves:

With Congress recently passing a bill allowing for drilling of oil and gas in Alaska’s Arctic National Wildlife Refuge (ANWR), it got me curious (as a citizen of the sun-rich American Southwest) how much land would need to be covered in solar panels in order to generate the same amount of energy that would be found in these potential new oil and gas drilling sites. Obviously each energy source would have their individual costs to consider, but I am curious as to how efficient and cost-effective it would be to drill in the Alaskan arctic if there are cleaner and cheaper alternatives– it seems covering up the deserts of New Mexico and Arizona could be preferable to potentially harming some of the Alaskan environment and wildlife. Is drilling in this new area even an efficient and safe way for us to get additional oil and gas?
– Case

I loved the thoughtfulness and importance of this question and was inspired to immediately jump into research (also I was so happy to have a suggestion from an outside perspective– so if you read this or any of my other posts and you get inspired or curious, please do reach out to me!). From my perspective, this overall inquiry can be broken down into five questions to be answered individually:

  1. What is ANWR and what exactly did Congress authorize with regards to drilling in ANWR?
  2. How much potential oil and gas would be produced from the drilling?
  3. What are the economics associated with extracting and using oil and gas from ANWR?
  4. What are the environmental effects of that drilling?
  5. Can we do better to just install renewable energy resources instead of drilling in ANWR? How much capacity in renewable sources would be needed? How would the costs of renewable installations compare with the ANWR drilling?



Question 1: What is ANWR and what exactly did Congress authorize with regards to drilling in ANWR?

The Arctic National Wildlife Refuge, or ANWR, has long been a flash point topic of debate, viewed by proponents of oil and gas drilling as a key waiting to unlock fuel and energy independence in the United States, while opponents argue that such drilling unnecessarily threatens the habitat of hundreds of species of wildlife and the pristine environment that’s been protected for decades. ANWR is a 19.6-million-acre section of northeastern Alaska, long considered one of the most pristine and preserved nature refuges in the United States. Having stayed untouched for so long has allowed the native population of polar bears, caribou, moose, wolverines, and more to flourish. ANWR was only able to remain pristine because oil and gas drilling in the refuge was banned in 1980 by the Alaska National Interest Lands Conservation Act, with Section 1002 of that act deferring decision on the management of oil and gas exploration on a 1.5-million-acre coastal plain area of ANWR known to have the greatest potential for fossil fuels. This stretch of ANWR has since become known as the ‘1002 Area.’

Source

This 1002 Area of ANWR is at the center of the ANWR debate, as Presidents and Congresses have fought off various bills over the past couple of decades that sought to lift the drilling ban, doing so successfully until recently. At the end of 2017, with Republicans (who have long been pushing to allow such oil and gas exploration in ANWR) controlling the White House and both houses of Congress, decisive action was finally taken. The Senate Energy and Natural Resources Committee, led by Lisa Murkowski of Alaska, voted in November to approve a bill that would allow oil and gas exploration, with that bill ultimately getting attached to and approved along with the Senate’s tax-reform package in December, with the justification for that attachment being that the drilling would help pay for the proposed tax cuts.

Specifically, the legislation that ended the ban on oil and gas drilling in ANWR did so by mandating two lease sales (of at least 400,000 acres each) in the 1002 Area over the next 10 years. The government’s revenue from these lease sales is expected to exceed $2 billion, half of which would go to Alaska and the other half to the federal government.

Source

Question 2: How much potential oil and gas would be produced from the drilling?

This really is the million dollar (or, rather, billion dollar) question, because part of the issue is that no one really knows how much fossil fuel is hidden deep under ANWR. The situation is a bit of a catch-22, as you cannot get a good idea for how much oil there is without drilling, but under the drilling ban you cannot explore how much there is. A number of surface geology and seismic exploration surveys have been conducted, and one exploratory drilling project by oil companies was allowed in the mid-1980s, but the results of that study remain a heavily guarded secret to this day (although National Geographic has previously reported that the results of the test were disappointing). In contrast even to regions bordering ANWR in Alaska that have the benefit of exploratory drilling, any analysis of the 1002 Area is restricted to field studies, well data, and analysis of seismic data.

The publicly available estimates from the 1998 U.S. Geological Survey (USGS) assessment (the most recent one done on the 1002 Area) indicate there are between 4.3 billion and 11.8 billion barrels of technically recoverable crude oil products and between 3.48 and 10.02 trillion cubic feet (TCF) of technically recoverable natural gas in the coastal plain of ANWR. Just because that much oil and gas is technically recoverable, though, does not mean that all of it would be economical to recover. A 2008 report by the Department of Energy (DOE), based on the 1998 USGS survey and acknowledging the uncertainty in the USGS numbers given that the technology used for the survey is now outdated, estimates that development of the 1002 Area would actually result in 1.9 to 4.3 billion barrels of crude oil extracted over a 13-year period (while the rest of the oil would not be cost effective to extract). The report also estimates that peak oil production would range from 510,000 barrels per day (b/d) to 1.45 million b/d. These estimates must be taken with a grain of salt, however, as not only are they based on a survey conducted with now-outdated technology, but the technology to extract oil has also greatly improved since then. These technology improvements mean the USGS estimates could be low, but on the other side, oil exploration is always a lottery and recent exploration near ANWR has been disappointing. That’s all to say, current estimates are just that, estimates– which makes the weighing of pros and cons of drilling all the more complicated.

Source

The 2008 DOE report did not assess the potential extraction of natural gas reserves (note that much of the analysis and debate surrounding ANWR drilling focuses mainly on the oil reserves and not the natural gas reserves, likely because the oil is more valuable, cost-effective to extract, and in demand. Where relevant, I will include the facts and figures of natural gas in addition to the oil, but note that certain parts of this analysis will have to center just on the oil based on the availability of data).

To put that in context, the total U.S. proved crude oil reserves at the end of 2015 were 35.2 billion barrels, so the technically recoverable oil in the 1002 Area would account for 12 to 34% of total U.S. oil reserves. At the end of 2015 the U.S. proved reserves of natural gas were 324.3 TCF, making the technically recoverable natural gas in the 1002 Area equal to 1 to 3% of total U.S. natural gas reserves. Put another way, the technically recoverable oil reserves would equal 218 to 599 days worth of U.S. oil consumption (using the 2016 daily average), while the natural gas reserves would equal 47 to 134 days worth of U.S. natural gas consumption (using the 2016 daily average).
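(For those who want to check the ‘days of consumption’ math, here’s a minimal sketch of the arithmetic. The consumption figures– roughly 19.7 million barrels of petroleum per day and roughly 27.5 TCF of natural gas per year in 2016– are approximations assumed for illustration, so the outputs land within a day or two of the figures quoted above.)

```python
# Back-of-the-envelope check of the "days of U.S. consumption" comparison.
# Consumption figures are approximate 2016 averages, assumed for illustration.
OIL_RESERVES_BBL = (4.3e9, 11.8e9)        # technically recoverable oil, barrels
GAS_RESERVES_TCF = (3.48, 10.02)          # technically recoverable gas, trillion cubic feet

US_OIL_USE_BPD = 19.7e6                   # ~2016 U.S. petroleum consumption, barrels/day
US_GAS_USE_TCF_PER_DAY = 27.5 / 365       # ~2016 U.S. natural gas consumption, TCF/day

for label, reserves in zip(("low", "high"), OIL_RESERVES_BBL):
    print(f"Oil ({label} estimate): {reserves / US_OIL_USE_BPD:.0f} days of U.S. consumption")
for label, reserves in zip(("low", "high"), GAS_RESERVES_TCF):
    print(f"Gas ({label} estimate): {reserves / US_GAS_USE_TCF_PER_DAY:.0f} days of U.S. consumption")
# Prints roughly 218 and 599 days for oil, and roughly 46 and 133 days for gas.
```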

Question 3: What are the economics associated with extracting and using oil and gas from ANWR?

In addition to the push towards ‘energy independence’ (i.e., minimizing the need for oil imports from foreign nations where prices and availability can be volatile), a main motivation for drilling in the 1002 Area of ANWR is the economic benefits it could bring. In addition to the $1 billion for the Alaskan government and $1 billion for the federal government from the leasing of the land, Senator Murkowski boasted that the eventual oil and gas production would bring in more than $100 billion for the federal treasury through federal royalties on the oil extracted from the land.

Source

However, these theorized economic benefits of drilling are strongly disputed by the plan’s opponents, with the president of the Wilderness Society noting that ‘the whole notion that you are going to trim a trillion-dollar deficit with phony oil revenue is just a cynical political ploy.’ When digging into the numbers more closely, the $1 billion to the federal government from leasing the land would end up offsetting less than 0.1% of the $1.5 trillion in tax cuts to which the drilling provision was attached (while some analyses question whether the land would even gather that much in reality, noting the estimates assume oil leases selling for 10 times what they sold for a year ago, when domestic oil was scarcer and more expensive).

Outside of the federal revenue, the money coming to the Alaskan government would be even more influential, which is why the charge to open ANWR to drilling is often led by Alaskan policymakers. In fact, while a majority of Americans oppose drilling in ANWR, most Alaskans are cited as supporting responsible oil exploration. While that may seem counterintuitive, the Arctic Slope Regional Corporation explains that ‘a clear majority of the North Slope support responsible development in ANWR; they should have the same rights to economic self-determination as people in the rest of the United States.’

In addition to the money raised by the government is the potential economic benefit to the country from the extraction of the oil. According to the previously mentioned 2008 DOE report, the extraction of the ANWR oil would reduce the need for the United States to import $135 to $327 billion of oil. This shift would benefit the U.S. balance of trade by that same amount, but reliance on imported foreign oil would only drop from 54% to 49%, and the effect on global oil prices would be small enough to be neutralized by modest collective action by the Organization of Petroleum Exporting Countries (OPEC), meaning U.S. consumers would likely not see an effect on their energy prices.

The last economic consideration would be the worth of the oil and the cost to the companies doing the drilling to extract and bring to market the oil products. A peer-reviewed study published in an Elsevier journal found that the worth of the oil in the 1002 Area of ANWR is $374 billion, while the cost to extract and bring it to market would be $123 billion. The difference, $251 billion, would be the profit to the companies– which theoretically would generate social/economic benefits through means such as industry rents, tax revenues, and jobs created and sustained.

So in short, the decision about whether or not to drill in ANWR has the potential to cause a significant economic effect for the federal and Alaskan state governments, the oil companies who win the leasing auctions, and those who might be directly impacted by increased profits to the oil and gas companies. As with all analytical aspects of ANWR drilling, though, the exact scale of that effect is hotly debated and subject to the great uncertainty surrounding how much oil and gas are technically recoverable from the 1002 Area. Further, the amount of oil that is economically sound to recover and put into the market (not to mention the price oil and gas companies would be willing to pay to lease this land) is entirely dependent on the ever-fluctuating and difficult-to-forecast price of crude oil, adding further potential variability to the estimates.

Question 4: What are the environmental effects of that drilling?

As previously noted, drilling in ANWR is an especially sensitive environmental subject because it is one of the very few places left on Earth that remains pristine and untouched by humanity’s polluted fingerprint. The vast and beautiful land has been described by National Geographic as ‘primordial wilderness that stretches from spruce forests in the south, over the jagged Brooks Range, onto gently sloping wetlands that flow into the ice-curdled Beaufort Sea’ and is often called ‘America’s Serengeti.’ In terms of wildlife, ANWR is noted as fertile ground for its dozens of species of land and marine mammals (notably caribou and polar bears) and hundreds of species of migratory birds from six continents and each of the 48 contiguous United States.

Source

While the exact environmental effects of oil exploration and drilling are not known for certain, the potential ills that can befall the environment and wildlife in ANWR include the following:

  • Oil development is found to be very disruptive to the area’s famed porcupine caribou, potentially threatening their existence (an existence which the native Gwich’in people depend upon for survival), with the Canadian government even issuing a statement in the wake of the ANWR drilling bill reminding the U.S. government of the 1987 bilateral agreement to conserve the caribou and their habitat;
  • ANWR consists of a biodiversity so globally unique that the opportunity for scientific study is huge, and any development of that land threatens that existing natural biodiversity in an irreparable way;
  • The National Academy of Sciences has concluded that once oil and gas infrastructure are built in the Alaskan arctic region, it would be unlikely for that infrastructure to ever be removed or have the land be fully restored, as doing so would be immensely difficult and costly;
  • Anywhere that oil and gas drilling occurs opens up the threat of further environmental damage from oil spills, such as the recent BP oil leak on the North Slope of Alaska that was caused by thawing permafrost; and
  • Not only do the direct effects of drilling for oil in ANWR need to be considered, but the compounding effects of eventually burning that oil must also be weighed. The use of the oil contained underground in Alaska will only serve to increase the effects of climate change in the Arctic, where temperatures already rise twice as quickly as the world average. The shores of Alaska are ground zero for the effects of climate change, with melting sea ice and rising sea levels causing additional concerns for survival of both wildlife and human populations that call Alaska home. The most climate-friendly way to treat the oil underneath ANWR would be to leave it in the ground.

Question 5: Can we do better to just install renewable energy resources instead of drilling in ANWR? How much capacity in renewable sources would be needed? How would the costs of renewable installations compare with ANWR drilling?

Part 1: Can we just install renewable energy instead of drilling?

At the crux of the original question was whether the country would be better off if we diverted resources away from ANWR drilling and instead developed comparable renewable energy sources. While this question is rooted in noble intent, the reality of the situation is that it would not always work in practice to swap the energy sources one-for-one.

Looking at the way in which petroleum (which includes all oils and liquid fuels derived from oil drilling) was used in the United States in 2016 using the below graphic that is created every year by the Lawrence Livermore National Laboratory (a DOE national lab), we find that 35.9 quadrillion Btus (or quads) of petroleum were consumed. This massive sum of oil energy (more than the total primary energy, regardless of fuel type, consumed by any single country other than the United States and China in 2015) is broken down as 25.7 quads (72%) in the transportation sector, 8.12 quads (23%) in the industrial sector, 1.02 quads (3%) in the residential sector, 0.88 quads (2%) in the commercial sector, and 0.24 quads (1%) in the electric power sector. Meanwhile, the 28.5 quads of natural gas goes 36% to the electric power sector, 34% to the industrial sector, 16% to the residential sector, 11% to the commercial sector, and 3% to the transportation sector.

Source

(side note– I think this is one of the most useful graphics created to understand the U.S. energy landscape every year. I have it printed and hanging at my desk and if you are trying to learn more about the different energy types and relative sizes of the energy sector then I recommend this as a great graphic to always have handy)

Compare this breakdown with some of the non-fossil fuels:

  • 100% of wind power (2.11 quads) goes to the electric power sector;
  • 99% of hydropower (2.48 quads) goes to the electric power sector, with the rest going to the industrial sector;
  • 70% of geothermal power (0.16 quads) goes to the electric power sector, with the rest going to the residential and commercial sectors (using geothermal as a heat source as a direct substitute for the electric power sector); and
  • 58% of solar power (0.34 quads) goes to the electric power sector, while 27% goes to residential sector (in the form of residential solar generation or solar heating, essentially a direct substitute for the electric power sector), 12% goes to the commercial sector (also basically a direct substitute for the electric power sector), and less than 1% goes to the industrial sector.

We see that renewable energy sources are capable of displacing a large chunk of the electric power sector, particularly the types of renewable sources like wind and solar that could be installed in vast open land like the original question asked. However, the oil and gas resources that are the subject of the ANWR debate are largely not powering electricity generation, and as such renewable energy sources cannot easily displace most of the uses of the oil and gas.

The issue with thinking ‘why don’t we not drill and instead just invest in renewable energy’ is that in today’s world, there are lots of uses of energy that can only be served, or at least can only be served optimally, by oil products. For example, renewable fuel replacements for jet fuel are not very promising on a one or two generation timescale, and 43% of industrial heating applications require temperatures (above 750 degrees Fahrenheit) that cannot be met by electric means or renewable heating technologies. And regarding the millions of cars on the road, the most pervasive and entrenched oil use in daily life, the looming transition to electric vehicles is taking a long time for a reason– not the least of which is that gasoline’s energy density remains unmatched for delivering power in such a safe, economical, and space-efficient manner. Indeed, when analysts or journalists speculate about the world using up all of the oil, what they’re really talking about is the transportation sector, because other sectors already largely utilize other fuel types. So when considering where renewable energy can replace fossil fuels, it is important to note that the transportation sector and industrial sector are powered 95% and 72%, respectively, by oil and gas, and that there are sometimes technological, institutional, and infrastructure-related reasons for that which go beyond price and availability.

That said, we are experiencing a gradual shift of some energy uses away from fossil fuels– notably in the transportation sector– but many of these shifts will take time and money to convert infrastructure. Many continue to study and debate whether we’ll be able to convert to 100% renewable energy without the aid of fossil fuels (with some concluding it’s possible, others that it’s not), and if so, how far away we are from such an energy landscape. Even considering that it will take 10 years from the passing of legislation to the beginning of actual ANWR oil production, the American energy mix is only expected to change so much in the next few decades (see the Energy Information Administration forecast for renewable energy, natural gas, and liquid oil fuels below), and for better or worse fossil fuels look to be a part of that mix.

Source

The most significant area in which renewable energy can continue to make headway is the electricity generation sector, the sector most suited for renewables even though they only account for 17% of total generation as of 2017. In the meantime, though, fossil fuels like oil and gas will play a crucial role in the energy markets, and the potential windfall of resources lying readily underground will continue to be seen as valuable to oil and gas companies (though it is important to ask whether, in the midst of increasing availability of shale oil, the energy markets will need the ANWR oil or the oil companies will even want to gamble on such a risky and expensive play).

Part 2: But theoretically, how much renewable energy would need to be installed to account for the energy that would be extracted from ANWR?

All that said, though, for the sake of the academic exercise originally asked, let’s ignore the differences between fuel types and assume that by leaving all the oil and gas from the 1002 Area in the ground and instead installing renewable energy sources (i.e., wind and solar farms) we can extract the same amount of energy for the same needs.

The 2008 DOE report estimated between 1.9 and 4.3 billion barrels of crude oil would be extracted in a developed ANWR. This amount of oil can be converted to between 10.5 and 23.9 quads. The peak extraction according to the DOE report would end up being between 867 and 2,464 gigawatt-hours (GWh) per day.

The 1998 USGS survey pegged the technically recoverable natural gas at between 3.48 and 10.02 TCF, which converts roughly to between 3.48 and 10.02 quads. Because the DOE report did not break down how much of the technically recoverable natural gas would actually be economical to extract, we’ll assume for simplicity’s sake that it all will be extracted (there’s enough uncertainty in all of the USGS and DOE numbers that we need not worry about exactness, but rather make the estimates needed to get an order-of-magnitude answer). Without any estimates about the rate of extraction expected from the natural gas, we’ll make a very back-of-the-envelope estimate that it will peak proportionally with oil and reach a maximum rate of 274 to 990 GWh per day.

Adding the cumulative crude oil and natural gas extracted from the 1002 Area gives between 14.0 and 33.9 quads– an amount of energy that would find itself somewhere between the total 2016 U.S. consumption of coal (14.2 quads) and petroleum (35.9 quads). Adding the peak rates of oil and gas extracted from ANWR implies a combined peak of between 1,140 and 3,454 GWh per day (we’re again playing fast and loose with some natural gas assumptions here). This range of rates for the peak energy being pumped into the total U.S. energy supply will be the numbers used to compare with renewable energy, rather than the cumulative energy extracted.*

*The reason is that the peak rate is the best basis of comparison we have to the renewable nature of solar and wind. Why is that? At first glance it would seem that once the cumulative fossil fuels are used up, the installed renewables would then really shine as their fuel is theoretically limitless. However, that would be an oversimplification, as every solar panel and wind turbine is made from largely non-renewable sources and the technologies behind them have a limited lifespan (about 25 years for solar panels and 12 to 15 years for wind turbines). As such, every utility-scale renewable energy plant will need replacing in the future, likely repeatedly over the decades. So while the renewable energy sources will not dry up, it is still important to look at the sources on a daily or yearly capacity basis instead of cumulative energy production. Additionally, energy (whether oil or renewable energy) is not extracted and transported all at once; that process takes time. Because of this, energy markets center around the rate of energy delivery and not the cumulative energy delivery.
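(If you want to trace the unit conversions behind these figures, here’s a minimal sketch of the arithmetic. The assumed heat content of roughly 5.8 million Btu per barrel of crude– EIA’s standard approximation– reproduces the peak-rate figures almost exactly, though it puts the cumulative oil a bit above the 10.5 to 23.9 quads quoted earlier, which were likely computed with a slightly lower heat content.)

```python
# Rough reconstruction of the unit conversions above (all figures approximate).
# Assumptions: ~5.8 million Btu per barrel of crude oil, 1 quad = 1e15 Btu,
# 1 GWh ~= 3.412e9 Btu, and 1 TCF of natural gas treated as roughly 1 quad.
BTU_PER_BARREL = 5.8e6
BTU_PER_QUAD = 1e15
BTU_PER_GWH = 3.412e9

# Cumulative oil: 1.9 to 4.3 billion barrels (comes out slightly above the
# 10.5-23.9 quads quoted in the text, which used a lower heat content per barrel).
oil_quads = [bbl * BTU_PER_BARREL / BTU_PER_QUAD for bbl in (1.9e9, 4.3e9)]
gas_quads = [3.48, 10.02]   # TCF, taken as roughly equal to quads

# Peak oil rate: 510,000 to 1.45 million barrels per day.
oil_peak_gwh = [bpd * BTU_PER_BARREL / BTU_PER_GWH for bpd in (510_000, 1_450_000)]

# Gas assumed (as in the text) to peak in proportion to its share of total energy.
gas_peak_gwh = [o * g / q for o, g, q in zip(oil_peak_gwh, gas_quads, oil_quads)]

print("Peak oil:      {:,.0f} to {:,.0f} GWh/day".format(*oil_peak_gwh))    # ~867 to ~2,464
print("Peak gas:      {:,.0f} to {:,.0f} GWh/day".format(*gas_peak_gwh))    # ~274 to ~990
print("Combined peak: {:,.0f} to {:,.0f} GWh/day".format(
    oil_peak_gwh[0] + gas_peak_gwh[0], oil_peak_gwh[1] + gas_peak_gwh[1]))  # ~1,140 to ~3,454
```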

So given our target range of 1,140 to 3,454 GWh/day, how much solar or wind would need to be installed?

Solar

The reader who asked this question comes from prime solar power territory, so let’s start there. In 2013, the National Renewable Energy Laboratory (NREL) released a report on how much land was used by solar power plants across the United States. With regards to the total area (meaning not just the solar panels but all of the required equipment, buildings, etc.), the generation-weighted average land use was between 2.8 and 5.3 acres per GWh per year, depending on the type of solar technology used. Using the most land-efficient technology (2.8 acres per GWh per year, achieved with increasingly common equipment that tilts the solar panels to track the sun throughout the day), this amount of solar power would require about 1,166,000 to 3,530,000 acres, or about 4,700 to 14,300 square kilometers, of land.

Source

For reference, in the sun-bathed state of New Mexico, the largest city by land area is Albuquerque at 469 square kilometers. Given that, to equal peak potential oil output from the 1002 Area of ANWR would require solar power plant installations with a land area about 10 to 30 times greater than Albuquerque’s. With the whole state of New Mexico totaling 314,258 square kilometers, the amount of land required for solar installations would be between 2 and 5% of New Mexico’s entire land area (put another way, the lower end of the land-requirement range is the size of Rhode Island and the upper end of the land-requirement range is the size of Connecticut).
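(As a quick check on the solar land-use math, here’s a minimal sketch using the NREL figure quoted above; the conversion simply scales the daily target up to a year and multiplies by the acres required per GWh per year.)

```python
# Rough check of the solar land-use arithmetic above (a sketch using quoted figures).
ACRES_PER_KM2 = 247.1
ACRES_PER_GWH_PER_YEAR = 2.8            # NREL generation-weighted total area, tracking PV
target_gwh_per_day = (1_140, 3_454)     # peak ANWR-equivalent output from above

for gwh_day in target_gwh_per_day:
    acres = gwh_day * 365 * ACRES_PER_GWH_PER_YEAR
    print(f"{gwh_day:,} GWh/day -> {acres:,.0f} acres (~{acres / ACRES_PER_KM2:,.0f} km^2)")
    # ~1,165,000 acres (~4,700 km^2) and ~3,530,000 acres (~14,300 km^2)
```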

Wind

Wind energy is set to take over as the number one American source of renewable energy by the end of 2019, a trend that is likely to continue in the future. One reason for the increasing capacity of U.S. wind power in the electric power sector is its ability to be installed both on land and in the water (i.e., onshore wind and offshore wind). Depending on whether the wind power installed is onshore or offshore, the efficiency, cost, and land-use requirements will vary.

NREL also conducted studies of the land-use requirements of wind energy. For both onshore and offshore wind installations, based on the existing wind projects studied, the wind power generating capacity per area (i.e., the capacity density) comes out to an average of 3.0 megawatts (MW) per square kilometer. As with the solar power land-use requirements, note that this figure goes beyond the theoretical space required by the turbines themselves and includes all required equipment and land use, averaged across all projects.

Source

Operating at 100% capacity, that 3.0 MW per square kilometer would translate to 72 megawatt-hours (MWh) produced per square kilometer each day. However, utility-scale wind power does not operate anywhere near 100% capacity due to the prevalence of low wind speeds and changing directionality of winds, among other reasons. NREL’s Transparent Cost Database indicates that offshore wind operates at a median capacity factor of 43.00%, while onshore wind operates at a median of 40.35% capacity. Accounting for these figures, the output of offshore wind energy comes out to 31.0 MWh per square kilometer per day, with onshore wind energy averaging 29.1 MWh per square kilometer per day. To reach the 1,140 to 3,454 GWh per day from peak-ANWR-oil would thus require about 33,000 to 100,000 square kilometers of area for offshore wind energy and about 35,000 to 107,000 square kilometers of land for onshore wind energy.
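(The per-square-kilometer output figures follow directly from the capacity density and the capacity factors; a minimal sketch of that step:)

```python
# Derivation of the daily wind output per square kilometer quoted above.
CAPACITY_DENSITY_MW_PER_KM2 = 3.0        # NREL average across existing wind projects
HOURS_PER_DAY = 24
capacity_factors = {"offshore": 0.4300, "onshore": 0.4035}

for kind, cf in capacity_factors.items():
    mwh_per_km2_day = CAPACITY_DENSITY_MW_PER_KM2 * HOURS_PER_DAY * cf
    print(f"{kind}: {mwh_per_km2_day:.1f} MWh per km^2 per day")
    # offshore: ~31.0, onshore: ~29.1
```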

Using the same reference points as with solar, wind energy resources would require an area roughly between 71 and 228 times the size of Albuquerque, between 11 and 34% of the size of New Mexico, or a land-use requirement between the sizes of Maryland and Kentucky. It might seem jarring to realize just how much more land would be required for wind energy than solar energy, but multiple papers appear to support the notion that utility-scale wind energy requires as much as six to eight times more total land area than utility-scale solar energy on average. Indeed, the land use required by renewable sources is one of the biggest costs of these energy sources at this time. If we’re willing to accept nuclear power as a source of clean, though not renewable, energy, then that technology currently outperforms them all by leaps and bounds– requiring 7 to 12 times less land than the same amount of solar power. But obviously nuclear power comes with its own set of political and environmental challenges, furthering the sentiment that there is not one and only one energy source that will ever check all of the boxes and meet all of our needs.

Part 3: How would the costs of that scale of renewable energy sources compare with the previously discussed costs of drilling in ANWR?

Considering these results for the amount of land required by solar or wind energy resources to equal the peak oil and gas output of drilling in ANWR, the true scale of the potential energy resources underneath the Alaskan region really becomes clear. Further, it becomes clear just how difficult it would be to offset all of that potential energy by building utility-scale renewable energy generation. But the remaining question is how would the costs (both financial and environmental) of drilling in ANWR compare with the costs of the same capacity of renewable energy generation?

Source

 

Economically, the government (both state and federal) is set to profit from drilling in ANWR for two reasons: the area is government-owned, so the money oil companies pay to lease the land for exploration goes directly to the government, and the government would also take a royalty on the profits made from that oil (a revenue-raising method also set to be repeated in the sale of offshore drilling rights in almost all U.S. coastal waters). While renewable energy sources will always provide some degree of money to the government (e.g., through taxes), the land used for our hypothetical vast solar or wind farms would have to come from the sale of government-owned land to provide the same sort of revenue injection as drilling in ANWR. With wind power, at least, federal leasing for offshore wind farms has started to become somewhat common, though from 2013 to 2016 it generated only $16 million for the leasing of more than one million acres.

In terms of the noted benefits of helping U.S. energy trade by reducing the amount of oil that would need to be imported, the same can be said for a comparable amount of renewable energy– if that renewable energy is offsetting the import of fossil fuels, say for the electric power sector, then an equal effect on U.S. energy trade would be achieved.

In terms of the rough cost to generate that amount of renewable energy, we can estimate total costs based on the levelized cost of energy (LCOE), which compares different methods of electricity generation based on the costs to build, maintain, and fuel a plant over its lifetime. If we ignore the economic benefits that renewable energy sources enjoy from tax credits, the regionally weighted LCOEs of solar and wind power generation sources entering service in 2022 are $73.7 per MWh and $55.8 per MWh, respectively (compared with $96.2 per MWh for nuclear and $53.8 to $100.7 per MWh for natural gas, depending on the type of technology used). Compared with the total ANWR extraction cost of $123 billion, reaching the equivalent of 14.0 to 33.9 quads would cost roughly $300 billion to $730 billion with solar and roughly $230 billion to $550 billion with wind (again emphasizing the uncertainty in how much oil/gas is actually under ANWR as well as the very rough nature of these cost estimates). These numbers cover generation only and exclude the cost of transmission and distribution. It is also important to note that the costs of renewable energy technology are constantly decreasing and that these estimates ignore the current tax credits allotted for renewable energy installations.
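As a rough check on those generation-cost figures, here is a similar back-of-the-envelope sketch: it converts the ANWR resource estimates from quads to MWh and multiplies by the LCOE values quoted above. It ignores tax credits, transmission, and distribution, and is only as good as those very rough inputs.

```python
# Rough generation-cost estimate: ANWR-equivalent energy (quads) x LCOE ($/MWh).
# Inputs are the figures quoted in this post; ignores tax credits and T&D costs.

MWH_PER_QUAD = 1e15 / 3.412e6          # 1 quad = 1e15 Btu, 1 MWh ~ 3.412e6 Btu (~2.93e8 MWh/quad)
ANWR_EQUIVALENT_QUADS = (14.0, 33.9)   # low and high resource estimates
LCOE_DOLLARS_PER_MWH = {"solar": 73.7, "wind": 55.8}   # entering service in 2022, no tax credits

for source, lcoe in LCOE_DOLLARS_PER_MWH.items():
    low, high = (quads * MWH_PER_QUAD * lcoe / 1e9 for quads in ANWR_EQUIVALENT_QUADS)
    print(f"{source}: roughly ${low:,.0f} billion to ${high:,.0f} billion")
```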

While renewable energy sources are seen as more environmentally friendly because they are carbon neutral, there are some environmental effects that cannot be ignored. Any energy source that takes up land is potentially displacing wildlife and using water and other resources. Further, just because the energy source is carbon neutral does not mean that the manufacturing, materials transportation, installation, or maintenance of those renewable plants are without emissions. Solar cells are also known to use some hazardous materials in their manufacturing. Regarding wind energy, extensive studies have had to be conducted on the danger wind turbines pose to birds, bats, and marine wildlife, though the conclusions of those studies have largely been that the impacts on such wildlife are low. Large wind turbines have also raised some public health concerns regarding their sound and visual impact, but careful siting and planning can mitigate these concerns. So while the environmental effects of these renewable sources are not nonexistent, they do appear to be much more manageable and avoidable than those of drilling for oil and gas.

Source

Conclusion

Even with the caveat, necessary to repeat throughout this post, that all the numbers and calculations in this analysis are best-guess estimates and averages, much can be gleaned from looking at the results all together. Especially when you consider that the technologies involved for all of the discussed energy sources are constantly improving and that each can be optimized for a particular region (such as using solar energy in lieu of wind energy in particularly sunny areas), the answer to the energy questions facing the United States and the world is always going to be a strong mix of energy sources. There is no silver bullet, even among renewable energy resources; rather, heavy doses of appropriate renewable energy sources and nuclear energy will need to be mixed with the responsible use of fossil fuels for the foreseeable future. Since the United States is quite unlikely to go cold turkey on fossil fuels overnight, the continued supply of crude oil products is going to be necessary for the time being. And the potential costs of relying largely on foreign imports to meet that demand are going to be feared by government and industry leaders alike. As such, it can be no surprise that the massive resources of oil and gas underneath ANWR have been a continued focus of politicians and the oil industry for decades. None of that, however, dismisses the legitimate environmental concerns of opponents who object to sacrificing one of the last truly untouched areas of wilderness in the United States to the predominantly financial goals of drilling proponents; if the U.S. oil markets can indeed prosper without drilling, then that option needs to be seriously considered.

The debate over whether or not to drill in ANWR is surrounded by uncertainty, along with passion on both sides, so the answer of what to do is not clear cut to many. The best thing you can do is educate yourself on the issues (I highly recommend a thorough read of the links in the ‘sources and additional reading’ section, as so much has been written about this topic that there is an unbelievable amount of information to learn) and stay informed as the debate evolves. Like it or not, drilling in ANWR is an inherently political issue, and that affords all U.S. citizens the right, even the duty, to take their informed opinions and be active with them: call your Congressional representatives, join the debate, donate to action groups. While the opening of ANWR land for leasing to oil companies in the recently passed tax bill was the most significant action in this policy debate in years, the lengthy nature of the legislative and leasing process assures that the matter is anything but settled.

Sources and additional reading

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Petroleum Administration for Defense Districts (PADDs): Past and Present

If you’re an energy-statistics nerd (which you probably are if you’ve found your way to this blog), you’ve no doubt seen various regional data expressed by PADD, or Petroleum Administration for Defense District. Referring to barrels of oil sent from one PADD to another, or to which PADD uses certain fuel types for home heating, provides a useful shorthand for regions of the United States and their energy-related statistics. Many people who come across the PADD system might already understand PADDs to be a bygone classification system from the country’s fuel rationing days, but most people’s understanding stops there, and the history of PADDs is not explored any further.

That’s where this article comes in! This piece will serve to explain what the PADDs are, where they originated, how they evolved over the years, and how they are relevant today.



What are PADDs?

Petroleum Administration for Defense Districts, or PADDs, are quite simply a division of the United States into regional districts, which are laid out as follows.

PADD 1 is referred to as the East Coast region and, because of its size, is further divided into three subdistricts:

  • PADD 1A, or New England, comprises Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont;
  • PADD 1B, or Central Atlantic, comprises Delaware, the District of Columbia, Maryland, New Jersey, New York, and Pennsylvania; and
  • PADD 1C, or Lower Atlantic, comprises Florida, Georgia, North Carolina, South Carolina, Virginia, and West Virginia.

PADD 2 is referred to as the Midwest region and comprises Illinois, Indiana, Iowa, Kansas, Kentucky, Michigan, Minnesota, Missouri, Nebraska, North Dakota, South Dakota, Ohio, Oklahoma, Tennessee, and Wisconsin.

PADD 3 is referred to as the Gulf Coast region and comprises Alabama, Arkansas, Louisiana, Mississippi, New Mexico, and Texas.

PADD 4 is referred to as the Rocky Mountain region and comprises Colorado, Idaho, Montana, Utah, and Wyoming.

PADD 5 is referred to as the West Coast region and comprises Alaska, Arizona, California, Hawaii, Nevada, Oregon, and Washington.

New PADDs

There are also two additional PADDs beyond the original five that rarely get mentioned, likely because they are much newer and the volume of oil products going into and/or out of them is minimal compared with the rest. Despite a mention of them in the Energy Information Administration‘s (EIA) write-up of the PADD system, PADDs 6 and 7 (meant to cover U.S. territories around the world) do not have data included in the prominent, publicly facing EIA data sets. However, some digging shows that PADD 6 was added in 2015 in order to properly report needed information to the International Energy Agency and comprises the U.S. Virgin Islands and Puerto Rico, while PADD 7 includes Guam, American Samoa, and the Northern Mariana Islands. You will commonly find sources citing just five total PADDs, but don’t let that throw you off. Simply impress those you meet at energy cocktail parties by memorizing which territories are in PADDs 6 and 7.

Origin of PADDs

The federal government first established the regions that would become the five PADDs during World War II. Specifically, the Petroleum Administration for War was established as an independent agency by Executive Order 9276 in 1942 in order to organize and ration the various oil and petroleum products to ensure the military had all the fuel it needed. Part of that organization process was the establishment of these five districts as a tool for that goal. The Petroleum Administration for War ended in 1946 after the war efforts were over, but these five original districts were quickly reestablished by the successor Petroleum Administration for Defense that was created by Congress in 1950 in response to the Korean War. This Administration provided these districts with the name Petroleum Administration for Defense Districts.


Source

Changes over time

As stated, the original function of the PADDs was to ensure the proper distribution of oil supplies during World War II. In fact, the government made use of the PADD system to redirect oil resources to specific PADDs in response to Nazi attacks on U.S. tankers. These oil distribution efforts were the largest and most intricate such efforts yet, leading to the realization that interstate pipelines would soon be necessary to connect oil refineries with distant U.S. markets. But once World War II ended, the government determined there was no more need for the Petroleum Administration for War, and the districts were retired along with the Administration.

After the Petroleum Administration for Defense revived the five districts, they fell under the management of the Department of the Interior’s Oil and Gas Division, with the continued function of ensuring that the oil needs of the military, government, industry, and civilians of the United States were met. As with the Petroleum Administration for War, the Petroleum Administration for Defense was short-lived and was abolished just four years later by the Secretary of the Interior’s Order 2755 in April 1954. Even though the government agency was eliminated, the names and organization of the various PADDs have continued to be used ever since.

One significant change over the history of PADDs that is important to note is that there is no present-day ‘official’ government keeper of the districts. While the PADDs served an official function, and thus had official definitions set out by government agencies, during World War II and the Korean War, that is no longer the case today– but that does not mean they are no longer significant. Within the Department of Energy (DOE), EIA uses the PADDs extensively in its aggregation and dissemination of data (discussed in more detail next). Further, government agencies have defined PADDs for use within specific regulations. For example, the Environmental Protection Agency (EPA) codified PADDs in the Code of Federal Regulations (CFR) when regulating motor vehicle diesel fuel sulfur content, specifying benchmarks and reductions to be met PADD-wide (though it explicitly dictates that the definition applies only as codified for that specific regulation), and again in reporting requirements for fuel additives so that the data get published by PADD.

Use of PADDs today

With the government out of the business of rationing oil and petroleum since the end of the Korean War, the PADDs have found a new purpose. The same PADDs have survived to allow analysis of data and patterns of crude oil and petroleum product movements within (and outside of) the United States. Using these PADDs, government and industry players can be sure they are using the same regional groupings of states and the same shorthand language to analyze and spot trends within regions, instead of being confined to looking at the nation as a whole or analyzing on a state-by-state basis.

Further, the PADDs are separated in a way that makes analysis straightforward. For example, following the crude supply in PADDs 2 and 3 is most important to crude prices because those districts contain the largest number of refineries. Heating oil demand is mostly concentrated in PADD 1, making that the region to watch when investigating heating oil prices. Additionally, the language of PADDs enables quick insights into data, such as EIA noting the impact of Hurricane Harvey on the flow of propane from PADD 2 to PADD 3, or detailing how PADD 1C needed to supplement its gasoline inventories with foreign imports when an accident shut down the pipeline that typically supplies the area with gasoline from PADD 2.

Examples of trends, statistics, and PADD characteristics

There are plenty of other examples of the usefulness of dealing with oil-related data by PADD. A common example is to delineate where different PADDs receive their oil from. For example, with the knowledge that almost half of U.S. refining capacity is on the Gulf Coast (i.e., PADD 3) while less than 10% of refining capacity is on the East Coast (PADD 1), even though PADD 1 contains about one third of the U.S. population, an obvious conclusion is that there must be a lot of inter-PADD oil shipments every day. In fact, about half of the oil consumed every day in PADD 1 is supplied from PADD 3 via pipeline, rail, truck, and barge.

Going further, much of the commonly distributed data from EIA (click here to learn about the vast data available from EIA and how to navigate it all) utilizes PADDs, letting you look at figures such as crude oil movements, inventories, imports, and refinery activity by district, and much more.

So hopefully the next time you read a table from EIA that deals with oil movements specific to PADD 3, or read a news article citing the disruption of a pipeline that serves PADD 1, this article will come to mind and you’ll be better equipped to speak to it– and remember to try to win some bets with your knowledge of the seldom-mentioned PADDs 6 and 7!

Sources and additional reading:

Crash Overview of U.S. PADDs and Why They’re Important

Do You Know What the PADD Does for the Oil & Gas Industry? Croft Production Systems

PADD regions enable regional analysis of petroleum product supply and movements: Energy Information Administration

Records of the Petroleum Administration for War: National Archives

Refined Products Connection: Know Your PADD

 

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Solar Power and Wineries: A Match Made in Heaven…and California

As the amount of power generated from solar energy continues to rise in the United States, more and more businesses are realizing the benefits of utilizing solar energy on their own properties. This type of small-scale solar generation is rising across the industrial and commercial sectors, and nowhere is it more prevalent than in California, home of 43% of the nation’s small-scale solar output in 2016. California also leads the nation in another crucial area– wine production! If California were its own country, it would be the fourth-largest producer of wine, accounting for 90% of wine produced in the United States.

Seeing as California tops the list in both solar power and wineries, it only makes sense that vineyards in the state have been rapidly adopting the renewable energy source on their properties. Exactly how much solar power is being captured at these wineries, and which wineries are doing the most to implement solar systems? This article will answer those questions. Also, I’ll be the first to admit that I’m more of a beer drinker than a wine connoisseur (see this write-up on which breweries use the most renewable energy), but the last part of this article outlines a California wine road trip hitting the top 10 wineries by solar energy capacity, one that already has me looking at flights to the West Coast.



Why solar and why California wineries?

Many wineries across the country and the world, not just in California, have realized the benefits of solar power and installed solar systems to meet part or all of their energy needs. For example, Lakewood Vineyards in New York, Tenuta delle Terre Nere in Sicily, and Domaine de Nidolères in France have all installed solar power systems on their wineries.
But this article focuses just on the wineries with solar power in California, as it is the region best endowed with the scale, climate, and policy to really promote both the solar and wine industries.

Solar power in California

California is not the only state embracing solar power at breakneck speed, but there are a number of reasons why the state was always primed to become the nation’s leader. California so tops the United States as a solar energy generator, in fact, that it has had to pay other states to take the excess generated power off its hands. California’s dominance in solar power can be attributed to a number of factors.

Wineries in California

California is obviously also not the only state in the wine business, but it completely dominates the U.S. wine industry in terms of volume of wine produced, as well as reputation for quality. Not only does 90% of total U.S. wine come from California, but according to many experts California wine is at its highest stature in quality ever. The modern boom of the California wine industry has a number of causes.

Putting the solar and wine industries together

When you look at the massive advantages California has when it comes to cultivating both a solar power sector and a wine industry, having the two fields overlap throughout the state appears to be an obvious marriage. Fortunately, the integration of solar power into winemaking is a natural fit.

With California being such a hospitable region for both solar power and wineries, the logical question becomes how the two can be combined into a symbiotic and fruitful relationship. Wineries have been installing and taking advantage of solar power for years now due to the various benefits it provides the winery business. Fetzer Vineyards has run on 100% renewable energy since 1999, while Shafer Vineyards has fulfilled all of its energy needs with solar power since 2004.
In terms of why solar power works so well as an energy source at wineries and related facilities, there are a number of reasons. For one, solar panel technology is at its most efficient at about 77 degrees Fahrenheit and can absorb sunlight even on cloudy days– the warm, temperate climate that optimizes solar technologies also happens to be the right weather in which to grow wine grapes. Beyond that, wineries are operations that typically have a large footprint, making it easier to find space on roofs or in fields for solar panels compared with non-agricultural industries. This abundance of space for solar panels means that the energy gathered from the sun can be used to power all sorts of winery facilities– the primary residence, workshops, tasting rooms, offices, industrial equipment, and more.
Not only does solar work better at wineries than in many other industries, but it also provides some unique benefits to the wineries that go out of their way to install solar power systems. The technology itself is reliable for extended periods of time (warranties last 20 to 25 years, while the service life is 40 to 50 years), with economics good enough that wineries can earn a 20% return on their investment in solar panels. In fact, the solar power haul at some wineries can sometimes be even more than is needed to run the winery, allowing these lucky business owners to sell the excess back to utilities (though this type of net metering finds itself the subject of heated policy debate these days). Because of this, the technology is even being developed for on-site microgrids designed for self-consumption, load shifting, and peak shaving.
Beyond all that, those who work in the wine business have a personal stake in increasing the use of renewable energy sources in order to reduce the greenhouse gas emissions that are causing climate change. Wine grape vines are very sensitive to the changes in temperature that climate change would bring, and all agricultural businesses face difficulty from extreme weather and droughts. The recent wildfires in California (which become more likely as climate change continues) show the devastation such fires can cause to the wine industry. It behooves the wine industry to embrace clean technologies wherever and whenever possible.

List of California wineries using solar power

Because of all these stated advantages, California wineries are absolute leaders in embracing solar technology. After extensive research and reaching out to individual wineries, I’ve put together the below list of 132 wineries across the state taking advantage of solar power. The capacity of these solar systems ranges from 2 kilowatts (kW) to well over 1 megawatt (MW), showing that all sizes are options depending on the level of commitment a winery is ready to make. Taken together, these wineries have a total peak solar capacity of 27.8 MW– a greater solar capacity than that of the entire electric power industry in 15 different states as of 2015!
So if you’re like me and you have a difficult time at the wine store knowing what wine to buy because you don’t really know what to look for, you can now keep this list handy to support a winery that incorporates clean and renewable solar energy into its operations!
It’s worth noting that there are sure to be plenty of California wineries using solar power that are not included in the above table. Any winery listed in one of the cited resources as having an installed solar system, but without a stated capacity, was not included in the list, as these capacities are crucial to the later analysis in this article (this includes any wineries I reached out to but didn’t hear back from). There are also surely wineries that are using solar but don’t advertise it anywhere, or that do advertise it and my search failed to find it. If you’re aware of any wineries that should be included on this list but are not, please leave a comment below!

Quality and price of wines from California solar wineries

Beyond just finding and ranking the capacity of solar energy systems at various wineries, I thought it would be interesting to take each solar winery and compare them based on a noteworthy wine they produce. With that in mind, each solar winery in the previous list was paired up with the best wine it has (according to the top rating a wine of theirs received from Wine Enthusiast Magazine) along with that wine’s rating and price (both also according to Wine Enthusiast Magazine). That process led to the below table (note that some wineries from the first list are not included in this list because none of their wines showed up in Wine Enthusiast Magazine’s ratings).

 

It’s hard to really glean anything by looking at that in list form. Instead, we can take that list and look graphically at the solar capacity of a winery versus the rating of its best wine:
The same can be done to compare the solar capacity of a winery and the cost of its best wine:
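For anyone who wants to recreate these comparisons from the tables above, a minimal sketch is below. It assumes the winery data has been exported to a hypothetical CSV file (solar_wineries.csv) with columns for winery name, solar capacity, best-wine rating, and best-wine price; the file name and column names are placeholders rather than part of the original data set.

```python
# Minimal sketch: plot each winery's solar capacity against the rating and price
# of its best wine. The CSV file and its column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("solar_wineries.csv")   # columns: winery, solar_kw, rating, price

fig, (ax_rating, ax_price) = plt.subplots(1, 2, figsize=(10, 4))

ax_rating.scatter(df["solar_kw"], df["rating"])
ax_rating.set_xlabel("Solar capacity (kW)")
ax_rating.set_ylabel("Rating of best wine")

ax_price.scatter(df["solar_kw"], df["price"])
ax_price.set_xlabel("Solar capacity (kW)")
ax_price.set_ylabel("Price of best wine ($)")

fig.tight_layout()
plt.show()
```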

Looking at these graphical representations, you can see that it’s not just niche wineries that are embracing solar energy. Wines at every price point and every level of sophistication and repute come from wineries with solar installations, both large and small in capacity. The solar capacity of a winery says nothing about the wine produced there– the installation of solar cuts across all sorts of vineyards. This suggests there is no reason solar power cannot continue to spread to new wineries and expand in capacity at wineries that already have it.

Where are solar wineries located in California?

Another interesting data point for each of these wineries is the region of California in which they sit. Separating the various areas of California into wine regions is sometimes a bit of a tricky exercise, with some well-known regions being sub-regions of others, some gray areas, and different wine region names being used depending on the resource referenced. For the sake of this exercise, I will be using the following five main wine regions of California (recognizing they can and often do get broken down even further into smaller regions):
  • North Coast
  • Sierra Foothills
  • Central Coast
  • Central Valley
  • South Coast
These five regions are found in the following maps:

Source 1 Source 2

Before analyzing each region as a whole, the below graphic shows each city/town in California where the cumulative solar capacity at wineries is above 500 kW. The size of each circle is proportional to the total capacity. Using this visualization, you can already see that the most solar capacity is concentrated in the North Coast and Central Coast.
If you then total up the capacity for each of the five major wine regions in California, you get the following graph:
This could be a misrepresentation of how dedicated each region is to solar, however, as the regions are not all the same size. It could just be that the North Coast has the most wineries (which it does), while a lower percentage of them are utilizing solar. To test this, the total solar capacity of wineries in each region is divided by the total acreage of planted wine grape vines in that region:
The result is that the North Coast is still the region with the greatest concentration of solar capacity per planted acre, still followed by the Central Coast (though it’s a more distant second), and then the Sierra Foothills get a boost (while still remaining in third place). In either graph, the Central Valley and South Coast lag way behind in fourth and fifth, respectively.
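The region-level graphs work the same way: sum the capacity by region, then divide by planted acreage. Below is a minimal sketch of that normalization, again assuming hypothetical CSV files (the winery list with a region column, plus a separate file of planted acreage per region); the file and column names are placeholders, not the actual data behind the graphs above.

```python
# Minimal sketch: total winery solar capacity per wine region, normalized by planted
# wine-grape acreage. File names and columns are hypothetical placeholders.
import pandas as pd

wineries = pd.read_csv("solar_wineries.csv")              # columns: winery, region, solar_kw
acreage = pd.read_csv("region_acreage.csv",               # columns: region, acres_planted
                      index_col="region")["acres_planted"]

total_kw = wineries.groupby("region")["solar_kw"].sum()   # raw capacity by region
kw_per_acre = (total_kw / acreage).sort_values(ascending=False)

print(total_kw.sort_values(ascending=False))
print(kw_per_acre)
```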

Road Trip

The last piece of analyzing the solar wineries in California I wanted to look at was putting together an epic road trip of California wine country that enables you to hit up the wineries in the state that use the most solar power. Thanks to Google Maps, I was able to find a route that takes you across 372 miles over the course of 6 hours and 47 minutes and visits the top ten wineries in terms of solar power capacity. If you’ve always wanted to tour the best wineries and vineyards that California has to offer, but didn’t know where to start, then look no further!
The first day of the trip can take you to Meridian Vineyards, Estancia Estates Winery, and Carmel Road Monterey with only a bit over two hours of driving total, enabling you to see over 3 MW of solar powered winery. On the next day, after driving about three hours to get to the next batch of wineries, you’ll find yourself at the remaining seven wineries– total capacity exceeding 7 MW– that are within an hour and a half total drive from each other.
If you’re interested in driving this solar winery route (or maybe paying someone to drive you on this winery route– it is TEN wineries, after all), see the Google Maps route linked below.
 

Sources and additional reading

Solar Energy in the Winemaking Industry: Green Energy and Technology (preview of the book here, link to purchase the book here)
About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Advice for Effective Public Comments in the Federal Rulemaking Process

Having spent a few years earlier in my career entrenched in the rulemaking process behind a number of regulations from the Department of Energy concerning appliance standards, I am able to empathize with the teams of analysts at federal agencies that are tasked with receiving and addressing the feedback that comes in during public comment periods. During every rulemaking process, there are real humans reading every single public comment received (even when those comments number in the thousands), cataloging the specific concerns from the stakeholders, conducting research and analysis regarding the points that were brought up, and ultimately responding to those comments– either by detailing why the existing analysis already addresses the comment or, if the stakeholder comment has successfully done its job, adjusting the analysis during the next round of the rulemaking to account for the issues brought up in the comment.

While submitting a comment in response to the federal rulemaking process can seem intimidating, the truth is that every rulemaking receives comments from every sort of stakeholder, large and small, with the widest range of expertise on the topics possible (see the previous article on how the rulemaking process works and what the function of the public comment period is here). Those involved in the regulatory process know to expect multi-page comment submissions with loads of data and testimonials from powerful trade associations or advocacy groups, but it is also common to receive more pointed and specific comments from concerned private citizens who don’t have any experience in the relevant industry but simply have their own opinions and concerns. The beauty of the public rulemaking process, however, is that every single comment, no matter who submits it, must be summarized in the next step of the rulemaking, along with a response as to how the new analysis addresses the concerns. With that in mind, regardless of whether you are representing a larger organization or just your personal interests as a citizen, what follows are six methods you can employ to ensure your comment most effectively influences the federal rulemaking process.



1. Be accurate

This piece of advice should go without saying, but rest assured I have found that it needs to be said. If a comment submitted to a federal agency is found to have a basic inaccuracy in it, then the rest of the comment on that topic will be called into question and can carry less weight. An underlying inaccuracy will make responding to, or dismissing, the whole comment all too easy. So while it may be overly obvious: if you hope to make an impact on a regulatory rulemaking, be sure to verify the accuracy of everything you say.

 

2. Be specific with issues and provide alternatives

If you want your comment to be addressed specifically in the analysis, be sure to include specifics in the comment. Don’t say that something would be detrimental to businesses– state exactly what the detriment would be and why. Don’t state that a discussed technology would not be technologically or economically feasible– state what technology would be feasible and note what exactly is preventing the original technology from being so. Don’t state that a pricing analysis is unrepresentative of the market– describe how and why the analysis is off.

The point is that if the federal agency is given only a vague reason for why the analysis is ‘bad’ or ‘off,’ with no specifics, then there is nothing tangible to address. The rebuttal to a non-specific comment can simply be to restate the original analysis and the reasons behind it. However, if specific reasoning and an alternative are provided, then you are giving the federal agency something meaty to address. The subsequent analysis must either move towards your alternative or give details about why that alternative is incorrect. And if your alternative is airtight, with no holes to poke in it, then you will likely find success in shifting the analysis behind the rulemaking.

 

3. Address the issues the rulemaking asks about– but don’t be restricted to those topics

When reviewing a rulemaking document, whether in the early stages with a Request for Information (RFI) or later during the Notice of Proposed Rulemaking (NOPR) stage, you will often find specific issues called out on which the agency behind the rulemaking is seeking comment. These issues are numbered for ease of finding them, and sometimes (but not always!) listed in a single place at the end of the notice. If you do not see a list at the end of the notice, be sure to go through the document carefully to find them all in-line, where they’ll appear as in the example below.

Source

When the agency points out specific issues on which it requests comment, that shows where a comment might have the most impact. These are the issues on which the agency might have the least information (or has information it recognizes is outdated), or where it recognizes there is considerable debate. Regardless of the reason, all comments on each of these numbered issues get aggregated to form a clear picture of the available information and data before a decision on the direction of the rulemaking is made (though it is important to note that the decision is not made by what received the most comments; rather, the accuracy and quality of comments outweigh the quantity of comments received on an issue). If your position on the rulemaking relates to any of these specifically identified issues, make sure to frame your comment in direct response to the question asked (it even helps to note by number which issue your comment is addressing).

With all of that said, you should not feel that the identified issues are the only ones eligible for response or that the agency will not put equal weight behind comments regarding other aspects of the analysis. You might have a comment or information on a topic on which the agency wasn’t focused or didn’t realize was controversial. So while it is important to fit your comments into the box of the issues identified by the notice when they are relevant to those issues, do not feel restricted to those topics. You might be the only one to bring up a new issue, influencing the next stage of the rulemaking to address it more specifically.

 

4. Include hard data

The best way to back up your comment and encourage a specific response in the next stage of the analysis is to include your own data as evidence. Perhaps you think this data was overlooked by the original analysis, or maybe you think the data that was included does not tell the whole story. Either way, if your data supports a change in the analysis and a different conclusion, then providing the full set of that data in your public comment is the best way to influence the rulemaking. Doing so will force the next stage of the analysis either to include that data (thus shifting the course of the analysis towards your desired outcome) or, at the very least, to refute your data.

 

5. Include sources

As with including your own hard data, providing evidence for your points is crucial to an effective public comment. A comment that boils down to pure subjective opinion is unlikely to be effective, but if you can demonstrate your points with sources– e.g., scientific studies, experimental results, industry information, or market analysis– then the comment will make a bigger splash. The more you can ‘show’ your point rather than ‘tell’ it, the more substance and weight your comment will have.

 

6. Offer to follow up

An under-utilized strategy with regard to public comments in the rulemaking process is making yourself available to the federal agency. The public comment stands on the record as a written statement of your thoughts and concerns on the rulemaking, but in commenting you can also offer to discuss the points further with the agency pursuing the rulemaking. Doing so may result in you being interviewed in the next fact-finding stage of the analysis, or you might be invited to the next public meeting on the rulemaking to discuss your concerns further. These conversations can be the most valuable tool for really getting your point across and making sure the agency understands the basis of your viewpoint. Written comments allow for only a single exchange between commenter and agency, but conversations allow for the full back-and-forth required for complete understanding between the parties.

 

Conclusion

While there is no guarantee any single public comment will change the course of a particular rulemaking, if you follow these six guidelines then there is a greater chance that your comment will be well-received by the agency and carry the weight of consideration it deserves. If you have any additional questions on this process, don’t hesitate to reach out in the comments below or by contacting me directly.

Additional Reading

A Guide to the Public Rulemaking Process: Office of the Federal Register

Frequently Asked Questions: Office of Information and Regulatory Affairs

Notice and Comment: Justia

Notice-and-Comment Rulemaking: Center for Effective Government

Policy Rulemaking for Dummies

Rulemaking Process and Steps to Comment: The Network for Public Health Law

Tips for Submitting Effective Comments: Regulations.gov

 

 

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Policy Rulemaking Process for Dummies

Calling this article “for dummies” is tongue-in-cheek, because the inner workings of government and the development of public policy are shrouded in mystery for most people. However, this mystery does not persist because the process is too difficult for the average person to understand (my rant about how foolish it is that this part of the policy process is not taught in middle schools or high schools is for another day). In fact, the beauty of the rulemaking process is that it is designed to engage those outside the government world.

After getting personally involved in the rulemaking process to determine energy efficiency standards for various electronic products, I learned what occurs behind the curtain for these federal energy regulations– just how involved the process of determining these regulations is, how many different parties come into play, and how grounded in data, testing, analysis, and public feedback the final regulations are. Many resources exist that explain the whole rulemaking process in more detail and completeness than I will, and a few of those resources can be found at the end of this post, but as someone who spent several years on the inside I will provide a brief overview of the process and a few insights I picked up along the way.



What is a Rulemaking?

A rulemaking is the process that is mandated for creating federal regulations, including the analysis of the effects of a potential regulation and the solicitation of public feedback along the way. Rather than having lawmakers themselves create the specific regulations for certain topics, Congress instead authorizes federal agencies to dive into the details and research, analyze, and dictate the final details of those rules. The regulations produced at the end of a rulemaking have all the effect of a law, and all existing federal regulations are listed in the Code of Federal Regulations (CFR). For anyone looking to find out the particulars of any federal regulation, the CFR is the repository to reference. Federal regulations cover a broad range of topics—from energy to telecommunications to patents and more, with these topics being listed in the CFR’s Table of Contents.

The Beginning of the Rulemaking Process

While the federal agencies, such as the Department of Energy (DOE) or the Environmental Protection Agency (EPA), are the main entities that control the rulemaking process, no regulations can be issued without proper statutory authority being first granted. Even though the regulations posted in the Federal Register (FR) are attached to Executive Branch agencies, the authority to issue these regulations comes from Congress. Each regulation proposed and ultimately issued has an authority section somewhere in the beginning so the reader can trace its history and why it was initiated.

Two types of authority—Left is where Congress passed a law to initiate a specific rulemaking process (DOE’s regulation of the energy efficiency of metal halide lamp fixtures, 79 FR 7746); Right references the broad authority granted by Congress to regulate certain areas (EPA’s regulation of air pollutants, 82 FR 39712)

The Congressional authority for a rulemaking can either come from the law that first created the federal agency and dictated which areas it had jurisdiction to regulate (such as the above right, where EPA references the authority to regulate air pollutants from the general powers granted to EPA by Congress), or Congress can pass a law that specifically directs an existing agency to go through the rulemaking process and set regulations for a particular topic of interest (such as the above left, where DOE references the authority from a law that Congress passed instructing DOE to establish energy conservation standards for certain appliances by a given date).

Stepping Through the Rulemaking Process

Make no mistake about it—the rulemaking process for federal regulations is long and in-depth. Nothing is done haphazardly, and the people behind it conduct extensive fact-finding and analysis. The amount of cumulative effort that goes into regulating, for example, the energy efficiency of a lightbulb or ceiling fan is mind-boggling. While the process behind each rulemaking can differ depending on the regulation’s history, complexity, urgency, importance, or politics, the generally expected process is outlined as follows:

Notice of Proposed Rulemaking

The Notice of Proposed Rulemaking (abbreviated as either NOPR or NPRM) is often the first official document published that announces the beginning of the rulemaking process. Included in the NOPR are a preamble detailing the goal of the rulemaking, the authority granting the agency the power, and the relevant dates and contact information for the rulemaking; a supplemental information section that discusses the initial framework, background data, preliminary analysis, and merits of the proposal; and a preview of what the regulation language would look like in the CFR. The NOPR is not the final regulation, but rather serves to inform the public about the initial findings of the analysis based on the preliminary information collected and to provide public stakeholders the opportunity to give feedback on those findings (more on that later).

There are a few exceptions where the NOPR won’t be the first notice from an agency regarding the rulemaking process:

  • An agency might receive a petition for rulemaking from an interest group or member of the public, making the case for why a specific regulation is needed. The agency might then publish that petition in the FR to solicit comments on whether a rulemaking on that topic should be pursued.
  • Alternatively, for a particularly complex or critical rulemaking, an agency may choose to publish a preliminary document in the FR, such as an Advanced Notice of Proposed Rulemaking (ANOPR) or a Framework Document. The goal of publishing either of these documents would be to solicit public feedback earlier in the information gathering and analysis to ensure the initial framework set up for the analysis is headed in the right direction. Neither of these documents is mandatory, but when a potential regulation might have additional complications, the use of these early publications ensures those issues can be addressed thoroughly.
  • Lastly, there are times where an agency initiates a negotiated rulemaking. When this happens, the agency will invite the stakeholders and major players to meetings to try and reach an agreement on the terms of a proposed rule. These meetings will include representatives from multiple viewpoints on the topic, and if a consensus can be reached then the agency may endorse those terms as a basis for the proposed rule.

Comment Period

After a NOPR (or earlier preliminary document) is published in the FR, the agency will request comments from the public during an official comment period. These comments can be either in agreement or in opposition, and they can pertain to the rulemaking generally or to a specific part of the analysis. The typical comment period will last 30 to 60 days, though it can vary. More complex rules might have longer comment periods to allow stakeholders enough time to digest and respond to the proposed rulemaking, or the public can even request that the comment period be extended if there are extenuating circumstances (though if the agency does not find good reason to do so, it does not have to grant the request). Additionally, if the agency finds that the comments received were not of the type and quality needed to move forward with the next stage of the analysis, the comment period can be re-opened. The agency might also find that the initial round of comments brought up new and complicated issues that require further public comment. In these instances, the agency can open a second comment period to allow reply comments on the newly arisen issues, or alternatively the agency might publish a second NOPR instead of moving on to the Final Rule.

What is most important to remember about the comment period is that this is one of the best opportunities for you, as either a private citizen or a member of an organization, to directly impact and influence the regulations that will affect you.

Final Rule

After the NOPR stage is complete, the agency will ultimately publish a Final Rule in the FR. The format of the Final Rule will look very similar to that of the NOPR, with the same general sections and analyses. However, the ‘Dates’ section will no longer dictate when the comment period will close—rather, it will indicate the date the new regulation becomes effective (generally within 30 days of publication). The Final Rule now represents the new law of the land and will include a section detailing what changes need to be made to the CFR (as well as the effective date)—these changes can be a whole new section of the CFR, the removal of existing sections, or piece-by-piece edits to CFR text. This changing of the CFR text is the final step in the rulemaking process.

Congressional Review

In accordance with the Congressional Review Act, all Final Rules are subsequently reviewed by Congress. If both the House and Senate pass a resolution of disapproval of a regulation within 60 days (without a Presidential veto), the Final Rule becomes void and cannot be republished in its existing state. Such overturning of Final Rules is typically uncommon, however, as Congress successfully exerted this power only one time from the Act’s inception in 1996 through 2016 (though the unique political climate in 2017 led to the Congressional Review Act being successfully invoked 14 times, leading some to debate the merits of retaining this Congressional power and prompting a bill in the Senate to repeal it).

Despite the Congressional Review Act, the role of Congress in the rulemaking process is typically to simply grant the regulatory powers to the agencies, leaving the details and analysis to the experts employed by the agencies.  It would be naïve to think, however, that the final direction ultimately chosen at the end of a rulemaking was not influenced by politics. After analyzing and presenting all the facts on the table, the final direction resides with the priorities and policy preferences of the leaders of the agency and, by extension, the Executive Branch.

How you can participate

As stated earlier, one of the key components of a federal regulation that separates it from a law is the built-in mechanism to solicit feedback from the public. There are a few different ways this feedback is collected, each with its own advantages.

Public Comments

As mentioned earlier, after each NOPR, private citizens and interest groups alike are engaged to comment on the proposed rule, enabling them to directly affect the final regulations in a way not typical of all public policy. There are several primary goals of collecting these public comments:

  • They give citizens, interest groups, companies, and any other affected group the opportunity to voice their position on the potential rule and how it might affect them, providing information the agency might not have been able to gather on its own;
  • They help the agency to improve the final regulations by considering this previously undiscovered information or vetting the information it did gather; and
  • They reduce the likelihood that stakeholders find issue with the regulations and bring those complaints to the courts.

To accomplish these goals, the continued engagement (both formal and informal) of the public is critical. Those who are submitting comments, though, should take note that the process is not dictated by which position receives the greatest volume of comments. Rather, the content of each comment is added to the public record of the rulemaking along with the data, expert opinions, and other facts. The comments are your opportunity to convince the agency that there is additional data to consider or new arguments to address. These comments can shift the direction of the rulemaking if they are factual, demonstrable, and convincing, as all comments made on the public record are mentioned and specifically addressed in the subsequent publication stage—either agreeing with them or presenting reasons to refute them. Later, I plan on writing a blog post that will give some tips and tricks on how to make public comments as effective as possible at influencing the rulemaking (update: read the blog post on how to make effective public comments here).

Interviews

While preparing any of the public notices, the federal analysts might contact stakeholders (interest groups, affected companies, etc.) to be interviewed about the facts behind the regulation. Engaging in these conversations before the publication of the NOPR and Final Rule allows for a constructive and in-depth back-and-forth in which the stakeholder can work to convince the analysts of their point of view. When these interviews occur, they are often the best opportunity for a stakeholder to convey their arguments and influence the rulemaking.

Public Hearings

Another way these key stakeholders are engaged by the rulemaking process is through public hearings. These hearings often occur after each NOPR or preliminary notice, though they are only required by certain agencies. The agency will specifically invite the key stakeholders to attend, though they are open to the public for anyone with an interest in the rulemaking. Public hearings are another opportunity where back-and-forth discussions can occur, both for the agency to explain the proposal and answer questions about the analysis that has been put forth and for the stakeholders to argue their cases in person. The other unique aspect of these public meetings is they allow for an on-the-record dialogue between the different stakeholders themselves, should there be disagreement among them.

Keeping Informed

If you want to keep up with potential new rulemakings that are of interest to you, the FR allows you to subscribe to customized daily updates. You can have all FR notices from a specific agency emailed to you every day they become available, or you can even subscribe based on keywords—regardless of which agency it comes from. To subscribe, create an account on federalregister.gov and then add subscriptions at this link.

Example of what an email from the Federal Register will look like if you subscribe to DOE notices

Additionally, at least once a year every agency publishes a regulatory agenda. This agenda will outline the planned regulatory and deregulatory actions for the coming season or year. If there is a particular agency whose regulations are of interest to you, follow this link to read the list of current regulatory items on the agenda.

 

Sources and Additional Reading:

A Guide to the Rulemaking Process- Prepared by the Office of the Federal Register

The Federal Rulemaking Process: An Overview– Congressional Research Service

Regulations and Rulemaking Process FAQ- Office of Information and Regulatory Affairs

Learn the Steps in the Federal Rulemaking Process

About the Rulemaking Process- United States Courts

Flowchart of the Federal Rulemaking Process- Citizen.org (this resource is more in depth than the title ‘flowchart’ will make you think—but a great, thorough resource)

From two specific federal agencies of interest to the topics in this blog:

Appliance & Equipment Standards: Department of Energy Regulatory Process

The Basics of the Regulatory Process: United States Environmental Protection Agency

 

Related Posts:

Federal Register Notice: Costs and Benefits of Net Energy Metering: Request for Information

Federal Register Notice: Test Procedure for Distribution Transformers: Request for Information

Article updated on October 10, 2017

 

 

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.