The Green March Madness Tournament: Setting the 2018 NCAA Tournament Field According to Sustainability

With the Super Bowl in the rearview mirror and the calendar about to turn to March, the attention of the sports world is about to be completely focused on college basketball and the annual NCAA Basketball Tournament (March Madness). Every year, this 68-team tournament captures the attention of people across the country, whether they are diehard fans or non-sports fans who are simply participating in the office pool.

The NCAA Basketball Tournament also serves as fodder around the water cooler, with billions of dollars of productivity lost in the American workplace every year– not only in watching the games but also in the various (sometimes unconventional) methods people use to pick the winners in their bracket. You may have seen people choose winners based on which team’s mascot would win in a fight, by choosing the schools with the superior academics, or even by choosing winners based on who has the most attractive head coach (shout out to my alma mater, University of Virginia, which AOL astutely points out would win in this last scenario).


So with the Selection Committee currently watching the last few games of the regular season as teams try to bolster their chances of making the NCAA Basketball Tournament, I thought I’d take a look at how March Madness would look if the field was selected based on each school’s efforts towards sustainability, energy efficiency, and environmentalism– call it the 2018 Green March Madness Tournament!

This article will take all eligible NCAA schools and create the field of 68 for a tournament, but playing it out won’t be all that interesting because the top seeds will obviously ‘win’ each match up until the Final Four. So keep reading to see the 68 teams that make the tournament and find out which top seed comes out on top– but stay tuned once the NCAA puts out the actual bracket for the NCAA Basketball Tournament because I’ll do a follow-up article and revisit this concept to see who would win each of those real-life matchups based on who rated higher on sustainability!



Metrics used

After extensive research, I found three different measurements and rankings that look at the efforts of colleges and universities across the United States to incorporate sustainable practices, energy-saving measures, and environmentally-friendly practices. The latest version of the data for these measures, which are explained in detail below, were pulled to serve as the metrics of who would participate in the 2018 Green March Madness Tournament.

The Sustainability Tracking, Assessment & Rating System

The Association for the Advancement of Sustainability in Higher Education (AASHE) uses its Sustainability Tracking, Assessment & Rating System (STARS) to measure how successfully institutions have been performing in sustainability matters. The mission statement of STARS details how it “is intended to engage and recognize the full spectrum of colleges and universities- from community colleges to research universities- and encompasses long-term sustainability goals for already high-achieving institutions as well as entry points of recognition for institutions that are taking first steps towards sustainability.”


STARS is completely voluntary, transparent, and based on self-reporting. Dozens of different metrics are included in the STARS measurements, including in the categories of curriculum (e.g., whether the institution offers sustainability-focused degree programs), campus engagement (e.g., whether sustainability-related outreach campaigns are held on campus), energy use (e.g., availability of clean and renewable energy sources on campus), transportation (e.g., inclusion of alternative fuel or hybrid electric vehicles in the institution’s fleet), and many more that are found in the credit checklist.

Based on performance on these metrics, each school can earn up to 100 points and a corresponding rating of STARS Reporter, STARS Bronze, STARS Silver, STARS Gold, or STARS Platinum. Because STARS is self-reported, institutions can continually make improvements and resubmit for a higher score. However, for the sake of this Green March Madness Tournament, the latest scores for all schools playing Division I NCAA basketball were pulled as of the beginning of February 2018, with any schools not participating in the STARS program receiving a score of zero.

The Cool Schools Ranking

The Sierra Club publishes an annual ranking called the Cool Schools Ranking to measure which schools are doing the most towards the Sierra Club’s broader sustainability priorities. The data for the Cool Schools Ranking largely comes from the STARS submissions as well, though with some key changes: the Sierra Club identifies the 62 questions of the STARS survey that it considers the most crucial to its definition of sustainability and feeds that data into a custom-built formula; it uses only information submitted or updated in STARS within the past year; and it asks institutions to also detail what moves they have made to divest their endowments from fossil fuel companies (a question not asked by STARS).


As with STARS, participation in the Sierra Club’s rankings is completely voluntary and transparent, ultimately resulting in a numeric value on a 1000-point scale to use for the rankings. For the scoring towards the Green March Madness Tournament, all eligible teams had their Cool Schools Ranking score pulled and divided by 10 (so it would be on a 100-point scale like the STARS rating), while schools that were not included in the ranking were given a score of zero.

SaveOnEnergy Green Score

The last of the three rating systems used for the Green March Madness Tournament is the 2017 Green Score given by SaveOnEnergy.com. The goal of this scoring system is to give credit to institutions making “noteworthy progress in eco-friendliness and sustainability.” The SaveOnEnergy Green Score takes the top 100 schools in the U.S. News & World Report and awards them scores based on their Princeton Review Green Score, as well as state data on farmers markets, local public transportation options and walkability scores, density of parks in the area of the school, state data on clean and renewable energy options, and availability of green jobs.


The data for the SaveOnEnergy Green Score is a mix of voluntary data (e.g., data submitted to the Princeton Review Green Score) and mandatory statistics (e.g., state data on energy options and green jobs). In the end, SaveOnEnergy takes all of these factors to create a final score out of 100– though the score is only published for the top 25 schools, and the remaining schools are ranked without their score displayed. To account for this, I fit a best-fit equation correlating rank with score for the top 25 schools and extrapolated that equation to estimate scores for the remaining ranked schools. Schools that did not make the SaveOnEnergy Green Score list were given a score of zero.
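To make that extrapolation step concrete, here is a minimal sketch. The functional form of the best fit isn’t specified above, so this assumes a simple linear fit, and the top-25 scores used are hypothetical stand-ins for the published SaveOnEnergy figures:

```python
# Hypothetical stand-ins for the published top-25 (rank, score) pairs
top25 = [(rank, 92.0 - 1.3 * rank) for rank in range(1, 26)]

# Ordinary least-squares linear fit of score against rank
n = len(top25)
mean_rank = sum(r for r, _ in top25) / n
mean_score = sum(s for _, s in top25) / n
slope = (sum((r - mean_rank) * (s - mean_score) for r, s in top25)
         / sum((r - mean_rank) ** 2 for r, _ in top25))
intercept = mean_score - slope * mean_rank

def estimated_score(rank):
    """Extrapolated score for ranked schools outside the published top 25."""
    return max(0.0, slope * rank + intercept)

print(round(estimated_score(26), 1))  # 58.2 with this hypothetical data
```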

Final Green March Madness Tournament score

In the end, all 351 schools that participate in Division I basketball (representing 32 different athletic conferences) were given a final score that was the average of the STARS score, the Cool Schools Ranking score divided by 10, and the SaveOnEnergy Green Score, so that the final score is also on a 100-point scale (the final scores for all schools can be found in this article’s accompanying Google Spreadsheet).
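In code, the final scoring boils down to a simple average of the three metrics, each placed on a 100-point scale first (the example inputs below are hypothetical):

```python
def green_score(stars, cool_schools, save_on_energy):
    """Final Green March Madness score on a 100-point scale.

    stars: STARS score (0-100), 0 if not participating
    cool_schools: Cool Schools score (0-1000), 0 if unranked;
                  divided by 10 to put it on a 100-point scale
    save_on_energy: SaveOnEnergy Green Score (0-100), 0 if unlisted
    """
    return (stars + cool_schools / 10 + save_on_energy) / 3

# Hypothetical school: STARS 70, Cool Schools 650, SaveOnEnergy 60
print(green_score(70, 650, 60))  # 65.0
```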

Before moving forward, let’s make clear that this ranking system is mostly just an overview of sustainability scores among schools based on publicly available data, and it should by no means be considered comprehensive. Indeed, each of the three ranking systems makes clear that there are many more schools that care about energy and the environment and are making great strides but do not appear on these lists. These schools might not have the time or resources to submit their data, might not have made submitting data to these third parties a priority, or simply weren’t included on the U.S. News & World Report Top 100 Universities list and so were left out of the SaveOnEnergy Green Score.


That being said, schools that take the time to report their sustainability data are showing that doing so is a priority to them and demonstrating a commitment to the cause that should be applauded and recognized. Many schools that didn’t report their data are certainly still environmentally friendly– indeed, about half of the schools in Division I basketball ended up with a score of zero for not appearing in any of the three lists, and it would be foolish to believe that none of those 178 schools are working towards energy efficiency and sustainability. But the submission of data can be considered a sign that transparency regarding sustainability is important to those in charge, and thus the reporting schools earn a well-deserved place in the Green March Madness Tournament scoring. For that reason, the rest of this article will unapologetically use the Green March Madness Tournament Score as the definitive factor to determine sustainability rankings of the schools.

Quick facts and figures

Before moving on to selecting which teams made the prestigious Green March Madness Tournament, let’s take a look at a few quick facts from the scoring:

  • 173 out of 351 teams registered a score greater than zero on the Green March Madness Tournament Score, meaning over 100 schools that registered a non-zero score will still find themselves on the outside looking in.
  • Even rarer, though, are teams that have scores in all three scoring metrics used. Only 33 teams have a non-zero score in all three metrics, while only 112 teams have a non-zero score in two or more metrics.
  • As shown below in the table of conferences and conference champions, the highest score went to American University of the Patriot League with 73.4, while the lowest non-zero score went to South Dakota State of the Summit League with 9.2.
  • Looking at each of the 32 conferences:
    • 4 conferences (Pacific-12, Big Ten, Ivy League, and Atlantic Coast) had 100% of their teams score greater than zero.
    • 2 conferences (Atlantic Sun and Northeast) had only a single team score greater than zero, thus making the crowning of a conference champion rather easy.
    • 5 conferences (Big South, Metro Atlantic Athletic, Mid-Eastern Athletic, Southland, and Southwestern Athletic) didn’t have any teams score greater than zero.

Selecting the field

Even though this is mostly a silly exercise, I still wanted to follow the protocol of the real NCAA Basketball Tournament Selection Committee when determining who should make this ‘Big Green Dance’ (and, in doing so, gained some respect for the massive amount of puzzle pieces they must juggle!). The process is famously intense, with 10 committee members spending countless hours keeping up with the college basketball landscape during the year, only to convene for a five-day selection process that requires hundreds of secret ballots.

The entire process is very detailed, but it can be boiled down as follows:

  1. All 32 conference champions receive an automatic bid into the tournament
  2. The next best 36 teams are then chosen as ‘at-large bids’ to bring the total field to 68 teams
  3. All 68 teams are ranked from top to bottom, regardless of their status as a conference champion
  4. The top four teams are ranked as number one seeds in each of the four regions, then the next four are two seeds, the next four are three seeds, etc.
  5. While placing teams into each region, care is taken to ensure that each of the four regions is fairly equally balanced and that teams that played each other during the season are prevented from having a rematch in the tournament until the later rounds (teams can be bumped up/down by a seed or two to assist in these requirements)
  6. The last four teams to make the tournament in at-large bids and the last four teams to make the field altogether are paired off to compete in the First Four games, with the winners advancing to the remaining field of 64.
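Steps 1 through 3 above can be sketched in a few lines of code; the team data in the example is hypothetical, and the region-balancing and First Four logic of steps 4 through 6 are left out for brevity:

```python
def select_field(teams, field_size=68):
    """Select the tournament field from (name, conference, score) tuples."""
    # Step 1: the top scorer in each conference gets an automatic bid
    champs = {}
    for team in teams:
        name, conf, score = team
        if conf not in champs or score > champs[conf][2]:
            champs[conf] = team
    field = list(champs.values())
    # Step 2: fill the remaining slots with the best-scoring non-champions
    in_field = {t[0] for t in field}
    at_large = sorted((t for t in teams if t[0] not in in_field),
                      key=lambda t: t[2], reverse=True)
    field += at_large[:field_size - len(field)]
    # Step 3: rank the full field from top to bottom by score
    return sorted(field, key=lambda t: t[2], reverse=True)

# Hypothetical mini-example: three conferences, four-team field
teams = [("A1", "X", 50), ("A2", "X", 40), ("A3", "X", 35),
         ("B1", "Y", 30), ("C1", "Z", 10)]
print([t[0] for t in select_field(teams, field_size=4)])  # ['A1', 'A2', 'B1', 'C1']
```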

While the criteria used to rank teams for the Selection Committee include resources such as the Rating Percentage Index (RPI), evaluations of quality wins based on where the game took place and how good the opponent was, and various computer metrics, things are easier in the Green March Madness Tournament Selection Committee as we only need to use the single number result of the Green March Madness Tournament Score.

The 68-team field

The bracket

For the full suite of teams, conferences, and scores, refer to the accompanying Google Spreadsheet of final figures. Using these numbers and sticking to the above selection guidelines as much as possible, the following bracket is the official result for the 2018 Green March Madness Tournament Bracket:


Breaking it down by each region for ease of reading:

The East region

The West region

The Midwest region

The South region

Note that the five conferences that didn’t produce a single team with a non-zero score would still get the automatic bids for their conference champion (four as play-in teams for the First Four and one more as a 16 seed without a play-in game), so perhaps they’ll draw straws to see who gets to go into the tournament. Regardless, they are in the bracket and labeled as that conference’s champion (placed in no particular order), just waiting to be beaten soundly by their respective sustainable opponents.

Analysis of the field

In terms of conferences, we see big winners come from the Pacific-12 (8 tournament teams) and the Big Ten (7 tournament teams), but in third is the surprise conference of the Ivy League (6 tournament teams), which is rarely in the conversation for getting more than a single team into the NCAA Basketball Tournament. On the other end of the surprises, the Big East and the Southeastern Conference (both major conferences that typically nab a handful of bids each) were kept to only one team each in the tournament.

For individual teams, we find some other surprises. A number of perennial stalwarts of the college basketball scene find themselves in the unfamiliar position of being on the outside looking in– 7 out of the 10 teams with the most NCAA Tournament appearances failed to receive a Green March Madness Tournament bid (Kentucky, Kansas, UCLA, Louisville, Duke, Notre Dame, and Syracuse). On the other side of the coin, five teams (Denver, New Hampshire, William & Mary, UC Riverside, and Bryant University) that have never made the NCAA Tournament have finally found success with the Green March Madness Tournament.

Another common exercise leading up to the announcement of the NCAA Basketball Tournament teams is looking at the bubble teams, those that are just on the edge of making the tournament but find themselves potentially falling just short. The most painfully close bubble teams for the 2018 Green March Madness Tournament were the five teams that fell less than one point shy of an at-large bid: Louisville, Northern Arizona, Ohio State, IUPUI, and Arkansas. Most painful was Louisville, which fell just 0.12 points shy of being the last team in (though maybe it was serendipity– who knows if Louisville would have had to vacate that appearance, too).

What did the top performing schools have in common?

Looking at the teams that scored particularly high and scored the best seeds in the Green March Madness Tournament, a couple of trends appear:

  • Sustainability-focused schools: It’s worth noting that every team that was ranked in all three metrics ended up with a good enough score to make the tournament. As previously noted, such commitment to ensuring data is delivered for all three metrics shows the cause of sustainability is a priority and these schools are naturally rewarded by being guaranteed to make the Green March Madness Tournament.
  • City schools: A common theme found in the upper half of the schools that made the Green March Madness Tournament is that they are located in or near major U.S. cities (including one seeds American University and George Washington, three seed Northwestern, four seed Columbia, six seed Boston University, seven seed Denver, and eight seed Miami (FL)). The reason an urban setting might help schools score well in these rankings is that cities are more likely to have local sustainability organizations to partner with the school, access to effective public transportation, high walkability scores, and other nearby community resources that the school can use as well. Each of these factors positively affects the ratings that go into the Final Green March Madness Tournament Scores.
  • Green states: Outside of the city in which a school is located, the state a school is in (and the state’s relative ‘green-ness’) has significant impact. The top of the tournament seeding is populated with teams from states often considered particularly green by various metrics. For example, the annual state scorecard rankings from the American Council for an Energy-Efficient Economy (ACEEE) show heavy representation in the Green March Madness Tournament from the top five states in the ACEEE scorecard: Massachusetts (Boston University, Harvard, Massachusetts), California (UC Santa Barbara, Santa Clara, UC Riverside, San Jose State, UC Irvine, Cal State Northridge, California, San Diego), Rhode Island (Brown, Bryant University), Vermont (Vermont), and Oregon (Oregon State, Portland State, Oregon, Pacific). Together, those five states account for over a quarter of the teams that made the Green March Madness Tournament, reflecting the benefits to institutions in states that commit to green jobs, renewable energy development, and other sustainability initiatives.

The National Champion

The downside of filling out our bracket based on the Green March Madness Tournament Scores is that by continuing through with the tournament, we won’t find any upsets and the top seeds will always win (again, we’ll revisit once the real NCAA Basketball Tournament bracket is released to see which of those teams would win based on sustainability). In the end, our Final Four is made up of all one seeds, as shown below, with the final champion being…

Drumroll…

 

American University! In its three appearances in the NCAA Basketball Tournament, the Eagles have gone winless– but once the Green March Madness Tournament comes along they go all the way! Congratulations to them, and best of luck to all schools in the ‘real’ tournament in March, to all schools looking to improve their sustainability scores before next year’s Green March Madness Tournament, and to all of you in finding the best way to fill out the brackets for your office pool this year!

Sources and additional reading

Cool Schools 2017 Full Ranking: Sierra Club

March Madness bracket: How the 68 teams are selected for the Division I Men’s Basketball Tournament: NCAA

SaveOnEnergy 2017 Green Report: Top Universities in the U.S.: SaveOnEnergy

The Sustainability Tracking, Assessment & Rating System: Association for the Advancement of Sustainability in Higher Education

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

About That Tesla Roadster Flying Through Space– What Kind of Gas Mileage Is It Getting?

Elon Musk and his SpaceX team made huge news last week when they successfully completed the maiden launch of the Falcon Heavy on the afternoon of February 6, 2018. The launch was such a monumental accomplishment because the privately built rocket (the heaviest commercial rocket ever launched) could one day be used to take astronauts to the Moon and Mars, and the mission demonstrated the ability to guide the rocket boosters back to Earth for reuse.

While all of this news was one of the most amazing accomplishments by a private sector company in terms of scale and implications for humanity, one of the most gripping aspects of the project ended up being the fact that the test payload Musk chose to attach to the rocket was his personal Tesla Roadster, painted cherry red to represent the launch’s step towards getting to Mars. The reason behind launching this $100,000 car into space (never to return) was purely to capture people’s attention and imagination– a goal that was undeniably achieved, as Musk was able to give the world this image (which, mindbogglingly, is real and not any sort of Photoshop) that was compelling enough to get everyone to take notice of this amazing accomplishment.


Given that the mission statement of Tesla is “to accelerate the advent of sustainable transport by bringing compelling mass market electric cars to market as soon as possible,” I found it cheekily ironic that fossil fuel– rocket fuel, no less– had to be used to get this Tesla mobile. This not entirely serious thinking led me to the tongue-in-cheek line of questioning– how did the fuel economy of this space-bound Tesla compare with the fuel economies of cars that are restricted to a terrestrial existence? What about the relative carbon dioxide (CO2) emissions?

Let’s bust out that handy back-of-the-envelope to scratch out some (very) approximate estimates!



The Tesla Roadster

The car that was sent into an elliptical orbit around the Sun was Elon Musk’s personal 2008 Tesla Roadster, ‘piloted’ by a mannequin in a SpaceX flight suit named Starman. This model of Tesla electric cars weighs in at 2,723 pounds, went for a base price of $98,000, sold 2,400 units before production was stopped, and was notable as the first highway legal serial production all-electric car using lithium-ion batteries and the first all-electric car to travel more than 200 miles per charge.


Fuel Economy

The official fuel economy rating of the Tesla Roadster from the Environmental Protection Agency (EPA) is 119 miles per gallon equivalent (MPGe), being able to travel 245 miles on an eight-hour charge (the MPGe value compares the amount of electricity needed to move an electric car with the amount of gasoline needed to move a gasoline-powered car using the energy equivalence of one gallon of gas matching 33.7 kilowatt-hours of electricity).
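The MPGe conversion described above can be sketched as follows. Note that the 69.4 kWh "wall-to-wheels" figure here is an assumption implied by the 245-mile range and the 119 MPGe rating, not an official battery spec:

```python
# EPA energy equivalence: one gallon of gasoline = 33.7 kWh of electricity
KWH_PER_GALLON = 33.7

def mpge(miles, electricity_kwh):
    """Miles per gallon-equivalent: miles driven per 33.7 kWh consumed."""
    return miles / (electricity_kwh / KWH_PER_GALLON)

# 245 miles on an assumed ~69.4 kWh drawn from the wall reproduces
# the Roadster's official 119 MPGe rating
print(round(mpge(245, 69.4)))  # 119
```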

As a comparison for the fuel economy of a Tesla Roadster:

The following table summarizes this range of fuel economies of the Earth-restricted vehicles:

Carbon dioxide emissions

While the use of electricity when driving a Tesla (or any electric car) is indeed carbon neutral in that no CO2 is being emitted from a tailpipe, it is not entirely true to rate the CO2 emissions per mile driven as zero. The simple reason behind that is that the generation of the electricity that ends up in these vehicles comes tied to the CO2 emissions at the electric power generation plants. While the portion of the U.S. power sector that is driven by carbon neutral sources like wind, solar, and nuclear is growing, fossil fuels like coal, natural gas, and petroleum still accounted for over 60% of U.S. electricity generation in 2017. As such, whenever a Tesla gets plugged into the grid it is likely receiving electricity that comes from CO2-emitting sources (not to mention the inefficiencies that come from the transmission & distribution of the electricity, the charging losses of the batteries, and the ‘vampire losses’ of charge when the car is not plugged in and not in use). Because of this, the CO2 footprint of driving a Tesla, or any electric vehicle, is intrinsically tied to the energy makeup of the particular electricity supplier.

The Nissan Leaf, another all-electric vehicle, accounts for about 200 grams of CO2 per mile (g CO2/mile) on average across the United States, while California (with one of the highest proportions of clean electricity in the country) comes in at 100 g CO2/mile and Minnesota (a state that is very dependent on fossil fuel) comes in at 300 g CO2/mile. For the sake of this exercise we’ll use these readily available Nissan Leaf numbers as the benchmark CO2 emissions per mile of an electric car, even though the Tesla Roadster is likely slightly different due to different charging rates and battery technologies.

As a comparison for this rate of CO2 emissions of an electric car:

The following table summarizes this range of CO2 emissions for non-rocket fueled vehicles:

Launching Starman’s Roadster

At pre-launch, Musk noted that ultimately the payload (i.e., Starman’s Tesla Roadster) would get 400 million kilometers (almost 250 million miles) away from Earth, traveling at 11 kilometers per second (almost 7 miles per second), and would orbit for hundreds of millions, or even billions, of years (see below graphic of the initial orbit that Musk tweeted out after the launch). To accomplish this, the Falcon Heavy generated 5 million pounds of thrust at liftoff (making it the most powerful liftoff since NASA’s Saturn V). Generating this amount of power is no small feat.


To estimate exactly how much fuel was used (and how much that would be in the equivalent gallons of motor gasoline) requires some estimates, but we have enough information to get at least in the ballpark.

When fueling its rockets, SpaceX uses a highly refined type of kerosene (also known as RP-1) because of its high energy per gallon, in addition to liquid oxygen (LOX) needed for combustion (the amount of LOX required is about double the amount of RP-1). The first stage of a Falcon 9 rocket (another type of rocket used by SpaceX) uses 119,100 kilograms (kg) of RP-1 and 276,600 kg of LOX, while the second stage uses 27,850 kg of RP-1 and 64,820 kg of LOX (see graphic below for what that multi-stage launch sequence looks like). A simplified explanation of the Falcon Heavy is that its first stage is essentially three Falcon 9 first stages strapped together, which then disconnect and leave a single second stage (along with the payload) to continue on. Making rough estimates, this means the Falcon Heavy required three times the fuel of the first stage and one times the fuel of the second stage of the Falcon 9, or a total of 385,150 kg of RP-1 and 894,620 kg of LOX (this is admittedly a simplification of the fueling process, but I’m also admittedly not a rocket scientist. In attempting to keep these estimates as rigorous as possible, see the citations and links contained here and let me know in the comments if I got something wrong– particularly if you are a rocket scientist!).
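The propellant totals above follow from straightforward arithmetic on the Falcon 9 stage figures (a sketch of the simplification just described, not actual SpaceX fueling data):

```python
# Falcon 9 propellant loads, in kg
F9_STAGE1_RP1, F9_STAGE1_LOX = 119_100, 276_600
F9_STAGE2_RP1, F9_STAGE2_LOX = 27_850, 64_820

# Simplified Falcon Heavy: three first stages plus one second stage
rp1_total = 3 * F9_STAGE1_RP1 + F9_STAGE2_RP1
lox_total = 3 * F9_STAGE1_LOX + F9_STAGE2_LOX
print(rp1_total, lox_total)  # 385150 894620
```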


Musk, when discussing the potential dangers of the Falcon Heavy launch, noted that the fuel on board was 4 million pounds of TNT equivalent. In fact, the energy contained within looks like it could be over double that (whether this is a sign of Musk simplifying for the sake of giving the press a quote, speaking approximately without reference to the exact calculations beforehand, or missteps in my calculations, I’ll let you decide). While the total weight of the LOX is over double the weight of the RP-1, the LOX is simply there to allow for combustion and maximize the efficiency with which the RP-1 is burned. As such, the energy density of RP-1 is what we care about. Using an energy density of 43.2 megajoules (MJ) per kg, we find that the energy contained in the Falcon Heavy’s fuel tanks was over 16.6 million MJ, which is equal to about 126,000 gallons of gasoline equivalent (or over 8.7 million pounds of TNT– so while our estimate is over double Musk’s offhand remark, we can take solace that we’re in the same order of magnitude!).
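Here is the back-of-the-envelope energy math, with the conversion factors spelled out; the 131.76 MJ per gallon of gasoline (about 36.6 kWh) and 4.184 MJ per kg of TNT are the assumed conversions that reproduce the figures above:

```python
RP1_KG = 385_150                    # total RP-1 from the stage estimates
RP1_MJ_PER_KG = 43.2                # energy density of RP-1
MJ_PER_GALLON_GASOLINE = 131.76     # ~36.6 kWh per gallon
MJ_PER_LB_TNT = 4.184 / 2.2046      # 4.184 MJ/kg converted to per-pound

energy_mj = RP1_KG * RP1_MJ_PER_KG
gallons_equivalent = energy_mj / MJ_PER_GALLON_GASOLINE
tnt_pounds = energy_mj / MJ_PER_LB_TNT

print(round(energy_mj))            # 16638480 MJ, over 16.6 million
print(round(gallons_equivalent))   # 126279 gallons of gasoline equivalent
print(round(tnt_pounds / 1e6, 2))  # 8.77 million pounds of TNT
```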

In terms of the CO2 released by burning this much fuel, we can use the “well to wake” emissions number of RP-1 of 85 grams of CO2-equivalent per MJ to estimate that the total CO2 emissions were over 1.4 million kg (or 1,400 metric tons) of CO2.
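And the CO2 figure, using the well-to-wake factor quoted above:

```python
ENERGY_MJ = 16_638_480       # total RP-1 energy from the launch estimate
CO2E_G_PER_MJ = 85           # "well to wake" emissions factor for RP-1

co2_grams = ENERGY_MJ * CO2E_G_PER_MJ
print(co2_grams)               # 1414270800 grams
print(co2_grams / 1_000_000)   # ~1414 metric tons
```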

Comparing Starman’s Tesla with Earth vehicles

First things first– that’s definitely the most fossil fuel used and CO2 emitted ever in getting a car from point A to point B. But that doesn’t necessarily mean that Starman’s Tesla is the least efficient or most harmful to the environment. That’s because once the fuel was burned and Starman’s Tesla was set into orbit in perpetual motion, logging millions of miles on the odometer while traveling 25,000 miles per hour, the rest of its journey was all without additional energy input. Even the camera and communication equipment on board were attached to a battery with 12 hours of life and no other sources of energy, so after those 12 hours the equipment went dark and there was no more energy input to Starman’s Tesla– just momentum and gravity working their magic. So despite this initial abundance of fossil fuel and related CO2 emissions to set the Tesla in motion, on a per mile basis (which is how fuel economy and emissions are calculated) it will inevitably become the most efficient and clean car of all time!

But how long will it take for this to be true?

Fuel economy

In terms of fuel economy, the MPGe of Starman’s Tesla improves linearly with every mile traversed through space. After 1,200 miles, the Falcon Heavy and its payload of Starman and his Tesla left Low Earth Orbit, but the massive amount of fuel means it barely even registers as a blip on this graph at about 0.0095 MPGe.

After two days when Starman’s Tesla had traveled 450,000 miles, the fuel economy had risen to a little less than half that of the freight truck. You can also note in the graph that at the point of the 36,000 mile warranty of the Tesla Roadster the fuel economy was still less than 0.3 MPGe– you’d certainly have a lot of angry Tesla owners if that’s all they were able to recoup on gasoline costs by the end of their warranty!
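Since the fuel burned is a fixed quantity, the Roadster’s running fuel economy is just miles traveled divided by the 126,279 gallons of gasoline equivalent:

```python
GALLONS_EQUIVALENT = 126_279   # gasoline equivalent burned at launch

def roadster_mpge(miles):
    """Cumulative fuel economy of Starman's Tesla after a given distance."""
    return miles / GALLONS_EQUIVALENT

print(round(roadster_mpge(1_200), 4))    # 0.0095, leaving Low Earth Orbit
print(round(roadster_mpge(36_000), 2))   # 0.29, at the end of the warranty
print(round(roadster_mpge(450_000), 1))  # 3.6, after two days
```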

Lastly, after teasing out how far Starman’s Tesla would have to travel to become the most fuel efficient car (that is or ever was) on Earth, we find that it would take:

  • About 900,000 miles to beat the fuel economy of freight trucks;
  • About 2.9 million miles to beat the average of the U.S. light-duty stock fuel economy;
  • About 3.7 million miles to meet the 2018 light truck standards;
  • About 5.0 million miles to meet the 2018 car standards;
  • About 7.3 million miles to meet the most efficient gas powered car available;
  • About 15 million miles to meet the efficiency of an Earthly Tesla Roadster; and
  • About 17.2 million miles traveled to equal the 136 MPGe of the Hyundai Ioniq Electric, the most efficient car available.
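Each milestone above is simply the target MPGe multiplied by the 126,279 gallons of gasoline equivalent burned at launch:

```python
GALLONS_EQUIVALENT = 126_279

def miles_to_match(target_mpge):
    """Distance at which the Roadster's cumulative MPGe reaches a target."""
    return target_mpge * GALLONS_EQUIVALENT

print(round(miles_to_match(119) / 1e6, 1))  # 15.0 million miles (Earthly Roadster)
print(round(miles_to_match(136) / 1e6, 1))  # 17.2 million miles (Hyundai Ioniq Electric)
```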

As previously mentioned, the equipment on board Starman’s Tesla was attached to a battery that only had 12 hours of life, after which there was no functioning equipment on the Roadster. As such, there is no inherent tracking or communicating with Starman’s vehicle as it continues on its journey, making its exact tracking through space difficult.

But fear not– a great tool called ‘Where is Roadster?‘ came online shortly after the launch. Using the knowledge available regarding the position, orbit, and speed of the Tesla, this tool shows approximately where in its orbit the Roadster is and how far it has traveled in aggregate. The tool does not allow going back to see when exactly certain distances were passed, but from watching the site myself I can attest that Starman’s Roadster passed 17.2 million miles on the afternoon of February 14, 2018– meaning it only took eight days for this Tesla Roadster to become the most efficient car ever! Any distance it continues to travel will only increase the overall fuel economy (if you want to calculate this for yourself at any given moment, divide the current miles from ‘Where is Roadster?‘ by 126,279 gallons of gasoline equivalent).

CO2 emissions

In terms of CO2 emissions per mile, Starman’s Tesla improves according to a power equation– meaning in this case that there are drastic improvements in CO2 emissions per mile initially that flatten out over time. By the time Starman’s Tesla leaves Low Earth Orbit, not nearly enough miles have been traveled to offset the massive amount of CO2 emitted by the rocket launch, with Starman’s Tesla coming in at a mind-blowing 1.2 million g CO2/mile at 1,200 miles– the equivalent of 182 freight trucks each driving a mile.

After two days and 450,000 miles traveled, the CO2 emissions per mile had dropped to 3,143 g CO2/mile, having fallen below the average freight truck’s per-mile emissions after about 219,000 miles. At the 36,000 mile warranty mark, though, the emissions still averaged over 39,000 g CO2/mile– another tidbit that would enrage an environmentally conscious electric car owner if it happened to them.

Again projecting out how far Starman’s Tesla would have to travel to become the cleanest car in existence, we find that it would take:

  • About 3.4 million miles to be cleaner than the average passenger vehicle;
  • About 4.7 million miles to be cleaner than an electric vehicle charged in fossil-fuel-dependent Minnesota;
  • About 5.0 million miles to meet the emissions standards for light trucks in 2018;
  • About 7.0 million miles to meet the emissions standards for cars in 2018;
  • About 7.1 million miles to be cleaner than the average electric vehicle in the United States; and
  • About 14.1 million miles to be cleaner than an electric vehicle charged in renewable-energy-heavy California.
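As with the fuel economy milestones, each of these break-even distances is just the launch's total CO2 (1,414,270,800 grams, the figure used below) divided by the comparison vehicle's emissions rate. A sketch, with the g/mile rates back-calculated from the rounded mileages above (so approximate):

```python
# Total CO2 emitted by the Falcon Heavy launch, in grams
# (same 1,414,270,800 g figure used below)
LAUNCH_CO2_G = 1_414_270_800

# Comparison emission rates (g CO2 per mile); back-calculated from
# the article's rounded milestone distances, so approximate
rates = {
    "average passenger vehicle": 411,
    "EV charged in Minnesota": 301,
    "2018 light truck standard": 283,
    "2018 car standard": 202,
    "average U.S. EV": 199,
    "EV charged in California": 100,
}

for name, g_per_mile in rates.items():
    miles_needed = LAUNCH_CO2_G / g_per_mile  # break-even distance
    print(f"{name}: ~{miles_needed / 1e6:.1f} million miles")
```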

Again by watching the ‘Where is Roadster?‘ tool, I found that Starman’s Tesla also became the cleanest car ever (on a g CO2/mile basis) on February 14, only eight days after launch. As with the fuel economy, this figure will only get better and better as Starman racks up limitless miles circling the Sun for millions or billions of years (to calculate an updated emissions per mile figure, divide 1,414,270,800 grams of CO2 emissions by the updated miles traveled from ‘Where is Roadster?‘).
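That per-mile emissions calculation, as a companion to the fuel economy helper:

```python
# Total CO2 emitted by the Falcon Heavy launch, in grams
LAUNCH_CO2_G = 1_414_270_800

def roadster_g_per_mile(miles_traveled: float) -> float:
    """Cumulative CO2 emissions per mile for Starman's Tesla."""
    return LAUNCH_CO2_G / miles_traveled

print(roadster_g_per_mile(450_000))     # ~3,143 g CO2/mile after two days
print(roadster_g_per_mile(14_100_000))  # ~100 g CO2/mile, matching a CA-charged EV
```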

Conclusion

So there you have it– despite the massive amounts of fuel and resultant CO2 emissions required to launch the Tesla Roadster into space, it took only eight days of traveling faster than any car before it to become the most fuel efficient and least CO2-emitting (on a per mile basis) car ever made. But that outcome was inevitable given that it’s in orbit around the Sun and will likely remain so for the rest of humanity’s existence– so what really is the point of crunching the numbers like this? Hopefully you’ll come away from this article with a handful of takeaways and topics/issues on which to do some more reading and learning:

  1. The impressiveness of this feat accomplished by Musk and the whole team at SpaceX cannot be overstated. The Tesla Roadster weighs just 2,723 pounds, but this launch was testing a rocket system whose ultimate payload capacity extends to almost 141,000 pounds sent to Low Earth Orbit, 37,000 pounds sent to Mars, and 7,700 pounds sent to Pluto– all at a decreased cost compared with historical launches, which really opens up doors. That is the most important takeaway from the Falcon Heavy launch, a huge step towards what Musk hopes will be the next great space race.
  2. Beyond that, running through these tongue-in-cheek calculations should hopefully serve to pique your interest and give some information on the relative fuel efficiency electric cars are able to achieve, but also some of the current shortcomings in terms of using them as a way to reduce CO2 emissions. A lot of interesting pieces have been written on the true environmental impact of electric cars, as well as how that might evolve in the future. I’ll recommend a couple (from Green Car Reports, Wired, The Union of Concerned Scientists, and Scientific American, just to name a few), but it’s an important topic with much more out there to be read and debated.
  3. In addition, given the relative fuel economies and CO2 emissions of various vehicles (as well as the regulations covering these measurements), let that be a reason to look more into the efficiencies and emissions of your own vehicles. In particular, you’ll note the average passenger vehicle has twice the emissions per mile of a new Model Year 2018 car that complies with EPA regulations, while new cars will also get up to 74% more MPG compared with the average for the U.S. fleet of light-duty vehicles. Keep these types of figures in mind the next time you’re in the market for a vehicle, and consider how much fuel and emissions savings are protected and increased by existing regulations (both fuel economy and car emissions regulations are being considered for rollbacks by the Trump administration) as automotive regulations and policies continue to make the news.

Sources and additional reading

Can Driving a Tesla Offset the Impact Of A SpaceX Launch? Clean Technica

Electric Cars Are Not Necessarily Clean: Scientific American

Elon Musk says SpaceX has ‘done everything you can think of’ to prepare Falcon Heavy for launch today: Business Insider

Falcon 9 v1.1 & F9R Launch Vehicle Overview: Spaceflight 101

Falcon Heavy: SpaceX

Falcon Heavy: SpaceX stages an amazing launch — but what about the environmental impact? The Conversation

How Much Fuel Does It Take To Get To The Moon? Huffington Post

Musk’s Falcon Heavy Packs a Huge Payload: Forbes

SpaceX’s Falcon Heavy Rocket: By the Numbers: Space.com

SpaceX’s Falcon Heavy rocket nails its maiden test flight: NBC News

SpaceX launch: Why is there a Starman spacesuit in the Tesla Roadster? Express

The Falcon Heavy Packs A Huge Payload: Statista

Where is Elon Musk’s Tesla Roadster with Starman?

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Playing Politics with Energy Security: How the Latest Congressional Budget Deal Raids the Strategic Petroleum Reserve

Looking to finally reach a longer-term agreement and avoid an extended federal government shutdown, Congress struck a bipartisan deal last week, in the early morning of February 9, that would fund the government for the next two years. As the details of the deal get combed over there is plenty to digest, even just in energy-related topics (such as the inclusion of climate-related policy), but one notable part of the budget agreement was the mandate to sell 100 million barrels of oil from the Strategic Petroleum Reserve (SPR). The stated goal of this move was to help pay for tax cuts and budgetary items elsewhere in the deal, but will that goal be realized, or is Congress paying lip service to the idea of fiscal responsibility at the expense of future energy security?



Purpose and typical operation of the SPR

In a previous post, I covered more extensively the background and purpose of the SPR. In short, the SPR is the largest reserve supply of crude oil in the world and is operated by the U.S. Department of Energy (DOE). The SPR was established in the wake of the oil crisis of the late 1970s with the goal of providing a strategic fail-safe for the country’s energy sector– ensuring that oil is reliably available in times of emergency, protecting against foreign threats to cut off trade, and minimizing the effect to the U.S. economy that drastic oil price fluctuations might cause.

In general, decisions regarding SPR withdrawals are made by the President when he or she 1) “has found drawdown and sale are required by a severe energy supply interruption or by obligations of the United States under the international energy program,” 2) determines that an emergency has significantly reduced the worldwide oil supply available and increased the market price of oil in such a way that would cause “major adverse impact on the national economy,” 3) sees the need to resolve internal U.S. disruptions without the need to declare “a severe energy supply interruption,” or 4) sees it as a suitable way to comply with international energy agreements. These drawdowns, following the intended purpose of the SPR, are limited to a maximum of 30 million barrels at a time.

Outside of these standard withdrawals, the Secretary of the DOE can also direct test sales of up to 5 million barrels, SPR oil can be sold out on a loan to companies attempting to respond to small supply disruptions, or Congress can enact laws to authorize SPR non-emergency sales intended to respond to small supply disruptions and/or raise funds for the government. This last type of sale is what Congress authorized with the passing of the budget deal (see the previous article on the SPR to read more about how the SPR oil actually gets sold).

Source

While selling SPR oil to raise funds is legislatively permitted, this announced sale of 100 million barrels (15% of the balance of the SPR) is an unprecedented amount– the biggest non-emergency sale in history according to ClearView Energy Partners. More concerning than the amount of oil to be sold, though, is the ambiguity behind what exactly the sale of SPR oil will fund. Historically, an unwritten and bipartisan rule was that the SPR was not to be used as a ‘piggy bank’ to fund political measures. However, that resistance to using the SPR as a convenient way to raise money (for causes like infrastructure or medical research) has waned as Congress has faced perennial opposition to raising taxes and the need for new sources of income.

Lisa Murkowski, Chairwoman of the Senate Energy and Natural Resources Committee, has echoed these frustrations about how the funds from the SPR sell-off will be used. When asked how Congress would spend the money, she simply replied it would be spent on “whatever they want. That’s why I get annoyed.” Despite the history of the SPR being an insurance policy for the U.S. energy sector and economy against threats of embargo from foreign nations, natural disasters, and unexpected and drastic changes in the market, the inclusion of SPR sales in this budget is just further indication of Congress trading away energy security and buying into other priorities. Taking the issue a step further, once the oil from the SPR is sold off, it likely becomes that much harder to convince Congress to rebuild stocks should additional oil become necessary– both because oil prices tend to climb over the long term, making replacement oil more expensive, and because getting Congressional approval for new spending will always be more politically difficult than ‘doing nothing’ and keeping SPR stocks at their current levels.

But is this selling of the SPR oil really in the name of deficit reduction and fiscal responsibility? Will the sale of this oil make an appreciable difference and help balance out the budget that Congress agreed to at (or, rather, past) the eleventh hour?

Crunching the numbers

Ignoring the previously authorized SPR sales, this budget deal alone included a directive for DOE to sell 100 million barrels of oil from the SPR. What level of funds would this actually raise, and would it be enough to make a dent in the deficit? At current crude oil prices, which have hovered in the $60 per barrel (b) range, the sale would translate to about $6 billion– but the actual number depends on the price at which the oil gets sold, an uncertain figure because the oil is being sold over the next 10 years and oil prices are notoriously variable.

We can make rough estimates based on the outlook for crude oil prices going forward (acknowledging at the outset the significant uncertainty that any forecast inherently carries, especially in oil markets that are affected by outside factors like government policy and geopolitical relations). To get a rough idea, though, we can look at the recently released 2018 Annual Energy Outlook (AEO2018) from the Energy Information Administration (EIA), which projects energy production, consumption, and prices under a variety of different scenarios (such as high vs. low investment in oil and gas technology, high vs. low oil prices, and high vs. low economic growth).

Source (Click to enlarge)

Brent crude oil (representative of oil on the European markets) starts at about $53/b in 2018 and goes up to about $89/b by 2027 in the ‘reference case’ (going from $27/b to $36/b in the low oil price scenario and $80/b to $174/b in the high oil price scenario). Similarly, West Texas Intermediate (WTI) oil (representative of the U.S. markets) starts at about $50/b in 2018 and goes to $85/b in 2027 in the ‘reference case’ ($23/b to $33/b in the low oil price scenario and $80/b to $168/b in the high oil price scenario). These figures present a pretty wide range of possibilities, but that is unfortunately the nature of oil prices in today’s climate. Further, EIA does unofficially consider these ranges to be akin to 95% confidence intervals between which the actual prices are almost assured to be found, so we can still find value in these prices as the ‘best’ and ‘worst’ case scenarios.

For simplicity’s sake, we can assume the 100 million barrels will be sold in equal chunks of 10 million barrels per year from 2018 to 2027 (the actual sale certainly won’t follow this neat schedule, but the assumption gets us in the approximate range). In the below charts, see the amount of funds raised from this SPR sale assuming the actual sale price is the average of Brent and WTI prices in the AEO2018 reference case, compared with using the price of Brent in the high oil price scenario (the highest oil price in any side case) and the price of WTI in the low oil price scenario (the lowest oil price in all of the side cases). The top chart tracks the amount of money raised in each of the 10 years while the bottom chart shows the cumulative money raised in these three scenarios over the course of the decade.

Click to enlarge

As shown, the low oil price scenario raises between $226 million and $326 million every year for a decade, totaling just shy of $3 billion in funds. In the high price scenario, the annual amount brought in is between $800 million and $1.7 billion per year, totaling about $14 billion in funds. In the reference case, the one that is most likely (though not at all assured) to be representative, each year the selling of SPR oil would bring in between $512 million and $868 million for a total of $7.5 billion in funds.
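As a rough cross-check on these figures, here is a minimal sketch that assumes 10 million barrels sold per year from 2018 to 2027 and, as a simplifying assumption, a linear price path between the AEO2018 start and end prices (the actual EIA price projections are not linear, so cumulative totals will differ somewhat from the charts above):

```python
BARRELS_PER_YEAR = 10_000_000  # 100 million barrels spread over 10 years

def revenue_path(start_price, end_price, years=10):
    """Annual SPR sale revenues, assuming a linear price path
    between the start and end prices (a simplifying assumption)."""
    step = (end_price - start_price) / (years - 1)
    prices = [start_price + i * step for i in range(years)]
    return [p * BARRELS_PER_YEAR for p in prices]

# Reference case: average of Brent ($53 -> $89) and WTI ($50 -> $85)
reference = revenue_path((53 + 50) / 2, (89 + 85) / 2)
print(f"2018: ${reference[0] / 1e6:.0f}M, 2027: ${reference[-1] / 1e6:.0f}M, "
      f"total: ${sum(reference) / 1e9:.1f}B")
```

The endpoint revenues land close to the article's figures (~$515M and ~$870M vs. $512M and $868M); the linear-path total of ~$6.9B undershoots the charted $7.5B precisely because the real price path isn't linear.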

Now let’s be clear about one thing–raising somewhere between $3 billion and $14 billion is a lot of money. But in the context of this budget that was passed and the rising deficit of the federal government, how much of a dent will this fundraising through the sale of SPR oil really make?

The budget deal will add $320 billion to deficits over the next decade, which is almost $420 billion when factoring in interest according to the Congressional Budget Office. That massive increase in spending, an average of $42 billion per year, makes the funds from the SPR sale look like pocket change:

 Click to enlarge

Both the sale of SPR oil and the impact of this budget will be felt over the next 10 years, meaning these dollar figures present very apt comparisons. At the end of the decade, the high oil price scenario shows that SPR oil sales will only account for 3.4% of the deficit increase, while the reference case would account for 1.8% of the deficit increase and the low oil price scenario would only account for 0.7% of the deficit increase. Since the deficit would increase over the course of 10 years, another way to think of it is that the selling of SPR oil would account for 124 days of the deficit increase in the high oil price scenario, while the reference case would account for 65 days of the deficit increase and the low oil price scenario would account for 26 days of the deficit increase.
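These shares and days-of-deficit figures follow directly from dividing each scenario's total revenue by the $420 billion ten-year deficit increase (small differences from the article's numbers come from rounding of the underlying revenue totals):

```python
DEFICIT_INCREASE = 420e9  # ten-year deficit increase incl. interest, per CBO
DECADE_DAYS = 3650        # ten years over which the deficit accumulates

# Total SPR revenue in each oil price scenario (from the charts above)
scenarios = {"low oil price": 3e9, "reference": 7.5e9, "high oil price": 14e9}

for name, revenue in scenarios.items():
    share = revenue / DEFICIT_INCREASE
    print(f"{name}: {share:.1%} of the increase, ~{share * DECADE_DAYS:.0f} days")
```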

Outside of the increase to the deficit, the increases in discretionary spending from the budget deal amount to $296 billion over the next two years (not including money given immediately to disaster spending, healthcare, and tax cuts). The SPR oil sale translates to between 1.0% and 4.8% of the discretionary spending increase, or 7 to 35 days of the two years’ worth of spending increases.

Lastly, after accounting for this latest Congressional budget agreement, the CBO projects the federal deficit will increase to $1.2 trillion in 2019. If the sale of SPR oil is pushed as a show of fiscal responsibility in the wake of this budget deal, it is worth noting that the authorized sale would only account for 1.2% of the total federal deficit in the best case scenario of high oil prices (0.2% in the low oil price scenario)– a metaphorical drop in the bucket (though for those curious, it’s actually significantly more than a literal drop in the bucket!).

What’s it all mean?

Buckets get filled drop by drop all the time, and it inherently requires many drops to fill up that bucket. So in this metaphor, each drop need not be disparaged for not being larger and doing more to fill up the bucket as it is the aggregate effect we should care about. Despite that truth, it is still fair to bring up whether the sacrifices required to gather that ‘drop’ were worthwhile. Going back to the origin and history of the SPR, Congress selling off large portions of the stocks of oil was never meant to fund ambiguous budgetary measures.

This 100 million barrels to be sold should also not be taken without the context of the sales already authorized by Congress last year that will also become reality in the next decade. Combined with the previously mandated sales, after this budget deal the SPR will be left with just over 300 million barrels of oil— about half of what it had been. So the negative side of this is that Congress appears ready and willing to gut the SPR. However the other side is that, because of the U.S. shale oil boom and other factors, the amount of net imports of oil and oil products to the United States has been dropping significantly. In the context of decreasing net imports, the amount of SPR stock measured in terms of ‘days of supply of total petroleum net imports’ has seen a comparable rise. What this means is that because the United States has become less dependent on foreign oil, less oil needs to be stored in the SPR to provide the same amount of import coverage.

Source (Click to enlarge)

In the wake of this budget passing and the previously announced SPR oil sales, many energy analysts came out to call these moves short-sighted at best.

Because the budget that was passed ran over 600 pages and was voted on before most people (or anyone) would realistically have had a chance to read it, it’s not yet clear which part of the budget will cause the most noise. But in terms of this surprising move by Congress with respect to the SPR, the questions to wrestle with become the following: Is it wise to sell off our oil insurance policy, which might be needed in future tough times, just because things are looking good for the present U.S. oil market? Is the financial benefit of reducing SPR oil stocks by such a significant amount worth paying off a couple of weeks to a couple of months of the increased deficit, or is it possible that such a sale only pays lip service to fiscal responsibility, allowing politicians to point to an impressive sounding source of funds (up to $14 billion!) when in reality it doesn’t move the needle much (a maximum of about 3% of the increase in deficit)?

Sources and additional reading

2018 Annual Energy Outlook: Energy Information Administration

America’s (not so) Strategic Petroleum Reserve: The Hill

Budget deal envisions largest stockpile sale in history: The Hill

CBO Finds Budget Deal Will Cost $320 Billion: Congressional Budget Office

DOE in Focus: Strategic Petroleum Reserve

Harvey, Irma show value of Strategic Petroleum Reserve, energy experts say: Chron

Petroleum reserve sell-off sparks pushback: E&E Daily

U.S. Looks To Sell 15% Of Strategic Petroleum Reserve: OilPrice.com

U.S. SPR Stocks as Days of Supply of Total Petroleum Net Imports: Energy Information Administration

Weekly U.S. Ending Stocks of Crude Oil in SPR: Energy Information Administration

Why the U.S. Shouldn’t Sell Off the Strategic Petroleum Reserve: Wall Street Journal

 


Debunking Trump’s Claim of “War on Beautiful, Clean Coal” Using Graphs

In President Trump’s first State of the Union Address last week, a wide range of topics in the Administration’s agenda were covered extensively while energy was largely pushed to the side. Trump did include two sentences on his self-described push for “American Energy Dominance,” and these two sentences sent wonks in the energy industry into a frenzy on social media:

“We have ended the war on American energy. And we have ended the war on beautiful, clean coal.”

My Twitter feed lit up with various energy journalists and market watchers noting how impressive it was that just 18 words across two sentences could contain so many misleading, or outright false, claims.

Source

As one of those energy wonks who immediately took to Twitter with my frustration, I thought I would follow up on last week’s statements with arguments for why the claims of ‘clean coal’ and the supposed ‘war’ on it do not reflect the reality the Trump Administration would have you believe– and I’ll do so with just a handful of graphs.



What is ‘clean coal’?

As a pure fuel, coal is indisputably the ‘dirtiest’ energy source in common use in the power sector, accounting for about 100 kilograms (kg) of carbon dioxide (CO2) per million British thermal units (MMBtu) of energy. This figure is notably larger than those of other major energy sources, including natural gas (about 50 kg/MMBtu), petroleum products like propane and gasoline (about 60 to 70 kg/MMBtu), and carbon neutral fuels like nuclear, hydroelectric, wind, and solar. In the face of the scientific consensus on CO2’s contribution to climate change, many have noted that one of the best actions that can be taken in the energy industry is to shift away from coal to fuels that emit less CO2– which has definitively given coal a dirty reputation.

The premise of ‘clean coal’ is largely a PR push (literally invented by an advertising agency in 2008)– an ingenious marketing term, but one that does not have much in the way of legs. When you hear politicians talking about ‘clean coal,’ it is usually referring to one or more of the following suite of technologies:

  • Washing coal before it’s burned to remove soil and rock and thus reduce ash and weight of the coal;
  • Using wet scrubbers on the gas generated from burning coal to prevent sulfur dioxide from being released;
  • Various carbon capture and storage (CCS) technologies for new or existing coal plants that intervene in the coal burning process (either pre-combustion or post-combustion) to capture up to 90% of the CO2 produced and send it miles underground for permanent storage instead of releasing it into the atmosphere; or
  • Anything done to the coal-fired power plant to increase the efficiency of the entire process of generating electricity (e.g., the 700 Megawatt supercritical coal plant in West Virginia that is so efficient it reportedly releases 20% less CO2 than older coal plants) and reduce the overall emissions.

Source

When most in the energy industry discuss ‘clean coal’ technology, they are typically referring to CCS. However it should be noted that Trump did not mention CCS by name in this (or any) speech. Some analysts have noted that the White House’s attempts to cut CCS funding and send the Secretaries of the Department of Energy (DOE) and Environmental Protection Agency (EPA) to supercritical coal plants are not-so-subtle hints that the Trump Administration’s preferred type of ‘clean coal’ is improving the efficiency of coal-fired generation. Even Bob Murray, the influential coal magnate, has written to the President to indicate his contempt for CCS, calling it a ‘pseudonym for no coal,‘ echoing the concerns of many proponents of coal that CCS is being pushed as the only ‘clean coal’ option so that if/when it fails (due to economic impracticalities) it would be the death knell of coal-fired generation altogether.

So regardless of which ‘clean coal’ technology the Trump Administration supports, issues remain. With regard to wet scrubbers, coal washing, and general plant efficiency improvements, the reductions in CO2 emissions are not nearly enough to compete with cleaner fuels. Even if all coal plants could be made 20% more efficient (and thus reduce CO2 emissions by about 20%) like the West Virginia supercritical plant– which would be a massive undertaking– coal generation would still be among the dirtiest energy in the country.

With regard to CCS, not only is the cost one of the biggest issues (which will be looked at in more detail later), but it does not remove all the pollutants from burning coal. Even with the most effective CCS capturing 90% of CO2 emissions, that leaves 10% of CO2 making its way into the atmosphere along with the other notable pollutants in coal gas (including mercury, nitrogen oxide, and other poisonous contaminants). When compared with the carbon neutral energy sources increasingly gaining ground in the United States, coal plants with CCS still hardly seem clean.

Again, the Energy Information Administration’s (EIA) listing of carbon dioxide emissions coefficients shows the CO2 emissions associated with different fuel types when burned as fuel. As previously noted, coal is the runaway leader in CO2 emissions coefficients as a pure fuel. In DOE analysis of future-built generation (an analysis that focuses on the costs and values of different types of power plants to be built in the future, which will come up again in more detail later), the only type of coal generation even considered is coal with either 30% or 90% carbon sequestration– with 90% being the technological ceiling and 30% being the minimum for new coal-fired generation to remain compliant with the Clean Air Act.

The below graph, our first in demonstrating the issues with claims of a ‘war on beautiful, clean coal,’ plots the CO2 coefficients of major fuel sources in the U.S. power sector, including coal using no CCS, 30% CCS, or 90% CCS. Existing power plants do not have the same requirements under the Clean Air Act, so they might still be producing CO2 at the far right of the ‘coal’ bar (indeed, last year almost 70% of U.S. coal was delivered to power plants that are at least 38 years old, meaning they are likely far from the most efficient coal plants out there). Coal plants that are touted as ‘clean’ because of their up to 20% increases in efficiency would still find themselves in the same (or greater) range of emissions as 30% CCS coal plants, while 90% CCS coal plants appear to be the only ones that can compete with other fuels environmentally (though at a potentially prohibitive cost, which will show up in a later graph).

Note that the data for these CO2 emission coefficients come from this EIA listing. The lines for 30%/90% CCS are not just drawn 30%/90% lower, but rather account for the fact that running CCS equipment requires more energy and thus causes a dip in efficiency– this graph uses the rough efficiency drop assumed for CCS plants in this International Energy Agency report.
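The adjustment described in that note can be sketched as follows; the 20% energy (efficiency) penalty used here is an illustrative assumption for running CCS equipment, not the exact figure from the IEA report:

```python
COAL_CO2_KG_PER_MMBTU = 100  # approximate coefficient for coal as a pure fuel

def ccs_adjusted_coefficient(base, capture_rate, energy_penalty=0.20):
    """Effective CO2 coefficient per MMBtu of *delivered* energy.

    CCS captures a fraction of emissions but also consumes energy,
    so the remaining emissions are spread over less useful output.
    The energy_penalty value is an illustrative assumption.
    """
    return base * (1 - capture_rate) / (1 - energy_penalty)

print(ccs_adjusted_coefficient(COAL_CO2_KG_PER_MMBTU, 0.30))  # ~87.5 kg/MMBtu
print(ccs_adjusted_coefficient(COAL_CO2_KG_PER_MMBTU, 0.90))  # ~12.5 kg/MMBtu
```

Under these assumed numbers, 30% CCS coal (~87.5 kg/MMBtu) still emits far more than natural gas (~50 kg/MMBtu), while only 90% CCS (~12.5 kg/MMBtu) becomes environmentally competitive– matching the picture the graph paints.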

These numbers paint a scary picture of coal and are what cause many energy prognosticators to scoff at the utterance of ‘beautiful, clean coal,’ though it is important to be clear that these numbers don’t tell the whole story. While nuclear and renewable energy sources do not emit any fuel-related CO2, they are not completely carbon neutral over their lifetimes, as the building, operation, and maintenance of nuclear and renewable generation plants (as with any utility-scale generation source) all have their own non-zero effect on the environment. However, since fuel makes up the vast majority of carbon output in the electricity generation sector, any discussion of clean vs. dirty energy must return to these numbers.

Further, the separation of dispatchable vs. non-dispatchable technologies (i.e., energy sources whose output can be varied to follow demand vs. those that are tied to the availability of an intermittent resource) shown in the above graph is important. Until batteries and other energy-storage technologies advance technologically and economically to the point where they can help renewable (non-dispatchable) energy sources fill in the times when the energy resource is unavailable, dispatchable technologies will always be necessary to plug the gaps. So regardless of what drawbacks might exist for each of the dispatchable technologies– CO2 emissions and overall costs included– at least some dispatchable energy will still be critical in the coming decades.

Who is orchestrating the ‘war on coal’?

Even with the knowledge that coal will never truly be ‘clean,’ the question then becomes: why haven’t advancements that make coal energy cleaner and more efficient than traditional coal-fired plants become more prominent in the face of climate and environmental concerns? The common talking point from the Trump Administration is that a biased war on coal is being orchestrated, and that President Trump’s actions to roll back regulation are the only way to fight back against this unjust onslaught the coal industry is facing. But again, from where is this onslaught coming?

The answer to this question is actually pretty easy– it’s not regulation that is causing coal to lose its place as the king of the U.S. power sector, it’s competition from more affordable energy sources (that also happen to be cleaner). The two charts below demonstrate this pointedly, with the left graph showing the fuel makeup of the U.S. electric power sector since 1990 along with the relative carbon intensity of the major CO2-emitting fuel sources, while the right graph shows what’s happened to the price of each major fuel type over the past decade. The carbon intensity shown on the left graph is even more indicative than the first graph above in detailing the actual degree to which each fuel is ‘clean,’ as it factors in the efficiency of plants using the fuel and indicates the direct CO2 emissions relative to electricity delivered to customers.

Click to enlarge

Note that the costs are taken from this EIA chart, with coal taken from fossil steam, natural gas taken from gas turbine and small scale, and wind/solar taken as the gas turbine and small scale price after removing the cost of fuel. Electric power generation and carbon emission data taken from this EIA source

Just from analyzing these two graphs, a number of key observations and conclusions can be made about the electric power sector and coal’s evolving place in it:

  • In 1990, coal accounted for almost 1.6 million Gigawatt-hours (GWh) of power generation, representing 52% of the sector. By 2016, that figure dropped to 1.2 million GWh or 30% of U.S. power generation.
  • Over that same time period, natural gas went from less than 400,000 GWh (12%) to almost 1.4 million GWh (34%); nuclear went from less than 600,000 GWh (19%) to over 800,000 GWh (20%); and combined wind and solar went from 3,000 GWh (0.1%) to over 260,000 GWh (6%).
  • While the coal sector’s carbon intensity hovered around 1.0 kg of CO2 per kilowatt-hour (kWh) of electricity produced from 1990 to 2016 (even as CCS and other ‘clean coal’ technologies began to break into the market), natural gas dropped from 0.6 kg CO2/kWh to less than 0.5 kg CO2/kWh, while nuclear, wind, and solar do not have any emissions associated with their generation (again noting that there are some emissions associated with the operation and maintenance of these technologies, but they are negligible compared with fossil fuel-related emissions). The drop in natural gas carbon intensity combined with coal losing ground to natural gas, nuclear, and renewable energy led the electric power sector’s overall average carbon intensity to drop from over 0.6 kg CO2/kWh to less than 0.5 kg CO2/kWh.
  • While the narrative some would prefer to push is that coal is getting replaced because of a regulatory ‘war on coal,’ the real answer comes from the right graph, where the cost to generate a kWh of electricity from coal increased notably from 2006 to 2016. Meanwhile, natural gas (which started the decade more expensive than coal) experienced a drastic drop in price to become cheaper than coal (thanks to advances in natural gas production technologies), while the low cost of nuclear fuel and the ‘free’ fuel of wind and solar allowed these energy sources to remain well below the total cost of coal generation. This natural, free-market competition from other energy sources, thanks to increasingly widespread availability and ever-decreasing prices, is what put pressure on coal and ultimately led to natural gas dethroning coal as the predominant energy source in the U.S. power sector.
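The sector-wide carbon intensity figures in these bullets follow directly from a generation-weighted average. A minimal sketch of that calculation, using the approximate 2016 figures cited above (the 0.45 kg CO2/kWh value for natural gas is my assumption for ‘less than 0.5,’ and other sources like hydro and oil are omitted for simplicity):

```python
# Fleet-average carbon intensity = sum(generation_i * intensity_i) / total generation.
# Generation (GWh) and intensities (kg CO2/kWh) approximate the 2016 values cited
# above; 0.45 for natural gas is an assumed stand-in for "less than 0.5."
generation_gwh = {"coal": 1_200_000, "natural_gas": 1_400_000,
                  "nuclear": 800_000, "wind_solar": 260_000}
intensity_kg_per_kwh = {"coal": 1.0, "natural_gas": 0.45,
                        "nuclear": 0.0, "wind_solar": 0.0}

total_gwh = sum(generation_gwh.values())
weighted = sum(generation_gwh[f] * intensity_kg_per_kwh[f] for f in generation_gwh)
fleet_intensity = weighted / total_gwh  # kg CO2 per kWh across the whole fleet
print(f"Fleet average: {fleet_intensity:.2f} kg CO2/kWh")
```

Run with these rounded inputs, the result lands right around the ‘less than 0.5 kg CO2/kWh’ figure cited for the sector as a whole, showing how coal’s shrinking share drags the average down even though coal’s own intensity never moved.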

What these two graphs show is that the energy market is naturally evolving; there is no conspiratorial ‘war’ on coal. The technologies behind solar and wind are improving, getting cheaper, and becoming more prolific for economic, environmental, and accessibility reasons. Nuclear power is holding strong in its corner of the electricity market. Natural gas, more than any other, is getting cheaper and much more prominent in the U.S. power sector (while having the benefit of about half the CO2 emissions of coal), which is what has made it the natural ‘enemy’ of coal over the past decade or two. All that’s to say, the only ‘war on coal’ in recent memory is a capitalistic, free-market war that will naturally play out when new energy sources are available at cheaper prices and contribute significantly less to climate change.

Will Trump policies reverse the course of coal in the United States?

Going back to Trump’s State of the Union Address, he claimed that his Administration had ended the war on clean coal. As stated previously, there was never an outward war on coal that was hindering the fuel. Even still, the main policy change from the Trump Administration with regard to coal was to repeal the Clean Power Plan (CPP), which aimed to cut carbon emissions from power generation. However, many analysts predicted the repeal would not change the current trends, as it does nothing to reverse the pricing pattern of the fuels. Indeed, this week EIA released its Annual Energy Outlook for 2018 and confirmed the tough future that coal generation has compared with natural gas and renewables– both with and without the CPP. While the CPP reduces the projections of coal generation, it doesn’t move the needle all that much, and natural gas and renewables are still shown to surpass coal.

Source

So the major policy decision of the Trump Administration with respect to coal generation doesn’t appear to reverse the course of coal’s future. Again, this conclusion isn’t terribly surprising considering the economics of coal compared with other fuels. EIA projects the Levelized Cost of Electricity (LCOE) for different types of new power generation (assumed to be added in 2022), which serves to show the relative costs to install new power generation. In the same analysis, EIA projects the Levelized Avoided Cost of New Generation (LACE), which can be thought of as the ‘value’ of the new generation to the grid (for a more detailed description of the calculations and uses of these measures, read through the full report). When the LACE is equal to or greater than the LCOE, that is an indication of a financially viable type of power to build (evaluated over the lifetime of the plant). So by looking at the relative costs (LCOE) of each power type and whether or not they are exceeded by their values (LACE), we can get a clear picture of what fuel types are going to be built in the coming years (and to continue the focus on whether coal or other fuels are ‘clean,’ let’s put the economics graph side-by-side with the CO2 emissions coefficients):
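The viability rule described above– build when the value (LACE) meets or exceeds the cost (LCOE)– amounts to a simple screen. A sketch of that screen in code, with placeholder dollar figures that are illustrative only, not EIA’s actual projections:

```python
# A plant type is economically attractive when its levelized avoided cost
# (LACE, the "value" it brings to the grid) meets or exceeds its levelized
# cost of electricity (LCOE). All $/MWh figures below are invented
# placeholders for illustration, not EIA projections.
plants = {
    "coal_90pct_ccs": {"lcoe": 120, "lace": 60},
    "natural_gas_cc": {"lcoe": 55, "lace": 60},
    "onshore_wind":   {"lcoe": 50, "lace": 55},
}

def is_viable(plant):
    """LACE/LCOE ratio of 1 or more indicates a financially viable build."""
    return plant["lace"] >= plant["lcoe"]

for name, p in plants.items():
    verdict = "viable" if is_viable(p) else "not viable"
    print(f"{name}: {verdict} (value/cost ratio {p['lace'] / p['lcoe']:.2f})")
```

With placeholder numbers shaped like EIA’s findings, the coal-with-CCS entry fails the screen while gas and wind pass– which is the whole story of the graphs that follow.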


Note that the source of the data on the left graph is the EIA Levelized Cost of Electricity analysis, with the ends of the boxes representing the minimum and maximum values and the line in the middle representing the average– the difference in possible values comes from variations in power plants, such as geographic differences in availability and cost of fuel. Also note that, counter-intuitively, EIA’s assumed costs for 30% CCS are actually greater than for 90% CCS because the 30% CCS coal plants would ‘still be considered a high emitter relative to other new sources and thus may continue to face potential financial risk if carbon emissions controls are further strengthened.’ Again, the data for the right graph takes CO2 emission coefficients from this EIA listing by fuel type.

Looking at these graphs, we can see that the cost of new coal generation (regardless of CCS level) not only exceeds the value it would bring to the grid, but also largely exceeds the cost of natural gas, nuclear, geothermal, biomass, onshore wind, solar photovoltaic (PV), and hydroelectric power (all of which emit less CO2 than coal). Thus, even in the scenario where 90% of carbon is captured by CCS (which allows coal to be ‘cleaner’ than natural gas and biomass), coal still comes at a significant cost premium compared with most of the other fuel types. These are the facts that are putting the hurt on the coal industry, not any policy-based ‘war on coal.’ Even the existing tax credits given to renewable energy generation are minor when looking at the big picture, as the below graph (which repeats the above graph but removes the renewable tax credits from the equation) shows. Even if these tax credits are allowed to expire, the renewable technologies would still outperform coal both economically and environmentally.


The last graphical rebuttal to President Trump’s statement on energy and coal during the State of the Union that I’ll cite comes from Tyler Norris, a DOE adviser under President Obama:

Source

As pointed out by Norris and other energy journalists chiming in during the State of the Union address, if the goal were to expand ‘clean coal,’ then the Trump Administration’s budget is doing the opposite by taking money away from DOE programs that support the research and development of the technology. In fact, at the end of last week a leaked White House budget proposal indicated even further slashes to the DOE budget that would further hamper the ability of the government to give a leg up to the development of ‘clean coal’ technology. Any war on energy is coming from the Trump Administration, and any battle that coal is fighting is coming from the free market of cheaper and cleaner fuels.

Sources and additional reading

20 Years of Carbon Capture and Storage: International Energy Agency

Annual Energy Outlook 2018: Energy Information Administration

Average Power Plant Operating Expenses for Major U.S. Investor-Owned Electric Utilities, 2006 through 2016: Energy Information Administration

Carbon Dioxide Emissions Coefficients: Energy Information Administration

Did Trump End the War on Clean Coal? Fact-Checking the President’s State of the Union Claim: Newsweek

How Does Clean Coal Work? Popular Mechanics

How much carbon dioxide is produced per kilowatthour when generating electricity with fossil fuels? Energy Information Administration

Is There Really Such a Thing as Clean Coal? Big Think

Levelized Cost and Levelized Avoided Cost of New Generation Resources in the Annual Energy Outlook 2017: Energy Information Administration

Trump touts end of ‘war on beautiful, clean coal’ in State of the Union: Utility Dive

Trump’s Deceptive Energy Policy: New York Times

What is clean coal technology: How Stuff Works

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Super Bowl Sunday and Electricity Demand: What Happens in Cities with Super Bowl Teams and Host Cities?

The Super Bowl is upon us once again, along with all the fun sideshows that go with it. There are few events in American culture that bring as many people collectively around a single event quite like the Super Bowl, with even non-football fans gorging on fatty foods, enjoying the commercials, and relishing in the excuse to attend parties on a Sunday evening. The grip that Super Bowl Sunday has on our group consciousness allows for some interesting analysis of data (how much food do we collectively eat?) and myths (despite what I heard on the schoolyard growing up, the simultaneous flushing of toilets during halftime has not actually caused damage to sewer systems).

Thinking of the ‘everyone flushing the toilet at the same time’ myth got me wondering about how electricity demand as a whole is affected by the Super Bowl, particularly in the regions whose teams made the big game (and presumably cause even more of the population to tune in) and the region hosting the Super Bowl. Indeed, grid operators like ISO New England recognize that ‘even when the game is thousands of miles away, the Super Bowl can have a big impact on regional electricity demand with spikes and dips throughout the game,’ requiring them to monitor the demand closely throughout the day.

So just as I started this football season analyzing the sustainability-ranking of each team, I’ll end it by analyzing the championship game in energy terms. Going into this analysis, I expected that a city/region having their team in the Super Bowl, or hosting the festivities, would lead to a definitive increase in power demand– but keep reading to see why I was surprised to find that assumption was misguided.



Graphical Results

We’ll jump right into the graphical results of this analysis– if you’re interested in reading the methodology, head down to the Methodology section now. The methodology section will also answer where the data came from and why for Super Bowls from 2015 and earlier there isn’t data available for each of the three relevant cities (two participant teams and the host city).

Super Bowl 51

Starting with the most recent Super Bowl and working backwards, first up is Super Bowl 51. This game saw the New England Patriots defeat the Atlanta Falcons in Houston, Texas, in the largest comeback in Super Bowl history. Below is the graph of electricity use in the power regions that are home to Boston, Atlanta, and Houston compared with a typical Sunday of comparable weather (note that all times displayed in this and other graphs are in Eastern Standard Time even when the region in question is in a different timezone):

For this specific Super Bowl, the electricity demand in all three regions is mostly lower than a normal Sunday for the whole day, though the demand of the Atlanta fans drops even lower than normal come game time, and we see the New England electricity demand increase compared with normal as the game continues. As will be discussed later, this difference in how the two cities reacted over the course of the game likely reflects the attitudes and activities of each fan base toward what was looking like a blowout victory for Atlanta.

Super Bowl 50

Super Bowl 50 featured the Denver Broncos defeating the Carolina Panthers in Santa Clara, California. Comparing the electricity use this Super Bowl Sunday with a typical winter Sunday in the power regions that contain Denver, Charlotte, and Santa Clara gives the following visual:

In what we’ll find is a more typical effect of Super Bowl Sunday, the electricity use in both Denver and Santa Clara saw an increase from normal use early in the day, only to fall below average during the time when the game was on. Panthers fans, however, showed an unparalleled increase in power demand compared with a normal Sunday all day– especially high during the afternoon lead-up to the game, and notably dropping during the game.

Super Bowl 49

Working backwards, Super Bowl 49 is the first instance where we find that data is not available for all three regions (see Methodology section for an explanation). In this game, which found the New England Patriots defeating the Seattle Seahawks in Glendale, AZ on a game-ending interception in the end zone, we only have data from the New England power region to consider:

In looking at the New England electricity demand, we find a peak compared with normal early in the day and a general increase compared with normal over the course of the game.

Super Bowl 48

For Super Bowl 48, where the Seattle Seahawks dominated the Denver Broncos all game in East Rutherford, NJ, the only available data was for the power system that is home to East Rutherford.

Here we again find a peak in electricity demand compared with normal early in the day, which dissipates and eventually leads to lower electricity used during the actual playing of the Super Bowl compared with a normal day.

Super Bowl 47

Last but not least is Super Bowl 47, featuring the Baltimore Ravens defeating the San Francisco 49ers in New Orleans, LA. I was particularly looking forward to gathering the data from this one (and was disappointed to find only the Baltimore area data available) because this is the game that infamously featured a power outage in the stadium that delayed the game by over half an hour. I was hoping specifically for the New Orleans data to see what the electricity demand looked like before and after the blackout, but it was not meant to be.

However, we can see from the Baltimore data a peak in electricity use compared with normal early in the day and a distinct drop-off as the game is set to begin and throughout the course of the game. Because the data provided is hourly, it’s not clear if there was any effect during the half-hour delay in the Super Bowl, but it looks like people in Baltimore continued whatever it was they were doing during the power outage in New Orleans, rather than deciding to use the break in action to start up the dishwasher or the clothes dryer.

Conclusions

General trends

Interestingly, we don’t find one iron-clad trend that weaves its way through the entire data set analyzed, though there are some patterns.

  • For regions with teams in the Super Bowl, four out of six (Baltimore in 2013, New England in 2015, Denver in 2016, and Carolina in 2016) show an increase in electricity use during the lead-up to the game, while four out of six (Baltimore in 2013, Denver in 2016, New England in 2017, and Atlanta in 2017) show a decrease in electricity use during the game.
  • For the regions hosting the Super Bowl, a similar trend is found. Two out of three host regions (East Rutherford in 2014 and Santa Clara in 2016) showed an increased electricity demand in the hours preceding the game, while all three host regions showed a general decrease in electricity demand during the game.

While these data are not complete or detailed enough to make definitive conclusions (in addition to the lack of more years of historical data, controlling for the weather is difficult since some of the wider regions have more varied temperatures across their territory, making it harder to ensure that weather is not causing the electricity fluctuations as a whole), they do generally follow the results of U.S.-wide studies. A study by Outlier, working with utilities during Super Bowl 46, found the following:

More specifically, versus a typical Sunday afternoon/evening in the winter, home power usage was 5 percent lower during the Super Bowl, with big consequences for overall energy use:

Source

Going further, ISO New England’s minute-by-minute graphical analysis during Super Bowls 49 and 50 show the types of effects the big moments like the start, halftime, and end of the game have on the total demand load (and also serve to solidify that the effects are more pronounced when a region’s local team is in the game!)

Source

 

Explanation of the trends

The conclusion of less electricity usage over the course of the Super Bowl may sound surprising at first, given that it’s an event centered around an electronic device, the TV, but when you break it down it really makes sense. While it’s true that Americans gather around the television, they are often doing so collectively– going to parties or bars. So while the Super Bowl is often uncontested as the most-watched television program of the year, that does not necessarily lead to an increase in the number of television sets being powered, as people congregate around TVs together. These effects are going to be even more drastic if a local team is in the game (drawing in the more casual viewer) or if the game is being played locally (meaning more people will be in or near the stadium to enjoy the festivities).

Further, just because people are turning on their TVs does not mean that household energy use is going up. That is because TVs require less than 400 Watts (W), and sometimes as little as 20 W, compared with energy-hungry appliances like the vacuum (650 W), washing machine (2,500 W), or water heater (4,000 W). During the Super Bowl, when the TVs are on, households are significantly less likely to be using these more electricity-consumptive appliances (not to mention many households would regularly have their TVs on during these hours anyway), and thus overall electricity demand noticeably drops.
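The arithmetic behind that claim is simple enough to sketch. Using the wattages cited above, compare a household’s instantaneous draw while running chores against one where only the TV is on:

```python
# Instantaneous household draw (Watts) in two scenarios, using the
# appliance wattages cited in the text. A simplified snapshot: only the
# listed appliances are counted, and the TV is taken at its 400 W high end.
chore_hour = {"washing_machine": 2500, "vacuum": 650, "tv": 0}
game_hour  = {"washing_machine": 0, "vacuum": 0, "tv": 400}

chore_load = sum(chore_hour.values())
game_load = sum(game_hour.values())
drop_pct = 100 * (1 - game_load / chore_load)
print(f"Chore hour: {chore_load} W | Game hour: {game_load} W "
      f"({drop_pct:.0f}% lower)")
```

Even with the biggest TV on the market, watching the game draws a small fraction of the power of an hour of laundry and vacuuming– which is why demand dips once kickoff arrives.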

That combination of people gathering together as opposed to being in separate households and using TVs instead of other appliances satisfyingly explains the drop in power use during the game. We could also conjecture that power use goes up before the game as people get energy-intensive chores (washing clothes, vacuuming, washing dishes, etc.) done earlier in the day before heading out to their Super Bowl gathering. They might also be preparing food to enjoy during the big game, using their ovens/microwaves/stove tops in these early afternoon hours when they would not normally be in the kitchen.

Exceptions to the trends

Though the previously discussed trends were found in a majority of the cases analyzed in this article, there were a couple that bucked the trend. Specifically, analyzing the electricity use on Super Bowl Sunday compared with a comparable Sunday found that:

  • In 2017, both New England and Atlanta, as well as host city Houston, had lower than normal electricity demand in the hours before the game;
  • In 2016, the Carolina region saw a large peak in electricity use compared with normal in the afternoon leading up to the game; and
  • In 2015, New England had increased electricity demand the morning of the Super Bowl as well as during the game.

There are a number of potential reasons that these specific instances did not follow the trends found elsewhere. The main one could be that while the average temperature used to find a comparable Sunday was close to the temperature on Super Bowl Sunday, there could have been wildly varying temperatures in different parts of the region or at different times of the day that prompted heating or cooling systems to be ramped up. Without the availability of hourly temperature data and/or the analysis of temperature data from many cities within a region, it is impossible to know for sure. Further, grid operators also monitor aspects of weather like dew point, precipitation, cloud cover, and wind to predict electricity demand– which would be significantly more difficult for me to control for here. So for aberrations outside of the expected trends, these types of weather effects are the most likely culprit.

Another interesting explanation to look for is how captivating a particular game might have been. In its analysis of Super Bowl energy numbers, MISO notes that the more captivated and glued to their seats watchers are during the game, the more the demand will remain steady and low. As soon as people start to get up and do other things in the house (either because it’s halftime or the game is uninteresting), they notice a real uptick in electricity demand. This effect could perhaps explain why electricity use started to move closer to normal levels in New England in 2017 when the Patriots were facing a seemingly insurmountable deficit, and it could also explain why electricity demand started to increase compared with normal in Carolina in 2016 about midway through the game (while never down by more than 10 until the closing minutes, more casual Panthers fans might have been frustrated with their team’s lackluster offense and inability to score more than 7 points through the third quarter and tuned out to partake in more energy-intensive activities).

Methodology

Availability of data

The availability of a region’s electricity demand data depends on the entities that deliver energy and how far back in time you are looking. At the suggestion of the Federal Energy Regulatory Commission (FERC), a number of regional transmission organizations (RTOs) and independent system operators (ISOs) have been established in the United States to coordinate, control, and monitor complex and sometimes multi-state grid systems. One result of these systems is that they often make hourly electricity demand data publicly available going back a number of years, which allows us to look back on some of the regions of the participants/hosts of the Super Bowl. The cities/years where those data are available are shown in the table below.

For regions that are not a part of RTOs or ISOs, the electric companies unfortunately rarely make the same type of data public. However, as a proxy we can utilize the Energy Information Administration’s (EIA) Electric System Operating Data tool. While it only goes back to the summer of 2015, it does provide the same type of hourly electricity demand data for regions and utilities outside of RTOs/ISOs. So where needed, this data is used as well, as indicated in the below table.

When going back to Super Bowl 49 and earlier, some data become unavailable and those cities are not included in the analysis, shown below as ‘not available.’

 

For links to each of the electricity data sources listed in the above table, go to the ‘Sources and additional reading’ section.

Finding a reasonable day for comparison

To determine the changes in electricity demand attributed to each region for each Super Bowl analyzed, a reasonable day for comparison was found in each region using the following criteria:

  • As pointed out in the previous post analyzing electricity usage during a federal government shutdown, nothing will affect a region’s power demand more than the weather. If all buildings and homes are turning up either the air conditioning or the heat, that will have a greater effect on electricity usage than anything else– even an event as large as the Super Bowl. With the goal of comparing electricity demand on Super Bowl Sunday with other days and controlling for other factors, the methodology used was to assure that the comparison day chosen had an average temperature as close as possible to the average temperature on the day of the Super Bowl in that region. As a rough proxy, the average temperature on the day of the Super Bowl in the major city associated with each team/region was found on Weather Underground, and the goal was to find a comparison day with an average temperature within a few degrees Fahrenheit;
  • There are also distinct patterns to electricity demand depending on the day of the week, so the comparable day chosen was always made to be a Sunday; and
  • Lastly, the comparable day chosen was kept within one to three weeks of the Super Bowl (either before or after), while avoiding any Sunday that had a playoff football game for the region’s home team, to assure any other externalities were kept as constant as possible.
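The three criteria above amount to a filter-and-rank: keep only Sundays in the right window, drop playoff dates, then take the closest temperature match. A sketch of that selection, where all dates and temperatures are hypothetical illustrations rather than the actual values used:

```python
from datetime import date

# Pick the comparison day per the criteria above: must be a Sunday, within
# one to three weeks of the game, not a home-team playoff Sunday, and the
# closest average temperature to game day. Dates/temps are hypothetical.
def pick_comparison_sunday(game_day, game_temp_f, candidates, playoff_sundays):
    valid = []
    for day, avg_temp_f in candidates:
        gap_days = abs((day - game_day).days)
        if day.weekday() != 6:           # criterion: must be a Sunday
            continue
        if not 7 <= gap_days <= 21:      # criterion: within one to three weeks
            continue
        if day in playoff_sundays:       # criterion: skip playoff Sundays
            continue
        valid.append((abs(avg_temp_f - game_temp_f), day))
    return min(valid)[1] if valid else None  # best temperature match

game = date(2017, 2, 5)  # Super Bowl 51
candidates = [(date(2017, 1, 22), 48), (date(2017, 1, 29), 41),
              (date(2017, 2, 12), 44), (date(2017, 2, 19), 60)]
print(pick_comparison_sunday(game, 42, candidates, {date(2017, 1, 22)}))
```

Here Jan 22 is excluded as a playoff Sunday despite its valid window, and Jan 29 wins on temperature– mirroring how the actual comparison days in the table below were chosen by hand.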

With those criteria in mind, the following were the days used for comparison to Super Bowl Sunday in each region:


Graphical comparisons

Once the hourly data for each Super Bowl Sunday and the chosen comparable dates were pulled, the hour-by-hour comparison was calculated as a simple percentage change from the regular non-Super Bowl Sunday. These percentages are what are ultimately graphed on an hourly basis, with up to three regions (depending on how many were available) on the same graph to see if there are any trends based on the cities. A similar comparison was not included for overall U.S. electricity trends because the large and varied geography of the United States makes controlling for the effects of weather on electricity demand much more complicated (however, as noted earlier, a study that looked at thousands of households during the 2012 Super Bowl found that on an overall basis, electricity demand increases on Super Bowl Sunday in the hours before the game and decreases once the game begins).
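The hour-by-hour calculation itself is just a percentage change against the baseline Sunday. A minimal sketch, with invented hourly demand figures standing in for the real regional data:

```python
# Percent change in hourly electricity demand: Super Bowl Sunday vs. the
# chosen comparison Sunday. Demand figures (MWh per hour) are invented
# for illustration, covering four sample hours.
superbowl_mwh = [12000, 12500, 14000, 13500]
baseline_mwh  = [12500, 12000, 13000, 14500]

pct_change = [100 * (sb - base) / base
              for sb, base in zip(superbowl_mwh, baseline_mwh)]
for hour, pct in enumerate(pct_change):
    print(f"Hour {hour}: {pct:+.1f}% vs. normal Sunday")
```

A negative value at a given hour means the region used less power than on the comparison day– the signature we saw during most of the games above.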

Sources and additional reading

5 Facts About Energy During the Big Game: MISO

Baltimore Gas & Electric: PJM RTO

California ISO: Pacific Gas & Electric electricity demand

Carolinas region electricity demand (EIA)

Energy Reliability Council of Texas (RTO) Coastal Region Electricity Data

How a Patriots Super Bowl affects the region’s power grid: ISO Newswire

How the Super Bowl saves energy: ABB

New England ISO Electricity Data

Northwestern region electricity demand (EIA)

Public Service Company of Colorado (EIA)

Public Service Electric & Gas Company: PJM RTO

Regional Transmission Organizations (RTO)/Independent System Operators (ISO): FERC

Southeastern region electricity demand (EIA)

Why people use less energy on Super Bowl Sunday: Washington Post

 


Energy for Future Presidents: The Science Behind the Headlines

I had come across Energy for Future Presidents: The Science Behind the Headlines by Richard A. Muller in a bookstore about a year and a half ago and immediately put it on my to-read list. Assuming I would be able to pick it up the next time I was in the store, I did not buy it that day and ended up not finding it in any bookstore I went to for the next year. However, the concept of the book– giving an overview of every type of energy technology and policy that might be relevant in the coming years to future leaders with non-scientific backgrounds– is so important to me that I finally ended up caving and buying it off Amazon.

All in all, this book provides an excellent overview of the landscape of the energy industry and associated public policies, doing so in a way that is accessible and easy to grasp for people who are completely unfamiliar with the topics, while also going in depth in a way that still provides useful and new insights to those who are immersed in the energy world. If there’s one main gripe I have with Energy for Future Presidents, it’s that it was published in 2012, and thus a number of its analyses and conclusions are based on data and technology from even before that year. Obviously that’s not Muller’s fault, and it only got exacerbated by my own delay in finally reading the book, but it’s worth bringing up for anyone who is seeking the latest and most up-to-date information.

Overall, any energy-related nitpicking I have with information in the book is minor compared with the overall success I think Muller found in covering a wide variety of topics for a curious but not-scientifically-based audience– from climate change to the Fukushima disaster, solar energy to synfuels, and electric vehicles to energy productivity. To frame the book on its titular goal of educating ‘future Presidents’ on energy, Muller importantly highlights that it’s not just important that the President be scientifically literate on energy topics, but he or she must also know the science well enough to explain it to the public and Congress to inform the decisions that are ultimately made. Not only that, but often a President’s scientific advisers might disagree, and the President will need to know the basics well enough to make the best decisions. In that respect, Muller spends a majority of the book providing the data and facts, but he also can’t help himself from providing his own opinion as a scientific adviser. This provides the reader a fun opportunity to try out that exact role of the President– take in what the adviser is suggesting and, knowing the facts behind it all, determine whether he or she agrees with the subsequent advice. I know I didn’t agree with every piece of advice that Muller gave, but that’s to be expected when discussing such hotly debated topics, and it certainly did not take away from my enjoyment of the book.



Highlights

  • Energy Disasters: Regarding energy-related disasters, such as Fukushima and the Gulf oil spill, Muller suggests that the safe, conservative action of politicians in the immediate aftermath is to declare the incidents extremely severe emergencies, as downplaying them and later being proven wrong would be a political and PR disaster. However, he goes through a number of these incidents and shows how, after the data is crunched, the real effects of the disasters are often much less significant than what they are initially made out to be. Not only does this do a disservice in diverting resources where they weren’t needed, but the panic caused by such grandiose declarations could end up doing more harm than good (e.g., unnecessary evacuations disrupting communities or overreactions to potential environmental effects harming tourism when it’s not warranted). His detailing of what ‘conservative’ estimates regarding disasters really mean, and how such estimates inherently harbor the biases of those making them, was particularly interesting and showed why a President should demand ‘best’ estimates in lieu of ‘conservative’ ones.


Source

  • Radiation Risks: Specifically regarding the risk for disasters at nuclear power plants and subsequent radiation, Muller details how the city of Denver already has a natural dose of radiation (0.3 rem per year) and suggests using this natural dosage of radiation as a tent pole where any nuclear incident that is found to cause this much radiation or less should not be one to cause panic or action (as has happened in previous nuclear incidents where the panicked reaction came from not understanding this type of natural radiation). Again, it’s important not only for the President to understand this but also to be able to educate and lead the public on the topic.
  • Climate Change: I appreciated Muller’s careful attention to climate change, stressing the idea that an individual cannot sense the temperature variations attributed to climate change on their own because of the difference between weather and climate, and how the part that actually matters is the subtle rise of global average temperatures (a basic distinction that frustratingly gets misunderstood and is often cited by those claiming climate change isn’t happening because it was particularly cold or snowy on a given day in a given location). Further, Muller’s detailing of how he was once labeled as a climate change skeptic was eye-opening, when in reality he did not find himself on one side or the other– rather he was just pushing for certain aspects of the data and science to be strengthened before any conclusions were made. The stories of this time in climate research illuminated just how committed he was to the science behind any policy, regardless of how it was labeled by the media or by his peers, and stresses how important it is that basic science take the lead and not any particular policy or conclusion that we might hope to be correct. Ultimately, Muller adds his voice to those scientists who have concluded that humans are causing catastrophic climate change and certain actions must be taken before it’s too late.
  • Emissions in Developing vs. Developed Countries: In terms of political solutions for climate change, Muller highlights how global an issue it is (the United States cutting its emissions in half won’t mean much if the rest of the world doesn’t follow suit) and points out how a dollar spent in China can reduce carbon dioxide emissions much more than that same dollar spent in America. As a result, Muller suggests subsidizing China’s efforts– an interesting and data-backed idea, though I would be curious to see how a President could sell the public on such a strategy in today’s political environment. Further, when Muller laid out the economics of certain energy technologies and how they worked in the United States compared with a developing nation like India or China, I was surprised to learn that the cheaper cost of labor in the emerging economies actually flips the script on what solutions are viable (e.g., cheaper labor for solar panel manufacturing and installation means that solar energy can be much more competitive with natural gas in China, whereas the higher labor costs in the United States do not allow solar such an advantage).
  • Energy Conservation: Knowing just how much information Muller was trying to cram into this book without making it too dense and cumbersome, I especially appreciated the attention he gave to topics like recycled energy and conservation. In particular, Muller details how much economic sense it makes, on both an individual and a macro basis, to grab the low-hanging fruit of energy efficiency– even to the point that it makes financial sense for a power company to subsidize the public’s energy efficiency measures in order to get the best return on investment (ROI) for its money. The part of the story I was previously unaware of (showing my young age) was how President Carter’s attempts to promote energy conservation during the 1979 oil crisis gave most of the public a bad taste regarding energy conservation, as they equated it with a decrease in comfort and quality of life. Once the crisis was over, Americans turned the thermostats back up almost in defiance of the false choice the government had inadvertently presented between energy conservation and comfort. Changing people’s preconceptions of energy conservation– showing how it can be a personal money-maker while not affecting quality of life at all– is one of the most important tasks Muller assigns to the future Presidents reading this book. He does so himself by showing how something simple (but unfortunately not flashy) like installing insulation in attics across America would have a 17.8% annual ROI, while switching light bulbs to CFLs would have a 209% ROI.

Source

  • Natural Gas: The sections on natural gas were among the most immediately relevant and critical parts of Energy for Future Presidents, notably Muller’s discussions of the U.S. shale gas boom, natural gas’s imminent supplanting of coal as the largest fuel source for the power generating sector (which hadn’t yet occurred in 2012, but has now happened), the importance of natural gas– a fuel roughly 50% less emissive than coal– as a middle ground, and the challenges facing, and optimism surrounding, natural gas vehicles. Regarding the environmental concerns of shale gas drilling, the sentence that stuck with me as a guiding principle was the following: “companies have a financial incentive not to spend money unless their competition also has to spend money; that means the solution to fracking pollution is regulation.”
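Looping back to the Energy Conservation bullet for a moment: the ROI figures Muller quotes are simple annual-return arithmetic, which can be sketched in a few lines of Python. The dollar amounts below are hypothetical, chosen only to reproduce the percentages cited:

```python
def simple_annual_roi(upfront_cost, annual_savings):
    """Annual return on investment, as a percentage of the upfront cost."""
    return 100.0 * annual_savings / upfront_cost

# Hypothetical example: a $1,000 attic-insulation job that trims $178/year
# off heating and cooling bills matches the quoted 17.8% annual ROI.
insulation_roi = simple_annual_roi(1000, 178)

# A $3 CFL bulb saving about $6.27/year in electricity returns roughly 209%.
cfl_roi = simple_annual_roi(3, 6.27)

print(round(insulation_roi, 1), round(cfl_roi))  # → 17.8 209
```

The striking part isn’t the math, of course, but that returns this large go unclaimed because efficiency isn’t flashy.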

 

Nitpicks

  • Extreme Weather Events from Climate Change: Going back to the journey Muller undertook with regard to climate science, one aspect he still resists is the linking of climate change to extreme weather events like hurricanes or wildfires. Muller says none of these phenomena are evidence of human-caused climate change, and that linking them with climate change only harms the cause because it’s too easy for skeptics to debunk these connections, undercutting the rest of the science that is sound (caution no doubt learned from the ‘Climate-gate’ incident of scientists hiding discordant data and the 2007 IPCC report that incorrectly stated that Himalayan glaciers might melt from global warming). Muller’s position is that we should simply rest on the temperature data as evidence, since those data are solid. The issue I take with this is that the effects of climate change on extreme weather events are important to know and consider when looking at the full gamut of motivations to stop climate change. While it is true that you cannot yet link specific hurricanes or other weather events to climate change, the science behind climate change driving an increase in extreme weather events has been growing in recent years. Ignoring these impacts, as Muller does, is a disservice to the entirety of climate science and to the efforts to contain these extreme events.

Source

  • Oil Prices: Towards the end of the section on liquid oil products, Muller asks “how high can the price of oil go? In the long term, it should not be able to stay above the synfuel price of $60 per barrel…That period of limbo is where we are now and the Saudis are worried.” This was an interesting point given that I was reading it six years after the book was published and could look at what has happened to oil prices since then. In the short term, Muller was right to question whether oil could stay above $60 per barrel: by 2015, the prices of both West Texas Intermediate and Brent crude oil had fallen below $60 per barrel again. In that respect, Muller appeared prescient. However, it’s the idea that oil prices would simply continue unhampered along that trend that I have to nitpick, as Muller didn’t include any consideration of collective action by the Organization of Petroleum Exporting Countries (OPEC). OPEC largely operates as a cartel of countries that depend on high oil prices and attempt to control the supply of oil in order to control those prices. As Muller noted, the Saudis were worried, and so they (with the rest of OPEC) took action. In November 2016, OPEC agreed on a quota of oil production among its members and a couple of non-member nations, an agreement that has at this point been extended through the end of 2018. The collective action of these oil-producing nations, as well as the response of countries outside of OPEC (namely the United States), will have a significant impact on oil prices in the coming years and decades. Any assumptions about energy prices that don’t consider the power OPEC wields aren’t telling the entire picture.

  • Electric Vehicles: The nitpick I found most vexing, which Muller himself identified as the part of the book likely to ruffle the most feathers, was his outlook on electric vehicles and how important (or rather, not terribly important) reducing emissions from the transportation sector is. The point Muller kept circling back to was the assertion that U.S. automobiles have contributed about 1/40 of a degree Celsius to global warming, and that over the next 50 years the United States would likely be able to keep the additional warming to another 1/40 of a degree Celsius with reasonable efficiency standards. What I found frustrating about Muller’s take on what he called the ‘fad’ of electric cars is how dismissive he seemed of their potential impact. First, discussing only the climate impact of vehicles in the United States seems intentionally narrow, as U.S. car sales account for only about 19% of global car sales (the below chart shows the top eight countries in terms of the percentage of the vehicle fleet made up of electric vehicles). While U.S. policy regarding vehicle efficiency would only affect the cars that can and cannot be sold domestically, the advancement of electric vehicles worldwide (particularly in China, India, and Europe, where long driving range matters less to consumers than it does in America) will have an even more significant climate impact. Policies that help companies develop the technology will boost electric vehicle sales worldwide and will have much more of a climate impact than the 1/40 of a degree Celsius that Muller predicts. Further, his rundown of the costs of electric cars vs. traditional internal combustion engine vehicles seems overly pessimistic about the technology and about how costs will drop as mass production increases and battery technology exceeds its current capabilities.
I agree with Muller that hybrid-electric vehicles are going to be immensely important in the nearer term, but dismissing electric cars in the long term seems overly shortsighted.

Source

Rating

  • Content- 4/5: This book serves as a great primer on a satisfactorily wide swath of energy topics, while providing useful new insights for people who are already familiar with the basics. You will certainly come away having learned something that surprises and interests you. However, the nitpicks that I previously listed are too strong for me to assign a 5/5 for the content– but the highlights are all great enough that no less than a 4/5 felt appropriate.
  • Readability- 5/5: Muller goes out of his way to explain the various topics to an audience that might not be technically literate, in a way that makes reading and learning from the book a breeze. Each individual chapter and section isn’t terribly long, so not only do you feel accomplished as you make your way through, but the book also serves as a useful reference later on if you want to brush up on any specific topic.
  • Authority- 4/5: As noted earlier, one of the difficulties I had with this book is not the fault of the author at all– simply that it was published six years ago. The landscape of energy technologies and markets is rapidly evolving, so while the basics all still apply, there were issues here and there caused simply by the book not being fully up to date. But on the technologies and the politics, Muller commands strong authority from his background as a physicist and his work in climate science.
  • FINAL RATING- 4.3/5: If you’re seeking a single book to give you a broad background on energy technologies, policies, and markets to inform your reading of the headlines of the day, this book is a terrific one to pick up. As Muller advises in the book, everybody comes to the table with their own set of biases– and the only criticism I have is that sometimes Muller’s own biases become apparent (though surely that’s also just me reading the book with my own biases!). Energy for Future Presidents can serve both as a thorough read and as a reference for various technologies, so for that reason it’s a worthy book to add to your personal library.

If you’re interested in following what else I’m reading, even outside of energy-related topics, feel free to follow me on Goodreads. Should this review compel you to pick up Energy for Future Presidents by Richard A. Muller, please consider buying on Amazon through the links on this page. I’m also going to run a giveaway for this book– if you want to enter for a chance to receive a copy of this book, there are two ways: 1) Subscribe to this blog and leave a comment on this page and 2) go to my Twitter account and retweet the tweet that links to this review. Feel free to enter both ways in order to double your chances of winning! The winner will be contacted by the end of February. 

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

DOE Spotlight: Federal Energy Regulatory Commission

The Federal Energy Regulatory Commission, or FERC, is an independent agency charged with specific regulatory oversight of the energy industry, specifically the interstate trade of energy (i.e., natural gas, oil, and electricity) and the review of proposals for certain energy infrastructure, including liquefied natural gas (LNG) terminals, interstate natural gas pipelines, and hydropower projects. Since its inception, FERC has played a critical role in the regulation (and deregulation) of the energy industry, though its public profile has ranged from somewhat in the background, where only those in the industry paid it any attention, to a notable political presence in the news, when the issues at hand became more mainstream.

Recently, FERC has made headlines after being tasked by the Trump Administration to investigate grid reliability concerns and whether coal and nuclear plants should be propped up monetarily for their ability to store fuel on site. While that proposal was ultimately rejected (as will be discussed later), it did bring FERC to the forefront of many headlines and debates, while also illuminating how little FERC is really understood in the mainstream.



With that in mind, what follows is a primer on what you need to know about FERC to understand its history and role in energy markets for the next time the Commission pops up in a front page news article.

History of FERC

In its current form, FERC was established in 1977, the same year as the Department of Energy (DOE). However, the Commission traces its lineage back to the 1920s and the establishment of the Federal Power Commission (FPC). The federal government established the FPC to oversee hydropower dams that were located on federally owned land or affected federal waters. Hydropower, which had existed in the form of rudimentary water wheels for over 2,000 years, had become more industrialized and critical in the United States with the increased demand for wartime electricity, making the FPC the first federal regulatory agency for energy in the United States, one that sought to encourage hydropower projects while protecting federal lands, waterways, and water sources.

In the next decade, President Franklin Roosevelt took on the cause of dismantling the monopolies of the electric companies. With that goal, Congress passed the Federal Power Act in 1935. This legislation expanded the power of the FPC to set wholesale electricity prices at levels it deemed ‘just and reasonable.’ President Roosevelt’s next legislative push was the 1938 Natural Gas Act, which gave the FPC the additional authority to regulate the sale and transport of natural gas.

Source

FDR’s initial plan to expand the regulatory power of the FPC and neutralize the monopolies in the electricity sector continued for decades, with the FPC’s role gradually expanding to include regulation of natural gas facilities, transmission of power over state lines, and more. The next drastic change came in the wake of the oil crisis of 1973, which highlighted the need to consolidate the government’s energy functions, at that time spread across more than 30 different agencies, under one umbrella. That umbrella was the U.S. Department of Energy, formed in 1977 by the Department of Energy Organization Act. Included in the establishment of DOE was the founding of the Federal Energy Regulatory Commission to replace the FPC. The mission of FERC was very similar to the mission that had evolved at the FPC: to ensure the wholesale prices being paid for electricity were fair.

Following in the footsteps of its predecessor agency, FERC continued to gather new responsibilities over the years:

  • The Public Utility Regulatory Policies Act of 1978 tasked FERC with managing a program to develop new cogeneration and small power production, as well as regulating wellhead gas sales;
  • In the 1980s, FERC began to deregulate the natural gas markets;
  • The Energy Policy Act of 1992 attempted to liberalize the electricity market and gave FERC the ability to oversee wholesale competition in the newly open energy markets; and
  • The Energy Policy Act of 2005 further expanded FERC’s responsibilities for regulating the interstate commerce of energy, i.e., the transmission of electricity through power lines and the movement of oil and gas across state lines via pipeline.

As energy markets have become more and more deregulated since the late 20th century, FERC’s powers and responsibilities to oversee those deregulated markets have grown to meet their additional complexities. This gradual evolution of FERC’s responsibilities explains why the Commission has increasingly found itself and its decisions a topic of debate in the public sphere, where initially its work was niche and mundane enough not to cause many waves.

Purpose

As stated on FERC’s website, the mission of FERC is to ‘assist consumers in obtaining reliable, efficient and sustainable energy services at a reasonable cost through appropriate regulatory and market means.’ This mission is achieved through the guiding principles of organizational excellence, due process and transparency, regulatory certainty, stakeholder involvement, and timeliness.

FERC, after decades of evolution, has come to hold a wide array of responsibilities in service of that mission. However, FERC does not simply have carte blanche over all energy and electricity oversight in the United States. The Commission instead gradually gained certain powers, while others were intentionally left to the states or to the open market. As a guide, the table below identifies what FERC does and what FERC does not do:

(Table: what FERC does and what FERC does not do)

How FERC works

Given the variety of responsibilities that fall under FERC, understanding how the Commission actually works is critically important to understanding its place in the energy industry. In terms of makeup, FERC is composed of up to five Commissioners who are all appointed by the President, with one of the Commissioners serving as the Chairman (also designated by the President). Of the five Commissioners, no more than three may belong to the same political party, and each Commissioner is appointed for a five-year term (the Commissioners’ terms are staggered so they don’t all need to be replaced at once). Each Commissioner of FERC has an equal vote on regulatory issues, with the Chairman being the one to break any ties.

Despite being under the DOE umbrella, FERC operates independently and its decisions are not subject to further review by DOE– a vital component of the Commission functioning as intended. The requirement that no more than three Commissioners come from one party likewise keeps it independent from politics. And despite the individual Commissioners being nominated by the President and confirmed by the Senate, FERC operates independently of the influence of the Executive and Legislative Branches, as the courts are the only entities that can review FERC decisions.

Beyond the five Commissioners, FERC is a large operation with over 1,200 employees and an agency budget of over $300 million. These figures may sound like a lot, but the operation appears remarkably efficient when considered in context: FERC oversees an electricity industry worth $250 billion and regulates the electricity used by 73% of Americans.

Source

FERC’s regulatory review can be kicked into gear in a couple of different ways. For issues with many stakeholders and broad public impact, FERC will use the rulemaking process to ensure it can gather information, comments, and other input before making a ruling. Notices of these rulemakings are posted publicly in the Federal Register so the question at hand and the intended pathway are in the public record for all to read, comment on, and follow. These rulemaking processes can be initiated by a petition from the energy industry, specific companies, stakeholders, or anyone in the public. DOE can even initiate a FERC rulemaking, as it did recently with the grid resilience Notice of Proposed Rulemaking (NOPR), but FERC comes to the conclusion of that rulemaking independently, without being subject to DOE review.

For more specific matters undertaken by FERC, such as the licensing of a hydropower project, FERC will also post notices in the Federal Register (in fact, this type of licensing proposal is among the most common notices FERC, or DOE as a whole, posts in the Federal Register– see graphic below). These actions are initiated by the entity seeking a license or other approval that FERC is authorized to give, and the notices give any interested stakeholders an opportunity to review the action and participate by protesting or filing a complaint.

Outside of rulemakings requested by outside entities, FERC also continually reviews the aspects of the energy industry over which it has oversight, such as interstate electricity transmission and wholesale electricity sales, and can initiate investigation and action against any utility found to be in violation of regulations. In the event of a violation, FERC has the authority to impose fines and other punitive measures. While violations can also be flagged by outside entities (e.g., states, customers, companies), FERC alone has the authority to determine fault and punishment, subject to review only by the courts.

FERC in the news today

As previously noted, FERC oversees an electricity industry worth hundreds of billions of dollars, and as the energy industry becomes increasingly the focus of politicians and large corporations, so too do the collective actions of FERC. Below are several of the higher-profile incidents that brought FERC to the front page of newspapers in recent years.

California utilities overcharging customers

In 2001, California began scrutinizing its power prices, which had recently skyrocketed after the state electric grid was deregulated and opened up to competition. The state accused wholesalers of overcharging customers by $6.2 billion for electricity sold during acute power shortages, and California filed charges with FERC. As a result, FERC ordered refunds, though for only $124 million. The issue did not end there, with California then accusing FERC of stripping billions of dollars from potential refunds and failing to properly ensure that prices set in California were ‘just and reasonable.’ Much has been written about this event, deemed California’s energy crisis– read about the entire timeline and actions surrounding the crisis here. While FERC faced criticism for potentially not doing enough, a 2016 federal court decision upheld FERC’s findings and actions.

Role in approving pipelines

A recurrent theme that brings FERC into the thick of controversy is its role in approving certain pipelines, as these projects are typically protested and strongly opposed by environmental groups. All major natural gas pipelines FERC has approved are listed on the Commission’s website (remember that while FERC regulates interstate commerce of gas and oil through pipelines, it only approves the siting and construction of natural gas pipelines, not oil pipelines). This involvement in pipeline approvals makes FERC a lightning rod for criticism from pipeline opponents over any environmental incidents and accidents that may occur. When FERC is debating the approval of a specific pipeline, it sees numerous protests from citizens who oppose the building of pipelines in their regions (such as the Transco pipeline in New Jersey, the Millennium Pipeline in New York, and the Marc 1 Hub Pipeline in Pennsylvania, just to name a few). Opponents of gas pipeline projects accuse FERC of approving too many pipelines, issuing approvals too easily without enough environmental analysis, and not taking local opposition seriously enough. On the other hand, those supporting natural gas infrastructure point out that FERC is required to allow developers to build gas pipelines as long as they comply with laws and regulations, and even stress that “it’s harder to build a pipeline today than it was 10 years ago…it takes more time and it’s more expensive.”

Source

These types of projects inspire passion on both sides, but assuming FERC works as intended, the Commission remains independent of partisan causes and political leanings. Instead, FERC accounts for all public comments and stakeholder concerns and ensures its rulings are based on existing laws, regulations, and stipulations.

Trump’s FERC without quorum

When President Trump took office in January 2017, he expressed a desire for Cheryl A. LaFleur (a sitting Commissioner and former Chairman of FERC) to be elevated to Chairman. However, this snub of the sitting Chairman, Norman Bay, led to Bay’s resignation from the Commission altogether. As FERC already had two vacant seats at this time, the resulting third vacancy left FERC with only two Commissioners and thus a lack of a quorum with which to take any action. For an administration that had promised to be a friend to the oil and gas pipeline industry, this sudden non-quorum meant that all pipeline projects that needed approval from FERC remained at a standstill until a quorum of Commissioners could be nominated and approved. While the three Commissioners Trump nominated were awaiting Senate confirmation, a fourth Commissioner announced her imminent departure and left FERC with just one sitting Commissioner.

The lack of a FERC quorum lasted six months, ending in August with the swearing in of two newly confirmed Commissioners. Those six months left various infrastructure and energy projects in limbo– the first time in its history FERC had been without a quorum. Eventually all of President Trump’s nominees were confirmed, and the five-person FERC now consists of Kevin J. McIntyre (the Chairman), Cheryl A. LaFleur, Neil Chatterjee, Robert F. Powelson, and Richard Glick.

DOE Grid Resilience Proposal

In September 2017, DOE formally proposed that FERC implement reforms providing a financial boost to power providers who keep a 90-day fuel supply on site. This proposal was intended to give an edge to coal and nuclear generation facilities so as to provide a baseline degree of resilience and reliability to the electrical grid, as those are the only fuel sources for which such a supply can readily be stored on site.

This proposal was met with intense opposition from providers of renewable energy and natural gas, as well as from grid operators and former FERC Commissioners from both political parties. Those opposed accused DOE of unjustly trying to pick winners and prop up coal and nuclear, citing authorities like the North American Electric Reliability Corporation (NERC) that have found that the reliability of the bulk power system is not at risk from the recent closures of coal and nuclear plants.

FERC ultimately decided in January 2018, by unanimous vote, that the actions DOE proposed failed to meet the requirement that such actions be just, reasonable, and not preferential toward specific fuel types. FERC explained its decision by noting that the proposal was not shown to ‘not be unduly discriminatory or preferential,’ and that the 90-day fuel supply requirement would ‘appear to permit only certain resources to be eligible for the rate, thereby excluding other resources that may have resilience attributes.’ The decision was celebrated by many in the energy industry as demonstrating FERC’s independence and the process working as it should, with the Commissioners not simply voting along party lines and implementing whatever the Executive Branch (through the President and DOE) wanted– no doubt an important reminder in the increasingly partisan environment of U.S. policy-making.

 

 

These are just some of the recent highlights, as FERC always has its plate full with issues that bring passionate debate from multiple sides. For a list of some more controversial issues FERC has been tasked with addressing, see the ‘Controversies’ section of this article on FERC.

Sources and additional reading

About FERC: FERC.gov

An Overview of the Federal Energy Regulatory Commission and Federal Regulation of Public Utilities in the United States: FERC

Federal Energy Regulatory Commission (FERC): AllGov

Hydropower Regulatory History: U.S. Fish & Wildlife Service

What FERC Is and Why It Matters: Huffington Post

What is FERC? PBS

 


Federal Government Shutdown: Analyzing Electricity Demand When Government Workers Get Furloughed in Washington DC

In a dance that’s become a bit too commonplace in the federal government, threats of a government shutdown over political differences and budget issues are looming once again. After multiple continuing resolutions agreed to between Democrats and Republicans, the latest deadline for appropriation bills to fund the government is fast approaching. While a potential government shutdown would put my 9-to-5 job on hold until a resolution was reached– a frustrating prospect for all families who rely on paychecks from government jobs– there’s not much those of us outside the White House and Congress can do about it. What I can do with that nervousness, though, is ask energy-related questions!

The fact that energy and electricity use changes at regular intervals throughout the day and week is well established, and these trends are reliably correlated with the day of the week, time of day, and weather. Knowing this led me to the question of how a government shutdown would affect electricity demand in the Washington DC area, where over 14% of the workforce is made up of federal employees. Would a government shutdown lead to electricity demand closer to that of a typical weekend day than a weekday, because of the large number of people who would no longer be reporting for work? Would overall electricity demand go up or down? Is any of this even noticeable, given that about 86% of the workforce would be going to work as normal? We are only four years removed from the last federal government shutdown, so looking at the electricity demand surrounding the 2013 shutdown can provide some insight into what might happen if there is a shutdown this time around.



Background

The 2013 federal government shutdown lasted from October 1 through October 16, with President Obama signing a bill to reopen the government shortly after midnight on October 17. The political football at stake in 2013 was the Affordable Care Act, as Republicans in Congress sought to defund the program while the Democrats refused to pass funding bills that would do so. As a result, nearly 800,000 non-essential federal employees across the country were out of work without pay, while about 1.3 million essential employees reported to work as normal (though they saw their paychecks delayed). At the heart of the potential 2018 shutdown is the political debate surrounding immigration policy, though the effects on government workers would likely be largely the same as in 2013.

Source

While these numbers account for the vast number of federal employees furloughed outside of Washington DC (such as employees in National Parks across the country), they still include a large number of DC residents. Further, employees of government contractors were reportedly sent home and furloughed without pay as well, though the data on exactly how many contractor employees were affected is unclear. So while other metropolitan areas have a larger percentage of their workforce employed by the federal government, the prominence of federal contractor workers in DC still makes it an obvious choice for examining how electricity demand changed in the wake of the 2013 federal government shutdown. More importantly, though, this analysis will focus on Washington DC because the power company data is available in a sufficiently granular way for the region. The Potomac Electric Power Company, or PEPCO, serves the entire city of Washington DC as well as the surrounding communities in Maryland, so looking at PEPCO’s data over the shutdown dates will enable insights into the effect of the shutdown. Federal workers in other regions are typically served by much larger power companies (such as Dominion Energy in Virginia, which serves many of the Northern Virginia communities of federal workers in addition to the rest of Virginia and parts of North Carolina), making any effect of the shutdown on their power delivery data less significant on a relative scale.

Data and graphics

PJM, the regional transmission organization that coordinates the movement of wholesale electricity in 13 states and DC, makes PEPCO’s metered electricity load data available on an hourly basis. This type of data is available for most U.S. power companies, and it is fun to play with to get an idea of how Americans behave during certain events like holidays, the Super Bowl, or any other large-scale event. To get a baseline of the weekly electricity distributed by PEPCO, we can first look at the two weeks leading up to the government shutdown of 2013:
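For readers who want to poke at this data themselves, here is a minimal sketch of the kind of weekday/weekend, hour-of-day split discussed below. The inline sample stands in for the real PJM export, and the column names are illustrative assumptions, not PJM’s actual schema:

```python
import io
import pandas as pd

# Hypothetical excerpt of hourly metered-load data for the PEPCO zone
# (values and column names are illustrative, not actual PJM data)
sample = io.StringIO(
    "datetime,load_mw\n"
    "2013-09-16 00:00,3100\n"   # a Monday
    "2013-09-16 08:00,4200\n"
    "2013-09-16 17:00,5100\n"
    "2013-09-21 00:00,2900\n"   # a Saturday
    "2013-09-21 08:00,3500\n"
    "2013-09-21 17:00,4300\n"
)
df = pd.read_csv(sample, parse_dates=["datetime"])
df["hour"] = df["datetime"].dt.hour
# dayofweek: Monday = 0 ... Sunday = 6, so 5 and 6 are the weekend
df["is_weekend"] = df["datetime"].dt.dayofweek >= 5

# Average load by hour of day, split into weekday vs weekend profiles
profile = df.groupby(["is_weekend", "hour"])["load_mw"].mean().unstack(level=0)
print(profile)
```

With a full month of hourly data, the same `groupby` produces exactly the weekday/weekend demand curves graphed here.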


A couple of trends become clear looking at these two seemingly normal weeks. First, the weekends (with Saturday and Sunday graphed using a dashed line instead of the solid line for weekdays) appear to have lower electricity demand than weekdays. This trend is noted everywhere, not just DC, as weekends are when typical commerce activity drops. Additionally, there are clearly patterns of high and low electricity use by time of day, regardless of weekend or weekday. Demand appears to be at its lowest late at night and early in the morning when most people are sleeping, ramps up in the morning as people wake up to begin their day, and peaks around 5 PM when people are coming back home, making dinner, turning on the TV, putting laundry in the washing machine, etc. But did any of these trends change during the 2013 federal government shutdown? Here is the same data for the three calendar weeks during which the government was shut down:


When comparing these graphs with the two weeks prior, there do seem to be some noticeable differences– though the differences vary between the three weeks the shutdown was in effect:

First Week

  • To start, the peak and cumulative power use appears to have increased a significant amount during the first week of the shutdown– though that could always be caused by the weather and a need to increase air conditioning or heating in a home. Indeed, looking at the temperature (discussed more later), the average temperature during the week climbed from about 66 degrees Fahrenheit the week before to about 73 degrees Fahrenheit. A possible explanation is the higher power use coming from people turning on their AC for the first time in a while due to unseasonably warm temperatures.
  • The overall ‘shape’ of the curves remain constant, so the furloughed employees and contractors did not appear to change their daily patterns enough to shift the timing of peak and minimum electricity loads.
  • Also interesting to note is that the Sunday before the shutdown (Sep. 29) stays lower than the weekdays, as was noted to be typical of weekend days, but the Saturday following the shutdown (Oct 5) then shifts to be among the days with the greatest electricity demand. I wasn’t expecting the furloughing of employees to have much of an effect on the weekend electricity demand, as most of the furloughed federal employees presumably did not typically work on weekends, but the answer can likely be attributed to weather as the weekend of Oct 5-6 had the warmest temperatures (79 and 80 degrees Fahrenheit, respectively) of the whole analysis period.

Second Week

  • The second week is the most anomalous of the three, with the shapes of the Sunday and Monday curves significantly altered and their peaks much higher than the rest of the week (whereas the first week raised the peaks more comparably among the days of the week). In terms of why Sunday might have shifted so significantly, a search of what might have happened in Washington DC to cause this change on October 6, 2013 turned up an article about an explosion accident on the Metro. Perhaps the emergency response to this incident caused significant effects on the electricity demand?
  • Outside of Sunday and Monday, the peaks and shapes of the demand curves were back to being comparable to pre-shutdown levels. As will be shown shortly, though, this trend looks to be attributable to the return of temperatures to an average of 65 degrees Fahrenheit.

Third Week

  • By the time of the third and final week of the shutdown, the electricity demand curve looks to be mostly back to normal. The last Sunday of the shutdown and the first Saturday after the shutdown look like normal weekend days, while the weekday curves look normal all week, even though the furloughed government employees and contractors did not head back to work until Thursday.

Just to be complete and ensure the trends we saw before and during the 2013 federal government shutdown were not just random week-to-week variations, below are the same graphs for the two weeks following the shutdown:

These two weeks show much the same general trends we saw prior to the shutdown, with the main changes being that the peak demand for each day appears to shift to first thing in the morning when people are waking up, and that the morning of Saturday Oct 26 shows a higher peak than is typically expected of a weekend day. The peak electricity demand shifting to the morning likely comes from the weather getting colder (down to average temperatures of 53 and 59 degrees Fahrenheit, respectively), while the early peak electricity demand on Saturday Oct 26 might have been caused by a rally protesting mass surveillance that attracted thousands of people to Washington DC (though it too is likely in part due to the fact that it was the first day of the season where the average daily temperature dipped to 46 degrees Fahrenheit and people cranked the heat up when they woke up shivering that Saturday morning).

In addition to the demand curves, it’s important to look at the total daily electricity consumed by day over these previously discussed weeks, while also comparing these totals to the average daily temperatures in DC as I’ve done through the previous analysis:

As these two graphics demonstrate, the total electricity demand mostly moves in lockstep with the daily weather regardless of whether or not the federal government is open. If it gets too warm or too cold, that is when you see the spikes in electricity demand– and that will always be the most significant factor.
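To make that weather link concrete, one can compute a simple correlation between average daily temperature and total daily load. The numbers below are illustrative stand-ins (not the actual PEPCO figures), and the calculation is done by hand to keep the sketch dependency-free:

```python
# Illustrative (not actual PEPCO) daily figures: average temperature (deg F)
# and total daily electricity delivered (GWh) over a mild-to-warm autumn stretch
avg_temp_f = [66, 68, 73, 79, 80, 65, 59, 53, 46]
daily_gwh  = [52, 54, 61, 70, 72, 51, 49, 50, 55]

n = len(avg_temp_f)
mean_t = sum(avg_temp_f) / n
mean_g = sum(daily_gwh) / n

# Pearson correlation coefficient computed by hand
cov = sum((t - mean_t) * (g - mean_g) for t, g in zip(avg_temp_f, daily_gwh))
var_t = sum((t - mean_t) ** 2 for t in avg_temp_f)
var_g = sum((g - mean_g) ** 2 for g in daily_gwh)
r = cov / (var_t * var_g) ** 0.5
print(f"Pearson r = {r:.2f}")
```

Note that on the coldest days heating pushes demand back up, so a single linear correlation understates the U-shaped dependence of demand on temperature.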

Conclusions

In the end, there does not appear to be a significant effect on Washington DC’s electricity demand during a federal government shutdown. While having thousands of employees and contractors stay at home is certainly not trivial, there are still even more government employees who would be deemed ‘essential’ and would be in the federal buildings (which would still be operating their heating/cooling systems). Beyond that, the vast majority of PEPCO customers are not in the federal workforce, so the change in daily habits of the unfortunately furloughed employees does not move the needle in a noticeable manner in terms of electricity demand. What’s more important to consider is the weather, and perhaps any daily events such as the Metro accident or the anti-surveillance rally. So while no one, especially in DC, is rooting for a federal government shutdown this week (the 2013 shutdown cost the country $24 billion and kept Veterans Affairs benefits from being sent out), we can take incredibly small solace in the fact that it won’t disrupt the expected electricity demand. Despite liquor sales increasing during the 2013 shutdown, the thousands of workers who would find themselves temporarily out of work would not see their change in daily routine threaten the electrical grid’s behavior.

If this type of data is of interest to you, by the way, the Energy Information Administration has an amazing tool that allows you to track electrical demand across the country in real-time. Are there any other events you think would be interesting to investigate for their effect on electricity demand? Let me know in the comments!

Sources and additional reading

Absolutely everything you need to know about how the government shutdown will work: Washington Post

Customer Base Line: When do you use the most electricity? Search for Energy

Demand for electricity changes through the day: Energy Information Administration

Democrats face make-or-break moment on shutdown, Dreamers: Politico

Electricity demand patterns matter for valuing electricity supply resources: Energy Information Administration

Electricity supply and demand for beginners

Everything You Need to Know About the Government-Shutdown Fight: New York Magazine

Here’s What Happened the Last Time the Government Shut Down: ABC News

How Many Federal Government Employees Are in Alexandria? Patch

Metered Load Data: PJM

U.S. Government Shutdown Looms Amid Immigration Battle: Reuters

Which Metro Area Has the Highest Share of Federal Employees? Hint: Not Washington: Government Executive

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Drilling in the Alaskan Arctic National Wildlife Reserve vs. Renewable Energy: The Drilling Debate, Economic and Environmental Effects, and How Solar and Wind Energy Investment Would Compare

In a first for this blog, the focus of this post comes directly from a reader request– so I’ll let this person’s words speak for themselves:

With Congress recently passing a bill allowing for drilling of oil and gas in Alaska’s Arctic National Wildlife Refuge (ANWR), it got me curious (as a citizen of the sun-rich American Southwest) how much land would need to be covered in solar panels in order to generate the same amount of energy that would be found in these potential new oil and gas drilling sites. Obviously each energy source would have its individual costs to consider, but I am curious as to how efficient and cost-effective it would be to drill in the Alaskan arctic if there are cleaner and cheaper alternatives– it seems covering up the deserts of New Mexico and Arizona could be preferable to potentially harming some of the Alaskan environment and wildlife. Is drilling in this new area even an efficient and safe way for us to get additional oil and gas?
– Case

I loved the thoughtfulness and importance of this question and was inspired to immediately jump into research (also I was so happy to have a suggestion from an outside perspective– so if you read this or any of my other posts and you get inspired or curious, please do reach out to me!). From my perspective, this overall inquiry can be broken down into five questions to be answered individually:

  1. What is ANWR and what exactly did Congress authorize with regards to drilling in ANWR?
  2. How much potential oil and gas would be produced from the drilling?
  3. What are the economics associated with extracting and using oil and gas from ANWR?
  4. What are the environmental effects of that drilling?
  5. Can we do better to just install renewable energy resources instead of drilling in ANWR? How much capacity in renewable sources would be needed? How would the costs of renewable installations compare with the ANWR drilling?



Question 1: What is ANWR and what exactly did Congress authorize with regards to drilling in ANWR?

The Arctic National Wildlife Refuge, or ANWR, has long been a flashpoint topic of debate, viewed by proponents of oil and gas drilling as a key waiting to unlock fuel and energy independence in the United States, while opponents argue that such drilling unnecessarily threatens the habitat of hundreds of species of wildlife and the pristine environment that’s been protected for decades. ANWR is a 19.6-million-acre section of northeastern Alaska, long considered one of the most pristine and preserved nature refuges in the United States. Having stayed untouched for so long has allowed the native population of polar bears, caribou, moose, wolverines, and more to flourish. ANWR was only able to remain pristine because oil and gas drilling in the refuge was banned in 1980 by the Alaska National Interest Lands Conservation Act, with Section 1002 of that act deferring a decision on the management of oil and gas exploration on a 1.5-million-acre coastal plain area of ANWR known to have the greatest potential for fossil fuels. This stretch of ANWR has since become known as the ‘1002 Area.’

Source

This 1002 Area of ANWR is at the center of the ANWR debate, as Presidents and Congresses have had to fight off various bills over the past couple of decades that sought to lift the drilling ban, doing so successfully until recently. At the end of 2017, with Republicans (who have long been pushing to allow such oil and gas exploration in ANWR) controlling the White House and both houses of Congress, decisive action was finally taken. The Senate Energy and Natural Resources Committee, led by Lisa Murkowski of Alaska, voted in November to approve a bill that would allow oil and gas exploration, with that bill ultimately getting attached to and approved along with the Senate’s tax-reform package in December, the justification for that attachment being that the drilling would help pay for the proposed tax cuts.

Specifically, the legislation that ended the ban on oil and gas drilling in ANWR did so by mandating two lease sales (of at least 400,000 acres each) in the 1002 Area over the next 10 years. The government’s royalties on these leases are expected to generate over $2 billion, half of which would go to Alaska and the other half to the federal government.

Source

Question 2: How much potential oil and gas would be produced from the drilling?

This really is the million dollar (or, rather, billion dollar) question, because part of the issue is that no one really knows how much fossil fuel is hidden deep under ANWR. The situation is a bit of a catch-22, as you cannot get a good idea for how much oil there is without drilling, but under the drilling ban you cannot explore how much there is. A number of surface geology and seismic exploration surveys have been conducted, and the one exploratory drilling project by oil companies was allowed in the mid-1980s, but the results of that study remain a heavily guarded secret to this day (although National Geographic has previously reported that the results of the test were disappointing). In contrast even to regions bordering ANWR in Alaska that have the benefit of exploratory drilling, any analysis of the 1002 Area is restricted to field studies, well data, and analysis of seismic data.

The publicly available estimates from the 1998 U.S. Geological Survey (USGS) assessment (the most recent one done on the 1002 Area) indicate there are between 4.3 billion and 11.8 billion barrels of technically recoverable crude oil products and between 3.48 and 10.02 trillion cubic feet (TCF) of technically recoverable natural gas in the coastal plain of ANWR. Just because that much oil and gas is technically recoverable, though, does not mean that all of it would be economical to recover. A 2008 report by the Department of Energy (DOE), based on the 1998 USGS survey and acknowledging the uncertainty in the USGS numbers given that the technology behind the USGS survey is now outdated, estimates that development of the 1002 Area would actually result in 1.9 to 4.3 billion barrels of crude oil extracted over a 13-year period (while the rest of the oil would not be cost effective to extract). The report also estimates that peak oil production would range from 510,000 barrels per day (b/d) to 1.45 million b/d. These estimates must be taken with a grain of salt, however, as not only are they based on the use of now-outdated survey technology, but the technology to extract oil has also greatly improved. These technology improvements mean the USGS estimates could be low, but on the other side, oil exploration is always a lottery and recent exploration near ANWR has been disappointing. That’s all to say, current estimates are just that– estimates– which makes the weighing of the pros and cons of drilling all the more complicated.

Source

The 2008 DOE report did not assess the potential extraction of natural gas reserves. (Note that much of the analysis and debate surrounding ANWR drilling focuses mainly on the oil reserves and not the natural gas reserves, likely because the oil is more valuable, more cost-effective to extract, and more in demand. Where relevant, I will include the facts and figures for natural gas in addition to the oil, but certain parts of this analysis will have to center just on the oil based on the availability of data.)

To put that in context, the total U.S. proved crude oil reserves at the end of 2015 were 35.2 billion barrels, so the technically recoverable oil in the 1002 Area would account for 12 to 34% of total U.S. oil reserves. At the end of 2015 the U.S. proved reserves of natural gas were 324.3 TCF, making the technically recoverable natural gas in the 1002 Area equal to 1 to 3% of total U.S. natural gas reserves. Put another way, the technically recoverable oil reserves would equal 218 to 599 days worth of U.S. oil consumption (using the 2016 daily average), while the natural gas reserves would equal 47 to 134 days worth of U.S. natural gas consumption (using the 2016 daily average).
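Those comparisons can be reproduced with quick arithmetic. The consumption denominators below are approximate 2016 EIA averages assumed for this sketch (roughly 19.7 million barrels of oil per day and 27.3 TCF of natural gas per year):

```python
# Technically recoverable ANWR 1002 Area resources (1998 USGS assessment)
oil_tr_low, oil_tr_high = 4.3e9, 11.8e9   # barrels of crude oil
gas_tr_low, gas_tr_high = 3.48, 10.02     # TCF of natural gas

# U.S. totals (reserves at end of 2015; consumption is an assumed
# approximation of the 2016 EIA averages)
us_oil_reserves = 35.2e9                  # barrels
us_gas_reserves = 324.3                   # TCF
us_oil_per_day = 19.7e6                   # barrels consumed per day
us_gas_per_day = 27.3 / 365               # TCF consumed per day

print(f"Oil: {oil_tr_low/us_oil_reserves:.0%} to {oil_tr_high/us_oil_reserves:.0%} of U.S. reserves")
print(f"Gas: {gas_tr_low/us_gas_reserves:.0%} to {gas_tr_high/us_gas_reserves:.0%} of U.S. reserves")
print(f"Oil: {oil_tr_low/us_oil_per_day:.0f} to {oil_tr_high/us_oil_per_day:.0f} days of consumption")
print(f"Gas: {gas_tr_low/us_gas_per_day:.0f} to {gas_tr_high/us_gas_per_day:.0f} days of consumption")
```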

Question 3: What are the economics associated with extracting and using oil and gas from ANWR?

In addition to the push towards ‘energy independence’ (i.e., minimizing the need for oil imports from foreign nations where prices and availability can be volatile), a main motivation for drilling in the 1002 Area of ANWR is the economic benefits it could bring. In addition to the $1 billion for the Alaskan government and $1 billion for the federal government from the leasing of the land, Senator Murkowski boasted that the eventual oil and gas production would bring in more than $100 billion for the federal treasury through federal royalties on the oil extracted from the land.

Source

However, these theorized economic benefits of drilling are strongly disputed by the plan’s opponents, with the president of the Wilderness Society noting that ‘the whole notion that you are going to trim a trillion-dollar deficit with phony oil revenue is just a cynical political ploy.’ When digging into the numbers more closely, the $1 billion to the federal government from leasing the land would end up offsetting less than 0.1% of the $1.5 trillion in tax cuts to which the drilling provision was attached (while some analyses question whether the land would garner that much in reality, noting the estimates assume oil leases selling for 10 times what they sold for a year ago, when domestic oil was scarcer and more expensive).

Outside of the federal revenue, the money coming to the Alaskan government would be even more influential, which is why the charge to open ANWR to drilling is often led by Alaskan policymakers. In fact, while a majority of Americans oppose drilling in ANWR, most Alaskans are cited as supporting responsible oil exploration. While that may seem counterintuitive, the Arctic Slope Regional Corporation explains that “a clear majority of the North Slope support responsible development in ANWR; they should have the same rights to economic self-determination as people in the rest of the United States.”

In addition to the money raised by the government is the potential economic benefit to the country from the extraction of the oil. According to the previously mentioned 2008 DOE report, the extraction of the ANWR oil would reduce the need for the United States to import $135 to $327 billion of oil. This shift would have a positive benefit to the U.S. balance of trade by that same amount, but the reduction of reliance on imported foreign oil would only drop from 54% to 49%, and the effect on global oil prices would be small enough to be neutralized by modest collective action by the Organization of Petroleum Exporting Countries (OPEC), meaning U.S. consumers would likely not see an effect on their energy prices.

The last economic consideration would be the worth of the oil and the cost to the companies doing the drilling to extract and bring the oil products to market. A study published in an Elsevier journal found that the worth of the oil in the 1002 Area of ANWR is $374 billion, while the cost to extract and bring it to market would be $123 billion. The difference, $251 billion, would be the profit to the companies— which theoretically would generate social/economic benefits through means such as industry rents, tax revenues, and jobs created and sustained.

So in short, the decision about whether or not to drill in ANWR has the potential to cause a significant economic effect for the federal and Alaskan state governments, the oil companies who win the leasing auctions, and those who might be directly impacted by increased profits to the oil and gas companies. As with all analytical aspects of ANWR drilling, though, the exact scale of that effect is hotly debated and subject to the great uncertainty surrounding how much oil and gas are technically recoverable from the 1002 Area. Further, the amount of oil that is economically sound to recover and put into the market (not to mention the price oil and gas companies would be willing to spend on leasing this land) is entirely dependent on the ever-fluctuating and difficult-to-forecast price of crude oil, adding further potential variability to the estimates.

Question 4: What are the environmental effects of that drilling?

As previously noted, drilling in ANWR is an especially sensitive environmental subject because it is one of the very few places left on Earth that remains pristine and untouched by humanity’s polluted fingerprint. The vast and beautiful land has been described by National Geographic as ‘primordial wilderness that stretches from spruce forests in the south, over the jagged Brooks Range, onto gently sloping wetlands that flow into the ice-curdled Beaufort Sea’ and is often called ‘America’s Serengeti.’ In terms of wildlife, ANWR is noted as fertile ground for its dozens of species of land and marine mammals (notably caribou and polar bears) and hundreds of species of migratory birds from six continents and each of the 48 contiguous United States.

Source

While the exact environmental effects of oil exploration and drilling are not known for certain, the potential ills that can befall the environment and wildlife in ANWR include the following:

  • Oil development is found to be very disruptive to the area’s famed porcupine caribou, potentially threatening their existence (an existence which the native Gwich’in people depend upon for survival), with the Canadian government even issuing a statement in the wake of the ANWR drilling bill reminding the U.S. government of the 1987 bilateral agreement to conserve the caribou and their habitat;
  • ANWR contains biodiversity so globally unique that the opportunity for scientific study is enormous, and any development of that land threatens the existing natural biodiversity in an irreparable way;
  • The National Academy of Sciences has concluded that once oil and gas infrastructure are built in the Alaskan arctic region, it would be unlikely for that infrastructure to ever be removed or have the land be fully restored, as doing so would be immensely difficult and costly;
  • Anywhere that oil and gas drilling occurs opens up the threat of further environmental damage from oil spills, such as the recent BP oil leak in the North Slopes of Alaska that was caused by thawing permafrost; and
  • Not only do the direct effects of drilling for oil in ANWR need to be considered, but also the compounding effects that the eventual burning of that oil must be weighed. The use of the oil contained underground in Alaska will only serve to increase the effects of climate change in the Arctic, where temperatures already rise twice as quickly as the world average. The shores of Alaska are ground zero for the effects of climate change, with melting sea ice and rising sea levels causing additional concerns for survival of both wildlife and human populations that call Alaska home. The most climate-friendly way to treat the oil underneath ANWR would be to leave it in the ground.

Question 5: Can we do better to just install renewable energy resources instead of drilling in ANWR? How much capacity in renewable sources would be needed? How would the costs of renewable installations compare with ANWR drilling?

Part 1: Can we just install renewable energy instead of drilling?

At the crux of the original question was whether the country would be better off if we diverted resources away from ANWR drilling and instead developed comparable renewable energy sources. While this question is rooted in noble intent, the reality of the situation is that it would not always work in practice to swap the energy sources one-for-one.

Looking at the way in which petroleum (which includes all oils and liquid fuels derived from oil drilling) was used in the United States in 2016 using the below graphic that is created every year by the Lawrence Livermore National Laboratory (a DOE national lab), we find that 35.9 quadrillion Btus (or quads) of petroleum were consumed. This massive sum of oil energy (more than the total primary energy, regardless of fuel type, consumed by any single country other than the United States and China in 2015) is broken down as 25.7 quads (72%) in the transportation sector, 8.12 quads (23%) in the industrial sector, 1.02 quads (3%) in the residential sector, 0.88 quads (2%) in the commercial sector, and 0.24 quads (1%) in the electric power sector. Meanwhile, the 28.5 quads of natural gas go 36% to the electric power sector, 34% to the industrial sector, 16% to the residential sector, 11% to the commercial sector, and 3% to the transportation sector.
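The sector percentages quoted above fall out of dividing each sector’s quads by the chart’s rounded 35.9-quad petroleum total (the individual components sum to 35.96 due to rounding):

```python
# Petroleum use by sector in 2016, in quads (figures from the LLNL
# energy flow chart, as quoted above)
petroleum_quads = {
    "transportation": 25.7,
    "industrial": 8.12,
    "residential": 1.02,
    "commercial": 0.88,
    "electric power": 0.24,
}
CHART_TOTAL = 35.9  # the chart's rounded total, in quads

for sector, quads in petroleum_quads.items():
    print(f"{sector}: {quads} quads ({quads / CHART_TOTAL:.0%})")
```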

Source

(side note– I think this is one of the most useful graphics created to understand the U.S. energy landscape every year. I have it printed and hanging at my desk and if you are trying to learn more about the different energy types and relative sizes of the energy sector then I recommend this as a great graphic to always have handy)

Compare this breakdown with some of the non-fossil fuels:

  • 100% of wind power (2.11 quads) goes to the electric power sector;
  • 99% of hydropower (2.48 quads) goes to the electric power sector, with the rest going to the industrial sector;
  • 70% of geothermal power (0.16 quads) goes to the electric power sector, with the rest going to the residential and commercial sectors (using geothermal as a heat source as a direct substitute for the electric power sector); and
  • 58% of solar power (0.34 quads) goes to the electric power sector, while 27% goes to residential sector (in the form of residential solar generation or solar heating, essentially a direct substitute for the electric power sector), 12% goes to the commercial sector (also basically a direct substitute for the electric power sector), and less than 1% goes to the industrial sector.

We see that renewable energy sources are capable of displacing a large chunk of the electric power sector, particularly the types of renewable sources like wind and solar that could be installed in vast open land like the original question asked. However, the oil and gas resources that are the subject of the ANWR debate are largely not powering electricity generation, and as such renewable energy sources cannot easily displace most of the uses of the oil and gas.

The issue with thinking ‘why don’t we not drill and instead just invest in renewable energy’ is that in today’s world, there are lots of uses of energy that can only be served, or at least can only be served optimally, by oil products. For example, renewable fuel replacements for jet fuel are not very promising on a one or two generation timescale, and 43% of industrial heating applications require temperatures (above 750 degrees Fahrenheit) that cannot be met by electric means or renewable heating technologies. And regarding the millions of cars on the road, the most pervasive and entrenched oil use in daily life, the looming transition to electric vehicles is taking a long time for a reason– not the least of which is that gasoline’s energy density remains unmatched in delivering power in such a safe, economical, and space-efficient manner. Indeed, when analysts or journalists speculate about the world using up all of the oil, what they’re really talking about is the transportation sector, because other sectors already largely utilize other fuel types. So when considering where renewable energy can replace fossil fuels, it is important to note that the transportation sector and industrial sector are powered 95% and 72%, respectively, by oil and gas, and that there are sometimes technological, institutional, and infrastructure-related reasons for this that go beyond price and availability.

That said, we are experiencing the gradual shift of some energy uses away from fossil fuels– notably in the transportation sector– but many of these shifts will take time and money to convert infrastructure. Many continue to study and debate whether we’ll be able to convert to 100% renewable energy without the aid of fossil fuels (with some concluding it’s possible, others that it’s not), and if so, how far away we are from such an energy landscape. Even considering that it will take 10 years from the passage of legislation to the beginning of actual ANWR oil production, the American energy mix is only expected to change so much in the next few decades (see the Energy Information Administration forecast for renewable energy, natural gas, and liquid oil fuels below), and for better or worse fossil fuels look to be a part of that mix.

Source

The most significant area in which renewable energy can continue to make headway is the electricity generation sector, the sector that is most suited for renewables even though they only account for 17% of total generation as of 2017. In the meantime, though, fossil fuels like oil and gas will play a crucial role in the energy markets, and the potential windfall of resources lying readily underground will continue to be seen as valuable to oil and gas companies (though it is important to ask whether, in the midst of increasing availability of shale oil, the energy markets will need the ANWR oil, or whether the oil companies will even want to gamble on such a risky and expensive play).

Part 2: But theoretically, how much renewable energy would need to be installed to account for the energy that would be extracted from ANWR?

All that said, though, for the sake of the academic exercise originally asked, let’s ignore the differences between fuel types and assume that by leaving all the oil and gas from the 1002 Area in the ground and instead installing renewable energy sources (i.e., wind and solar farms) we can extract the same amount of energy for the same needs.

The 2008 DOE report estimated between 1.9 and 4.3 billion barrels of crude oil would be extracted in a developed ANWR. This amount of oil can be converted to between 10.5 and 23.9 quads. The peak extraction according to the DOE report would end up being between 867 and 2,464 gigawatt-hours (GWh) per day.
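That last unit conversion can be checked directly, assuming EIA’s standard figure of roughly 5.8 million Btu per barrel of crude oil (an assumption for this sketch) and 3.412 million Btu per MWh:

```python
# Assumed conversion factors
BTU_PER_BARREL = 5.8e6   # ~energy content of a barrel of crude oil
BTU_PER_GWH = 3.412e9    # 3.412 million Btu per MWh, times 1,000

def barrels_per_day_to_gwh_per_day(bpd: float) -> float:
    """Convert an oil production rate in barrels/day to GWh/day."""
    return bpd * BTU_PER_BARREL / BTU_PER_GWH

low = barrels_per_day_to_gwh_per_day(510_000)     # DOE low peak estimate
high = barrels_per_day_to_gwh_per_day(1_450_000)  # DOE high peak estimate
print(f"{low:.0f} to {high:.0f} GWh per day")     # ~867 to ~2,465 GWh/day
```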

The 1998 USGS Survey pegged the technically recoverable natural gas at between 3.48 and 10.02 TCF, which easily converts to between 3.48 and 10.02 quads. Because the DOE report did not break down how much of the technically recoverable natural gas would actually be economical to extract, we’ll assume for simplicity’s sake that it all will be extracted (there’s enough uncertainty in all of the USGS and DOE numbers that we need not worry about exactness, but rather make the assumptions needed to get an order of magnitude estimate). Without any estimates about the rate of extraction expected from the natural gas, we’ll make a very back-of-the-envelope estimate that it will peak proportionally with oil and reach a maximum rate of 274 to 990 GWh per day.

Adding the cumulative crude oil and natural gas extracted from the 1002 Area gives between 14.0 and 33.9 quads– an amount of energy somewhere between the total 2016 U.S. consumption of coal (14.2 quads) and petroleum (35.9 quads). Adding the peak rates of oil and gas extraction implies a combined peak of between 1,140 and 3,454 GWh per day (we’re again playing fast and loose with some natural gas assumptions here). This range of peak rates, rather than the cumulative energy extracted, will be the basis for comparison with renewable energy.*
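As a sanity check, the cumulative and peak totals above can be tallied in a few lines (the oil and gas ranges are taken directly from the DOE and USGS estimates quoted earlier):

```python
# Cumulative energy estimates (quads) from the DOE/USGS figures above
oil_quads = (10.5, 23.9)     # crude oil, low and high estimates
gas_quads = (3.48, 10.02)    # natural gas, low and high estimates
total_quads = tuple(o + g for o, g in zip(oil_quads, gas_quads))  # ~(14.0, 33.9)

# Peak extraction rates (GWh per day)
oil_peak_gwh_day = (867, 2464)
gas_peak_gwh_day = (274, 990)
total_peak_gwh_day = tuple(o + g for o, g in zip(oil_peak_gwh_day, gas_peak_gwh_day))
# ~(1141, 3454), rounded in the text to 1,140 and 3,454
```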

*The reason for this is that it is the best basis of comparison we have to the renewable nature of solar and wind. Why is that? At first glance it would seem that once the cumulative fossil fuels are used up, the installed renewables would really shine, as their fuel is theoretically limitless. However, that would be an oversimplification, as every solar panel and wind turbine is made largely from non-renewable materials, and the technologies behind them have a limited lifespan (about 25 years for solar panels and 12 to 15 years for wind turbines). As such, every utility-scale renewable energy plant will need replacing in the future, likely repeatedly over the decades. So while the renewable energy sources will not dry up, it is still important to look at the sources on a daily or yearly capacity basis instead of cumulative energy production. Additionally, energy (whether oil or renewable energy) is not extracted and transported all at once; that process takes time. Because of this, energy markets center around the rate of energy delivery, not cumulative energy delivery.

So given our target range of 1,140 to 3,454 GWh/day, how much solar or wind would need to be installed?

Solar

The reader who asked this question comes from prime solar power territory, so let’s start there. In 2013, the National Renewable Energy Laboratory (NREL) released a report on how much land was used by solar power plants across the United States. With regards to the total area (meaning not just the solar panels but all of the required equipment, buildings, etc.), the generation-weighted average land use was between 2.8 and 5.3 acres per GWh per year, depending on the type of solar technology used. Using the most land-efficient technology (2.8 acres per GWh per year using increasingly common technology that tilts the solar panels to track the sun throughout the day), this amount of solar power would require about 1,166,000 to 3,530,000 acres, or about 4,700 to 14,300 square kilometers, of land.
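A quick sketch of that land-use arithmetic, using NREL’s 2.8 acres per GWh per year figure for tracking panels and the peak rates derived above:

```python
ACRES_PER_KM2 = 247.105          # acres in one square kilometer
acres_per_gwh_year = 2.8         # NREL generation-weighted figure, tracking panels

land_km2 = []
for gwh_per_day in (1140, 3454):             # peak ANWR-equivalent output range
    annual_gwh = gwh_per_day * 365           # daily rate -> annual generation
    acres = annual_gwh * acres_per_gwh_year  # total plant footprint in acres
    land_km2.append(acres / ACRES_PER_KM2)
# land_km2 -> roughly 4,700 to 14,300 square kilometers
```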

Source

For reference, in the sun-bathed state of New Mexico, the largest city by land area is Albuquerque at 469 square kilometers. Given that, matching the peak potential oil output from the 1002 Area of ANWR would require solar power plant installations with a land area about 10 to 30 times greater than Albuquerque. With the whole state of New Mexico totaling 314,258 square kilometers, the amount of land required for solar installations would be between 2 and 5% of New Mexico’s entire land area (put another way, the lower end of the land-requirement range is about the size of Rhode Island and the upper end is about the size of Connecticut).

Wind

Wind energy is set to take over as the number one American source of renewable energy by the end of 2019, a trend that is likely to continue in the future. One reason for the increasing capacity of U.S. wind power in the electric power sector is its ability to be installed both on land and in the water (i.e., onshore wind and offshore wind). Depending on whether the wind power installed is onshore or offshore, the efficiency, cost, and land-use requirements will vary.

NREL also conducted studies of the land-use requirements of wind energy. For both onshore and offshore wind installations, based on the existing wind projects studied, the wind power generating capacity per area (i.e., the capacity density) comes out to an average of 3.0 megawatts (MW) per square kilometer. As with the solar power land-use requirements, note that this figure goes beyond the theoretical space required by physics but includes all required equipment and land-use averaged across all projects.

Source

Operating at 100% capacity, that 3.0 MW per square kilometer would translate to 72 megawatt-hours (MWh) produced per square kilometer each day. However, utility-scale wind power does not operate anywhere near 100% capacity, due to the prevalence of low wind speeds and the changing directionality of winds, among other reasons. NREL’s Transparent Cost Database indicates that offshore wind operates at a median capacity factor of 43.00%, while onshore wind operates at a median of 40.35%. Accounting for these figures, offshore wind energy yields about 31.0 MWh per square kilometer per day, with onshore wind energy averaging about 29.1 MWh per square kilometer per day. To reach the 1,140 to 3,454 GWh per day from peak-ANWR-oil would thus require about 33,000 to 100,000 square kilometers of area for offshore wind energy and about 35,000 to 107,000 square kilometers of land for onshore wind energy.
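The daily-yield step behind those figures, using NREL’s 3.0 MW per square kilometer capacity density and the median capacity factors:

```python
capacity_density_mw = 3.0                        # MW per km^2, NREL project average
mwh_per_km2_day_full = capacity_density_mw * 24  # 72 MWh/km^2/day at 100% capacity

capacity_factor = {"offshore": 0.4300, "onshore": 0.4035}  # NREL medians
daily_yield = {site: mwh_per_km2_day_full * cf
               for site, cf in capacity_factor.items()}
# offshore: ~31.0 MWh/km^2/day, onshore: ~29.1 MWh/km^2/day
```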

Using the same reference points as with solar, wind energy would require an area roughly 71 to 228 times the size of Albuquerque, between 11 and 34% of the size of New Mexico, or a land-use requirement somewhere between the sizes of Maryland and Kentucky. It might seem jarring to realize just how much more land would be required for wind energy than solar energy, but multiple papers appear to support the notion that utility-scale wind energy requires as much as six to eight times more total land area than utility-scale solar energy on average. Indeed, the land required by renewable sources is one of their biggest costs at this time. If we’re willing to accept nuclear power as a source of clean, though not renewable, energy, then that technology currently outperforms them all by leaps and bounds– requiring 7 to 12 times less land than the same amount of solar power. But obviously nuclear power comes with its own set of political and environmental challenges, furthering the sentiment that there is no single energy source that will ever check all of the boxes and meet all of our needs.

Part 3: How would the costs of that scale of renewable energy sources compare with the previously discussed costs of drilling in ANWR?

Considering these results for the amount of land required by solar or wind energy resources to equal the peak oil and gas output of drilling in ANWR, the true scale of the potential energy resources underneath this Alaskan region becomes clear. Further, it becomes clear just how difficult it would be to offset all of that potential energy by building utility-scale renewable energy generation. But the remaining question is: how would the costs (both financial and environmental) of drilling in ANWR compare with the costs of the same capacity of renewable energy generation?

Source

 

Economically, the government (both state and federal) stands to profit from drilling in ANWR because the area is government-owned: the money paid by oil companies to lease the land for oil exploration would go directly to the government, and the government would also take a royalty on the profits made from that oil (a revenue-raising method also set to be repeated in the sale of offshore drilling rights in almost all U.S. coastal waters). So while there will always be some degree of money provided to the government from renewable energy sources (e.g., through taxes), the land used for our hypothetical vast solar or wind farms would have to come from the sale or lease of government-owned land to provide the same sort of government revenue injection as drilling in ANWR. With wind power, at least, federal leasing for offshore wind farms has started to become somewhat common, though from 2013 to 2016 it generated only $16 million across more than one million leased acres.

In terms of the noted benefits of helping U.S. energy trade by reducing the amount of oil that would need to be imported, the same can be said for a comparable amount of renewable energy– if that renewable energy is offsetting the import of fossil fuels, say for the electric power sector, then an equal effect on U.S. energy trade would be achieved.

In terms of the rough cost to install that amount of renewable energy, we can estimate total costs based on the levelized cost of energy (LCOE), which compares different methods of electricity generation based on the costs to build, maintain, and fuel a plant over its lifetime. If we ignore the economic benefits that renewable energy sources enjoy from tax credits, the regionally-weighted LCOEs of solar and wind power generation sources entering service in 2022 are $73.7 per MWh and $55.8 per MWh, respectively (compared with $96.2 per MWh for nuclear and $53.8 to $100.7 per MWh for natural gas, depending on the type of technology used). To generate the 14.0 to 33.9 quads equivalent, the cost for solar would be between roughly $300 billion and $730 billion and the cost for wind would be between roughly $230 billion and $550 billion, compared with the estimated $123 billion cost to extract the ANWR oil and gas (again emphasizing the uncertainty in how much oil/gas is actually under ANWR as well as the very rough nature of these cost estimates). These numbers are just for generation, to say nothing of the costs of transmission and distribution. However, it’s important to note that renewable energy costs are constantly decreasing, and these estimates ignored the current tax credits allotted for renewable energy installations.
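For transparency, here is the rough cost arithmetic, treating EIA’s LCOE figures as dollars per MWh and converting the cumulative quads to megawatt-hours (1 MWh is about 3.412 million BTU); this sketch ignores tax credits, transmission, and distribution:

```python
BTU_PER_QUAD = 1e15
BTU_PER_MWH = 3.412e6                               # 1 MWh ~ 3.412 million BTU
lcoe_usd_per_mwh = {"solar": 73.7, "wind": 55.8}    # EIA, entering service 2022

costs_billion = {}
for quads in (14.0, 33.9):                 # cumulative ANWR-equivalent energy range
    mwh = quads * BTU_PER_QUAD / BTU_PER_MWH
    for source, lcoe in lcoe_usd_per_mwh.items():
        costs_billion[(source, quads)] = mwh * lcoe / 1e9
# solar: ~$302B to ~$732B; wind: ~$229B to ~$554B
```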

While renewable energy sources are seen as more environmentally friendly because they are carbon neutral, there are some environmental effects that cannot be ignored. Any energy source that takes up land is potentially displacing wildlife and using water and other resources. Further, just because the energy source is carbon neutral does not mean that the manufacturing, materials transportation, installation, or maintenance of those renewable plants are without emissions. Solar cells are also known to use some hazardous materials in their manufacturing. Regarding wind energy, extensive studies have had to be conducted on the danger wind turbines pose to birds, bats, and marine wildlife, though those studies have largely concluded that the impacts to such wildlife are low. Large wind turbines have also raised some public health concerns regarding their sound and visual impact, but careful siting and planning can mitigate these concerns. So while the environmental effects of these renewable sources are not nonexistent, they do appear to be much more manageable and avoidable than those of drilling for oil and gas.

Source

Conclusion

Even with the caveat, necessary to repeat throughout this post, that all the numbers and calculations in this analysis are best-guess estimates and averages, much can be gleaned from looking at the results together. Especially when you consider that the technologies involved in all of the discussed energy sources are constantly improving, and that each can be optimized for a particular region (such as using solar energy in lieu of wind energy in particularly sunny areas), the best answer to the energy questions facing the United States and the world is always going to be a strong mix of energy sources. There is no silver bullet, even among renewable energy resources; rather, heavy doses of appropriate renewable and nuclear energy sources will need to be mixed with the responsible use of fossil fuels for the foreseeable future. Since the United States is quite unlikely to go cold turkey on fossil fuels overnight, the continued supply of crude oil products is going to be necessary for the time being. And the potential costs of relying largely on foreign imports to meet that demand are going to be feared by government and industry leaders alike. As such, it can be no surprise that the massive resources of oil and gas underneath ANWR have been a continued focus of politicians and the oil industry for decades. However, none of that is to dismiss the legitimate environmental concerns of opponents over sacrificing one of the last truly untouched areas of wilderness in the United States to the predominantly financial goals of drilling proponents, and if the U.S. oil markets can indeed prosper without drilling then that needs to be seriously considered.

The debate over whether or not to drill in ANWR is surrounded by so much uncertainty, along with passion on both sides, that the answer of what to do is not clear cut to many. The best thing you can do is educate yourself on the issues (I highly recommend a thorough read of the links in the ‘sources and additional reading’ section, as so much has been written about this topic that there is an unbelievable amount of information to learn) and stay informed as it evolves. Like it or not, drilling in ANWR is an inherently political debate, and that affords all U.S. citizens the right, even the duty, to take their informed opinion and be active with it– call your Congressional representatives, join in the debate, donate to action groups. While the opening of ANWR land for leasing to oil companies in the recently passed tax bill was the most significant action in this policy debate in years, the lengthy legislative and leasing process ensures that the matter is anything but settled.

Sources and additional reading

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

How Much Power Is Really Generated by a Power Play?

As a huge sports fan who works in and writes about the energy industry, stumbling across this article that compared the kinetic energy produced by the high velocity projectiles in different sports got my creative juices flowing. By the estimates in that article, shooting a hockey puck produces the highest kinetic energy in all of sports.

Not only does it appear that hockey can take the ‘energy crown’ in sports, but a common occurrence during a hockey game is a ‘power play.’ A power play occurs when the referee determines that a player has committed a foul and that player is sent to spend a set number of minutes in the penalty box. During that time in the penalty box, the opposing team has the advantage of one additional player and is said to be on a power play– and if it scores during that time, it is called a power play goal. While a power play has absolutely nothing to do with power plants or power generation, the idea that hockey pucks have the most kinetic energy in sports got me wondering about what sort of power generation could be harnessed from power play goals in the National Hockey League (NHL).



If we wanted to harness the power of power plays (why would we want to do that? Maybe it’s part of a plot by a wacky cartoon villain!), how much power would that be? Why don’t we sit down and do the math!

Energy from a hockey puck

To start, we need to determine what to assume for the energy of a single hockey shot (as the previously mentioned article does not include all of the assumptions necessary for academic rigor). High school physics class taught us that kinetic energy is one half times the mass of the object times the square of that object’s speed.

Source

Official ice hockey pucks weigh 170 grams, so we just need to figure out what to assume as the speed of the puck. Obviously every shot of the puck comes at a different speed depending on who is shooting, what type of shot is used (e.g., slap shot vs. wrist shot), how fatigued the player is, the condition of the ice, and many other factors. But for the sake of this back-of-the-envelope calculation, we can look at a couple of data points for reference:

  • The official NHL record for shot speed is 108.8 miles per hour (MPH) by Zdeno Chara in the 2012 All-Star Skills Competition;
  • Guinness World Records recognizes the hardest recorded ice hockey shot in any competition as 110.3 MPH by Denis Kulyash in the 2011 Continental Hockey League’s All-Star Skills Competition;
  • When discussing a particularly strong slap shot, 100 MPH is often used as the benchmark of a player getting everything behind a shot;
  • Benchmarks for the wrist shot are not as prevalent (people like to discuss the hardest shots possible, hence data on slap shots and not wrist shots), but some estimates show that wrist shots can reach speeds of 80 to 90 MPH; and
  • Estimates put wrist shots as accounting for 23 to 37 percent of all shots taken in professional hockey.

Given those figures, a rough estimate of average NHL shot speed can be determined by assuming slap shots are about 100 MPH and account for 70 percent of shots, while wrist shots are about 85 MPH and account for 30 percent of shots:

For the sake of this exercise, we’ll call the speed of an NHL shot 95.5 MPH, which equals about 42.7 meters per second (m/s). Plugging that speed and the 170 gram weight of the puck into our kinetic energy equation leaves us with an assumed ‘Power Play Power’ of an NHL power play goal of 154.9 Joules (J)– about 0.043 watt-hours (Wh).
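That calculation in full, for anyone who wants to check the math:

```python
# Weighted-average NHL shot speed from the assumptions above
slap_mph, wrist_mph = 100.0, 85.0
avg_mph = 0.70 * slap_mph + 0.30 * wrist_mph   # 95.5 MPH
speed_ms = avg_mph * 0.44704                   # MPH -> m/s, ~42.7 m/s

puck_kg = 0.170                                # official puck mass
ke_joules = 0.5 * puck_kg * speed_ms ** 2      # ~154.9 J per shot
wh = ke_joules / 3600                          # ~0.043 Wh
```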

For the rest of this article, we’ll refer to the energy gathered from power play goals, 154.9 J at a time, as ‘Power Play Power’– though please keep in mind the cardinal rule that power is the rate of energy over time, while the Joules and watt-hours we’re talking about are amounts of total energy.

Source

How much power can be harnessed from power plays?

The next step in reality would be to figure out how exactly to convert ‘Power Play Power’ into actually generated energy, though that can be left up to the hypothetical cartoon villain who would be using such odd methods to create energy for his evil plots, as he did with the champagne bottles on New Year’s Eve. (Side note: if I continue to write articles about the bizarre energy sources thought up by a misguided cartoon villain, he needs a name– so in the spirit of villains like Megatron, Megamind, and Mega Shark, the energy-obsessed villain will be named Megawatt!)

But ignoring the question of how or why we would be extracting energy from ‘Power Play Power,’ let’s just look at what type of power will be generated based on 154.9 J per power play goal. Also note that there’s nothing special about the energy generated by a power play goal compared with a regular goal or even a shot that misses the goal– but where would the fun be without wordplay? POWER play goals only!

Most individual power play goals in a season

Note that all of the statistics pulled for this analysis are current as of January 1, 2018. Any power play goals scored after that date will not be accounted for in these statistics and calculations.

Pulling the top 10 individual player seasons with the most power play goals in NHL history, and assuming each of those power play goals account for 154.9 J, gives the following results:
Despite an impressive 34 power play goals in the 1985-86 season, Tim Kerr’s NHL record season would only generate about 1.5 Wh of ‘Power Play Power’– enough to run a large window-unit air conditioner for roughly four seconds.

What about considering single players over their entire career?

Most individual power play goals over a career

As of January 1, 2018, the top 10 power play goal scorers for an entire career are as follows (note that as of writing, Alex Ovechkin is still active, as is Jaromir Jagr who is only two power play goals behind him in 11th place):
Looking at Dave Andreychuk, the individual with the most career power play goals in NHL history, his career ‘Power Play Power’ comes to almost 11.8 Wh. Despite representing an incredibly impressive number of power play goals, that’s only enough to run an energy-efficient refrigerator for about 15 minutes– and when it takes 274 career power play goals to get there, harvesting it might be more work than it’s worth…
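A tiny helper makes these per-player totals easy to reproduce (goal counts as of January 1, 2018, per the text above):

```python
KE_PER_GOAL_J = 154.9   # assumed energy of one power play goal, from earlier

def power_play_wh(goals: int) -> float:
    """Total 'Power Play Power' in watt-hours for a given number of goals."""
    return goals * KE_PER_GOAL_J / 3600

kerr_season = power_play_wh(34)         # ~1.46 Wh, record single season
andreychuk_career = power_play_wh(274)  # ~11.8 Wh, record career
```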

However looking at these first two charts, one aspect really jumps out– players who come from Canada appear to dominate ‘Power Play Power’ generation! Let’s dig into that a bit more.

Most power play goals by country of origin in the NHL

To start, Quant Hockey’s data shows that there are only 25 different home countries across all the players who have ever scored a power play goal in NHL history. Those 25 countries are listed in the below chart with their respective ‘Power Play Power’ totals generated:

Now we’re at least talking about measurable amounts of energy. Canada, as predicted, dominates with almost 2,250 Wh (2.25 kWh) of ‘Power Play Power’ since the beginning of the NHL. This amount of energy equates to only about two hours’ worth of the electricity used by the average American household in 2016.

So even in aggregate that’s a modest amount of energy, and given that we’re talking about the total ‘Power Play Power’ generated by all Canadian NHL players over nearly a century of play, it is still not terribly impressive. For reference, the smallest nuclear power plant in the United States has a generation capacity of 582 megawatts, meaning the 2.25 kWh of ‘Power Play Power’ of Canadian NHL players would be generated in about a hundredth of a second by the smallest U.S. nuclear plant operating at full capacity. Even if we included all power play goals scored by players of any nationality, the total ‘Power Play Power’ would only reach 3,339 Wh– or about two hundredths of a second from the smallest U.S. nuclear plant.
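The plant comparison works out as follows, treating the Canadian total as 2,250 Wh (2.25 kWh) and assuming the 582 MW plant runs flat out:

```python
PLANT_MW = 582                                  # smallest U.S. nuclear plant
plant_kwh_per_second = PLANT_MW * 1000 / 3600   # ~161.7 kWh generated each second

canada_kwh = 2.25        # Canadian 'Power Play Power' total, in kWh
all_nations_kwh = 3.339  # all nationalities combined

seconds_canada = canada_kwh / plant_kwh_per_second   # ~0.014 s
seconds_all = all_nations_kwh / plant_kwh_per_second # ~0.021 s
```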

Source 1, Source 2

Obviously the actual energy generation of each of these 25 nations will be much greater than the ‘Power Play Power’ generated by their respective NHL players– but is there some sort of correlation between ‘Power Play Power’ and actual energy production of the nations? Using the silly initial premise of this article as an example of the type of information available from the Energy Information Administration (EIA), a part of the U.S. Department of Energy, and how to find that data, we can pull the total primary energy production for these 25 countries and get a rough idea! While the NHL started recording power play goals in the 1933-34 season, EIA’s country-by-country energy production data dates back to 1980 (measured using quadrillion British thermal units, or quads), but we’ll still use these two complete time frames for the comparison’s sake. Putting the two energy figures on one graph for a relative comparison provides the following:
This graph presents a couple of interesting points:

  • Among the 25 eligible nations included in the survey, Canada, the United States, and Russia all find themselves in the top 4 countries in terms of both ‘Power Play Power’ and Total Primary Energy Produced by the nation;
  • In an interesting coincidence, when the two types of energy being measured here are put on comparative scales, Canada and the United States appear to be almost mirror images of each other, swapping relative strength in ‘Power Play Power’ and Total Primary Energy Production;
  • In another similarity between the two measures of energy, the totality is dominated by the top three nations, and the relative scale of any nation after about the halfway point shows up as barely even a blip on this graph.

But other than that, it is fairly unsurprising that NHL power play success doesn’t directly translate to a nation’s Total Primary Energy Produced. And even if Canada saw its NHL power play prowess as an opportunity to increase energy exports (which would only reinforce Canada’s position as the largest energy trading partner of the United States), translating ‘Power Play Power’ into real energy, its 2,250 Wh over NHL history would amount to only about 0.00000000004% of Canada’s primary energy produced in 2015 alone. Unfortunately, I do not think I’ve discovered a viable energy source to be harnessed by the villainous Megawatt.

Source

More benevolently, it would also appear that ‘Power Play Power’ will not serve as a reliable new renewable energy source for hockey-crazed areas (in this scenario, are we to consider penalty minutes a source of renewable energy?? If so, Tiger Williams might be the most environmentally friendly player in major sports history). However, at 419 billion kWh of renewable generation in 2015, Canada is the fourth largest renewable energy producer worldwide (with the United States and Canada being the only nations this time to find themselves in the top four of both renewable energy and ‘Power Play Power,’ as North America accounts for the majority of NHL players and has collectively agreed to generate 50% of its electricity from clean sources by 2025). Following the link for EIA international renewable energy data to bring this back to educational purposes, you’ll find other top-15 ‘Power Play Power’ nations that are also among the top 15 in global renewable energy production, including the United States, Germany, Russia, Sweden, and the United Kingdom.

Coincidence? Probably.

Interesting and informative, nonetheless? Definitely!



Sources and additional reading

Appliance Energy Use Chart: Silicon Valley Power

Comparing Sports Kinetic Energy: We are Fanatics

How much electricity does a nuclear power plant generate? Energy Information Administration

How much electricity does an American home use? Energy Information Administration

Iafrate breaks 100 mph barrier: UPI

International Energy Statistics: Energy Information Administration

Most Power-Play Goals in One Season by NHL Players: Quant Hockey

NHL & WHA Career Leaders and Records for Power Play Goals: Hockey Reference

NHL Totals by Nationality – Career Stats: Quant Hockey

Now You Know Big Book of Sports

Ranking the 10 Hardest Slap Shots in NHL History: Bleacher Report

Saving Electricity: Michael Bluejay

Scientists Reveal the Secret to Hockey’s Wrist Shot: Live Science

Score!: The Action and Artistry of Hockey’s Magnificent Moment

Sherwood Official Ice Hockey Puck: Ice Warehouse

Slap Shot Science: A Curious Fan’s Guide to Hockey

Total Renewable Electricity Net Generation 2015: Energy Information Administration

Wrist Shots: Exploratorium
