Tag Archives: Department of Energy (DOE)

Petroleum Administration for Defense Districts (PADDs): Past and Present

If you’re an energy-statistics nerd (which you probably are if you’ve found your way to this blog), you’ve no doubt seen various regional data expressed by PADD, or Petroleum Administration for Defense District. Referring to barrels of oil sent from one PADD to another, or to which PADDs use certain fuel types for home heating, is a useful shorthand for regions of the United States and their energy-related statistics. Many people who come across the PADD system already understand PADDs to be a holdover from the country’s fuel-rationing days, but most people’s understanding stops there, and the history of the PADDs is not explored any further.

 

That’s where this article comes in! This piece will serve to explain what the PADDs are, where they originated, how they evolved over the years, and how they are relevant today.



What are PADDs?

Petroleum Administration for Defense Districts, or PADDs, are quite simply a division of the United States into regional districts used for petroleum planning and statistics.
PADD 1 is referred to as the East Coast region and, because of its size, is further divided into three subdistricts:
  • PADD 1A, or New England, comprises Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont;
  • PADD 1B, or Central Atlantic, comprises Delaware, the District of Columbia, Maryland, New Jersey, New York, and Pennsylvania; and
  • PADD 1C, or Lower Atlantic, comprises Florida, Georgia, North Carolina, South Carolina, Virginia, and West Virginia.

PADD 2 is referred to as the Midwest region and comprises Illinois, Indiana, Iowa, Kansas, Kentucky, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, Oklahoma, South Dakota, Tennessee, and Wisconsin.

PADD 3 is referred to as the Gulf Coast region and comprises Alabama, Arkansas, Louisiana, Mississippi, New Mexico, and Texas.

PADD 4 is referred to as the Rocky Mountain region and comprises Colorado, Idaho, Montana, Utah, and Wyoming.

PADD 5 is referred to as the West Coast region and comprises Alaska, Arizona, California, Hawaii, Nevada, Oregon, and Washington.

New PADDs

There are also two additional PADDs beyond the original five that rarely get mentioned, likely because they are much newer and the volume of oil products moving in and out of them is minimal compared with the rest. Despite a mention of them in the Energy Information Administration’s (EIA) write-up of the PADD system, PADDs 6 and 7 (meant to cover U.S. territories) do not appear in the prominent, publicly facing EIA data sets. However, some digging shows that PADD 6 was added in 2015 in order to properly report needed information to the International Energy Agency and comprises the U.S. Virgin Islands and Puerto Rico, while PADD 7 includes Guam, American Samoa, and the Northern Mariana Islands. You will commonly find sources citing just five total PADDs, but don’t let that throw you off. Simply impress those you meet at energy cocktail parties by memorizing which territories are in PADDs 6 and 7.
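For anyone working with PADD-level data, the district definitions above amount to a simple lookup table. Here is one way to encode them, as an illustrative Python sketch (the abbreviations and structure are my own choices, not any official EIA encoding):

```python
# Lookup table of PADDs (with PADD 1's subdistricts) to state/territory
# postal abbreviations, per the district definitions described above.
PADDS = {
    "1A": ["CT", "ME", "MA", "NH", "RI", "VT"],                # New England
    "1B": ["DE", "DC", "MD", "NJ", "NY", "PA"],                # Central Atlantic
    "1C": ["FL", "GA", "NC", "SC", "VA", "WV"],                # Lower Atlantic
    "2":  ["IL", "IN", "IA", "KS", "KY", "MI", "MN", "MO",
           "NE", "ND", "OH", "OK", "SD", "TN", "WI"],          # Midwest
    "3":  ["AL", "AR", "LA", "MS", "NM", "TX"],                # Gulf Coast
    "4":  ["CO", "ID", "MT", "UT", "WY"],                      # Rocky Mountain
    "5":  ["AK", "AZ", "CA", "HI", "NV", "OR", "WA"],          # West Coast
    "6":  ["PR", "VI"],                                        # Puerto Rico, U.S.V.I.
    "7":  ["GU", "AS", "MP"],                                  # Guam, Am. Samoa, N. Mariana Is.
}

# Invert the table to answer "which PADD is this state in?"
STATE_TO_PADD = {st: padd for padd, states in PADDS.items() for st in states}

def padd_of(state: str) -> str:
    """Return the PADD (or PADD 1 subdistrict) for a state/territory abbreviation."""
    return STATE_TO_PADD[state.upper()]
```

With that in place, `padd_of("TX")` returns `"3"` and `padd_of("ny")` returns `"1B"`, which is handy when aggregating state-level data up to PADD-level totals.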

Origin of PADDs

The federal government first established the regions that would become the five PADDs during World War II. Specifically, the Petroleum Administration for War was established as an independent agency by Executive Order 9276 in 1942 in order to organize and ration the various oil and petroleum products to ensure the military had all the fuel it needed. Part of that organization process was the establishment of these five districts as a tool for that goal. The Petroleum Administration for War ended in 1946 after the war efforts were over, but these five original districts were quickly reestablished by the successor Petroleum Administration for Defense that was created by Congress in 1950 in response to the Korean War. This Administration provided these districts with the name Petroleum Administration for Defense Districts.



Changes over time

As stated, the original function of the PADDs was to ensure proper distribution of oil supplies during World War II. In fact, the Department of Defense made use of the PADD system to redirect oil resources to specific PADDs in response to Nazi attacks on U.S. tankers. These oil distribution efforts were the largest and most intricate such efforts yet, leading to the realization that interstate pipelines would soon become necessary to connect oil refineries with distant U.S. markets. But once World War II ended, the government determined there was no further need for the Petroleum Administration for War, and the districts disappeared along with the Administration.

After the Petroleum Administration for Defense revived the five districts, they fell under the management of the Department of the Interior’s Oil and Gas Division, with the continued function of ensuring that the oil needs of the military, government, industry, and civilians of the United States were met. As with the Petroleum Administration for War, the Petroleum Administration for Defense was short-lived and was abolished just four years later by the Secretary of the Interior’s Order 2755 in April 1954. Even though the government agency was eliminated, the names and organization of the various PADDs have continued to be used ever since.

One significant change over the history of the PADDs worth noting is that they no longer have an ‘official’ government keeper. While the PADDs served an official function, and thus had official definitions set out by government agencies, during World War II and the Korean War, that is no longer the case today. That does not mean they are no longer significant, however. Within the Department of Energy (DOE), EIA uses the PADDs extensively in its aggregation and dissemination of data (discussed in more detail next). Further, government agencies have defined PADDs for use within specific regulations. For example, the Environmental Protection Agency (EPA) codified PADDs in the Code of Federal Regulations (CFR) when regulating motor vehicle diesel fuel sulfur content (though it explicitly dictates that the definition applies only as codified for that specific regulation), specifying total benchmarks and reductions to be met PADD-wide, and used PADDs in reporting requirements for fuel additives so that the data get published by PADD.

Use of PADDs today

With the government out of the business of rationing oil and petroleum since the end of the Korean War, the PADDs have found new purpose. The same PADDs have survived to allow analysis of data and patterns of crude oil and petroleum product movements within (and outside) the United States. Using these PADDs, government and industry players can be sure they are using the same regional groupings of states, and the same shorthand language, to analyze and spot trends within regions instead of being confined to looking at the nation as a whole or analyzing state by state.

Further, the PADDs are separated in a way that makes analysis straightforward. For example, following the crude supply in PADDs 2 and 3 is most important to crude prices because those districts contain the largest number of refineries. Heating oil demand is mostly concentrated in PADD 1, making that the region to watch when investigating heating oil prices. Additionally, the language of PADDs enables quick insights, such as EIA noting the impact of Hurricane Harvey on the flow of propane from PADD 2 to PADD 3, or detailing how PADD 1C needed to supplement its gasoline inventories with foreign imports when an accident shut down the pipeline that typically supplies the area with gasoline from PADD 2.

Examples of trends, statistics, and PADD characteristics

There are plenty of other examples of the usefulness of dealing with oil-related data by PADD. A common one is delineating where different PADDs receive their oil from. For example, knowing that almost half of U.S. refining capacity is on the Gulf Coast (i.e., PADD 3) while less than 10% of refining capacity is on the East Coast (PADD 1), even though PADD 1 contains about one third of the U.S. population, an obvious conclusion is that there must be a lot of inter-PADD oil shipments every day. In fact, about half of the oil consumed each day in PADD 1 is supplied from PADD 3 by pipeline, rail, truck, and barge.
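The arithmetic behind that conclusion can be sketched in a few lines. The figures below are rough and illustrative only (PADD 1’s demand share is proxied by its population share from the text; PADD 3’s demand share is an assumed placeholder), not official EIA statistics:

```python
# Rough, illustrative shares (NOT official EIA figures): PADD 1 holds roughly a
# third of U.S. population (used here as a proxy for demand) but under 10% of
# refining capacity, while PADD 3 holds nearly half of refining capacity.
demand_share = {"PADD 1": 0.33, "PADD 3": 0.25}    # assumed demand shares
capacity_share = {"PADD 1": 0.09, "PADD 3": 0.48}  # rough refining capacity shares

def supply_gap(demand, capacity):
    """Positive gap -> the district must pull product in from other PADDs;
    negative gap -> it refines more than it consumes and can ship product out."""
    return {padd: demand[padd] - capacity[padd] for padd in demand}

gaps = supply_gap(demand_share, capacity_share)
for padd, gap in gaps.items():
    direction = "needs inflows of" if gap > 0 else "can ship out"
    print(f"{padd}: {direction} roughly {abs(gap):.0%} of U.S. demand")
```

Even with placeholder numbers, the sign of the gap tells the story: PADD 1 must import product from elsewhere, and PADD 3 is the natural place for it to come from.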

Going further, much of the commonly distributed data from EIA utilizes PADDs (see this blog’s earlier post to learn about the vast data available from EIA and how to navigate it all). For example, EIA publishes PADD-level breakdowns of petroleum stocks, imports, refinery inputs and utilization, inter-PADD movements, and much more.

So hopefully the next time you read a table from EIA that deals with oil movements specific to PADD 3, or a news article citing the disruption of a pipeline that serves PADD 1, this article will come to mind and you’ll be better equipped to speak to it. And remember to try to win some bets with your knowledge of the seldom-mentioned PADDs 6 and 7!
About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Stranger Things Season 2: A Pointed Comment on the Department of Energy’s Nuclear History and Future?

This post is written assuming you have watched both season 1 and season 2 of ‘Stranger Things.’ If you have not yet watched and want to avoid potential spoilers, consider this your warning!

‘Stranger Things’ was Netflix’s out-of-nowhere sensation in 2016, which made season 2 one of the most anticipated TV releases of this year. While this sci-fi mystery thriller seemingly had something for everyone (parallel dimensions, 80s nostalgia, mystical and mysterious forces, pop culture references), I was also drawn in by the depiction of the Department of Energy (DOE) as the malevolent government force behind the secretive experiments. Seeing DOE scientists at the fictional Hawkins Lab, rather than the typical Hollywood choice of the FBI or the CIA for supernatural government cover-ups, was exciting for all of us who have worked in or with DOE, and it created a buzz in DOE offices and labs across the country.

Leading up to the release of season 2, I wrote about the interesting parallels that existed between Hawkins Lab and the real DOE labs. Some of these parallels appeared to be intentional similarities written by the Duffer Brothers (the show’s creators), while others were likely coincidental. With that in mind, I was very eager to watch for anything DOE-related in season 2 to see if I could gather more information about what the Duffer Brothers might have been trying to say about the real government agency, or whether season 2 would put to rest the connection between Hawkins Lab and the real DOE.



Well, after just three nights on the couch, I’ve finished my ‘Stranger Things’ season 2 binge and have two main takeaways:

  1. I can’t believe I’m already done with the new batch of episodes and now have to wait at least another year before getting to do it again with season 3!
  2. One scene in particular has convinced me that the choice to use DOE was intentionally symbolic and is a pointed metaphor for the history and future of the agency.

The scene in question

Honestly, I would have been bingeing this show regardless of the DOE connection. So after a few episodes I had ceased paying terribly close attention to potential DOE parallels and was simply enjoying the story. But a specific scene in ‘Chapter Four: Will the Wise’ hit me over the head with its metaphor enough that I had to pause the episode to excitedly discuss it with my wife.

To set the scene: Nancy Wheeler and Jonathan Byers call the mother of Barb Holland (who went missing and died in season 1) to admit that they hadn’t been fully honest about the night Barb disappeared (they knew the truth that Barb had been lost and killed in the parallel dimension of the Upside Down, but Barb’s parents had been shielded from this fact). Hesitant to discuss the matter on the phone, as they were correctly concerned that their phones were tapped by the government’s monitoring forces, they instead requested to meet in person in public. When Nancy and Jonathan go to the meeting spot, they are sitting ducks and get intercepted by undercover Hawkins Lab agents. They are taken to the lab to speak with Dr. Sam Owens, the new head scientist at Hawkins Lab, who replaced the evil and manipulative Dr. Martin Brenner. Immediately, the situation looks like it will end poorly for the teens, as it surely would have were Dr. Brenner still in charge; he was never overly concerned with protecting the citizens of Hawkins and might have resorted to threats of violence. However, Dr. Owens’ approach is instead to explain the difficult scenario he inherited and hope that Nancy and Jonathan understand why the secrets of the lab cannot be made public.


The following is a transcript of the dialogue of this scene:

Dr. Owens: Men of science have made abundant mistakes of every kind. George Sarton said that. You guys know who George Sarton is? Doesn’t really matter. The point is mistakes have been made.

Nancy: Mistakes? You killed Barbara!

Dr. Owens: Abundant mistakes. But the men involved in those mistakes– the ones responsible for what happened to your brother and Ms. Holland’s death– are gone. They’re gone, and for better or worse I’m the schmuck they brought in to make things better. But I can’t make things better without your help.

Nancy: You mean without us shutting up?

Dr. Owens: She’s tough, this one. You guys been together long?

Jonathan: We’re not together.

Dr. Owens: You want to see what really killed your friend?

The three of them enter the area containing the open portal to the Upside Down, which has grown much larger and more dangerous-looking compared with what we saw throughout season 1. Tentacles extend from the portal.

Dr. Owens: Teddy– brought you an audience today, hope you don’t mind.

Teddy (lab agent who is getting dressed in a protective suit): The more the merrier, sir.

Dr. Owens: I’d call it one hell of a mistake, wouldn’t you? The thing is, we can’t seem to erase our mistake. But we can stop it from spreading. It’s like pulling weeds. But imagine for a moment if a foreign state, let’s say the Soviets, if they heard about our mistake. Do you think they would even consider that a mistake? What if they tried to replicate that? The more attention we bring to ourselves, the more people like the Hollands that know the truth, the more likely that scenario becomes. You see why I have to stop the truth from spreading too, just like those weeds there. By whatever means necessary.

Teddy begins to spray fire all across the portal and the tentacles of the creature coming from the portal, which leads it to squirm and let out a noise of pain.

Dr. Owens: So, we understand each other now, don’t we?

After this scene when Nancy and Jonathan leave the lab, it is revealed that Nancy had a tape recorder and recorded Dr. Owens’ admission that Hawkins Lab, and thus DOE, was at fault for the death of Barb and all the other ills that had befallen the town due to the opening of this portal.

How does this relate to the real Department of Energy?

Hearing Dr. Owens describe the creation of the portal to the Upside Down and all the associated technology as a mistake, and express the fear that enemy nations might replicate it, immediately signaled that this scene was intended to evoke the way many scientists and government officials felt during and after the Manhattan Project developed and deployed the world’s first atomic bomb in World War II, as well as the fear and regret about the continued existence of nuclear weapons ever since.

The Manhattan Project was the government-sponsored effort to develop the technology behind nuclear weapons, and it is to this effort that the Department of Energy traces its origins. The effort was marked by secrecy, espionage, and a recognition of the vast worldwide implications of developing a nuclear bomb.

The quotes from Dr. Owens during this scene, if interpreted as an allegory for the development of nuclear weapons by DOE in the 1940s, provide a number of clues as to the parallels between the Manhattan Project and the ‘mistakes’ to which Dr. Owens refers.

Men of science have made abundant mistakes of every kind…The point is mistakes have been made.

Noting that all the experimentation and resultant terrors performed by Hawkins Lab during season 1 were mistakes does nothing to change the fact that those mistakes were made. However, such an admission is one way to begin a healing and repair process. Similarly, many of the scientists involved in the Manhattan Project were noted in the years that followed to have come to consider the entire effort a mistake, using that admission to spur discussion about the future use of nuclear weapons, deal with personal guilt, and find whatever good could come out of the situation.

Despite the official stance that DOE is “proud of and feels a strong sense of responsibility for its Manhattan Project heritage,” many people would still contend that it was wrong to bring nuclear weapons into the world. In the years that followed, the physicists involved in creating the nuclear technology expressed varying levels of regret.

  • While Albert Einstein was not directly involved in the development of nuclear weapons for the Manhattan Project (the government denied him the necessary security clearance to be involved), it was a letter he wrote to President Franklin D. Roosevelt, urging him to support the research and development of atomic weapons before Germany could do so, that prompted the U.S. government to launch the Manhattan Project. Einstein would come to regret his role in kicking off the age of nuclear weapons after learning that the Germans never did produce an atomic bomb, stating that had he known that would be the case he “would have never lifted a finger.”

 

 

  • At the same time, 70 scientists who actively worked on the Manhattan Project wrote and circulated the Szilard Petition, which asked President Harry S. Truman not to use the atomic bomb on populated land. Instead, they urged him to deploy an observed demonstration of the power of the bomb. The hope of these less hawkish scientists was that they were creating a weapon whose threat alone would end the war, and that if it were deployed on a remote island for the enemy to see its devastating power, that would be enough to earn surrender (in an odd footnote of history, the petition never made its way up the chain of command to reach the President). Obviously, the efforts of these scientists to delay (or ideally render unnecessary) the dropping of the atomic bomb failed.

 

  • The most famous Manhattan Project scientist to openly consider the dawn of the age of nuclear weapons a mistake was J. Robert Oppenheimer, considered the father of the atomic bomb that came out of the Manhattan Project. At his farewell ceremony at Los Alamos, Oppenheimer speculated that if atomic bombs were to become a regular part of war, then “mankind will curse the names of Los Alamos and of Hiroshima.” Even more famously, in a meeting with President Harry S. Truman after the war, a still-shaken Oppenheimer confided that he felt he had blood on his hands. While Truman dismissed those concerns by insisting that responsibility for the deaths of the tens of thousands of Japanese who died was his own, Oppenheimer was instead concerned about the countless potential deaths his atomic bomb could cause to future generations.

While Manhattan Project scientists like those who signed the Szilard Petition were focused on whether the development and use of the bomb was the right move during World War II, Oppenheimer was forward-looking, contemplating whether the development of the technology itself was one of those abundant mistakes that science makes. Several years later, Oppenheimer would confirm this position, stating that “we have made a very grave mistake” in even considering the massive use of nuclear weapons.

 

But the men involved in those mistakes– the ones responsible for what happened to your brother and Ms. Holland’s death– are gone. They’re gone, and for better or worse I’m the schmuck they brought in to make things better.

When Dr. Owens says that those responsible for the nefarious actions of Hawkins Lab are gone, he seems to be suggesting that, because the original architects are gone, those now in charge are largely blameless. The architects are gone, and the new leadership can only do what it can to make things better.

Similarly, in the years that followed the dropping of the atomic bombs, much was made of the need for new leadership over the research, production, and regulation of the technology. Along with the Manhattan Project scientists’ uncertainty regarding the appropriateness of using nuclear weapons came uncertainty over whether that power belonged in the hands of the government. As such, some of these scientists formed and joined the Federation of Atomic Scientists in 1945 and pushed for civilian control of nuclear research and production. These scientists thought that scientists, not policymakers, were the best stewards of the technology, and that a change in leadership would allow them to make things better.

Another leadership option widely discussed in the years following World War II was a United Nations Atomic Energy Commission that would take worldwide responsibility for atomic energy. The idea was that worldwide leadership would ensure nuclear technology was developed only for peaceful purposes, rather than the destructive, warlike use immediately pursued under the leadership of the U.S. government. The Commission’s agreements would have called for the United States to destroy its atomic arsenal and disclose its atomic secrets, but disagreements between the Soviet Union and the United States ultimately undermined and tanked the Commission. This failure pointed the world toward the coming Cold War and a path on which the nuclear question still loomed.

In the end, the U.S. government settled on passing the Atomic Energy Act of 1946, which created the Atomic Energy Commission (a predecessor agency of DOE) as a civilian body that took over responsibility for the legacy U.S. nuclear development of the Manhattan Project. While the agency eliminated complete military control, a Military Liaison Committee to the Atomic Energy Commission kept the military involved, and there remained a “strict government monopoly on both scientific and technological knowledge, and fissionable materials.”

Ultimately, despite efforts on the national and international scale, leadership was never fully moved away from the U.S. government that created the nuclear weapons in the first place. In the absence of such real change, things have predictably only gotten worse, with nuclear warhead inventories skyrocketing above 60,000 at their Cold War peak and remaining around 10,000 warheads across nine countries today. Perhaps if a real schmuck, an international equivalent to Dr. Owens, had been given control and leadership, then things would have been made better.

I’d call it one hell of a mistake, wouldn’t you? The thing is, we can’t seem to erase our mistake. But we can stop it from spreading. It’s like pulling weeds.

While Dr. Owens and the new leadership at Hawkins Lab were not responsible for the creation of the portal to the Upside Down and the unleashing of the creatures that inhabit it, the job of containing the mistake did fall to them. They couldn’t undo the past even if they wanted to, so instead they continually try to clean up the mess and stop it from spreading.

This weeding metaphor is apt for the responsibilities DOE continues to manage after its predecessor agency brought forth the age of nuclear weapons. As Oppenheimer noted, “the physicists have known sin; and this is a knowledge which they cannot lose.” While the scientists cannot take back the world’s knowledge of nuclear weapons and how to create them, they have a responsibility to do what they can to prevent its spread.

During the Cold War, DOE was in charge of nuclear weapons development and production. While the goal since the end of the Cold War has been to decrease stockpiles of nuclear warheads across the world, DOE has remained involved in the fallout of these nuclear weapons of the past. In 2000, the National Nuclear Security Administration (NNSA) was formed as a semi-autonomous agency within DOE whose responsibilities include managing the nuclear weapons stockpile, promoting international nuclear safety and nonproliferation, and more. Also included in these efforts is managing the environmental aspects of past and future nuclear development, such as managing and storing nuclear waste. These waste storage sites, managed by DOE across the country and often sparking outrage and controversy wherever they go, are one of the ongoing containment activities required of DOE after the ushering in of nuclear weapons. DOE also finds itself at the table during discussions of international nuclear issues, such as its role in negotiating the 2015 Iran nuclear deal, in an effort to prevent the further spread of nuclear weapons.
In addition to storing new nuclear waste, a large part of DOE’s mission (and associated budget) is providing environmental cleanup at “107 sites across the country whose area is equal to the combined area of Rhode Island and Delaware” where nuclear weapons were developed, tested, and stored. Not only that, but DOE also continues to pay healthcare costs for those in the Marshall Islands who were affected by radioactive fallout from nuclear tests conducted in the 1950s on nearby islands. The need to perform these actions now and for the foreseeable future is possibly the best example of DOE’s need to continue ‘weeding’ to prevent the spread of ills from the nuclear weapons it helped develop.

But imagine for a moment if a foreign state, let’s say the Soviets, if they heard about our mistake. Do you think they would even consider that a mistake? What if they tried to replicate that?

One of the chief concerns at Hawkins Lab is that an enemy nation will find out about the technology it created, assume it was developed as a weapon, and/or replicate the technology for a weapon of its own. These fears are what drive the massive amount of security, secrecy, and monitoring at Hawkins Lab. These ideas are also directly applicable to the history of nuclear technology, both in its origin in the United States and in modern times across the globe.

In the days of the Manhattan Project, chief among the priorities was keeping the entire program secret from Germany, Japan, and the Soviet Union. While fission, the core scientific discovery behind the atomic bomb, was discovered in Germany, what was at stake was the ability to harness the resultant chain reaction and use it as a weapon. The result was a period of extensive espionage between the United States and these enemy nations, with Soviet spies successfully penetrating the Manhattan Project at several locations. Among these governments, it was no secret that the technology was being actively pursued, nor that the goal of doing so was anything but peaceful. However, secrecy about the progress and scientific breakthroughs was critical, and in these ways the Manhattan Project embodied the paranoid secrecy Dr. Owens and Hawkins Lab felt about their dimension-jumping technology falling into the hands of enemy nations.

Even after the bombs were dropped on Hiroshima and Nagasaki and the war ended, the U.S. government continued to focus on keeping nuclear capabilities out of the hands of the Soviets and other nations. This secrecy was so important that one of the main reasons the United Nations Atomic Energy Commission failed to become a reality was the proposed requirement that the United States turn over the scientific and technological secrets behind the nuclear bomb. The fear went to such an extent that, as the Cold War heated up, accusations that Oppenheimer, the central figure in the development of the atomic bomb for the United States, was a communist resulted in the revocation of his security clearance.
Even today, the United States finds itself as the country with the most nuclear weapons in its arsenal but also leading the conversation in ensuring additional nations do not acquire these weapons and working to reduce the existing stockpiles of weapons across all nations. The desire to ensure foreign states do not acquire the technology that the United States developed decades ago rings true to the fears Dr. Owens expresses about the past mistakes at Hawkins Lab.

The more attention we bring to ourselves, the more people like the Hollands that know the truth, the more likely that scenario becomes. You see why I have to stop the truth from spreading too, just like those weeds there.

Lastly, the highly secretive nature of Hawkins Lab is very true to the situation across the U.S. towns that were home to Manhattan Project facilities. Despite the project employing 130,000 workers and spending $2.2 billion over its course, most people across the United States were floored to learn the extent to which such a large operation had been kept secret. The entire town of Oak Ridge was built around the secret project, with the existence of the town itself kept secret as well. Even among employees at the Manhattan Project facilities, the end goal of the labs was kept secret, with most lower-level workers simply performing whatever rote task they were assigned without being told its purpose or the big picture. Many workers simply watched large quantities of raw materials enter the facility, saw nothing coming out, and were tasked with monitoring dials and switches behind thick concrete walls without knowing the purpose behind the monitors or their jobs. This extent of secrecy was seen as critical to the mission of the Manhattan Project, as any amount of information leaking to the outside world would put the mission at risk. Secrecy defined the early stages of the nuclear age, just as it defined the work going on at Hawkins Lab. The secrets of the real DOE and of Hawkins Lab only remained secrets, however, until the scientists lost control of their creations and they started to affect the unsuspecting public.


Is this reading too much into one scene of a TV show?

While I don’t particularly like over-analyzing for metaphors and symbolism that creators never intended (shout out to literature teachers everywhere insisting that Fahrenheit 451 is about something Ray Bradbury himself denied), my experience with DOE and focus on its depiction in the show meant I couldn’t help but find some real-world parallels that I think may have been an intentional metaphor included by the writers.

Admittedly, it seems that this scene, arriving midway through season 2, might simply be meant to signal a shift in the plot. Whereas the antagonists in season 1 were Dr. Brenner and his team, with the Demogorgon the unintended creation of these bad guys, it seems the Duffer Brothers used this scene as an opportunity to reset the story. The scientists at Hawkins Lab no longer have nefarious intentions (in a later episode, Dr. Owens is even the voice of reason in refusing to let Will die as a means to defeating the mysterious forces putting the town at risk), and instead the main antagonists are now the forces and creatures that continue to make their way through from the Upside Down.

Despite this function of the scene as a storytelling device that sets up the rest of season 2, it does also appear to speak to the advent of nuclear weapons as the reason why DOE was chosen as the dark government agency in the series instead of the more commonly used FBI or CIA (seriously, can you name another pop culture avenue in which the Department of Energy plays a main role in the plot? The only two I could come up with are 1) Captain America, Campbell’s Soup, and DOE teaming up in comic book form for energy conservation and 2) the selection of ‘Dancing with the Stars’ participant Rick Perry as the Secretary of Energy).


Because of the seemingly deliberate choice of words for Dr. Owens in this one scene, I believe the Duffer Brothers are pointing to the proliferation of nuclear weapons as the large mistake made by DOE in the past, one that to this day requires constant weeding to prevent its effects from spreading. Further, the devastating impacts of the creatures of the Upside Down when released into our dimension serve as a small reminder of the apocalyptic effects that nuclear warfare could have on the world– a point made all the more poignant with nuclear tensions as high as they are today between the United States and certain hostile foreign states. For that, let’s all just hope diplomacy and cool heads prevail, lest the metaphorical Demogorgons of the world show what devastation really looks like.

 

Sources and additional reading
A Petition to the President of the United States: Dannen.com

As Hiroshima Smouldered, Our Atom Bomb Scientists Suffered Remorse: Newsweek

 

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

How would Hawkins National Lab from ‘Stranger Things’ fit in with the real Department of Energy Labs?

NOTE THAT THIS ARTICLE WILL DEAL OPENLY WITH THE PLOT OF STRANGER THINGS SEASON 1, SO IF YOU HAVE NOT YET WATCHED IT THEN THIS IS YOUR ONE AND ONLY SPOILER WARNING

Unless you were living under a rock and/or don’t subscribe to Netflix, you know that the 2016 debut of the television show ‘Stranger Things’ was one of the surprise pop culture hits of the year. The story follows a small town in Indiana where a boy goes missing, a girl with supernatural abilities is found, and it all unfolds in the shadow of dark and mysterious government agents.

While I loved the show and would recommend it to anyone who likes a good sci-fi mystery, what really grabbed my attention was that those dark and mysterious government agents were from Hawkins National Laboratory—a fictional Department of Energy (DOE) laboratory. While DOE’s National Labs are often referred to as the ‘crown jewels’ of national science and research, they are not fully understood by the general public. So even though Hawkins Lab is fictional (and sinister), ‘Stranger Things’ shined an unfamiliar light on DOE labs that are not usually recognized outside of the federal energy policy and energy technology research spheres.



With season 2 of ‘Stranger Things’ set to hit Netflix on October 27, 2017, I thought it would be fun to explore the similarities and differences that Hawkins Lab shares with the 17 real DOE labs across the United States. DOE has already commented that it doesn’t deal with monsters or evil scientists—but isn’t that exactly what evil scientists who deal with monsters would say? Seems like some outside research is warranted.

Location

Indiana

In ‘Stranger Things,’ Hawkins National Laboratory is located in a federal complex in Hawkins, a fictional city in Indiana. Since Hawkins’ location within Indiana is never made specific in the series, the closest DOE lab could be either Argonne National Lab just outside of Chicago, Illinois, or Oak Ridge National Lab in Oak Ridge, Tennessee. Either way, DOE has labs in the Midwestern states, making Indiana a realistic place for a National Laboratory.


City of Hawkins

The city of Hawkins, Indiana is portrayed as a small city where everyone knows each other’s business and the local police force is a very small operation. Of the real-life options near Indiana, this type of town is certainly more reminiscent of the town surrounding Oak Ridge National Lab, where the sum of employees, students, visiting scientists, and facility users equals over 35% of the total city population. The city of Hawkins might even have a much larger population than the non-laboratory citizens realize if they are all housed inside the secret laboratory campus, making the parallels in type of location between Hawkins Lab and a real DOE lab even stronger than they initially seem.

One note here is that, originally, the show was going to take place in Montauk on Long Island. If this were the case, it would have placed the setting of the show only 60 miles from Brookhaven National Laboratory, also on Long Island. It appears that even in an alternate dimension (something the kids in ‘Stranger Things’ know a lot about…) where the showrunners ran with Montauk as the location, Hawkins Lab was destined to be located in a place that mirrors where a real DOE lab might be.

Building

Due to the secretive nature of Hawkins Lab, it is hidden in a forest, surrounded by a barbed wire fence and heavily guarded by security and police.


None of these lab complex features could be considered outside of the norm for various DOE labs.

Origin

According to the bits of history peppered in during Season 1 of ‘Stranger Things,’ Hawkins Lab was created in the wake of World War II and the scientific endeavors sponsored by the U.S. government during that time. As was the case during the timeline of the show in the 1980s, Hawkins Lab was formed in secret due to the sensitive nature of the work going on there.

This aspect of Hawkins Lab is probably the most closely mirrored in actual DOE labs. The entire Department of Energy also traces its lineage back to the Second World War and the scientific pursuits of the Manhattan Project, the government-sponsored effort to create the atomic bomb that ultimately brought World War II to an end. Specifically, the project conducted research and development of the atomic bomb in Oak Ridge, Tennessee; Hanford, Washington; and Los Alamos, New Mexico—present-day homes to Oak Ridge National Lab, the Hanford Site, and Los Alamos National Lab, respectively. Not only that, but DOE also notes that when the existence of the Manhattan Project and its various sites (accounting for 130,000 workers and $2.2 billion in spending) was made public, it came as a shock that the government was able to run such far-flung secret operations. Hey residents of Hawkins, Indiana, sound familiar?

Mission

While never stated explicitly, much of the subtext and fan speculation of ‘Stranger Things’ pins Hawkins Lab as being controlled by the CIA– either with the DOE label as a cover or in tandem with DOE, given the dubious nature of the operations and what would happen if the public found out. Hawkins is the location of top-secret experiments conducted by the U.S. government. Based on the specific projects we know about (discussed next), the mission of Hawkins appears to be pushing the boundaries of science and the understanding of physics by any (dubious) means necessary.

The mission of each particular DOE lab varies depending on the program office it serves. The 10 labs under the Office of Science support the advancement of “the science needed for revolutionary energy breakthroughs, seek to unravel nature’s deepest mysteries, and provide the Nation’s researchers with the most advanced large-scale tools of modern science.” The three labs under the National Nuclear Security Administration serve the mission of “enhancing national security through the military application of nuclear science.” The missions of the remaining four labs include energy efficiency and security, national security, and the environment.

Based on these options, it seems reasonable that the mission of Hawkins Lab lines up with the mission of labs under DOE’s Office of Science—as both are focused on using DOE labs to advance science and solve the physical mysteries of the universe.

Projects

From its creation in the 1950s through the 1970s, Hawkins was home to Project MKUltra, which exposed human subjects to psychedelic drugs and extreme isolation to test the boundaries of the human mind (the CIA actually did conduct a series of experiments called MKUltra that aligns with this description, though there was never any indication that the Department of Energy was involved).

One of the test subjects at Hawkins was pregnant while undergoing the experiments of MKUltra, leading her daughter, whom we know only as ‘Eleven,’ to be born with telekinetic abilities. The discovery of those abilities led to Eleven being subjected to intense testing and experimentation. One discovered ability was to connect with other living creatures while placed in sensory deprivation, which the scientists at Hawkins worked to leverage to gain intel on a Russian enemy (the show takes place during the Cold War).

While conducting one of the tests on Eleven to gain access to the Russian enemy, Eleven encountered a mysterious monster-like creature (known in show lore as the Demogorgon) from another dimension, called the Upside Down. This discovery led the scientists to aggressively pursue and continue this line of experimentation on Eleven to gain more information about the Upside Down and the Demogorgon.


So in short, at Hawkins you have projects dealing with:

  • Human test subjects;
  • Telekinetic powers;
  • Espionage on enemy nations; and
  • Alternative dimensions containing scary monsters.

For the real-life DOE parallels, let’s break that down:

Human test subjects

Unfortunately, this aspect of the projects at Hawkins Lab cannot be unequivocally declared to have no parallel at the DOE labs. The truth is that the Atomic Energy Commission, which became the Department of Energy in 1977, has a history of human experimentation. These shady tests dealt with the effects nuclear exposure had on humans, and a Freedom of Information Act inquiry revealed that DOE to this day provides “healthcare to people in various Pacific Islands affected by nuclear tests.” So again, the origination of the labs and these tests comes from World War II-era science, just as we learn is the case for Hawkins Lab.

Telekinetic powers

The development or research into telekinesis is one aspect of the fictional Project MKUltra that does not appear to have any parallel in the DOE lab system. Though this must obviously come with a caveat of—well, if they did have such abilities, would we as the public necessarily know about it yet?

Espionage on enemy nations

If any sort of actual top-secret espionage activity had technology developed by DOE, odds are that information wouldn’t be publicly available and thus would not end up in this article. However, Lawrence Livermore National Laboratory (LLNL) has billed itself as the ‘real’ Hawkins Lab and is responsible for “certifying the safety, security and reliability of the U.S. nuclear deterrent in a post-nuclear-test-world.”

With its state-of-the-art supercomputers, radiochemistry team, and asteroid defense work (too bad this is comparing DOE to ‘Stranger Things’ and not ‘Armageddon’), LLNL boasts that its scientists are responsible for “technical guidance to the policymakers who struck the recent Iran deal, they certify airport security equipment to ensure bad things don’t make it onto planes and they are cyber defenders tasked with thwarting attempts to bring down critical U.S. infrastructure.”

If these are the projects they are telling the public about, it’s left only to your imagination what types of projects are considered hush-hush…

Alternative dimensions containing scary monsters

On its website, DOE admits that the closest its labs come to exploring parallel dimensions is contributing to various NASA technologies (such as nuclear batteries for deep space probes) that explore new worlds in this dimension. In contrast to that message, though, former Secretary of Energy Ernest Moniz did coyly tell Chelsea Handler on her talk show, when asked whether DOE explores parallel universes like in ‘Stranger Things,’ that DOE’s support of basic science and theoretical physics “looks at things like higher dimensions than three dimensions, and parallel universes.” However, your mileage may vary on how directly to connect that type of research to Hawkins’ research into the Demogorgon and the Upside Down.

Accolades

In its 40-year history, scientists associated with DOE have received many awards– including a host of Nobel Prizes. Accounting for all of DOE and its predecessor agencies, science and research at DOE and DOE labs have produced 115 Nobel Laureates across the fields of chemistry, physics, and physiology/medicine.

A key characteristic of Hawkins Lab is its intense secrecy. As such, it is reasonable to assume that most revolutionary projects in the lab, whether the creation of a human with telekinetic powers or the ability to open a rift to the Upside Down, are not public knowledge in the scientific community and thus have not received the Nobel Prizes such discoveries surely would have warranted.

 

 

So if you take all that information in and line it up side-by-side as I’ve done below, it becomes clear that the distance between real DOE labs and Hawkins Lab is not as great as DOE would want you to believe. But at the very least, we can breathe easy that the parallels still in existence today do not appear to encompass any of the sinister motivations or human rights violations found in Hawkins Lab. Let’s just keep our fingers crossed that no future FOIA requests reveal anything sinister and, if anything, we simply find out that Barb was found safe and sound.


Is there anything about Hawkins National Lab that I missed? Let me know! Also, I’ll do an update if deemed necessary once I’ve completed my binge of the second season. While everyone else is desperate to learn the fate of Barb, find out more about the Demogorgon, and watch to see if Will makes it out of the Upside Down alive, I’ll be glued to my TV trying to get a peek at the administrative structure of Hawkins Lab and find out which DOE Program Office it falls under! (Update: Read about what season 2 of ‘Stranger Things’ might be saying about DOE’s nuclear past and future!)

Sources and additional reading

A government official confirms the scariest thing in ‘Stranger Things’ may actually be real: Business Insider

Come work at the ‘real’ Hawkins Lab

DOE National Laboratories Map: Department of Energy

Hawkins National Laboratory: Stranger Things Wiki

Honors & Awards: Office of Science

Labs at-a-Glance: Oak Ridge National Laboratory

Manhattan Project Background and Information and Preservation Work: Department of Energy

Nuke Lab Can’t Keep Snoops Out

Our Mission: National Nuclear Security Administration

Science at its Best Security at its Worst: Department of Energy

Stranger Things: Netflix Official Site

Stranger Things but true: the US Department of Energy does human experiments, searches for The Upside Down

Stranger Georgetown: Declassified: The Hoya

The Office of Science Laboratories: Department of Energy

The Stranger Things creators want some scares with their Spielberg: AVClub

What “Stranger Things” Didn’t Get Quite-So-Right About the Energy Department: Department of Energy

 


Advice for Effective Public Comments in the Federal Rulemaking Process

Having spent a few years earlier in my career entrenched in the rulemaking process behind a number of regulations from the Department of Energy concerning appliance standards, I am able to empathize with the teams of analysts at federal agencies that are tasked with receiving and addressing the feedback that comes in during public comment periods. During every rulemaking process, there are real humans reading every single public comment received (even when those comments number in the thousands), cataloging the specific concerns from the stakeholders, conducting research and analysis regarding the points that were brought up, and ultimately responding to those comments– either by detailing why the existing analysis already addresses the comment or, if the stakeholder comment has successfully done its job, adjusting the analysis during the next round of the rulemaking to account for the issues brought up in the comment.

While submitting a comment in response to the federal rulemaking process can seem intimidating, the truth is that every rulemaking process receives comments from every sort of stakeholder, large and small, with the widest range of expertise on the topics possible (see previous article on how the rulemaking process works and what the function of the public comment period is here). Those involved in the regulatory process know to expect multi-page comment submissions with loads of data and testimonials from powerful trade associations or advocacy groups, but it is also common to receive more pointed and specific comments from concerned private citizens who don’t have any experience in the relevant industry, but simply have their own opinions and concerns. The beauty of the public rulemaking process, however, is that every single comment must be summarized in the next step of the rulemaking, along with a response as to how the new analysis addresses the concerns, no matter who submits it. With that in mind, regardless of whether you are representing a larger organization or just your personal interests as a citizen, what follows are six methods you can employ to ensure your comment most effectively influences the federal rulemaking process.



1. Be accurate

This piece of advice should go without saying, but rest assured I have found that it needs to be said. If a comment submitted to the federal agency is found to have a basic inaccuracy in it, then the rest of the comment on that topic will be called into question and can potentially carry less weight. An underlying inaccuracy makes responding to, or dismissing, the whole comment all too easy. So while it may be overly obvious: if you hope to make an impact on a regulatory rulemaking, be sure to verify the accuracy of everything you say.

 

2. Be specific with issues and provide alternatives

If you want your comment to be addressed specifically in the analysis, be sure to include specifics in the comment. Don’t say that something would be detrimental to businesses– state exactly what the detriment would be and why. Don’t state that a discussed technology would not be technologically or economically feasible– state what technology would be feasible and note what exactly is preventing the original technology from being so. Don’t state that a pricing analysis is unrepresentative of the market– describe how and why the analysis is off.

The point is that if the federal agency is given a vague reason for why the analysis is ‘bad’ or ‘off,’ but no specifics, then there is nothing tangible to address. The rebuttal to a non-specific comment can simply be to restate the original analysis and the reasons behind it. However, if specific reasoning and an alternative are provided instead, then you are giving the federal agency something meaty to address. The subsequent analysis must either move towards your alternative or give details about why that alternative is incorrect. But if your alternative is airtight and there are no holes to poke in it, then you will likely find success in shifting the analysis behind the rulemaking.

 

3. Address the issues the rulemaking asks about– but don’t be restricted to those topics

When reviewing a rulemaking document, whether in the early stages with a Request for Information (RFI) or later during the Notice of Proposed Rulemaking (NOPR) stage, you will often find specific issues called out on which the agency behind the rulemaking is seeking comment. These issues are numbered for ease of finding them, and sometimes (but not always!) listed in a single place at the end of the notice. If you do not see a list at the end of the notice, be sure to go through the document carefully to find them all in-line in the body of the notice.


When the agency points out specific issues on which it requests comment, that signals where a comment might have the most impact. These are the issues on which the agency might have the least information (or has information but recognizes it’s outdated), or on which it recognizes there is considerable debate. Regardless of the reason, all comments on each of these numbered issues get aggregated to give a clear picture of the available information and data before a decision on the direction of the rulemaking is made (though it is important to note that the decision is not driven by which position received the most comments– the accuracy and quality of comments outweigh the quantity received on an issue). If your position on the rulemaking relates to any of these specifically identified issues, make sure to frame your comment in direct response to the question asked (it even helps to note by number which issue your comment is addressing).

With all of that said, you should not feel that the identified issues are the only ones eligible for response, or that the agency will put less weight behind comments regarding other aspects of the analysis. You might have a comment or information on a topic the agency wasn’t focused on or didn’t realize was controversial. So while it is important to fit your comments into the box of the issues identified by the notice when they are relevant, do not feel restricted to those topics. You just might be the only one to bring up a new issue, influencing the next stage of the rulemaking to address it more specifically.

 

4. Include hard data

The best way to back up your comment and encourage a specific response in the next stage of the analysis is to include your own data as evidence. Perhaps you think this data was overlooked by the original analysis, or maybe you think the data that was included does not tell the whole story. Either way, if your data supports a change in the analysis and a different conclusion, then providing the full set of that data in your public comment is the best way to influence the rulemaking. Doing so will force the next stage of the analysis either to include that data (thus changing the course of the analysis towards your desired outcome) or, at the very least, to refute your data.

 

5. Include sources

Similar to including your own hard data, providing evidence for your points is crucial to an effective public comment. A comment that boils down to subjective opinion is unlikely to be effective, but if you can demonstrate your points with sources– e.g., scientific studies, experimental results, industry information, or market analyses– then the comment will make a bigger splash. The more you can ‘show’ your point rather than ‘tell’ it, the more substance and weight your comment will have.

 

6. Offer to follow up

An under-utilized strategy in the public comment process is making yourself available to the federal agency. The public comment stands on the record as a written statement of your thoughts and concerns on the rulemaking, but in commenting you can also offer to discuss the points further with the agency pursuing the rulemaking. Doing so may result in your being interviewed in the next fact-finding stage of the analysis, or you might be invited to the next public meeting on the rulemaking to discuss your concerns further. These conversations can be the most valuable tool for really getting your point across and making sure the agency understands the basis of your viewpoint. Written comments allow only a single exchange between commenter and agency, but conversations allow for the complete back-and-forth required for full understanding between the parties.

 

Conclusion

While there is no guarantee any single public comment will change the course of a particular rulemaking, if you follow these six guidelines then there is a greater chance that your comment will be well-received by the agency and carry the weight of consideration it deserves. If you have any additional questions on this process, don’t hesitate to reach out in the comments below or by contacting me directly.

Additional Reading

A Guide to the Public Rulemaking Process: Office of the Federal Register

Frequently Asked Questions: Office of Information and Regulatory Affairs

Notice and Comment: Justia

Notice-and-Comment Rulemaking: Center for Effective Government

Policy Rulemaking for Dummies

Rulemaking Process and Steps to Comment: The Network for Public Health Law

Tips for Submitting Effective Comments: Regulations.gov

 

 


Best from “Today in Energy” in 2017

Among the wide array of regular articles the Energy Information Administration (EIA) releases, as detailed in this post on navigating EIA’s data sets, one of the most varied and interesting is the Today in Energy (TIE) series of articles released every weekday. According to EIA, TIE articles “provide topical, timely, short articles with energy news and information you can understand and use.”
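For readers who want to pull the series behind these articles themselves, EIA also publishes its data through an open API (a free API key is required). The sketch below only assembles a request URL for the series endpoint as it was documented around this time– the endpoint path, parameter names, and series ID shown are best-effort assumptions, so check EIA’s current API documentation before relying on them:

```python
from urllib.parse import urlencode

def eia_series_url(series_id: str, api_key: str) -> str:
    """Build a request URL for EIA's open data API (series endpoint).

    The endpoint path and parameter names here are assumptions based on
    the API as documented circa 2017; verify against EIA's current docs.
    """
    base = "https://api.eia.gov/series/"
    query = urlencode({"api_key": api_key, "series_id": series_id})
    return f"{base}?{query}"

# Hypothetical key and series ID, for illustration only
url = eia_series_url("TOTAL.TETCBUS.A", "YOUR_API_KEY")
print(url)
```

The returned JSON can then be fed into whatever charting tool you prefer to reproduce a TIE-style graphic.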

What makes TIE particularly compelling to read each day is that it covers the full spectrum of energy-related topics. Where most of the other reports released by EIA are restricted to a specific fuel type or survey of consumers, TIE articles bring all of these topics from across EIA into relevant, digestible, and fascinating briefs that give readers a broad spectrum of information.



Further, TIE articles feature both stories relevant to current events (e.g., Hurricane Irma may cause problems for East Coast energy infrastructure) and stories that provide useful background information that can be referenced for years to come (e.g., Crude oil distillation and the definition of refinery). Not only that, but keeping up with TIE articles is a great way to keep up with other EIA publications as well: when reports such as the Annual Energy Outlook, International Energy Outlook, or Short-Term Energy Outlook are posted, TIE often includes an overview of some of their relevant conclusions and a link to the full version.

To prove how valuable TIE articles can be for all these reasons, I’ve picked a sampling of 13 of my favorite TIE articles thus far in 2017 that are particularly interesting and demonstrate the cross-cutting topics offered by TIE. The ones I’ve chosen are based on the topics I find the most engaging, as well as the graphics that are the most clever and elegant.

1. EIA’s AEO2017 projects the United States to be a net energy exporter in most cases

January 5, 2017

Released the same morning as the Annual Energy Outlook 2017 (AEO2017), this article demonstrates the tendency of TIE to alert the readers of the latest EIA publications, while also providing a good overview to new readers as to what AEO2017 is and what the main takeaways from the report were.

2. Canada is the United States’ largest partner for energy trade

March 1, 2017

Utilizing the latest data from the U.S. Census Bureau, this article details the energy imports/exports between the United States and Canada, broken out by U.S. region and fuel type, and demonstrates TIE’s coverage of trade topics. Most interesting is the graph showing the difference in electricity trade over the years across each of four U.S. regions.

Source: Energy Information Administration

3. U.S. energy-related CO2 emissions fell 1.7% in 2016

April 10, 2017

This TIE article from April breaks down carbon dioxide (CO2) emissions data from the Monthly Energy Review, covering 2005 to 2016, by both emitting fuel and sector. It also introduces carbon intensity as a metric and shows the progress made in reducing energy-related carbon intensity over the previous decade. As climate change heats up as an issue in domestic politics, industry, and foreign affairs, this type of window into U.S. CO2 emissions data can prove invaluable.
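Carbon intensity, as used here, is simply CO2 emitted per unit of energy consumed, so the trend the article describes follows from dividing one series by the other. A minimal sketch of the calculation– the numbers below are made-up illustrative values, not EIA figures:

```python
def carbon_intensity(co2_million_tonnes: float, energy_quads: float) -> float:
    """Carbon intensity = CO2 emitted per unit of energy consumed.

    Units here: million metric tons of CO2 per quadrillion Btu (quad).
    """
    return co2_million_tonnes / energy_quads

# Hypothetical example values, for illustration only
intensity_2005 = carbon_intensity(6000.0, 100.0)  # 60.0 MMT CO2 per quad
intensity_2016 = carbon_intensity(5170.0, 97.0)
pct_change = 100 * (intensity_2016 - intensity_2005) / intensity_2005
print(round(intensity_2016, 1), round(pct_change, 1))
```

Note that intensity can fall even when total consumption rises, which is why the article tracks it separately from total emissions.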

4. Most U.S. nuclear power plants were built between 1970 and 1990

April 27, 2017

I chose this article because it provides a fascinating chart showing the initial operating year of utility-scale generation capacity across the United States, broken out by fuel type. The chart demonstrates the relative age of each source of electricity generation– in particular, the relative old age of the U.S. nuclear generating capacity– while also showing the explosion of non-hydroelectric renewable generation since the turn of the century.

Source: Energy Information Administration

5. American households use a variety of lightbulbs as CFL and LED consumption increases

May 8, 2017

An example of a TIE article getting into the use of energy inside U.S. homes, this piece takes information from the 2015 Residential Energy Consumption Survey (RECS) to show how residential lighting choices have been trending in the face of increased regulation and availability of energy-efficient lighting technologies, highlighting differences by renter- vs. owner-occupied homes, household income, and whether or not an energy audit has been performed.

6. More than half of small-scale photovoltaic generation comes from residential rooftops

June 1, 2017

Utilizing data from the Electric Power Monthly, this article breaks out the use of small-scale solar power systems by geographic location and building type, highlighting the rapid rise these systems have experienced in the residential sector– a great example of TIE’s coverage of renewable energy in homes.

7. Dishwashers are among the least-used appliances in American homes

June 19, 2017

Again taking data from RECS, this TIE article provides insights on how frequently certain appliances are found in American homes, how often they go unused, the pervasiveness of ENERGY STAR-compliant appliances, and other data regarding residential appliance energy use. The article also includes a plug for the 2017 EIA Energy Conference held a week after its publication, again showing how well reading TIE articles daily keeps you on top of the latest happenings at EIA.

8. Earthquake trends in Oklahoma and other states likely related to wastewater injection

June 22, 2017

A reason I find this TIE article particularly interesting is that it goes beyond the energy data collected by EIA and syncs with outside data from the Earthquake Catalog to show additional effects of energy production on the environment. This kind of interplay of data sources demonstrates how powerful EIA data collection can be when analyzed in the proper context.

9. Monthly renewable electricity generation surpasses nuclear for the first time since 1984

July 6, 2017

I highlight this TIE article for two reasons. First, the graphic below, showing the monthly generation of nuclear compared with the cumulative generation of renewables and highlighting 2016-17 in particular, is really illuminating. This graph is a great demonstration of the power of data visualization to convey both the data and its message. Second, the milestone behind that graphic, monthly renewable generation surpassing nuclear generation for the first time in over three decades, is a remarkable achievement for the renewable energy sector and shows the trending direction of the U.S. fuel mix going forward.

Source: Energy Information Administration

10. California wholesale electricity prices are higher at the beginning and end of the day

July 24, 2017

This TIE article was identified because of how interesting the topic of wholesale electricity prices varying throughout the day can be. As net metering and residential production of electricity increase across the United States, this will be a topic those in the energy field will want to keep a keen eye on.

11. Among states, Texas consumes the most energy, Vermont the least

August 2, 2017

Grabbing data from the State Energy Data System, this TIE article presents a graphic displaying the most and least overall energy use as well as the most and least energy use per capita among the 50 states and the District of Columbia. Using color to demonstrate the relative consumption and consumption per capita creates a pair of really elegant visuals.

Source: Energy Information Administration

 

12. Solar eclipse on August 21 will affect photovoltaic generators across the country

August 7, 2017

As everyone was scrambling to find their last-minute eclipse glasses, this TIE article detailed where, and by how much, the total solar eclipse of August 2017 would diminish solar photovoltaic capacity, along with an assessment of how local utilities would handle their peak loads during that time (a nice follow-up TIE article also looked at how California dealt with these issues on the day of the eclipse, increasing electricity imports and natural gas generation).

Source: Energy Information Administration

13. U.S. average retail gasoline prices increase in wake of Hurricane Harvey

September 6, 2017

Another example of TIE addressing energy-related current events, this article not only provides information and analysis on the effect Hurricane Harvey had on retail gasoline prices, but also context on why the effect was being felt, how it compared with previous hurricanes, and what could be expected moving forward.

 

 

If you’ve been sufficiently convinced that Today in Energy articles would be an engaging read to start the day, you can sign up for an email subscription by following this link.

 

 

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

DOE in Focus: Strategic Petroleum Reserve

The Strategic Petroleum Reserve (SPR), owned by the U.S. federal government and operated by the Office of Fossil Energy within the Department of Energy (DOE), is collectively the largest reserve supply of crude oil in the world. These massive reserves of oil are divided between four storage sites along the Gulf of Mexico.
As the name implies, the SPR exists to provide a strategic fail-safe for the United States, ensuring that oil is reliably available in times of emergency, protecting against foreign threats to cut off trade, minimizing the potential impacts of price fluctuations, and more. Understanding the SPR, both its history and its present form, is crucial to recognizing the role it may play in the future and to understanding the implications when politicians discuss it.



Origin of the SPR

Initial calls for a stockpiling of emergency crude oil began as early as the 1940s, when Secretary of the Interior Harold Ickes advocated for such reserves. The idea continued to be brought up and kicked around through the decades (by the Minerals Policy Commission in 1952, by President Dwight Eisenhower in 1956, and by the Cabinet Task Force on Oil Import Control in 1970), but it wasn't until the Arab oil embargo of 1973-74 that the concept of a strategic stockpile of oil really gained traction.

For a detailed history on the embargo itself, I would recommend reading The Prize: The Epic Quest for Oil, Money, and Power by Daniel Yergin (who also wrote The Quest: Energy, Security, and the Remaking of the Modern World). But in short, the embargo was a response to the United States' support for Israel in the 1973 Arab-Israeli War. In response, the Organization of Arab Petroleum Exporting Countries (OAPEC) (not to be confused with OPEC, the Organization of Petroleum Exporting Countries) imposed an oil embargo on the United States while also decreasing overall production. U.S. production on its own was not enough to meet the country's needs, and even in the rare instances when oil originating from the Arab nations made its way to the United States, it came at a price premium three times higher than before the embargo.

While an existing stockpile of oil would not have prevented the United States from paying the market price for oil, the availability of such reserves would help mitigate the magnitude of the price jump. Not only that, but having reserves of oil available would buy the government time to continue diplomatic efforts to resolve the dispute before the oil shortage caused more devastating impacts on the national economy. Lastly, a national reserve of oil would reduce the incentive for oil-exporting nations to use control of their oil exports as a political tool in the first place, as doing so would no longer hold the same immediate and impactful sway.
With these goals in mind, and to prevent a repetition of the economic impacts the oil embargo inflicted on the U.S., President Gerald Ford signed the Energy Policy and Conservation Act (EPCA) into law in 1975. Among its provisions, the law declared that the United States would build an oil reserve of up to one billion barrels, owned and operated by the federal government. On July 21, 1977, the first shipment of 412,000 barrels of oil from Saudi Arabia arrived and the SPR was officially open.

Operation of the SPR

Storage

The SPR comprises underground storage facilities at four different locations on the U.S. Gulf of Mexico, with each facility housed in a hollowed-out salt dome. The locations in Texas and Louisiana were chosen because the salt domes there have proven to be inexpensive and secure storage options and because the Gulf Coast is the most significant U.S. hub for oil refineries, pipelines, and shipping ports. Additionally, the SPR controls the Northeast Home Heating Oil Reserve (NEHHOR), which stores up to 2 million barrels of heating oil to insulate the Northeast from emergency interruptions in heating oil supply during the winter months.
The SPR reserves have a storage capacity of over 713 million barrels, with the active amount of oil stored being enough to cover over 100 days of imports since early 2013.

Drawdowns

As the DOE is an executive agency, the decisions regarding when emergency withdrawals from the SPR are needed are made by the President, as specified in EPCA. According to this authorization, the President is only permitted to direct sales from the SPR if he or she “has found drawdown and sale are required by a severe energy supply interruption or by obligations of the United States under the international energy program” or if an emergency has significantly reduced the worldwide oil supply available and increased the market price of oil in such a way that it would cause “major adverse impact on the national economy.”
In addition to this authorization for full drawdowns, Congress enacted additional authority in 1990 to allow the President to direct a limited drawdown to resolve internal U.S. disruptions without the need to declare a “severe energy supply interruption” or comply with international energy programs. These limited drawdowns are capped at 30 million barrels. Both full and limited drawdowns rest solely with the President's authority.

Other SPR Movements

Outside of these authorities of the President over the SPR, the Energy Secretary also has the authority to direct a test sale of oil from the SPR of up to 5 million barrels. The purpose of these test sales is simply to evaluate the drawdown system of physically removing and transporting the oil from storage, as well as the sales procedure. By law, DOE is required to buy back oil from these test sales within a year.
SPR oil can also be loaned out through a process known as an exchange, in which a company borrows oil from the SPR to address an emergency supply disruption. The terms of the exchange include the date by which the company is required to resupply the SPR with the amount of oil it borrowed, plus an additional amount of oil as “interest.”
Lastly, Congress can enact laws to authorize additional sales of oil from the SPR. These non-emergency sales typically respond to smaller supply disruptions and/or raise funds for specific purposes, such as the Bipartisan Budget Act authorization to sell a portion of the SPR's oil to pay for modernization of the SPR system and to deposit proceeds in the general fund of the Department of the Treasury.

Sales process

Regardless of the authority or reason behind it, oil from the SPR is sold through a competitive process. The DOE issues a Notice of Sale in the Federal Register detailing the volume, characteristics, and location of the oil for sale, as well as the procedural information for bidding on that oil. After the official authorization for a sale, it typically takes about two weeks for the oil to begin moving, at rates of up to 4.4 million barrels per day.

Emergency drawdowns in SPR History

Since the embargo of the 1970s, there have been a handful of significant spikes in oil prices and interruptions to the U.S. and world supply caused by international conflict. Throughout that time, the sheer size of the U.S. reserves has provided a valuable domestic and foreign policy tool.
There have only been three emergency drawdowns in the SPR's history. The first came in 1991, when President George H.W. Bush released 17.3 million barrels of SPR oil for sale to restore stability in world oil markets in response to the Persian Gulf War. In 2005, President George W. Bush called for the second emergency drawdown, releasing 20.8 million barrels in response to the damage Hurricane Katrina did to oil production and transportation infrastructure on the Gulf Coast. Most recently, in 2011, President Barack Obama authorized the largest presidential release yet, 30 million barrels, in response to Middle East turbulence and the subsequent disruption to the worldwide and U.S. oil supply.

Debate surrounding the SPR

Despite broad agreement about the immense negative economic impacts of the oil embargo that prompted the formation of the SPR in the first place, the decisions surrounding the SPR are not without their fair share of critics and controversies.
One notable cause for debate surrounds the meaning of the language in the original authorization, specifically what exactly constitutes a “severe energy supply interruption.” This phrase was initially intended to authorize the SPR to release stocks of oil to resolve discernible, physical shortages of crude oil. However, there have been debates about whether to expand that definition, such as a proposal in the 2009 American Clean Energy and Security Act (which ultimately did not become law) to allow the SPR to build reserves of additional refined oil products (beyond the already-reserved crude oil and heating oil) and use them to mitigate drastic changes in the prices of those products independently of crude oil prices.
Other critics have pointed out that private oil inventories in the United States, excluding the SPR, far exceed the SPR holdings. Some then argue it would be better to rely on these private stocks than on any government stocks, as the free market would respond in the optimal way to prompt their release. The SPR, on the other hand, is rarely used and is more often positioned as a political tool; thus, according to these critics, keeping oil in reserve is not a role for the federal government.
Another critique of the SPR, according to some, is that the government has demonstrated itself incapable of using the stocks as it should. These critics point to times when oil prices climbed above $100 per barrel, causing significant economic disruption, without the government responding appropriately by releasing SPR oil to mitigate the price jumps. Instead, the argument goes, the markets (specifically the oil futures market, which was created well after the inception of the SPR) do a better job.
Even as recently as September 2017, in the aftermath of the devastation in the Gulf Coast by Hurricanes Harvey and Irma, President Donald Trump and his Energy Secretary Rick Perry disagreed on the importance of keeping the SPR. While President Trump’s 2018 budget proposal called for selling off half of the oil in the SPR to pay off part of the federal deficit, Secretary Perry said the hurricanes were an example and reminder of why the United States needs the SPR. Worth noting is that the Trump administration did make the decision to send 500,000 barrels from the SPR to a Louisiana refinery in order to shield the economy from higher gas prices.

Future of the SPR

In August 2016, DOE reported to Congress on the state and the long-term strategy of the SPR. The main conclusions of this report included the following:
  • To ensure the stability of the SPR going forward, the infrastructure of the system needs further investment and upkeep;
  • Adding marine terminals is critical to the future ability of the SPR to add barrels to the market in an emergency;
  • The SPR continues to benefit the economy moving forward, and further reductions in the SPR beyond those already authorized would hinder those abilities;
  • If the SPR were to expand in inventory, new storage capacity would need to be developed;
  • Expansion beyond the current four-site configuration of the SPR would violate operational requirements; and
  • Certain improvements to the management and operations of the SPR could be made with limited amendments to EPCA.
However, the debate surrounding the SPR, the U.S. oil markets, and the worldwide energy landscape are all in a constant state of flux, so knowing what comes next for the SPR requires constant attention.

Keeping up with the SPR

If you’re interested in seeing the level of the reserves or watching the movement of oil into and out of the SPR, that information is publicly available to you. The Energy Information Administration’s website will let you look at the historical monthly/annual numbers for SPR stock. Additionally, the SPR website gives updates on the current inventory, broken out by sweet vs. sour crude.

The sale of oil from the SPR is uncommon enough that it will always be a newsworthy event. To be sure you keep up to date on any sales, you can sign up for email updates from the Office of Fossil Energy.  Subscribe to their email list here, making sure to select that you want information on “Petroleum Reserves.”

Sources and additional reading

History of SPR Releases– Office of Fossil Energy

History of the Strategic Petroleum Reserve

New legislation affects U.S. Strategic Petroleum Reserve– Today in Energy

Long-Term Strategic Review of the U.S. Strategic Petroleum Reserve– Report to Congress

Northeast Home Heating Oil Reserve (NEHHOR)

Statutory Authority for an SPR Drawdown

Strategic Petroleum Reserve- Office of Fossil Energy

Strategic Petroleum Reserve sales expected to start this month– Today in Energy

The Strategic Petroleum Reserve: History, Perspective, and Issues– Congressional Research Service

 

 


Federal Register Notice: Test Procedure for Distribution Transformers: Request for Information

The Department of Energy (DOE) published a Notice of Request for Information (RFI) in the September 22, 2017 issue of the Federal Register (82 FR 44347) on the test procedure for distribution transformers. This article is intended to break down what exactly the DOE is requesting, the steps that come before and after this notice, and what it might mean for you.

What is this notice?

I’ve covered what an RFI entails in my article on the DOE’s RFI for its net metering analysis, as well as the overall federal rulemaking process in the Policy Rulemaking Process for Dummies article, so click those links for background on those aspects of this process. However, I have not yet had a chance to detail the DOE’s dealings with test procedures.



As detailed in the ‘Authority and Background’ section of the RFI, the Energy Policy and Conservation Act of 1975 (EPCA) authorizes DOE to regulate the energy efficiency of a wide array of covered consumer products and industrial equipment, including distribution transformers. DOE first established regulatory standards for distribution transformers in 2007 and most recently completed a full rulemaking process to update the energy conservation standards for distribution transformers in 2013, with the updated standards taking effect in 2016. These standards set minimum energy efficiency levels based on the type of distribution transformer, the applicable kVA rating, and the BIL rating, and the final standards can be found here.

EPCA calls on DOE to do more than just set minimum energy efficiency standards for distribution transformers (and other equipment), though. DOE is also responsible for setting the testing requirements that manufacturers must use as a basis to 1) certify to DOE that their equipment complies with standards, and 2) make representations of the efficiency of their equipment to the public (e.g., in manufacturer catalogs). In other words, the official DOE test procedure dictates the testing setup and methods by which the efficiency of the equipment is measured.

DOE currently has test procedures for distribution transformers, which can be found here; these were published in 2006, alongside the first efficiency standards for the equipment. As noted in this RFI, “EPCA requires that, at least once every 7 years, DOE evaluate test procedures for each type of covered equipment, including distribution transformers, to determine whether amended test procedures would more accurately or fully comply with the requirements for test procedures to not be unduly burdensome to conduct and be reasonably designed to produce test results that reflect energy efficiency, energy use, and estimated operating costs during a representative average use cycle.” In fact, during the 2013 update to the energy conservation standards for distribution transformers, DOE did just that, determining that the current test procedures were satisfactory and did not require amendment. However, during that rulemaking, certain stakeholders took advantage of the public comment opportunity to note that the test procedure's requirements for ‘percent of nameplate-rated load’, or PUL, might not be appropriate and should be addressed in a future test procedure rulemaking. This RFI is the beginning of that promised rulemaking on distribution transformer test procedures, set to give consideration to the PUL test requirements.

Background of Distribution Transformers

As detailed in the RFI, a transformer is “a device consisting of 2 or more coils of insulated wire that transfers alternating current by electromagnetic induction from 1 coil to another to change the original voltage or current value” (10 CFR 431.192). Distribution transformers, per the DOE definition, are specifically identified by their input and output voltages and other electrical characteristics. Put simply, distribution transformers are the pieces of equipment that take high-voltage power from transmission lines and step it down to its safe, final voltage before it is sent to customers (in their homes, commercial buildings, etc.). These distribution transformers can be found either on a utility pole or in a locked box on the ground. Depending on the area, a single distribution transformer might serve one customer (in a remote rural area) or many customers (in a dense urban area). Further, a single large industrial facility might require multiple distribution transformers of its own.

On the left is a utility-pole distribution transformer, while the right is a pad-mount distribution transformer. I can’t be the only one who has nostalgia looking at the one on the right and of using it as a base in kickball or as home base in capture the flag until my mom yelled at us to stop playing on it, right?

The full current test procedure for distribution transformers can be found here, which specifies the test system accuracy required; the methods for measuring resistance, losses, and efficiency value of the transformer; and the test equipment calibration and certification.

What is being requested

This RFI is the beginning of a full rulemaking cycle on the test procedures for distribution transformers, so this is the opportunity for stakeholders to make an early and strong impact on the direction of the rulemaking.

The main issue brought up during the 2013 energy conservation standards rulemaking with regard to the test procedure was the appropriateness of the PUL specification. The discussion centered on the idea that the PUL at which the transformers are tested, and thus the PUL on which the resulting declared efficiencies are based, is potentially not representative of the PUL at which the transformers operate during actual use. If that is the case, customers seeking the transformer that would use the least energy might be misled, and transformers that actually save more energy than others in use might be found non-compliant with regulations. To address this issue, DOE is requesting comment on the following:

  • Issue 1: Any data or information on the PUL used during the first year of service for distribution transformers;
  • Issue 2: Typical PUL values observed across the population of distribution transformers;
  • Issue 3: Whether data provided by manufacturers represents first year of service or full lifetime;
  • Issue 4: Whether transformer loads increase over time; and
  • Issue 5: How much the efficiency of a transformer affects the purchasing decision of customers.

DOE is also going to investigate the issue of temperature correction, namely whether the current practice of calculating losses by assuming the temperature inside the transformer equals an outside ‘reference’ temperature is appropriate. The concern is that the temperature inside the transformer is typically higher than the outside temperature, meaning the energy losses in practice would be higher than what is being calculated. To address this, DOE is requesting comment on the following:

  • Issue 6: Any data or information about whether calculating losses at ambient temperature or internal temperature is more representative of real transformer performance; and
  • Issue 7: Whether temperature varies with PUL.

The current test procedure specifies efficiency by a single tested PUL. DOE has engaged in some discussion on whether this is appropriate, if a different reference PUL should be used, or if transformers should be tested at multiple PULs. To this end, DOE is requesting comment on:

  • Issue 8: Any data or information on the continued use of a single PUL test requirement compared with the alternatives;
  • Issue 9: How accurately testing at multiple PULs would capture the distribution of real-use transformer operation, and how much it would increase testing costs;
  • Issue 10: How many PULs would be appropriate at which to test in a scenario of testing multiple PULs; and
  • Issue 11: Whether there are alternative metrics that should be considered to determine transformer efficiency.

Lastly, DOE also seeks comment on the sampling process and calculation methods used in the test procedure. The specific types of comments DOE seeks are the following:

  • Issue 12: Whether the sampling requirements of units to be tested should be adjusted;
  • Issue 13: Whether the efficiencies advertised by manufacturers typically represent the minimum efficiency standard, the maximum represented efficiency they are allowed to use, or some other metric;
  • Issue 14: Comment on DOE’s requirements related to alternative methods for determining energy efficiency (AEDMs); and
  • Issue 15: Whether the AEDM provisions are useful and if manufacturers use them.

Again, this RFI is just the beginning of the rulemaking process for the distribution transformer test procedure, but it also represents the best time to get involved if these test procedures affect you. The above issues are just the ones DOE specifically wants to hear about; stakeholders are more than welcome to address any other topics they find important. As mentioned in the Policy Rulemaking Process for Dummies article, comment periods such as this one represent the best opportunities to directly influence potential regulations that could have real impacts on you or your business.

Note: I have in the works a post on how to submit the most effective public comments, so if there appears to be interest on this post regarding the net metering RFI then I’ll make sure to move up publication of that subsequent post to be helpful for commenting on this Notice in advance of the comment submission deadline. Update: See here for my post on how to make the most effective public comment on a public rulemaking.

Summary of RFI details

  • DOE published an RFI asking for comments on the development of the technical and economic analyses regarding whether the existing test procedures for distribution transformers should be amended (82 FR 44347).
  • Some key specific topics DOE is interested in receiving comments on include:
    • Ways to streamline and simplify testing requirements;
    • Measures DOE could take to lower the cost of testing requirements;
    • The relation between PUL being tested and PUL actually used in the field for distribution transformers;
    • Whether current temperature correction in the test procedure is flawed;
    • How testing based on a single PUL affects the final posted efficiency of equipment; and
    • The appropriateness of the sampling and calculation methods currently used.
  • Comments are to be submitted by October 23, 2017.
  • Further information is available at the Notice’s online docket, and questions can be directed to Jeremy Dommu at the DOE Office of Energy Policy and Systems Analysis or Mary Green at the DOE Office of the General Counsel.
  • As always, feel free to contact me through the Contact page or commenting below if you have any questions you think I could answer as well.

 

Updated on October 10, 2017

 


Correlating Energy Data Sets: The Right Way and the Wrong Way

Determining the correlation between multiple sets of data, a measure of whether data sets fluctuate with one another, is one of the most useful tools of statistical analysis. Correlating data sets can be the endgame itself, or it can crack open the door to a full statistical investigation into the how and why of the correlation. Either way, knowing what data correlation is, how to correlate data sets, and what a confirmed correlation might mean are all necessary ideas to have in your tool belt.

 What is data correlation?

Generally speaking, correlation examines and quantifies the relationship between two variables, or sets of data. In statistics, data correlation is typically measured by the Pearson correlation coefficient (PCC), which ranges from -1 to +1. Whether the PCC is positive or negative indicates whether the relationship is a positive correlation (i.e., as one variable increases, the other variable generally increases as well) or a negative correlation (i.e., as one variable increases, the other variable generally decreases). The absolute value of the PCC indicates the strength of the relationship, where the closer it is to 1 the more strongly related the two variables are, while a PCC of 0 indicates no relationship whatsoever.

Source
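To make the definition concrete, here is a minimal Python sketch of the standard PCC formula. The function name and sample values are my own, made up purely for illustration, not data from the article:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length data sets."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and the two variance terms
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# A perfectly positive linear relationship gives +1
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0
# A perfectly negative linear relationship gives -1
print(pearson([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0
```

Real-world data will land somewhere between these extremes, which is exactly what the sign and magnitude of the PCC summarize.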

 

How do you calculate data correlations?

The PCC of two variables can be easily calculated with a built-in function of Microsoft Excel (if you want to know how to calculate the PCC by hand: first, kudos to you, scholar; second, see either this resource or this one for more detailed instructions on the calculation itself).



To start, list out your two variables in two columns of an Excel sheet. For this example, we’ll pull the West Texas Intermediate (WTI) oil prices and the U.S. regular grade gasoline prices during a four-month period in the fall of 2016 from the Energy Information Administration (EIA) website (for guidance on pulling data from EIA, see this previous blog post).

Link to Gasoline Price Data; Link to WTI Spot Price data

Note that the weekly prices here reflect the average price calculated for the week ending on the date listed. Also, the Cushing, OK WTI spot price reflects the price of raw crude oil at Cushing, OK, a major crude oil trading hub that is used as the price settlement point for WTI oil on the New York Mercantile Exchange (NYMEX).

Now, to find the PCC, use the Excel function CORREL. This function takes the following form:

=CORREL(ARRAY1, ARRAY2)

where ARRAY1 and ARRAY2 are the two data sets you are seeking to correlate.

Using this Excel function, we get a PCC of 0.545. Remember that a positive PCC indicates that the two arrays tend to increase with each other, and that the closer the PCC is to 1, the more closely related they are. This result of 0.545 would seem to indicate a fairly decent correlation between the price of WTI oil and the price of regular gasoline over these several months. Not only does a positive correlation between these two prices make intuitive sense (because the price of crude oil is the largest factor in the retail price of gasoline), but we can confirm it with a data visualization as well.


Note that the first graph shows the change in the two prices over time, with the date on the x-axis and the prices on the two y-axes. Visualizing the data this way, we can see that the prices climb and fall somewhat in step. The second graph shows the relationship in a different way, with the price of oil in a given week on the x-axis and the price of gas in the same week on the y-axis. Visualizing it this way, and including a trendline for the data, you again see that as one variable rises, generally so too does the other. However, it clearly isn't a direct one-to-one relationship, which is why the calculated PCC is 0.545 and not closer to 1.

As a second example, let's now find the correlation between gas prices during this same time period and the quantity of finished motor gasoline supplied to the market—as basic economic principles suggest there should be a relationship between quantity sold and price. Below we again pull the relevant data sets from EIA and use the CORREL function.

Link to Gasoline Price Data; Link to Gasoline Supplied Data

Note that the weekly prices here reflect the average price calculated for the week ending on the date listed.

For these two variables, we get a PCC of -0.173. Because the PCC is negative, this implies a negative correlation—i.e., as the gasoline price increases, the amount of gasoline sold decreases. This conclusion again makes a degree of intuitive economic sense, as when the price of something increases, the expected consumer response is to purchase less of it. However, with a PCC so far from -1, we don't necessarily see this as a very strong correlation. We can look at the data visualization for these data sets as well:

Looking at the first graph, we can again see visually what the PCC was indicating in general. As the gasoline price reaches local peaks, the amount of product supplied tends to reach local valleys, and vice versa. The second graph shows this with a negative trendline, though again it's just a slight, general trend and not very rigid, as indicated by the PCC being closer to 0 than to -1.

There’s a data correlation—what now?

So the key to answering what happens next is knowing why you were looking for a data correlation in the first place. Let's say I was examining the correlation between gas prices and oil prices because I wanted to identify the factors that best predicted gas prices going forward. For each of the two variables tested against gas prices over the four-month period in 2016, the generally expected correlation was confirmed by the data, though the PCC wasn't strong enough to definitively declare victory at having found a correlation. What would I do in this scenario?

More data

The first course of action would be to gather more information. I've only looked at 16 weeks of data, but it has been enough to give me a correlative hypothesis (increased gas prices correlate with increased WTI oil price and decreased gasoline supplied). You might take this hypothesis and expand your analysis to include more historical data and see if the same correlation holds or if it moves in a different direction. Further, you might reason out that there are more subtle interactions between the data sets that should be explored. Perhaps looking at the price of gas and the price of oil during the same week is too simplistic, and you should instead compare the price of oil with the price of gas the following week, two weeks later, or even a month later, to account for the time needed to refine crude oil into gasoline. Or, if your goal is to find the most influential correlating factors, then it goes without saying that you should test many more variables to figure out the ones with the closest correlation. For gas prices, you might consider also looking at general economic data, import/export data, production and refining data, drilling data, and much more.
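That lagging idea is straightforward to prototype: shift one series by a given number of weeks and recompute the PCC at each lag to see which alignment correlates best. Here is a sketch with made-up numbers, where the gas series is deliberately constructed to track the oil series with a one-week delay:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length data sets."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lagged_pcc(oil, gas, max_lag=3):
    """PCC of week t's oil price vs. week t+lag's gas price, for each lag."""
    results = {}
    for lag in range(max_lag + 1):
        results[lag] = pearson(oil[:len(oil) - lag] if lag else oil, gas[lag:])
    return results

# Hypothetical series: gas[t] is exactly 0.04 * oil[t-1], so the true lag is 1 week
oil = [45, 47, 46, 49, 51, 50, 53, 55, 54, 57]
gas = [1.75, 1.80, 1.88, 1.84, 1.96, 2.04, 2.00, 2.12, 2.20, 2.16]

by_lag = lagged_pcc(oil, gas)
best = max(by_lag, key=lambda k: by_lag[k])
print(by_lag, "best lag:", best)
```

On this constructed data the one-week lag comes out as a perfect correlation, while the same-week comparison does not; on real price data the differences would be far subtler, but the approach is identical.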

Test further

Once you have exhausted the data you are looking at and determined what correlates well, it is important to test those relationships to make sure any conclusions you draw are based on sound correlation. As with any type of hypothesis, a correlation is essentially meaningless unless it gets tested.

A couple methods for testing the correlation are available. First, as mentioned previously, expand your data set and put the correlation to the test on a wider set of data—either by looking further in the past to see if the correlation persists, or by using the correlation as a predictive model for future data and seeing if the relationship holds when the new data becomes available.

If you have not already done so, creating a visual representation of data, as done for the two sets of variables above, is a great way to gain understanding of your correlation (and has numerous other advantages for taking in data). As you conduct your data detective work, be sure to always check yourself by creating graphs and other visualizations to confirm suspicions and/or catch some new insights. Whenever possible, as well, work with the data yourself instead of referencing the visualizations of others. In the worlds of data and statistics, it is notoriously easy to ‘make’ the data appear to say whatever you want to say to a lesser informed audience (stay tuned for a future post on this topic).

Another important ‘test’ of sorts is one we already implicitly did when selecting our examples in the previous section—reason out why a correlation might exist. For the prices of crude oil and finished motor gasoline, the reason behind a correlation is somewhat self-evident. But if you’re looking at variables that are less obviously linked, this is where you can do research or consult with experts to determine if there exists any logical rationale to explain the correlation. Otherwise you could be grasping at straws, despite the apparent correlation—discussed in more detail next.

Recognize limitations

Being aware of the limitations of correlating data is the best defense against falling victim to the shortcomings of the technique. This idea is best illustrated in another example.

Let’s say I was continuing the above effort to find factors that I could use in the future to predict gas prices. As discussed, the spot price of WTI oil, with a PCC of 0.545, was determined to be a great candidate for correlation: a reasonable PCC, data visualizations that illuminate the relationship, and a very logical and rational reason for the two variables to be correlated. So if the oil price makes the cut at a PCC of 0.545, then I should be excited when I stumble upon a mystery variable with a PCC of 0.592!

Link to Gasoline Price Data; Source of Mystery Variable (Spoiler Alert!)

Note—Mystery Variable had no available data for the week of November 7, 2016

With a PCC of 0.592, I could feel great that I have another factor to add to my model. Looking at the data visualizations below does nothing to dispel that notion, either.

The issue is, however, not realizing that if you wade through large enough sets of data you are virtually guaranteed to find coincidental correlations. In this example, I found just such a coincidental data set by looking through the only other vast set of data I spend as much (or sometimes, shamefully, more) time with as energy-related data: fantasy football! Yes, the mystery variable that appeared to correlate decently well with U.S. gas prices from September to December of 2016 was actually the standard fantasy points scored by Washington player Chris Thompson (the missing data for the week of November 7 was due to his bye week).

The man that correlates with gas prices

After revealing the actual source of my mystery variable, you would obviously have me pump my brakes on any correlation. There is no possible explanation for why these two variables would be correlated (unless perhaps you would like to make the argument that when the price of gas goes up, Chris Thompson drives less and walks to and from practice—thus improving his cardiovascular endurance and improving his performance that subsequent week; I unfortunately could find no information on his in-season transportation habits).

The fallacy of connecting my mystery variable to gas prices would almost certainly have been exposed were you to test the correlation through expanding the data set and logical reasoning, as previously discussed. Unfortunately, other factors will not always be so obvious to rule out—which is why having as large a data set as possible is key. Even then, however, you are bound to stumble upon these coincidental correlations (for some thoroughly entertaining and statistically rigorous examples, check out the Spurious Correlations blog) when casting a wide enough net. That fact is just one of the quirky statistical truths of very large sets of data (if interested in this topic specifically, I'd highly recommend reading either or both of these two fabulous books: The Drunkard's Walk: How Randomness Rules Our Lives & The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day).
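This "wade through enough data and something will correlate" effect is easy to demonstrate. The sketch below pits a 16-week target series against 1,000 purely random series and keeps the best coincidental PCC it finds (all numbers are randomly generated; nothing here comes from EIA or fantasy football):

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length data sets."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
weeks = 16  # same length as the fall 2016 gas-price sample

# A target series with no real-world meaning at all
target = [random.gauss(0, 1) for _ in range(weeks)]

# Correlate the target against 1,000 unrelated random series and keep the best |PCC|
best = max(
    abs(pearson(target, [random.gauss(0, 1) for _ in range(weeks)]))
    for _ in range(1000)
)
print(f"best coincidental |PCC| out of 1,000 tries: {best:.3f}")
```

With only 16 data points, the best of a thousand unrelated random series will typically rival or even beat the genuine oil-price correlation of 0.545, despite meaning absolutely nothing.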

Beyond that, even if the correlation might seem sound, keep in mind one of the first things taught in introductory statistics, and also one of the first things forgotten: correlation is not causation (credit to Thomas Sowell). So while our fantasy football to gas prices comparison is a false correlation, even a true correlation does not automatically let you leap to the conclusion that one variable must be causing the other, a topic that this section of the blog will assuredly revisit in a future post. For now, though, I'll leave it to America's favorite statistician to summarize:

“Most of you will have heard the maxim “correlation does not imply causation.” Just because two variables have a statistical relationship with each other does not mean that one is responsible for the other. For instance, ice cream sales and forest fires are correlated because both occur more often in the summer heat. But there is no causation; you don’t light a patch of the Montana brush on fire when you buy a pint of Häagen-Dazs.”
― Nate Silver, The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t

 

About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.  

Federal Register Notice: Costs and Benefits of Net Energy Metering: Request for Information

I’ve been excited to write a first post in the ‘Checking in on the Federal Register’ article series, but have been waiting for the right Notice to be posted in the Federal Register. Today is finally that day, as the Department of Energy (DOE) published a Notice of request for information (RFI) in the September 15, 2017 issue of the Federal Register (82 FR 43345). This Notice is a rather brief one, so I would encourage you to read it yourself, in addition to reading this post where I’ll summarize the important details and pre-emptively answer any questions you may have.

What is an RFI?

If you’ve already read my Policy Rulemaking Process for Dummies article, you might be confused—RFIs don’t appear anywhere in that summary of the typical rulemaking process. However, an RFI falls into the category of situations where a Notice of Proposed Rulemaking (NOPR) is not the first Federal Register notice. An RFI is issued, such as in this case, when a topic is particularly complex or contentious, and so the agency solicits public feedback earlier in the process to ensure it has all the best and most up-to-date data and information available before beginning its analysis.



In this instance, DOE indicates they are preparing a cost/benefit study on net metering as a part of the Grid Modernization Initiative. DOE is likely seeking out all possible resources and data sets from stakeholders because net metering is a very contentious issue and it is crucial to have all possible information before digging into the study.

Background of Net Metering

According to DOE, net metering, or net energy metering (NEM), is “the practice of using a single meter to measure consumption and generation of electricity by a small generation facility (such as a house with a wind or solar photovoltaic system). The net energy produced or consumed is purchased from or sold to the power provider, respectively.” Thus, when these customers who generate their own electricity have generated more power than they are using, they are able to sell their excess electricity back to the utility that controls the transmission and distribution system to which they are connected.
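Mechanically, the "netting" DOE describes is simple arithmetic, sketched below for a hypothetical solar household. The rate, the usage figures, and the full retail-rate crediting are all illustrative assumptions, and whether the credit should really be at the retail rate is exactly the rate-design debate at issue:

```python
# Hypothetical month for a home with rooftop solar (all numbers illustrative)
RETAIL_RATE = 0.13  # $/kWh, flat retail rate assumed for both purchases and credits

consumed_kwh = 900   # electricity the household drew in total
generated_kwh = 750  # electricity the rooftop panels produced

# A single net meter records only the difference
net_kwh = consumed_kwh - generated_kwh

# Under full retail-rate net metering, a negative net_kwh would become a credit
bill = net_kwh * RETAIL_RATE
print(f"net usage: {net_kwh} kWh, bill (negative = credit): ${bill:.2f}")
```

Utilities argue that billing only on `net_kwh` at the retail rate lets solar customers avoid contributing to fixed grid costs; solar advocates argue the retail rate fairly values the power fed back in.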

The debate on net metering often pits companies and customers installing rooftop solar panels against utilities. The main crux of the debate surrounds rate design and the issue of whether customers whose excess solar power gets sold back into the larger utility grid system should be credited at the retail rate of electricity or if that unfairly results in these solar customers not paying their fair share of the grid upkeep costs.

These disputes have commonly gone to court in states with frequent residential solar power installations. For a quick rundown of where each state stood on laws regarding net metering (as of late 2016), see this summary by the National Conference of State Legislatures. For a summary of the most contentious net metering legal battles going on in 2017, see this write up by Utility Dive. However, since this is such a common and frequent debate across many different states, the best way to see the latest news is a quick Google search—the most recent development being just-approved rules and regulations for net metering in Nevada.

What is being requested

In short, this RFI requests that stakeholders submit any relevant data, studies, or information they have regarding the costs and benefits of net metering. DOE specifically requests information from the perspectives of the utilities’ business interests, the rate-paying consumers, and those tasked with addressing the technical and operational challenges of net metering.

As such, this RFI casts a rather wide net as to who prospective stakeholders would be—not only will the expected utilities and energy advocates likely submit comments, but the door is open for any private citizen or group of citizens to express their thoughts and concerns. As mentioned in the Policy Rulemaking Process for Dummies article, comment periods such as this one represent the best opportunities for you, either as a private citizen or a member of an organization, to directly impact potential regulations that could have real impacts on your life.

While there are numerous public studies available that DOE can, and certainly already has, referenced for their information gathering process, the key to this comment request is that they would like those with deep knowledge and experience with the topic to provide comment and context on those existing studies (specifically citing studies done since the beginning of 2012). DOE is hoping to hear if stakeholders find there to be any flaws in commonly cited studies on the topic, information that is not discussed in those studies, or data/information that stakeholders may have internally that have not yet been made public. In essence, this RFI from DOE is indicating they are studying the costs and benefits of net metering, and if you have any information you think is important to be included in that study then now is the time to raise your hand. The analysis generated from the information in this RFI will ultimately be presented to Congress.

Note: I have in the works a post on how to submit the most effective public comments, so if there appears to be interest on this post regarding the net metering RFI then I’ll make sure to move up publication of that subsequent post to be helpful for commenting on this Notice in advance of the comment submission deadline.

UPDATE: See this blog post for advice on making an effective public comment

Summary of RFI details

  • DOE published RFI asking for comments on the costs and benefits of net energy metering (82 FR 43345).
  • Specific topics DOE is interested in receiving comments on include:
    • Motivations and policy context of cost-benefit analyses of net metering;
    • Types of costs and benefits that should be considered for net metering;
    • Methodology issues typically encountered in net metering studies;
    • Context for what drives differing costs or benefits in different net metering studies; and
    • Any emerging issues that should be considered in future net metering studies that may not have been relevant to studies in the past.
  • Comments are to be submitted by October 30, 2017.
  • Further information is available at the Notice’s online docket, and questions can be directed to Kate Marks at the DOE Office of Energy Policy and Systems Analysis.
  • Feel free to contact me, through the Contact page or commenting below, if you have any questions you think I could answer as well.

 

 

 

Updated on October 10, 2017


Navigating the Vast EIA Data Sets

The Energy Information Administration (EIA) is an independent arm of the Department of Energy (DOE) that is tasked with surveying, analyzing, and disseminating all forms of data regarding energy in the United States. Further, EIA is a politically isolated wing of DOE, meaning it is there to provide independent and factual data and analysis, free from the partisan decision makers in Washington or the political inclinations of those at the top of DOE. Because that is the case, you can be confident the data put out by EIA is not driven by any agenda or censored in favor of a desired conclusion.

Thus for anyone with even a passing interest in the national production and use of energy, EIA really is a treasure trove of valuable information. However, those who are unfamiliar with navigating the EIA resources can easily get overwhelmed by the vastness of the data at their fingertips. Additionally, even seasoned veterans of the federal energy landscape might find it difficult to find the exact piece of data for which they are digging within the various reports and data sets made publicly available on the EIA website. So regardless of your experience level, what follows is a brief guide to what type of information is available as well as some advice as to how to make the best use of your time surfing around EIA.gov.



Types of data available

One of the really fabulous things about the EIA data sets is that they cover every kind of energy you can imagine. The energy categories you can focus on include, but are not limited to, the following:

Within these energy categories, you can look at the trends of production, consumption, imports/exports, and carbon dioxide emissions going back years (oftentimes even decades) and also modeled as a forecast into the coming years. Most data sets will have tools to automatically manipulate the data to change between units (e.g., total barrels of oil vs. barrels of oil per day), or even manipulate data trends (e.g., go from weekly data to 4-week moving averages to 10-year seasonal averages). Depending on the type of data, these numbers are regularly updated weekly, monthly, and/or yearly. If there’s a topic of particular interest, there’s a good chance there’s a report with the data on it being released at regular intervals– some of the more prominent reports are highlighted below.

Regularly updated reports

EIA releases a regular stream of reports that serve to update the publicly available data at given intervals. Some of the more prominent reports are listed below, and they are typically used to update all of the energy categories previously mentioned:

  • The Monthly Energy Review (MER) is a fairly comprehensive report on energy statistics, both from the past month and historically back a number of decades. Published during the last week of every month, the MER includes data on national energy production, consumption, and trade across petroleum, natural gas, coal, electricity, nuclear, renewables– as well as energy prices, carbon dioxide emissions, and international petroleum.
  • The Short-Term Energy Outlook (STEO) is another monthly EIA report, this one released on the first Tuesday following the first Thursday of the month. The STEO includes data on much the same topics as the MER, with the inclusion of some international energy data, and it also includes monthly and yearly projections for the rest of the current year and all of the next year based on EIA’s predictive models. The inclusion of these forecasts makes for particularly useful data sets for anyone trying to stay a step ahead of the energy markets. Also of particular interest for statistically-minded people out there is a regular comparison of numbers between the current STEO forecast and the previous month’s forecast. These comparisons show which way the model shows the data to be trending, with the more significant changes called out in the report and noted with reasoning behind them.
  • The Annual Energy Outlook (AEO), like the STEO, provides modeled projections of energy markets– though the AEO focuses just on U.S. energy markets, models these annual forecasts long-term through the year 2050, and is released every January. The other aspect of the AEO that makes it particularly interesting is that its modeled forecasts, in addition to a reference case forecast, include different assumptions on economic, political, and technological conditions and calculate how those various assumptions might affect the outlook. For example, the 2017 AEO includes projections based on high economic growth vs. low economic growth, high oil price vs. low oil price, high investment in oil and gas resources and technology vs. low investment, and a projection that assumes a complete roll-back of the Clean Power Plan.
  • The International Energy Outlook (IEO) provides forecast energy market data consistent with the AEO, but regarding the international energy market through 2040.
    • With forecasts in both the STEO and the AEO, an understanding of exactly what is meant by the forecasts is imperative. The forecasts and projections do not necessarily reflect what a human prognosticator within EIA thinks could, should, or will happen; rather, they demonstrate what the predictive models calculate given the best available unbiased inputs. This difference is a subtle one, but if you ever find yourself questioning “does the person behind this report really think this is going to happen?”, recognize that some nuance exists and that the reason for your skepticism may not yet be statistically captured in the model.
  • The State Energy Data System (SEDS) is published once annually and breaks down national energy use, price, spending, and production by sector and by individual states. Within each of these categories, you can also break down the data by energy type (e.g., coal vs. natural gas) and by primary energy use vs. electric power generation. Having this granularity is useful to further dig into if certain energy trends are regional, restricted to certain climates, or are in response to specific state policies.

While they are not necessarily releasing new and specific data on a regular basis, two other EIA articles of note are worth pointing out because of the interesting stories and analyses they tell:

  • Today in Energy (TIE) comes out every weekday and gives a quick and readable article with energy news, analyses, and updates designed to educate the audience on the relevant energy issues. TIE frequently features graphs and charts that elegantly demonstrate the data in an easy to understand but also vastly elucidating way. One of the real advantages to reading TIE each day, though, is they often include tidbits from all the previously mentioned regularly updated reports, as well as other major releases or EIA conferences, enabling you to keep up with the newest information from EIA (click here for a post on the best TIE articles of 2017 to get you started).
  • This Week in Petroleum (TWIP) is an article that comes out every Wednesday that is very similar to the TIE articles, but focuses on the world of petroleum specifically and provides crucial insights on topics such as drilling, oil company investments, retail prices, inventories, transportation of crude and refined petroleum products, and more.

If any of these regular reports are of interest to you, you can sign up to get email alerts anytime these (or a number of other) reports are released by EIA by visiting this page. If you don’t know which reports you’d want but you want to keep an eye on what EIA is putting out, you can also simply subscribe to the “This Week at EIA” list that will once a week send you an email to notify you of ALL the new EIA productions from that week.

Finding specific data

While keeping up with all the regular reports from EIA is immensely useful, what brings many people to the EIA website is the search for a specific piece of data. You might want to see a history of average gasoline prices in a certain region of the country, find the projection of how much solar capacity is expected to be added in the next few years, track how much petroleum product is being refined in the Gulf Coast, or countless other facts and figures. Below you’ll find a few strategies you can employ to track down the information you seek.

Navigating the menus

EIA.gov has a useful menu interface through which you can usually navigate to your desired dataset easily.

Source: Homepage of EIA.gov
  • The “Sources & Uses” drop down will be where you can navigate to data sets about specific fuel sources and energy use;
  • The “Topics” drop down highlights the analysis on data by EIA as well as economic and environmental data; and
  • The “Geography” drop down is where you can navigate data by state or look at international data.

Navigating from these menus is fairly self-explanatory, but let’s walk through the example of finding the recent history of gasoline prices in the Gulf Coast region of the United States. Gasoline is a petroleum product, so we would click on “Petroleum & Other Liquids” under the “Sources & Uses” menu.

Once on the “Petroleum & Other Liquids” page, the information we’re interested in would be under the data menu with the “Prices” link.

Source: Landing page for EIA.gov/petroleum

You’ll then see a listing of various regular releases of petroleum product price reports and data sets. Since we’re interested in Gulf Coast gasoline prices, we’ll click the third link for “Weekly retail gasoline and on-highway diesel prices.”

Source: EIA’s Petroleum and Other Liquids Prices

Clicking on this report will bring up the below interactive table. The default view will be to show U.S. prices averaged weekly. The time frame can be adjusted to monthly or annual prices (we’ll keep it at weekly). The location of the prices can be changed to allow viewing of data by region of the country or by select states and cities (we’ll change it to the Gulf Coast). The interactive table then displays the most recent week’s data as well as the previous five weeks (note: for ‘gas prices’ as is most often reported in the media and related to people filling up the gas tanks in their cars, we’re interested in the row titled ‘Regular’).

Source: EIA’s Weekly Retail Gasoline and Diesel Prices

If you’re interested in going further back in time than shown in the interactive table, the ‘View History’ links can be clicked to bring up an interactive table and graph going as far back as EIA has data (1992, in this case), shown below. Alternatively, if you want the raw data to manipulate yourself in Microsoft Excel, click the ‘Download Series History’ link in the upper left (I’ll download and keep this data, perhaps handy for later in this post).

Source: EIA’s Weekly Gulf Coast Regular All Formulations Retail Gasoline Prices

Note in the above interactive chart there are built-in abilities to view the history by weekly/monthly/annual data, to download the source data, or to adjust the data to show a moving average or seasonal analysis.

If you find a page with the type of information you’ll want to reference regularly or check in on the data as they update, be sure to bookmark the URL for quick access!

STEO Custom Table Builder

Another useful tool is the STEO Custom Table Builder, which can be found here. The Custom Table Builder allows you to find all of the data that is included in the monthly STEO report (e.g., U.S. and international prices, production, and consumption for petroleum products, natural gas, electricity, coal, and renewable energy; CO2 emission data based on source fuel and sector; imports and exports of energy commodities; U.S. climate and economic data broken down by region; and more). This data can be tracked back to 1997 or projected forward two years on a monthly, quarterly, or annual basis. All you need to do is go to the Custom Table Builder, shown below, and select the options you wish to display.

Source: EIA’s Custom Table Builder

As an example, let’s use the STEO Custom Table Builder to determine the projected solar power capacity in the near term. Solar falls under the ‘U.S. Renewable Energy’ category, so click to expand that category, then expand ‘Renewable Energy Capacity,’ and you’ll see the STEO has data for the capacity of large-scale solar for power generation, large-scale solar for other sectors, and small-scale solar for other sectors.

Source: EIA’s Custom Table Builder

Select all the data relevant to solar, select the years you want (we’ll look at 2017 thus far through the end of 2018), and select the frequency you want for the data (we’ll look at monthly). Then hit submit, and the following custom table will be built for you.

Source: EIA’s Custom Table Builder

Note: The forecast data is indicated in the Custom Table Builder with the numbers shown in italics. The above data was pulled before the September 2017 STEO was published, so the projections begin with the month of August 2017.

For this example, we’ll want to then download all the data to excel so the total solar capacity can be added up and analyzed. Click the ‘Download to Excel’ button at the upper right to get the raw data, and with a few minutes in Microsoft Excel you can get the below chart:
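If you’d rather script that last aggregation step than do it in Excel, it is just a column sum per month. Here is a minimal sketch, with made-up capacity figures standing in for the three downloaded STEO solar series:

```python
# Made-up monthly capacity figures (GW) standing in for the three STEO solar
# series; the real values come from the 'Download to Excel' file described above
months = ["2017-06", "2017-07", "2017-08"]
large_scale_power = [20.0, 21.0, 22.0]  # large-scale solar, electric power sector
large_scale_other = [0.4, 0.4, 0.5]     # large-scale solar, other sectors
small_scale = [14.0, 14.2, 14.4]        # small-scale solar, other sectors

# Total U.S. solar capacity is simply the sum of the three series each month
total = [
    round(a + b + c, 1)
    for a, b, c in zip(large_scale_power, large_scale_other, small_scale)
]
for month, gw in zip(months, total):
    print(month, gw, "GW")
```

The same per-month sums, charted over time, give the kind of total-capacity graph shown below.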

Source of Data: EIA.gov, pulled on September 10, 2017

This graph, made strictly from STEO Custom Table Builder data, shows the following:

  • As of July 2017, large-scale solar generation capacity was 23.7 GW in the power sector and only 0.3 GW outside of it, while small-scale solar generation capacity was 14.8 GW.
  • Together, solar power capacity in the United States added up to 39.1 GW as of July 2017.
  • By the end of 2018, total solar power capacity is projected to rise to 53.7 GW (an increase of 14.5 GW, or 37%), according to the EIA’s August 2017 STEO.

Search function

Using the search bar on some websites can be surprisingly frustrating, but luckily the EIA search function is very accurate and useful. So I have found that, when in doubt, simply doing a search on EIA.gov is the best option.

Perhaps I want to track the amount of petroleum products produced on the Gulf Coast. This information is not in the STEO report, so the Custom Table Builder won’t be of use. And maybe I don’t immediately see how to navigate to this specific information from the menus. I would type the data I’m seeking into the search bar, as specifically as possible, e.g. ‘weekly gulf coast refiner gasoline production’:

Source: Homepage of EIA.gov

Doing the above search yields the below results, of which the first one looks like just what we need.

Source: EIA.gov

Click on that first link, and ta-da! We’re taken to the weekly gasoline refinery production report for the Gulf Coast (referred to as PADD 3). Again, you see the options here to look at the history back to 1994 on both a weekly and a 4-week-average basis, use the chart tools for moving averages or seasonal analyses, or download the data to use in your own way.

Source: Weekly Gulf Coast Refiner and Blender Net Production of Conventional Motor Gasoline
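The 4-week-average view mentioned above is just a trailing moving average of the weekly series, which you can also compute yourself after downloading the data. Here is a minimal sketch with pandas, using made-up production numbers (thousand barrels per day) rather than real EIA values:

```python
import pandas as pd

# Made-up weekly Gulf Coast gasoline production figures (thousand bbl/d),
# standing in for the downloaded EIA weekly series
weekly = pd.Series(
    [2400, 2450, 2500, 2550, 2600, 2300],
    index=pd.date_range("2017-07-07", periods=6, freq="7D"),
    name="gulf_coast_gasoline_production",
)

# A trailing 4-week mean, matching the "4-week average" view on EIA.gov
four_week_avg = weekly.rolling(window=4).mean()
print(four_week_avg.dropna().tolist())  # → [2475.0, 2525.0, 2487.5]
```

Note how the final average is pulled down by the last week’s dip, which is exactly why the 4-week view smooths out one-off swings in the weekly data.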

Contact experts

As a last resort, the EIA website offers resources to contact should you have questions or issues navigating the data. The people behind the EIA data are civil servants who are intelligent and very dedicated to their jobs and to making sure you get the accurate and relevant information you need. So in a pinch, head to the Contact Us page and find the topic on which you need help from a subject matter expert.

If you want an alternative to going straight to the people at EIA, however, feel free to contact me and I’d be happy to help you track down information on EIA.gov. Use any of the contact methods mentioned on the Contact Page of this site, or leave a comment on this post.

Using the data

I have found that it is not at all an exaggeration to say that the world (of energy data, at least) is at your fingertips with EIA’s publicly available data. To demonstrate, I’ll walk through a quick example of what you can find.

If we take the previously gathered weekly data for Gulf Coast gasoline prices and gasoline production, we can plot them on the same graph:

Source of Data: EIA.gov, pulled on September 10, 2017

By taking advantage of the publicly available data on EIA’s website, we can notice some trends on our own. In the above, there is a drastic increase in Gulf Coast gasoline prices, coincident with a large decrease in Gulf Coast refiner production of gasoline that bucks the months-long trend of generally increasing production. This is a curious change and would prompt investigation into the reason why. Luckily, several of EIA’s Today in Energy articles already point out this trend and offer explanations, all related to the effects of Hurricane Harvey on the Gulf Coast petroleum systems (Article 1, Article 2, Article 3). It just goes to show that one of the best ways to stay abreast of trends and information in the energy world is to follow EIA’s various reports and analyses.
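A chart like this, with prices and production sharing one time axis despite very different units, can be sketched in matplotlib using two y-axes. The weekly values below are illustrative stand-ins, not real EIA data:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import pandas as pd

# Illustrative stand-ins for the two downloaded weekly series; the real
# values come from EIA.gov (prices in $/gal, production in thousand bbl/d)
idx = pd.date_range("2017-08-04", periods=5, freq="7D")
price = pd.Series([1.55, 1.60, 1.65, 2.10, 2.00], index=idx)
production = pd.Series([2500, 2550, 2600, 1800, 2000], index=idx)

# Twin axes let the two series with different units share one time axis
fig, ax_price = plt.subplots()
ax_prod = ax_price.twinx()
ax_price.plot(idx, price, color="tab:blue", label="Gasoline price")
ax_prod.plot(idx, production, color="tab:orange", label="Gasoline production")
ax_price.set_ylabel("Price ($/gal)")
ax_prod.set_ylabel("Production (thousand bbl/d)")
fig.savefig("gulf_coast_gasoline.png")
```

With real downloaded data in place of the placeholder series, the Harvey-driven price spike and production drop show up as mirror-image moves in the same week.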

 

Updated on September 28, 2017


About the author: Matt Chester is an energy analyst in Washington DC, studied engineering and science & technology policy at the University of Virginia, and operates this blog and website to share news, insights, and advice in the fields of energy policy, energy technology, and more. For more quick hits in addition to posts on this blog, follow him on Twitter @ChesterEnergy.