by Pete Maltbaek

For more than 100 years, electricity has been reliably provided to end users through a centralized generation and transmission model.
Large coal, hydro and (later) nuclear generating facilities produced huge amounts of electricity and, through a spider’s web of high-voltage transmission lines, sent the power to distribution substations, which in turn, through a secondary set of lower-voltage feeders, distributed the power out to the end users. And when the end user flipped the switch, their lights would go on. This system was very reliable.
As we move into the 21st century, centralized electrical generation is being replaced by renewable and distributed energy resources (DERs) like wind and solar, many of which are being deployed at the edge of the grid. These DERs are fast approaching cost-parity with traditional resources.
With the economics lining up, it is hard to know why anyone would not want to use the wind and the sun as our primary generation resources. After all, the fuel is free and inexhaustible (the renewable part), and generally non-polluting, with no carbon emissions or dangerous nuclear waste. And of course, people do want this – Hawaii, California, Washington D.C. and now New Mexico have all enacted legislation to eliminate carbon emissions from the grid and move to 100 percent renewable electrical generation – and a dozen other states have similar goals.
Why doesn’t every state and the federal government immediately adopt the same mandates? Well, two of the main technical challenges created by the proliferation of DERs are maintaining power supply reliability and building grid resiliency.
The intermittency problem
To be reliable, the electrical grid needs to be in constant balance, with the amount of power generated equal to the amount demanded at any instant in time. Previously, it was possible for utilities to be very certain about how much power large centralized plants would produce.
Renewable generation such as wind and solar is not so reliable. As these intermittent sources of power become a larger part of the generation mix, it is becoming much harder for utilities to be sure that they can maintain this balance of supply-and-demand.
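This balancing act can be sketched in a few lines of code. The numbers below are hypothetical, chosen only to illustrate the point: any gap between what intermittent sources actually deliver and what the forecast promised must be covered, within seconds to minutes, by dispatchable reserves.

```python
# Illustrative sketch with hypothetical numbers: the instantaneous balance a
# grid operator must maintain. Generation must equal demand at every instant;
# any shortfall from intermittent sources must be made up by dispatchable
# reserves (or by shedding load).

DEMAND_MW = 1000.0          # instantaneous system demand
BASELOAD_MW = 700.0         # firm, dispatchable generation online
SOLAR_FORECAST_MW = 300.0   # solar output expected under clear skies

def reserve_needed(cloud_cover: float) -> float:
    """Dispatchable power (MW) needed to keep the grid balanced.

    cloud_cover: fraction of forecast solar output lost (0.0 = clear sky).
    """
    solar_actual = SOLAR_FORECAST_MW * (1.0 - cloud_cover)
    return DEMAND_MW - (BASELOAD_MW + solar_actual)

print(reserve_needed(0.0))  # clear sky: system exactly balanced
print(reserve_needed(0.5))  # a passing cloud bank: 150 MW must ramp up, fast
```

The point of the sketch is the second call: a single weather event instantly opens a gap that some other resource must fill, which is exactly the control problem the rest of this section describes.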
An additional problem for the utility engineers who work diligently to keep the lights on is that under the old centralized model, the control systems were also centralized. The utility control center could see the generation at the larger power plants and what was being consumed down to the distribution substation level, and due to the sheer size and inertia of the system, this was good enough to maintain high levels of reliability.
Today, however, many of the new DERs are being connected at the grid edge – beyond the visibility of these traditional control systems. It has been known for some time that new control technologies are required to deal with this. It has become critical that grid control technologies evolve fast enough to keep up with the increasing levels of these variable, intermittent DERs.
One of the physical answers to the intermittency problem is to store power – hoard it when it is cheap and supply it when it is scarce or when needed to counter the sudden loss of power when a cloud crosses the sun. But storing electricity has always been difficult and expensive, with the cheapest way traditionally being the water in reservoirs behind dams above hydroelectric power plants.
However, the rapidly reducing cost of batteries is changing that paradigm for everyone. Another great thing about batteries is that they can react quickly when needed, supplying or absorbing power in seconds when conditions demand fast action. To make economic sense, however, the power used to recharge the battery has to be cheaper than the value of the power returned.
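That last economic condition can be made concrete with a toy calculation. The prices and efficiency below are hypothetical, but the structure is general: because round-trip losses shave off part of what you stored, the spread between the charging price and the discharging price has to be wide enough to cover those losses.

```python
# Hypothetical numbers: does one battery charge/discharge cycle pay off?
# The power used to recharge must be cheaper than the value of the power
# returned, after accounting for round-trip losses.

def cycle_profit(charge_price: float, discharge_price: float,
                 energy_mwh: float = 10.0,
                 round_trip_eff: float = 0.90) -> float:
    """Profit ($) of one arbitrage cycle.

    charge_price / discharge_price are in $/MWh; energy_mwh is the energy
    drawn from the grid to charge; round_trip_eff is the fraction of that
    energy the battery can deliver back.
    """
    cost = energy_mwh * charge_price                      # buy cheap
    revenue = energy_mwh * round_trip_eff * discharge_price  # sell dear, minus losses
    return revenue - cost

print(cycle_profit(20.0, 60.0))  # wide overnight-to-peak spread: profitable
print(cycle_profit(50.0, 55.0))  # narrow spread: the losses eat the margin
```

In the second case the battery loses money even though it sold power for more than it paid, because the 10 percent round-trip loss is larger than the price spread.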
Mother nature and bad actors
Grid resiliency refers to the ability of the grid to survive two different kinds of attack — one by Mother Nature in the form of weather, and the other by humans, in the form of cyber attempts to take the grid down.
Here again, the old centralized model had a degree of robustness. Firstly, against natural disaster, it was designed to survive the sudden loss of its largest generating resources or transmission lines (usually any two of them). Secondly, since there were few (and, by today’s standards, not very intelligent) communicating grid-edge devices, it was difficult to hack into the utility’s control systems.
However, when something the size of a hurricane rolls in, even if it causes major damage only locally, catastrophic failure of the electrical supply over large areas can result, and worse, it can take a long time to restore this huge system once it is down. Similarly, there is always the fear that a bad actor, if they ever gained access to the central control systems, could cause widespread disruption.
The proliferation of DERs, theoretically at least, affects resiliency both positively and negatively.
On the positive side, since generation (at a small scale) could eventually be as widespread as consumption, it should be possible to both survive catastrophic and widespread failure of the large, centralized system by quickly breaking it into many smaller self-contained parts (called microgrids). It should also be easier to restore the whole system more quickly if some areas do fail – or at least restore many of these microgrids independently and more quickly. Essentially this would occur by restoring from the outside (grid-edge) in, instead of from the inside-out, as is done today.
On the negative side, the economies of scale make it difficult for a microgrid to compete on price with grid-supplied power. This is changing with the free fuel supplied by the wind and sun, but then there is also the engineering problem that the smaller size of the microgrid provides less inertia to help with the intermittency problem.
On the cyber side, the proliferation of intelligent grid-edge devices that can communicate bidirectionally with the utility’s control systems appears to multiply the opportunity for someone somewhere to find a way in. Perhaps even through your EV’s computers when you are charging your car. This is a real problem.
We are off to the races here
I have touched on two of the issues that are most important to our policymakers, and also ones that an army of engineers in dozens of companies around the world are trying to solve. The federal government also has a robust research agenda relating to grid modernization, with cutting-edge work occurring in many of the seventeen federal labs.
Personally, I am certain these problems can be solved, and the forces changing the way we will generate the bulk of our electric power, and changing the way we operate the grid, are unstoppable. The new “solar-plus-storage” mantra is everywhere, and for good reason. I don’t know anyone who wouldn’t want to power their house and at least their commuter vehicle with the free non-polluting fuel provided by their own rooftop solar system if they could, and if it made economic sense – which it just about does today and will very soon.
In summary, regulatory issues, politics and an overall dislike of disruption and change can and will slow the inevitable drive to 100% clean and renewable electric power. This disruption extends to all of the fuel-supply industries such as coal, oil and nuclear – major parts of the world economy. But I am certain that in the end, the economics will win, as they always do – you just can’t compete with fuel that is free, inexhaustible and non-polluting, not to mention zero-carbon, especially if it is what the consumer ultimately wants.
Pete Maltbaek is currently General Manager North America, Smarter Grid Solutions (SGS), and a member of the SGS executive management team. He has more than 30 years of experience working in the electric industry in North America and with utilities and regulators in over 30 countries around the world. His primary expertise is in the design and technology requirements of electric utility grid control systems, deregulated power markets, and the integration of renewable energy into grid operations.