9 Traditional Reliability Engineering Tools and Their Limitations

9.1 INTRODUCTION

The third of the five aspects of aging T&D infrastructure discussed in Chapter 1 is Engineering Paradigms and Approaches. Traditional power system engineering methods, and the paradigms built around them, are as much a part of existing power systems, and of their strengths and weaknesses, as old equipment and obsolete system layouts. Many of the key concepts and engineering approaches used in “modern” power systems were created in the mid-20th century, at a time when electric consumer demands, societal expectations, and engineering criteria were far different from today’s. Like all engineering methods, those of the mid-twentieth century had their limitations and advantages. Those limitations were for the most part well understood by their developers, but were far outweighed by their advantages, among the most important being that the methods could be implemented with the very limited computing resources available at the time. The power industry has carried many of these engineering methods and the concepts built around them through several generations of power system engineers. Not only are these methods in widespread use, but several of these paradigms have become dogma.

This chapter will begin by examining the traditional power system reliability engineering tool: the N-1 criterion and the contingency-based planning approach. This method was first used in the mid-20th century and had been developed into efficient computerized form by the late 1960s. It, and the conceptual approach to power system planning that goes along with it, are at the core of nearly every utility’s overall power delivery planning and reliability engineering procedures. Similarly, many of the core concepts and criteria concerning the layout of distribution feeder systems were developed in the 1930s through the 1960s, and are largely unchanged in their application as the electric utility industry enters the 21st century. These will be covered in Chapter 10.

These traditional power system-planning methods proved more than adequate to meet the industry’s needs through the 1950s to early 1990s. However, as experience has shown, and this chapter will explain, they are less than completely adequate for application to aging infrastructure areas in some of today’s high-stress utility systems. This is due to an incompatibility between these traditional planning methods and the way that modern utilities need to plan, design, and operate their systems. As a result, systems planned and designed along proven but traditional lines, using tools that engineers could once confidently use, often provide poor customer service reliability and experience severe operating problems.

This chapter begins in Section 9.2 with an examination of the basic N-1 contingency-based reliability design criterion and the typical methods of its application. Section 9.3 then explores this methodology’s limitations with respect to modern needs, discusses how those limitations interact with the characteristics of modern utility systems, and shows how that interaction often results in a system that does not provide the expected level of reliability. Section 9.4 looks at some other planning-related issues that have created real challenges for aging infrastructure utilities, most notably load forecasting errors and the way they interact with system reliability. Section 9.5 provides a reminder that high utilization rates and cost reductions, per se, are not the reason that modern power systems, and particularly aging systems, tend to give poor results – rather it is the inability of traditional planning tools to fully analyze the reliability implications of design in those areas. Section 9.6 rounds out the chapter by summarizing key points and giving five recommendations for effective planning procedures to be applied to aging infrastructure areas.

9.2 CONTINGENCY-BASED PLANNING METHODS

The N-1 Criterion

The traditional power system planning method used to assure reliability of design at the sub-transmission – substation level is the N-1 criterion. In its purest form, it states that a power system must be able to operate and fully meet expectations for amount (kW) and quality of power (voltage, power factor, etc.) even if any one of its major components is out of service (a single contingency). The system has N components, hence the name N–1.

This criterion makes a lot of sense. Unexpected equipment failures happen. Expected equipment outages (maintenance) are a fact of life. A prudent approach to design reliability should include making certain that the system can perform to its most stressful required level (i.e., serve the maximum demand, the peak load) even if a failure or maintenance outage has occurred. Just how much reliability this assures depends on a number of factors that will be addressed later in this section; the fact that it does not always assure reliability is the major topic of this chapter. However, from the outset it seems clear that this criterion sets a necessary requirement for any power system that is expected to provide reliable power supply.

Extension of the concept to multiple failures

The criterion can also be applied as an “N–2” criterion or “N–3” criterion, in which case the system must be able to perform to peak requirements even if any two or three units of equipment are out of service, rather than one. Generalized, this becomes the N–X criterion: the power system will satisfy expectations even if any set of X of its components is out of service. Regardless, the method is generally referred to, and will be referred to here, as “the N–1” concept and criterion, even if X is greater than one.

Application of the Basic Contingency Planning Concept

Typically, this concept is applied as a criterion in the planning and engineering of the transmission/sub-transmission/substation portion of an electric transmission and distribution (T&D) system – the portion from 230 kV down to 69 or possibly 34.5 kV. At most electric distribution utilities, it is not applied to the distribution feeder system. Instead, techniques and analytical planning methods devised for application to radial power flow systems are used (see Section 9.3).

The base case

Application of the N–1 criterion begins with a base case, a model of the power system as designed or planned, with all equipment in place and operating as intended. An appropriate engineering description of all this equipment, along with a set of expected peak loads that it will serve, forms the complete base case. In all modern utility planning procedures, this base case is a data set representing the system, to be used in a load-flow analysis. A digital computer analysis will determine, for the system represented by that data set, the power flows, voltages, power factors, and equipment loadings that will result when that equipment set is asked to serve that demand. Various “minus one” or “contingency” cases are then run using this model as the base, literally deleting one element of the data set at a time and “re-solving” the model to see what effect that loss has on voltages, currents, etc.

At the time the N–1 method was first developed (prior to the availability of digital computers), the base case was an analog computer model built using patch-cord connections and numerical settings of rheostats and switches on a network analyzer – essentially an analog computer built to simulate power system behavior. Since the mid-1960s, digital computer programs that solve the load flow computation using a set of simultaneous equations have been used. By the end of the 20th century, these programs had become very specialized, with features and analytical tricks employed to make them fast, robust, and dependable when applied to contingency analysis.

But regardless of the type of engineering computer being used, studies that build upon, or more properly “delete upon” a base case are the foundation of the contingency-analysis method. As a first step, a base case representing the system in “normal” form, i.e., with all equipment in operation and fully functional, is set up and solved, making sure that it (a system with all equipment operating) fully satisfies all loading, power quality, and operating criteria.

Contingency cases

Variations from this base case are then conducted as a series of “contingency studies.” In each contingency study, one particular unit of equipment – a key transformer or line or bus, etc. – is removed from the system database, and the remaining system’s performance is studied using the engineering analysis model (load flow) applied to this “contingency case model.” The analysis determines if the system can still serve all the demand, while remaining within specified operational criteria (see Table 9.1), with this one unit out of service, or “outaged.” If not, additions or upgrades are made to the system model until the case does meet the criteria. Once the first contingency case is completed (the study for the first component in the system), the method proceeds to study the outage of the second. It begins with a “fresh copy” of the base case and removes the second unit in the equipment list, again performing its analysis. In this way, it proceeds through all N components, outaging each one and identifying whether performance in that situation is sub-standard, thus giving planners an indication of where problems in the system lie, and what the problems are.

Relaxation of design standards for contingencies

In most cases, electric distribution utility planners allow the contingency cases to meet less stringent requirements for loading, voltage, or other design goals than the “base” (no contingencies) case. For example, loading criteria may state that in the base case, no component can be loaded beyond 100% of its normal rating. However, during any single contingency a loading of 115% might be accepted, and during a double contingency, a loading of 125%. Table 9.1 lists the voltage and loading requirements for several utilities in the U.S. as a function of contingency situation.

Table 9.1 Transformer Loading Limits and Voltage Criteria for Various Contingency Situations Used By Four Electric Utilities in the U.S.

Image

Application of N–1 using a Computer Program

As originally conceived, prior to the existence of really powerful digital computers for power system studies, this process was done with an analog computer. Each case was set up and studied on an individual basis by the utility’s power system planners, by adjusting settings and patch-cord connections on the analog computer. For this reason, initially (1950s) often only the 100 or so most important components (out of several thousand in a large power system) could be studied for contingency outage. However, beginning in the late 1960s, programs on digital computers were developed which would automatically check all single contingencies in the system [Daniels, 1967]. These programs became a staple of utility system planning.

A modern contingency analysis program works along the lines shown in Figure 9.1. It is built around a load-flow program – a digital program that takes a data set describing the power system and the loads it is to serve, and solves a set of simultaneous equations to determine the voltages, flows, and power factors that can be expected in that system. In an automatic contingency analysis program, the basic load flow program is augmented with an outer loop, which automatically cycles through this power system data set, removing each unit of equipment and line, in turn, and solving the load flow analysis for that particular contingency case.

For each such case, the program checks the results of that contingency case and reports any loadings or voltages that are out of acceptable range. It then “restores” that outaged component in the database, removes the next unit in turn, and runs that contingency case, cycling through all components in the system.

Image

Figure 9.1 Basic approach behind the N–1 contingency planning approach. Engineering studies cycle through all components of the system and outage each one, studying what loadings and voltages would result. The system is considered to meet “N–1” criterion when all such contingency cases result in no out-of-range loadings or voltages.

A system plan was considered acceptable when this type of evaluation showed that no voltage or loading standards violations would occur in any of these single-contingency cases. Figure 9.1 illustrates this basic approach.
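The outer-loop structure described above can be sketched in a few lines. This is a minimal toy model, not any real load-flow engine: the `ToyCase` class and its simple “partner picks up the load” re-solve are hypothetical stand-ins for a real program’s network data set and simultaneous-equation solution.

```python
from dataclasses import dataclass, field

@dataclass
class ToyCase:
    """Hypothetical stand-in for a load-flow data set: each component
    carries a peak load (as a fraction of rating) and names its
    designated backup unit. A real program would model buses, lines,
    and loads and solve simultaneous equations."""
    loads: dict = field(default_factory=dict)   # component -> load fraction
    backup: dict = field(default_factory=dict)  # component -> support unit

def n_minus_1_screen(base: ToyCase, emergency_limit: float = 1.33) -> dict:
    """Cycle through the system, outage each component in turn, and
    report any case where the supporting unit's post-contingency
    loading exceeds the emergency rating."""
    violations = {}
    for comp in base.loads:
        partner = base.backup[comp]
        # Toy "re-solve": the partner simply picks up the outaged
        # unit's entire load.
        post_load = base.loads[partner] + base.loads[comp]
        if post_load > emergency_limit:
            violations[comp] = round(post_load, 2)
    return violations

case = ToyCase(loads={"T1": 0.66, "T2": 0.66, "T3": 0.90, "T4": 0.90},
               backup={"T1": "T2", "T2": "T1", "T3": "T4", "T4": "T3"})
print(n_minus_1_screen(case))   # the 90%-loaded pair fails the 133% limit
```

Running the screen on this four-transformer example flags T3 and T4: a pair loaded to 90% each would reach 180% of rating during a single contingency, while the 66%-loaded pair stays within the 133% emergency limit.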

A Successful Method in the Mid and Late 20th Century

From the early 1960s through the early 1990s, the vast majority of electric utilities applied this basic approach, with a number of slight variations here and there. Typically, a utility would design its system to completely meet the N–1 criterion (the system can perform despite the loss of any one component) and to meet certain N–2 criteria (a set of specific two-failure conditions thought to be critical enough to engineer the system to tolerate). Systems designed using this approach produced satisfactory performance at costs not considered unreasonable.

Supposedly Good Systems Begin Giving Bad Results

In the late 20th and early 21st centuries, the operating record of large electric utilities in the United States revealed increasing problems in maintaining reliability of service to their customers. During the peak periods in the summers of 1998 and 1999, a number of power systems that fully met the traditional N–1 criterion experienced widespread outages of their power delivery systems. Table 9.2 lists only some of the more significant events in 1999, as identified by the U.S. Department of Energy. In particular, the ComEd system’s (Chicago) inability to provide service was disturbing, because that system was designed to N–1 levels of contingency standards and even met an N–2 and N–3 criterion in some places. There were a sufficient number of these events to make it clear that ComEd was not an isolated or atypical situation [U.S. DOE, 2000].

Table 9.2 Major System Outages Identified By US DOE in 1999

Image

Perhaps more significantly, the level of reliability-related problems on U.S. systems had been growing for several years prior to 1999. Based on analysis of industry survey data, the authors first noticed the trend in 1994. While at that time there were few widespread customer outage events, throughout the 1990s there was a growing “background noise level” of operating emergencies, equipment overloads, and frequent if small customer interruptions on many utility systems. Even without the major events cataloged in the DOE report (Table 9.2) the industry’s record of customer service quality was falling in many regards prior to 1999.

9.3 LIMITATIONS OF N-1 METHODOLOGY

This section explains the limitations that N–1 power system planning and design techniques encounter when applied to modern (high utilization factor, lean margin) power systems. Contingency-based planning, like all engineering methods, is based on certain assumptions, uses approximations in key areas, and has limitations in its accuracy and range of application. And like other engineering methods, if applied to situations within which these assumptions, approximations, and limitations do not seriously hinder its accuracy and effectiveness, the method provides very good, dependable results. But if applied outside that range, its results may prove undependable, and the system plan may provide unsatisfactory performance.

Gradually, throughout the last quarter of the 20th century, the electric utility industry changed how it used and operated its power systems. Some of those changes meant that their systems operated in states and in situations for which the N–1 approach was not completely valid. The resulting incompatibility of method versus use of the resulting system contributed to many of the reliability problems experienced by utilities.

Of necessity, in order to fit within the space available in this book, this chapter’s discussion is somewhat shortened and contains certain simplifications with respect to a “real power system.” This abridgment allows the reader to get quickly to the heart of the matter and keeps secondary and tertiary factors from distracting from the main theme. Therefore, this discussion makes the following assumptions or simplifications with respect to the system being discussed as an example here:

• All equipment of a specific type will be of the same capacity, e.g., all substation transformers are the same capacity.

• All equipment is loaded to the same peak level, that being the average utilization ratio for the system at peak.

• All equipment units of any one type (e.g., all substation transformers) have the same failure rate.

The reader familiar with power systems will recognize all three as great simplifications of the many details that complicate power system planning and reliability engineering. However, real-world variations from these assumptions do not diminish the limitations and phenomena that will be discussed in the next few pages. In fact they slightly exacerbate them: these complexities generally worsen the problems explained here.

Overall Summary of the Problem

N–1 methods and the N–1 criteria assure power system planners and engineers that there is some feasible way to back up every unit in the system, should it fail. However, they make no assessment of the following:

• How likely is it that such backup will be needed?

• How reasonable is the feasible plan for each contingency situation – or is the planner actually building a “house of cards” by expecting “too many things to go right” once one thing has gone wrong?

• How much stress might the system be under during such contingency situations, and the long-term implications for both equipment life and operating policy?

• How often will conditions occur which cannot be backed up (e.g., multiple failures) and how bad could the situation become when that is the case?

As a result, systems that meet the N–1 criterion may be far less reliable than needed, even though the N–1 criterion “guarantees” there is a way to back up every unit in the system. This is much more likely to happen in modern power systems than it was in traditional, regulated power systems, due to changes in utilization and design made in the period 1990–2000, and the more volatile operating environment of deregulation.

Utilization Ratio Sensitivity

The major culprit that led to problems that “N–1 could not see” was an increase in the typical equipment utilization ratio used throughout the industry. When it is raised, as it was during the 1980s and 1990s, an N–1 compliant system, which previously gave good service, may no longer give satisfactory reliability of service, even if it continues to meet the N–1 criterion. Similarly, power systems designed using the N–1 criterion and intended to operate at higher than traditional levels of equipment loading are very likely to have higher customer interruption rates than expected.

There is nothing inherently wrong with this trend to higher loading levels. In fact it is desirable because it seeks to make the utility financially efficient, which is potentially beneficial to both stockholders and customers. A power system that operates at 83% or 90% or even 100% utilization of equipment at peak can be designed to operate reliably, but something beyond N–1 methodology is required to assure that it will provide good customer service reliability.

N–1 is a necessary but not a sufficient criterion

Due to the success that N–1 methods had from the late 1950s through the late 1980s, during which they led to power systems that provided good reliability of service, most power system planners and most electric utilities treated N–1 as a necessary and sufficient criterion. Design a system to meet this criterion and it was, by definition, reliable. But at higher utilization factors, while the N–1 criterion is still a necessary criterion, it alone is not sufficient to assure good quality of service. The reasons for this change in reliability as a function of utilization ratio are far subtler than is typically recognized. This section will review the limitations that N–1 has when applied to high-utilization-ratio systems and explain what happens, and why. The authors want to make clear that they definitely are not labeling high utilization ratios as the cause of all the industry’s problems. Rather, it is the incompatibility between traditional ways of applying the N–1 criterion and the way these systems operate that created the problem.

Traditional utilization levels

In the 1960s through early 1980s, electric utilities typically loaded key equipment such as substation power transformers and downtown sub-transmission cables to only about two-thirds or a little more (typically about 66%) of their capacity, even during peak periods. The remaining capacity was kept as an “operating reserve” or “contingency margin.” Engineers and planners at distribution utilities designed their power systems using the N–1 and other criteria, while counting on this margin.

In such systems, when a transformer or line failed, it required one neighboring transformer of equal capacity, itself perhaps already loaded to 66%, to be available to pick up its load. Depending on how close the system was to peak demand at the time of the outage, this unit might have to accept as much as 133% of its normal load (its own 66% of rating and its neighbor’s 66%, too). Such overloading was tolerable for brief periods. Power equipment can be run above rating for brief periods without significant damage, if this is not done too often.

Image

Figure 9.2 Annual load duration curve for a utility system. Risk periods for high contingency loading of a traditional power system occurs only 10% of the time (shaded area). See text for details.

And in fact it was unlikely that the loading would be as high as 133%, because that would occur only if the outage happened during a peak load period. Only when loading was above 75% of peak demand would overloads occur (75% x 66% ≈ 50%, so at any load level below 75% of peak, one transformer can handle the load of two without going over 100% of its rating). In the system whose load duration curve is shown in Figure 9.2 (a typical U.S. utility system), such load levels occur only 10% of the year. As a result, it was (and still is) very likely that when equipment failures occur, they will occur at some time when loading is not near peak, and hence stress on the equipment picking up the outaged unit’s load is not unduly high.
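The arithmetic behind that 75% figure can be checked directly. This minimal sketch assumes the idealized two-transformer pair described above, with both units loaded to exactly two-thirds of rating at system peak:

```python
UTILIZATION = 2 / 3   # "about 66%" peak loading on each transformer

# During an outage, one transformer carries both loads:
#   combined = 2 * UTILIZATION * f,
# where f is the current system load as a fraction of peak. The
# surviving unit stays within 100% of its rating as long as
#   f <= 1 / (2 * UTILIZATION).
threshold = 1.0 / (2 * UTILIZATION)
print(f"No overload during a single contingency for loads up to "
      f"{threshold:.0%} of peak")
```

With utilization at exactly two-thirds, the threshold works out to 75% of peak load, matching the shaded 10%-of-the-year risk period in Figure 9.2.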

Higher utilization rates

Beginning in the 1980s, increasingly in the 1990s, and through to today, utilities pushed equipment utilization upwards, to where in some systems the average substation transformer was loaded to more than 100% of its nameplate during peak periods, by design. As was discussed in Chapter 8, in aging areas of such systems, utilization rates can average close to 100% under “normal” peak conditions. Table 9.3 shows industry averages obtained in 2010 and past years in a comprehensive survey of loading practices across the industry.1

Table 9.3 Design loading guidelines for normal and contingency loading

Image

The engineers in charge of these higher-utilization-rate power systems knew this, and in fact planned their systems to accommodate the higher utilization rates in several ways. To begin, these higher utilization rates required considerable augmentation of system configuration and switching. In a system loaded to an average of only 66% at peak, each transformer and line required one neighboring unit to “stand by” to pick up its outage. This meant, for example, that each two-transformer substation had to have buswork and switchgear (breakers, switches) configured so that if one of the units failed, the other could automatically pick up its load. Alternately, the load at the substation had to be partially transferred to neighboring substations (onto their transformers) through the feeder system, as will be discussed in Section 9.4. Usually, some combination of stronger substation and sub-transmission-level buswork and switching flexibility, and increased reliance on feeder-level transfers, was used.

1 From Electric Power Distribution Practices and Performance in North America – 1998 (“The Benchmark Report” by H. L. Willis and J. J. Burke, ABB Power T&D Company, Raleigh, NC). “Design loading” as used here refers to the peak load on a transformer, above which it is considered so highly loaded that it should be upgraded in capacity, or load transferred elsewhere. “Emergency rating” refers to the maximum load permitted on the substation during an equipment outage or excessive load contingency.

These plans for higher-utilization systems were created using N–1 methods, which took those higher utilization rates into account in their analysis. These N–1 applications assured that there was a way to back up every unit in the system, should it fail. The system plans developed as a result fully met the N–1 criterion everywhere, and N–2 criteria in critical places, even though the systems were operating at these higher utilization rates. Thus, these systems did have well-engineered contingency capability. The equipment was there and it would, and did, work as intended. Any problems lay elsewhere.

Looking at N–1’s Limitations

In order to understand where the problem with N–1 criterion application lies, it is important to first understand that a power system that meets the N–1 criterion can and routinely does operate with more than one unit of equipment out of service. Consider a power system that has 10,000 elements in it, each with an outage expectation of .16% – a value lower than one would ever expect on a real power system. One can expect that, on average, about 16 elements will be out at any one time. Yet the system will usually continue to operate without problems. The reason is that the N–1 criterion has guaranteed that there is a backup for every one of these failed units, as shown in Figure 9.3.

The system will fail to serve its entire load only if two of these multiple outages occur among neighboring equipment. For example, if a transformer and the transformer designated to back it up both fail at the same time, as shown in Figure 9.4, then, and only then, will a customer service interruption occur.

Contingency support neighborhood

“Neighboring equipment” as used in the paragraph above means the equipment in the vicinity of a unit that is part of the contingency support for its outage. This can be more accurately described as its contingency support neighborhood: the portion of the system that includes all equipment that is part of the planned contingency support for the unit’s outage. For a substation power transformer, this might include at least one neighboring transformer (usually at the same substation) which would provide capacity margin during its outage, along with portions of the high-side and low-side buswork and switchgear, which would operate in a non-standard configuration during its outage.

Figure 9.5 illustrates this concept, showing several “contingency support neighborhoods” as in the example system used in Figures 9.3 and 9.4. As stated in the introduction to this section, this discussion simplifies the real world somewhat. Here, every unit is the same size and contingency support is always grouped exclusively in sets of neighbors. Actual design is more complicated, but the complications have no substantial net impact on this discussion.

Image

Figure 9.3 One-line diagram for a small part of a large power system. Four equipment outages are shown, indicated by an X, two transformers, one high-side bus, and one sub-transmission line. Each outaged unit has a neighboring unit (filled in) that has picked up its load: the N–1 criteria assured that this was the case. The system continues to operate smoothly because no two of the outages occur close enough to one another.

Image

Figure 9.4 One set of dual failures in the same contingency support neighborhood, as illustrated here with the failure of two neighboring transformers (each was the designated backup for the other), will lead to interruption of service to consumers. Here, the shaded circle indicates the rough area of the system that would be without power.

Image

Figure 9.5 Every unit in the system has a “contingency support neighborhood” that includes all the equipment that provides contingency support for the unit. Shown here are two transformers (filled in) along with their neighborhoods (circled). Equipment in a neighborhood provides contingency margin (capacity) as well as connectivity flexibility (switching, flow capability) during the outage of that unit.

Problems in this N–1 system that lead to customer outages occur only when two or more equipment units fail simultaneously within one contingency support neighborhood. Such a “double failure” does not have to be among just the unit and its like-type support unit, i.e., the failure of both transformers at a two-transformer substation. Failure of one transformer together with a line, breaker, or bus needed for the contingency re-configuration of the system can also lead to a failure to maintain service. Still, such occurrences are very rare. While there are perhaps 10 to 15 units out of service in a system of 10,000 elements, it is most likely that they are scattered singly throughout the system. The likelihood that two are concentrated in any one neighborhood is remote.

Traditional power systems had “small” contingency support neighborhoods

In traditional power delivery systems, those whose utilization ratio for power transformers and sub-transmission lines was nominally targeted to be about 66% of equipment rating during normal (design) peak conditions, the contingency support neighborhood for any unit of equipment was small. As discussed earlier, every unit in the system needed one backup unit of like size. A 32 MVA transformer would be loaded to about 21 MVA (66%) at peak. If it failed, its partner at the substation, also already serving 21 MVA, would pick up its load too, briefly running at about 133% of rating (42 MVA) so that all demand was served. Therefore, the contingency support neighborhood for both units was a small locality that included the other transformer and the various switchgear and buswork needed to connect each to the other’s load during a contingency.

Systems with high utilization rates have larger contingency support neighborhoods

Suppose that the area of the power system being considered has 88.5% loadings on all transformers, instead of 66%. In that case, when any transformer fails, and if the utility is to keep within a 133% overload limit, a failed unit’s load has to be spread over two neighboring transformers, not just one. The size of the “contingency support neighborhood” for each unit in the system has increased by fifty percent. Previously it included one neighboring transformer; now it includes two.

More importantly, the probability that an outage will occur among the designated support units for each transformer is double what it was in the system loaded to only 66%. Previously, whenever a transformer failed, there was only one unit whose failure stood in the way of good service. Now, if either of its two designated support units fails, an interruption of service to the utility’s customers will occur. Two possible failures, each as likely to occur as the one failure that could have taken out the 66% loading system.

Thus, in a system where utilization rate has been pushed upward, every contingency support neighborhood is proportionally larger, and thus a greater target for trouble to occur: There is more exposure to “simultaneous outages.” In a system loaded to 66%, there is only one major target. Failure to serve the load occurs only if a unit of equipment and one specific neighbor designated as its contingency support are both out of service. In a system or area of a system loaded to 88.5%, it occurs if a unit and either one of two neighbors is out. In an area of a system loaded to over 100%, (as some aging areas are) it occurs whenever the unit and any one of three designated neighbors is out (Figure 9.6).
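The growth of the support neighborhood with utilization follows from a simple capacity balance. The sketch below assumes, as the examples above do, equal-size units and a 133% (4/3 of rating) emergency limit; the function name is illustrative, not from any standard tool:

```python
from math import ceil

def support_units_needed(utilization: float,
                         emergency_limit: float = 4 / 3) -> int:
    """Minimum number of equal-size neighbors needed so a failed
    unit's load can be shared without any neighbor exceeding the
    emergency limit. Each of k neighbors has (limit - utilization)
    of spare emergency capacity, so k must satisfy
        k * (emergency_limit - utilization) >= utilization.
    A tiny tolerance guards against float round-off at the boundary."""
    return ceil(utilization / (emergency_limit - utilization) - 1e-9)

for u in (0.66, 0.885, 1.00):
    print(f"{u:.1%} utilization -> {support_units_needed(u)} support unit(s)")
```

This reproduces the progression in the text: one support unit at 66% loading, two at 88.5%, and three at 100% (Figure 9.6).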

Basically, the whole problem boils down to this: the contingency support neighborhoods are larger. But they are still “N–1” neighborhoods: each can tolerate only one equipment outage and still fully meet its required ability to serve demand. A second outage will very likely lead to interruption of service to some customers. In these larger neighborhoods, there are more targets for that second outage to hit. “Trouble” that leads to an inability to serve customer demand is more likely to occur. The analysis below estimates the relative likelihood that this occurs in example systems loaded to different levels.

Image

Figure 9.6 At 100% loading, each transformer from Figure 9.5 needs three nearby units to cover its load, expanding the “contingency support neighborhood” involved. See text for details.

A system as discussed earlier with “10,000 major elements” might contain 1,200 substation transformers. Assuming that the outage rate for them is .25% (i.e., any given unit is out of service with probability .0025 at any moment), this means:

1. In a 66% utilization system, there are 600 two-transformer contingency support neighborhoods. Failure to serve the load occurs only if both transformers of this pair fail. That is:

• Failure probability = .0025² = .00000625

• Hours per year = .00000625 x 8760 hours/year x 600 pairs = 32.9 hours/year.

2. In an 88.5% utilization system, with three-transformer contingency support neighborhoods, failure to serve the load occurs if any two or all three transformers of the triplet fail. Over the whole system, annually, that is:

• Failure probability = .0025³ + 3 x (.0025² x (1 - .0025)) = .0000187187

• Hours per year = .0000187187 x 8760 hours x 400 triplets = 65.6 hours/year.

3. In a 100% utilization system, with four-transformer contingency support neighborhoods, failure to serve the load occurs if any two, any three, or all four transformers of the quadruplet fail. Over the whole system, annually, that is:

• Failure probability = .0025⁴ + 4 x (.0025³ x (1 - .0025)) + 6 x (.0025² x (1 - .0025)²) = .000037375

• Hours per year = .000037375 x 8760 hours x 300 quadruplets = 98.2 hours/year.
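The arithmetic above can be verified with a short computation. This is an illustrative sketch, not a tool from the text: it treats each contingency support neighborhood as a binomial group in which service fails whenever two or more units are out simultaneously, using the .25% outage rate and the pair/triplet/quadruplet counts of the example system.

```python
from math import comb

OUTAGE_RATE = 0.0025      # probability a transformer is out at any moment (.25%)
HOURS_PER_YEAR = 8760

def neighborhood_failure_prob(k, p=OUTAGE_RATE):
    """Probability that 2 or more of the k transformers in one
    contingency support neighborhood are out simultaneously."""
    return sum(comb(k, j) * p**j * (1 - p)**(k - j) for j in range(2, k + 1))

def annual_failure_hours(k, n_neighborhoods, p=OUTAGE_RATE):
    """Expected system-wide hours per year in which some neighborhood
    cannot fully serve its load (two or more of its units out)."""
    return neighborhood_failure_prob(k, p) * HOURS_PER_YEAR * n_neighborhoods

# The text's 1,200 transformers, grouped three different ways:
for label, k, n in [("66% (600 pairs)", 2, 600),
                    ("88.5% (400 triplets)", 3, 400),
                    ("100% (300 quadruplets)", 4, 300)]:
    print(f"{label:24s} {annual_failure_hours(k, n):5.1f} hours/year")
```

Running this reproduces the roughly 33, 66, and 98 hours per year of the three cases, confirming the two-to-three-fold increase in exposure.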

By comparison to a traditionally loaded system, a power system at a higher utilization rate is two to three times as likely to experience a situation where a pattern of equipment outages falls outside of the N–1 criterion – for example, an “N–2” situation that might lead to an interruption of load. Systems run at higher equipment utilization rates are more likely to experience events that could put them in jeopardy of being unable to serve all customer loads. N–1 analysis does not measure or evaluate this in any manner.

The N–1 criterion assures planners and engineers that a feasible way to handle every equipment outage has been provided. It does nothing to address how often situations outside of that context – i.e., those that will lead to unacceptable service quality – might occur.

High Utilization Coupled with Aging System Equipment Leads to Greatly Increased Service Problems

Chapter 7 discussed the effect that aging has on equipment failure rates. In aging areas of a power system, the failure rate for equipment is three to five times that of normal areas of the system. Coupled with the high utilization rates common in these aging areas, the result is a ten-to-one or slightly worse increase in the incidence of customer service interruptions due to equipment outages.2

2 Here the authors will bring in a real-world factor. Chapter 8 discussed why utilization rates are usually above system average in aging infrastructure areas of the system. A system where the average has gone from 66% to 88.5% may have seen only a modest increase in the utilization rate for equipment in newer areas of the system, while increases in aging areas offset those below-average statistics. Thus, the aging part of the system has much higher utilization than other parts. Its customer service problems stand out both because of the higher failure rate in this area, and the higher likelihood that outages lead to customer interruptions. As a result, aging areas often have a customer interruption rate up to twelve times that of newer areas of the system.

Increased High-Stress Levels and Periods

The situation is slightly worse than the perspective developed above suggests when one looks at the stress put on the system’s equipment, and the portion of the year that the system is likely to see high- and medium-stress events due to equipment outages.

In a 66% utilization system, every transformer is paired with one other: whenever the unit it is backing up fails, it must support the load of two transformers. Given the transformer outage rate of .25%, this means each transformer can expect to have its partner out of service, and thus be in this “contingency support mode,” about 22 hours (.0025 x 8760 hours) per year. Given that the system is at peak demand about 10% of the time, this means that a transformer can expect about two hours of severe loading time per year.

When the utilization ratio is 88.5%, each transformer is partnered with two other units: failure of either one will put it in contingency support mode. Thus, neglecting the slight amount of time when both its partners are out of service (and thus customer service is interrupted), the amount of time it can expect to be in this mode is twice what it was in the 66% system: about 44 hours per year, 4.4 of them high stress. Similarly, at 100% utilization, each transformer will see about 66 contingency support hours and 6.6 high-stress hours per year. Stress put on system equipment is much higher in high utilization systems.
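The support-hour estimates above follow directly from the outage rate and partner count. A minimal sketch, under the text’s assumptions (a .25% outage rate, peak conditions about 10% of the year, and ignoring overlapping partner outages):

```python
OUTAGE_RATE = 0.0025     # probability a given partner is out at any moment
HOURS_PER_YEAR = 8760
PEAK_FRACTION = 0.10     # system is near peak demand ~10% of the time

def support_hours(n_partners):
    """Expected hours/year a transformer spends backing up a failed partner,
    neglecting the small chance that two partners are out at once."""
    return n_partners * OUTAGE_RATE * HOURS_PER_YEAR

for utilization, partners in [("66%", 1), ("88.5%", 2), ("100%", 3)]:
    h = support_hours(partners)
    print(f"{utilization}: {h:.0f} h/yr in support mode, "
          f"{h * PEAK_FRACTION:.1f} h/yr of that at high stress")
```

This reproduces the 22/2.2, 44/4.4, and 66/6.6 hour figures cited in the text.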

“Low Standards” Operating Hours Are Increased

It is also worth considering that standards on loading, voltage regulation, and other operating factors are relaxed during contingency situations (see Section 9.2). The amount of time that the system spends in these “medium stress” periods is greater in high utilization systems, which means that the distribution system spends more time in “sub-standard” situations – twice as much if the utilization rate is 88.5%, three times as much if utilization is 100%.

The Result: Lack of Dependability as a Sole Planning Tool

The limitations discussed above can be partly accommodated by modifications to the traditional approaches and changes in N–1 criteria application. But the overall result is that resource requirements (both human and computer) rise dramatically, and the methods become both unwieldy and more sensitive to assumptions and other limitations not covered here. The bottom line is that N–1 and N–2 contingency-enumeration methods were, and still are, sound engineering methods, but ones with a high sensitivity to planning and operating conditions that are more common today than in the mid-1960s when these methods came into prominence as design tools. These limitations reduce the dependability of N–1 analysis, and the use of N–1 as a reliability criterion, as a definition of design sufficiency in power system reliability engineering.

Image

Figure 9.7 Contingency-based analysis (solid lines) determined that a particular power system could sustain the required peak load (10,350 MW) while meeting N–1 everywhere and N–2 criteria in selected places. This means it met roughly a 30-minute SAIDI capability at the low-side bus level of the power delivery system.

Figure 9.7 illustrates this with an example taken from a large utility system in the Midwestern U.S. Traditional N–1 analysis determined that a power system operating at an average 83% utilization factor could serve a certain peak load level while meeting N–1 criteria (defined as sufficient reliability). Basically, the “rectangular” profile on the diagram given by N–1 analysis indicated that the system passed N–1 criteria everywhere, and N–2 criteria at a set of selected critical points, with the demand set to projected design levels for system peak load.3

The actual capability of the system shows a rounded corner in the load versus reliability-of-service profile. It is capable of delivering only 9,850 MW with the required 30-minute SAIDI. If expected to serve a peak load of 10,350 MW, it has an expected SAIDI of four times the target: 120 minutes per year.

By contrast, an analysis of the system’s capability using a reliability computation method that does not start out with an assumed “everything in service” normalcy base, that accommodates analysis of partial failures of tap changers, and that accommodates some (but not all) uncertainties in loads and operating conditions, determined the profile shown by the dotted line. At high loading levels (those near peak load), the system is incapable of providing the reliability required – the rectangular profile is actually rounded off. The system can serve the peak load, but with much less reliability than expected.

3 N–1 analysis does not determine an actual estimated reliability value, but in this case subsequent analysis showed that a valid N–1 criterion was equivalent to about 30-minute SAIDI, and that value is used here as the target reliability figure.

9.4 OTHER PLANNING RELATED CONCERNS

Partial Failures

Traditional N–1 contingency planning methods use “zero-one” enumeration of failures. In the contingency case analysis method (Figure 9.1), every unit of equipment and every line in the system is modeled as completely in service. In each contingency case, a unit is modeled as completely out of service. But modern power systems often encounter partial failures:

• A transformer may be in service but its tap changer has been diagnosed as problematic and is locked in one position, limiting system operation.

• An oil-filled UG cable’s pumps are disabled and the cable has been de-rated, but is still in service.

• Concerns about a ground that failed tests have dictated opening a bus tiebreaker to balance fault duties.

At the traditional loading levels that existed when contingency analysis was developed, such partial equipment failures seldom led to serious operating limitations and were safely ignored, while the contingency analysis still remained valid. In systems operating at higher loading levels, partial failures cause problems under far less extreme situations, and often cannot be ignored in reliability planning. For example, a power transformer loaded to 85% at peak, whose tap changer is locked into one position, is subject to voltage regulation problems that can easily reduce its ability to handle load by close to 15% in some situations.4 The contingency margin (100% - 85%) that the typical N–1 method assumes is there may, in fact, be mostly nonexistent.

4 Loss of its load tap changer does not cause any lowering of the transformer’s capability to carry load. However, flow through it is now subject to variation in voltage drop – higher flows result in higher voltage drops. It may be unable to do its job, within the electrical confines of its interconnection to other parts of the network, due to this variable voltage drop, which may limit it to partial loading only. Ignoring this and accepting the higher voltage drop that goes with full capability during a contingency would lead to serious problems of another type (unacceptably low service voltages to customers, or higher demands on other transformers).

Connectivity Sensitivity

As mentioned earlier, success in handling a contingency depends on the power system in the “neighborhood of contingency support” around the failed unit being connected in such a way that the neighboring units can provide the support while still meeting all electrical standards and satisfying all operational requirements. At higher equipment utilization rates, this neighborhood is generally larger everywhere within the power system.

This greater size is not, per se, the cause of problems for traditional N–1 analysis. Standard contingency-based analysis and the engineering methods that accompany it can fully accommodate the detailed electrical and capacity analysis of any and all contingency support neighborhoods, regardless of their sizes.

But each of these wider contingency neighborhoods involves more equipment and interconnections. Thus, accurate modeling is sensitive to more assumptions about the exact amount and location of loads in the surrounding areas of the system, and the way the system operator has chosen to run the system at that moment, and myriad other details about operating status. There are more assumptions involved in accurately depicting the status of each of the N components’ contingency support neighborhood.

And the analysis of each of these N contingency support neighborhoods is more sensitive to these assumptions. The range of uncertainty in many of these factors about future area loads and operating conditions is ± 5% to ±10%. Such ranges of uncertainty are not critical in the N–1 contingency analysis of a system operating at 66% utilization. The available contingency margin (33%) is considerably larger than the range. But when operating at 90% utilization, the uncertainty ranges of various factors involved often equal the assumed available contingency support capacity, and there is a larger neighborhood, within which it is more likely something will be different than assumed in the N–1 analysis.

Aging High Utilization Systems Are Sensitive to Forecasting Errors

A projection of future need is the first step in power delivery planning. The forecast of future peak load defines requirements for the capability of the system, starts the evaluation of alternatives as to feasibility, value, and cost, and defines the constraints for selecting the alternative which best meets requirements. Poor load forecasting has been a contributor to a significant number of aging infrastructure system problems around the nation – in the authors’ experience, roughly half. Two areas of forecasting deserve special attention.

Weather normalization

Peak demand levels depend very much on the peak seasonal weather. In summer, the hotter the weather, the higher the demand. In winter, colder weather increases demand levels. Therefore, a projection of electric demand can, and should, include an assessment of the impact of temperature on demand. For example, Figure 9.8 shows the peak demand vs. peak temperature relationship for a small municipal electric system. The peak demand is:

• Summer Peak Load (MW) = 495 MW + (T − 57) x 12.5 MW/°F (9.1)

• Winter Peak Load (MW) = 495 MW + (57 − T) x 8.2 MW/°F (9.2)
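Equations 9.1 and 9.2 can be expressed as a single “jackknife” function of temperature. The function name and parameter defaults below are illustrative; the coefficients are those given in the text for this small municipal system.

```python
def peak_load_mw(temp_f, base_mw=495.0, balance_f=57.0,
                 summer_slope=12.5, winter_slope=8.2):
    """'Jackknife' weather model of Eqs. 9.1-9.2: load rises 12.5 MW per
    degree F above 57 F (cooling) and 8.2 MW per degree F below 57 F
    (heating), hinged at the 57 F balance point."""
    if temp_f >= balance_f:
        return base_mw + (temp_f - balance_f) * summer_slope
    return base_mw + (balance_f - temp_f) * winter_slope

print(peak_load_mw(97))   # hot summer day: 495 + 40 x 12.5 = 995 MW
print(peak_load_mw(17))   # cold winter day: 495 + 40 x 8.2 = 823 MW
```

A planner would fit the balance point and slopes by regression on weather-matched daily peaks, as in Figure 9.8, before using such a function for normalization.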

Recommended practice for electric load forecasting is to adjust historical weather data to a standard set of weather conditions to which the system design is targeted, then to project future demands under this same constant weather criterion for all planning purposes. In this way all weather data and forecasts are based on comparable situations: increases or decreases due to “real” causes are distinguishable from those due to variations in weather. Similarly, all planning should target a specific “design weather standard.” The forecast load, which defines requirements for the system plan, should be adjusted to this weather criterion. Essentially, the system is being designed to serve peak demand for weather this extreme, but no worse. Weather conditions (and peak loads) that exceed those conditions are treated as contingencies, just like other contingencies.

Image

Figure 9.8 Peak daily loads and temperatures are related with a “jackknife” function (solid line). Shown is peak daily load versus peak daily temperature for all Tuesdays in a year (several Tuesdays thought non-representative because they were holidays or similar special events were left out of the analysis). Only Tuesdays are used in order to reduce the effects that different weekday activity patterns may have on load variation. See Spatial Electric Load Forecasting- Second Edition, Chapters 5 and 6, (Willis, 2002).

What temperature should planners select for this standard weather condition? Temperatures vary from year to year. Setting the design conditions at the mean, or most expected, temperature means that the forecast loads will be exceeded, and the system’s capability exceeded by the demand, roughly every other year. On the other hand, it isn’t cost-effective to install equipment to handle the worst possible weather conditions: “the heat storm of the century,” etc. Generally, the recommended practice is to define a set of “design weather conditions” extreme enough to be rare but not so extreme as to be totally unexpected. Situations and needs vary, but a reasonable criterion is “design weather conditions are defined so that they will be exceeded no more than once every ten years.” See Willis 2002, Chapters 5 and 6, for a detailed discussion of both the techniques used to determine such adjustments and for recommendations on what constitutes “extreme-enough weather.”

Impact of Mistakes in Weather Normalization on Reliability

The weather normalization method used in planning and the design weather targets set for the system are among the easiest matters for “rationalization” when efforts are being made to cut costs. For example, if planners re-set their design weather conditions from a criterion of once in ten years to once in five, or lower the forecast target in some other manner, the budget requirements that flow out of their system planning process will fall. As an example, the utility whose load is diagrammed in Figure 9.8 has an annual growth rate of nearly 1.0%. Summer peak demand sensitivity is 1.25% per degree F. Reducing the design weather target by one degree Fahrenheit, about the equivalent of going from once-in-ten to once-in-five years, defers roughly 1.25 years’ worth of forecast load growth: the growth previously forecast over four years is now forecast to take five. Assuming for the sake of this analysis that budget directly corresponds to the amount of growth, that results in an annual budget reduction of about 25% over the next four years. For this reason, a number of utilities have succumbed to the temptation to change weather normalization too much.
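The rationalization arithmetic above can be checked directly. This sketch only restates the growth and weather-sensitivity figures given in the text; the variable names are illustrative.

```python
GROWTH_PER_YEAR = 0.010         # ~1.0% annual peak load growth
SENSITIVITY_PER_DEG_F = 0.0125  # peak demand changes 1.25% per degree F

# Lowering the design weather target by 1 degree F lowers the forecast
# peak by 1.25%. At 1.0%/year growth, that makes the forecast appear to
# "defer" about 1.25 years of load growth:
years_deferred = SENSITIVITY_PER_DEG_F / GROWTH_PER_YEAR
print(f"Apparent growth deferral: {years_deferred:.2f} years")

# Spread over roughly a five-year planning horizon, 1.25 deferred years
# is the ~25% budget reduction cited in the text:
print(f"Approximate budget reduction: {years_deferred / 5:.0%}")
```

The point is not that the arithmetic is subtle, but that a one-degree change in an obscure planning assumption translates directly into a very visible budget cut, which is why the temptation exists.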

Weather normalization that targets a “too average” weather condition puts the power system in high-stress situations too often.

A low load forecast results in several detrimental impacts. First, it generally leads to a situation where the system is serving more load than intended. Usually, this does not create severe problems when all equipment is functioning, although it does age equipment somewhat faster than expected (accelerated loss of life). One can view operation at loads above design conditions as a contingency. Poor load forecasts used in the planning or operation of a power delivery system effectively “use up” its contingency capability (see Willis et al, 1985). Poor normalization of weather data for forecasting, or poor spatial forecasting (poor correlation of loads with areas and equipment), results in deterioration of a system’s contingency-withstand capability. This greatly exacerbates the reliability-of-service problems discussed up to this point in this book.

Equally important, and far less frequently recognized as a key impact of poor forecasting, a system serving a load above that for which it was designed will operate for many more hours of the year in a state where service quality is in jeopardy if complete or partial failures occur, or if “things don’t go exactly right.” Figure 9.9 compares the annual load duration curves for an “average year” as used in design of the system and 1999 (a one-in-ten year), for a large investor-owned utility in the central United States. The difference in peak demand between an average year and an extreme year is 4.4%. However, as shown, not only peak load changes, but annual load factor as well. The period of time when the system is above 75% of peak (defined as “high stress” earlier in this chapter) increases by 28%. As a result, SAIDI increases significantly.

Image

Figure 9.9 When weather is above average, it is usually somewhat above average for an entire season (summer, winter) or a good portion of it, not just for a day or a week. Shown above are forecast versus actual annual load duration curves for a utility in the central United States, for 1998. As a result of the higher-temperature summer weather, peak demand was 3.3% higher than the mean-weather peak projected for the system. But in addition, and perhaps more serious, the number of hours that the system was in a “high stress” loading situation – where contingency loading and switching are a concern – was 30% higher than for an average year. This, rather than the higher one-time peak load, was the major impact weather had that year on reliability and operation.

Image

Figure 9.10 Maps of peak annual demand for electricity in a major American city, showing the expected growth in demand during a twenty-year period. Growth in some parts of the urban core increases considerably, but in addition, electric load spreads into currently vacant areas as new suburbs are built to accommodate an expanded population. Forecasts like this, done for periods from two and four years out to twenty years ahead, set the requirements for both short- and long-range power delivery system planning.

Table 9.4 Percent of Utilities in North America Using Some Type of Formally Recognized Spatial or Small Area Load Forecasting Method

Image

A number of methods are in use for spatial forecasting, from simple trending methods (extrapolation of weather-adjusted substation and feeder peak load histories) to quite comprehensive simulations involving analysis and projection of changes in zoning, economic development, land-use, and customer end usage of electricity. Results vary greatly depending on method and resources used, but engineering methods exist to both determine the most appropriate methods and forecast characteristics needed for any utility application, and to evaluate the efficacy of a forecast. The most important factor is that a utility employs some legitimate means of studying and projecting load on a detailed enough location-basis to support its planning needs.

Traditionally, good spatial forecasting required both considerable labor and above-average engineering skills and was considered a “high-expertise” function within state-of-the-art distribution planning methods. The best traditional methods worked very well but had rather high labor and skill costs (see Willis and Northcote-Green, 1983 and Engel et al, 1996). Many utilities cut back on both the quality of the technique used and the effort devoted to data collection and forecasting study when they downsized professional staffs during the 1990s. Table 9.4 illustrates the reduction in the number of utilities using the best class of (simulation-based) spatial forecast methodology, but does not reflect reductions in the data or time put into the forecasting effort. As a result, at a time when spatial forecasting needs are at an all-time high (see below), the quality of local-area forecasting done at many utilities has deteriorated sharply.5

In the very late 1990s, new forecasting methods were developed that reduce labor and skill requirements considerably, though these have limited availability and are not yet widely used (Brown et al., 1999). Nevertheless, methods that can provide the information needed within reasonable labor and skill limits are available to the industry.

Impact of spatial forecast errors on reliability

In some manner, every T&D plan includes a spatial forecast: the total load growth is allocated in some manner among the various parts of the system. Classically, the viewpoint on the forecast sensitivity of T&D systems has been that if the spatial element of the forecast is done poorly, the result is a very poor use of capital. A projection that puts future load growth in the wrong locations identifies incorrectly those portions of the system that need to be reinforced. Capital additions are made less effectively than possible.

But in addition, a large effect of poor spatial forecasting is a loss of contingency capability. Normally, a power system designed upon a mildly incorrect spatial forecast (i.e., one pattern of “where load is”) will have less than the planned contingency-withstand capability (i.e., will provide less reliability of service than expected). It will operate well enough during times when “everything is going well” but suffer from problems that are both more serious and take longer to fix than expected during contingencies. Essentially the poor forecast “uses up” the contingency capability built into the system (Willis and Tram, 1984).

5 Load forecasting problems related to local area forecasting were identified as a major contributing problem in six events (Table 9.2) investigated by DOE’s P.O.S.T. report.

Systems with high utilization ratios are more sensitive to this degradation of contingency planning due to peak load and spatial forecasting errors. Although subtle, the effect is best described this way: The contingency neighborhoods described earlier increase in size as a result of the higher utilization ratios being used for major equipment (Figures 9.5 and 9.6). While it may seem that this makes the planning less in need of detailed spatial forecasts (there are fewer “units” – contingency support neighborhoods – and they are on average far larger), the opposite is true. Contingency capability is very sensitive to the allocation of load within each support neighborhood. For example, the analysis given earlier assumed the load in each neighborhood was evenly split among units in that group: if it is even slightly unbalanced, the system’s contingency capability is greatly degraded. The forecast of where load is located within each contingency support neighborhood is critical to planning of contingency schemes that will prove successful when needed. Again, as with poor weather normalization, a poor spatial forecast “uses up” the contingency capability of the system, something in very short supply in aging areas of the system. SAIDI increases.
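The sensitivity to load allocation described above can be illustrated with a small sketch. The neighborhood loadings below are hypothetical, chosen so both cases have the same 88.5% average; the sketch assumes a failed unit’s load is split evenly over its survivors and applies the 133% emergency limit used earlier in the chapter.

```python
EMERGENCY_LIMIT = 1.33   # 133% of rating allowed during a contingency

def worst_post_outage_loading(loadings):
    """For each single-unit outage in a support neighborhood, split the
    failed unit's load evenly over the survivors and report the worst
    resulting per-unit loading across all outage cases."""
    worst = 0.0
    for i, failed in enumerate(loadings):
        survivors = [x for j, x in enumerate(loadings) if j != i]
        share = failed / len(survivors)
        worst = max(worst, max(s + share for s in survivors))
    return worst

balanced   = [0.885, 0.885, 0.885]   # even allocation of the same total load
unbalanced = [0.95, 0.885, 0.82]     # same 88.5% average, skewed by location

print(worst_post_outage_loading(balanced))    # 1.3275 -> just inside the limit
print(worst_post_outage_loading(unbalanced))  # 1.3925 -> exceeds 133%
```

Even a modest misallocation of where load actually sits pushes a neighborhood that “passed” on average past its emergency limit, which is exactly how a poor spatial forecast silently consumes contingency capability.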

Interconnection Complexity

In the slightly simplified power systems used as examples earlier in this chapter, the small contingency support neighborhoods needed at 66% loading required interconnection between only two neighboring units to assure success without overload during N–1 conditions. But at higher utilization ratios, contingency support neighborhoods grew in size and number of mutually supporting components. Interconnection of more equipment, into a scheme where units could support one another during contingencies, was necessary for success of the contingency plans.

In aging areas, or where for other reasons planners and engineers have accepted near-100% utilization of equipment, there is a requirement for even stronger and more widespread interconnection. Everywhere in a high-utilization system, each of its N units must have a strong enough electrical tie to a wider neighborhood of equipment around it to support its outage.

At higher equipment utilization rates, the importance of configuration and operating flexibility in the design of the system becomes more critical to reliability.

Traditional contingency-based study methods can deal with the analysis of these issues relating to the wider neighborhood of support around every one of the N units in the system and its great complexity. They can determine if the required number of neighbors are there, if they have enough margin of capacity to accept the load without overloads, and if the system’s electrical configuration makes it possible for them to pick up the demand that was being served by the failed unit. Basically, modern N–1 methods can and will determine if the failure of each unit in the system is “covered” by some plausible means to handle its failure and still provide service.

Again, N–1 methods neither work with probabilities nor determine the system’s sensitivity to multiple failures, so they cannot determine the failure sensitivity or failure likelihood of these complicated interconnected schemes. At higher utilization ratios, the complexity of the contingency backup cases has increased. Determining whether a particular contingency backup plan is really feasible, whether it is really connected with sufficient strength to survive the likely failure states, or whether it depends on too much equipment operating in exactly the right way, is something N–1 methods do not fully address. Configuration needs to be studied on a probabilistic basis: is this entire scheme of rollover and re-switching likely to really solve the problem?

The problem is not high utilization rates

A point the authors want to stress again is that high equipment utilization is not the cause of poor reliability in aging infrastructure areas of a power system. It is possible to design and operate power systems that have very high (e.g., 100%+) utilization factors and provide very high levels of service reliability. This is accomplished by designing a system with the configuration to spread contingency burden among multiple equipment units, with the flexibility to react to multiple contingencies, and that can apply capacity well in all the situations most likely to develop. Such designs require detailed analysis of capacity, configuration, configuration flexibility, failure probabilities, and the interaction of all these variables, and careful arrangement of circuits to build in sufficient reliability.

Equipment utilization is only one factor in the design of a power system. In some cases, the best way to “buy” reliability along with satisfactory electrical (power flow) performance is to use capacity – to build a system with low utilization ratios. But in other cases, particularly where the cost of capacity is very high (as it is in many aging infrastructure areas), good performance comes from using the equipment to its utmost. Achieving high reliability of service even in these situations where equipment is highly stressed and contingency margins are small or non-existent requires using configuration and interconnection flexibility in an artful manner.

9.5 SUMMARY AND CONCLUSION

Traditional Tools Have Shortcomings With Respect to Modern Needs

Many of the problems faced by a utility owner/operator of an aging power T&D infrastructure are compounded by the fact that the tools being used by its planners and engineers cannot directly address one vital aspect of the required performance: reliability. Traditionally, reliability was addressed in power system design by engineering contingency backup capability into the system: every major unit of equipment could be completely backed up should it fail. This was termed the “N–1” criterion and methods that engineer a system based on this criterion were often referred to as “N–1” or “N–X” methods.

Such methods addressed a key and necessary quality for reliability: there must be a feasible way to do without every unit in the system, some way of switching it around during its outage and/or picking up the burden it was serving during its outage. Realistically, no power system can be expected to provide reliable service to its energy consumers unless it possesses this necessary qualification of having complete N–1 contingency capability.

But through many years of use during periods when equipment loading levels were lower than is typical in the 1990s and 2000s, the power industry came to view the N–1 criterion as necessary and sufficient. For power systems with roughly a 33% redundancy (contingency margin), the criterion is effectively both necessary and sufficient to assure reasonable levels of reliability. However, when a power system is pushed to higher levels of equipment utilization efficiency, N–1 is still a necessary criterion, but it is no longer sufficient to assure satisfactory levels of reliability.

Basically, the shortcoming of the N–1 criterion, as well as of engineering methods based upon it, is that they do not “see” (respond to, identify problems with, or measure) anything with respect to reliability of service except the “yes/no” satisfaction of this one criterion. Therefore, they cannot alert engineers to a flaw in the design of a power system with respect to its reliability, due either to the likelihood that events may get worse than the system can stand (“Yes, you have a feasible backup plan, but for this system it is very likely that while this unit of equipment is outaged, something else will go wrong, too.”), or because some parts of the system are key elements for the reliability of the system around them, to the extent that they are “important enough” to need more reliability built into their design (“This unit is more important than you realized: a second backup is really needed for the reliability level you want to achieve.”).

Furthermore, such methods cannot provide quantitative guidance to planners and engineers on what to fix and how to fix it in an effective yet economical manner.

A Deeper Problem, Too

Finally, there is a more subtle but perhaps more fundamental problem associated with the use of N–1, one that is difficult to fully demonstrate in a textbook, but nonetheless real. N–1 engineering procedures solve reliability problems by using capacity: It is a criterion and method that uses contingency margin as the means to achieve reliability. This is partly the nature of the criterion and the tool, but also the fault of the paradigm – the way of thinking – built for the engineers around the N–1 tools.

By contrast, experience with planning and engineering tools that directly address reliability will quickly show any engineer that very often configuration is the key to reliability. In fact, capacity and configuration are both key factors in achieving reliability, and artful engineering of power system reliability requires combining both in a synergistic manner.

Traditional N–1 analysis tools (Figure 9.1) can analyze the functional and electrical characteristics of configuration. However, they cannot do so on a probabilistic basis, analyzing the likelihood that the interconnection will be there and determining how that interacts with the probability that the capacity will have failed. Table 9.5 summarizes the limitations of N–1 analysis that need augmentation in order to meet modern power system reliability planning needs. Such methods will be discussed in Chapter 13.

Table 9.5 Seven Desired Traits of the Ideal Power System Reliability Planning Method (that N-1 Methods Do Not Have)


Explicit Reliability-Based Engineering Methods

What is needed to assure the sufficient requirement in power system reliability engineering, along with the necessary, is a method that addresses capacity and configuration with an explicit, quantitative evaluation of the reliability of service they provide. Ideally, this would be a procedure that computes the reliability of service delivered to every element of the system, in much the same manner that a load flow computes the current flow and voltage delivered to every point in the system. It should be a procedure that planning engineers can then use to explore the reliability performance of different candidate designs and that, in the same manner that load flows identify equipment that is heavily loaded (i.e., key to electrical performance), would identify equipment that is heavily loaded from a reliability standpoint (i.e., key to reliable performance).
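As a sketch of what a "load flow for reliability" might look like in the simplest possible case, the fragment below accumulates failure rate and annual outage duration along a radial path, reporting a per-load-point figure at each step. The component names, failure rates, and repair times are hypothetical assumptions; a real method would of course handle networked configurations and switching.

```python
# Minimal sketch: per-load-point reliability along a radial path, assuming
# simple series logic (any upstream component failure interrupts service
# downstream).  Component data (failure rate lam in failures/yr, repair
# time r in hours) are illustrative assumptions, not values from the text.
components = [
    ("breaker",     0.020, 4.0),
    ("main_line",   0.100, 3.0),
    ("lateral",     0.250, 2.5),
    ("transformer", 0.015, 10.0),
]

lam_total = 0.0   # cumulative interruption rate seen at this point (per yr)
u_total = 0.0     # cumulative expected outage duration (hours per yr)
for name, lam, r in components:
    lam_total += lam        # series elements: failure rates add
    u_total += lam * r      # each element contributes lam * repair time
    print(f"after {name}: lambda = {lam_total:.3f}/yr, U = {u_total:.3f} h/yr")
```

Just as a load flow flags the most heavily loaded branch, the largest single contribution to the running totals flags the component most "heavily loaded" from a reliability standpoint.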

Modern planning needs are best met using power system design and planning techniques that directly address reliability performance: Explicit, rather than implicit, methods. Such techniques have been used for decades in the design of systems where reliability is of paramount importance – nuclear power plants for commercial and shipboard use, spacecraft design, and throughout the aircraft industry. Adaptation of these methods to power systems provided a much more dependable design method to achieve operating reliability.

Such analysis begins with the same "base case" model of the power system as traditional techniques did. But probabilistic analysis starts out with a true normalcy base – a recognition that the natural condition of the system is "some equipment out" and that this condition will always be in a state of flux, with some equipment being repaired and put back in service while other equipment goes out of service. Like contingency enumeration methods, probabilistic analysis determines the consequences of every failure or combination of failures – can the system continue to operate, and how close to the edge will it be during that time? But unlike traditional methods, probabilistic analysis determines if and how every combination of two or more simultaneous failures could interrelate to create problems, whether or not the failed units are in the same immediate neighborhood, and it determines whether that combination of failures is likely enough to be of concern. By tracing reliability through the configuration of the system while analyzing expectation of failure-to-operate, it effectively analyzes configuration and its relationship to reliability, too.
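A minimal sketch of the probability-screened enumeration described above, assuming independent outages: every pair of simultaneous outages is considered, but only those likely enough to be of concern are retained for detailed study. The equipment names, unavailability figures, and screening threshold are illustrative assumptions.

```python
from itertools import combinations

# Illustrative unavailabilities (fraction of time out of service); the
# equipment names are hypothetical.  Outages are assumed independent.
unavail = {"xfmr_A": 0.004, "xfmr_B": 0.004, "line_1": 0.010,
           "line_2": 0.010, "bus_tie": 0.001}

threshold = 1e-5  # screen out combinations less likely than this
for a, b in combinations(unavail, 2):
    p = unavail[a] * unavail[b]   # P(both out at once), independence assumed
    if p >= threshold:
        print(f"{a} + {b}: P(both out) ~ {p:.1e}")
```

The same screening extends to triples and beyond; because the probabilities shrink multiplicatively, only a manageable subset of higher-order combinations ever passes the threshold.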

Depending on the exact method used (Chapter 14 will cover three basic types), partial failures can be accommodated using conditional capacity levels, partial failure states, or by recognizing sub-units within each main unit. Assumptions about loads, operating conditions, and other aspects of system operation can be modeled using probability distributions with respect to those variables.
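One common way to represent partial failures, sketched below, is a capacity outage probability table: each unit is assigned a full, a derated (partial-failure), and a failed state, and the unit states are combined into a probability distribution of total available capacity. The two-unit state data here are illustrative assumptions, not values from the text.

```python
from itertools import product

# Capacity outage probability table (COPT) for two hypothetical units,
# each with a full, a derated (partial-failure), and a failed state.
# Each state is (available capacity in MW, probability); figures illustrative.
unit_states = [
    [(50, 0.95), (30, 0.03), (0, 0.02)],  # unit 1
    [(50, 0.95), (30, 0.03), (0, 0.02)],  # unit 2
]

copt: dict[int, float] = {}
for combo in product(*unit_states):
    cap = sum(c for c, _ in combo)        # total capacity in this system state
    p = 1.0
    for _, prob in combo:                 # unit states assumed independent
        p *= prob
    copt[cap] = copt.get(cap, 0.0) + p

for cap in sorted(copt, reverse=True):
    print(f"{cap:3d} MW available: P = {copt[cap]:.5f}")
```

Comparing the resulting capacity distribution against the load duration curve yields the probability-weighted shortfall measures that a pass/fail N–1 check cannot produce.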

Table 9.6 Overall Recommendations for Planning Methods


Evaluations determine and prioritize problems in the system as they will occur in the actual operation. Areas of the system, or operating conditions, that are especially at risk are identified. A big plus is that the method can be fitted with optimization engines to solve the key identified cases – “Find the lowest cost way to make sure this potential problem won’t be a real problem.”

Such a methodology for reliability-based design of a power system can be created using a highly modified form of N–1 analysis (Figure 9.1) in which probability analysis is used at every stage, or by combining N–1 with reliability-analysis methods. Which is best for a particular utility or situation depends on a number of factors specific to each case. But the important point is that proven, widely used methods to perform this type of work are available to the power industry, and in use by selected utilities. Such methods are a key aspect of solving aging infrastructure problems: the reliability of aging system areas must be well analyzed and solutions to their problems well engineered. These types of methods will be covered in Chapter 14. Table 9.6 summarizes the overall recommendations on the planning method improvements needed to meet modern reliability engineering needs.

