USGS



APPENDIX A: DIAGNOSING COMPUTATIONAL PROBLEMS IN FEQ

SECTION FIVE


5. COMPUTATIONAL-PROBLEM DIAGNOSTIC CHART


Abrupt computational failures are fairly rare; the failures to be expected are those signaled by a continuous reduction of the time step, called computational failures here. Learning to find the causes of computational failures is essential, because such failures usually happen more than once with nearly every model. The skill to find the causes of these failures is acquired by experience, but the following guide is offered as an aid. The guide consists of a flow chart with various decision nodes (represented by diamond-shaped boxes) and abbreviated solution summaries. The abbreviated summaries are discussed at greater length in a series of notes, referenced by number in the body of the flow chart given in Plate 1. The notes should be read in the sequence given in the flow chart as the problem is defined.

The flowchart is given in Plate 1; each "Note n" box in the chart refers to the correspondingly numbered note below.

Note 1: Look at the iteration log. The key to understanding the source of the problem is to examine the iteration log. Do not focus attention on the last few iteration logs if the minimum time step is only a few seconds or less in duration. By that time, the true cause of the failure may have been masked. Instead, find the point where the time step begins its descent toward failure. Then examine a series of iteration logs from that point as the time step declines. Several characteristics should be checked. The first, and most important, characteristic is the node that has the maximum relative correction (RCORECT). Does one node clearly stand out? If not, is there a small subset of nodes that consistently appears? Several different nodes may appear, but if they are all adjacent or nearly adjacent on a branch, they are treated as a single problem node. The second most important characteristic to check is the variation of RCORECT as the iterations progress. Does RCORECT decrease rapidly but then stagnate at a value that is larger than the tolerance (EPSSYS) but very close to it? This is often the characteristic signature of Newton's method when it is iterating around a sharp corner in a function table.

A corner of some degree is present at every tabulated value in a table interpolated by using linear interpolation. The slope of the interpolated function is discontinuous at each tabulated value, so that corners thus formed violate the typical convergence criteria for Newton's method. No appreciable problem results if the corners are relatively smooth; when they become too sharp, however, Newton's method may be trapped at the corner and the iterations will never converge.
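The stagnation signature can be made concrete with a small sketch in Python (this is a schematic illustration, not FEQ's actual solver). The residual below is a hypothetical piecewise-linear function with a sharp corner at x = 0 whose minimum value sits just above the tolerance; the Newton iterates oscillate across the corner while the residual stagnates close to, but never below, the tolerance.

```python
def residual(x):
    # Hypothetical residual: piecewise linear with a sharp corner at
    # x = 0 (slopes -1 and +1); its minimum value, 0.001, lies just
    # above the convergence tolerance used below.
    return 0.001 + abs(x)

def slope(x):
    # Derivative of the piecewise-linear residual; discontinuous at 0.
    return -1.0 if x < 0 else 1.0

def newton(x0, tol=1e-6, max_iter=50):
    x = x0
    for i in range(max_iter):
        f = residual(x)
        if abs(f) < tol:
            return x, i, True
        x = x - f / slope(x)
    return x, max_iter, False

x, iters, converged = newton(0.5)
# The iterates bounce between about -0.001 and +0.001 across the
# corner; the residual stagnates near 0.002 and never converges.
print(converged, residual(x))
```

The same behavior in an FEQ run appears as RCORECT stagnating just above EPSSYS while the time step is cut again and again.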

Two other items in the iteration log may be useful in some cases. These are the location of the equation that contains the maximum residual (MXRES) and the sequence of the number of variables that exceed the current primary tolerance. If the equation containing the maximum residual is always the same but the other signatures are less clear, it may be useful to check the nodes associated with the equation containing the maximum residual.

Note 2: Scan output close to the problem node. Once the problem node is identified, look at the results at this node for several consecutive time steps. The variation with distance of flow area, top width, flow rate, and the Froude number are the important items to note. The printed output will need to be at short time step intervals in order to find the problem.

Note 3: Find a subset of problem nodes. In some cases, no single node stands out from a review of the iteration logs. Instead, try to find a small subset of nodes that appear frequently. Each of these nodes must be checked.

Note 4: Shoaling problems. The initial definition of branches may have to be refined if the water level in one or more branches reveals high points in the profile that were not isolated at junctions. Shoals will not cause computational problems if the water is always deep enough to prevent formation of critical flow. If the simulation involves dewatering of the shoaling areas, then great care must be used to make sure that no branch node will ever become dry. Using CHANRAT to define a 2-D table for a short distance at and near the high point of the shoal is a possible representation. The contraction and expansion losses into and out of the shoal area can be represented using KA and KD at the branch nodes.

Note 5: Steep channel. If a channel is too steep, the flow will be supercritical above a certain limiting depth. In this case, use the partial-inertia option for the branch in the series of options for each branch. Storm sewers are often smooth, and even a slope of less than 0.005 can result in supercritical flow at moderate depths. As the storm sewer fills, the flow will return to subcritical as the celerity increases rapidly. However, partial inertia may still be needed in order to compute the intermediate supercritical flows. When using the partial-inertia option, start with a value close to 1.0 (full inertia) and gradually reduce it until the simulation runs to completion.
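A quick way to see why a smooth sewer on a modest slope can run supercritical is to compute the Froude number at normal depth. The sketch below uses Manning's equation with the wide-rectangular approximation (hydraulic radius taken equal to depth) in SI units; the roughness, slope, and depth are illustrative values, not FEQ inputs.

```python
import math

def froude_at_normal_depth(depth_m, slope, n):
    """Froude number at normal depth in a wide rectangular channel,
    approximating the hydraulic radius by the depth (SI units)."""
    g = 9.81  # gravitational acceleration, m/s^2
    velocity = (1.0 / n) * depth_m ** (2.0 / 3.0) * math.sqrt(slope)
    return velocity / math.sqrt(g * depth_m)

# Hypothetical smooth concrete sewer: n = 0.012, slope just under 0.005,
# moderate depth of 0.3 m.
fr = froude_at_normal_depth(depth_m=0.3, slope=0.0049, n=0.012)
print(fr)  # Fr > 1: the flow is supercritical at this moderate depth
```

A smaller slope or a rougher lining drops the velocity, and with it the Froude number, back below 1.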

Note 6: Steep channel. In this case, the steep channel is present in only part of the branch. Add a branch to isolate the steep portion of the channel and connect it to the mild-slope section using a control structure that will support critical flow. Frequently, the use of code 3 (equal water-surface elevation) to relate the nodes at the junction will allow for a transition in flow type. If that fails, then a two-dimensional table computed using CHANRAT may be suitable. A two-dimensional table computed using EXPCON also may be useful if an identifiable abrupt change is present in the cross section. Short rapids probably should not be represented explicitly; they can instead be converted to an abrupt drop in the model, with little effect on the results away from the rapids. However, if the rapids are the focus of the model, then they must be represented explicitly. The partial-inertia option can be used if the isolated branch is steep.

Note 7: Falsification of energy conservation. This problem is so named because the depth declines to very small values, with the velocity increasing to exceed the speed of sound in air, before the computations terminate. The problem begins at a contraction in flow area, with a slight increase in Froude number at the contraction. The flow is often quasi-steady, but once started, the decline in depth is continuous. The problem is caused by the falsification of conservation of energy: it is impossible to satisfy both conservation of momentum and conservation of energy when the continuous governing equations are approximated by algebraic equations. Normally, the falsification of energy is so small that it has negligible effect on the simulation outcome. However, when the flow area changes too rapidly, the algebraic equations no longer give a good approximation of the continuous equations, and appreciable falsification of energy appears. This falsification occurs at each time step and feeds back into the solution. The behavior at the node with the decreasing depth is as if a large-capacity pump were injecting energy at that location. The solution to the problem is to interpolate one or more cross sections above and below the smaller section so that the errors of approximation are reduced. Eddy losses also may be added by means of KA and KD. If the problem persists even after the step length in the neighborhood of the problem has been reduced to small values, then a junction must be added at the node and one of the control-structure options used to represent the flow.

Note 8: Add energy dissipation. The loss of energy caused by turbulence when a partial-inertia branch joins a full-inertia branch must be represented. A hydraulic jump will form near this junction and will move as the flow changes. This jump is not simulated because the de Saint Venant equations do not apply to suddenly varied flow. However, there must be a provision for the loss of energy that should occur in this vicinity. To do this, increase KA and KD to large values (greater than 1.0) as the junction is approached along the partial-inertia branch. Experience indicates that increasing from fractional values to values of 3 to 4 over a few computational elements will often represent the energy losses that occur in transitions from supercritical flow to subcritical flow. Note that these losses can only be approximated and that the water-surface elevations near this junction are subject to considerable uncertainty both in the model and in the physical hydraulic system. However, the water-surface elevations away from the junction should be valid because the effect of the hydraulic jump is local.

Note 9: Possible control structure needed. The simplest elevation relation at a junction is equality of water-surface elevation. Consider only two branches at a junction with a change of cross section at the junction. If the smaller section is upstream, then forcing equality of water-surface elevation implies a loss of energy equal to the difference in the velocity heads at the junction. Conversely, if the smaller section is downstream, equality of water-surface elevation implies a gain of energy equal to the difference in velocity heads. The greater energy upstream case may be acceptable but greater energy downstream can lead to large errors if the velocity difference is appreciable. If three branches meet at a junction, the possibilities become more complex. In any case, the most likely solution to the problem is adding a control-structure function table that will support critical flow, such as CHANRAT or EXPCON.

Note 10: Control structure capacity overwhelms channel capacity. The channel may be too small for the rating at the control structure. The flow in the branch must always remain subcritical in the approach to a control structure. Check for mistakes or inconsistency in defining the cross section at the node in FEQ, or in the approach section in FEQUTL.

Note 11: Interpolate cross sections. Interpolating intermediate cross sections is almost always the first thing to try for a wide variety of computational problems. The cause of computational failure may be that a step length is too long. The step length needed in an analysis is difficult to define before the failure. However, step lengths suitable for steady flow are often adequate for unsteady flow, especially if the steady flows are near typical low flows. Still, there are cases in which the step length that allows computations to progress is too long to produce accurate results. See Franz and Melching (1996a), section 6.0, "Approximation of the Full Equations of Motion in a Branch," for more details.

Another reason for computational failure in this case may be a transition from an unlined channel to a lined channel. If the effect of the lining is large, the flow may change from subcritical to supercritical. A branch break (adding a junction) at the location of the change in smoothness is always advisable. The new branch may need to be simulated with the partial-inertia option if it is steep.

Note 12: Break in slope or change in section. The break in slope or section must be isolated if the flow approaches critical. Add a junction and an appropriate control-structure option that supports critical flow. The partial-inertia option can be used if the slope of the channel is steep.

Note 13: Level-pool reservoir nearly empty. A level-pool reservoir must always contain water. Dead storage should be included in the capacity table. If the reservoir goes dry, then capacity at the low end must be allotted by adding some false dead storage. The stage-storage table should extend down to zero elevation. The dead storage should have a value commensurate with the full capacity of the reservoir. For example, do not try to use a dead storage of only 0.01 acre-feet when the full capacity is 10 acre-feet! The change in storage is the key value in the computations. The dead storage will be added to the true storage at each elevation.
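The adjustment described above can be sketched in a few lines. The fraction of full capacity used below (2 percent) is an illustrative choice, not an FEQ requirement; the point is only that the dead storage be commensurate with full capacity, and that adding the same constant at every elevation leaves the changes in storage, the key values in the computations, unaffected.

```python
def add_dead_storage(storages, fraction=0.02):
    """Return a stage-storage table with false dead storage added.

    Dead storage is sized as a fraction of full capacity (the last
    tabulated value) and added uniformly at every elevation, so
    differences between adjacent storages are unchanged.
    """
    dead = fraction * storages[-1]
    return [s + dead for s in storages]

# Hypothetical 10 acre-foot reservoir tabulated from zero elevation.
storage = [0.0, 1.0, 4.0, 10.0]   # acre-ft at elevations 0, 2, 4, 6 ft
adjusted = add_dead_storage(storage)
print(adjusted)  # 0.2 acre-ft of dead storage added throughout
```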

Finally, the capacity table for the reservoir must extend higher than any simulated water level. This may require extending the table to a higher value than is physically possible, to help prevent problems in the Newton's method iterations. It will be apparent from the results if the method converges on a reservoir level that is physically impossible.

Note 14: Use of type 2 table for level-pool reservoir. If a type 2 table for a level-pool reservoir is used, make sure that the tabulation interval is small. Both the volume and surface area of the reservoir must be input to FEQ. The surface area is obtained by differentiating the volume with respect to elevation. In a table of type 2, the surface area is constant within each tabulation interval and a discontinuity in area is present at each tabulated elevation. Newton's method may fail if this discontinuity is too large. It is much better to use tables of type 3 or 4 to represent capacities for level-pool reservoirs because these tables produce a continuous first derivative.
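The area discontinuity is easy to see numerically. In the sketch below (hypothetical elevations and volumes), the surface area implied by linear interpolation of volume is the slope of each interval; it is constant within an interval and jumps at each tabulated elevation, which is exactly the discontinuity that can defeat Newton's method.

```python
# Hypothetical type 2 (piecewise-linear volume) reservoir table.
elev = [0.0, 5.0, 10.0]       # ft
volume = [0.0, 20.0, 100.0]   # acre-ft

# Surface area = dV/dz; under linear interpolation it is the constant
# slope within each interval, discontinuous at each tabulated elevation.
areas = [(volume[i + 1] - volume[i]) / (elev[i + 1] - elev[i])
         for i in range(len(elev) - 1)]
jump = areas[1] - areas[0]
print(areas, jump)  # area jumps from 4 to 16 acre-ft/ft at z = 5
```

A smaller tabulation interval shrinks each jump; a type 3 or 4 table removes the jumps altogether by making the first derivative continuous.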

Note 15: Problem with control structure. Make sure that the table used to represent the control structure is smooth. A common problem is to compute flow over a spillway or roadway with a coarse spacing in the head values at low flows. The relative change in flow over a spillway or weir is larger for smaller heads. Try to keep a nearly constant ratio between adjacent heads (rather than a constant increment) to approximately maintain a constant maximum relative error in interpolation. Rapid changes in flow with a coarse argument interval create corners in the tabulated function that can be large enough to prevent Newton's method from converging.

Note 16: Argument spacing in two-dimensional table. Avoid large relative changes in flow in two-dimensional tables. These changes create corners in the tabulated function that can prevent Newton's method from converging. Make sure the argument spacing is close enough for convergence. This should not be a problem if automatic argument spacing has been used.

Note 17: Near full-submergence problems. At complete submergence, the flow at a two-node control structure is zero. The rate of change of flow as the structure moves to non-zero flow is unbounded at the zero-flow full-submergence limit. As this limit is approached, the absolute value of the slope of the linear approximation gets large. When this slope is too large, Newton's method also may encounter convergence problems. To minimize this problem, the two-dimensional table should approach full submergence but not too closely. A minimum partial free drop (PFD) value of about 0.01 is probably reasonable. This yields a minimum non-zero tabulated submerged flow that is often one-fourth to one-sixth the free flow. Recall that measurements of submerged flows with PFD's near 0.1 are difficult to make. Thus, there is no essential loss in simulation accuracy with a value of 0.01.

Note 18: High-frequency irregularities in channel geometry. Natural channels often are highly irregular, with variations at every level and spacing of measured cross sections. If these variations are large within a short distance, critical flow can result, and a junction and an appropriate control structure may have to be inserted in the model. In most instances, however, the variations are more modest, yet they may contribute to annoying problems with computational failure or near failure. Near failure is often difficult to diagnose, but the symptoms are clear: a time step that varies widely for no reason apparent in the variations of the flow. Interpolating a cross section often solves the problem. Sometimes, the addition of eddy losses using KA and KD can solve this and other related problems.

The value to use for eddy losses is unclear, and practice is varied. Values for KA of 0.1 and for KD of 0.3 are commonly used in other models such as HEC-2. The USGS has recommended 0.0 for accelerating flow and 0.5 for decelerating flow. These coefficients are intended to account for losses that are not reflected in typical Manning's n values. The values of Manning's n given in summary tables are for regular, straight channels with minimal irregularity. Thus, small values of KA and KD could be used in most natural streams to represent the effects not included in the Manning's n values estimated from standard tabulations.

Note 19: Add delay time to lateral inflows. Treatment of lateral inflows in an unsteady-flow model can be problematic. Typically, these flows are estimated from a hydrologic model that represents the rainfall-runoff characteristics of the area tributary to the simulated stream channels. However, only the larger stream channels will appear in the unsteady-flow model. Therefore, the hydrologic model is assumed to represent the time delay and attenuation effects of the streams not present in the unsteady-flow model. Most hydrologic models either do not include these effects or require large amounts of user time to create the necessary watershed description. Furthermore, users often specify a coarse definition of the tributary-area distribution. All of these factors make unsteady-flow models with diffuse (lateral) inflows more difficult to apply than models without them.

In its output of tributary area, FEQ computes crude estimates of the average distance that water must travel to reach the branch from its tributary area. These estimates are based on one-half and one-quarter of the ratio of tributary area to branch length. If these distances are several miles, and the time step in the unsteady-flow model is 15 to 30 minutes, then it is physically impossible for all water appearing at any point in the tributary area to reach the branch within the time step, yet that is the assumption used in FEQ. Additional delay and attenuation introduced by using the parameter KLR can approximate some of the effect of the unrepresented channels and flow distances.
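The plausibility check described above is simple arithmetic. The sketch below computes the one-half and one-quarter estimates and compares them with the distance coverable in one time step; the travel velocity is an assumed illustrative value, not an FEQ parameter.

```python
MILE_FT = 5280.0

def travel_estimates(trib_area_sqmi, branch_len_mi):
    """One-half and one-quarter of (tributary area / branch length), in miles."""
    ratio = trib_area_sqmi / branch_len_mi
    return 0.5 * ratio, 0.25 * ratio

# Hypothetical branch: 10 square miles tributary to 2 miles of channel.
half, quarter = travel_estimates(trib_area_sqmi=10.0, branch_len_mi=2.0)

time_step_s = 15 * 60       # 15-minute time step
velocity_fps = 3.0          # assumed average travel velocity, ft/s
reach_mi = velocity_fps * time_step_s / MILE_FT

# Even the quarter estimate (1.25 mi) exceeds the roughly half mile
# coverable in one time step, so added delay (for example, via KLR)
# is warranted for this branch.
print(half, quarter, reach_mi)
```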

Note 20: Large lateral inflows. The computational difficulty of lateral inflows into a branch increases roughly as the square of the relative inflow rate if the computational element size is constant. Two options can be used to reduce the relative inflow rate. These are to reduce the lateral inflow or increase the flow in the stream. If neither option is reasonable, the only alternative is to reduce the computational element size by interpolating additional cross sections.

Note 21: Introduce a junction. The situations described in Note 19 often cause the distribution of tributary area along a modeled portion of stream to be highly variable. The area tributary to the modeled stream undergoes a discontinuity at the point of entry of every unmodeled tributary to the stream. If this discontinuity is large enough it may be reasonable to add a junction in the modeled branch and add a level-pool reservoir to delay and attenuate the lateral inflow. Sizing the level-pool reservoir is difficult, but some rough estimates of its hydrologic characteristics can be made to reasonably attenuate and delay the flows from the unmodeled tributary. The other alternative is to explicitly add the unmodeled tributary to the current model.

Note 22: Smooth bottom profile. If very low flows are to be simulated, evaluate the irregularity of the measured bottom profile. Although this may appear to distort the field measurements, it should be considered that the measured minimum point of most natural channels is uncertain by at least 0.5 feet and often more. Making reasonable adjustments to smooth the bottom profile can minimize the difficulty of modeling low flows.

The other alternative is to explicitly represent the riffle and pool characteristics of the stream in the model. Thus, every local high point in the channel invert must have the potential of supporting critical flow as the flow declines toward a minimum value. This can become quite complex and is only justified for a detailed investigation of flow on a small stream.

Note 23: QSMALL too small. QSMALL is an input parameter designed to provide a smooth transition between a relative-error criterion and a non-relative error criterion as the flows become small. QSMALL is only important if the flows in the simulation become small. The signature of QSMALL problems is failure to converge on the flow value when the flows are small, with the relative correction coming close to but never reaching the tolerance value. One way of setting the value of QSMALL is to select a minimum flow that will be of interest in a particular application. In most cases, this is not the same as the minimum flow value to be simulated. It could be the smallest flood peak of interest, an estimate of bankfull flow, and so forth. A small fraction of this flow (about 10 percent) could then be used for QSMALL.
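One plausible way to blend the two criteria (an assumption for illustration, not FEQ's documented formula) is to divide the flow correction by the larger of the flow magnitude and QSMALL. Without the floor, the "relative" correction at near-zero flows can never fall below the tolerance; with a QSMALL set to about 10 percent of a flow of interest, small absolute corrections at small flows are accepted.

```python
def relative_correction(dq, q, qsmall):
    """Relative correction with an absolute floor on the denominator."""
    return abs(dq) / max(abs(q), qsmall)

dq, q = 0.05, 0.1     # near-zero flow with a small absolute correction
epssys = 0.02         # hypothetical convergence tolerance

# No floor: the correction is judged relative to the tiny flow itself.
without_floor = relative_correction(dq, q, qsmall=0.0)    # 0.5: fails
# Floor set to 10 percent of a hypothetical bankfull flow of 100.
with_floor = relative_correction(dq, q, qsmall=10.0)      # 0.005: passes
print(without_floor, with_floor)
```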

Note 24: Flow increasing from low flow. The initial rise from low flow in a flood is often the most computationally difficult part of the flood to simulate. The rate of rise and the initial flow in the channel are often not under user control. Therefore, to reduce the error of approximation inherent in the conversion of the continuous governing equations to algebraic equations, reduce the average computational element length.

An alternative to reducing the average computational element length is to apply the Near-Zero Depth option. This can often allow the computations to approach essentially dry conditions and still rise to large flows. However, experience with this option is limited to date (1996). In some cases, increasing WT, KA, and KD also may solve the problem. These changes should be made carefully, however, and within the limits of properly simulating the physical hydraulic system.

Note 25: Interpolate cross sections. The action to take in this case is unclear. It is not even clear why this branch of the flow chart should ever be needed! Therefore, try interpolating cross sections at the problem area and determine whether the problem goes away. If not, perhaps the results at the added cross sections will clarify the nature of the problem.

Note 26: Corner in conveyance representation. If the argument spacing in a cross section function table is too coarse relative to changes in conveyance, then Newton's method may iterate endlessly at the corner. This is a rare problem. Reduce the value of DZLIM and recompute the cross section function tables close to the problem area.

Note 27: Reversing flow. Reversing flow places maximum demands on the computations, especially at the computational element that contains zero flow at the given time. The water levels in that computational element are changing rapidly. The location of this element changes with time, with flow in the stream above the backwater-affected zone, and with the backwater level. The effect is more severe when the flows in the stream are small and the depths are small. If the depths are always large, the problem is minimized. Thus, the simulation of backwater effects on small streams is especially difficult and may require an average step length in the tidal zone considerably smaller than the step length outside the tidal zone.

Flows also can reverse close to junctions when a large flow is input from a tributary stream. The water in the receiving stream can flow upstream for some distance and time, depending on the flow ratios and channel slopes.

A simulated flow reversal that is not likely to occur in the physical stream sometimes occurs when diffuse (lateral) flows are modeled and the lateral inflow is unreasonable. Flow reversal resulting from lateral inflow must be reviewed carefully before accepting it as a physically possible fact. It is most likely to be an artifact of the representation of the tributary area and the flows that come from the area.

Note 28: Number greater than convergence tolerance. The secondary convergence tolerance must be used with care. Sometimes a problem is apparently solved, but another one results. If the iteration log shows frequent occurrences of the secondary tolerance being used, and if the problem node appears to be one of the nodes that is outside the primary tolerance but inside the secondary tolerance, then turn off the secondary tolerance. This will probably require solving an earlier problem, but it also could solve the later problem.

