Any use of trade, product, or firm names in this document is for
descriptive purposes only and does not imply endorsement by the U.S.
Government.

See release.txt packaged with FEQ 8.92 for notes on earlier versions.
Also available at
http://il.water.usgs.gov/proj/feq/software/release_feq892.txt

Descriptions of changes made to FEQ
-----------------------------------
(Descriptions of changes made to FEQUTL follow)

Version 9.00  October 1, 1997

--Fixed bug in subroutine OPER that may have affected resetting of
variable-geometry structures when the time step was reduced after
failure of convergence.

--The internal representation of tributary area was reorganized to make
support of detention and delay reservoirs possible.  Externally the
input remains unchanged, but the order and nature of the internal
computations are different.  Limited testing has shown that the
differences in results are generally small, with extreme values showing
occasional variation of 1 or 2 units out of 10,000.

The output of tributary area has been changed to reflect the internal
change.  In previous versions, tributary area assigned to a branch in
the branch mode of input was allocated among the computational elements
based on the length of each element relative to the length of the
branch.  The tributary output showed the areas allocated to each
element, and the internal computations also were done element by
element.  In the current version, the tributary area is not allocated
to the elements on the branch; only a fraction of the runoff is
allocated to each element.  That is, the runoff total for the branch is
computed first, and then the runoff is allocated to the elements on the
same basis as the tributary area was allocated in earlier versions.
This order obtains the same result within roundoff error and reduces
the number of computations.  In general, the current version retains
the tributary area internally as it was given in the input.
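The reordering described above can be illustrated with a small sketch.
This is not FEQ source code, and the numbers are hypothetical; it only
shows that allocating area to elements and then computing runoff gives
the same totals, within roundoff, as computing the branch runoff once
and allocating it by element-length fraction.

```python
# Sketch of the tributary-runoff reordering described above.
# Hypothetical numbers; this is not FEQ source code.

def runoff_by_element(trib_area, intensity, elem_lengths):
    """Old order: allocate area to each element, then compute its runoff."""
    total_len = sum(elem_lengths)
    return [intensity * trib_area * (length / total_len)
            for length in elem_lengths]

def runoff_by_branch(trib_area, intensity, elem_lengths):
    """New order: compute branch runoff once, then allocate by length."""
    total_len = sum(elem_lengths)
    branch_runoff = intensity * trib_area   # one product per branch
    return [branch_runoff * (length / total_len)
            for length in elem_lengths]

lengths = [250.0, 400.0, 350.0]             # element lengths, ft
old = runoff_by_element(120.0, 0.02, lengths)
new = runoff_by_branch(120.0, 0.02, lengths)
assert all(abs(a - b) < 1e-12 for a, b in zip(old, new))
```

Both orders distribute the same branch total; the second simply defers
the allocation until after the single branch-level multiplication.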
--A side effect of reorganizing the tributary-area representation is
that the allocation of space for tributary area was modified.

--It is no longer required that every branch in the model appear in the
tributary-area input.  Only the branches that have tributary area need
appear in the input; other branches may be omitted.  This input works
only if the exact spelling and capitalization of the block heading
names as given in the manual are used.  If they are not used, the model
description cannot be processed correctly.

--Tributary area can now be attached to a boundary node.  The
tributary-area input for a boundary node is identical to the
tributary-area input for a level-pool reservoir.  The Code 6 boundary
condition is still given for the boundary node as if nothing had
changed.  However, the boundary condition must always be constant flow.
A time-series table or a time-series file CANNOT be given at a boundary
node that has tributary area attached to it.  In addition, the flows
from the tributary area must be routed through a delay reservoir.
Specification of delay is discussed below.  The delay reservoir must be
used in order to convert the mean flow that comes from the product of
tributary area and unit-area runoff intensity to an instantaneous flow,
that is, a point value of flow.  Boundary nodes cannot support
specification of mean flows; mean flows do not work in general and
eventually result in undamped oscillations as the model tries to adjust
the flow at the boundary to match the imposed mean flow.

--Detention ponds are now supported within a tributary area.  These
ponds are used to approximate the effect of the detention storage that
is required in all new developments.  Other commonly used terms include
ponds, basins, and reservoirs.  The following assumptions are used in
implementing detention ponds:

1. The outflow from each pond takes place through a low-level circular
   orifice that is always free of tailwater or over a spillway that is
   also assumed free of tailwater.

2.
   Each pond has a conservation pool at the level of the minimum point
   of the circular orifice.  In other words, the surface area of the
   pond at zero outflow is always greater than zero and should
   generally be much greater than zero.  The water surface in the pond
   is taken to be horizontal for all combinations of inflow and
   outflow.

3. Each tributary area given in the input can be divided into two
   parts: one part that flows through one or more detention ponds and
   the other part that does not flow through detention ponds.  The
   outflow from both subareas is added to create the inflow to the
   branch, level-pool reservoir, or boundary node.  The outflow from
   the detention reservoir is assumed to discharge into the branch,
   level-pool reservoir, or boundary node directly, just as the
   tributary area not subject to detention does.

4. The reservoir, orifice, and overflow spillway are defined by the
   following set of values:

   Identifier used
   in FEQ input    What it means
   --------------- ----------------------------------------------------
   YD              Design depth.  This value is the vertical distance
                   from the invert of the orifice to the minimum point
                   on the overflow spillway.  Default = 5 feet.

   UAQ             Unit-area flow.  This value, expressed in the
                   ENGLISH unit set as ft^3/second/acre, gives the
                   maximum flow that the pond is to release through the
                   orifice when it is at its design depth.  The typical
                   range is from 0.05 to 0.30.  In DuPage County, the
                   storage required in acre-feet per acre to meet a
                   range of unit-area flows for a range of directly
                   connected impervious-area fractions has been
                   computed by the Northern Illinois Planning
                   Commission (NIPC).  These results are tabulated in a
                   2-D table of type 10 and stored in the ASCII file,
                   DETAIN.TAB, that is stored in the \BIN directory.
                   This table is number 10000.
                   For example, if the directly connected impervious
                   fraction is 0.25 and the allowed unit-area flow is
                   0.1 ft^3/second/acre, this table requires that 0.33
                   acre-feet/acre, or 0.33 feet of storage, be provided
                   for each acre of area that drains through the
                   detention pond.  Default = 0.1 ft^3/s/acre.

   BSS             Basin side slope.  This identifies the assumed side
                   slope of the detention pond, given as the horizontal
                   distance for an assumed unit rise.  A typical range
                   is from 2.0 to 5.0.  A value of 0.0 indicates that
                   the pond walls are vertical.  Default = 4.0.

   AVDA            Average drainage area.  Gives the assumed average
                   drainage area for the detention ponds in a tributary
                   area.  The area tributary to a branch, LPR, or
                   boundary node in an FEQ model may have more than one
                   detention pond in it.  We usually do not wish to
                   simulate every detention pond.  If we want to
                   simulate every pond, then we can describe each pond
                   as a level-pool reservoir and include it explicitly
                   in the model.  In that case, each pond would have to
                   be connected to the model, and the detail required
                   would be excessive.  The detention facility in the
                   Tributary Area input is not designed to simulate the
                   detention-pond network in detail.  The input is
                   designed to approximate the major effect of numerous
                   small detention ponds.  The tributary area to the
                   branch, level-pool reservoir, or boundary node is
                   divided by AVDA to find the number of average ponds
                   that would be present.  The flow computed from the
                   tributary area is divided by the number of average
                   ponds.  For example, if AVDA= 40 acres and the
                   tributary area subject to detention is 365.5 acres,
                   then the number of average ponds is 365.5/40 =
                   9.138.  If the average flow from the tributary area
                   during a time step is 51.6 ft^3/s, the inflow to the
                   average detention pond would be 51.6/9.138 = 5.647
                   ft^3/s.  The outflow from the average detention pond
                   is then multiplied by the number of average
                   detention ponds.
                   Thus, if the outflow from the pond were 1.25 ft^3/s,
                   the flow applied to the branch, LPR, or boundary
                   node would be 1.25*9.138 = 11.423 ft^3/s.  The
                   storage-outflow relation for the pond is non-linear,
                   especially at smaller depths, so this yields a
                   different result than forming one large storage to
                   represent all the ponds in the tributary area.  Only
                   when the storage-discharge relation is linear would
                   aggregation across ponds of disparate size be valid.
                   Default = 40 acres.

   WS              Weir slope.  The overflow weir is approximated by
                   flow over a triangular broad-crested weir.  WS gives
                   the slope of the weir crest in horizontal distance
                   over vertical distance.  A large value of WS gives a
                   weir crest that is nearly horizontal.  Values in the
                   range of 25 to 50 should give a fair representation
                   of the flow over the banks of the pond.  A
                   triangular crest was chosen because the flow usually
                   starts at a low point and then rapidly increases as
                   more of the bank of the pond is overtopped by the
                   rising water.  Default = 50.0.

   OFWC            Overflow weir coefficient.  This identifies a
                   dimensionless weir coefficient for the overflow
                   weir.  Values of 1.0 or less are probably in the
                   correct range.  The weir equation for this form and
                   for the values defined here is
                   Q = OFWC*0.64*SQRT(0.4*GRAV)*WS*H^2.5.
                   Default = 0.9.

   ORIFICE_CD      Orifice coefficient of discharge.  If the orifice is
                   sharp-edged or square-edged, the coefficient of
                   discharge is approximated by 0.60 within about +/- 2
                   percent.  The weir coefficient for flow over a
                   circular sharp-crested weir is also close to 0.6
                   except at small heads relative to the diameter.
                   Therefore, the same discharge coefficient can serve
                   for both weir and orifice flow.  Default = 0.6.

   ORIFICE_TAB     Table number for the orifice flow function.  This
                   table number is 10001 in the file DETAIN.TAB in
                   \BIN.  This table is of type 4 and approximates the
                   dimensionless orifice flow function covering both
                   weir and orifice flow.
                   The table has been computed and checked for a
                   dimensionless range of head relative to orifice
                   diameter from 0.0 to 542.  The relative error in the
                   table is less than 3.5 x 10^-6.  The table is used
                   to define the relation between storage and outflow
                   for the pond.  Default = 10001.

   LUI             Land-use index for the impervious area.  Needed to
                   determine where the impervious area appears in the
                   sequence of land uses given for each gage in the
                   input.  If the impervious area appears first, then
                   LUI=1.  The value of LUI must be constant for all
                   input of tributary area for an FEQ model.
                   Default = 1.

   UADV_TAB        Unit-area detention volume table number.  This is
                   table number 10000, already mentioned above.
                   Default = 10000.

   The default values may be changed by the user.

5. The following steps define the characteristics of the detention
   storage (either default values or user-given values may be used):

   5.1 Compute the sum of areas for the land uses subject to detention
       to obtain the total area subject to detention.

   5.2 Compute the fraction of imperviousness for the area subject to
       detention.

   5.3 Using the impervious fraction and the value of unit-area flow,
       UAQ, look up in the unit-area detention volume table, given by
       UADV_TAB, the unit-area storage needed.

   5.4 Compute the volume of storage needed by multiplying the
       unit-area storage found in step 5.3 by the value of AVDA, the
       drainage area for the average detention pond.

   5.5 Assume that the pond is an inverted frustum of a cone.  That is,
       the basin is circular in plan and has a flat bottom, or
       conservation pool.  The volume computed in step 5.4 must be
       contained by this frustum over a depth given by YD, the design
       depth, with the basin side slope given by BSS.  Setting the
       volume contained in the frustum equal to the required volume
       yields a quadratic equation in the radius of the conservation
       pool.  Solve for the radius of the conservation pool.  This
       value then defines the volume and surface area of the pond as a
       function of the depth of water in it.
       The volume is a cubic function of the depth.

   5.6 Find the design discharge from the product of AVDA and UAQ.
       Then compute the diameter of the orifice such that the orifice
       function will give the design discharge when the head on the
       invert of the orifice is YD.  Note that this value can deviate
       from the traditional Q = (PI*D^2/4)*SQRT(2*GRAV*YD) value
       because the flow through the orifice deviates from this relation
       when the relative head is less than 4 or 5.  Orifice diameters
       can become as large as 20 percent of the design depth.

   5.7 Compute a table of type 4 that gives the flow out of the pond as
       a function of the storage in the pond.  This computation is done
       at 50 different levels extending from 0 to 1.5*YD.  The storage
       in the pond is assumed to follow the same function even though
       the design depth is exceeded.  We do this because ponds often
       have banks that are of irregular elevation and because the
       overflow weir has large capacity.  Thus, it is unlikely that a
       detention pond will ever have a large head on the overflow
       spillway.

The table resulting from step 5.7 is then used to define the response
of the pond to inflows to it.

An example input fragment from the Tributary Area Input block shows how
detention is requested.  The line numbers have been added for reference
here.  Note that column 1 of the input is at the "B" in "BRANCH".  The
tributary area for the branch is given first.  If detention storage is
requested, the next line should start with the name "DTEN" as shown.
This name is followed in this example by the identifier name
"FRACTION", which defines what fraction of the tributary area for this
branch is subject to detention.  This means that one-half of the
tributary area will be subject to detention and one-half will not.  The
land-use distribution in the two areas remains the same.  If
FRACTION= 1.0, then the entire tributary area is subject to detention.

. . .
01 BRANCH= 2 FAC=1.
02 NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
03    0    2 0.150 0.000 0.390 0.000 0.030 0.000
04 DTEN FRACTION=0.5
05 BRANCH=...
. . .

If you wish to specify a detailed distribution of land uses in the area
subject to detention, then use one of the following two formats:

. . .
01 BRANCH= 7 FAC=1.
02 NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
03    0    2 1.180 0.640 0.650 0.000 0.200 0.000
04 DTEN      -0.50 -0.25 -0.15  0.00 -0.05 0.000
05 BRANCH= 8 FAC=1.
06 NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
07    0    2 0.204 0.098 0.283 0.024 0.032 0.000
08 DTEN       0.12  0.04  0.15  0.01  0.02 0.000
. . .

Lines 1-4 give the usual tributary-area input, which provides the TOTAL
tributary area.  Line 03 is the total tributary area, both detention
and non-detention.  Line 04 is the DTEN line that indicates the area
subject to detention by giving the negative of the areas that are
subject to detention.  FEQ will then assign the areas accordingly,
deducting the area subject to detention from the total tributary area
to find the area that is not subject to detention.

Lines 5-8 give the usual tributary-area input for the area that is not
subject to detention, followed by the DTEN line that defines the area
subject to detention.  In this case the areas on the DTEN line are
positive or zero.  Line 07 shows the tributary area without detention.
Line 08 gives the tributary area with detention.

If you want to change one or more of the values that define the
detention, and these changes are to apply to only one tributary area,
then give the PARM line followed by one or more of the identifiers
given above.  All of the identifiers must fit on the PARM line.  The
line can have 112 characters on it.  These parameter values will apply
only to the detention reservoir defined on the line above the PARM
line.  The PARM line must be AFTER the DTEN line.

. . .
BRANCH= 1 FAC=1.
NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
   0    2 0.060 0.000 0.250 0.000 0.090 0.000
DTEN FRACTION=0.7
PARM UAQ=0.05 AVDA= 15. YD=3.0 WS=25. BSS=3. OFWC=0.8
. . .

If you want the changes to the parameters defining a detention pond to
apply to all subsequent detention ponds in the input, then put the DEF
line, for default, BEFORE the first detention specification to which it
is to apply.  All detention ponds given after this point will have
these changed values.

. . .
BRANCH= 1 FAC=1.
NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
   0    2 0.060 0.000 0.250 0.000 0.090 0.000
DEF UAQ=0.05 AVDA= 15. YD=3.0 WS=25. BSS=3. OFWC=0.8
DTEN FRACTION=0.7
. . .

Note that the key words DEF, DTEN, and PARM must all begin in column 1
of their line.

--FEQ supports linear delay of the flows computed from tributary area.
These flows are computed from the product of an average unit-area
runoff intensity and the tributary area.  Thus, the sequence of average
or mean flows is a step function, with the abrupt changes in the step
function acting as small, but abrupt, dam-break flood waves imposed on
the flow in the branches of the model.  In order to smooth this
function and, thus, improve the computational characteristics, the
option to provide a linear delay of flows to a given branch has been
developed.  This linear routing can approximate the natural delay and
attenuation of flows present in the physical system at a level of
detail beyond what can be simulated with distinct reservoirs and
branches.  The user is referred to a study done at Purdue University
and reported in Rao, R.A., Delleur, J.W., and Sarma, B.S.P., 1972,
"Conceptual hydrologic models for urbanizing basins," Journal of the
Hydraulics Division, American Society of Civil Engineers, vol. 98, no.
HY7, pp. 1205-1220, for an example of measured tributary lag times.
Values derived from this study, shown below, are available in FEQ, in
addition to the option to provide user-determined estimates.
        Lag times in hours (derived from Rao and others, 1972)

                 Impervious cover as decimal fraction
         ------------------------------------------------------
  Area
 (mi^2)    0.00   0.08   0.25   0.50   0.75   1.00
  ----    -----  -----  -----  -----  -----  -----
  0.10     0.25   0.22   0.18   0.14   0.11   0.09
  0.50     0.56   0.50   0.41   0.31   0.25   0.21
  1.00     0.80   0.72   0.58   0.45   0.36   0.30
  2.50     1.28   1.15   0.93   0.72   0.58   0.48
  5.00     1.83   1.64   1.33   1.02   0.82   0.68

The delay is specified in a manner similar to detention.  The delay
specification line begins with DLAY as its first characters, starting
in column 1.  This is followed by either an explicit specification of
the delay time in hours, or by requesting an equation by name to
compute the delay time from the tributary area and the fraction of
impervious cover.  The first case is shown here.  The identifier to
give an explicit delay time is K.

BRANCH= 2 FAC=1.
NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
   0    2 0.150 0.000 0.390 0.000 0.030 0.000
DTEN FRACTION=0.5
DLAY K= 1.0

In this case, the delay time will be applied to BOTH the area subject
to detention and the area not subject to detention.  The identifier to
give an equation name is KEQ.

BRANCH= 2 FAC=1.
NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
   0    2 0.150 0.000 0.390 0.000 0.030 0.000
DTEN FRACTION=0.5
DLAY KEQ= PURDUE

In this case PURDUE is the name of the equation, derived from Rao and
others as discussed above, for computing the delay time in FEQ.

If a different delay time is needed for each of the two areas,
detention and non-detention, then the key word for the line differs.
DLAY_DTA is the keyword that selects the detention tributary area.  The
keyword DLAY_NDTA selects the non-detention tributary area.

BRANCH= 1 FAC=1.
NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
   0    2 0.060 0.000 0.250 0.000 0.090 0.000
DTEN FRACTION=0.7
PARM UAQ=0.15 AVDA= 15. YD=3.0 WS=25. BSS=3. OFWC=0.8
DLAY_DTA K= 1.0
DLAY_NDTA K= 2.0

An equation can also be selected using the KEQ identifier for either or
both of the areas.
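The smoothing effect of a linear delay can be sketched as a
linear-reservoir routing, in which storage is proportional to outflow
(S = K*O) so that dS/dt = I - O smooths the stepwise mean inflows
exponentially.  This is a sketch under that assumption, with
hypothetical numbers; it is not FEQ's internal implementation.

```python
# Sketch of linear-reservoir routing with delay constant K (hours):
# S = K*O, so dS/dt = I - O smooths the stepwise mean inflows.
# Hypothetical numbers; this is not FEQ source code.
import math

def route_linear_delay(inflows, k_hours, dt_hours):
    """Route a sequence of mean inflows (ft^3/s) through a linear
    reservoir, holding each inflow constant over a step of dt_hours."""
    c = math.exp(-dt_hours / k_hours)   # decay factor over one step
    out, o = [], 0.0
    for i in inflows:
        o = i + (o - i) * c             # exact solution for constant inflow
        out.append(o)
    return out

# A step change in mean flow rises smoothly instead of abruptly.
smoothed = route_linear_delay([10.0] * 8, k_hours=1.0, dt_hours=0.25)
assert smoothed[0] < 10.0 and smoothed[-1] > smoothed[0]
```

The step input of 10 ft^3/s is approached gradually, which is the
smoothing of the abrupt dam-break-like steps described above.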
--A new option for controlling a gate in the Operation Control Block
has been added: the GATETABL option.  This option allows direct control
of the opening of the gate as a function of the water-surface elevation
at three exterior nodes.  The three nodes are named as follows:

control node -- the location giving the water-surface elevation that is
    being sensed and/or controlled.  An example location is downstream
    of a flood-control reservoir to sense flood stage in the stream.
    Another location that could be used as a control would be upstream
    of the flood-control reservoir to sense when a large flow has
    appeared.

upstream node -- the node that is upstream of the gate or gate set that
    is being operated.

downstream node -- the node that is downstream of the gate or gate set
    that is being operated.

The upstream and downstream nodes should be those that define the flow
through the 2-node control structure.  The control node can be the same
as the upstream or downstream node, but it need not be.

GATETABL uses one or more 2-D tables of type 10 to define the relative
gate opening.  A relative gate opening of 0.0 means that the gate is
closed, and a value of 1.0 means that the gate is open as wide as
physically or administratively allowed.  The argument on the rows of
the table of type 10 is the head at the control node.  The datum for
head is defined in the table of type 10 as an optional input.  The
argument for columns of the type 10 table is the elevation difference
in the sense of the elevation at the upstream node less the elevation
at the downstream node.

The following table is an example illustrating the format of a gate
control table.  The column with the heading of Salt is the flood stage
at the control node.  The datum for flood stage is 673.5 ft.  This
table is part of an application to an offline reservoir.  The upstream
node in this case is in the stream, and the downstream node is in the
reservoir.
The goal of operation is to divert water into the reservoir to reduce
flood stages and also to drain some water by gravity out of the
reservoir when the stream is below flood stage.  The table has four
subregions.  The line of zero head for flood stage and the line of zero
difference of elevation form the boundaries of these subregions.  The
upper left subregion has the stream below flood stage and the stream
below the elevation in the reservoir.  This is the gravity-drainage
part of the table.  The upper right subregion has the stream above the
reservoir and the stream below flood stage.  The gates should be closed
because it makes no sense to drain water below flood stage and
needlessly add water to the reservoir that must be pumped out later.
The lower left subregion has the stream above flood stage and the
reservoir above the stream.  The gates should be closed because any
release could make the flood worse than it would be without the
reservoir.  The last subregion, the lower right, has the stream above
flood stage and the reservoir below the stream.  This is the region of
gravity inflow, and the gates will generally be open.

TABLE#= 5300
TYPE= 10
; HDATUM gives the datum for the left-hand column: in this case the flood stage
HDATUM= 673.5
LABEL= Dominant table for lower slide gates
; left-hand column contains the head above flood stage for Salt Creek
; top row contains the difference: Salt Creek - reservoir
  Salt  -20.0   -1.0  -.250  0.250    1.0    2.0   50.0
  -5.0   0.15   0.15    0.0    0.0    0.0    0.0    0.0
  -3.0   0.15   0.15    0.0    0.0    0.0    0.0    0.0
  -2.0   0.10   0.10    0.0    0.0    0.0    0.0    0.0
  -0.5   0.10   0.10    0.0    0.0    0.0    0.0    0.0
 -0.10    0.0    0.0    0.0    0.0    0.0    0.0    0.0
  0.10    0.0    0.0    0.0    0.0    0.0    0.0    0.0
   0.5    0.0    0.0    0.0    0.0   0.10   0.10   0.10
  15.0    0.0    0.0    0.0    0.0   0.10   0.10   0.10
 -1.0

This table has a small region of gate closure near the points of zero
flood stage and zero elevation difference.  This region can be used to
avoid long durations of water trickling into or out of the reservoir by
gravity.
The region can also provide a null zone, or dead zone, so that changes
in water level at the control node as the result of initiation of
pumping do not cause the gate to open.

The values in the current table are not typical of the values in an
actual application.  These values have been picked to test the basic
functioning of the software.  However, the table shows the basic
pattern that all tables will take.  The regions showing the gates
closed could have the gate open a small amount to simulate the effect
of a leaky gate.  Gate leakage may add to the burden of pumping the
reservoir during long periods of flow below flood stage.

The number of columns in the table is limited at present to a total of
20.  One of these columns is for the argument on rows, and the others
are for the data.  The default format is 6 characters per column.  The
line width for reading these tables is set to 120 characters.  All the
values for a row of the table must appear on a single line.

An example input in the Operation Control Block is shown here.

01 BLK=00002
02 BLKTYPE=GATETABL
03 MINDT=3600.
04 PINIT=0.0
05 CTLN UPSN DNSN TABN  DOM  AUX
06  U20  U20  F29 5300  MAX  MAX
07 -1

This input is a simple example of a control block for a gate.  Line 2
defines the type of control block.  Line 3 gives the minimum time, in
seconds, that must elapse before the gate setting will be changed.
Line 4 is the initial gate opening at the start of the simulation.
This will be overridden by the initial elevations at some future time
because we can look up the gate opening in the control table for
GATETABL.  Line 5 is a line of mandatory headings.  Line 6 gives the
dominant control table.  The first control table given is always taken
to be the dominant table.  There may be one or more auxiliary tables as
well, but this example has none.  If present, auxiliary control tables
would appear on line 7 and later, and the columns under the headings
DOM and AUX would be left blank.
The three exterior nodes given for each control table appear in the
order of CTLN (control node), UPSN (upstream node), and DNSN
(downstream node).  The control-table number appears next, followed by
the action rules of the control tables under the columns headed by DOM
and AUX.

The action rule of a table defines how FEQ will use the gate opening
found in the table.  If the action rule for the dominant table is MAX,
under no conditions will the gate opening ever be larger than that
given by the dominant table.  If the action rule for the dominant table
is MIN, then under no conditions will the gate opening be smaller than
that given by the dominant table.  The action rule for the auxiliary
tables is defined by the rule given for them together with the rule for
the dominant table.  If the action rule of the auxiliary tables is MAX,
then the maximum of all gate settings among the auxiliary tables is
selected.  On the other hand, if the action rule for the auxiliary
tables is MIN, then the minimum setting of all the auxiliary tables is
selected.  This rule only determines which auxiliary table provides the
gate setting.  The setting defined by the operation of the action rule
on the auxiliary tables is then tested against the setting for the
dominant table to finally select what opening to use for the next time
step.

In general, the control node for the dominant table will be one of the
upstream or downstream nodes for the structure being controlled.  The
auxiliary tables permit gate operations to be initiated from one or
more nodes that are at a greater distance away.  In this way, some
anticipation of flows that are en route to the dominant control point
can be built into the gate operation.  The contents of the control
tables must be designed with the particular structure and stream in
mind.  A range of trial runs may be required to define the table such
that the benefit of the flood-reduction structure is made as large as
possible.
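The combination of dominant and auxiliary action rules described above
can be sketched as follows.  The helper function and its numbers are
hypothetical, a reading of the rules as stated rather than FEQ source
code: the AUX rule picks which auxiliary table supplies a candidate
setting, and the DOM rule then caps (MAX) or floors (MIN) that
candidate with the dominant-table setting.

```python
# Sketch of the DOM/AUX action-rule combination described above.
# Hypothetical helper; this is not FEQ source code.

def select_gate_opening(dom_setting, dom_rule, aux_settings, aux_rule):
    """Combine the dominant-table setting with auxiliary-table settings.
    aux_rule (MAX or MIN) picks which auxiliary table supplies the
    candidate; dom_rule then caps (MAX) or floors (MIN) the result."""
    if aux_settings:
        candidate = (max(aux_settings) if aux_rule == "MAX"
                     else min(aux_settings))
    else:
        candidate = dom_setting     # no auxiliary tables: dominant governs
    if dom_rule == "MAX":           # dominant is an upper bound on opening
        return min(candidate, dom_setting)
    return max(candidate, dom_setting)  # MIN: dominant is a lower bound

# Dominant table allows at most 0.4; auxiliary tables suggest 0.6 and 0.3.
assert select_gate_opening(0.4, "MAX", [0.6, 0.3], "MAX") == 0.4
assert select_gate_opening(0.4, "MAX", [0.2, 0.3], "MAX") == 0.3
```

Under a MAX dominant rule the dominant table acts as a ceiling: an
auxiliary suggestion of 0.6 is clipped to 0.4, while 0.3 passes through.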
Once such tables are constructed, they could also be used to guide or
suggest the actual operation of the gates based on what worked well in
the past.  This could be a good starting point in operating the gates
during an ongoing flood, even if simulation of the flood in real time
is not done.  The elevations at the three nodes would have to be made
available in near real time, and then the tables could be used manually
or in a small interactive computer program to yield recommended gate
settings.

Version 9.01  October 15, 1997

--Corrected problem with detection of the DEFINE MACROS block to
terminate tributary input when one or more branches without tributary
area were left out of the input.

--Modified summary output of tributary area so that negative tributary
area is not included in the basin totals.  This area had been included
in the summary for each gage and in the total for the basin.  Thus, the
total tributary area given for each gage and for the basin will now be
the total of all positive area given in the input.

--Removed some overlooked debugging output in processing macro
instructions.

--Removed output of detention and delay summary headings when no
detention or delay was present.

Version 9.02  October 17, 1997

--Found cases in which the NEW NETWORK option would not properly
process the terminating -1 at the end of the Network-Matrix-Control
Input.  An incorrect count could result in an input format error and
generate an error message.

Version 9.03  October 20, 1997

--Changed XSECIN, in COMPROG.FOR, to read both the old and new formats
for interpolated cross-section function tables.

--Corrected problem if tributary area is zero in delay computations.

Version 9.05  December 10, 1997

--Corrected various format statements that contained a backslash
instead of a forward slash.  Found in compiling FEQ for a UNIX
workstation.

--Updated version string for HECDSS files to report the correct version
in the future when the version changes.

--Added dual-source irrigation control tables.
Dual-source irrigation assumes that the rainfall-runoff model providing
the unit-area runoff intensity files also supplies a time series of
unit-area intensity of irrigation application.  The rainfall-runoff
model assumes that water for irrigation is always available to meet the
irrigation demands.  The name dual-source irrigation denotes that there
are two sources for this irrigation water: near-surface and imported
water.  Near-surface water comes from surface-water bodies or shallow
ground water that is hydraulically connected to surface-water bodies.
Imported water comes from outside the watershed or from deep ground
water that is not hydraulically connected to the near-surface sources.

The control in FEQ is used to specify which of the two sources is used.
This is accomplished by giving a control table that specifies the
proportion of water coming from near-surface sources as a function of
the water-surface elevation at an exterior node that reflects the level
of the near-surface water source.  If all applied irrigation water
comes from the near-surface-water source, then the value of the
function is 1.0.  If no water comes from the near-surface-water source,
then the value of the function is 0.0.  Values of the function less
than 1.0 but greater than 0.0 specify a blend of the two water sources.

The irrigation input block must appear immediately following the
tributary-area input block.  A control table may be specified for the
area tributary to a level-pool reservoir, a boundary node, or a branch.
The control table applies to the entire branch; control tables cannot
be given for an element on a branch.

Withdrawal of water is simulated in FEQ by using a negative area to
compute a negative lateral inflow to the flow path.  One of the land
uses in the area tributary to the flow path must contain the area that
is receiving the irrigation water.  This area is used to compute the
runoff, however, and not the irrigation.
To compute the irrigation withdrawal, FEQ multiplies the irrigated area
by the negative of the near-surface fraction taken from the control
table for this flow path.  This area is treated like another land use,
and the user must include this land use in the total land-use count.
The unit-area irrigation intensity value from the time series computed
in the rainfall-runoff model is then multiplied by this negative area
to obtain the withdrawal of water from near-surface sources.

An example input appears below together with the input for tributary
area.  In this case, all of the tributary area was attached to
level-pool reservoirs.  The units of the areas given are tenths of an
acre, so the factor of 4356.0 converts them to square feet.  The value
of SFAC was 1.0, so stations were given in feet.  There are seven land
uses, including the one for computing irrigation: Irrig.  Note that the
area for the irrigation computation is given as 0.  This area will be
supplied by the computations in the IRRIGATION block.  The land use
subject to irrigation in this case is Grove.  Its land-use index is 3
because it is the third land use given, counting left to right.  The
land-use index (LUI) for Irrig is 7.

01 BRANCH= 0 FAC=4356.0
02 NODE GAGE UbImp UbPer Grove Pastr Forst WetLd Irrig
03 F200    6  3725  5587     0     0   418   997     0 A
04 F201    6  1202  1802     0     0     7   382     0 B
05 F202    6   692  1037     0 21521  3369 15315     0 C1 C2 C6 C7
06 F232    6     0     0 12679     0     0     0     0 C1 C2 C6 C7
07 F203    5   121   181     0  5585   150  1134     0 C4 C5
08 F233    5     0     0 12538     0     0     0     0 C4 C5
09 F204    5   148   221     0    31   480  1324     0 C3
10 F234    5     0     0 14825     0     0     0     0 C3
11 F205    2  3402  5103     0     0 19039   800     0 D
12 F235    2     0     0  3483     0     0     0     0 D
13 F206    5     0     0  2942    16     0   243     0 E
14 F207    5    17    26  3657     0     0   109     0 F
15 F208    5     0     0     0     0   255  3327     0 G
16 F238    5     0     0  6098     0     0     0     0 G
17 F209    5   288   432     0 34618  9062  4534     0 H1 H2 K3
18 F239    5     0     0  5143     0     0     0     0 H1 H2 K3
. . .
19 F270    3    50    75 82363  1807   110  2625     0 NFW6NFW7NFW8NFW9
20 -1
21
22 IRRIGATION
23 Node/ LUI  LUI  H2O  Cntl
24 Bran  Irig Comp Node Tab#
25 F232    3    7 D335   100
26 F233    3    7 D335   100
27 F234    3    7 D317   100
28 F235    3    7 D323   105
29 F206    3    7 D320   106
30 F207    3    7 D311   100
31 F238    3    7 D305   100
32 F239    3    7 D305   100
. . .
33 F270    3    7 D221   100
34 -1

The irrigation block has a heading of IRRIGATION starting in column 1. This block is followed by two heading lines defining the contents of the subsequent columns. The first column gives the exterior node id or the branch number that has some or all of its tributary area subject to irrigation. The second column gives the land-use index of the land use that is subject to irrigation. In the current example, this number is always 3. The third column gives the land-use index for the computation of the near-surface withdrawal of irrigation water. This number is always 7 in this example. The fourth column gives the exterior node id that defines the water-surface elevation controlling the withdrawal of near-surface water. The fifth and last column gives the table number of the function table containing the fraction of irrigation withdrawal that comes from near-surface sources. Two examples of these tables are given here, taken from the FUNCTION TABLES block. Table 100 is used for most of the irrigated area. When the water-surface elevation is at 15.0 ft or below, no near-surface water is used for irrigation. All irrigation water then comes from other sources. These other sources represent an addition of water to the basin. At an elevation of 15.25 ft, one-half the water for irrigation comes from near-surface sources and one-half comes from elsewhere.

TABLE#= 100
TYPE= -2
REFL=0.0
ELEVATION FRACTION Control table for irrigation from project canal.
-15.0 0.0
 15.0 0.0
 15.5 1.0
 50.0 1.0
 -1.0
TABLE#= 105
TYPE= -2
REFL=0.0
ELEVATION FRACTION Control table for F205
-15.0 0.0
 17.0 0.0
 17.5 1.0
 50.0 1.0
 -1.0

Table 105 applies to the area attached to the level-pool reservoir with a downstream node of F205. Near-surface water becomes unavailable at source levels below 17.0 ft. At 17.5 ft, all irrigation water is obtained from near-surface sources.

Version 9.05 (Same version as above) January 5, 1998

--Use of directory or file names longer than the DOS value of 8 characters. The following behavior was found: Extended-DOS executables fail to interpret long file names properly. The file name and all directory names must not be longer than 8 characters. Windows NT executables do interpret long file names properly. However, these executables continue to run more slowly than extended-DOS executables when both are run under Windows NT. In some cases, the extra run time approaches 50 percent of the extended-DOS run time.

Version 9.06 February 3, 1998

--Added new option to GATETABL. If the downstream node is left blank in the Operation Control Block, then both of the arguments in the 2-D table of type 10 are heads relative to the head datum for the table given in the header block for the table.

--An analysis of non-convergence events is now appended to the output. This analysis was added to FEQ in a prior version but was not described in the release notes at that time. FEQ counts the number of times that each node appears as the location of maximum relative error in the last iteration of a time step that fails to converge. Thus, if all time steps converge, no analysis is given. The following example shows what is reported. The location is given for each occurrence as well as the count and fraction of the total number for each location. This example shows that only 6 time steps failed to converge with the selected time step.

Analysis of Non-Convergence Events

Exterior nodes appearing as last location of maximum relative correction.
  Node   Count  Fraction

Branch nodes appearing as last location of maximum relative correction.

Branch   Node   Count  Fraction
    93   9333       1      0.17
    90   9010       1      0.17
    89   8909       1      0.17
    79   7913       1      0.17
    79   7914       1      0.17
    79   7915       1      0.17

The analysis is useful in troubleshooting large models with hundreds or even thousands of convergence failures. Nodes that have a larger proportion of a large number of failures should be the focus of troubleshooting efforts, because there is probably something at or near the node in question that requires closer examination.

Version 9.07 February 27, 1998

--Added additional checking for tributary-area input. Heading lines for the definition of the distribution of tributary area must now have NODE or USTAT as their first non-blank information. Any deviation from these two standard labels will cause an error, and the processing will stop. This has been done because omitting a heading line in past versions did not cause an error. This was true of reservoir input and the station-range input. FEQ would merely treat the first line of numerical input as a label and continue processing as if everything were in order. The only signal of trouble would be in the results or in a shortfall in the total tributary area.

Version 9.08 March 8, 1998

--Warning 22, discontinuous flow at a flow boundary, was issued in error. Changes to flow-boundary handling when the detention and delay reservoirs were added caused this bug. The code has been corrected so that the proper boundary-flow value is used in the checking.

Version 9.09 April 2, 1998

--Added an additional option to the generic underflow gate Code 5 Type 9 command for the Network Matrix Control Input. The addition is an optional gate-efficiency factor table and the flow node used for the lookup in the table. This was added for the weir-gate on the Elmhurst Quarry diversion weir in order to vary the gate capacity in keeping with the physical model test results.
When water flows over the diversion weir, the approach flow to the gate apparently is distorted such that the flow through the gate is reduced significantly. The efficiency of the gate decreases as the flow over the weir increases. Eventually, at close to maximum flow over the weir, the gate allows water to flow back into Salt Creek. The added input is in integer positions 9 and 10. The fragment below shows the format.

This is integer position 9 and it contains the table number of the
gate-efficiency factor function table.
                                |
5 9 F56 F95 F56 001ELMG 550 550 40 F96 660.00 7.0
                                   |
This is integer position 10 and it contains the exterior node id that
defines the flow used to look up the gate-efficiency factor.

A gate-efficiency factor table example is:

TABLE#= 40
TYPE= -2
REFL=0.0
DivertedQ Gatfactor Gate-efficiency factor for Quarry weir-gate
 -100.0 1.0
    0.0 1.0
  350.0 1.0
  560.0 0.37
 1420.0 0.26
 1490.  0.24
 1660.  0.0
 5000.  0.0
   -1.0

In this case, the argument is the total diverted flow, including both the flow over the weirs and the flow through the gate. The gate can discharge a maximum of about 350 cfs before water flows over the weirs. Notice the sharp decline in gate efficiency when flow over the weirs begins. The negative argument is present to make sure that a small negative flow, which could arise from roundoff, does not cause an error condition. The high flow of 5000 cfs is present to prevent table overflow. The gate factor is fixed during a time step at the value of the diverted flow that existed at the start of the time step. This avoids feedback onto the gate during the solution process. Given the time steps normally used and the flow variation normally encountered, this assumption should not distort the results significantly.

Version 9.10 April 15, 1998

--Added additional check for cross-section table numbers to detect a missing table at the last node on a branch.
Existing checks did not flag this as an error, and the program later failed with a subscript-out-of-range system error.

Version 9.11 June 5, 1998

--Added new item to the Run Control Block, TAUFAC, a tributary-area unit factor, to allow FAC in the Tributary-Area Block to be used for purposes other than unit conversion. Many users have already used it in such cases. TAUFAC follows immediately after SFAC. If not present, FEQ will assign a value of 1.0 to TAUFAC. However, if it is present, it must be spelled exactly as given here. Errors in spelling will result in a failed run or some other incorrect result. An input fragment follows:

. . .
MAXIT= 30 SFAC=1.0 TAUFAC=5280.0 QSMALL=10.0 IFRZ=00009
. . .

The rule for setting the value of TAUFAC is as follows:

1. Determine the linear factor for tributary area required to convert it to the internal units used by FEQ: square feet if GRAV is in the range of 32.2 and square meters if GRAV is in the range of 9.8. For example, if the tributary area is in square miles, then the linear factor is 5280; 5280^2 then gives the conversion factor. If the tributary area is in acres, then the linear factor is 208.71033, that is, the square root of 43,560.

2. Divide the linear factor for tributary area by SFAC. This gives the value of TAUFAC.

Here are some typical examples:

Trib        Trib Area
Area Unit   Linear Factor    SFAC     TAUFAC
---------   -------------   -----    --------
mi^2           5280.0         1.0    5280.
mi^2           5280.0       5280.       1.0
mi^2           5280.0        100.      52.80
mi^2           5280.0       1000.       5.280
acres           208.71+     5280.       0.03953
acres           208.71+     1000.       0.2087+
acres           208.71+      100.       2.0871+
acres           208.71+        1.     208.71+

The above values assume that FAC is 1.0 or has a value unrelated to the conversion of units. The logic for these values is as follows:

1. FEQ ALWAYS multiplies the tributary-area values given by the user by the square of SFAC. The default assumption is that the area unit for tributary area is given by a square that is SFAC by SFAC feet (meters) in size.
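The TAUFAC rule can be checked numerically. The sketch below is illustrative, not FEQ code; the function names are made up, and FAC is assumed to be 1.0. It uses FEQ's convention that tributary area is multiplied internally by SFAC squared (and by TAUFAC squared and FAC as well):

```python
# Illustrative check of the TAUFAC rule; not FEQ source code.
def taufac(linear_factor, sfac):
    """TAUFAC is the tributary-area linear factor divided by SFAC."""
    return linear_factor / sfac

def internal_area(user_area, sfac, tau, fac=1.0):
    """Internal tributary area in ft^2 (m^2):
    (user input value) * SFAC^2 * TAUFAC^2 * FAC."""
    return user_area * sfac**2 * tau**2 * fac

# Tributary area in square miles with SFAC = 1000:
tau = taufac(5280.0, 1000.0)                # 5.280, as in the table above
area_ft2 = internal_area(1.0, 1000.0, tau)  # 1 mi^2 -> 27,878,400 ft^2
```

Because TAUFAC divides the linear factor by SFAC, the SFAC-squared multiplication always cancels and the internal area comes out in square feet (square meters) regardless of the stationing unit.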
Thus, if SFAC is 10, the area units are assumed to be 100 ft^2.

2. The internal value of tributary area used by FEQ is (user input value)*SFAC^2*TAUFAC^2*FAC. This number should be the area in square feet (square meters).

This addition permits a free choice of the values of FAC, SFAC, and the units for tributary area. This addition gives considerable power but must be applied correctly. Be sure you have the correct values for SFAC, TAUFAC, and FAC for your application. IT IS THE USER'S RESPONSIBILITY TO UNDERSTAND WHAT THESE THREE FACTORS DO AND TO PICK THE CORRECT VALUES.

Version 9.15 June 18, 1998

--A number of changes were made in the way the various source files are arranged:

1. The COMPROG directory was changed to SHARE.
2. COMPROG.FOR was broken into many smaller units.
3. ARSIZE.PRM for FEQ and FEQUTL were combined into one file.
4. Several .COM files between FEQ and FEQUTL were the same or nearly so. These files were adjusted to be the same and moved to the SHARE directory to be used by both FEQ and FEQUTL.
5. One bug was found in FEQ that occurred with a DG compiler. No other compiler encountered the problem.

These tasks were done by RS Regan, USGS, Reston. I have made check runs of the modified code and have found no differences. However, there may be options not tested that could cause errors.

Version 9.18 14 July 1998

--The GATETABL option in the Run-Control Block has been modified to permit specification of a control-table selection table. This makes it possible to vary the control table with time to reflect differing flood sizes and characteristics. A selection table is specified whenever the table number in the control-table input location is negative. The selection table must contain the table numbers for the control tables that are to be used. Here is an example of the input and the various tables:

. . .
BLK=00003 BLKTYPE=GATETABL MINDT=60.0 PINIT=0.0
CTLN UPSN DNSN  TABN DOM AUX   Lower gates for WDIT pumped LPR
 F28  F28  F32 -9600 MAX MAX
  -1
. . .
This portion of the Operation Control Block input shows a negative table number, -9600. An example of this table (with added line numbers) is shown below.

01 TABLE#= 9600
02 TYPE= -7
03 REFL=0.0
04 YEAR MN DY HOUR Selection table for low-level gates.
05 1925  1  1  0.000000 9601.1
06 1949  6 12  0.000000 9601.1
07 1949  6 12  0.250000 9602.1
08 1950  3  5  0.000000 9602.1
09 1950  3  5  0.250000 9603.1
10 1951  2 15  0.000000 9603.1
11 1951  2 15  0.250000 9604.1
. . .

Line 05 is the start time of the dummy event in the TSF. Line 06 indicates that 9601 is the low-level control table for the dummy event. Line 07 indicates that 9602 is the low-level control table for the first real event. In line 09, note that each event except the dummy event used the control table from the previous event for the first 15 minutes.

1988  4  1  0.000000 9664.1
1988  4  1  0.250000 9665.1
1988  9 11  0.000000 9665.1
1988  9 11  0.250000 9666.1
1988  9 15 23.000000 9666.1
1925  1  1  0.000000 0.0

The reverse time step signals the end of the table. Examples of the control tables are shown below.

TABLE#= 9601
TYPE= -10
'( 13A6)' '(1X, 13A6)' '(F6.0)' '( 13F6.0)' '(1X,F6.1, 12F6.2)'
HDATUM= 673.500
LABEL=Low Level Event Start: 1925 1 1 0.00000
Fstage-9999.  0.00  0.10 50.00
-99.00   0.0   0.0   0.0   0.0
 -4.17   0.0   0.0   0.0   0.0
 -3.17   0.0   0.0   0.0   0.0
 15.00   0.0   0.0   0.0   0.0
 -1.00

TABLE#= 9602
TYPE= -10
'( 13A6)' '(1X, 13A6)' '(F6.0)' '( 13F6.0)' '(1X,F6.1, 12F6.2)'
HDATUM= 673.500
LABEL=Low Level Event Start: 1949 6 12 0.00000
Fstage-9999.  0.00  0.10 50.00
-99.00   0.0   0.0   0.0   0.0
  0.00   0.0   0.0   0.0   0.0
  1.83   0.0   0.0   0.0   0.0
  2.83   0.0   0.0   1.0   1.0
 15.00   0.0   0.0   1.0   1.0
 -1.00

Version 9.19 11 August 1998

--Corrected error in GENSCN related routine that placed the incorrect station and invert elevation for exterior nodes on branches into the FEO file.

Version 9.20 14 November 1998

--Negative values for a fixed-flow boundary (Code 6 Type 1) were found to not work properly since version 9.0. Positive flows worked properly.
Added code to detect negative values and treat them properly. Version 9.21 14 November 1998 --Found an undefined variable in TDLK15 that appears to have been present for some time. This would have affected all models that used underflow gates. The nature of the effect depends on too many factors to ensure a complete description here. HGR was used instead of HGL in the interpolation between tables of type 13 that define the flows for various gate openings. Version 9.22 30 November 1998 --Changed work space for processing the Network Matrix info so that it is of the proper type. This was needed to run the chk mode of the latest Lahey compiler. Added an equivalence overlay to make this possible. --Found that SYSGN was undefined in TDTCHK for side-weir tables. This only affected checking for overflow on a 2-D table used in Code 14 at the end of the run. Added the proper sign for this case. --Argument TIME to SET_INITIAL_OPER_BLK when called in FEQ was of the wrong precision. Added SNGL to the TIME argument at this call. --Found that HECDMY.FOR needed changes to permit null calls. Some program units are called even if the HECDSS is not being used. Added FULL = 0 to UPDATE_DSSOUT_JTIME in HECDMY.FOR. FULL was undefined if HECDMY.FOR was linked into the files. Version 9.23 9 December 1998 --Changed balance computation to provide more detail and to output a balance summary at the end of each event when running with DIFFUS=YES. Two balances are reported: one for the branches and level-pool reservoirs and the other for the tributary area that may include detention and delay reservoirs. --Added output of units to the detention basin summary output. --Changed some still existing carriage-control characters in the output of the DTSF description. Also changed upper case text to mixed case. --Placed cross-section-function-table lookup in line in the loop in SETINX. This reduced run times by 2-4 percent in two test cases. 
--Changed some carriage-control characters in error and warning statements.

Version 9.24 28 January 1999

--Added code to check the state of 2-D tables used in Code 5 Type 6 instructions: 2-Node Control Structures. The state of the tables refers to the nature of the flow in the table when the ups or dns node is at its maximum elevation. The tables of concern are the type 14 tables whose contents are based on WSPRO bridge-analysis results. As outlined in the documentation, such tables will probably not have valid free-flow conditions in them. WSPRO does not compute critical flow through the bridge opening, and it often proves difficult to get the computations in WSPRO to succeed when the Froude number is close to 1.0. As a consequence, a type 14 table with contents defined by WSPRO analyses should never have a lookup in the free-flow state. Such lookup results will be incorrect. FEQUTL has been changed to output a source string following the HDATUM item in all 2-D tables of any type. This string is currently used only by the WSPRO checking code. Thus, only tables of type 14 with contents defined by WSPRO runs need to be changed. The source string for these tables is WSPRO, and it should appear after the HDATUM entry. An example is:

TYPE= -14
HDATUM= 685.120 WSPRO

The spaces between the numeric response to HDATUM and the source string must be present. One space is sufficient if the numeric response is at the far right of its set of columns. This will be the case if the table has been created by FEQUTL. These changes can be made manually in order to avoid rerunning WSPROT14 in FEQUTL. The results of the table-state analysis appear after each event in the FFFILE if diffuse inflow from a DTSF is used. Otherwise, they appear in the standard user output after the extreme values are given.
The analysis results look like the following:

Two-D table states for Code 5 Type 6:

Head- Tail-
water water            Table Table  Flow   Flow
 node  node               id  Type state  Ratio
----- ----- ---------------- ----- ------ ------
 D142  U141             1543    14  Sub.
  D32   U31             1704    13  Free
  D54   U53             2503    14  Sub.  0.22+
 D155  U154             2504    14  Free  1.07-
*ERR:XXX* Table state in preceding line is invalid. Flow exceeds max
flow in WSPRO computations by more than 5 percent.
  D56   U55             2505    14  Sub.  0.76+
  D59   U58             2506    14  Sub.  0.99+
  D60   U59             2507    14  Sub.  0.74+
  D63   U62             2508    14  Sub.  0.32+
  D67   U66             2511    14  Free  1.04*
  D68   U67             2512    14  Sub.  0.31+

The headwater node is the node from which water flows, and the tailwater node is the node to which water flows. If the flow reverses at the structure, two entries will appear with the headwater and tailwater nodes reversed and perhaps with different table numbers (if different tables were supplied in the instruction). The entries are given in ascending order of table number. If the table type is 13, this information is reported only for user interest. The flow ratio will be blank for all type 6 or 13 tables. If the table is of type 14 and the flow ratio is blank, then the table is treated as NOT having its contents defined by WSPRO. This depends on all the tables with contents defined by WSPRO being marked as outlined above. Tables of type 14 with information in the flow-ratio column are those tables that were marked as having contents defined by WSPRO. If the ratio is followed by the plus (+) sign, then the table was in the submerged flow state. The flow used in the flow ratio is the flow through the structure when the maximum elevation occurred at the headwater node. This may differ from the maximum flow at the flow node for the structure. The denominator of this ratio is the flow in the table treated as the critical flow for the head at the tailwater node.
However, if the table contents are defined by WSPRO, this will NOT be the critical flow, but only the maximum flow computed for the given fixed tailwater head. For example, at headwater node D54, the flow ratio was 0.22, well within the submerged state. If the flow ratio exceeds 1.05, the ratio is marked with a minus (-) and an error message is issued. If the flow ratio is between 1.0 and 1.05, no message is issued, but the flow ratio is marked with an asterisk (*).

Version 9.25 February 4, 1999

--Added internal flag on cross-section function tables to indicate the source of the table contents: read from input or interpolated within FEQ. Needed to support more options for GENSCN output. XTIOFF in arsize.prm increased from 20 to 21.

--GENSCN output options have been extended from the limited options available in past versions. The past version option is retained without change. The new option is selected with a heading of NEW GENSCN OUTPUT. An example of the new options is:

 1  FREE NODE TABLE
 2  NODE NODEID   DEPTH DISCHARGE ELEVATION SIGN
 3  F1   RESERV    95.4      0.00       0.0   -1
 4  F2   RESERV    95.4      0.00       0.0   +1
 5  F3   OVERFLOW   2.0      0.00      95.0   -1
 6  F4   OVERFLOW   2.0      0.00      95.0   +1
 7  NEW GENSCN OUTPUT
 8  FILE=NEWTEST
 9  ADD=ALL_NODES
10  SUB=LPR_NODES
11  SUB=INTERPOLATED
12  SUB=FREE_NODES
13  SUB=BRANCH_EXN
14  ADD=NONE
15  OPT BRA NODE
16  ADD   0   F1
17  END
18
19  BACKWATER ANALYSIS
20  BRANCH NUMBER= -1
21  DISCHARGE= 54.0

Lines 7 through 17 are the new GENSCN output specification. This block appears after the FREE NODE Initial Conditions table and before the specification of BACKWATER ANALYSIS. Lines 8 through 14 each have a keyword followed by an equal sign and a response. The FILE input is required and gives the base filename for the three files created for GENSCN. In this case, the three files will be: NEWTEST.FEO, NEWTEST.FTF, and NEWTEST.TSD. The *.FEO file contains a description of the output so that GENSCN can find the information in the other two files.
The *.FTF file contains the function-table storage system from FEQ so that values from the cross section at a node can be plotted. The final file, *.TSD, contains the time-series data, flow and elevation, for each node defined in the input. Lines 9 through 14 specify the nodes by referring to various groups (classes) of nodes. The keyword ADD means to add the nodes in the class to the list that will be output to GENSCN. The keyword SUB means to subtract the nodes in the class from the list. Subtraction of non-existent nodes is ignored. Also, adding the same nodes twice will not create duplicates--only the last request is retained. In the above example, line 9 adds all nodes in the model. Then line 10 subtracts all the level-pool reservoir nodes. Line 11 removes all nodes on branches that have a cross section interpolated within FEQ. Line 12 removes all free nodes, and finally line 13 removes all exterior nodes on branches. The current classes are:

1. ALL_NODES -- all the nodes in the model
2. ALL_BRANCHES -- all the nodes on branches in the model
3. ALL_EXN -- all the exterior nodes (those on branches AND free nodes)
4. INTERPOLATED -- all nodes on branches that have a cross section interpolated by FEQ
5. INPUT_XSEC -- all nodes on branches that have a cross section input to FEQ
6. FREE_NODES -- all free nodes, that is, exterior nodes not on a branch
7. LPR_NODES -- level-pool reservoir nodes, that is, the downstream free node of the two nodes used to represent a level-pool reservoir
8. BRANCH_EXN -- all exterior nodes on branches
9. NONE -- used to terminate specification of the output by class name and revert to detailed specification

These class names can be used with ADD and SUB to specify a wide variety of nodes. However, lines 15-17 show an example of adding an individual node. One can also add an individual branch by specifying the branch number under the BRA column and leaving the NODE column blank.
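The ADD/SUB class rules described above behave like ordinary set operations: duplicates never accumulate and subtracting unselected nodes is harmless. The sketch below is illustrative only; the class contents are hypothetical, and FEQ itself imposes the final output order no matter how the nodes were selected:

```python
def select_nodes(classes, requests):
    """Apply ADD=/SUB= requests in order. Duplicates never accumulate,
    and subtracting nodes that are not selected is simply ignored."""
    selected = set()
    for keyword, name in requests:
        members = classes[name]
        if keyword == 'ADD':
            selected |= members
        elif keyword == 'SUB':
            selected -= members
    return selected

# Hypothetical classes for a tiny model: two free nodes, one branch node.
classes = {
    'ALL_NODES': {'F1', 'F2', 'B101'},
    'FREE_NODES': {'F1', 'F2'},
}
# ADD=ALL_NODES then SUB=FREE_NODES leaves only the branch node.
remaining = select_nodes(classes, [('ADD', 'ALL_NODES'), ('SUB', 'FREE_NODES')])
```

Using a set mirrors the stated rules directly; the ordering question does not arise here because FEQ writes the selected nodes in its own fixed order.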
This new version of the GENSCN output places the nodes in a particular order that is not under the control of the user. This is distinct from the old command. In the old command, no class names existed, and one added individual nodes or branches by specifying each one. The order of the information in the time-series file was then the same as the order of specification. This order can no longer be maintained because specification by class name does not define an order. Therefore, with the new command option, the order is exterior nodes first, followed by branch nodes. The exterior nodes are given with the nodes on branches first, followed by the free nodes in ascending order based on their number. The order for the exterior nodes on branches is given by the order of input of the branches. If branch 1 is given before branch 2, then branch 1 will be given first. However, if branch 2 is given first, then branch 2 will be before branch 1 in the output order to GENSCN. Looking at the *.FEO file will give the order that was used. The branch nodes follow the exterior nodes. Here, the order is also dictated by the order of input of the branches to FEQ. This could become important in GENSCN when plotting a water-surface profile because profile plots crossing branch boundaries require that the branches be continuous and in the proper order, upstream to downstream, in the GENSCN output.

Version 9.26 April 8, 1999

--Changed FAC to EPSFAC in subroutine INFO. Done to avoid a conflict in later extensions.

--Forced OUTPUT to be 0 no matter what the user supplied. Also deleted the input of the optional values STPRNT and EDPRNT that depended on OUTPUT being non-zero.

--Added option to set QCHOP to its default value when its input value is less than zero. Done to prepare for future additions.

Version 9.50 August 6, 1999

--Changed processing of the Run Control Block (RCB) to namelist-like input.
The new form requires that an explicit heading be given, RUN CONTROL BLOCK, starting in column 1, as do the other block headings. The contents of the block then consist of variable names, each followed by an equal sign and the value to be used for the variable. A minimal Run Control Block would be:

RUN CONTROL BLOCK
NBRA= 5
NEX= 10
STIME= 1990/10/20:0.D0
ETIME= 1995/12/02:24.D0

All other variables given in previous versions still exist. However, they all now have default values that are used if they are not found in the input. The default values are given in the table below. The reference number gives the LINE number in the revised input description for the Run Control Block. The rules for giving the variables are as follows:

1. A variable and its response must all appear on one line of input. A line of input can contain up to 120 characters in the Run Control Block.

2. The equal sign must appear after the variable name.

3. The variable name must be given as in the table. In the future, alternatives for names will be given.

4. Some variables have more than one value in a response. These variables include standard date-time sequences and the specification of IFRZ. The date-time sequences need all four values to be present; none can be omitted. IFRZ requires that the values for the number of time steps be present. However, more than the required number of time steps is accepted, and the excess information is ignored. For IFRZ, the number of steps AND the step values must all appear on one line.

5. The spacing within a name and its response is not restricted. However, adjacent name-responses must be separated by one or more spaces. For example, NBRA=5 and NBRA = 5 will both give the same result. However, NBRA=5NEX=10 is invalid.
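A scanner for this kind of input can follow the rules above almost literally. The sketch below is illustrative only, not FEQ's parser; the function name and error handling are hypothetical, and multi-value responses such as IFRZ or the date-time sequences would need additional handling beyond this single-value sketch:

```python
import re

# One "name = value" pair: a name, an equal sign (spacing around it is
# unrestricted), and a response containing no blanks or equal signs.
PAIR = re.compile(r'([A-Za-z_]\w*)\s*=\s*([^\s=]+)')

def parse_rcb_line(line):
    """Scan one Run Control Block line (up to 120 characters) into a
    dict of variable-name/response pairs."""
    if len(line) > 120:
        raise ValueError("Run Control Block lines are limited to 120 characters")
    return {name: value for name, value in PAIR.findall(line)}

pairs = parse_rcb_line("MAXIT= 30 SFAC=1.0 TAUFAC=5280.0 QSMALL=10.0")
# pairs == {'MAXIT': '30', 'SFAC': '1.0', 'TAUFAC': '5280.0', 'QSMALL': '10.0'}
```

Note that a run such as NBRA=5NEX=10 does not split into two pairs under this pattern, which matches the rule that adjacent name-responses must be separated by spaces.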
Table of Variable Names, Default Values, and Description

Reference  Variable      Default         Brief Description
 Number      Name         Value
---------  -----------   --------------  ----------------------------------------------------------
  1        NBRA          1               Number of branches in the model
  2        NEX           2               Number of exterior nodes in the model
  3        SOPER         NO              Indicates presence of Operation of Control Structures Block
  4        POINT         NO              Indicates presence of Point Flows Block
  6        WIND          NO              Indicates presence of Wind Information Block
  5        DIFFUS        NO              Indicates presence of tributary area and diffuse inflows
  5        MINPRT        0               Flag to minimize size of the user output file
  5        LAGTSF        0               Flag to signal lagging of diffuse inflows
  5        DMYEAR        1925            Year of dummy event for diffuse flows
  5        DMMN          1               Month number of dummy event for diffuse flows
  7        UNDERF        NO              Flag for treatment of numerical underflows
  8        ZL            0.0             Depth below which zero inertia approximation is used
  9        STIME         1901/01/01:0.0  Starting time: year/month/day:hour
 10        ETIME         1900/01/01:0.0  Ending time: year/month/day:hour
 11        GRAV          32.174          Gravitational acceleration
 12        NODEID        YES             Node ids MUST be used for version 9.30 and later
 13        SSEPS         0.1             Convergence tolerance for water ponded above sewers
 14        PAGE          24              Page size for file containing special output
 15        EPSSYS        0.05            Global relative primary convergence tolerance
 15        ABSTOL        0.000005        Global absolute convergence tolerance
 15        EPSFAC        2.0             Factor defining secondary relative convergence tolerance
 16        MKNT          5               Maximum number of iterations per time step
 16        NUMGT         0               Count limit for secondary tolerance
 17        OUTPUT        0               Output level-must be left at 0
 17        PROUT         0               Output level-must be left at 0
 18        PRTINT        1               Printing interval for detailed time-step output
 18        DPTIME        9999/12/31/24.  Start time for debugging print output
 21        GEQOPT        STDX            Global governing-equation option
 22        EPSB          0.0005          Relative convergence tolerance for steady flow computations
 23        MAXIT         30              Maximum number of iterations per node in steady flow computations
 24        SFAC          5280.           Stationing factor for unit conversion
 24a       TAUFAC        1.0             Tributary area factor for unit conversion
 25        QSMALL        1.0             Small flow used in computing relative change in flow
 26        QCHOP         0.001           All flows < |QCHOP| are printed as zero
 27        IFRZ          1 300.          Number of frozen time steps AND the time step length(s)
 28        MAXDT         1800.           Maximum time step in seconds
 28        MINDT         1.0             Minimum time step in seconds
 28        AUTO          0.7             Weighting factor for computing running sum of iteration count
 28        SITER         2.8             Initial value of running sum of iteration count
 28        HIGH          3.2             Upper value for running sum of iteration count
 28        LOW           2.4             Lower value for running sum of iteration count
 28        HFAC          2.0             Factor for increasing time step
 28        LFAC          0.5             Factor for decreasing time step
 29        MRE           0.20            Maximum relative change in a variable during extrapolation
 29        FAC           0.0             Extrapolation factor
 30        DWT           0.1             Increment to BWT when max. iteration count is exceeded
 31        BWT           0.55            Base value for time-integration weighting factor
 32        BWFDSN                        File name for the initial conditions when DIFFUS=YES
 33        CHKGEO        NO              Indicates checking of hydraulic geometry
 34        ISTYLE        NEW             Style of exterior nodes MUST be NEW in version 9.30 and later
 35        EXTTOL        0.0             Limit for reporting cross-section table overtopping
 36        SQREPS        1E30            Action level for sum of squares of equation residuals
 37        GETIC                         File name for initial conditions
 38        PUTIC                         File name for saving state of the model
           OLD_SUMMARY   YES             Indicates old form of summary output
-------------------------------------------------------------------------------------------------------------

--Changed the descriptive lines at the head of the input file. As many as 50 lines of 120 characters each can be given to describe the run. These lines are given as part of the description of output. An additional line giving the version number, version date, and the date and time of the run is given in the description. The occurrence of the heading for the Run-Control Block terminates the descriptive information.
Thus, it is possible to have blank lines in this description. Every line above the line containing "RUN CONTROL BLOCK" is considered part of the descriptive input. The added line giving the version, the date of the version, and the date/time of the run has the following format: Version: 9.50 Version date: 3 June 1999 Date/time of run: 1999/07/21: 11.14.12.234 The time is given in the sequence of hour, minute, second, and milliseconds. Thus, the time in the example line is 11:14 and 12.234 seconds. --Function tables are now referenced with a table id using a maximum length of 16 characters. The old table numbers will still work, but they are treated as strings of characters and NOT as numbers. This means that table numbers of 0010 and 10 are now different when they were the same in earlier versions. The table id can be composed of the digits 0-9, the letters A-Z, and a-z, the underbar character, _, and the period. Spaces and other characters will cause problems. Do not make a table id look like a floating point number. For example, 11D3 will be treated as a double precision number, 11,000, and not a table id. Table ids, such as 1000d, 10D, 234E, or 376e, will also cause problems because they look like incomplete specifications of floating point numbers. In the same way, a table id that looks like a number with a decimal point will not be detected as an id but as a number. To allow these table ids would require reducing the level of error checking on numeric input. The best way to avoid problems like this is to switch to using table id's that start with an alphabetic character or the underbar character. These are clearly not numbers so that no confusion will result. In addition, the maximum branch number was increased from 999 to 9999. At the same time, support for the old style of exterior node designation, that is, by numbers, has been deleted. The new style for exterior nodes MUST now be used. 
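The rules above for distinguishing table ids from numbers can be sketched as a validity check. This is an illustration of the stated rules, not FEQ's actual scanner; the function and pattern names are hypothetical:

```python
import re

# Characters permitted in a table id (at most 16 of them).
VALID = re.compile(r'^[0-9A-Za-z_.]{1,16}$')
# Anything that scans as a complete or incomplete floating-point
# number, such as 11D3, 1000d, 234E, or 12.5.
FLOAT_LIKE = re.compile(r'^[0-9]+\.?[0-9]*[DdEe]?[+-]?[0-9]*$')

def usable_table_id(tid):
    """True when tid is a safe table id under the rules above."""
    if not VALID.match(tid):
        return False
    if tid.isdigit():        # old-style table numbers still work
        return True
    # ids that look like (incomplete) floating-point numbers cause trouble
    return not FLOAT_LIKE.match(tid)

# Ids starting with a letter or underbar are always safe:
# usable_table_id("GATE_40") is True; usable_table_id("11D3") is False.
```

As the text recommends, starting every id with an alphabetic character or the underbar sidesteps the whole question, because such strings can never be mistaken for numbers.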
Node ids must also be used, and the maximum length of a node id has been
increased to 16 characters. These changes in the size of various input
items required changes to the input to FEQ. The Branch-Description Tables
Block, the Output-Files Block, the Free-Node Initial Conditions Block,
and the Pump option in the Operation of Control Structures Block have
been changed. The Special-Output Locations Block has one rarely used
option at the end of an input line that was extended in field width. The
Network-Matrix Control Input Block (NWMCI) must be invoked with the NEW
option so that the input is column independent. This then allows for the
extra space required by the longer table ids and the longer exterior-node
names. The new format for the NWMCI has been an option for years;
therefore, the format is documented and tested.

The changes made to the Branch-Description Tables Block are:

1. The header for each block has been changed to name-list-like input.
The header is the line that starts with BNUM and is followed by the
branch number and a series of optional input values. All these values
must still appear on a single input line, but they are now of the form
name=value, just as for the Run-Control Block as outlined above. The
variable names are the same as those used in the old form of input, with
the addition of BRANCH as an alias for BNUM and ADDNODE as an alias for
ADDNOD. The following are valid example inputs for a branch header:

BRANCH= 20 INERTIA=0.9
BNUM= 9000 ADDNODE=-1

2. The body of the branch table has been changed to use what we will call
a heading-dependent format. The name-list form of input is order and
column independent. Column-independent input is order dependent but
column independent (within the line length). Using column-independent
input for the body of a branch-description table is difficult because the
pattern and number of non-blank items in the input is quite varied.
Column-independent input works best if there are only a few variations in
the number and meaning of the items of input. A heading-dependent format
uses the headings to define the column range for each item in the
branch-description table. For example,

BRANCH= 4000
NODE ----------NODEID ------------XSID STATION ELEVATION  KA  KD HTAB AZM
400001 Deming USGS Gage  R4_35.9978.93     TAB       TAB
                         R4_35.8081.93     TAB       TAB 0.1 0.3
                         R4_35.6387.93     TAB       TAB 0.1 0.3
-1

The heading line is the line immediately following the line BRANCH= 4000.
The valid range of columns for each item of input is from the first
column after the preceding heading, or column 1 if there is no preceding
heading. For example, the valid range of columns for the node number is
from column 1 through the column under the E in NODE. The valid range of
columns for the NODEID is from the column following the E in NODE to the
column below the final D in NODEID. As in the Run-Control Block, 120
columns are available for each line of input. The dashes prefixing NODEID
and XSID are used to show the maximum length of the item and do not
delimit the valid column range for those items. The input item may appear
anywhere in its valid range of columns. It is not necessary that the
number of columns allocated to an item be the full length of a valid
item. Thus, if the node ids are all 8 characters or less in length, the
valid column range need only be that wide. Old inputs that have used
headings in the proper manner for the Branch-Description tables will be
processed without change.

The Output Files Block also has been changed to a heading-dependent
format. This means that all items must fall below the heading for the
column, and the headings should be right justified. In most past inputs,
the file-name heading has been left justified. The heading must be right
justified, or else extended with other non-blank characters so that it
ends at or to the right of the right-most character of any file name
specified in the column.
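The column-range rules for a heading-dependent format can be sketched as
follows. Each heading is treated as right justified, so the field for a
heading runs from the column just past the end of the previous heading
(or column 1) through the heading's last column. This is an illustration
with hypothetical names, not FEQ's actual reader.

```python
import re

def field_ranges(header):
    """Return a list of (name, start, end) column ranges for a heading
    line; 0-based, end-exclusive, one entry per heading."""
    ranges = []
    prev_end = 0
    for m in re.finditer(r'\S+', header):
        # strip the dashes that only indicate maximum item length
        ranges.append((m.group().strip('-'), prev_end, m.end()))
        prev_end = m.end()
    return ranges

def parse_line(header, line):
    """Split a data line into fields according to the heading line."""
    ranges = field_ranges(header)
    record = {}
    for i, (name, start, end) in enumerate(ranges):
        if i == len(ranges) - 1:   # last field: take the rest of the line
            end = len(line)
        record[name] = line[start:end].strip()
    return record
```

For the heading line "NODE ------NODEID STATION", the NODE field spans
columns 1 through 4 and the NODEID field spans columns 5 through 17,
matching the rules described above.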
The Free-Node Initial Conditions Block also has been changed to a
heading-dependent format. All items must fall below the heading for the
column, and the heading is treated as being right justified. Information
extending beyond and to the right of a heading will be considered part of
the next column of data.

The Pump option and the Gate-table option in the Operation of Control
Structures Block have been changed to a heading-dependent format. The
same rules apply as in the other blocks changed to allow for the
increased lengths of table ids.

The Special-Output Locations Block has been changed so that the input of
the output location is heading dependent. This change was made to permit
names longer than four characters for pumps and gates. The names for
gates and pumps can now be up to 16 characters long.

The extra width of exterior-node names when the larger branch range is
used will fill some of the input formats to their maximum extent. This is
true, for example, in the Backwater-Analysis Block. The width assigned to
the CODE and EXN# columns is five. However, the input for the preceding
column is such that a blank space can always be provided to visually
separate the items. They need not be separated because of software
requirements. However, the input is clearer if there is an intervening
space or two between items on the same line of input.

--Error messages. In processing table ids, FEQ assigns an internal table
number to each table. This internal number is used everywhere like the id
for the table. When an error message is given that contains a table
number, FEQ is supposed to convert the internal table number to the
external table id in the message. It is probable that some error messages
have not been corrected and give the internal table number and not the
table id. FEQ reports the internal table number assigned to each table
that is read from an input file.
If an error message appears with a table number that does not exist or
does not make sense, then scan the output file for the section where the
function tables are input. The reported number may be an internal number
that was not converted. Also, make a note of the error message number and
report the problem so that the message can be corrected.

--Increased space for the node id in the summary output may confuse
programs that read this output; therefore, a variable has been added to
the Run-Control Block to request that the old form of summary output be
used. If any node ids are longer than 8 characters, they will be
truncated on the right to 8 characters.

--Signs for free nodes may be omitted. Previously, if the sign was
omitted, a warning message would be generated and the correct internal
sign would be used. This was changed so that no warning message is given,
and the sign of free nodes is defined by the order of their appearance in
the dummy-branch instruction in the Network-Matrix Control Input.

--A utility program, CONVERTFEQ, will convert input to FEQ to the new
format. This program assumes that the input is post 7.0 and that headings
have been used properly. CONVERTFEQ assumes that all input is in its
proper heading range. In testing, one case was found that will require
manual correction. The node-id field is preceded and followed by a single
column that is not read by FEQ. If a character of the node id falls into
one of these columns, FEQ versions before 9.50 will process the input
without reporting any errors. However, some node ids will be missing a
character. This will probably go unnoticed unless the output is
scrutinized with great care. However, when CONVERTFEQ processes such an
input, the characters extending beyond the valid node-id columns will
appear in the adjacent columns and WILL be treated as part of the table
id. The table for this id will not be found, and the run will fail.
The input then must be manually scanned for node ids that fall outside
the proper column range.

--FEQ can now compute rainfall and evaporation on water surfaces. You
must supply a time-series table or a time-series file that contains the
rainfall or evaporation intensity. If the time series are given in
function tables, these tables appear in the Function Tables Block as do
all other function tables. If files are used, they are given in the Input
Files Block. Note that if a HECDSS file is used, the values for both
rainfall and evaporation MUST be INST-VAL. This is not the usual
data-recording method for these data series. However, the initial support
for evaporation and rainfall assumes that the time series are point
valued. Currently, the only non-point-valued time series supported by FEQ
are those used in computing the diffuse inflows. FEQ point time-series
files are by definition point valued. Thus, FEQ point time-series files
can be created from a WDM using WDMUTL, which has been updated to create
additional approximations from a mean-valued time-series dataset in the
WDM.

The time series are associated with either a branch or a level-pool
reservoir by giving the table id or the unit name for the input file
after the other integer items for the Branch instruction code, CODE = 1,
or the level-pool reservoir instruction code, CODE = 7. Because this
version ONLY supports the NEW form of the Network-Matrix Control Input,
the values in an instruction need only be separated by one or more
blanks. If the time series are in time-series function tables (TYPE=7),
we would have, for example,

1 25 RF_Fort_Pierce EV_Fort_Pierce

if we wanted the effect of rainfall and evaporation on branch 25. Notice
that the rainfall time series is given first and the evaporation series
is given second. If only evaporation is to be computed, give a 0 (zero)
as the value for the rainfall time-series designation. A 0 will be taken
as indicating that no rainfall time series is given.
These fluxes on the surface of a level-pool reservoir are requested by:

7 F5100 Capacity_Table 1 F5000 RF_Fort_Pierce EV_Fort_Pierce

If the FEQ point time-series files are used, the currently recommended
usage, then we give the file names in the Input Files Block, each with a
unique unit name of 16 characters or less. Then, we give this unit name,
prefixed by a minus sign with no intervening spaces, in the same position
as the table ids. Using time-series files, the above examples become:

1 25 -RF_Fort_Pierce -EV_Fort_Pierce

7 F5100 Capacity_Table 1 F5000 -RF_Fort_Pierce -EV_Fort_Pierce

The Input Files Block would then look like

INPUT FILES BLOCK
-------UNIT_NAME FILE_NAME--------------------------------------- FACTOR-----
  RF_Fort_Pierce D:\projects\florida\C24\rainfall\fort_pierce     2.31481E-5
  EV_Fort_Pierce D:\projects\florida\C24\evaporation\fort_pierce  9.64506E-7
-1

The same time series can be reused as many times as needed for different
branches or level-pool reservoirs. Each branch or LPR can also have a
unique series associated with it.

FEQ computes the flow at the water surface as the product of the rainfall
or evaporation intensity (length unit/time unit) and the surface area of
the branch or LPR at the start of the time step. FEQ does not attempt to
include the effect of the change in surface area on these fluxes. The
effect of rainfall and evaporation, in the short term, is small,
especially during flood flows. During low flows, when the effect of
rainfall and evaporation is larger, at least in a relative sense, the
changes in surface area are small. Therefore, the error introduced by
using the area at the start of the time step is also small.

The units expected for the intensity are feet/second (meters/second). If
the values in the function table are not in this unit, and normally they
would not be, the FAC option in the function-table header should be used
to convert the values to the desired intensity.
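The conversion to feet per second and the intensity-times-area product
described above can be sketched directly; the variable names here are
illustrative only and are not part of FEQ.

```python
# Convert common rainfall/evaporation units to the feet-per-second
# intensity that FEQ expects after FAC is applied.
FAC_IN_PER_HR  = 1.0 / (12 * 3600)    # inches/hour -> feet/second
FAC_IN_PER_DAY = 1.0 / (12 * 86400)   # inches/day  -> feet/second

def surface_inflow(intensity_ft_per_s, area_sq_ft, dt_s):
    """Volume added to the water surface over one time step, using the
    surface area at the START of the step, as described above."""
    return intensity_ft_per_s * area_sq_ft * dt_s

# One hour of 0.5 in/hr rainfall on 100,000 sq ft of water surface:
vol = surface_inflow(0.5 * FAC_IN_PER_HR, 1.0e5, 3600.0)
```

The computed volume agrees with the direct check of half an inch of rain
over 100,000 square feet, that is, 100,000 x 0.5/12 cubic feet.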
For example, if the rainfall is in units of inches per hour, a typical
unit, then the value of FAC should be

1 inch    1 hour      1 foot        1    foot
------ x --------- x --------- = ------ ---- = (approx) 2.31481 x 10^-5
 hour    3,600 sec   12 inches   43,200  sec

Evaporation data are sometimes available on a daily basis, in inches per
day. The factor to use in this case is

1 inch     1 day        1 foot           1     foot
------ x ---------- x --------- = ---------    ---- = (approx) 9.64506 x 10^-7
  day    86,400 sec   12 inches   1,036,800    sec

For a file, you can add an optional factor following the file name, as
shown in the example above. The goal is that after the application of the
factors, the intensity is in feet/second (meters/second). However, the
values stored in the table or in the file can be in any other convenient
intensity units.

Two additional items have been added to the water-balance computations:
WSI and WSQ. WSI is the cumulative total volume of inflow to the water
surface from the rainfall source, and WSQ is the same for evaporation.
These values are printed at the end of each detailed output in the main
output file for FEQ. The units are cubic feet (cubic meters).

Version 9.51 August 20, 1999

--Updated various error messages. Converted additional messages to mixed
upper and lower case.

--Discovered bug in computation of detention parameters. Apparently, this
bug did not reveal itself with the Lahey 4.5 compiler. However, the
Lahey/Fujitsu compiler produced code that gave incorrect answers. The
program would not properly compute the split of tributary area between
non-detention and detention areas. The impervious fraction was always
shown as 1.0 when it should have been less than 1.0.

Version 9.52 November 9, 1999

--The factor on values in HECDSS files in the Input Files Block had a
default of zero when it should have had a default of 1.0.

Version 9.55 December 14, 1999

--Two new items have been added to the Run Control Block:
GISID_TO_NODEID and TABID_TO_NODEID.
If GISID_TO_NODEID=YES, then FEQ will place the GISID string from a
cross-section function table into the NODEID for any branch-node node id
that is blank. If TABID_TO_NODEID=YES, then FEQ will place the TABID
string from a cross-section function table into the NODEID for any
branch-node node id that is blank. If both items are YES, then
GISID_TO_NODEID dominates. In order for the GISID to be available, the
Function Tables Block must be given before the Branch Tables in the
input.

--Added bottom-slot depth to cross-section function tables to remember
the original invert elevation. The slot depth is used to convert the
depth value in the printed results so that the depth is computed relative
to the original invert. This can yield negative depths in the results. A
negative value indicates that the water surface is below the original
invert level.

--Interpolated sections now include an interpolated value of bottom-slot
depth, Easting, and Northing.

--Added additional error-detection code for the beginning node for the
formation of the Network Matrix. Invalid beginning nodes were not
detected and caused enigmatic errors later in the processing.

Version 9.56 February 1, 2000

--Fixed bug in output of free nodes in the summary of extremes at the end
of the run. The last digit of a free-node variable name was truncated if
OLD_SUMMARY=YES was used in the Run-Control Block.

Version 9.58 February 16, 2000

--The GATETABL control option in the Operation-Control Block has been
extended to permit two tables per control point instead of one. The first
table (in left-to-right sequence) is for rising elevation at the control
point, and the second is for falling elevation at the control point. An
elevation increment is also provided to prevent overly rapid switching
between the two tables. If one control point in a control block has two
tables, then all control points for that block must have two tables.
If only one table is desired, give the same table in both locations with
the same selection rules. An example of the new input follows. This is
adapted from above, where the original GATETABL option was introduced.

01 BLK=00002
02 BLKTYPE=GATETABL
03 MINDT=3600.
04 PINIT=0.0
05 CTLN UPSN DNSN TABR DOM AUX DIREPS TABF DOM AUX
06 U20  U20  F29  5300 MAX MAX 0.03   5310 MAX MAX
07 -1

The input in lines 5 and 6 is heading dependent. That is, each heading
defines the columns available for the items below it. The headings are
treated as being right justified. If only one control table is given, be
sure that no extra headings are given. The number of headings found by
FEQ determines the input expected.

Version 9.60 February 18, 2000

--Changes were made to the interpolation routines to correct a bug in
FEQUTL. FEQ and FEQUTL use common code for interpolation of cross-section
function tables. In order to correct the bug in FEQUTL, I had to generate
internal table ids for each request for an interpolated table that did
not contain a table id from the user. The internal table ids have been
generated in a form that is invalid as an external table id. Thus, they
will never conflict with a user table id. The internal table ids can
appear in error messages about an interpolated table. The internal table
ids contain a sequence number that is unique for each id. The
interpolation routine adds the starting and ending internal table-id
numbers to its output. Thus, it is possible to locate which table is the
source of the error even though the table was generated from a request
for interpolation with the minus sign in the table-id field.

Version 9.65 March 7, 2000

--Fixed significant errors found in the STDCX governing-equation option.
There was an error in the computation of some of the derivatives of the
residual function, as well as an error in one part of one function-table
lookup routine.
The governing-equation options other than STDX have had little use and,
therefore, some errors have been present for some time. Currently, the
STDCX option appears to be working consistently with STDX.

Version 9.66 April 14, 2000

--Increased the length of a fully qualified file name in the
Function-Tables Block to 128 characters. The previous limit of 64 was too
short for large projects with many levels in the directory tree.

--The standard input and output file names given on the command line are
now written to the standard output file. This will help in keeping track
of the files used to create the standard output file.

Version 9.67 April 20, 2000

--Added the option of another column of input values in a branch. These
values follow the standard flood-elevation field and give an adjustment
factor on conveyance. For example, a value of 1.1 will multiply the
conveyance by 1.1. This implies a REDUCTION in Manning's n of close to 10
percent across the section. That is, all values of Manning's n are
reduced by this amount; the Manning's n values are in effect divided by
the factor. If Manning's n within a subarea (subsection) of the cross
section varies with depth, then the adjustment becomes more complex. Be
careful when repeating table ids and using this adjustment factor. FEQ
remembers which tables have been adjusted and will adjust each table once
only. FEQ outputs a record of each adjustment made and of adjustments
requested and not made. If you are adjusting roughness this way, be sure
that all tables being adjusted are unique.

Version 9.68 April 26, 2000

--Found a case where only a single node was given on a branch; FEQ did
not issue any warnings, and the model appeared to compute without error.
Added detection for this case to prevent it from occurring in the future.

Version 9.69 May 18, 2000

--Changed handling of error reporting for negative depths in the
Free-Node Initial Conditions Block. Now a warning is given if the depth
is negative and the depth datum is zero.
This was done to permit giving an initial elevation for a level-pool
reservoir that is negative and still have the depth datum as zero. Having
a depth datum of zero for both nodes of a level-pool reservoir has been
the signal that the depths are really elevations. However, a negative
depth is invalid. At the point of reading the inflow node for an LPR, we
do not know that it is an inflow node to an LPR. Thus, we issue a warning
that a negative depth is invalid if the node is not an inflow node to an
LPR.

--Added two new options to Code 14, Side-Weir Flow, in the Network-Matrix
Control Input Block. Two optional time-series table ids can be given
following the table ids for the flow tables. The first of these
time-series tables gives an adjustment factor on the flow taken from the
flow tables as a function of time. This permits adjusting the computed
flow, either as a calibration tool or to simulate structures with changes
taking place during a flow event. If the table is omitted, the
multiplying factor is taken as 1.0. The contents of the time-series table
are not checked, so be sure that the multiplying factor is in a
reasonable range. Errors in the factor in the table can cause
computational failure.

The second of these optional time-series table ids references a
crest-adjustment factor as a function of time. When this table is
present, the third floating-point value should also be given. This value
gives the elevation of the toe of a levee that is subject to failure. FEQ
computes the current levee-crest elevation using:

HC = HCREST + P*(HCREST - HTOE)

where

HCREST = the original elevation of the crest that defines the reference
         point for heads used in the flow tables. This would be the
         minimum elevation for the length of levee crest used in defining
         the flow tables using EMBANKQ or, in some cases, CHANRAT.
HTOE   = the elevation of the toe of the levee on the protected side of
         the levee.
P      = the factor stored in the time-series function table.
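The crest-adjustment relation just defined can be checked with a short
sketch. The function name is hypothetical, and the time-series lookup of
P is not shown.

```python
def adjusted_crest(hcrest, htoe, p):
    """Current levee-crest elevation HC = HCREST + P*(HCREST - HTOE)."""
    if htoe >= hcrest:
        raise ValueError("toe elevation must be below the crest")
    return hcrest + p * (hcrest - htoe)

# For a 4.0-ft levee with HCREST = 14.0 and HTOE = 10.0:
#   P =  0.0  leaves the crest at 14.0
#   P = -0.25 lowers the crest by a quarter of the levee height, to 13.0
#   P = -1.0  erodes the levee completely, to the toe elevation 10.0
```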
The meaning of P is:

P = 0 -> the levee crest is unchanged.
P < 0 -> the levee crest is shifted downward by the given decimal
         fraction of the levee height, where the levee height is defined
         as HCREST - HTOE. For example, if HCREST = 14.0 and HTOE = 10.0,
         the levee height is 4.0. If at a given time point P = -0.25,
         then the new levee crest is 13.0. P = -1.0 indicates that the
         levee is completely eroded to the toe elevation.
P > 0 -> the levee crest is shifted upward in the same manner as P < 0
         shifts downward. P > 0 can be used to represent the raising of a
         levee crest by sandbagging or other means during a flood fight.

Please note that this means of simulating changes to a levee during a
flow event is simple and limited in scope. The shift in levee height
occurs uniformly over the complete length of levee represented by the
flow table. In typical applications, the flow table could represent
several hundred feet of levee. If only a 50-foot section is to fail, then
three courses of action are available:

1. Use a flow table for only a 50-foot length of levee. This leads to
many short branches.

2. Modify the longer levee crest for the flow table so that only a
50-foot portion is available for overflow. That is, we assume that the
remaining portion of the levee represented by the flow table is not
subject to overflow under any circumstances.

3. Use a time-dependent multiplying factor, from a time-series table, to
reduce the flow to approximate the 50-foot failure length.

Version 9.7 August 2, 2000

--Added an option to create an IntelliCAD script file to draw a
geographic schematic. A geographic schematic mimics the true length and
location of branches. It is distinguished from a topological schematic,
which only shows the connectivity among branches and nodes without regard
to their true length or location. In order for this schematic to be
drawn, several additional items of information must exist in the input to
FEQ:

1.
Every cross section must have an EASTING and NORTHING value given that
agrees with the actual location of the cross-section invert. These values
are used to draw the branch.

2. Level-pool reservoirs must have their location given in the Free-Node
Initial Conditions Block. This block has been modified to accept this
information.

3. Dummy branches that connect between junctions that each have at least
one branch-exterior node in them do not need to have their (x,y) location
given. FEQ uses the branch-exterior node locations to define the location
of the dummy-branch ends.

4. Dummy branches that have only one end connected to a branch are
reported to the output file in a format that is recognized by a utility
program, MODFEQIN. This program will place these values in the proper
locations in the Free-Node Initial Conditions Block, with the location of
the other end of the dummy branch filled with a standard character string
that the user must replace with a valid location description.

5. Dummy branches that connect between junctions with at least one branch
or dummy-branch node in the junction with a known location do not need
location information in the Free-Node Initial Conditions Block.

Here is a fragment from the Free-Node Initial Conditions Block showing
how the locations of free nodes may be given. The first two lines of the
input show blanks in the three right-most columns of information. Blanks
in these columns denote that no location is given. Blanks in the last two
columns do not mean zero! The last two lines in the fragment show the two
ways of defining a location for an exterior node. The first, for node
F1009, has blanks in the column under Base_node; therefore, the values in
the following two columns are the coordinate location of the node. In
most applications done so far, the first value given is the EASTING value
and the second is the NORTHING value.
If the columns under Base_node contain a NODE designator, then the values
in the following two columns are offsets from the location of that node.
In the example below, the offsets are from F1009, the other end of the
dummy branch.

NODE NodeId---------- Dpth/Elev DISCHARGE DpthDatum Base_node X_or_Xoffset Y_or_Yoffset
F1008 RAAHAG_M.3LU          2.0       0.0       0.0
F1508 RAAHAG_M.3LD          2.0       0.0       0.0
F1009 RAAHAG_M.3RU          2.0       0.0       0.0             1216954.36    675618.77
F1509 RAAHAG_M.3RD          2.0       0.0       0.0     F1009       -400.0         0.00

--Two options have been added to the Run-Control Block that affect the
datum for head used in the two-node control-structure instruction (Code 5
Type 6) and in the side-weir instruction (Code 14). All previous versions
have always used the datum given in the instruction. It has proved useful
to provide the option of always using the datum in the table or tables
given in the instruction. If CD14_TAB_DATUM is given the value YES in the
Run-Control Block, then the head datum given in the outflow table in the
instruction will be used as the head datum in the instruction. The
default value is CD14_TAB_DATUM=NO to reflect the behavior of past
versions. The other option, CD5T6_TAB_DATUM, is the same for the Code 5
Type 6 instruction.

These options were added to make it easier to represent changes in
levee-crest elevations for major floods occurring in different years.
Repair and enhancement operations could change the levee-crest elevation
at many locations. Facilities exist for the semi-automatic recomputation
of the flow tables, but any changes in crest level would have to be made
manually in previous versions. With CD14_TAB_DATUM=YES, the changed crest
levels, if any, in the flow tables for the levee will be used in the
instruction. Thus, a manual change in the input is avoided.

Version 9.71 August 15, 2000

--Added an optional input field for a free-node station in the Free-Node
Initial Conditions Block. The station given to a free node is user
defined.
The station is printed with the summary output of extremes at the end of
the run.

--Used a new feature of the latest Fortran compiler from Lahey to check
for undefined variables and subroutine interfaces. Found various
variables that had not been initialized. Most of them were values that
were not used in the computations but were transferred from one variable
set to another at the end of a time step. The internal details of the
pattern of storage are often ignored at these points to make the transfer
faster and simpler.

--Found a subtle problem in the two-D table checking that sometimes
omitted a table from the list. Also fixed a bug in that code for multiple
tables in one instruction.

Version 9.73 September 27, 2000

--FEQ now forces the invert elevation for both nodes on a dummy branch to
match the invert elevation of the branch node attached to one end of the
dummy branch using the Equal-Elevation instruction in the Network-Matrix
Control Input. This will make subsequent checking easier and will
suppress many warning messages in models that make extensive use of dummy
branches to represent connections between parallel flow paths in a stream
system.

--A new optional value in the Run Control Block selects the format for
the *.FEO file in the NEW GENSCN block. The default value is set to the
old format. The value need not be changed unless and until the new *.FEO
format is created.

Version 9.74 October 12, 2000

--Changed the Network-Matrix Control Input instructions that refer to
two-D tables for flow ratings to use TAB or tab for the datum elevation
of the table. Thus, there is no need to hard code the datum values.
However, for this to work properly, the Function Tables Block must be
given before the Branch Description Block so that the function tables
will be known. This is a good standard practice for this as well as
earlier versions. In order for FEQ to sense the TAB or tab value, it must
be preceded with a semicolon.
The semicolon signals the end of the previous group of input, which is
all integers or character strings. TAB or tab is detected as a character
string, and FEQ becomes confused when the semicolon is not added. This
does not currently work for 1-D tables because there is no standard
method for giving the datum for them. If a datum is not given in the
instruction, an error is reported.

--Adjusted various aspects of the newest GENSCN output to agree with the
latest revision of the output. Pump and gate status reporting is
supported in the files for GENSCN. GENSCN modifications to be released
near the end of the year may support these as well.

--Added checking of side-weir instructions. Most often, side-weir flows
should be zero at the initial condition in the model. If they are not,
the initial elevation difference across the side weir should be small, or
the flow and the difference should be in reasonable agreement. Currently,
the check will disable any side weir for which the water-surface
elevations across it differ by more than 0.1 foot. All "problem" weirs
are reported in a table shortly before the start of unsteady-flow
computations. Problem weirs are those with non-zero flow and those with
the invert of the source or destination node being above the datum for
the flow over the weir. The latter indicates some sort of error in the
data--one that is often difficult to see when many side-weir instructions
are involved, such as when modeling flow over levees or high banks along
a river. Problem weirs are disabled in the computations and must be
corrected before they will be enabled in the computations.

Version 9.75 October 25, 2000

--Changed checking of code 4 type 3 to take place in EXIN instead of in
CONTRL. This was needed to support changes in handling nodes on dummy
branches and level-pool reservoirs.

--Initial values for all free nodes on dummy branches are now set by FEQ.
Both ends of the dummy branch are set based on connection by any EqZ
instruction.
In some cases, models may exist in which a dummy branch is not connected
to any other node using EqZ. In this case, the dummy branch must be
explicitly set in the BACKWATER block because the elevation/depth value
given to the nodes in the Free-Node Initial Conditions Block is ignored.
The elevations for level-pool reservoirs must either be given in the
Free-Node Initial Conditions Block or defined in the BACKWATER block. All
branches must be defined in the BACKWATER block as well. FEQ now checks
for missed branches.

Version 9.76 November 2, 2000

--Fixed bug in the TAB option for code 5 type 6 when multiple flow paths
are involved. Only the TAB option for the first path was processed.

--Changed FEQ's response when the TAB option is used in the NMCI and the
function tables are unknown. This is now treated as an error, and a
request is made to move the Function Tables Block ahead of the Branch
Description Block.

--Added additional messages to the output to describe what is being
checked.

Version 9.78 November 20, 2000

--Fixed bug in handling the conversion between table ids and internal
table numbers for the two default tables used in modeling detention
storage.

--Added the ability to include a HOME name value in the Function Tables
Block. The string in the HOME name is prepended to any file name given
after the definition of the HOME-name value. This makes it possible to
change to a different drive letter by changing one input value. It also
makes it possible to shift to a Linux-style directory structure with a
minimum of change to the input file names. The home-name value is defined
by the keyword HOME followed by = and then the value. To define a drive
letter, the form

HOME= d:

would work if all the subsequent file names start with a \ or /,
depending on the operating system. Note that Lahey Fortran compilers are
able to process a slash (/) in Microsoft operating systems
(DOS/NT/W2K/XP/95/98/Me).
I recommend using / instead of \ if there is a chance that a major input will be transferred to Linux/Unix.

--Added Easting/Northing output to the new-format FEO file.

Version 9.79 January 2, 2001

--Found and fixed bug in the automatic adjustment of the invert elevation of nodes on dummy branches when the node on the dummy branch is in an EqZ instruction with a node on a branch. The depth value was not correctly adjusted at the same time.

--Corrected some formats in error messages so that larger numbers can be printed for the interior nodes on a branch.

--Added checking for a carriage-return character in each line of input. This makes it possible to copy files from Microsoft Operating Systems, which end each line of a character file with a carriage return-line feed, to Linux, which ends each line with a line feed, without having to convert the files. Without this change, unconverted files would fail when running in Linux/Unix because the carriage return would appear in the data line given to the Fortran program, and FEQ and FEQUTL would not process that character properly. Now, the character is converted to a space before any input processing takes place.

Version 9.80 15 March 2001

--Found problem in interpolation of cross sections when a bottom slot was present. Corrected the problem and introduced the restriction that interpolation can take place only when:

1. The sections bounding the interpolation interval both have a bottom slot, or
2. The sections bounding the interpolation interval both have no bottom slot.

Interpolation intervals in which only one of the bounding sections has a bottom slot are invalid and will cause an error.

--Found an error in the computation of the Hager side-weir correction value. In limited testing, this error has the greatest effect on flow estimates when the head on the side weir is small, on the order of tenths of a foot.
In this range, it appears that the error caused the flows for the weir to be too large by about 25 percent. As the head on the weir increases to about 1 foot, the overestimate is about 5 percent, and at heads of 3 or more feet on a side weir (not normally expected on most side weirs), the overestimate is about 2 percent. The effect of this error on the results for a model depends on the head on the weir, the time spent at various heads, and the effect of the side weir on the stream system.

--Added additional options for debugging models. Three options in the Run-Control Block add new features:

1. DTMIN_OUT gives a minimum time step for detailed output. Whenever the time step is <= DTMIN_OUT, detailed printout for that time step is given no matter what the print interval. Currently, DTMIN_OUT ignores DPTIME; however, that may change.

2. START_EQ and END_EQ give the starting and ending equation numbers in the Network Matrix for detailed printout of the equations, but only if the current time is at or greater than the time given in DPTIME and the current time step is <= DTMIN_OUT. Be aware that printing the equations in part or in whole can make the output file very large. Adding the equation printout also added substantially to the memory requirements for FEQ. If these options do not prove useful, they may be partially or completely disabled.

--Disabled the shallow-depth correction in the STDCX governing-equation option because it appeared to be counterproductive in applications to cross sections with a bottom slot. More work is needed on how to improve computations with bottom-slotted cross sections.

--Problems in code 13 were revealed in a model with highly detailed cross sections. Situations arose in which there was no inflow or outflow, but the solution converged with a noticeable difference in water-surface elevation.
This was caused by the existence of a solution with such a difference due to variation of alpha in the energy equation. Further, it was found that the rates of change with depth of alpha and beta were often greatly in error, so that the derivatives in the Newton linear system were in error. This caused convergence failure or slow convergence. The following changes were made:

1. Critical flow is now required in the cross-section function table used at the code 13 junction. Heretofore, only the first moment of area about the water surface was required. It turns out that the required derivatives with respect to depth are simple functions of the Froude number. The critical flow is computed with some effort for consistency within FEQUTL. Therefore, using the critical flow in the code 13 computations to compute the Froude number avoids the problems of the invalid rate of change of alpha or beta with respect to depth.

2. Heretofore, the case with no inflow or outflow was computed using the same code as for inflow and outflow. This has been changed so that the inflow/outflow must be greater, in absolute value, than (abs(QL) + abs(QR) + QSMALL)*1.E-4, where QL is the flow at the upstream node and QR is the flow at the downstream node, before either the momentum or energy conservation principle is used. If the inflow/outflow is too small, FEQ forces equal water-surface elevation at the junction. So far, this has avoided the problems outlined above. A side benefit is that the computations are slightly faster if there are many cases of potential inflow/outflow in the system. Most of the time there will be no inflow or outflow, and the equal-elevation option avoids any need for table lookup and involves minimal computation.

--Problems in running a large model (more than 1,000 branches) with major flows over levees and high ground adjacent to the stream revealed two errors in the computations of some partial derivatives in FEQ.
These derivatives are used in Newton's method for solving a system of non-linear equations. Convergence was slow or failed at locations that did not have any of the typical signatures of computational problems.

The partial derivatives for eddy losses (given by KA and KD in the Branch-Description Block) should have been multiplied by the parameter WT. Because WT is always <= 1, the omission made these derivatives, on average, too large. However, the eddy losses are generally small, so this effect was small. The error had an effect when there was a larger change in velocity, say from 8 feet per second to 4 feet per second in a short computational element. The computations would then fail with too small a time step. Setting KA and KD to zero at such a location would permit successful computation.

The second error in partial derivatives was found in the velocity-head derivative in Code 13 for the conservation-of-energy option. If the outflow was large, the velocity head became large, and this error also caused computational failure with too small a time step. Correction of this error allowed the large model to compute at reasonable time steps (1,800 seconds), whereas before the correction any time step much larger than 240 seconds would cause failure.

--A significant problem was found when running the model with more than 1,000 branches. There were thousands of lines of warnings about table overflows of various kinds, with overflow amounts so large that the numbers could not be printed within the format provided for them. The model could not provide a reliable computation of streamflow. The sparse-matrix methods used in FEQ do not provide correct solutions in single precision for very large models (greater than 250-450 branches, or with more than 28,000-80,000 matrix elements). Consequently, a double-precision version of FEQ was compiled and is available to users. The use of double precision has two drawbacks:

1. It increases the runtime storage for the software.
Various larger vectors used in the program have doubled in size. However, the relative increase for the whole program is not large, and typical PCs today should have enough RAM. However, if you try to run a 1,000-branch model on a PC with only 64 MB of RAM, you may find the computation to be too slow. The active memory footprint for the 1,000-branch model appears to be about 26 MB.

2. Runtimes are increased for models that run properly using single precision.

The table below gives some statistics on the various models that have been compared. The models above the break in the table run properly using a single-precision solution for the linear system. Those below the break have increasing problems. The last two models in the table could only be run in single precision after the models were refined and tuned using the double-precision version. However, even then the behavior of the computations was erratic, with many spurious variations in the time step (recognized as such only after running with the double-precision version).

Number of  Number of   Number of  Number     Number of   Element    Single     Double     Approx
branches   Ext. Nodes  Nodes on   of         Elements    Count per  Precision  Precision  Percent
                       Branches   Equations  In Matrix   Equation   Time       Time       Increase
---------  ----------  ---------  ---------  ----------  ---------  ---------  ---------  --------
       25          82       1741       3546       20104        5.7       97.3      105.7         9
      192         814        711       2282       34403       15.1       20.7       23.8        15
      255         812        781       2166       27469       12.7       17.6       20.1        14
---------------------------------------------------------------------------------------------------
      506        1960       1622       5140       89203       17.4       58.9       70.4        20
      554        2006       1450       4696      131960       28.1        369        227      ----
     1066        3966       3123       9934      223598       22.5        772        397      ----

A comparison of the extreme results for the last model showed that they were essentially identical. This comes about from the nature of the iterative solution of non-linear equations. The error caused by the ill-conditioned matrix when single precision was used affected only the corrections to the unknowns. If these corrections become small enough, then convergence is declared.
Apparently, when the corrections became small, the errors in them became smaller as well, so that the final results of the two runs were essentially the same. However, it probably would have been impossible to run the final model with the single-precision corrections. The first model in the second group did run without apparent problems, even though a detailed analysis of the solution showed that it had some significant errors caused by ill-conditioning in the single-precision solution.

It is not clear what characteristic measure of model size should be used to select between the two solutions. Part of the problem is that the behavior of a model with many computational problems in it appears similar to the behavior of a model with an ill-conditioned matrix. If the user finds a large model simulation to have computational difficulties (many warnings, large table overflows, spurious time-step changes, or failure to complete), a definitive test would be to run the model using the double-precision solution and see if the problems disappear. If they do, then it is probable that ill-conditioning of the matrix at single precision is the source of the problem.

Version 9.81 3 April 2001

--Changed the method of applying corrections to depth in Newton's method. In previous versions, only large negative corrections were adjusted, in order to prevent or minimize the occurrence of negative depths in the final value of depth; there was no limit on the increase in depth. Now the change in a depth value caused by the application of the correction from Newton's method is limited to 50 percent of the value of the depth. Currently, this limit is hard coded and not accessible from input to FEQ. That will probably change in the near future.

--Corrected an error in a format in the end-of-run report on 2-D tables.
If the final extreme flow used as an argument to a 2-D table of type 14 was greater than the maximum flow in the table, then a system message would be issued and the run terminated because an improper format was encountered in the process of printing the end-of-run report.

Version 9.83 9 July 2001

--Added check for hour of day in time-series tables. The hour must be between 0.0 and 24.0.

--Found differences between two Fortran compilers in the treatment of reading an integer value using list format, that is, * for a format. Programs compiled using the Lahey compilers generated an error if the item being read was not an integer; this behavior was the same under both Microsoft Windows and Linux. Programs compiled using the Portland Group, Inc. (PGI) compiler under Linux did not generate an error if the first character encountered in the item was a D. It is assumed this would probably also happen if the first character was an E, or if either letter was lower case, but this was not tested. The value returned to the integer was zero. This resulted in FEQ failing to run a model that had a time-series table id at the upstream-most boundary that started with D. This table id was ignored when using PGI, and as a result the run failed when zero flow at the upstream boundary caused shallow depths to occur in the main channel. I outline this in detail to remind users that there are always potential problems when shifting from one compiler to another; the standards cannot specify every detail. I have changed the read statement in question to use an I16 format code. Using this code gives the same behavior for both compilers.

Version 9.84 1 October 2001

--Changes have been made to the input for tributary area. Named-item and heading-dependent input has been used to replace the remaining fixed formats.

--Compiling with LF90 revealed some unexpected stack requirements in the double-precision version of the linear-system solution in FEQ.
Thus, the handling of the temporary WORK vector in INFO2 was changed. This vector is now a local variable declared in INFO2 only and not part of an overlay on PDAVEC, as it was before the double-precision linear-system solution was implemented. This still requires a large stack in LF90, but now the messages make sense, whereas before the messages from the compilation of the double-precision version were confusing. LF95 handles local variables in a different manner, and the stack-size issue does not arise.

--The Point Flows Block is no longer supported. Point flows should be connected to the model at a junction created for each point flow, with a dummy branch added to provide the boundary node to reference the point flow.

Version 9.85 8 October 2001

--Fixed problem in processing delay and detention with the updated input for the Tributary Area Input Block.

--Non-printing characters in table ids are a recurrent problem that can be difficult to find. A new error message has been added to flag that possibility. It does not appear in the documentation at this time:

*ERR* 408} FEQ has found nn function tables but nn unique table ids. These numbers differ; thus some table ids with non-printing characters may appear somewhere in the input. One or more tables may also be missing.

Note: The number of function tables and the number of table ids found by FEQ should match. When they do not match, a function table may be missing from the input even though you have referred to it at some point in the input. On the other hand, there may be some non-printing characters in a table id, so that FEQ sees it as being different even though it prints as identical to the human eye. If the internal table number assigned to a function table when it is read by FEQ differs from the internal table number given for the table id when it is reported as missing, then one of the two occurrences of the table id must contain one or more non-printing characters.
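As an illustration of what this check is guarding against, the short script below (not part of FEQ; the helper name is mine) flags every character in a chunk of input text that is not printable ASCII, which is one way to locate a corrupted table id outside of FEQ:

```python
def find_nonprinting(text):
    """Report (line, column, character code) for every character that is
    not printable ASCII (codes 32-126).  A tab or other control character
    hiding inside a table id will show up here."""
    hits = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if not 32 <= ord(ch) <= 126:
                hits.append((line_no, col, ord(ch)))
    return hits

# A tab glued to the end of the first table id is reported at
# line 1, column 7, with character code 9.
print(find_nonprinting("TAB100\t\nTAB200"))
```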
Your editor might have the ability to show certain non-printing characters, such as tab characters, and, if so, can be used to find them. Otherwise, delete what look like spaces after the suspect table ids and try the run again.

Version 9.86 21 November 2001

--Changed code in RDFIN to perhaps give better error messages on user errors in the Input Files Block. Certain user errors would cause a message that did not seem to be correct when, in fact, it was correct. Be sure that there are no spaces, minus or plus signs, or any other special characters in a file name other than the underscore character. Although the operating system may allow these, FEQ does not!

Version 9.87 14 December 2001

--Corrected an error in an error message that caused an abnormal termination of the program when using the LF90 compiler. This termination gave no details on the location.

Version 9.88 27 December 2001

--Relaxed requirement on Code 13 so that old models would run. However, they may not give exactly the same answers as before. You should change the cross-section table type to 22 or 25 as soon as possible. In a future version, these cross-section table types will be required. An error/warning message is printed when tables of the undesired type appear. The model may run, but it may fail if the Froude number gets too close to 1.0.

Version 9.89 11 January 2002

--Added a new feature to the side-weir instruction. If the table defining the failure/fight option is present, then if F(4), the fourth floating-point entry, is non-zero, FEQ will compute the elevation of the levee toe by subtracting F(4) from F(2), the datum for heads. Any value given for the toe elevation of the levee will be ignored. This makes it possible to vary the effective crest level for side-weir flows in absolute terms. For example, if F(4) is given as 1.0, then FEQ will treat the levee height as being 1.0, because the toe of the levee is then 1.0 below the levee crest.
The value of p in the failure/fight table will then give the change of crest in the same units as the levee height. If the levee height is 1 foot, then p= 0.25 will raise the levee by 0.25 feet. Note that the levee height computed this way is false and is only a convenient way to control the elevation of the surface that was used to compute the values in the flow table. The values in the flow table can be from any source; they need not represent flow over a levee at all. The reason for doing this is that the flows over side weirs are sensitive to changes in the elevation of the water surface or the control surface. Thus, a change of only 0.25 feet in the control surface could change the computed overflows by 30 percent or more. When attempting to sort out the uncertainties of a stream system, the ability to vary key items controlling the model results is vital, and the elevation of real control surfaces is often uncertain by 0.25 feet or more.

--Added QCHOP operations to the output of flow files to avoid problems with flows that result from round-off and truncation errors in the computations. QCHOP has been applied to the printed output since version 7.0!

--Added optional time-series table to Code 6 to allow an adjustment factor as a function of time. Use with care! If the table is not given, the adjustment factor from this source is taken as 1.0.

Version 9.90 4 February 2002

--Changes to the processing of Tributary-Area input in October 2001 introduced problems not found in initial testing. The tributary areas for level-pool reservoirs were processed correctly. However, those for branches were not, and their tributary areas were returned as zero. This results in a dramatic reduction in the computed flows in most cases!

--Also, make sure that any labels you have added following the required headings in the Tributary-Area Input Block are prefixed with a single quote. This signals that the remainder of the line is a comment.
The tributary-area values are now in heading-dependent format, so that the content of each heading is significant in defining the range of columns below it.

Version 9.91 17 May 2002

--Modified the side-weir checking to include the effect of any bottom slot computed in FEQUTL. This was added to prevent side-weir flow while the water is still in the slot, which occurred because the crest for the side weir was in error. In some models with hundreds of side weirs, it is not possible to check all crests by hand. Thus, FEQ checks and disables any that do not make sense. The user can then review the list and decide what should be done about each problem.

--Found a subtle bug in the computation of one derivative in the side-weir code. It appeared only if the weight factor on head in the source channel differed from 0.5. Most models have used 0.5, so the error never manifested itself.

--Added option to check high-water marks against maximum results at the end of a run. FEQ will seek a file with the exact name hwmark.loc in the current directory (the one from which you invoked FEQ) and, if found, will process the file. An example of such a file is:

; Listing of high-water mark locations for Reach 5
; Offset is in the same units as stationing in the model. If
; offset > 0 then it is a distance dns of the given node on a branch.
; If offset < 0, it is a distance ups of the given node. Note that
; offset is ignored if Bran is 0. In this case, the Node must be an
; exterior node either on a branch or a free node.
Location/description--------------------  Bran    Node   Offset  Elevation
S-1 in southwest Sumas                       0    D650    0.0         39.6
S-2 west of BNRR Sumas                       0   D5554    0.0         43.5
S-3 west of BNRR Sumas                    5562  556201    0.014       43.5
S-4 in south-west Sumas                    651   65106    0.0         40.2
S-5 west of BNRR Sumas                    5564  556401    0.019       43.6
S-6 central Sumas                            0    D641    0.0         40.8
S-7 northwest Sumas                          0    U641    0.0         40.4
S-8 north Sumas                            626   62603    0.0         37.4
S-9 northeast Sumas                        628   62802    0.0         38.5
S-10 east Sumas                              0    D626    0.0         37.8
S-11 southeast Sumas                       643   64306    0.0         38.9
S-12 far southeast Sumas                   653   65302    0.0         38.7
S-13 far southeast Sumas                     0    D652    0.0         39.0
S-14 south Sumas                           652   65206    0.0         40.6
S-15 far northeast Sumas                   630   63005    0.0         36.7
10.84 m B. C.                                0    D636    0.0         35.56
9.37 m B. C.                                 0    F202    0.0         30.74
Max stage Huntingdon Gage                    0    D362    0.0         33.1
; Following points are in overflow corridor
Top utility box at Shuksan and EvrGrn     6210  621002   -0.021       80.74
;The following mark appears to be so high relative to others that it is in error
;East edge corridor south of Tom Rd       6230  623002    0.025       80.37
Near barn dns of looong culvert              0   D5074    0.0         70.25
Top step at Jim Glass's house             5118  511802    0.011       68.40
Mud stain in Jim's barn                   5118  511802    0.011       68.54
Near Johnson Creek and Clearbrook Rd      5190  519002    0.0         58.43
North and west of Clrbrk and Nksk Rd         0   F5804    0.0         56.40
Sth Badger Rd in Trib 1 on barn?????      6438  643802    0.0         65.71
END

This format must be followed exactly; columns are important. Use the heading line as the template.
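To give a concrete sense of the fixed-column layout, here is a sketch of how one data line could be split into its fields. The 40-character width of the description field is inferred from the example heading line and is an assumption, not FEQ's documented format, and the helper name is mine:

```python
def parse_hwmark_line(line):
    """Split one data line of a hwmark.loc file into its fields.  The
    description is assumed to occupy the first 40 columns (inferred from
    the heading line); the remaining fields are blank-separated."""
    desc = line[:40].rstrip()
    bran, node, offset, elev = line[40:].split()
    return desc, int(bran), node, float(offset), float(elev)

# Example built from the S-3 entry above:
line = "S-3 west of BNRR Sumas".ljust(40) + "5562  556201  0.014  43.5"
print(parse_hwmark_line(line))
```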
A summary is given as follows:

High-Water Mark Summary

High-Water Mark Location                 Bran    Node  Elevation  Sim Elev    Diff
S-1 in southwest Sumas                      0    D650     39.600    39.961   0.361
S-2 west of BNRR Sumas                      0   D5554     43.500    43.330  -0.170
S-3 west of BNRR Sumas                   5562  556201     43.500    43.292  -0.208
S-4 in south-west Sumas                   651   65106     40.200    39.719  -0.481
S-5 west of BNRR Sumas                   5564  556401     43.600    43.272  -0.328
S-6 central Sumas                           0    D641     40.800    39.451  -1.349
S-7 northwest Sumas                         0    U641     40.400    40.048  -0.352
S-8 north Sumas                           626   62603     37.400    37.768   0.368
S-9 northeast Sumas                       628   62802     38.500    37.158  -1.342
S-10 east Sumas                             0    D626     37.800    37.367  -0.433
S-11 southeast Sumas                      643   64306     38.900    39.130   0.230
S-12 far southeast Sumas                  653   65302     38.700    39.273   0.573
S-13 far southeast Sumas                    0    D652     39.000    39.280   0.280
S-14 south Sumas                          652   65206     40.600    39.445  -1.155
S-15 far northeast Sumas                  630   63005     36.700    36.127  -0.573
10.84 m B. C.                               0    D636     35.560    35.299  -0.261
9.37 m B. C.                                0    F202     30.740    30.874   0.134
Max stage Huntingdon Gage                   0    D362     33.100    32.972  -0.128
Top utility box at Shuksan and EvrGrn    6210  621002     80.740    80.251  -0.489
Near barn dns of looong culvert             0   D5074     70.250    71.075   0.825
Top step at Jim Glass's house            5118  511802     68.400    68.996   0.596
Mud stain in Jim's barn                  5118  511802     68.540    68.996   0.456
Near Johnson Creek and Clearbrook Rd     5190  519002     58.430    57.899  -0.531
North and west of Clrbrk and Nksk Rd        0   F5804     56.400    56.833   0.433
Sth Badger Rd in Trib 1 on barn?????     6438  643802     65.710    64.973  -0.737

Difference-distribution summary

  Difference range         Count  Proportion
 -100.00 < Diff <= -1.00       3        0.12
   -1.00 < Diff <= -0.50       3        0.12
   -0.50 < Diff <= -0.25       6        0.24
   -0.25 < Diff <= -0.10       3        0.12
   -0.10 < Diff <=  0.00       0        0.00
    0.00 < Diff <=  0.10       0        0.00
    0.10 < Diff <=  0.25       2        0.08
    0.25 < Diff <=  0.50       5        0.20
    0.50 < Diff <=  1.00       3        0.12
    1.00 < Diff <= 100.00      0        0.00

Analysis of 25 high-water marks completed.

Currently, the ranges for the distribution of differences are hard-coded and in feet.
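The difference-distribution summary amounts to tallying the differences into fixed half-open ranges. A sketch of that tallying, with the bin edges copied from the hard-coded (feet) ranges above and a function name of my own choosing:

```python
# Bin edges, in feet, copied from the difference-distribution summary.
EDGES = [-100.0, -1.0, -0.5, -0.25, -0.10, 0.0, 0.10, 0.25, 0.5, 1.0, 100.0]

def diff_distribution(diffs):
    """Count differences falling in each half-open range (low, high] and
    return (low, high, count, proportion) for each range."""
    counts = [0] * (len(EDGES) - 1)
    for d in diffs:
        for i in range(len(EDGES) - 1):
            if EDGES[i] < d <= EDGES[i + 1]:
                counts[i] += 1
                break
    return [(EDGES[i], EDGES[i + 1], c, c / len(diffs))
            for i, c in enumerate(counts)]
```

Feeding it the 25 differences from the summary table reproduces the counts 3, 3, 6, 3, 0, 0, 2, 5, 3, 0 shown above.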
In the future, the ranges will become controlled by the user.

--An additional item of information is placed at the end of processing of the Branch-Description Tables: a line that gives the total length of all branch flow paths in the model. An example of the output is:

Length of all branch flow paths=   103.940

The units of the value are the same as given in the branch input. In this example, the units are miles.

--Added options to the Output Files Block to add or subtract values at different locations in the model to create the value to be stored in the file. This is useful when there are multiple flow paths that form and vanish along a stream. It becomes difficult to debug in some cases when the flows change in a flow path because an additional flow path becomes active. Only modest changes were needed to the input to accomplish this, and old inputs should continue to work as before. The following example block illustrates the new features; the comments explain what is happening. Note that the UNIT column, which is no longer required, has taken on a new role: it defines the action to be taken with the information on the input line.

OUTPUT FILE SPECIFICATION
ACTN BRA NODE ITEM TYPE NAME----------------------------------------
; The action field is blank so that the given location will be output to
; the named file as in the past.
        U4000 FLOW STAR demingq.sim
; The flow at exterior node D4142 will be output in the named file. OUTA
; requests output AND adding the value to an internal location for use
; in the QUAD action. Using OUT would give the same result.
; The following line requests that the flow at D4142 be integrated
; numerically (QUAD stands for quadrature, an older term for numerical
; integration) to yield a time series of the cumulative flow at D4142.
OUTA  0 D4142 FLOW STAR blweverson.sim
QUAD  0     0 FLOW STAR cumblweverson.sim
; Here, we want to output the sum of flows at D3577, D3177, and D3677 to
; the file given. Note that the file field is left blank for the ADD action
; because no output is requested. Again, the QUAD action computes the
; cumulative flow from the sum of these three nodes.
ADD   0 D3577 FLOW STAR
ADD   0 D3177 FLOW STAR
OUTA  0 D3677 FLOW STAR abvgdmrdn.sim
QUAD  0     0 FLOW STAR cumabvgdmrdn.sim
; The first two lines below merely output the flow at the given node.
; The ADD action adds the flow at D4541 to an internal location.
; The SUB action subtracts the flow at F4501 from the same internal location.
; The QUAD action computes the cumulative net flow at the two nodes and places
; it in the named file.
        F3006 FLOW STAR stcknyislndrd.sim
        D4541 FLOW STAR mnstrtevrsnq_2002.sim
ADD   0 D4541 FLOW STAR overflow + baseflow
SUB   0 F4501 FLOW STAR take out baseflow
QUAD  0     0 FLOW STAR cumqover.sim
-1

I have a simple plot routine that then reads these files and creates plots of the results.

--Found an error in the forced-boundary instruction, code 6. The multiplying factor was applied to both the base value and the time-series value. It has been changed so that the multiplying factor applies only to the time-varying value; the base value serves as a constant lower limit that provides a floor on the flow or elevation at that location. If you always left the multiplying factor at 1.0, the usual course of action, then this bug made no difference. This is probably why no one has reported a problem.

--Added option to permit one file name as a command-line argument to FEQ. If only one name is given, FEQ strips off the final extension of that name, if there is one, and appends .out to it to form the user-output file name.

--Added optional home-directory values in the Run Control Block, the Tributary Area Block, the Special-Output Locations Block, the Input Files Block, and the Output Files Block. In each case the variable is called HOME. The following rules apply to the use of these values:

1. A value of HOME given in the Run Control Block is considered to be a global value; that is, in the absence of additional information, it applies everywhere that a home directory has meaning. Such a directory has meaning for file names in a special format.

2. If a value of HOME is given in one of the other blocks, then that value applies in that block until another value appears in that block. The global value of HOME is ignored in a block if a local value is given in that block.

3. The home directory is added to a file name only if that file name begins with either a / or a \. Thus, you can still give file names that are local to the directory from which FEQ was invoked. In Microsoft Operating Systems you can also prevent application of a home directory by using a full path name including the drive letter. However, that will not work for Linux/Unix because they have no drive letters.

4. If HOME is not defined anywhere, then file names are not changed anywhere.

5. HOME must be given in the first four characters of the input line. The equal sign and the value must follow, in that order, before the end of the line, which should be considered to be at most 80 characters long.

6. File names, except in the Function Table Input or in FTABIN, are limited to 64 characters in length, including drive letter, colon, slashes, or backslashes. The file name can be 96 characters long in the Function Table Input or in FTABIN; this allows for a deeper directory structure. However, using long directory names with a deep structure can exceed these limits. At some point, I will probably increase the file-name length to 128 or more characters.

Here are some examples:

1. Special-Output Locations Block. Notice that here the home value contains all of the path except the final /. The final / must be placed on the file name in order to have FEQ add the home directory to the name. Notice that deleting HOME and deleting the slash on spout yields the same location for the file.
Special OUTPUT LOCATIONS
HOME=D:/nooksack/lower/feq
FILE=/spout
 BRA NODE 12345671234567
   0 D4541 EvrsnMnStrt
  -1

2. Output Files Block. This is the same as for the Special-Output Locations Block. Again, deleting HOME and the leading slashes on the file names gives the same location for the files.

OUTPUT FILE SPECIFICATION
ACTN BRA NODE ITEM TYPE NAME----------------------------------------
HOME = D:/nooksack/lower/feq
        U4000 FLOW STAR /demingq.sim
OUTA  0 D4080 FLOW STAR /upsoverq.sim
QUAD  0     0 FLOW STAR /cumupsoverq.sim
OUTA  0 D1010 FLOW STAR /ferndaleq.sim
QUAD  0     0 FLOW STAR /cumferndaleq.sim
        F3006 FLOW STAR /stcknyislndrd.sim
        D4541 FLOW STAR /mnstrtevrsnq_2002.sim
ADD   0 D4541 FLOW STAR /overflow + baseflow
ADD   0 F3006 FLOW STAR /flow over Stickney Island Rd.
SUB   0 F4501 FLOW STAR take out baseflow
QUAD  0     0 FLOW STAR /cumqover.sim
ADD   0 F4501 FLOW STAR
QUAD  0     0 FLOW STAR /cumqbase.sim
-1

The purpose of adding these features is to make porting of input files to Linux/Unix easier. First of all, note that the Lahey Fortran compilers running under Microsoft Windows will properly process slashes, even though Microsoft Operating Systems use a backslash. I am starting to use slashes in all my inputs so that I have one less thing to change in the input to run under Linux. By a careful design of the file layout, I can eventually transfer an entire project and only have to change a global HOME value in the user input to FEQ and FEQUTL in order to run under Linux/Unix. FEQ and FEQUTL already properly process the end-of-line differences in files that come to Linux/Unix from Microsoft Windows. The files are left unchanged, the editors that I use in Linux do not change the end of line, and any added lines have the same end of line. Thus, it is possible to move such a project back to Microsoft Windows again with only a change in a global HOME value in each user-input file. However, if a project is initiated and developed under Linux, the transfer to Microsoft Windows is more complex.
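The HOME rules above can be condensed into a few lines of code. This is a sketch of the described behavior, not FEQ's Fortran source, and the function name is mine:

```python
def apply_home(name, local_home=None, global_home=None):
    """Prepend the applicable HOME value to a file name.  A local HOME
    (defined in the current block) overrides the global one from the
    Run Control Block, and prepending happens only when the name
    begins with / or \\ (rules 1-4 above)."""
    home = local_home if local_home is not None else global_home
    if home is not None and name[:1] in ('/', '\\'):
        return home + name
    return name

# With HOME=D:/nooksack/lower/feq, the name /spout becomes
# D:/nooksack/lower/feq/spout, while a name with no leading slash
# is left untouched.
```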
The lack of the "extra" carriage-return character at the end of each
line will cause problems for most Microsoft programs.

Version 9.92  12 June 2002

--Corrected improper handling of the two-digit form of the year in the
date/time strings in the Extreme-values summary output.  The year 2200
was being used for a run and the year was printed as 200 instead of 0.

--Added a Fortran 90 CASE statement to handle output formats in order
to test the Intel Fortran compiler.  This compiler no longer supports
the ASSIGNED GOTO statement.

Version 9.93  14 August 2002

--Added selector variables to the master input file for FEQ.  The
master input file to FEQ is the input file given as the first
command-line argument when invoking FEQ.  An FEQ model may involve
hundreds of input files, but only one of these files is the master
input file.  This file contains the various blocks that describe the
model.  The other files are referenced in some of these blocks.  Thus,
the name "master input file" is used because it contains the references
to all other files needed to fully specify the model.  All the other
files will be called slave files because they depend on the master
file.

Selector variables are provided to help manage the complexities of
applying an unsteady-flow model to a stream system and to bring order
to a potentially confusing process.  By careful definition of a small
set of selector variables, and the inclusion of selection blocks, we
can create a single master input file for FEQ that contains the
descriptions of all scenarios.  Thus, a change to the master input file
will affect all scenarios that select that part of the master input
file.  A master input file will have large sections of its contents
outside of any selection block.  These are the parts of the input file
that describe conditions that apply to all scenarios.  There will then
be one or more selection blocks that contain input specific to only
certain scenarios.
The following rules apply to the use of selector variables:

1. A selector variable can be up to 16 characters in length with no
spaces.  Although not required with this version, starting the selector
with an alphabetic character is a good idea to fit with possible future
changes.

2. Every line within the selection block is involved in the transfer.
Thus, all lines must be valid for input to FEQ at that location in the
file.

3. FEQ does no checking during the transfer of the selected lines to
the actual input to FEQ.  You must make sure that the final result
creates a valid input to FEQ.

4. Currently, FEQ opens and retains a file named f_e_q_i_n_temp.default
to save the actual input used by FEQ.  This file should be checked
because it contains the values of the selectors and selection blocks
used.  You can give your own name for the file to be created by
specifying the name in the Set-Selectors Block, for example:

FILE= myoutputfilename

The file specification must all appear on one line of input.  The file
name should not contain spaces and should be less than 128 characters
long, including the drive letter, colon, and / or \.

5. The keywords IF, ELSE, ELSEIF, and ENDIF must be in upper case.  The
selector variables can use upper and lower case and are case sensitive.
That is, Q1990 and q1990 are not the same selector; they are different!

6. It is an error to have more than one ELSE in a given IF-ENDIF block,
and if an ELSE is present with ELSEIF's, the ELSE must be the last
option.  For example:

IF xyz
ELSEIF ABC
ELSEIF CDEE
ELSE
ENDIF

is valid.  However,

IF xyz
ELSE
ELSEIF CDEE
ENDIF

is invalid, and FEQ should give an error or warning message.

7. The tilde, ~, is the "not" prefix operator.  For example, if G2002
has the value true, then ~G2002 has the value false.  This option was
added to make the logical operations a bit more complete; the "or" and
"and" logical operators may be added if a need develops.
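The selection rules above can be illustrated with a short sketch in
Python (FEQ itself is Fortran, so this is only an illustration; the
function `select_lines` is hypothetical, and nested blocks are not
handled):

```python
def select_lines(lines, selectors):
    """Filter selection-block lines against selector values.

    Illustration of the rules above, not the FEQ source: keywords are
    upper case, selector names are case sensitive, the tilde prefix
    negates, the first true condition wins, and at most one ELSE is
    allowed, after any ELSEIF's.
    """
    def truth(token):
        if token.startswith("~"):        # the "not" prefix operator
            return not selectors[token[1:]]
        return selectors[token]

    out = []
    active = True      # lines outside any block always transfer
    taken = False      # has some branch of the current block fired?
    else_seen = False
    for line in lines:
        parts = line.split()
        key = parts[0] if parts else ""
        if key == "IF":
            taken = active = truth(parts[1])
            else_seen = False
        elif key == "ELSEIF":
            if else_seen:
                raise ValueError("ELSEIF after ELSE is invalid")
            active = (not taken) and truth(parts[1])
            taken = taken or active
        elif key == "ELSE":
            if else_seen:
                raise ValueError("more than one ELSE in a block")
            else_seen = True
            active = not taken
            taken = True
        elif key == "ENDIF":
            active = True
        elif active:
            out.append(line)
    return out
```

For example, with G1990 false and G2002 true, the lines between
"IF G1990" and "ELSEIF G2002" are discarded and the lines between
"ELSEIF G2002" and "ENDIF" are transferred.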
The following discussion illustrates cases in which the use of selector
variables is advantageous.  An unsteady-flow model is used to analyze a
variety of situations.  A model may be calibrated on one or more
observed events of interest.  These events may be historical floods or
may be flow periods during which special measurements were made.  Once
the model is deemed to be suitably calibrated, it will be used in one
or more of the following ways:

1. Approximate flows and water-surface elevations for a historical
flood.

2. Approximate the flows and water-surface elevations when a historical
flood is applied to a model modified to represent conditions different
from those that existed when the historical flood occurred.  These
differences could involve, among others, modified flow paths; changes
to bridges, culverts, dams, and levees; changes in levee-failure
assumptions; changes in operation of gates or pumps; and so forth.  In
order to discuss such applications, we will call such a flood a
transposed flood.  That is, we take the point of view that the
configuration of the stream system, which we will call the geometry of
the system, is the principal factor being modified.  It also means that
we use the timing of the historical flood.  That is, if the flood
occurred in October of 1995, then the date/times in the FEQ model run
will be in October of 1995 even though some of the structures in the
model or the geometry of some of the flow paths were not present until
1999.

3. Approximate the flows and water-surface elevations when a design
flood is applied to the model.  A design flood has been selected to
represent some conditions deemed important by a regulatory group.  For
example, in the USA, the 100-year flood is used to estimate water
levels for purposes of flood-insurance rate mapping.  In this case, the
geometry used may not actually exist yet.  It may represent a proposed
future condition.  We must then select some date/time sequence for the
flood to be able to run FEQ.
It is often convenient to select a date in the future that relates to
the return period assigned to the event.  FEQ can process dates at
least to the year 9999, so using a year of 2100 for a 100-year flood is
convenient.  A 50-year flood could be assigned to the year 2050 for
computational purposes.

We will call a run of FEQ for one of the above purposes a scenario.  It
is possible that a project could have 25 or more scenarios that need to
be developed and analyzed.  In this process, confusion can arise from
the many different master input files for FEQ.  In most cases, the
changes from one scenario to the next are of limited extent in the
master input file.  Consequently, when a change is required that
affects all of the scenarios or a large subset of them, we must make
changes to many different files.  This is prone to error and oversight.
The pressure of deadlines often results in changes being made to the
current scenario only.  Later, confusion results when other scenarios
are run under the assumption that the changes have been made, when they
have not been made to that master input file.

These typical applications of an unsteady-flow model show the three
major sets of factors that define a given scenario.  A scenario for an
unsteady-flow model is defined when the geometry, boundary conditions,
and system performance are established.  We first discuss these sets of
factors from the point of view of defining selector-variable names, and
then consider possible directory-structure implications.

1. The geometry of the system includes any surfaces over which or
through which water flows.  Thus, each scenario will have a predefined
shape and size of flow paths, bridges, culverts, levee crests, dams,
gates, and so forth.  Failures in levees and dams, as well as flood
fighting and changes in gate openings, are included in the system
performance discussed below.  The geometry will be dated by the year
and, if needed, smaller divisions of the year.
A selector variable in FEQ can be up to 16 characters long and should
begin with an alphabetic character.  We suggest that the selector
variables that relate to geometry all start with a G and that the year
be given as the full four digits.  Thus, the selector variable for
geometry that is unique to a flood in 1995 would be G1995.  If the
geometry varied within the year and there are two or more floods of
interest in the year, a selector variable like G1995.4 might be used to
denote April in the year 1995.  It is also possible to use G1995.April,
but considerations discussed below suggest that the selector variables
be kept short.

2. The boundary conditions describe the nature of the flows and stages
that are imposed at the boundary nodes in the model.  We will have to
break this set into two subgroups: hydrology and tides.  We do this
because there could be variations on tides that have nothing to do with
the hydrology.  The choice of which tidal record to use may be present
when no gaged record is available.  The hydrology may be an
approximation to the conditions during a particular historical flood,
or it may be based on a design flood.  The hydrology refers to flows
imposed on the model, so we will denote any particular hydrology set
using a selector variable that begins with a Q.  Thus, the flows at the
various boundary points for the 1990 flood would be in the set selected
by Q1990.  The 1995 flood would be selected by Q1995.  Design floods do
not have a year associated with them, but we must associate a year to
run FEQ.  As outlined above, the 100-year flood could be placed at some
point in 2100, so flows associated with this event would imply a
selector variable of Q2100.  In some cases, the smaller tributaries may
not have the same return-period flows as the main stream.  In that
case, the date to use must correspond to the return period of the flows
applied to the main stream of the stream system.
The tides would be selected by using a selector variable starting with
a T.  A tide sequence may cover many years, so we may not always define
the selector variable on the year of the event.  If the tide series is
an approximation or a record of a historical series, then the selector
variable will have some indicator of the gage used or the means by
which the approximation was created.  For example, the tide record at
Cherry Point could be denoted by Tchrry.  The tide period used for a
design flood must have the same year as used for the design flood.

3. System performance is a collective term for changes in the geometry
that take place during an event.  These changes include levee failures,
flood fights, gate operations, and so forth.  Because these changes
take place during a flood, they will be denoted by the year as well.
We will use the letter P as the initial character of the selector
variable.  There is a complication with describing performance: for
example, how should the performance be defined when a transposed flood
is applied?

Option 1.  Shift the transposed flood's time of occurrence to match the
approximate time of occurrence of the flood for the geometry.  For
example, if we want to apply the 1995 flood to the 1990 geometry, we
shift the flows in 1995 so that the flood occurs close to the time that
the 1990 flood occurred.  We might choose to match the time of peak as
one way of defining the shift.  Then the actual performance of the
levee system in 1990 might apply, at least approximately, but only if
the floods are similar.  This is unlikely to be true.

Option 2.  Model the transposed flood at its time of occurrence using
the geometry.  In this case, the performance of the levee system during
the occurrence of the transposed flood might be an approximation to the
performance to use.  However, as in Option 1, differences in the floods
or geometry will make such application suspect.

Option 3.
Assume that no levee fails and no flood fighting occurs whenever a
transposed flood is involved.  This condition may be unrealistic, but
it creates a consistent pattern so that comparisons can be made.

Option 4.  Use Option 3 as a basis for comparison, and compare the
results using the transposed flood to the actual flood results.  This
involves reviewing the extent of flows over levees and levee failures
in the actual flood, and then constructing a performance for the
transposed flood that appears to make sense.  For example, if flow over
a levee during an actual flood was prevented by a successful flood
fight, and if the levee is overtopped by the transposed flood, then
apply flood fighting to that location for the transposed flood as well.
Thus, based on past actual floods, we must infer some typical flood
response, as well as the success of that response, which we then apply
to a transposed flood.  This can probably be done in some cases for
flood fighting, but with levee failure we encounter increased
difficulty.  If consistent failures occur along a given levee, we can
make some assignment of levee failure for an alien flood.  However,
most failures do not follow a consistent pattern from flood to flood.
The location, timing, and size of a levee failure could be critical in
any comparisons using transposed floods.

Clearly, the performance of a stream system for a given flood can be
assigned to the year of the flood.  For example, we would have G1990,
Q1990, and P1990.  The no-fail, no-fight performance assumption can be
denoted as Pnoff.  However, how shall we denote the performance of a
system when a transposed flood is applied?  The selector variable is
limited to 16 characters; however, we do not want these variables to
become too long because we will also use the selector-variable name as
part of a directory-naming convention to help keep track of the output
files from the different scenarios.
I propose the following: give both the date of the flood and the date
of the geometry in the performance selector name.  For example, if we
apply the 1995 flood to the 1990 geometry, and we use other than the no
fail/fight performance, then the performance selector name is
P1995on1990.  This name contains 11 characters and is not too cryptic;
its use in the recommended directory-naming convention is discussed
below.  There may be a need to define additional selector variables to
describe the scenario with sufficient detail.  If so, we will select
names that make sense in the context being studied.

Below is an example of a Set-Selectors Block wherein we define selector
variables:

SET SELECTORS
; Devise a set of selector names so that we have one master input
; file for FEQ for R1-R4 for various geometries and flows.
;
; Variable name  Meaning
; -------------  -----------------------------------------------------
; G1990    Hydraulic geometry for 1990 flood
; G1995    Hydraulic geometry for 1995 flood- assumes new bridges at
;          Everson, Lagerway Dike.
; G2002    Hydraulic geometry for 2002- currently taken to be same as
;          for 1995 pending information.
; Q1990    Flow for 1990 flood from USGS as modified and tributaries as
;          computed from rainfall by way of rainfall-runoff modeling.
; Q1995    Flow for 1995 flood from USGS and tributaries as computed
;          from rainfall by way of rainfall-runoff modeling.
; Q2002    Flow for 2002 floods from USGS at gaged locations with
;          factored values applied to other tributaries.
; Q2200    Flow at Deming is a 200-year event, tribs at 1990 estimates
;          with timing as for 1990 (approximately)
; P1990    Levee failure/flood fighting approximately like that in
;          1990.  Has 1990 timing.
; P1995    Levee failure/flood fighting approximately like that in
;          1995.  Has 1995 timing.
; P2200    No levee failures nor flood fighting for 200-year flood with
;          same time of occurrence within the year as 1990.  Only
;          the year was changed from 1990 to 2200.
; Tfixed   Tide at Bellingham Bay and Lummi Bay held fixed.
; Tnos     Tide at Bellingham Bay based on recommended source from NOS.
; Tchry    Tide based on Cherry Point gaging.

G1990 = false
G1995 = false
G2002 = true
Q1990 = false
Q1995 = false
Q2002 = false
Q2200 = true
Tfixed = true
Tnos = false
Tchry = false
P1990 = false
P1995 = false
P2200 = true
FILE = feq.in
END SELECTORS

This block appears as the very first set of lines in the master input
file to FEQ, ahead of the title lines.  In this example, I have defined
four groups of selectors that relate to the four sets of variables
comprising a scenario.  Note that each selector variable must appear on
its own line.  The equals sign is required.  The value for the
variables, true or false, can be in all lower or all upper case.

An example title, or Run Description Block, with the first part of the
Run Control Block, appears in the following lines:

Model for Lower Nooksack River: Deming to Bellingham Bay
Model for Reach 1, Reach 2, Reach 3, and Reach 4
July 23, 2002
IF G1990
Geometry: Flow geometry approx 1990 conditions of importance to the
overflow: Lagerway Dike is not present, and old bridges on Everson
Main Street.
ELSEIF G2002
Geometry: Flow geometry approx 2002 conditions of importance to the
overflow: includes Lagerway Dike, and new bridges on Everson Main
Street.
ENDIF
IF Q1990
Boundary conditions: Run of 1990 flood hydrograph as revised: Use 1990
tribs from rainfall-runoff modeling.
ELSEIF Q2200
Boundary conditions: Run of 200-year flood hydrograph: Use 1990 tribs
with 1990 timing.  Use 200-yr flows at Huntingdon.  (Affects flow at
Main Street with 0.03 of Huntingdon allocated for base flow.)
ENDIF
IF P2200
Levee/high ground performance: No levee failures or flood fighting for
200-year event.
ENDIF
IF Tfixed
Tide at Bellingham and Lummi Bays: Tide is fixed.
ENDIF

RUN CONTROL BLOCK
IF G2002
NBRA=1481
NEX= 5362
ELSEIF G1990
NBRA=1481
NEX= 5334
ENDIF
.
.
.
The keywords associated with the selectors are: IF, ELSE, ELSEIF, and
ENDIF.  These keywords must be in upper case.  They do not have to
start in column 1, but each must be on its own input line.  Also, the
first condition that evaluates to true in an IF-ELSEIF-ELSEIF ... ENDIF
sequence will be selected.  For example, if by some error both G2002
and G1990 were true in the Run-Control Block fragment, the G2002 values
would be the ones selected.

The final example is for the Function Tables Block, where one or more
slave files are referenced:

.
.
.
IF G1990
FILE= /nooksack/lower/futl/r4/xsecdn.tab  ' Pre-Lagerway Dike
ELSEIF G2002
FILE= /nooksack/lower/futl/r4/xsec/lgrwy/xsecdn.tab  'Post-Lagerway Dike
ENDIF
.
.
.

In this example, we have changes to some cross-section files that
occurred when the Lagerway Dike was built.

--Added an option to the Set-Selectors Block to define a home
directory.  This home directory can serve as the global home directory
if no home directory is given in the Run-Control Block.  A global home
directory given in the Run-Control Block will override a home directory
given in the Set-Selectors Block.  In that case, the home directory in
the Set-Selectors Block applies only to the file used for storing the
results of the selection process and the master output file; that is,
the output file given as the second command-line argument when invoking
FEQ.  The option is invoked by:

makehomename= D:/nksk
or
MAKEHOMENAME= D:/nksk

The home name or directory is constructed as follows:

1. The left-most part is taken as the character string given in the
option.  In these two cases, the leftmost part is: D:/nksk.  Note that
I have specified a slash instead of a backslash because the Lahey
Fortran compilers are able to process both forms when running under
Microsoft operating systems, which typically use the backslash.  I use
the forward slash because it is also compatible with Unix and Linux.

2.
The remainder of the name is formed by scanning the values given for
the selector variables and adding each selector-variable name whose
value is true to the base name given by the user.  Note that the
selector variables are scanned in the order they are given in the
Set-Selectors Block.

3. The first selector-variable name added is prefixed with a slash to
form a subdirectory under the base name given by the user.  All
subsequent selector-variable names are prefixed with an underscore as
they are added.

For example, if we added:

makehomename= D:/nksk

to the Set-Selectors Block given above, we would get:

D:/nksk/g2002_q2200_tfixed_p2200

as the global home directory.  Alternatively, if we added:

MAKEHOMENAME= D:/nksk

to the Set-Selectors Block given above, we would get:

D:/nksk/G2002_Q2200_Tfixed_P2200

The difference is that all alphabetic characters in the first instance
are forced to lower case, whereas in the second instance the case is
left unchanged.

If FEQ were invoked with:

FEQ feqin out

where "feqin" is the master input file and "out" is the master output
file, and if the Set-Selectors Block were present as given above with
"makehomename= D:/nksk" added, and a home name was not given in the
Run-Control Block, then the following applies:

1. The result of the selection-block processing would appear in the
file "feq.in", and this file would be under the path name:
"D:/nksk/g2002_q2200_tfixed_p2200".  This file is run by FEQ to compute
the results for the given scenario.

2. The master output file, "out", also would appear under this same
path name.

3. In order to read the various slave files in their blocks, we would
have to give a local home name to override the global home name.  Note
that the global home name defined in the Set-Selectors Block is for
output files and not for input files.  The only exception is the file,
feq.in, which is both an output file, from the Set-Selectors Block, and
an input file, for FEQ.

4.
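The construction just described can be sketched as follows (Python used
only for illustration; FEQ is Fortran, and the function name
`make_home_name` is hypothetical):

```python
def make_home_name(base, selectors, force_lower):
    """Build the global home directory from the selector settings.

    Sketch of the rules above: start from the base string, append each
    selector whose value is true in the order given in the
    Set-Selectors Block, the first prefixed with a slash and the rest
    with underscores.  The lower-case spelling 'makehomename' forces
    lower case; the upper-case spelling leaves the case unchanged.
    """
    names = [name for name, value in selectors if value]
    tail = "/" + "_".join(names) if names else ""
    return base + (tail.lower() if force_lower else tail)

# Selector values in the order given in the example block above:
sels = [("G1990", False), ("G1995", False), ("G2002", True),
        ("Q1990", False), ("Q1995", False), ("Q2002", False),
        ("Q2200", True), ("Tfixed", True), ("Tnos", False),
        ("Tchry", False), ("P1990", False), ("P1995", False),
        ("P2200", True)]

make_home_name("D:/nksk", sels, force_lower=True)
# -> "D:/nksk/g2002_q2200_tfixed_p2200"
make_home_name("D:/nksk", sels, force_lower=False)
# -> "D:/nksk/G2002_Q2200_Tfixed_P2200"
```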
The file names given in the Special-Output Locations Block, the Output
Files Block, and the GENSCN blocks that were prefixed with a / or \
will also appear under the path name created by makehomename.  If they
are not prefixed by a / or \, then they will appear in the directory in
which FEQ was invoked, which is the standard default location for all
file names with no path name given.  I am assuming that a local home
name was not given in these blocks.  Recall that a local home name
always overrides a global home name.

Final notes:

1. FEQ has a limit of 64 characters for file names except in the
Function Tables Block, where the limit is 128 characters.

2. The names of the selector variables should be carefully chosen so
that one can store each scenario's results in a unique subdirectory for
later access.

3. FEQ does NOT create the home-name directory; the user must do this.
If the directory does not exist, FEQ will report an error when it tries
to create or access the file.

4. THIS IS IMPORTANT: If one of the makehomename options in the
Set-Selectors Block is used, then you must look in that directory for
the master-output file for the run.  A truncated master-output file,
created when the Set-Selectors Block was being processed, will appear
in the directory from which FEQ was invoked.  This truncated file can
be used to sort out problems in the Set-Selectors Block and in the
processing of the selection blocks.  Errors encountered in processing
these blocks will appear in the master-output file in the directory
from which FEQ was invoked (often called the current directory).  The
contents of this file, if the selection-block processing goes without
error, also appear in the master-output file in the directory created
by the makehomename option.

5. Currently, the CAD script file that can be used to create a CAD
schematic will be stored in whatever global home directory applies.
The script file is created after all of the input for the model has
been processed.
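The file-placement rule above can be sketched as a small helper
(illustrative Python, not FEQ code; the name `place_output_file` is
hypothetical, and a local home name is assumed absent):

```python
def place_output_file(name, global_home):
    """Resolve an output file name per the rule above.

    A name prefixed with / or \\ goes under the global home directory
    (assuming no local home name was given); any other name lands in
    the directory from which FEQ was invoked, represented here by the
    bare name.
    """
    if global_home and name.startswith(("/", "\\")):
        return global_home.rstrip("/") + "/" + name.lstrip("/\\")
    return name

place_output_file("/demingq.sim", "D:/nksk/g2002_q2200_tfixed_p2200")
# -> "D:/nksk/g2002_q2200_tfixed_p2200/demingq.sim"
place_output_file("local.sim", "D:/nksk/g2002_q2200_tfixed_p2200")
# -> "local.sim"
```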
Therefore, the location appears as follows:

Set-Selectors Block (SSB)  HOME name given in   Directory in which
with a makehomename        Run Control Block    schematic.scr is
is present                 (RCB)                placed
-------------------------  -------------------  --------------------
no                         no                   current directory
no                         yes                  RCB home directory
yes                        no                   SSB home directory
yes                        yes                  RCB home directory

Remember that the Run Control Block home-directory name overrides the
Set-Selectors Block home-directory name.

--Added automatic counting of the number of branches and the number of
exterior nodes in an input file.  The input of NBRA and NEX is still
allowed; however, the values that FEQ finds in the input will be
ignored.  This eliminates a source of error in the entry of the NBRA
and NEX numbers when modifying a model.  However, certain input errors
may be more difficult to find now that these numbers are not given.
This change has not been thoroughly tested.  I have tested the code
using a variety of master-input files I have on my system; however, all
of these files had the correct number of branches and exterior nodes.
Only experience gained from building models with this version will
reveal whether there are special problems in detecting certain errors
when FEQ counts the number of branches and the number of nodes.  I have
tested it by leaving out a branch, and the messages made sense.  I also
have tested it by leaving out a simple junction, and the messages made
sense.  Finally, I also have tested a larger model under Linux with
this version, and it worked as expected.

--A reminder: it is recommended that FEQ tables of type 22 or 25 be
used at all locations using Code 13: conservation of energy/momentum.
FEQ will accept other table types but issues an ERR/WRN message.  I
have encountered problems with convergence using Code 13, and the code
was reorganized to avoid those problems, but doing so made use of the
velocity-head correction factor, alpha.
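The table can be restated as a tiny decision function (illustrative
Python, not part of FEQ; the function name is hypothetical):

```python
def schematic_location(ssb_makehomename, rcb_home):
    """Directory receiving schematic.scr, per the table above.

    The Run Control Block home name, when given, always overrides the
    Set-Selectors Block home name; with neither given, the current
    directory is used.
    """
    if rcb_home:
        return "RCB home directory"
    if ssb_makehomename:
        return "SSB home directory"
    return "current directory"
```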
If alpha is not given, FEQ uses 1.0, and in some cases, this can result
in non-convergence or convergence to an invalid solution.

--Made an internal change that disables additional I/O units based on
information found on the Lahey users forum.  This change has no effect
on the end user.  Disabling units 1 through 7 should avoid all
"hard-coded" assumptions that may exist in some compilers and operating
systems.

--Added reporting of the compiler/operating system used to create the
executable.  This may be needed to enable handling of files between
compilers in the future.  Also, this may be able to intercept certain
strange errors when incompatible unformatted files are opened.

Version 9.94  23 August 2002

--Added support of the global home directory to miscellaneous slave
output files created by FEQ.  These files are defined in blocks of
input that do not have the option for setting a local home directory.
Therefore, if a home directory is invoked by prefixing the file name
with a slash or backslash, it will be taken relative to the current
global home directory.  The files affected are those given in the Run
Control Block input options: BWFDSN-file name for storing initial
conditions when using a DTSF; GETIC-file name for obtaining initial
conditions; and PUTIC-file name for storing initial conditions.

Version 9.95  21 October 2002

--Changed reporting of the executable to include more information.
This may be used in the future to tailor other operations to the
compiler or operating system.  Currently, this is used to inform the
user of the compiler, precision of solution, and prefetch options.

--Fixed problems in addressing nodes on branches in the NEW GENSCN
OUTPUT block.

--Added reporting of computational-element length to the
Branch-Description Block output.

Version 9.96  23 October 2002

--Fixed bug in the processing of the OPTIONS line in the Special-Output
Locations Block.

--Fixed bug in reporting that an odd number of exterior nodes was
found.
Version 9.98  21 April 2003

--A new message is written to the master-output file when a
synchronizing time step is computed.  Without this message, the output
can be confusing because the time step is first set to the maximum, and
then it is changed to something smaller.  FEQ tests for the need to
synchronize the time with the maximum time step, that is, to update the
time so that the values are in agreement with the maximum time step.
For example, if the maximum time step is 1800 seconds, then the
computation points should be shifted so that the results are at every
hour and every half hour.  The synchronizing time step must be at least
as large as the minimum time step before it is used.

--The node field in the Output Files Block was processed for nodes on a
branch using only five character positions; therefore, valid node
numbers of six characters were not read properly.  Currently, the node
number for a node on a branch is limited to six characters.

Descriptions of changes made to FEQUTL
--------------------------------------

Version 4.70  July 7, 1997

--Added the GIS id string to the values stored with cross-section
tables.  If no GIS id string is given, the value is stored as blanks.

--Added the location of the invert of a cross section in a coordinate
system in the plane.  One coordinate is called EASTING and the other,
at right angles to it, is called NORTHING.  If not given, the values
are stored as 0.0.

--Changed the processing of the cross-section header for the FEQX and
FEQXEXT commands.  The header is the information from the line after
the FEQX or FEQXEXT command through the roughness values.  In previous
versions, these values were required to fit within prespecified fields
on each line.  This has been changed to allow greater freedom in
entering the values.  Here is an example taken from a cross-section
input to FEQUTL.  The line numbers are not part of the input but are
used for reference in this discussion.
01 FEQX
02 TABLE#= 153 SAVE22 OUT22 MONOTONE
03 STATION= 1.E4
04 NAVM= 0
05 NSUB 3 0.08 0.055 0.09
06 OFFSET  ELEVATION  SUBS
07 -750.   720.       1

The header is lines 2 through 5 in this example.  As shown, the header
is suitable for all versions of FEQUTL.  The order and position of the
values can be changed to anything the user desires with the following
rules:

1. All values must be spelled as shown in the input description.  This
was not true in the past.  The only requirement was that the values
appear in the correct columns on the correct line of input.

2. Any keyword followed by an equal sign must have its response, that
is, the value following the equal sign, appear on the same line.  For
example,

STATION = 1000.0

is valid.  However,

STATION =
1000.0

is invalid.

3. The roughness values must always be last among the items entered.
The roughness values serve as the end-of-header indicator.

4. The other values in the header can be entered in any order and at
any point on the line.  The line may be as long as 120 characters.

5. Values needed by FEQUTL but not appearing in the header will be
assigned default values.  Here are the defaults:

TABLE#= 0
NOSAVE
OUT21
OLDBETA
LEFT = 1.E30
RIGHT = -1.E30
SCALE = 1.0
VSCALE = 1.0
SHIFT = 0.0
HSHIFT = 0.0
NAVM = 0
VARN= NCON
STATION = 0.0
EASTING = 0.D0
NORTHING = 0.D0

Values for the table number, the station, and the roughness options
should always be given because their defaults will not be reasonable in
most cases.

6. The roughness values must always be the last given in the header for
the cross section.  The keyword NSUB should appear on a line by itself.

7. A blank will not be taken as zero.  If some value is to be zero, you
must explicitly give the value as 0 or 0.0, depending on the nature of
the item.

Here is an example of a header that FEQUTL can now process:

TABLE#= 153 SAVE22 OUT22 MONOTONE STATION= 1.E4 NAVM= 0
NSUB 3 0.08 0.055 0.09

Notice that the various values are separated by one or more spaces.
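The free-format header rules lend themselves to a simple tokenizing
sketch (Python for illustration only; FEQUTL's actual Fortran parsing
differs, `parse_header` is a hypothetical name, and the defaults shown
are a subset of those listed above):

```python
def parse_header(lines):
    """Parse a free-format FEQX header into values and flags.

    Sketch of the rules above: tokens containing an equal sign become
    keyword/value pairs (the value must be on the same line), bare
    tokens such as SAVE22 become option flags, NSUB ends the header,
    and items missing from the header keep their defaults.
    """
    defaults = {"TABLE#": "0", "STATION": "0.0", "NAVM": "0",
                "EASTING": "0.D0", "NORTHING": "0.D0"}
    values, flags = dict(defaults), []
    for line in lines:
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "NSUB":     # roughness values end the header
            break
        # normalize "KEY = VALUE" and "KEY= VALUE" to "KEY=VALUE"
        for tok in line.replace("= ", "=").replace(" =", "=").split():
            if "=" in tok:
                key, _, val = tok.partition("=")
                values[key] = val
            else:
                flags.append(tok)
    return values, flags

values, flags = parse_header(["TABLE#= 153 SAVE22 OUT22 MONOTONE",
                              "STATION= 1.E4", "NAVM= 0",
                              "NSUB 3 0.08 0.055 0.09"])
# values["TABLE#"] -> "153"; flags -> ["SAVE22", "OUT22", "MONOTONE"]
```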
If more than one line of input is required for the roughness values,
they should appear in order following the line containing NSUB.  Here
is an example taken from an existing input:

NSUB 11 0.015 0.030 0.015 0.030 0.015 0.030
0.015 0.030 0.015 0.030 0.015

This could be given as

NSUB 11 0.015 0.030 0.015 0.030 0.015 0.030 0.015 0.030 0.015 0.030 0.015

with all values appearing on one line.  FEQUTL will count the number of
subsections if the count value is left out.  For example:

NSUB 0.015 0.030 0.015 0.030 0.015 0.030 0.015 0.030 0.015 0.030 0.015

If there are too many values to fit on one line and the count value is
left out, then a continuation signal must be given to tell FEQUTL that
the values continue on the next line.  If the count value is given,
this continuation signal is not needed and must not be given.  Here is
multiline input when the count of subsections is left out:

NSUB 0.015 0.030 0.015 0.030 0.015 0.030 /
0.015 0.030 0.015 0.030 0.015

The continuation signal is a single slash following every line of
roughness input but the last.

The GIS id string is given at any point in the header just as any of
the other values.  It must be identified with the variable GISID in a
manner analogous to the other variables assigned values using the equal
sign.  The same rule applies to the coordinate location of the
cross-section invert, given by the optional inputs EASTING and
NORTHING.  Here is an example with these three values added to an
existing FEQX command:

FEQX
TABLE#= 938 EXTEND SAVE22 NEWBETA NOOUT GISID=291APVQ0938
STATION= 11.290 EASTING=1868573.55 NORTHING=569013.79
NAVM= 0
NSUB 5 0.100 0.120 0.030 0.080 0.032
X-SEC 38  60 FT.
U/S OF THE BNRR (DEL TO LEFT & RIGHT OF TOP OF BERMS) RERAN W/ DZLIM=.10 FOR CONTRACTION
-73.00 666.6   1 1868587.65 569085.42 TOB 13A
-49.10 665.8   2 1868583.04 569061.97 TOB 13
-25.40 655.0   3 1868578.46 569038.71 WEO 12
-21.90 654.6   3 1868577.78 569035.28 CHP 11
  0.00 653.9   3 1868573.55 569013.79 CFL 10
  8.20 654.4   3 1868571.96 569005.75 CHP 9
 11.10 655.0   4 1868571.40 569002.90 WED 8
 15.00 658.3   4 1868570.65 568999.08 TOB 7
 33.90 663.63  5 1868567.00 568980.53 CLP 6
 44.60 666.0  -1 1868564.93 568970.03 TOB 5

The GISID may not contain blanks and should not contain any character
other than the digits 0 through 9 and the alpha characters A through Z.
Lower-case alpha will also be accepted but may not function in other
software that analyzes the GISID. FEQUTL only reads the GISID and places
it in the resulting cross-section function table. The items need not be in
the order shown, so the GISID, EASTING, and NORTHING can appear on any
line between the line containing FEQX and the line containing NSUB. Thus,
the following order is also valid:

FEQX TABLE#= 938 EXTEND SAVE22 NEWBETA NOOUT GISID=291APVQ0938
EASTING=1868573.55 NORTHING=569013.79
STATION= 11.290
NAVM= 0
NSUB 0.100 0.120 0.030 0.080 0.032

This gives considerable flexibility in designing the pattern to follow in
the input to make the information clearer or easier to modify.

--The location of the information message for interpolated cross-section
function tables has been changed to a comment to avoid interfering with
the processing of the options as outlined here. Old interpolated cross
sections can be retained if FEQ version 9.03 or later is used because they
will be detected. If earlier versions of FEQ are used, the interpolation
message will cause an error in processing the cross-section function
table.

Version 4.72 November 5, 1997

--Discovered problem in CULVERT in computing a type 2 limit when the
return flag from one routine was not properly tested in the calling
program unit.
This created nonsense values that caused a BUG message to be issued.

--Discovered problems in CULVERT in computing the type 2 limit which
sometimes exists above the type 1 limit. That is, at low flows the flow
type is type 2; as the flow increases, the flow type shifts to type 1 and
remains there until some upper limit is reached, and the flow type again
shifts to type 2. The initial estimate of the local elevation at section 1
was set so that the root-finding scheme concluded that no upper type 2
limit existed. This then caused problems later as the upper limit of
type 1 flow was surpassed but the limits for type 5 or type 6 flow had not
been reached yet. This change may allow culverts to be computed that
failed in previous versions. It may also cause some culverts to fail that
computed with previous versions.

Version 4.73 November 10, 1997

--CULVERT got confused in a case of type 1 flow detection. CULVERT finds
all matches between the bottom slope of the culvert and the critical slope
to seek boundaries for type 1 flow. If there is only one match and the
match is at a depth greater than the maximum depth allowed at section 2,
then type 1 flow is rejected even though a match is found. This is done
because the critical slope for a closed conduit approaches infinity as the
water level in the conduit approaches the soffit of the conduit. This
means that some matches of the bottom slope with the critical slope may
occur essentially at the soffit. Thus, it is unlikely that type 1 flow
will prevail. The default limit for the depth at section 2 is 0.95 of the
vertical diameter of the culvert. There may be more than one match between
bottom slope and critical slope in the culvert. This may mean that the
low-head flow starts as type 2 and then changes to type 1 as the smallest
depth for a match is encountered. The flow type will remain at type 1
until the upper match for type 1 flow is encountered. CULVERT tries to
find both limits.
If there is more than one match, CULVERT assumed, incorrectly, that the
lower match would be less than the depth limit for the culvert. CULVERT
now checks to make sure that the smallest depth for a match of slopes is
less than the depth limit before it concludes that type 1 flow prevails.

Version 4.75 November 11, 1997

--Problems with CULVERT in computing the submerged flow part of flow
type 51 were uncovered. The submerged flow computations would fail with
the message:

FRFT7: Minimum depth= ffff at section 43 found seeking a negative residual.

where ffff was some real number. Flow type 51 is a transitional type
between the upper limit of type 1 and the lower limit of type 5. As such,
various coefficients are set to force close matches at each end of the
transition. In order to match at both ends of the transition these
coefficients must be given values that sometimes are non-physical; that
is, they would never be found from a measurement. The momentum-flux
coefficient was not properly computed in the submerged flow computations
when the free flow type was 51. Since type 51 is a transition between
types 1 and 5, its submerged flow computations must transition between
these types. That is, at the lower limit, near type 1, the submergence
levels should be close to those for the upper limit of type 1. In the same
way, when the flow is close to the lower limit of type 5, the submergence
computations should be similar in their result to that obtained from the
type 5 computations. The free flow computations must also match at the two
limits. The approach taken in CULVERT is to force the high-head free-flow
equation, in this case type 5, to match the flow conditions at the
low-head, in this case type 1, upper limit. This also means that the
submergence limit must be matched. In order to do this a special value of
the momentum-flux coefficient must be used with the type 5 free-flow
equation when it is pushed to the upper limit of the type 1 flow equation.
The coefficients for intermediate type 51 flow are computed linearly between the two limits of type 1 and type 5. Now when type 5 flow is drowned by tailwater, CULVERT assumes that the submerged flow type becomes type 4. Thus, to be consistent and to produce a smooth transition, the submergence of type 51 over its range must be by type 4 flow. Thus, the exit of the culvert is assumed to be flowing full for all submergence computations involving type 51 flow. At the limit of free-flow, that is, at the initiation of submerged flow, CULVERT computes a special value of the momentum-flux coefficient so that the momentum flux exiting the culvert barrel assuming a full barrel will match the value that exited the barrel during the free-flow submergence computations. The barrel may have been part full for the free flow computation and there might have been the complication of a hydraulic jump at the barrel exit as well. This special momentum-flux coefficient only applies at the limit of free flow. Previous versions assumed that the tailwater at the free-flow limit would be below the exit soffit of the culvert barrel. Then, if a special momentum-flux coefficient were present, the momentum-flux coefficient would be interpolated between the special value and the true full-barrel value based on the tailwater level between the free-flow limit tailwater and the soffit of the barrel at the exit. However, submergence of type 1 flow can require submergence of the barrel exit. Type 5 flow may be submerged with the barrel only part full at the exit. Thus, there are cases in which the transitional flow type 51 will have a free-flow limit tailwater above the soffit of the barrel exit. Thus, the special momentum-flux coefficient was not used because the rule for interpolation only applied for tailwater levels below the soffit of the barrel exit. 
Thus, the true full-barrel value of the momentum-flux coefficient was
used, and the simple momentum balance could not be satisfied because the
momentum flux computed for the culvert barrel was incorrect. An additional
interpolation rule for special values of the momentum-flux coefficient was
added to the submerged flow computations. This rule comes into play if
there is a special momentum-flux coefficient from the free-flow limit
computations and if the free-flow limit tailwater is above the exit soffit
of the culvert. The momentum-flux coefficient under this rule is
interpolated linearly from its free-flow limit value at the free-flow
limit tailwater to the true full-barrel value at 0.25 of the distance to
the tailwater that causes zero flow for the culvert. The effect on
pre-existing culvert computations is hard to evaluate. The failure to
converge occurred at upstream heads close to the type 1 upper limit. If
the upstream head were close to the type 5 lower limit, then the
computations for submerged flow for type 51 would complete. Version 4.75
will probably give different answers in this region than did earlier
versions. However, the head range between the type 1 and type 5 limits is
generally a small part of the range of either type. A search of the
CULVERT code shows that special values of momentum-flux and related
coefficients are used for flow types 5, 51, 52, and 62. These flows could
be affected in some cases by the correction made in Version 4.75.

Version 4.76 January 15, 1998

--Found an error in the value of submerged orifice flow shown in the user
output file. The correct value was placed in the table file. This error in
the user output file appears to have entered at version 4.68 when a
statement was converted to a comment in error. Apparently usage of the new
version and of UFGATE is small, so no one reported the error.
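The added interpolation rule can be written out as a short function. This
Python sketch only illustrates the rule as described above (linear blend
from the special free-flow-limit value to the true full-barrel value at
0.25 of the distance to the zero-flow tailwater); it is not the CULVERT
code, and the names and the clamping behavior outside the blend zone are
assumptions.

```python
# Illustrative sketch of the Version 4.75 interpolation rule for the
# special momentum-flux coefficient.  Not the actual CULVERT code.

def momentum_flux_coeff(tw, tw_free_limit, tw_zero_flow,
                        beta_special, beta_full):
    """Momentum-flux coefficient for submerged type 51 flow when the
    free-flow-limit tailwater is above the exit soffit.

    tw            -- current tailwater elevation
    tw_free_limit -- tailwater elevation at the free-flow limit
    tw_zero_flow  -- tailwater elevation that reduces the flow to zero
    beta_special  -- special coefficient found at the free-flow limit
    beta_full     -- true full-barrel coefficient
    """
    # The blend ends 0.25 of the way from the free-flow-limit tailwater
    # to the zero-flow tailwater.
    tw_blend_end = tw_free_limit + 0.25 * (tw_zero_flow - tw_free_limit)
    if tw <= tw_free_limit:
        return beta_special
    if tw >= tw_blend_end:
        return beta_full
    frac = (tw - tw_free_limit) / (tw_blend_end - tw_free_limit)
    return beta_special + frac * (beta_full - beta_special)
```

For example, with a free-flow-limit tailwater of 10.0 ft and a zero-flow
tailwater of 18.0 ft, the blend is complete at 12.0 ft, and halfway
through the blend the coefficient is halfway between the two values.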
Version 4.77 January 28, 1998

--Changed root-finding routine RGF5 so that complete suppression of
argument-collapse convergence is impossible. Forced a local minimum of a
relative difference of 0.5*10^-6 for argument-collapse convergence. This
was done because an instance of failure to converge on critical depth
occurred with the residual function nearly converged but the two arguments
still differing slightly. This problem can occur in cases where the
residual function is increasing rapidly near the root. Then roundoff in
the computation of the intermediate argument is such that the residual
function does not fall within the residual-function convergence tolerance.
This change in the computation of critical depth may result in slight
differences in flow at some points. However, the differences should be on
the order of the cumulative roundoff and truncation error. These errors
appear to be on the order of 1 part out of 10,000 to 1 part out of
100,000.

Version 4.80 April 27, 1998

--A new command has been added to FEQUTL to assist in developing tables to
control underflow gates using the GATETABL option in the Operation Control
Block. This command is named INV_GATE because it inverts an underflow gate
rating and creates the skeleton of a control table. The basic problem with
the GATETABL option is that it is so flexible that it is difficult to
establish the values in the control table. INV_GATE will help the process.
However, the process still takes considerable patience and skill. The
approach taken is as follows:

1. Add a fixed-level overflow weir to the model at the site of the
proposed underflow gate.

2. Run this model writing output files; only the FEQ connection file
option is supported at this time. HECDSS may be added later if there is
demand for it. These output files should be as follows:

2.1 A file containing the time series of water-surface elevation at the
control node for GATETABL.
2.2 A file containing the time series of water-surface elevation at the
upstream node of the sluice gate. If this is the same location as the
control node, then this file is not needed.

2.3 A file containing the time series of water-surface elevation at the
downstream node of the sluice gate.

2.4 A file containing the time series of flow over the weir.

3. These files are then used in the INV_GATE command together with the
rating tables for the underflow gate to develop an initial control table
for the GATETABL option. The INV_GATE command does the following:

3.1 Reads the rating tables for the underflow gate and a collection of
parameters used to control what is done.

3.2 At each time step in the time series, computes what the gate opening
would have had to be in order to match the flow over the weir using the
sluice gate with exactly the same upstream and downstream water-surface
elevations. In some cases no solution is possible because the weir is not
as subject to tailwater as is the sluice gate. In this case the gate is
set to its maximum setting and a flag is set internally to note that no
solution is possible.

3.3 Keeps a record of the gate opening and the appropriate range of head
difference as well as the flood stage at the control node.

The input values for INV_GATE are as follows:

UD_TABLE      table number giving the type 15 rating table for the gate
              for flow from the upstream node to the downstream node.

DU_TABLE      table number giving the type 15 rating table for the gate
              for flow from the downstream node to the upstream node.

CPNT_ELEV_TS  file name for the time series of elevation at the control
              point for the structure. The control point is the exterior
              node giving the elevation defining the operation of the
              gate.

UPS_ELEV_TS   file name for the time series of elevation at the node
              upstream of the gate. Upstream is defined by the user with
              the rule that the flow, given in another time series, must
              always be positive for flow from the upstream node to the
              downstream node.
              Given only when the control-point location is different
              from the upstream-node location.

DNS_ELEV_TS   file name for the time series of elevation at the node
              downstream of the gate.

GATE_FLOW_TS  file name for the time series of flows through the gate.

FLOOD_ELEV    the elevation at the control point that defines the zero
              point on the sequence of arguments for rows of the control
              table. This is the elevation at the control point that
              often signals flood hazard at some point downstream.

FLOOD_FLOW    the flow at the control point when the elevation is at
              FLOOD_ELEV. Used to estimate the gate operation for
              draining of the reservoir. The difference between
              FLOOD_FLOW and the current flow at the control point gives
              the maximum flow release possible.

CONTROL_TAB   the table number for the control table computed by the
              INV_GATE command.

COL_BDYS      the sequence of boundary values defining the cells for the
              columns of the type 10 table. The midpoint of the cell will
              become the argument value in the table.

ROW_BDYS      the sequence of boundary values defining the cells for the
              rows of the type 10 table. The midpoint of the cell will
              become the argument value in the table. See the description
              of the GATETABL option above for more details on the
              arguments for the type 10 table.

DRAIN_LOC     the location defining the node that represents the
              reservoir to be drained. Has two values: UPS or DNS.

MIN_FLOW      all flows less than this value are treated as zero flow.

OUTPUT_LEVEL  user control on output level: MIN gives the minimum level
              with the summary tables only; MAX gives the results for
              each flow greater than MIN_FLOW.

REVERSE_FLOW  if YES, reverse flows are included; if NO, reverse flows
              are excluded. If YES, MIN_FLOW refers to the absolute value
              of the flow.

An example input follows:

INV_GATE CONTROL_TAB= 125 UD_TABLE=530 DU_TABLE= 630
FLOOD_ELEV= 673.50 FLOOD_FLOW = 1234. MIN_FLOW= 1.0
DRAIN_LOC = F13
. . .
CPNT_ELEV=F:\WDIT\D19ELEV
DNS_ELEV=F:\WDIT\F32ELEV
GATE_FLOW=F:\WDIT\F32FLOW
. . .
COL_BDYS= /
-6.0 0.0 0.2 0.4 0.6 0.8 1.0 1.5 2.0 4.0 6.0 50.
ROW_BDYS= -5.0 0.0 0.2 0.4 0.6 0.8 1.0 1.25 1.50 1.75 2.0 2.25 2.50 2.75 3.0 3.25 3.5 3.75 4.0 4.25 4.50 4.75 /
5.0
END INV_GATE

In this example all of the input is defined by a keyword followed by an
equal sign and one or more items of information. The keywords MUST be
spelled as shown. Any deviation will be flagged as an error. Note that the
COL_BDYS and ROW_BDYS keywords take more than one item. The forward slash,
always set off from other items by one or more spaces, is a continuation
signal. If a continuation signal is not given, the input-processing
software expects to find all responses to the keyword on a single line of
input. For this command a line of input is limited to 120 characters. The
command is ended with an explicit END, followed by the command name. This
example, taken from Salt Creek in DuPage County, Illinois, created the
following skeleton for the type 10 table for the GATETABL option:

TABLE#= 125 TYPE= -10
'( 12A6)' '(1X, 12A6)' '(F6.0)' '( 12F6.0)' '(1X,F6.1, 11F6.2)'
HDATUM= 673.500
LABEL= Replace with desired value
Fstage -3.00  0.10  0.30  0.50  0.70  0.90  1.25  1.75  3.00  5.00 28.00
 -2.50   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  0.10   0.0 0.091   0.0   0.0   0.0   0.0 0.007   0.0 0.006   0.0 0.006
  0.30   0.0 0.168 0.087   0.0   0.0   0.0 0.045 0.032 0.022   0.0 0.021
  0.50   0.0 0.289 0.151 0.126 0.108 0.109 0.080 0.059 0.055   0.0 0.047
  0.70   0.0 0.390 0.280 0.197 0.137 0.126 0.117 0.115 0.089 0.084 0.080
  0.90   0.0 0.451 0.362 0.276 0.265 0.234 0.199 0.159 0.129 0.113 0.119
  1.13   0.0 0.531 0.473 0.420 0.353 0.317 0.279 0.239 0.175 0.178 0.178
  1.38   0.0 0.608 0.523 0.518 0.495 0.450 0.364 0.306 0.272 0.248 0.246
  1.63   0.0 0.604 0.652   0.0 0.543   0.0 0.463 0.417 0.353 0.346 0.339
  1.88   0.0 0.638   0.0 0.671   0.0 0.562 0.555 0.530 0.458 0.417 0.449
  2.13   0.0 0.692   0.0 0.760 0.691 0.647 0.600 0.588 0.584 0.586 0.570
  2.38   0.0 0.730   0.0   0.0   0.0   0.0   0.0   0.0   0.0 0.627 0.636
  2.63   0.0 0.758   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  2.88   0.0 0.786   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  3.13   0.0 0.833   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  3.38   0.0 0.886   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  3.63   0.0 0.945   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  3.88   0.0 0.971   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  4.13   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  4.38   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  4.63   0.0 0.997   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  4.88   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 -1.0

Note the many zero entries. This shows that no water levels occurred in
those parts of the table. Since the values in the table are relative gate
openings, this table, used as produced, would keep the gate closed in
those regions. Also, note that the precision of the output is far greater
than the accuracy of the output. Some of the cells in the table only had
one observation. The significance of each cell's average value can be
judged from the detailed output from the command, which follows this
example:

Summary of Results

Cell Type  -6.0   0.0   0.2   0.4   0.6   0.8   1.0   1.5   2.0   4.0   6.0
Bdys        0.0   0.2   0.4   0.6   0.8   1.0   1.5   2.0   4.0   6.0  50.0

-5.00 AvrP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 0.00 MinP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      MaxP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      N   :    0     0     0     0     0     0     0     0     0     0     0

 0.00 AvrP:  0.0 0.091   0.0   0.0   0.0   0.0 0.007   0.0 0.006   0.0 0.006
 0.20 MinP:  0.0  0.02   0.0   0.0   0.0   0.0  0.00   0.0  0.00   0.0  0.00
      MaxP:  0.0  0.19   0.0   0.0   0.0   0.0  0.01   0.0  0.01   0.0  0.01
      N   :    0    60     0     0     0     0    35     0    34     0   766

 0.20 AvrP:  0.0 0.168 0.087   0.0   0.0   0.0 0.045 0.032 0.022   0.0 0.021
 0.40 MinP:  0.0  0.05  0.05   0.0   0.0   0.0  0.02  0.02  0.01   0.0  0.01
      MaxP:  0.0  0.27  0.12   0.0   0.0   0.0  0.06  0.05  0.03   0.0  0.03
      N   :    0   199    40     0     0     0     4    31    12     0  1128

 0.40 AvrP:  0.0 0.289 0.151 0.126 0.108 0.109 0.080 0.059 0.055   0.0 0.047
 0.60 MinP:  0.0  0.14  0.10  0.09  0.09  0.09  0.05  0.04  0.03   0.0  0.03
      MaxP:  0.0  0.39  0.21  0.16  0.13  0.12  0.11  0.09  0.07   0.0  0.06
      N   :    0   196    21    18    10    10    38    15    28     0  1194

 0.60 AvrP:  0.0 0.390 0.280 0.197 0.137 0.126 0.117 0.115 0.089 0.084 0.080
 0.80 MinP:  0.0  0.33  0.22  0.15  0.13  0.13  0.11  0.09  0.07  0.06  0.06
      MaxP:  0.0  0.46  0.33  0.23  0.14  0.13  0.12  0.13  0.13  0.10  0.10
      N   :    0   129    14    13     3     1     9    15    42    20  1221

 0.80 AvrP:  0.0 0.451 0.362 0.276 0.265 0.234 0.199 0.159 0.129 0.113 0.119
 1.00 MinP:  0.0  0.37  0.28  0.24  0.24  0.22  0.18  0.13  0.10  0.10  0.10
      MaxP:  0.0  0.49  0.45  0.34  0.28  0.25  0.23  0.19  0.18  0.14  0.14
      N   :    0   165    17    19     3     4    15    51    55    50  1201

 1.00 AvrP:  0.0 0.531 0.473 0.420 0.353 0.317 0.279 0.239 0.175 0.178 0.178
 1.25 MinP:  0.0  0.44  0.40  0.32  0.32  0.27  0.23  0.20  0.14  0.15  0.14
      MaxP:  0.0  0.64  0.51  0.49  0.39  0.37  0.34  0.29  0.22  0.21  0.21
      N   :    0   180     9    20     4     5    11    28    38    75  1243

 1.25 AvrP:  0.0 0.608 0.523 0.518 0.495 0.450 0.364 0.306 0.272 0.248 0.246
 1.50 MinP:  0.0  0.50  0.50  0.49  0.41  0.39  0.30  0.28  0.24  0.21  0.21
      MaxP:  0.0  0.80  0.58  0.53  0.52  0.48  0.45  0.33  0.31  0.29  0.29
      N   :    0   164     9     4     8     5    19    10    27    16  1320

 1.50 AvrP:  0.0 0.604 0.652   0.0 0.543   0.0 0.463 0.417 0.353 0.346 0.339
 1.75 MinP:  0.0  0.53  0.56   0.0  0.53   0.0  0.41  0.38  0.29  0.30  0.29
      MaxP:  0.0  0.87  0.76   0.0  0.56   0.0  0.51  0.46  0.41  0.40  0.40
      N   :    0   198     7     0     3     0     5     4    26    45   853

 1.75 AvrP:  0.0 0.638   0.0 0.671   0.0 0.562 0.555 0.530 0.458 0.417 0.449
 2.00 MinP:  0.0  0.56   0.0  0.64   0.0  0.56  0.53  0.50  0.41  0.40  0.40
      MaxP:  0.0  0.90   0.0  0.70   0.0  0.56  0.58  0.57  0.50  0.44  0.52
      N   :    0   215     0     2     0     2     5     6    23    18   683

 2.00 AvrP:  0.0 0.692   0.0 0.760 0.691 0.647 0.600 0.588 0.584 0.586 0.570
 2.25 MinP:  0.0  0.59   0.0  0.72  0.66  0.60  0.58  0.57  0.52  0.52  0.51
      MaxP:  0.0  0.94   0.0  0.79  0.73  0.69  0.61  0.61  0.62  0.62  0.62
      N   :    0   182     0     3     4     3    11     9    34    28   556

 2.25 AvrP:  0.0 0.729   0.0   0.0   0.0   0.0   0.0   0.0   0.0 0.627 0.636
 2.50 MinP:  0.0  0.62   0.0   0.0   0.0   0.0   0.0   0.0   0.0  0.62  0.62
      MaxP:  0.0  0.98   0.0   0.0   0.0   0.0   0.0   0.0   0.0  0.63  1.00
      N   :    0   197     0     0     0     0     0     0     0     6    46

 2.50 AvrP:  0.0 0.758   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 2.75 MinP:  0.0  0.66   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      MaxP:  0.0  1.00   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      N   :    0   140     0     0     0     0     0     0     0     0     0

 2.75 AvrP:  0.0 0.786   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 3.00 MinP:  0.0  0.73   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      MaxP:  0.0  0.99   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      N   :    0   119     0     0     0     0     0     0     0     0     0

 3.00 AvrP:  0.0 0.834   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 3.25 MinP:  0.0  0.77   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      MaxP:  0.0  0.99   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      N   :    0   126     0     0     0     0     0     0     0     0     0

 3.25 AvrP:  0.0 0.887   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 3.50 MinP:  0.0  0.83   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      MaxP:  0.0  1.00   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      N   :    0    81     0     0     0     0     0     0     0     0     0

 3.50 AvrP:  0.0 0.945   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 3.75 MinP:  0.0  0.86   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      MaxP:  0.0  0.99   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      N   :    0    51     0     0     0     0     0     0     0     0     0

 3.75 AvrP:  0.0 0.971   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 4.00 MinP:  0.0  0.80   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      MaxP:  0.0  1.00   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      N   :    0    30     0     0     0     0     0     0     0     0     0

 4.00 AvrP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 4.25 MinP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      MaxP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      N   :    0     0     0     0     0     0     0     0     0     0     0

 4.25 AvrP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 4.50 MinP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      MaxP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      N   :    0     0     0     0     0     0     0     0     0     0     0

 4.50 AvrP:  0.0 0.997   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 4.75 MinP:  0.0  1.00   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      MaxP:  0.0  1.00   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      N   :    0     1     0     0     0     0     0     0     0     0     0

 4.75 AvrP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
 5.00 MinP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      MaxP:  0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
      N   :    0     0     0     0     0     0     0     0     0     0     0

Each cell has four values printed, one per row. The first value is the
average value of the relative gate opening, P. The second value, on the
second row, is the minimum value of P found. The third value is the
maximum value of P found, and the last value is the number of cases
occurring in that cell.
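The cell statistics above can be illustrated with a short sketch of the
binning step: each solved case (flood stage, head difference, relative
opening P) falls into the cell defined by the ROW_BDYS and COL_BDYS
sequences, and the midpoint of each cell becomes the table argument. This
Python sketch is illustrative only; it is not the FEQUTL code, and the
names are invented.

```python
# Illustrative sketch of how INV_GATE accumulates the per-cell statistics
# (AvrP, MinP, MaxP, N) shown in the summary output above.
# Not the actual FEQUTL code; names are invented.

from bisect import bisect_right

def summarize(cases, row_bdys, col_bdys):
    """cases: iterable of (flood_stage, head_difference, opening P).

    row_bdys/col_bdys are the increasing cell-boundary sequences given
    by the ROW_BDYS and COL_BDYS keywords.
    """
    nrows, ncols = len(row_bdys) - 1, len(col_bdys) - 1
    stats = [[{"n": 0, "sum": 0.0, "min": None, "max": None}
              for _ in range(ncols)] for _ in range(nrows)]
    for stage, dhead, p in cases:
        i = bisect_right(row_bdys, stage) - 1   # row cell index
        j = bisect_right(col_bdys, dhead) - 1   # column cell index
        if 0 <= i < nrows and 0 <= j < ncols:   # ignore out-of-range cases
            c = stats[i][j]
            c["n"] += 1
            c["sum"] += p
            c["min"] = p if c["min"] is None else min(c["min"], p)
            c["max"] = p if c["max"] is None else max(c["max"], p)
    return stats

def cell_argument(bdys, k):
    """Midpoint of cell k becomes the argument value in the type 10 table."""
    return 0.5 * (bdys[k] + bdys[k + 1])
```

The average for a cell is then sum/n, which corresponds to the AvrP row,
and cells with n of zero show as the zero entries discussed above.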
The cell boundaries for columns are given in two rows at the top of the
table. Each column is headed by the cell boundaries for that column.
Adjacent columns share a boundary in common. Each row has its cell
boundaries given as well, and again adjacent rows share a common cell
boundary. As an example, if the flood stage at the control point was
between 3.75 and 4.00 feet and if the difference (elevation at the
upstream node minus elevation at the downstream node) was between 0 and
0.2 feet, then there were 30 cases. The average gate setting was 0.971,
the minimum gate setting was 0.80, and the maximum was 1.0. The type 10
table given above was modified manually to read as follows:

; Modification of the table created by the INV_GATE command in FEQUTL.
; Control table for the low-level gates
TABLE#= 5300 TYPE= -10
'( 13A6)' '(1X, 13A6)' '(F6.0)' '( 13F6.0)' '(1X,F6.1, 12F6.2)'
HDATUM= 673.500
LABEL= Control by Salt Creek Level at the gate.
Fstage -9999.  0.00  0.10  0.30  0.50  0.70  0.90  1.25  1.75  3.00  5.00 50.00
-9999.   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  0.10   0.0   0.0 0.091 0.047 0.040 0.034 0.034 0.007 0.006 0.006 0.006 0.006
  0.30   0.0   0.0 0.168 0.087 0.073 0.063 0.063 0.045 0.032 0.022 0.022 0.022
  0.50   0.0   0.0 0.289 0.151 0.126 0.108 0.109 0.080 0.059 0.055 0.050 0.050
  0.70   0.0   0.0 0.390 0.280 0.197 0.137 0.126 0.117 0.115 0.089 0.084 0.084
  0.90   0.0   0.0 0.451 0.362 0.276 0.265 0.234 0.199 0.159 0.129 0.113 0.113
  1.13   0.0   0.0 0.531 0.473 0.420 0.353 0.317 0.279 0.239 0.175 0.175 0.175
  1.38   0.0   0.0 0.608 0.523 0.518 0.495 0.450 0.364 0.306 0.272 0.248 0.248
  1.63   0.0   0.0 0.604 0.652 0.635 0.543 0.532 0.463 0.417 0.353 0.339 0.339
  1.88   0.0   0.0 0.638 0.689 0.671 0.574 0.562 0.555 0.530 0.458 0.417 0.417
  2.13   0.0   0.0 0.692 0.747 0.760 0.691 0.647 0.600 0.588 0.584 0.584 0.584
  2.38   0.0   0.0 0.730 0.788 0.802 0.729 0.683 0.633 0.620 0.616 0.616 0.616
  2.63   0.0   0.0 0.758 0.818 0.833 0.757 0.709 0.660 0.644 0.640 0.640 0.640
  2.88   0.0   0.0 0.786 0.848 0.864 0.758 0.735 0.684 0.668 0.664 0.664 0.664
  3.13   0.0   0.0 0.833 0.899 0.916 0.803 0.779 0.725 0.708 0.704 0.704 0.704
  3.38   0.0   0.0 0.886 0.956 0.974 0.854 0.829 0.771 0.753 0.749 0.749 0.749
  3.63   0.0   0.0 0.945 1.000 1.000 0.911 0.884 0.822 0.803 0.799 0.799 0.799
  3.88   0.0   0.0 0.971 1.000 1.000 0.936 0.908 0.845 0.825 0.821 0.821 0.821
  4.13   0.0   0.0 0.981 1.000 1.000 0.946 0.917 0.854 0.833 0.829 0.829 0.829
  4.38   0.0   0.0 0.990 1.000 1.000 0.955 0.925 0.862 0.841 0.837 0.837 0.837
  4.63   0.0   0.0 1.000 1.000 1.000 0.965 0.934 0.871 0.849 0.845 0.845 0.845
 10.00   0.0   0.0 1.000 1.000 1.000 0.965 0.934 0.871 0.849 0.845 0.845 0.845
 -1.0

Note that the region for the gate being closed was enlarged. During test
runs the Salt Creek model had some rather erratic values that went below
the ranges in the original table. Thus, the ranges were extended to larger
negative values. The rough rule of thumb for filling in the values was as
follows:

1. In table 125, the one produced by the INV_GATE command, each row
appears to decline to a nearly constant value.

2. The zero entries were filled in using the ratio between non-zero values
in adjacent rows to the left of a missing value or values. For example,
the row with argument 0.30 has a string of missing values following 0.087.
The row below it, with argument 0.50, has values in each of the columns
that contain a zero in the row above. Thus, the first missing value in the
row with argument 0.30 was estimated as follows:

0.087
----- x 0.126 = 0.073
0.151

This pattern was followed in nearly all cases. If the computed value was
greater than 1.0, it was set to 1.0. Previously computed values were used
as needed to fill in missing values as well. In this manner crude
estimates of values could be defined for each cell in the table.

This process does not define an optimum table for any purpose; all it does
is define a valid table. However, the parts of the table based on the
performance of the overflow weir will produce results that are similar to
those produced by the overflow weir. As test runs are made with the sluice
gate, performance of the operation will probably not be quite what was
desired. One can then find the general range of the deficiency in
performance in terms of flood stage and the difference in water-surface
elevations upstream and downstream of the gate. Once this range is found,
appropriate adjustments can be made to the gate setting in that region.
Thus, with multiple test runs an improved gate-operation rule can be
developed.

Note: The INV_GATE command has only been tested with the inputs shown in
the example. Other input options are generally not supported yet. It is
possible, however, to have input of all four time series when the control
node differs from the upstream node. That option is supported in the code
but has not been tested yet. Also, the number of cells for columns has a
limit of 19. Increasing that limit can be done if needed.
The number of rows is currently set at 40. Changing the number of rows
only requires changing the parameter value MRDT10 in the file arsize.prm.
Increasing the limit for columns above 19 has broad implications: formats
need to be changed, and the line length in the processing of input of
type 10 function tables must be changed to increase that limit.

Version 4.81 May 22, 1998

--Modified output of cross-section tables to the standard user output to
include the average value of Manning's n at each depth in the cross
section. This can be helpful in checking on the variation of Manning's n
with changes in depth.

Version 4.85 June 18, 1998

--A number of changes were made in the way the various source files are
arranged:
1. The COMPROG directory was changed to SHARE.
2. COMPROG.FOR was broken into many smaller units.
3. ARSIZE.PRM for FEQ and FEQUTL were combined into one file.
4. Several .COM files between FEQ and FEQUTL were the same or nearly so.
These were adjusted to be the same and moved to the SHARE directory to be
used by both FEQ and FEQUTL.
5. One bug was found in FEQ that occurred on an SG computer; all other
compilers did not encounter the problem.
These tasks were done by RS Regan, USGS, Reston. I have made check runs of
the modified code and have found no differences. However, there may be
options not tested that could cause problems.

Version 4.86 July 21, 1998

--Found and corrected an error in the CHANNEL command when stationing in
the sinuosity definition table was decreasing. Apparently no one had ever
used this option heretofore.

Version 4.90 September 8, 1998

--Modified routines for finding critical and normal depth to improve the
search for an interval containing a root. May change results slightly
since the improved search now locates a smaller interval before the final
solution is sought.

--Modified the lower limit for the high-head flows in CULVERT to be
(TY1HTD + 0.1001)*D. If TY1HTD is kept at its default of 1.4, the results
will be the same as in previous versions.
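The revised high-head lower limit is a one-line formula and can be checked
by hand. The following sketch simply evaluates (TY1HTD + 0.1001)*D as
described in the Version 4.90 note; the function name is invented, and D
here stands for the vertical diameter of the barrel.

```python
# Sketch of the Version 4.90 lower limit for high-head flow in CULVERT:
# (TY1HTD + 0.1001)*D.  Illustrative only; the name is invented.

def high_head_lower_limit(d, ty1htd=1.4):
    """Head defining the lower limit of the high-head flow types.

    d      -- vertical diameter of the culvert barrel
    ty1htd -- the TY1HTD input value (default 1.4, as in FEQUTL)
    """
    return (ty1htd + 0.1001) * d
```

With the default TY1HTD of 1.4 and a 6-foot barrel, the limit is
1.5001 * 6 = 9.0006 feet of head.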
--Modified the limit used for calling the low-head routines in CULVERT to
use the computed limit if either type 1 or type 2 has a limit set. In
previous versions both type 1 and type 2 had to have a limit set before
the computed limits were used in the decision of calling the low-head or
high-head routines. Might change some flows close to the boundary between
high-head and low-head flows.

--Added a new command, UFGCULV, that models a sluice gate on the upstream
face of a box culvert. There are many restrictions on this command as it
is implemented:

1. The barrel of the box culvert must not have an adverse slope.
2. The sill elevation must match the invert of the box culvert at its
upstream end. There is currently no provision for a drop from the sill
into the culvert.
3. The barrel of the box culvert must be prismatic: constant shape,
roughness, and slope.
4. The departure reach must be horizontal and prismatic.

In order to support this new command, the CULVERT command was also
modified. Flows through the culvert not in contact with the lip of the
sluice gate are computed in a CULVERT command. The results of the CULVERT
command computations are stored internal to FEQUTL when the user gives two
options following the table number for the CULVERT command. An input
fragment is:

CULVERT
TABLE#= 128 PUTQ= 976 PUTY2= 977
TYPE= 13
LABEL=SIMPLE CULVERT EXAMPLE
. . .

The two options must appear on the same line as TABLE#, but they can be in
any order and spacing. The equal signs must also appear, and be sure the
names are spelled exactly as shown. PUTQ gives a table number to be used
to store the flows for the culvert in a table of type 13, and PUTY2 gives
the table number for storing the depth of flow at the entrance to the
culvert, section 2, in another table of type 13. These tables do not
appear in the function-table file. They are stored in the function-table
storage system within FEQUTL.
The CULVERT command must appear before the UFGCULV command so that these two tables are known. The CULVERT command must also use the same tables for approach and departure sections as the UFGCULV command. The barrel description must match, and there can be NO flow over the roadway. Any flow over the roadway must be modeled using a separate EMBANKQ command.

The input for UFGCULV is a slightly modified form of the input for UFGATE. An example follows:

UFGCULV TABLE#= 580 GETQ= 976 GETY2= 977 LABEL= Sluice gate on culvert
APPTAB= 4
DEPTAB= 5
SILLELEV= 50.
GATEWIDTH= 30.
CD= 0.98
CCTAB=200
FWFOTRAN= 0.1
MAXHEAD= 25.0
MINHEAD= 0.2
PRECISION= 0.02
Opening   2-D Table   Cc Value   Lip Angle
 0.2         5801
 0.2645      5802
 0.3497      5803
 0.4625      5804
 0.6116      5805
 0.8087      5806
 1.0694      5807
 1.4142      5808
 1.8701      5809
 2.4730      5816
 3.2702      5819
 4.3245      5818
 5.7186      5810
 7.5621      5817
10.0000      5811
-1.0
SFAC= 1.0
NODE NODEID     XNUM   STATION ELEVATION
 100 TESTCLVU    999      0.00     50.00
     TESTCLVD    999    100.00     50.00
  -1
Partial free drop parameters
MINPFD= 0.01 BRKPFD= 0.5 LIMPFD= 0.99 FINPOW= 2.0

Two new options follow the table number input: GETQ and GETY2. These are the counterparts to PUTQ and PUTY2 in the CULVERT command. The same rules apply as in the CULVERT command. The table numbers obviously must match those used in the CULVERT command. The other input is the same. Currently the input for CD is required, but UFGCULV does not use the value. The lip-angle option for tainter gates is not currently supported. It may be added in the future if a tainter gate is placed on a box-culvert entrance.

The barrel description follows the end of the gate-opening table. It follows a format similar to that used in the CULVERT command. However, special losses are not supported at this time. Because the barrel must be prismatic, only two points on the barrel can be given: the barrel entrance and the barrel exit. You must be sure that the length, cross-section description, and invert elevations match those used in the CULVERT command.
The contraction coefficient for the gate can vary with the gate opening relative to the head at section 1. Head is always measured from the entrance invert of the culvert, which must match the sill elevation for the gate. The invert elevation for the approach cross section and the departure cross section comes from the cross-section tables. Also, the distance between the approach cross section and the culvert entrance is determined from the difference between the station at the entrance and the station of the approach cross section. Currently the stations along the barrel must increase from entrance to exit. In the above example, the station of the approach cross section would be negative to yield a positive distance when the difference is computed.

Some special problems have been encountered in convergence of the iterations for the non-linear system of equations. This appears to arise because the tables for the UFGCULV command are not as smooth as those for the UFGATE command. In the UFGATE command it proved possible to force continuity at all of the transitions between flow types or patterns. It has not proved possible to do the same in UFGCULV because the flow in the barrel complicates the flow patterns. Thus, some transitions are rather abrupt, especially in the changes that take place in the first derivatives used in the solution of the non-linear equation system. It has proved necessary to use the SQREPS option. This option is usually set to 1E25 or so, so that it does not have an effect on the solution. The value to set will depend on the nature of the model and the nature of the convergence failure. In the small model used for testing, the failure was clearly located at the upstream and downstream exterior nodes involved in the specification of the underflow gate. The table for the UFGCULV command is used just as if it came from the UFGATE command.
The pattern in the iteration log was that the maximum relative correction stayed about the same size iteration after iteration. An example is:

SIMULATION ending at 1990/ 1/ 2/ 20.501 with time step of 3.5 sec
ITER  RCORECT  BRA  NODE    MXRES  LOC   SUMSQR  NUMGT
   1  1.5E-01    1   117  3.0E+02   35  1.1E+05      3
   2  1.4E-01    2   201  3.6E+02   35  1.3E+05      3
   3  1.7E-01    1   117  3.2E+02   35  1.0E+05      3
   4  1.4E-01    1   117  3.6E+02   35  1.3E+05      3
   5  1.7E-01    2   201  3.2E+02   35  1.0E+05      3

The control structure was located between the dns end of branch 1 (node 117) and the ups node of branch 2 (node 201). Note that the relative correction (RCORECT column heading) is nearly constant. The maximum residual (MXRES column heading) was also nearly constant, and the location of the maximum residual was constant. The column headed by SUMSQR is the key to the selection of the SQREPS value to try to force convergence. In the current example a value of 1E4 was used. This forced a few partial Newton corrections; the computations converged quickly, and the time step was only reduced for a short time. The value may be much larger in a larger model. If the value for SQREPS is made too small, there will be too-frequent partial Newton corrections, and the time step may again become small enough to abort the run. If it is picked too large, no partial Newton corrections will be forced, and the run will fail. In the current test model, values of SQREPS of 1E3 and 5E4 both worked. However, a value of 1E5 failed because it is essentially the same as the values reported in the iteration log.

There are some special problems that arise when the gate is fully open to the vertical diameter of the box. The default parameters for type 1 flow may cause the flow type in the CULVERT computations to change from type 1 too soon. The default parameters are now set for both pipe and box conduits.
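The SQREPS selection rule illustrated above (choose a value an order of magnitude below the stalled SUMSQR values, as in the 1E4-versus-1E5 example) can be sketched as a small helper. The function and its stall test are illustrative assumptions, not part of FEQ or FEQUTL:

```python
def suggest_sqreps(sumsqr_values, stall_ratio=0.5):
    """Suggest a SQREPS value from the SUMSQR column of an iteration log.

    If SUMSQR stays within a narrow band iteration after iteration (a
    stall), return a value one order of magnitude below the largest logged
    SUMSQR so that partial Newton corrections are forced. The stall_ratio
    criterion is an illustrative assumption, not FEQ's actual rule.
    """
    lo, hi = min(sumsqr_values), max(sumsqr_values)
    if hi > 0.0 and lo / hi > stall_ratio:
        return hi / 10.0
    return None  # no stall detected; leave SQREPS at its large default

# SUMSQR column from the stalled log above:
print(suggest_sqreps([1.1e5, 1.3e5, 1.0e5, 1.3e5, 1.0e5]))  # 13000.0
```

A suggestion of about 1E4 from SUMSQR values near 1E5 matches the value that forced convergence in the test model described above.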
For a box culvert the values can be increased so that the depth at section 1 is nearly the same as the vertical diameter before the flow type switches from type 1. This discussion assumes that the barrel of the culvert is steep so that type 1 flow will exist until the entrance is submerged, leading to type 5 flow in most cases. Optional parameters exist in the CULVERT command to change the default limits for type 1 flow. The input fragment from a CULVERT command example shows the position, names, and approximate values for the parameters that have worked. TY1YTD gives the ratio of depth at section 2 to the vertical diameter, D, of the culvert at the type 1 upper limit. TY1HTD gives the ratio of head at section 1 to the vertical diameter of the culvert at the upper limit for type 1. The former limit often determines a limit that is smaller than the latter limit. For example, the initial output from CULVERT showed that the upper limit at section 1 for type 1 flow was 1.549D as defined by the section 2 value. Making the section 2 value closer to D, that is, 1.00, will yield larger values for the head-to-diameter ratio. However, care must be used. If TY1YTD is made too close to 1.00, we will be computing critical flow in the slot. Thus, the value of TY1YTD should be such that the top width at that level from the cross-section table produced by the BOX option in MULCON will still be the width of the box and not the smaller widths found as the vertical diameter is approached. The BOX option puts a small slope on the top of the box to avoid a discontinuity in the top width. The maximum value for TY1YTD should be at or below this level.

 .
 .
 .
TYPE 5 PARAMETERS
RBVALUE= 0.00  BVANGLE= 0.00  WWANGLE= 0.00  LPOVERD= 0.00  TYPE5SBF= 0.75
TYPE 1 PARAMETERS
TY1YTD=0.997  TY1HTD=1.59
Roadway Description
 .
 .
 .

There are also some special output messages that may appear in the tables written to the user output file.
In some cases, submergence of an FW flow by the tailwater will lead to a submerged orifice (SO) flow based on the relations among the headwater, tailwater, and gate opening. However, computation assuming that the gate is submerged will show that it is not quite submerged. If such is the case, a message "SO FLOW is not SO!" will be given. In some cases a flow value will be followed by an asterisk (*), indicating that the assumption of SO flow led to critical depth at the culvert exit. The former message would normally appear for steep culvert slopes, and the * may appear for mild-slope or horizontal culvert barrels. In the former case, the flow is given the free-flow value and the computations continue. In the latter case, the flow computed assuming critical depth at the culvert exit is reported and the computations continue.

The UFGCULV command does not include the effect of gravity on the momentum balances used in the sloping culvert barrel. This may distort the results, especially for steep barrels. Experimentation with the adjustments for the effect of barrel slope will indicate whether the results are reasonable. The hydraulic jumps are assumed to have zero length. In some cases, especially with submerged jumps, the high-velocity jet of water persists for some distance downstream, often exceeding the length of a free jump. This means that the momentum flux in the culvert barrel is not well defined.

The error estimation and control for the UFGCULV command is not as well developed as for the UFGATE command. Therefore, the additional points in the computation that are used to estimate the errors are retained in the output of the table, which generally results in error estimates that are too large. The UFGCULV computation takes more run time; therefore, intermediate output is printed to the user terminal during execution.

Version 4.91 October 28, 1998

--Found an error in UFGCULV in which SFAC was not applied to the station found at the approach section.
If SFAC > 1, this caused problems.

--Found a case at low heads in which a zero flow was computed when the drop was small. This caused a failure during the computation of the local power, P, involving logarithms. Made a quick fix. Must check later why the flow was computed as zero when it should have been slightly greater than zero. However, this portion of the table is unlikely to be used, and the error in any case is small.

Version 4.92 January 27, 1999

--Modified various commands that output 2-D tables to place a source string after the head datum value. This is used to check tables created from WSPRO data in FEQ. Other source values were added in case they prove useful.

Version 4.93 4 February 1999

--Added a source flag for cross-section function tables. If the flag is 0, the table was input to FEQUTL, and if the flag is 1, the table was interpolated within FEQUTL. Caused change of XTIOFF to 21 in arsize.prm.

Version 5.00 30 July 1999

--The change to using a function-table id rather than a table number requires that many references to tables in the input to FEQUTL be changed. In some cases, no change is needed if the standard headings have been used. In some cases the standard headings must be modified to give the correct result.

FEQ and FEQUTL currently use a variety of input formats. Here is a brief summary of the various formats and their definitions and distinctions.

fixed format: Response items MUST fall within predefined fixed ranges of columns. This was the only format in the early versions of these programs. This format has a fixed order for the items AND a fixed set of columns for each item. It is the most rigid and has no user control available.

heading-dependent: This is the name given to a format in which items are given in a fixed order and each must appear in a given range of columns, but the user defines the range of columns by giving a heading for each item.
The heading must not have any spaces in it. The range of columns for an item is from the last character in its heading to the column following the last character in the heading for the preceding item (using left-to-right order). If there is no preceding item, the first column for the item is column 1 on the line. This option has greater flexibility than the fixed format in that the range of columns for each item is under user control, even though the order of items remains fixed. However, the user must make sure that each item falls inside its valid column range.

column-independent: Column-independent input still has a fixed order that cannot be changed, but now the items need only be separated by one or more spaces or by a single comma. This gives greater flexibility, but blank fields do not exist, so no item can be skipped in the sequence. Either a value must be given or a place holder, the asterisk in our case, must be given. In some cases the context cannot make sense of an asterisk, and a value must then be given for each item, even if the item is to default to zero.

named-item: Named-item input has the most flexibility because items can be in any order or can even be left out if the default value is suitable. However, the penalty for this additional flexibility is that each item's name, a predefined identifier, must be given with the item. All previous forms use an implied item by virtue of the order of the input. Now that the order is under user control, the name of the item must be given so that the software can associate the value with its proper internal variable.

The following commands have had changes in their input processing. In some cases the old inputs will be processed properly without change. In other cases the old inputs must be changed because they are incompatible with the new method of processing the input.

CULVERT: The culvert barrel description must use NODEID=YES. The NO option is no longer supported.
In addition, the input of the barrel is now taken to be in heading-dependent format. The standard headings, if used, should be suitable. The DEPTAB specification may have to be changed if you gave a value for BEGTAB or RMFFAC. If either of these two optional values has been given, you must use named-item input. That is, each of the three items must have the standard name followed by an equal sign followed by the value. For example,

DEPTAB= 568 BEGTAB=570 RMFFAC= 0.8

The default value for BEGTAB is 0 and for RMFFAC is 1.0.

EXPCON: The standard input will work without change, but the left-justified heading for the column that contains the label for each table will result in the label being truncated to five characters. The heading for that column should be extended to the right so that the heading for the label column has 50 or more characters in it. Dashes, underlines, or any other printing character on the keyboard will probably work. Dashes and underlines have been tested. Of course digits and letters will work.

AXIALPUMP and PUMPLOSS: These inputs have been changed to heading-dependent format. These commands have four lines in each heading: three lines of descriptive text with a fourth line of intermittent dashes. The line of dashes above each column of the input is taken to be the heading for that column. Therefore, the standard headings will work without change.

ORIFICE: This input has been changed to heading-dependent format. The standard headings will work because none of them contains spaces.

UFGATE: The standard headings for the sequence of gate openings DO NOT work because they have spaces in them. The space should be replaced by a dash or an underline.

XSINTERP: The input for XSINTERP actually uses the barrel input routine for the CULVERT command. Consequently, XSINTERP now requires that NODEID=YES; NODEID=NO is not supported. Thus, this input must have a column for the node id added. The contents of this column may be blank.
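The heading-dependent column rule described above (each item ends at the last character of its own heading and begins just after the last character of the preceding heading, with the first item starting in column 1) can be sketched as follows. The function name and the sample heading and data line are illustrative, not taken from FEQUTL:

```python
import re

def split_heading_dependent(heading, line):
    """Split a data line into fields using the column ranges implied by a
    heading line: each field ends at the last character of its heading and
    begins just after the last character of the preceding heading."""
    ends = [m.end() for m in re.finditer(r"\S+", heading)]
    fields, start = [], 0
    for end in ends:
        fields.append(line[start:end].strip())
        start = end
    return fields

# Hypothetical heading and data line in the style of the barrel input:
heading = "NODE   NODEID     STATION"
line    = " 100 TESTCLVU        0.00"
print(split_heading_dependent(heading, line))  # ['100', 'TESTCLVU', '0.00']
```

This rule also explains the EXPCON note above: any text extending past the last column of its heading is cut off, which is why a short label heading truncates labels to five characters.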
FEQX, FEQXLIST, FEQXEXT: The input for the header block, that is, all input that precedes the specification of the points on the boundary, has been named-item since version 4.70. However, most users have probably not taken advantage of all the flexibility this offers. Complications have been found in some cases for STATION and for the information following NSUB. If the STATION response is left blank, interpreted as 0.0 in versions before 4.70, the newer versions will complain that an input item is missing. Recall from the definition above that a blank response does not exist in named-item input because one or more spaces are used to separate the various items and their responses. The default value for STATION is 0.0. However, there are cases, such as the bridge-opening table in the old bridge routine, where STATION has been left blank. That will now cause an error, and STATION may not be able to be left out. I have not tested that option since the old bridge routine is not actively supported at this time. The problem with NSUB occurs when more values of Manning's n are given than the number of subsections. This will cause failure with the new code. The correction is to delete the extra values of n, or increase the count, or leave out the count of subsections completely. In the latter case, remember to use the slash, /, at the end of each line if additional values of n appear on the following line.

EMBANKQ: The newest code will fail in processing the line of input that gives the table number (id) for the command if the optional TYPE, HLCRIT, or HLMAX are given. FEQUTL used to treat these as fixed-format input. FEQUTL now treats the line as being named-item input. TYPE defaults to 13 and can be omitted if that is your desired type. HLCRIT and HLMAX normally appear only if some non-embankment weir flow is being computed. Any items that appear on this line must be in the named-item format.
A program, CONVERTUTL, is available to convert existing input to the new form. This program reads all the lines of the input, detects the cases that need changing, and makes the changes in its output file. The program assumes that headings have been used and that they are in the proper position on the line. Deviations from this assumption may cause the program to fail or to produce an output that will cause an error when FEQUTL processes the new input. Some problems may have to be fixed manually as the new version of FEQUTL is run.

--Error messages. In processing table ids, FEQUTL assigns an internal table number to each table. This internal number is then used everywhere like the id for the table. When an error message is given that contains a table number, FEQUTL is supposed to convert the internal table number to the external table id in the message. It is probable that some error messages have not been corrected and will give the internal table number and not the table id. FEQUTL reports the internal table number assigned to each table that is read from an input file. If an error message appears with a table number that does not exist or does not make sense, scan the output file for the section where the function tables are input. The reported number may be an internal number that was not converted. Also, make a note of the error message number and report the problem so that the message can be corrected.

--The first three lines of the header block for an FEQUTL input can be deleted. These lines have not been used for some time, and FEQUTL has now been changed to recognize their absence. Again, the ability of FEQUTL to do this depends on the standard names being used; that is, the first line MUST start in column 1 with UNITS=.

--The option to specify exact or nearly exact values for the Manning's n factor, NFAC, and for GRAV, the acceleration due to gravity, has been enhanced a bit. These specifications occur as options after the UNITS= option.
Note that this input looks like named-item input but it is fixed-format input. The input

UNITS= ENGLISH

will get a value of NFAC=1.49 and GRAV=32.2. If the metric system is requested by the option

UNITS= METRIC

FEQUTL will use NFAC= 1.0 and GRAV=9.8146. The value for NFAC for the metric system is exact, but its value for the English system is approximate. Thus, if we try to make a comparison between the results obtained with FEQUTL for the same geometry in the English and in the metric system of units, we will find small differences from this source alone. Consequently, the option to get exact or nearly exact values for NFAC and GRAV was implemented. In order to define GRAV, which technically varies from point to point on the surface of the earth, we require an input of the latitude and elevation. These values can be some mean value for the area being modeled. This definition of GRAV is, of course, optional, since the variation of gravity at the elevations normally encountered for unsteady-flow simulation is a fraction of one percent. The latitude is given in degrees and the elevation in feet (meters) above the sea-level datum. To request these values we give the line

UNITS= METRIC EXACT 45.0 0.0

The first number given is the latitude and the last is the elevation. Thus, here we are requesting the value of GRAV at latitude 45 degrees and elevation 0.0 meters. The resulting value is GRAV=9.8062. If we change to the English system we use

UNITS= ENGLISH EXACT 45.0 0.0

and get NFAC=1.485919 and GRAV=32.1726. In order to show the proper location of the responses, the example file for FEQUTL, FEQUTL.EXM, contains the following line

UNITS= ENGLISH NOMINAL 45.0 0.0

which gives the same values as when UNITS= ENGLISH is used alone.

--FEQUTL now reports the version number, the version date, and the date/time of the run at the start of the user output file. The time is given in the format: hour.minutes.seconds.milliseconds.
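The dependence of GRAV on latitude and elevation described above can be reproduced closely with the 1967 international gravity formula plus a free-air elevation correction. The exact formula FEQUTL uses is not stated here, so treat this as an approximation; it does, however, return GRAV=9.8062 for the UNITS= METRIC EXACT 45.0 0.0 example:

```python
import math

def grav_si(latitude_deg, elevation_m):
    """Approximate acceleration of gravity (m/s^2) from the 1967
    international gravity formula with a free-air elevation correction.
    This may differ slightly from the formula FEQUTL actually uses."""
    s = math.sin(math.radians(latitude_deg))
    s2 = math.sin(math.radians(2.0 * latitude_deg))
    g = 9.780327 * (1.0 + 0.0053024 * s * s - 0.0000058 * s2 * s2)
    return g - 3.086e-6 * elevation_m  # free-air correction

g_metric = grav_si(45.0, 0.0)                 # about 9.8062 m/s^2
g_english = g_metric / 0.3048                 # about 32.1726 ft/s^2
nfac_english = (1.0 / 0.3048) ** (1.0 / 3.0)  # about 1.485919
```

The exact English NFAC quoted above, 1.485919, is simply the cube root of the foot-to-meter conversion factor 1/0.3048, which is why the metric NFAC of 1.0 is exact while the traditional English value 1.49 is approximate.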
Version 5.01 17 November 1999

--Found a problem in subroutine RDUP when the range being used included negative values that were small enough that the arithmetic average of the minimum value and maximum value was negative. Changed the code to take the arithmetic average of the absolute values of the minimum and maximum of the range as the basis for comparison. This should not affect the decision for eliminating duplicates from a list of elevations.

Version 5.03 15 December 1999

--Bottom-slot depth is now included in the cross-section function tables so that the depth relative to the original invert can be reported in FEQ. If no slot was added, the bottom-slot depth is 0.0.

--Added interpolation of bottom-slot depth, Easting, and Northing for interpolated cross sections.

--Modified the initial root-interval definition in SBFCHN, the routine used to define the submerged flows for CHANRAT. The previous root-interval definition sometimes caused false root-finding failures. The new method, however, may fail in yet other cases. Decreases in critical flow with increases in maximum flow depth MAY cause failure or strange results.

Version 5.04 5 January 2000

--Modified the CHANNEL command to allow setting and clearing of the bottom slot within the command. The presence of the slot will affect the curvilinear elements slightly, but the effect should be small if the slot is small relative to the cross section.

Version 5.05 13 January 2000

--Found minor bugs:
1. Dummy argument length and actual argument length did not agree in processing input for cross-section headers. Could have caused problems if a cross section had more than 98 subsections.
2. If the interpolation accuracy tables were missing, CHANRAT would enter an endless loop and not report the error. The run is now stopped if the tables are missing.

Version 5.06 18 January 2000

--A roughness factor greater than 0.60 in the CULVERT command will cause failure of a table lookup.
A warning message is issued and the factor is limited to 0.60.

--Found a bug in the default values in the EMBANKQ command. CSHIFT was defaulted to 1.0 when it should have been defaulted to 0.0. This resulted in an increase of the crest elevation for the embankment of 1.0 feet.

--Changed warning 557 for cross-section computation so that it appears the first time it applies and is then suppressed. Typically a vertical-wall extension that is a single subsection will cause ten or more nearly identical messages to be issued. Now only one will be issued.

--Changed the behavior of EMBANKQ and CHANRAT with automatic assignment of breakpoints to avoid a system error when the maximum number of breakpoints was reached. FEQUTL now halts instead of trying to continue.

Version 5.07 4 February 2000

--Found problems with CHANRAT using the modified root-interval finding method introduced in 5.03. Replaced it with the original method. Added a root-interval search to check the results of the old method. Tested on about 75 CHANRAT commands. With positive slopes there were some cases where the detailed search did not find a root interval but the original method converged to a value that appeared to make sense with regard to the adjacent values. The convergence would be obtained with argument collapse; that is, the arguments at the ends of the interval that is thought to contain a root are so close together that it no longer makes sense to continue the computations. The root-finding routine issues a warning if the residual at argument collapse is more than twice the relative function tolerance, EPSF.

Version 5.09 18 February 2000

--Corrected a bug in the internal accounting of function tables. Added internal generation of tabids to correct the problem in FEQUTL. The bug would sometimes cause failure of a run with a spurious duplicate function-table number message. The internal tabids have been generated in a form that is invalid as an external table id. Thus, they will never conflict with a user table id.
The internal table ids can appear in error messages about an interpolated table. The internal table ids contain a sequence number that is unique for each id. The interpolation routine adds the starting and ending internal tabid numbers to its output. Thus, it is possible to locate which table is the source of the error even though the table was generated from a request for interpolation with the minus sign in the tabid field.

Version 5.10 13 March 2000

--Added a new command, MKWSPRO, to create a WSPRO cross section from an FEQ cross section. Does not support Manning's n varying with water level. An example command is:

MKWSPRO NAME=EXIT
FEQXEXT
 .
 .
 .

where the triple dots following FEQXEXT indicate the remainder of the FEQXEXT command input. The NAME field gives a name to the WSPRO cross section. This must be chosen to match the names needed in the WSPRO runs and their subsequent analysis using the WSPRO set of commands in FEQUTL.

--Fixed a minor bug detected by the LF95 compiler when running with the chk option turned on. This did not affect any results and only appeared because the chk option highlights things that might cause problems.

--Added a command to create the EMBANKQ embankment crest description from the (x,y,z) trace of the crest and the approach taken from a digital terrain model or from ground survey. See the example input file for the format details.

Version 5.11 14 April 2000

--Increased the length allowed for a fully qualified file name in the FTABIN command to 128 characters. The previous limit was 64 characters.

Version 5.2 11 July 2000

--Added an option to XSINTERP for definition of the coordinate location of the intersection of the cross sections with the river-mile-defining flow line. This means that the river-mile-defining flow line must be given to the XSINTERP command! This flow line must be the exact flow line used in defining the stations for the cross sections used in the XSINTERP command as the basis for the interpolation.
Example input with optional flow-line data provided:

XSINTERP SFAC= 5280. NODEID=YES
FLNAME=RAFLMM
FLFILE=\nooksack\lower\futl\r1\xsec\flbnk.xy
NODE NODEID -----------TABID STATION ELEVATION
 100        RAXSAB_M.93      4.6189  TAB
 101        -RAABBK_M.1      -
 102        -RAABBK_M.2      -
 103        -RAABBK_M.3      -
 104        RAXSBK_M.93      4.4781  TAB
 .
 .
 .

Two additional lines are inserted after the NODEID-option input line. The first line gives the six-character name for the flow line. This name must appear in the file given in the following line of input. This file will contain one or more flow lines used by the RMASSIGN program. A fragment of this file is:

 .
 .
 .
RAFLMM ;This is the main river-mile flow line
10 RM_ORIGIN S_ATRMORG RM_FACTOR S_BEGIN
      1.3554   5413.95    5280.0     0.0
RAFLMM1000 1215725.3561 653637.4225 0.00 PI
RAFLMM1001 1216036.9169 653932.5854 0.00 PI
RAFLMM1002 1216209.0953 654260.5441 0.00 PI
RAFLMM1003 1216323.8809 654449.1204 0.00 PI
RAFLMM1004 1216479.6613 654596.7019 0.00 PI
RAFLMM1005 1216774.8242 654711.4875 0.00 PI
 .
 .
 .
END

The river-mile-defining line is used to define the (x,y) location of the interpolated cross sections. These data are needed for drawing a geographic schematic of the model. If these items of information are missing, the schematic cannot be produced.

Version 5.21 5 August 2000

--Found a bug in error detection for non-increasing offsets in the roadway-profile-description-input routine for EMBANKQ. Statements existed for checking for non-increasing offsets, but one statement was missing, so non-increasing offsets would not be detected. Such offsets are in error and sometimes cause enigmatic failures later in the computations. The missing statement was added to the code, so EMBANKQ inputs that have non-increasing offsets will no longer complete successfully.

Version 5.23 6 September 2000

--Modified the input to CHANRAT so that a response of TAB or tab for the middle elevation value will find the invert elevation from the cross-section table defining the rating.
This option was added to save time in defining the input for the command. Otherwise one has to search for the minimum elevation when the cross-section function table is being computed and does not already exist.

Version 5.25 3 October 2000

--XSINTERP did not output the slot-depth value for interpolated sections even though the depth was calculated. Added output of slot depth based on linear interpolation.

Version 5.30 30 October 2000

--Added a new command, LPRFIT, to compute estimates of surface area for a level-pool reservoir when only the storage capacity is known. The following example outlines the pattern of the input. The line numbers in the first two columns are NOT part of the input. Column 1 is given by the starting character in LPRFIT.

01 LPRFIT
02 TABID= LPR1
03 FIT_WITH = VLSPLINE
04 CHK_OPTION =NATURAL
05 LEFT_SLOPE= 0.002
06 RIGHT_SLOPE = PARABOLIC
07 INFAC = 27.0    ' Input values are in cubic yards.
08 OUTFAC = 43560. ' We want the output values to be in acre-feet
09
10 Elevation   Storage
11     57.00      0.00
12     58.00     15.96
13     59.00    357.20
14     60.00   2153.56
15     61.00  10858.47
16     62.00  31139.85
17     63.00  65670.90
18     64.00 109513.92
19     65.00 161828.04
20     66.00 223869.39
21     67.00 294804.77
22     68.00 373155.19
23     69.00 457841.99
24     70.00 548079.82
25 -1.0

The items of input in lines 2 through 8 are named-item. TABID gives the table id of the table of type 4 created by LPRFIT. FIT_WITH gives the fitting option used to estimate the missing values of surface area. The options are: CSPLINE, which uses a cubic-spline fit to the elevation and storage data and computes the slope at each point as an estimate of the surface area; VLSPLINE, which uses a variation-limited cubic-spline fit in which the slopes computed using a cubic spline are adjusted to force the storage to be a monotone function between tabulated points; and PCHERMITE, which estimates areas by using three-point derivative estimates at each tabulated value, using the central point of the three at locations away from the boundary.
The most reliable method appears to be VLSPLINE because it yields at least increasing values of storage if the given data are also increasing. CHK_OPTION gives the checking that is to be done on the results. The options are: NATURAL, which assumes a natural LPR in which the storage and surface area always increase with an increase in water-surface elevation; CONSTRUCTED, in which only the storage is known to increase with elevation; and NONE, in which no checking of either storage or area is done. LEFT_SLOPE gives the initial area at the first argument in the data series. The options are: LINEAR, which computes the slope (area) from the first two points in the data series and imposes it at the left end; PARABOLIC, which computes the slope from the first three points in the data series and imposes it at the left end; and a numeric option, in which the user gives an actual numeric value for the slope. The slope is given in the proper units as they exist in the final table. RIGHT_SLOPE gives the final area at the last argument in the data series and has the same options as LEFT_SLOPE. INFAC is a conversion factor for the values of storage as input. In this example the storage values are in cubic yards; thus, INFAC will convert the storage to cubic feet on input. OUTFAC is a conversion factor to use on output of the values. In this case we want acre-feet tabulated in the final table, and thus OUTFAC=43560. ARGFAC, not used here, is a conversion factor applied on output to the argument sequence. It can be used to convert units in the argument sequence.

All of the named-item options have default values. Some are designed so that their use will cause an error; this is to prevent invalid dependence on default values. The defaults are:

TABID =       (all blanks; using this default will cause an error)
FIT_WITH =    VLSPLINE
CHK_OPTION =  NATURAL
LEFT_SLOPE =  LINEAR
RIGHT_SLOPE = LINEAR
ARGFAC =      1.0
INFAC =       27.0
OUTFAC =      43560.

You may have to adjust the LEFT_SLOPE and RIGHT_SLOPE to get valid variation of area if the CHK_OPTION is NATURAL.
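The quantity LPRFIT estimates is the derivative of storage with respect to elevation. A minimal sketch using simple difference quotients (central differences inside, one-sided at the ends) on the example data illustrates the idea and the INFAC/OUTFAC unit handling; this is cruder than the spline fits LPRFIT actually offers:

```python
def area_estimates(elev, storage, infac=27.0, outfac=43560.0):
    """Estimate surface area as dS/dz by difference quotients.
    INFAC converts the input storage to cubic feet (cubic yards here);
    dividing the resulting slope (ft^2) by OUTFAC expresses the area in
    acres, consistent with storages output in acre-feet."""
    s = [v * infac for v in storage]  # storage in cubic feet
    n = len(elev)
    areas = []
    for i in range(n):
        if i == 0:                    # one-sided at the left end
            d = (s[1] - s[0]) / (elev[1] - elev[0])
        elif i == n - 1:              # one-sided at the right end
            d = (s[-1] - s[-2]) / (elev[-1] - elev[-2])
        else:                         # central difference inside
            d = (s[i + 1] - s[i - 1]) / (elev[i + 1] - elev[i - 1])
        areas.append(d / outfac)      # ft^2 -> acres
    return areas

elev = [57.0, 58.0, 59.0, 60.0, 61.0, 62.0, 63.0, 64.0,
        65.0, 66.0, 67.0, 68.0, 69.0, 70.0]
stor = [0.00, 15.96, 357.20, 2153.56, 10858.47, 31139.85, 65670.90,
        109513.92, 161828.04, 223869.39, 294804.77, 373155.19,
        457841.99, 548079.82]
areas = area_estimates(elev, stor)
```

For this data set every first difference of storage increases, so the estimated areas increase with elevation, which is the behavior the NATURAL check option demands.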
It may not be possible to get valid variation of area in this case,
given the data and the tabulation interval. If the decrease in surface
area is small relative to the area, then allowing decreases in surface
area will probably not distort the results greatly. Generally, the best
way to get a valid variation of the surface area is to retain the
zero-storage point and delete intermediate points that have small
storage. These small storage values are often uncertain in any case. The
minimum non-zero storage in the input should be one acre-foot or more
for maximum capacities on the order of a few hundred to a few thousand
acre-feet.

Version 5.31   21 December 2000

--The low-head weir coefficient in EMBANKQ, and in flow over the road in
CULVERT, was improperly allowed to vary with the submerged total head as
an argument in computing submerged weir flow. The argument should have
been held fixed at the free-flow total-head value. Limited testing shows
that the effect is generally smaller than 0.05 percent of the flow. Some
submerged flows less than 10 cfs appear to be affected by less than 0.5
percent. The only values affected are those in which a significant
approach velocity head is present. The flows are unaffected if the
velocity head is negligible.

Version 5.32   22 January 2001

--An erroneous error message was printed if UFGATE had fewer than two
gate openings given. The correct message was added.

--An error existed in computing LEFT and RIGHT options for cross
sections whenever either or both points of intersection did not match an
existing point on the cross section. A check for consistency of elements
in the cross-section function table before it was output should have
caught any of these errors that caused significant differences in the
computed elements.

--Additional screening code was added to FEQUTL to make transfer of a
project to UNIX/LINUX easier. FEQUTL has been modified to change any
end-of-line carriage-return character to a space.
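The carriage-return screening just mentioned amounts to the following (a
minimal Python illustration; FEQUTL itself is Fortran and the function
name here is hypothetical):

```python
# Illustrative sketch only: a DOS-format file ends each line with
# CR+LF; on UNIX/LINUX the CR survives as a stray character at the end
# of the line.  Replacing it with a space makes the line read cleanly.
def screen_line(line: str) -> str:
    return line.replace("\r", " ")
```

For example, the DOS line "EMBANKQ\r" becomes "EMBANKQ " after
screening.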
--Relaxed the tolerance in CHKTAJ when checking the consistency of area
and top width. Small changes in area would cause a reported bug when the
differences were clearly roundoff noise.

Version 5.35   9 February 2001

--Discovered that interpolation for intermediate cross sections when a
bottom slot was present did not work well. The slot was sometimes
greatly distorted at interpolated sections. The interpolation process
was revised to take a bottom slot into account. In order to make this
process reliable, a slot must be present at both sections bounding the
interpolation interval if any slot is present at all. That is, two cases
are valid for interpolation: no slot in either section, and a slot at
both sections. An error is reported and execution halts if only one of
the sections bounding the interpolation interval has a bottom slot.

--EMBANKQ now defines the minimum head for automatic argument definition
using a MINQ input in the standard header. If omitted, MINQ is set to
0.2 cfs, or its equivalent in cms for SI units. You must still give a
minimum head, and it is used in defining the new minimum head. This was
done because having too large a minimum head when flow over the
structure begins has led to computational problems. This is especially
true of surfaces that cause the initial flow to increase at
approximately the 2.5 power of the head. The global value of MINQ can be
overridden in the input to EMBANKQ by giving the value of MINQ=<value>.
The angle braces and their contents should be replaced by your desired
value. For example, MINQ= 0.5. This must be given on the same line as
TABID.

Version 5.36   15 March 2001

--Changed precision of alpha and beta printout in Type 25 cross-section
function tables. Also, changed the output to standard output for FEQUTL
for computation of estimates of critical flow.

--CHANRAT now defines the minimum head for automatic argument definition
using a MINQ input in the standard header. If left out, a value of 0.2
cfs, or its metric equivalent, is assigned.
No override of the global value is allowed in CHANRAT.

Version 5.38   3 October 2001

--Added FHWA option for computing the discharge coefficient for Type 5
flow in culverts.

--Changed message in LPRFIT when the user declined validity checking.

Version 5.39   4 February 2002

--Added EQK option to FLOODWAY and updated input of the table to be
heading-dependent so that table ids are supported. Further changes to
FLOODWAY for an EQA option are underway. Note that it appears that the
implementation of FLOODWAY requires that the FLOODWAY command be the
only command in the input, followed only by the cross-section processing
commands required for it.

Version 5.40   8 May 2002

--Added output of GISID to the final summary for the FLOODWAY command.
In order for the GISID to be found, you must request that every cross
section be saved using one of the SAVE options, such as SAVE20. If this
is not done, the GISID column entry will be blank. Note that FLOODWAY
should be run as the only command, with only the cross-section
processing commands required for it. It may also work if the FLOODWAY
command and the related cross-section commands are first in the input.

--Modified the command-line argument processing so that a single
argument will be taken to be the user-input file. The remaining two file
names will be formed from the user-input file name by stripping off the
extension (the last one if there is more than one dot (.) in the name)
and appending 'out' for the user-output file and 'tab' for the
function-table file. For example, if FEQUTL is invoked as:

  fequtl culvert.in

the output file name will be culvert.out, and the function-table file
name will be culvert.tab.

Version 5.41   17 May 2002

--Added additional checking of the command-line arguments. This has been
done in the hope of avoiding cryptic operating-system messages.

--Checked for matches among the three command-line arguments.
All must be unique to avoid either destruction of the user-input file or
confusion in the output as the operating system writes to one file as if
it were two files.

--The HOME directory option is available in FTABIN. Details for this
option are available in the release notes for FEQ. Briefly, HOME allows
you to give a character string that is prepended to every file name that
follows it in the FTABIN block. It is assumed that prepending this
string to the subsequent names will create a valid file name with either
a partial or complete path that is correct for the operating system you
are using. This will make it easier to transfer projects from one system
to another, but only if one follows a standard pattern in creating the
input to FEQ and FEQUTL. For example, if the project files are all
located on one drive, say D, then leaving D: off the names and placing
HOME=D: at the start of the FTABIN block will give the correct drive.
Later, if the project is transferred to another computer and the project
files are now on the G drive, only a few instances of HOME have to be
changed to make the transfer.

Version 5.42   3 July 2002

--Found some annoying problems with arithmetic precision when computing
cross-section table elements in a slotted section. Differences in
optimization or compilers would produce slightly different results. In
some cases the checking of the top width (T), area (A), and first moment
of area (J) would flag a potential BUG because the check showed
differences in the incremental values that exceeded the preset
tolerance. The tolerances have been increased, and the compilers have
had the ap flag added. This flag forces the compiler to store
intermediate results so that we are more likely to get consistent
results. This does slow the computations, because intermediate results
could otherwise be held in registers. Registers are both faster and
offer greater precision.
However, the order of computations varies, and in some cases the greater
precision later caused problems in the slot. Using the ap flag on the
compiler also appears to have solved a problem just recently discovered
in CULVERT, in which optimization level 1 failed with an error message
but optimization level 0 did not fail. A small difference in the
numerical computations can sometimes cause a large difference in the
final result. This can happen when automated decisions must be made to
determine the conditions of flow. Such decisions happen frequently in
the CULVERT command. The solution to some of these problems will need to
wait until I can find the time to change some values to double
precision. At least the problems with the slot will be eliminated or
reduced with that change.

--Added output of water-surface elevation to the user-output file
results for additional two-D tables. This made it easier to check and
compare results and also to prepare manual tables that made use of
free-flow values only.

Version 5.43   21 October 2002

--If GISID is blank, then FEQUTL will set it to the value of TABID. This
proves useful in being able to see the TABID when explicit labels are
given to a node on a branch.

Version 5.44   17 March 2003

--Modified FLOODWAY command:

1. Added flow as an optional input following all others on a line. Just
add a proper heading and place the flows under the columns defined by
the heading, as for any other heading-dependent input.

2. The flow was added to the input in order to produce a velocity in the
floodway. Thus, the output table for this command has had the flow area
and the velocity added to it.

Example floodway file:

Floodway Specification for the Trib 2 Mainstem
Conveyance Loss= 0.05
Elevation Loss= 0.1
TABID OPTN    ELEV  FEQBOT  LEFT  RIGHT  LOSS   Flow
  100  EQK  695.42  692.20                     1200.
  101  EQK  698.68  696.25                     1200.
  102  EQK  701.7   698.68                     1200.
  103  EQK  704.94  699.20                     1200.
  104  EQK  705.13  701.73                     1200.
  105  EQK  706.75  702.58                     1200.
  106  EQK  709.44  702.70                     1200.
  107  EQK  709.59  704.18                     1200.
  108  EQK  709.9   705.28         2.7         1200.
  109  EQK  713.66  707.80                     1200.
  110  EQK  713.85  709.17                     1200.
  111  EQK  716.96  710.34                     1200.
  112  EQK  716.72  711.16                     1200.
  113  EQK  716.75  712.07                     1200.
  114  EQK  719.96  712.35                     1200.
  115  EQK  720.03  717.23                      120.
  116  EQK  722.65  716.63                     1200.
  117  EQK  722.82  719.40                     1200.
  118  EQK  722.67  719.50                     1200.
  119  EQK  722.93  720.30                     1200.
   -1

Note that the flows have been pulled from "thin air," so to speak--they
do not relate to any real system. The summary table from this floodway
specification file looks like this:

Summary of Floodway Computation results

Table Id-------- GISID----------- FldOpt BF Elevation FEQ Invert Left----- Right---- Loss----- FldwyArea FldwyVel-
             100                     EQK       695.42     692.20    -259.8      74.2     0.058     457.2      2.62
             101 001EBE20101         EQK       698.68     696.25    -208.9      72.9     0.077     285.2      4.21
             102 001EBE20102         EQK       701.70     698.68    -171.9      -1.0     0.073     225.0      5.33
             103 001EBE20103         EQK       704.94     699.20    -101.1     439.1     0.039     878.1      1.37
             104 001EBE20104         EQK       705.13     701.73     -95.2      10.4     0.039     217.8      5.51
             105 001EBE20105         EQK       706.75     702.58     -33.2      24.3     0.037     139.6      8.59
             106 001EBE20106         EQK       709.44     702.70     -87.9      49.2     0.025     502.6      2.39
             107 001EBE20107         EQK       709.59     704.18     -94.0      17.4     0.030     317.9      3.77
             108 001EBE20108         EQK       709.90     705.28     -54.5       2.7     0.009     220.4      5.44
             109 001EBE20109         EQK       713.66     707.80     -53.7      34.4     0.024     308.4      3.89
             110 001EBE20110         EQK       713.85     709.17     -25.0      27.2     0.021     197.4      6.08
             111 001EBE20111         EQK       716.96     710.34     -53.1     101.9     0.022     740.2      1.62
             112 001EBE20112         EQK       716.72     711.16     -45.5      93.7     0.027     494.3      2.43
             113 001EBE20113         EQK       716.75     712.07     -71.0      20.5     0.030     277.4      4.33
             114 001EBE20114         EQK       719.96     712.35    -120.5      17.7     0.015     573.0      2.09
             115 001EBE20115         EQK       720.03     717.23     -17.7       7.0     0.052      46.7      2.57
             116 001EBE20116         EQK       722.65     716.63     -51.6      50.9     0.045     251.1      4.78
             117 001EBE20117         EQK       722.82     719.40     -75.2      45.8     0.040     257.6      4.66
             118 001EBE20118         EQK       722.67     719.50    -109.7      26.7     0.057     226.0      5.31
             119 001EBE20119         EQK       722.93     720.30    -153.1      25.9     0.056     292.6      4.10

If no flow is given, the floodway velocity will be given as zero.

Version 5.46   21 April 2003

--A change was made in imposing variation limitation on a fitted cubic
spline. Currently this option is used in CULVERT and in LPRFIT. In cases
in which the computed derivative was negative and the local trend of the
data showed a positive slope, the variation-limitation code would force
a zero derivative value. This has been changed to fit a positive slope
based on the local behavior of the function as defined by the data
points closest to the point in question. The derivative approximation is
weighted such that it yields the correct derivative if the function is a
parabola and the points are equally spaced.

--An error in computing the bottom-slot insertion point was discovered.
This resulted in a slot one-half the requested width. It was discovered
in cross sections having a vertical boundary at the minimum point of the
cross section. It may have occurred in other situations as well. After
correction, the insertion point was located correctly in all of the test
cases used.
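The revised variation-limiting step for the fitted cubic spline can be
sketched as follows (a Python illustration with hypothetical names; the
actual code is Fortran inside CULVERT and LPRFIT, and this particular
secant weighting is one plausible choice that is exact for a parabola):

```python
# Illustrative sketch only: where the spline derivative is negative but
# the neighboring secant slopes are positive, replace it with a
# positive slope fitted from the local data instead of forcing zero.
def limit_derivatives(x, y, d):
    """x, y: tabulated points; d: cubic-spline derivatives at each point.
    Returns adjusted derivatives (only interior points are changed)."""
    d = list(d)
    for i in range(1, len(x) - 1):
        s_left = (y[i] - y[i - 1]) / (x[i] - x[i - 1])
        s_right = (y[i + 1] - y[i]) / (x[i + 1] - x[i])
        if d[i] < 0.0 and s_left > 0.0 and s_right > 0.0:
            h_left = x[i] - x[i - 1]
            h_right = x[i + 1] - x[i]
            # Weighted secant average: exact for a parabola through the
            # three points; reduces to the plain mean for equal spacing.
            d[i] = (h_right * s_left + h_left * s_right) / (h_left + h_right)
    return d
```

For equally spaced points on y = x*x, the adjusted derivative at the
middle point recovers the true slope, which is the property the release
note describes.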