Descriptions of changes made to FEQ
-----------------------------------
(Descriptions of changes made to FEQUTL follow)

Version 8.07  September 1, 1994: Discovered that the tributary area adjustment factor and the tributary area linear-reservoir delay time were not stored properly. Problems could occur if the factors were not all the same value and if the order in which the branches were presented in the tributary input differed from the order of input of the branches. Also changed the format of the tributary area input echo to suppress the carriage control characters that remained.

Version 8.08  October 19, 1994: Changed the spacing of the function table file name output line when reading function table files. Added a blank line to make the file name appear more clearly.

Version 8.09  March 10, 1995: Changed the FAC option in the Tributary-Area Block. If FAC > 0.0, the action is the same as in previous versions. That is, the FAC value is applied internally but its effect is not shown in the external echo of the values reported in the tributary area summary. This follows because FAC was originally provided as a factor on the unit-area runoff intensity or as a unit conversion. In either of these roles it makes sense that the echo of the tributary area be given to the user in exactly the same units and numeric value as the user supplied. However, it is sometimes convenient to use FAC to allocate tributary area to sub-branches when an existing branch is divided to meet some requirement of the analysis. In this role it makes sense that the echo of the input value SHOULD reflect the value of FAC. In Version 8.09 this role is indicated by giving FAC as a negative value. Note that the change does not affect the internal value of tributary area; FAC has always affected that value. The only change is in the value echoed to the user.
For example, if the area input for branch 30 is 1.0 square miles and FAC = -0.2, then the tributary area stored for branch 30 is 0.2 square miles and the area echoed to the user is 0.2 square miles. However, if FAC = 0.2, then the area stored for branch 30 is 0.2 square miles but the area reported to the user is 1.0 square miles. Also changed the totals for tributary area to reservoirs to include both the total for each reservoir and the cumulative total over reservoirs. In previous versions only the cumulative total tributary area was given for reservoirs having tributary area.

Version 8.10  March 20, 1995: Extracted subprogram units that are common to both FEQ and FEQUTL so that only one copy exists.

Version 8.50  March 31, 1995: The following series of changes has been made:
1. Blank lines are now discarded from all character input except for blank lines within a branch description.
2. Many messages that were in upper case have been converted to mixed upper and lower case.
3. Access to HECDSS time series (path names) is now supported for input of flow or elevation at a boundary node, for output of flow or elevation from any node, and for the unit-area runoff intensity used to compute lateral inflow from a tributary area. When the lateral inflow is defined using the HECDSS, FEQ operates as if DIFFUS=NO in past versions. That is, the special multiple-event processing with DIFFUS=YES is not provided. In Version 8.50 the use of DSS time series to supply lateral inflows is selected by using DIFFUS=DSS. Only a single event is run: the one given by the starting and ending times. Changes have been made to the INPUT FILES and OUTPUT FILES blocks as well as to the TRIBUTARY AREA block.

Version 8.51  April 21, 1995: Corrected a problem with different lengths of character strings for file names. Also made minor changes to permit LF90 to compile properly at optimization level 2.
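The Version 8.09 FAC sign convention described above can be sketched as follows. This is only an illustration of the stated rule; tributary_area is a hypothetical helper, not an FEQ routine:

```python
def tributary_area(input_area, fac):
    """Return (stored_area, echoed_area) under the Version 8.09
    FAC sign convention (illustrative sketch, not FEQ code)."""
    stored = input_area * abs(fac)      # FAC always scales the stored value
    # A negative FAC signals an area allocation, so the echo shows the
    # adjusted value; a positive FAC leaves the echo as the user typed it.
    echoed = stored if fac < 0.0 else input_area
    return stored, echoed

# The branch 30 example from the text:
print(tributary_area(1.0, -0.2))   # stored 0.2, echoed 0.2
print(tributary_area(1.0, 0.2))    # stored 0.2, echoed 1.0
```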
Version 8.53  April 28, 1995: Modified Code 13 in the Network Matrix Control Input to permit the explicit specification of one or two nodes for inflow. If the nodes are given explicitly, then the user must give the effective angle of entry of the flow into the main channel. The effective angle of entry is always less than the angle of entry of the channels themselves. If no inflow nodes, also called side nodes, are given, then FEQ assumes that there is no downstream component of momentum flux entering the junction from the side node. That is, FEQ assumes that the effective angle of entry is always 90 degrees. This assumption is appropriate if the water plunges into the mainstream from above, as in a side-channel spillway on a small slope. Currently the side nodes must be on branches so that FEQ can find an area of flow and a momentum-flux coefficient when it needs to compute the momentum flux. Currently free nodes have no way of indicating a momentum flux. This may be changed in the future.

Version 8.54  May 10, 1995: Blank lines are now echoed to the output file.

Version 8.56  June 21, 1995: Modified formats in TRIBIN and TRBOUT to allow up to 12 land uses per gage.

Version 8.57  July 26, 1995: Suppressed some intermediate outputs that were overlooked earlier.

Version 8.58  July 27, 1995: Modified the screen output of event elapsed time so that overwriting would not occur.

Version 8.59  August 1, 1995: Added code to detect a missing water-surface elevation in the steady-flow computations when code=6 in the BACKWATER Block.

Version 8.60  September 1, 1995: FEQ was supposed to detect improper use of flow nodes. However, a case occurred in which the detection code failed. The detection code has been changed so that it counts the relationships attached to a flow node and not just the available relationships attached to a flow node. This change will reduce the frequency of occurrence of the dreaded zero-pivot error message.
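The Code 13 side-node assumption above can be illustrated with the common momentum-flux form, beta*Q*Q/A, resolved through the effective angle of entry. This is a sketch of the stated idea only; the function name, the beta value, and the exact form FEQ uses internally are assumptions here:

```python
import math

def side_momentum_component(q, area, beta, angle_deg):
    """Downstream component of the momentum flux entering a junction
    from a side node, using the common beta*Q*Q/A form with an
    effective angle of entry (illustrative sketch, not FEQ code)."""
    return beta * q * q / area * math.cos(math.radians(angle_deg))

# With the default effective angle of 90 degrees the downstream
# component vanishes, matching the plunging-inflow assumption.
flux = side_momentum_component(100.0, 50.0, 1.05, 90.0)
print(abs(flux) < 1e-9)   # True
```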
Version 8.61  September 14, 1995: The use of time-series tables, that is, 1-D function tables of types 7, 8, or 9, was not possible when DIFFUS=YES because the year/month/day:hour time argument was converted to the elapsed time from the start of the event in the time-series file being computed. This meant that the internal argument began at 0.0 seconds at the start of each event, which is hardly what is wanted in any application. This was a hold-over from pre-TSF days when only a single event was computed. The conversion has been changed so that the internal argument is the elapsed time from the start of the first event. This means that the time output field that appears when complete output at a time step is requested will have a value different than before for all events but the dummy event. The internal argument for all 1-D tables is single precision. This means that the argument will not be able to define a time point down to a few seconds when the time span of the TSF is significant. The relative precision of a single-precision floating-point number in Fortran is no better than 1.e-7. In an elapsed time of 70 years, obtained if the dummy event is in 1925 and the TSF runs through 1995, there will be a potential uncertainty of about 270 seconds, or 4.5 minutes. This is probably suitable for most proposed applications. Time-series tables would not be a good choice for flows or stages over this time span; flows and stages are best stored in files when the time span of the TSF is large. However, time-series tables are suitable for use in controlling capacity or elevation for head datum in Code 5 type 6 using 2-D function tables of type 13.

Version 8.65  October 5, 1995: Special output has been modified to permit user specification of additional output variables. The new values are given in an OPTIONS line. The OPTIONS line must follow the UNIT line in the input.
The OPTIONS line must begin with "OPTIONS=" where the double quote marks are used to delimit the string here but are not part of the input. The word OPTIONS must be in all upper case and must start in column 1. The equal sign must not have any spaces preceding it. The options must be in upper case and must have at least one blank before and one blank after each option. The current options are:

V   - output the mean velocity for the cross section.
A   - output the area for the cross section.
MCA - output the main channel area for the cross section. The main channel is defined by a user-supplied cross-section function table given in the Special Output Block for this cross section. The table is given in the right-most five of the six columns following the 14-column heading for each output item.
MCQ - output the main channel flow.
MCV - output the velocity in the main channel.
FPA - output the area of flow in the flood plain. The flood plain is all area in excess of the area in the main-channel cross-section function table. Neither FEQ nor FEQUTL knows anything about a flood plain; the user defines the flood plain by the main-channel cross-section function table. If the main channel proves to have larger values of top width, area, or conveyance than does the cross section given in the Branch Description Block, FEQ will use the area as given in the cross-section function table in the Branch Description Block, and the flood plain area will be taken as zero.
FPQ - output the flow in the flood plain.
FPV - output the velocity in the flood plain.

If the main-channel cross-section function table is not given, FEQ can only output the V and A options. The other options will have blank values output. The Special Output file will always have the water-surface elevation and the flow output. The optional items follow, with one line for each item. The lines appear in the order in which the options were given on the OPTIONS= line.
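The OPTIONS-line rules above amount to a simple check, sketched here for illustration; parse_options is hypothetical and only mirrors the constraints stated in the text:

```python
VALID_OPTIONS = {"V", "A", "MCA", "MCQ", "MCV", "FPA", "FPQ", "FPV"}

def parse_options(line):
    """Check an OPTIONS line against the Version 8.65 rules:
    OPTIONS= must begin in column 1 with no space before the equal
    sign, and the options must be upper case and blank-separated."""
    if not line.startswith("OPTIONS="):
        raise ValueError("line must begin with OPTIONS= in column 1")
    opts = line[len("OPTIONS="):].split()
    bad = [o for o in opts if o not in VALID_OPTIONS]
    if bad:
        raise ValueError("unknown options: " + " ".join(bad))
    return opts

print(parse_options("OPTIONS= V A MCA MCQ"))   # ['V', 'A', 'MCA', 'MCQ']
```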
The PAGE size used to control printing of headings in the Special Output file will be adjusted to be an integral multiple of the number of lines per time point. FEQ first deducts two to account for the heading lines. Then the remaining number of lines is rounded downward to an integral multiple of the number of lines of output. Two is added back to obtain the adjusted value for PAGE. Note that this assumes that the PAGE value is always larger than the maximum number of lines that can be output. Currently that limit is 10; thus PAGE should never be less than 10 + 2 = 12. The options are only output for nodes on a branch. Nodes not on a branch are skipped, and only two lines of output are produced for them.

Version 8.70  December 28, 1995: Added an option for using the difference in elevation between two exterior nodes as the argument for controlling structures in an Operation Control Block. Also changed the format of the flow type in the special output file so that all flow type codes are right justified in their columns. The difference in elevation is specified by placing the exterior node label, new style only, in the KEY column. The difference is computed as the elevation at the node in NODE minus the elevation at the node in KEY.

Version 8.71  January 8, 1996: Corrected a bug in the output of the date/time when the computations fail to converge. This problem has apparently been present for some time but was not reported. It has no effect on results and only involved an error in the hour of the day printed in the iteration log for the failed time step. The hour of the day remained at the time of the previous successful time step.

Version 8.72  February 1, 1996: Added checking of the station sequence for each branch to make sure that all stations are either increasing or decreasing. It was possible for the user to make a mistake and have the direction of stationing change within a branch. As long as the element length was non-zero, FEQ did not complain.
However, the distribution of tributary area would be affected, since the sum of the absolute values of the element lengths would be greater than the length of the branch.

Version 8.73  May 6, 1996: Fixed a problem with zero array size when requesting special output when no control structures are present. Fixed a problem with confusion between underflow and overflow gates under dynamic control. Changed the case of many error messages.

Version 8.74  May 24, 1996: Fixed a problem with multiple control points for pumps introduced by work in progress on new options for dynamic control of gates and pumps. Removed more old carriage control characters from output.

Version 8.76  July 25, 1996: Implemented a new format for the Network Matrix Control Input block. This format is optional; if you do nothing, the old format is assumed. The new format is signaled by putting the characters NEW, with the N in column 1, on the line that initiates the Network Matrix Control Input block. The input description for FEQ suggests that this line contain: NETWORK MATRIX CONTROL. In order to request the new input format this line becomes: NEW NETWORK MATRIX CONTROL. In the process of making these changes the input processing routines were changed so that no more than one trailing blank is printed for comment lines. In earlier versions comment lines were output with extra trailing white space. The new format is column independent but still order dependent. All items must be in the same order as given in the input description, but they do not have to be in the indicated columns. The new rules are as follows:
1. There must be at least one space or a single comma between successive items. No spaces or commas are allowed within an item.
2. If an optional item is skipped in either the integer/identifier or the fixed/float group, then a place holder must appear for that item. This is required to keep the order of all items the same. The place holder is a single asterisk.
It also must be separated from adjacent items by one or more spaces.
3. Continuation lines for Code 5 Type 6 are NO longer flagged in the input by placing a positive number in the N10 location. Instead the continuation is signaled by putting a space-slash sequence after the last item entered on the line.
4. All values appearing in the fixed/float part of the input must contain a decimal point. FEQ uses the decimal point to distinguish the two groups. If a decimal point is omitted, FEQ WILL get confused and will probably report an error message.
5. Names for pumps or gates must start with an alphabetic character. They cannot start with a number, and the only special character they should contain is the under bar.
6. All of the size restrictions still apply. That is, the branch number must not be larger than 999, and free node numbers also cannot be larger than 999. I plan to make these limits larger, but this requires changing many formats and internal string sizes. That change will take some careful planning. Changing function table number limits is simpler but must also be done for FEQUTL at the same time in order to be useful. FEQUTL has the complication of containing special table numbers that also must be changed, and this implies that TYPE5.TAB needs to be handled properly also.
7. All of the items for a given code must appear on a single line. All items must appear within 112 columns of text. This number was picked because there was already an input routine that supports it and because this should be long enough for now. It will become increasingly useful to have an editor that can support a window with more than 80 characters on a line. Support for at least 130 characters, and preferably more like 150-160 characters, should be sought.

The following lines give a fragment of the Network-Matrix Control Input for an existing model. Note how the table numbers have no spaces between them. The node labels for branch numbers larger than 99 also have no white space to set them off.
The input becomes a bit difficult to read. Also, comments have been placed beyond column 80, where FEQ will not encounter them when processing the input. The code 5 type 2 entry has one fixed/float value that is skipped, F2.

-----------------------------------Fragment Start-----------------------------------------------------
NETWORK-MATRIX CONTROL INPUT
CODE N1 N2 N3 N4 N5 N6 N7 N8 N9 N10 F1 F2 F3 F4 F5
5 2 F8 U14 F81798 501798 50 748.16 5.0 743.80 745.5 SPECIAL ASSESSMENT POND
1 14
2 3 F8 U14 D13
5 6 D14 U15 D1413181318 1 742.47 INDUSTRIAL DRIVE
14221422 2 751.61
15221522 3 749.33
15231523 752.05
5 6D109U110D10923122312 1 750.73 ABANDONED RR SPUR
24122412 755.20
-----------------------------------Fragment End--------------------------------------------------------

If the FEQ file for this model is processed by a new utility program, called NETUTL, the following fragment results.

-----------------------------------Fragment Start-----------------------------------------------------
NEW NETWORK-MATRIX CONTROL INPUT
CODE N1 N2 N3 N4 N5 N6 N7 N8 N9 N10 F1 F2 F3 F4 F5
5 2 F8 U14 F8 1798 50 1798 50 748.16 * 5.0 743.80 745.5 'SPECIAL ASSESSMENT POND
1 14 '
2 3 F8 U14 D13 '
5 6 D14 U15 D14 1318 1318 742.47 /'INDUSTRIAL DRIVE
1422 1422 751.61 /'
1522 1522 749.33 /'
1523 1523 752.05 '
5 6 D109 U110 D109 2312 2312 750.73 /'ABANDONED RR SPUR
2412 2412 755.20 '
-----------------------------------Fragment End--------------------------------------------------------

This utility takes the old input and adds a single space in front of each field except for the CODE field. It also changes the continuation option for Code 5 Type 6 and inserts any needed place-holder values. Any following comments must be preceded by a single quote to prevent FEQ from viewing them as potential input. NETUTL places the single quote after the first fixed/float field unless more fixed/float fields are used. A comment can begin at any point after the last item in a line.
If there is no comment following a line, a quote need not be present but may be present as shown. Blank lines are permitted and will appear in the output. Also, in the fragment as shown, all of the fixed/float values appear in a given column. NETUTL does this in order to keep the input in column form, but the columns are not used in processing the input. NETUTL cannot adjust comment lines. The above fragment is given again with the spacing reduced and the comments shifted.

-----------------------------------Fragment Start-----------------------------------------------------
NEW NETWORK-MATRIX CONTROL INPUT
CD N1 N2 N3 N4 N5 N6 N7 N8 F1 F2 F3 F4 F5
5 2 F8 U14 F8 1798 50 1798 50 748.16 * 5.0 743.80 745.5 'SPECIAL ASSESSMENT POND
1 14 'This is branch 14.
2 3 F8 U14 D13 ' This instruction forces the sum of flows at the three nodes to be zero.
;CD N1 N2 N3 N4 N5 N6 F1
5 6 D14 U15 D14 1318 1318 742.47 /'INDUSTRIAL DRIVE
1422 1422 751.61 /
1522 1522 749.33 /
1523 1523 752.05
5 6 D109 U110 D109 2312 2312 750.73 /'ABANDONED RR SPUR
2412 2412 755.20
-----------------------------------Fragment End--------------------------------------------------------

Version 8.80  August 8, 1996:
--A problem with multiple references to a single type 15 function table has been corrected. Tables of type 15 contain the table numbers of function tables of type 13 in order to represent a 3-D table for representing underflow gates. In processing these type 15 tables, FEQ converts the table numbers to the addresses of the tables. In versions previous to this one, FEQ did not have a way of remembering that the conversion had been done. Thus if the table of type 15 was used again to describe an identical gate, the conversion from table number to table address was done again--but this time it failed, since the table numbers were no longer present in the type 15 table. The workaround for this problem was to make a copy of the type 15 table and then change its table number.
Each of the type 15 tables would then refer to the same set of type 13 tables, and FEQ worked fine.
--A change was made to the internal format of the type 15 table so that FEQ could remember that the conversion from table numbers to table addresses had been made. Thus on second and subsequent appearances the conversion is not made again.
--Corrected a problem with writing HECDSS time series when the time step becomes small. The routine for interpolation from FEQ's irregular time step to the HECDSS regular time step became confused at the boundaries of an internal buffer. The boundary computations have been changed so that small time steps, less than 60 seconds, do not cause problems.
--A major new option for the Network-Matrix Control Input has been completed. This was made possible by the column-free option introduced in Version 8.76 (not released to anyone). Therefore, these new options only function when the first three characters of the heading for the block are NEW. The option is the creation of what are called macro instructions. Each code number in the Network-Matrix Control Input is an instruction; that is, it instructs FEQ on what to do at a particular point in a network of channels. It turns out that many patterns of instructions in the input are the same. For example, many junctions are described by the same three instructions. A macro instruction is a name given to a particular group of instructions. The macro instruction name is followed by one or more arguments that supply the information that varies from instance to instance. The facility for supporting macro instructions also presents an opportunity to supply names for instructions instead of numbers. An optional input block, DEFINE MACROS AND INSTRUCTIONS, has been added to FEQ. It appears just before the Network-Matrix Control Input block. If macros are to be defined, this block must be present.
Also, for any macro to come into play, the column-free input option for the Network-Matrix Control Input must be invoked as well. An example of a Define Macros block is:

 1  DEFINE MACROS AND INSTRUCTIONS
 2
 3  ; Here are instruction definitions. The pattern is always
 4  ; an identifier that becomes the instruction name followed
 5  ; with one or more intervening spaces by the corresponding
 6  ; FEQ Network-Matrix Control Input Code value. The following
 7  ; line defines the identifier: EqZ to have the value of 3.
 8
 9  EqZ 3
10  CS_2D 5
11  CS_1D 4
12  qsum 2 3
13  FILE=TEST.MIF
14
15  JUNCTION n1 n2 n3
16  EqZ n1 n2
17  SumQ n1 n2 n3
18  EqZ n1 n3
19  END JUNCTION
20
21  Force_Q node table limit
22  6 1 node 1 table limit
23  END Force_Q
24
25
26  END MACROS

Note: the line numbers have been added to aid in referring to a particular line. They are not part of the input! The block is used to define what are called instructions and macro instructions. Both are instructions; a macro instruction is just a higher-level construct. Line 9 is an example of defining an instruction. This associates the identifier, EqZ, with the number 3. Code 3 is the equal-water-surface-elevation instruction in FEQ. Line 16 is an example of the use of the instruction defined in line 9. Line 17 is an example of the use of a special instruction to force the sum of flows at the exterior nodes at a junction to zero. Notice that line 17 does not contain the number of nodes involved. The instruction, SumQ, counts the number of arguments that it has and then supplies the number of nodes to FEQ. Line 12 shows how to define an instruction name that does the same thing as SumQ. The difference is that the instruction name is followed by two numbers instead of one number like the other instructions. The first number is the Code value to be used, 2. The second number is always 3. This defines the proper internal value so that the command being defined will place the number of nodes in the proper place for FEQ.
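The expansion of a macro instruction such as JUNCTION can be thought of as simple token substitution of actual arguments for dummy arguments. The sketch below is only an illustrative model of that idea, not FEQ's internal processing:

```python
def expand_macro(body, dummies, actuals):
    """Substitute actual arguments for dummy arguments, token by
    token, in each line of a macro body (illustrative sketch)."""
    table = dict(zip(dummies, actuals))
    return [" ".join(table.get(tok, tok) for tok in line.split())
            for line in body]

# Body of the JUNCTION macro defined in lines 15-19 above.
junction = ["EqZ n1 n2", "SumQ n1 n2 n3", "EqZ n1 n3"]
for line in expand_macro(junction, ["n1", "n2", "n3"], ["D1", "D2", "U3"]):
    print(line)
# EqZ D1 D2
# SumQ D1 D2 U3
# EqZ D1 U3
```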
An example of a definition of a macro instruction appears in lines 15 through 19. It represents a junction involving three nodes. There are therefore three arguments following the macro instruction name. Each of these arguments appears one or more times in the body of the macro instruction, lines 16-18. The macro instruction definition is completed by the END statement in line 19. Line 13 shows that the definitions can also appear in a file. The key word FILE is reserved; you should not try to name an instruction using this word. Neither should you try using the word END. The contents of the file TEST.MIF follow (note: MIF or mif is the suggested extension for a file containing macro instructions--Macro Instruction File):

 1  Bran A1
 2  1 A1
 3  END Bran
 4
 5  Rating node table datum width
 6  4 1 node -1 node table datum width
 7  END Rating

Lines 1-3 define a name for the branch code option. Lines 5-7 define a name for the rating table option for a one-node control structure. Here is an example input that uses some of the above definitions:

 1  NEW NETWORK-MATRIX CONTROL INPUT
 2  CODE N1 N2 N3 N4 N5 N6 N7 N8 N9 N10 F1 F2 F3 F4 F5
 3  Force_Q U1 100 *
 4  Bran 1
 5  Bran 2
 6  Force_Q U2 102 *
 7  JUNCTION D1 D2 U3
 9  Bran 3
10  Rating D3 110 103.0 32.0
11  -1

The first thing to notice is that the old headings don't work anymore. We will need to develop new methods for structuring the input. With a careful choice of names and macros, the input should be much easier to develop and read. Notice also that the place-holder, the asterisk, is required for the Force_Q macro if we want to leave a blank for the F1 value. If an F1 value is supplied, it must have a decimal point in it. Otherwise FEQ will treat it as if it were an N5 value, which is not what we want!
A place holder is required because FEQ requires that the number of actual arguments, the items given after the macro name in the Network-Matrix Control Input, be the same as the number of dummy arguments, the identifiers given after the macro name in the Define Macros block. Here are the current general rules for defining instructions and using them:
1. All names and dummy arguments must be identifiers. A dummy argument is an argument following the macro name in the Define Macros block. For example, line 1, Bran A1, in file TEST.MIF has one dummy argument, A1. The macro instruction in line 5, Rating, has four dummy arguments: node, table, datum, and width. An identifier in FEQ is a sequence of alphabetic and numeric characters with the first character being an alphabetic character. The following characters are added to the alphabet and are considered to be alphabetic: underline, _, slash, /, and back slash, \. The colon, :, and period, ., are considered to be numeric. This means that an identifier must have a first character taken from the letters a through z, underline, slash, or back slash. After the first character, the digits 0 through 9 and the colon and period, plus any of the extended alphabet, can appear. Using this definition, a file name in either DOS or UNIX format can be treated as an identifier. Here are some examples of valid identifiers for FEQ:

d:\usf\feq\test\test.mif
BRANCH
Branch
CS
CS_2D

Note that case is important: BRANCH and Branch are NOT the same. Here are some examples of invalid identifiers:

1234   (Invalid because the first character is a digit. This is a valid file name in the crazy world of DOS.)
CS-2D  (Invalid because the special character, -, is used.)
:a45   (Invalid because the first character is non-alphabetic.)

2. The current maximum length of an identifier used as a macro instruction name, an instruction name, or a dummy argument is 16 characters. If this proves too short, we can make it longer.
However, all instructions must be completed within 112 characters. Making the instructions too long will counteract the benefit of using column-free input.
3. An identifier used as a file name can be up to 64 characters, counting all characters; that is, the drive letter, colon, slashes, and period are counted as part of the limit of 64.
4. The actual arguments can be anything required by FEQ. They can be table numbers, node labels, etc. However, any number that is a measurement and not a counting value or label should have a decimal point. This is needed so that the column-free input can be processed properly.
5. The number of actual arguments and dummy arguments must match. If an actual argument in the sequence is to be passed over, use the place-holder, the asterisk, for that argument.
6. Comments can appear in a macro definition, but they will not be transferred to the usage of the macro.
7. Currently the special instruction, SumQ, also has the following aliases: SUMQ, SQ, QSUM, and sumq. These are coded within FEQ. If you do not like any of these names, you can define your own as outlined above.

Version 8.86  January 30, 1997:
--Further extensions have been made to macro instruction processing. Four operators have been defined for the construction of identifiers and numbers to use as arguments. The operators are:

addition:      denoted by +
subtraction:   denoted by -
replication:   denoted by *
concatenation: denoted by |

Here is an example of using some of these operators. Again, the numbers for each line are not part of the input to FEQ; they have been added for reference.

 1  ; Macro for a gravity connection via one path to a LPR
 2
 3  SWCG_1 nup ndn swcid lprtab hdatum
 4  13 D|nup U|ndn
 5  SumQ D|nup U|ndn F|lprtab-7000
 6  LPR lprtab
 7  6 1 F|lprtab-6900 1 0 0.0
 8  5 6 F|lprtab-7000 D|nup F|lprtab-7000 2*swcid+4200 hdatum
 9  END SWCG_1

This macro has five dummy arguments.
They have the following meaning: nup- branch number upstream of the two-branch junction ndn- branch number downstream of the junction swcid- surface water connection identification number. Is in the range of 1001 through 1099. Identifies a culvert under a levee. lprtab- table number for a level-pool reservoir that will be connected to the two-branch junction using the culvert defined by the swcid number. hdatum- datum for head for the 2-D table describing the flow through the culvert. Line 4 shows the use of the concatenation operator where we construct the labels for the nodes on the branches in this junction. Notice that the concatenation operator appears with no spaces. The concatenation operator must always be next to the items that are to be concatenated. Line 5 shows an example of using the subtraction operator. We need the downstream node on the level-pool reservoir. The node ids have been selected so that we can compute them from the capacity table number of the level-pool reservoir. In the current case all of the reservoir capacity tables have table numbers in the range 7200-7299. This means that all level pool reservoirs to be used with this macro instruction will have downstream nodes in the range of F200 through F299. The argument: F|lprtab-7000 is evaluated in the following order: 1. lprtab is replaced by its numeric value from the actual argument. 2. The subtraction is done to yield the value in the range 200 through 299. 3. The concatenation operation is done last to form the free node id in the range of F200 through F299. Line 8 shows an example of all four of the operators. The replication operator repeats an item. The order of operation must be kept in mind. The replication operator has the lowest precedence. That is, it is always done last. Thus the argument: 2*swcid+4200 is evaluated as follows: 1. swcid is replaced by its numeric value from the actual argument. 2. The subtraction is done to yield a number in the range of 5201 through 5299. 3. 
The replication operator is done to produce two instances of the argument. Line 7 creates the upstream free node id for the level pool reservoir. In this case the upstream free node id has a numeric portion that exceeds the numeric portion of the downstream node id by 100. The rules for evaluation of the operators, including the implied operator of value replacement is as follows: 1. Replace all identifiers with the proper numeric value. If an identifier has an unknown value it is retained and a warning message is issued indicating that the value is unknown and it may cause later errors. 2. Do the addition and subtraction operations. Note that these must take place on numbers NOT on identifiers. 3. Do the concatenation operation. 4. Do the replication operation. There can be no spaces next to any of the four operators. Also these operators are always binary; that is, they are between two items. --The output of the tributary area has been changed so that the format is varied to produce the maximum precision value that will fit in the available space. --An additional option for controlling output of the iteration log has been added. A value of 2 for MINPRT will output the iteration log whenever the non-convergence condition arises. --Two-way pumps have been added to FEQ. A two-way pump is a pump designed such that the water can be moved in either direction. This does not mean that the pump is reversible but that it has extra gates or valves near it that allow the water to flow through the pump in a single direction while directing the water in either direction, depending on the gate setting. The following assumptions are made to make implementation simple: 1. The losses in conduits/channels used to connect the pump to the flow system are assumed to be such that they are the same for both directions. That is, FEQ assumes that the pump, whatever its means for changing the direction of the flow, is symmetrical with regard to the losses. 2. 
The pump rating is the same in both directions. The direction given in the Code 5 type 3 instruction for the pump defines the default direction. That is, if the speed of the pump is > 0, it is pumping in the default direction; if the speed of the pump is < 0, it pumps opposite to the default direction.

The input for a two-way pump is the same as for a one-way pump. This was the reason for the simplifying assumptions: to keep the changes to a minimum. The major changes were made to permit user control of two-way pumps. Recall that a one-way pump is controlled by using tables that give its speed as a function of the control value: flow or elevation of the water surface. Let SPD be the value found in these functions. The following rules govern the speed setting for one-way pumps:

1. If SPD > 0, then if the pump is off, it is turned on at the given
   speed, SPD. If the pump is on, its speed is set to SPD.
2. If SPD = 0, then the speed of the pump is unchanged.
3. If SPD < 0, then the pump is turned off if it was on and remains off
   if it was off.

These rules remain the same for one-way pumps. However, control functions like this will no longer suffice for two-way pumps because the rules for the speed of the pump must now define the direction of pumping as well. Therefore, the rules for the control tables for two-way pumps have been changed. This means there are now two kinds of control tables for pumps: those for one-way pumps and those for two-way pumps. In order to control two-way pumps, a tolerance value at near-zero speed was defined. This value is 0.01. No pump will likely operate at 0.01 of its maximum speed, and therefore this value has been taken as a special value for the control of two-way pumps. This value is called PUMP_EPS and is used in a pump-control table to define pump speed as well as direction. A pump-control table for a two-way pump can have speeds that are negative as well as positive, always limited so that -1 <= SPD <= +1. However, we need a value that defines a null zone for both positive and negative speeds. For a one-way pump the null zone is defined by SPD = 0.0, but that is not adequate for two-way pumps because we would then have no way of turning the pump off and on. The rules for the meaning of a value from the pump-control table (function) for a two-way pump are as follows:

1. When SPD > PUMP_EPS, the direction is the same as given by the user
   in the pump instruction. The pump speed is set to SPD.
2. When SPD < -PUMP_EPS, the direction is opposite to that given by the
   user, and the pump speed is the absolute value of SPD.
3. If SPD = -PUMP_EPS or SPD = +PUMP_EPS, then the pump direction and
   speed remain unchanged. This is the null-zone value for control of a
   two-way pump.
4. If -PUMP_EPS < SPD < PUMP_EPS, that is, if SPD is within the
   tolerance interval about zero defined by PUMP_EPS, then the pump is
   turned off.

An additional block type for two-way pumps has been added to the existing block types of PUMP and GATE. The following input fragment from an existing model shows examples of a control block for a one-way pump and a two-way pump. The format of the block is the same for both, but the contents of the control tables will differ.

BLK=00001 BLKTYPE=PUMP SPEEDS= 4 MINDT=3600. PINIT=0.0
BRAN NODE KEY  MNRATE RISE FALL ONPR OFPR
C24T  LPR 7224      0
D215 ELEV 0.10    101  101    4    1    0
F324 ELEV 0.10   4224 4224    3    2   -1

BLK=00002 BLKTYPE=PUMP2WAY SPEEDS= 4 MINDT=3600. PINIT=0.0
BRAN NODE KEY  MNRATE RISE FALL ONPR OFPR
C24F  LPR 7207      0
D311 ELEV 0.10    102  102    5    2    0
F207 ELEV 0.10   4207 4207    3    4    0
F207 ELEV 0.10   6207 6207    1    6   -1

The SPEEDS option has been added to the pump-control blocks also. It gives the number of speeds that are allowed, and only those speeds will be used. In these two examples, the number of speeds is 4; thus the only non-zero speeds used will be 0.25, 0.50, 0.75, and 1.0. If a control table requests a speed of 0.05, it will be rounded up to 0.25.
Thus the minimum non-zero speed is 0.25. Other speeds will be rounded to the nearest valid speed. This avoids having the pump set at some small value that represents only a trickle of water compared to its actual operation.

--Multiple control blocks can now be used for one control structure. Heretofore only one control block per structure was permitted. Thus seasonally varying rules for structure operation were cumbersome and required multiple runs of the model, each with the proper operation rule for the season being run. There is still the restriction that only one control block is active at any given time. Support for this option was added as follows:

1. The block numbers for the control blocks to be used for the structure are placed in a time-series table of type 7. The block numbers are integers, but function tables will treat these numbers as having a decimal point. To obtain a block number from these tables, FEQ rounds the value obtained from table lookup to the nearest integer. This ensures that block numbers in the time-series table will be integers before they are used as a block number in the operation of the control structure. The block numbers are repeated in order to represent seasonal variation of operation rules. FEQ implements the more general case where the operation rule may also change over time. Thus the special case of seasonal repetition of operation rules requires a small increment of effort in creating the table. The following table gives an example for control of a sluice gate with two different rules: one for the dry season and the other for the wet season. Notice that each rule appears twice in succession in the central part of the table body. The transition between rules takes place over a one-day period. The division point is then at noon on that day, since the interpolated value from the table will have a fractional value. For example, on May 15, 1980, the day will begin with block number 32 active.
At noon the interpolated value from the table will be 31.5 with some small roundoff noise. If that noise is such that the number is 31.5 or slightly greater, then the rounded value will become 32. On the other hand, if the number is slightly less than 31.5, the rounded value will become 31.

TABLE#= 200
TYPE= -7
REFL=0.0
YEAR MN DY HOUR  OperBlk   Select operation block for S-49
1979 10 14  0.0    31.
     10 15  0.0    32.
1980 05 15  0.0    32.
        16  0.0    31.
1980 10 14  0.0    31.
     10 15  0.0    32.
1981 05 15  0.0    32.
        16  0.0    31.
1981 10 14  0.0    31.
     10 15  0.0    32.
1982 05 15  0.0    32.
        16  0.0    31.
1982 10 14  0.0    31.
     10 15  0.0    32.
1983 05 15  0.0    32.
        16  0.0    31.
1983 10 14  0.0    31.
     10 15  0.0    32.
1984 05 15  0.0    32.
        16  0.0    31.
1984 10 14  0.0    31.
     10 15  0.0    32.
1985 05 15  0.0    32.
        16  0.0    31.
1985 10 14  0.0    31.
     10 15  0.0    32.
1986 05 15  0.0    32.
        16  0.0    31.
1986 10 14  0.0    31.
     10 15  0.0    32.
1987 05 15  0.0    32.
        16  0.0    31.
1987 10 14  0.0    31.
     10 15  0.0    32.
1988 05 15  0.0    32.
        16  0.0    31.
1988 10 14  0.0    31.
     10 15  0.0    32.
1989 05 15  0.0    32.
        16  0.0    31.
1989 10 14  0.0    31.
     10 15  0.0    32.
1990 05 15  0.0    32.
        16  0.0    31.
1990 10 14  0.0    31.
     10 15  0.0    32.
1991 05 15  0.0    32.
        16  0.0    31.
1991 10 14  0.0    31.
     10 15  0.0    32.
1992 05 15  0.0    32.
        16  0.0    31.
1992 10 14  0.0    31.
     10 15  0.0    32.
1993 05 15  0.0    32.
        16  0.0    31.
1993 10 14  0.0    31.
     10 15  0.0    32.
1994 05 15  0.0    32.

The control-block selection table must cover the entire time span of the period being simulated. If a new rule is introduced at some time during the simulation, its number is added at the proper point in the selection table. In the instruction for the control structure in the Network-Matrix Control Input (NMCI), the table number for the selection table is given instead of the block number for the control block. The table number must be larger than any valid control block number; otherwise, FEQ will assume that you are providing the number of a control block and not a control-block selection table. The maximum number of control blocks is given by the parameter MNBLK in the INCLUDE file ARSIZE.PRM.
Its current default value is 50. Thus with this default for the maximum number of control blocks, any table number larger than 50 will signal FEQ that a control-block selection table is being referenced instead of just a control-block number. An example is given here, taken with comments from an existing model. This example is of a sluice gate, not subject to tailwater influence, so that it could be represented by the Code 4 type 5 instruction in the NMCI. This example makes use of the column-independent input for the NMCI.

; Downstream boundary for C-24 at S-49.  This has a special
; operation control block option that refers to time-series
; table, id number 200, that gives the operation control block
; as a function of time.
4 5 D362 -1 D362 420 200 0 0 0 S_49 4.4 '

--I/O unit numbers are no longer used in FEQ. They were an artifact of Fortran on mainframes, and no user of FEQ is known to be on a mainframe at this time. However, old inputs can be left as they are. FEQ will detect the unit number and issue a warning message that unit numbers are no longer used and that the number given will be ignored. FEQ assigns unit numbers internally as needed, and they should not be of concern to the user. The file name is all that is now required for input. It can be moved to the left as desired.

--The conversion of all-upper-case error and warning messages to mixed-case messages is essentially complete. However, the documentation of these messages will still have some of them in all upper case. Over a period of time these too will take on mixed case.

Version 8.87 February 21, 1997:

--Added an argument scale factor to the input of function tables of types 2, 3, and 4. This can be useful when building a model in a region that has two differing unit systems.

--Changed the error reporting when function tables of type 3 are checked to make sure that the value of the function agrees with the derivative.
In previous versions, discrepancies larger than 2 percent between the given function value and the value computed from the derivative by the trapezoidal rule caused a warning message to be issued. If the function value is given as zero or left blank, NO warning is issued; FEQ computes the value for the function from the derivatives and places the result in the table. The first value of the function, that is, the one for the smallest argument, is taken to be correct; if left blank, its value is zero. This is probably the preferred option, since it avoids computing the function values by hand.

--Added a single-line option to Special output. It is selected by using a heading that begins with the word "special" given as "Special" and not as "SPECIAL". Optional output is not supported in single-line output because it conflicts intrinsically with it: a single line is a single line, and the optional outputs are inherently multi-line.

Version 8.88 April 23, 1997:

--Added initial support for output to GENSCN and other similar programs. Will be modified later.

Version 8.89 May 5, 1997:

--Added checks for a missing FREE nodes input block, based on problems at the introductory short course.

--Added checking for premature end of a function table to try to make error reporting more intelligible. Again based on problems found at the DuPage introductory short course.

--Corrected an error in one invocation of ERR:207 that caused a runtime Lahey error message. Proved to be most disconcerting to new students!

Version 8.90 May 23, 1997:

--Adjusted various output formats to better display the results when the metric system of units is used. May affect programs that read the ASCII output for plotting or other actions.

--Changed the initialization for tracking the extreme values. The minimum value was being missed because previous versions did not check the results at time zero but at the end of the first time step. This should have a minor effect on any existing results, since minimum flows are often of no interest.
--Checked a simple model using one LPR and two sets of units. Values are equivalent at the points checked. The metric option appears to function in FEQ.

Version 8.91 June 4, 1997:

--Output of the order of equations in the network matrix encountered a runtime error if more than 7 nodes were involved with a Code 2 instruction. Increased support to 8 nodes by making the output string longer.

--The array for output of GenScn values, GENSCN_OUT_VEC, had too few elements. Changed its dimension to be twice the number of output node locations.

Version 8.92 June 18, 1997:

--Added a common block, version.com, to permit tracking and printing the FEQ version at more than one place. Currently used to place the current version into the GenScn files.

Version 9.00 October 1, 1997:

--Fixed a bug in subroutine OPER that may have affected resetting of variable-geometry structures when the time step was reduced after failure of convergence.

--The internal representation of tributary area was reorganized to make support of detention and delay reservoirs possible. Externally the input remains unchanged, but the order and nature of the internal computations are different. Limited testing has shown that the differences in results are generally small, with extreme values showing occasional variation of 1 or 2 units out of 10,000. The output of tributary area has been changed to reflect the internal change. In previous versions, tributary area assigned to a branch in the branch mode of input was allocated among the computational elements based on the length of the element relative to the length of the branch. The tributary output showed the areas allocated to each element, and the internal computations also were done element by element. In the current version, the tributary area is not allocated to the elements on the branch; only a fraction of the runoff is allocated to each element.
Thus, the runoff total for the branch is computed, and then the runoff is allocated to the elements on the same basis as the tributary area was allocated in earlier versions. This order obtains the same result within roundoff error and reduces the number of computations. In general, the current version retains the tributary area internally as it was given in the input.

--A side effect of reorganizing the tributary-area representation is that the allocation of space for tributary area was modified.

--It is no longer required that every branch in the model appear in the tributary-area input. Only the branches that have tributary area need appear in the input; other branches may be omitted. This input works only if the exact spelling and capitalization of the block heading names as given in the manual are used. If they are not used, the model description cannot be processed correctly.

--Tributary area can now be attached to a boundary node. The tributary-area input for a boundary node is identical to the tributary-area input for a level-pool reservoir. The Code 6 boundary condition is still given for the boundary node as if nothing had changed. However, the boundary condition must always be constant flow. A time-series table or a time-series file CANNOT be given at a boundary node that has tributary area attached to it. In addition, the flows from the tributary area must be routed through a delay reservoir. Specification of delay is discussed below. The delay reservoir must be used in order to convert the mean flow that comes from the product of tributary area and unit-area runoff intensity to an instantaneous flow, that is, a point value of flow. Boundary nodes cannot support specification of mean flows; they do not work in general and eventually result in undamped oscillations as the model tries to adjust the flow at the boundary to match the imposed mean flow.

--Detention ponds are now supported within a tributary area.
These ponds are used to approximate the effect of the detention storage that is required in all new developments. Other commonly used terms include ponds, basins, and reservoirs. The following assumptions are used in implementing detention ponds:

1. The outflow from each pond takes place through a low-level circular orifice that is always free of tailwater, or over a spillway that is also assumed free of tailwater.

2. Each pond has a conservation pool at the level of the minimum point of the circular orifice. In other words, the surface area of the pond at zero outflow is always greater than zero and should generally be much greater than zero. The water surface in the pond is taken to be horizontal for all combinations of inflow and outflow.

3. Each tributary area given in the input can be divided into two parts: one part that flows through one or more detention ponds and the other part that does not flow through detention ponds. The outflow from both subareas is added to create the inflow to the branch, level-pool reservoir, or boundary node. The outflow from the detention reservoir is assumed to discharge into the branch, level-pool reservoir, or boundary node directly, just as the tributary area not subject to detention does.

4. The reservoir, orifice, and overflow spillway are defined by the following set of values:

Identifier used
in FEQ input      What it means
-----------       ----------------------------------------------------
YD                Design depth. This value is the vertical distance from
                  the invert of the orifice to the minimum point on the
                  overflow spillway. Default = 5 feet.

UAQ               Unit-area flow. This value, expressed in the ENGLISH
                  unit set as ft^3/second/acre, gives the maximum flow
                  that the pond is to release through the orifice when
                  it is at its design depth. The typical range is from
                  0.05 to 0.30.
                  In DuPage County, the storage required in acre-feet
                  per acre to meet a range of unit-area flows for a
                  range of directly connected impervious-area fractions
                  has been computed by the Northeastern Illinois
                  Planning Commission (NIPC). These results are
                  tabulated in a 2-D table of type 10 and stored in the
                  ASCII file DETAIN.TAB in the \BIN directory. This
                  table is number 10000. For example, if the directly
                  connected impervious fraction is 0.25 and the allowed
                  unit-area flow is 0.1 ft^3/second/acre, this table
                  requires that 0.33 acre-feet/acre, or 0.33 feet, of
                  storage be provided for each acre of area that drains
                  through the detention pond.
                  Default = 0.1 ft^3/s/acre.

BSS               Basin side slope. This identifies the assumed side
                  slope of the detention pond, given as the horizontal
                  distance for an assumed unit rise. A typical range is
                  from 2.0 to 5.0. A value of 0.0 indicates that the
                  pond walls are vertical. Default = 4.0.

AVDA              Average drainage area. Gives the assumed average
                  drainage area for the detention ponds in a tributary
                  area. The area tributary to a branch, LPR, or boundary
                  node in an FEQ model may have more than one detention
                  pond in it. We usually do not wish to simulate every
                  detention pond. If we wanted to simulate every pond,
                  we could describe each pond as a level-pool reservoir
                  and include it explicitly in the model; in that case,
                  each pond would have to be connected to the model, and
                  the detail required would be excessive. The detention
                  facility in the Tributary Area input is not designed
                  to simulate the detention-pond network in detail. The
                  input is designed to approximate the major effect of
                  numerous small detention ponds. The tributary area to
                  the branch, level-pool reservoir, or boundary node is
                  divided by AVDA to find the number of average ponds
                  that would be present. The flow computed from the
                  tributary area is divided by the number of average
                  ponds.
                  For example, if AVDA = 40 acres and the tributary area
                  subject to detention is 365.5 acres, then the number
                  of average ponds is 365.5/40 = 9.138. If the average
                  flow from the tributary area during a time step is
                  51.6 ft^3/s, the inflow to the average detention pond
                  would be 51.6/9.138 = 5.647 ft^3/s. The outflow from
                  the average detention pond is then multiplied by the
                  number of average detention ponds. Thus, if the
                  outflow from the pond were 1.25 ft^3/s, the flow
                  applied to the branch, LPR, or boundary node would be
                  1.25*9.138 = 11.423 ft^3/s. The storage-outflow
                  relation for the pond is non-linear, especially at
                  smaller depths, so this yields a different result than
                  forming one large storage to represent all the ponds
                  in the tributary area. Only when the storage-discharge
                  relation is linear would aggregation across ponds of
                  disparate size be valid. Default = 40 acres.

WS                Weir slope. The overflow weir is approximated by flow
                  over a triangular broad-crested weir. WS gives the
                  slope of the weir crest in horizontal distance over
                  vertical distance. A large value of WS gives a weir
                  crest that is nearly horizontal. Values in the range
                  of 25 to 50 should give a fair representation of the
                  flow over the banks of the pond. A triangular crest
                  was chosen because the flow usually starts at a low
                  point and then rapidly increases as more of the bank
                  of the pond is overtopped by the rising water.
                  Default = 50.0.

OFWC              Overflow weir coefficient. This identifies a
                  dimensionless weir coefficient for the overflow weir.
                  Values of 1.0 or less are probably in the correct
                  range. The weir equation for this form and for the
                  values defined here is
                  Q = OFWC*0.64*SQRT(0.4*GRAV)*WS*H^2.5. Default = 0.9.

ORIFICE_CD        Orifice coefficient of discharge. If the orifice is
                  sharp-edged or square-edged, the coefficient of
                  discharge is approximated by 0.60 within about +/- 2
                  percent.
                  The weir coefficient for flow over a circular
                  sharp-crested weir is also close to 0.6 except at
                  small heads relative to the diameter. Therefore, the
                  same discharge coefficient can serve for both weir and
                  orifice flow. Default = 0.6.

ORIFICE_TAB       Table number for the orifice flow function. This table
                  number is 10001 in the file DETAIN.TAB in \BIN. This
                  table is of type 4 and approximates the dimensionless
                  orifice flow function covering both weir and orifice
                  flow. The table has been computed and checked for a
                  dimensionless range of head relative to orifice
                  diameter from 0.0 to 542. The relative error in the
                  table is less than 3.5 x 10^-6. The table is used to
                  define the relation between storage and outflow for
                  the pond. Default = 10001.

LUI               Land-use index for the impervious area. Needed to
                  determine where the impervious area appears in the
                  sequence of land uses given for each gage in the
                  input. If the impervious area appears first, then
                  LUI = 1. The value of LUI must be constant for all
                  input of tributary area for an FEQ model.
                  Default = 1.

UADV_TAB          Unit-area detention volume table number. This is
                  table number 10000, already mentioned above.
                  Default = 10000.

The default values may be changed by the user.

5. The following steps define the characteristics of the detention storage (either default values or user-given values may be used):

5.1 Compute the sum of areas for the land uses subject to detention to obtain the total area subject to detention.

5.2 Compute the fraction of imperviousness for the area subject to detention.

5.3 Using the impervious fraction and the value of unit-area flow, UAQ, look up in the unit-area detention volume table, given by UADV_TAB, the unit-area storage needed.

5.4 Compute the volume of storage needed by multiplying the unit-area storage found in step 5.3 by the value of AVDA, the drainage area for the average detention pond.

5.5 Assume that the pond is an inverted frustum of a cone.
That is, the basin is circular in plan and has a flat bottom or conservation pool. The volume computed in step 5.4 must be contained by this frustum over a depth given by YD, the design depth, with the basin side slope given by BSS. The volume contained in the frustum with the values given yields a quadratic equation in the radius of the conservation pool. Solve for the radius of the conservation pool. This value then defines the volume and surface area of the pond as a function of the depth of water in it; the volume is a cubic function of the depth.

5.6 Find the design discharge from the product of AVDA and UAQ. Then compute the diameter of the orifice such that the orifice function will give the design discharge when the head on the invert of the orifice is YD. Note that this value can deviate from the traditional Q = PI*D^2/4*SQRT(2*GRAV*YD) value because the flow through the orifice deviates from this relation when the relative head is less than 4 or 5. Orifice diameters can become as large as 20 percent of the design depth.

5.7 Compute a table of type 4 that gives the flow out of the pond as a function of the storage in the pond. This computation is done at 50 different levels extending from 0 to 1.5*YD. The storage in the pond is assumed to follow the same function even though the design depth is exceeded. We do this because ponds often have banks of irregular elevation and because the overflow weir has a large capacity; thus, it is unlikely that a detention pond will ever have a large head on the overflow spillway.

The table resulting from step 5.7 is then used to define the response of the pond to inflows. An example input fragment from the Tributary Area Input block shows how detention is requested. The line numbers have been added for reference here. Note that column 1 of the input is at the "B" in "BRANCH". The tributary area for the branch is given first.
If detention storage is requested, the next line should start with the name "DTEN" as shown. This name is followed in this example by the identifier name "FRACTION", which defines what fraction of the tributary area for this branch is subject to detention. Here one-half of the tributary area will be subject to detention and one-half will not. The land-use distribution in the two areas remains the same. If FRACTION= 1.0, then the entire tributary area is subject to detention.

. . .
01 BRANCH= 2 FAC=1.
02 NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
03    0    2 0.150 0.000 0.390 0.000 0.030 0.000
04 DTEN FRACTION=0.5
05 BRANCH=...
. . .

If you wish to specify a detailed distribution of land uses in the area subject to detention, then use one of the following two formats:

. . .
01 BRANCH= 7 FAC=1.
02 NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
03    0    2 1.180 0.640 0.650 0.000 0.200 0.000
04 DTEN      -0.50 -0.25 -0.15  0.00 -0.05 0.000
05 BRANCH= 8 FAC=1.
06 NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
07    0    2 0.204 0.098 0.283 0.024 0.032 0.000
08 DTEN       0.12  0.04  0.15  0.01  0.02 0.000
. . .

Lines 1-4 show the usual tributary-area input, which provides the TOTAL tributary area. Line 03 is the total tributary area, both detention and non-detention. Line 04 is the DTEN line, which indicates the area subject to detention by giving the negative of the areas that are subject to detention. FEQ will then assign the areas accordingly, deducting the area subject to detention from the total tributary area to find the area that is not subject to detention. Lines 5-8 show the usual tributary-area input giving the area that is not subject to detention, followed by the DTEN line that defines the area subject to detention. In this case the areas on the DTEN line are positive or zero. Line 07 shows the tributary area without detention; line 08 gives the tributary area with detention.
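Steps 5.4 through 5.6 above amount to a quadratic solve for the conservation-pool radius followed by an orifice computation. The following Python sketch illustrates them under stated assumptions: the function names and sample values are hypothetical, and the simple free-orifice relation is used in place of FEQ's dimensionless orifice table (table 10001), so this is an illustration of the geometry, not FEQ's actual implementation.

```python
import math

# Hedged sketch of pond-sizing steps 5.4-5.6, NOT FEQ's own code.
ACRE_SQFT = 43560.0   # square feet per acre
GRAV = 32.2           # ft/s^2

def pool_radius(volume_cuft, yd, bss):
    """Radius (ft) of the conservation pool such that an inverted conical
    frustum with bottom radius r, side slope bss (horizontal per unit
    rise), and depth yd holds volume_cuft.  Expanding the frustum volume
    gives V = pi*yd*(r**2 + bss*yd*r + (bss*yd)**2/3), a quadratic in r;
    take the positive root."""
    a = math.pi * yd
    b = math.pi * yd**2 * bss
    c = math.pi * yd**3 * bss**2 / 3.0 - volume_cuft
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def orifice_diameter(q_design, yd, cd=0.6):
    """Diameter (ft) from the traditional free-orifice relation
    Q = cd*(pi*D**2/4)*sqrt(2*g*YD).  As noted in step 5.6, FEQ's
    table-based function deviates from this when YD/D is below 4 or 5."""
    return math.sqrt(4.0 * q_design / (cd * math.pi * math.sqrt(2.0 * GRAV * yd)))

# Example with assumed values: AVDA = 40 acres, unit-area storage of
# 0.33 acre-ft/acre (the DETAIN.TAB example above), YD = 5 ft, BSS = 4,
# UAQ = 0.1 ft^3/s/acre.
volume = 0.33 * 40.0 * ACRE_SQFT            # step 5.4: required storage, ft^3
r = pool_radius(volume, yd=5.0, bss=4.0)    # step 5.5: pool radius
d = orifice_diameter(0.1 * 40.0, yd=5.0)    # step 5.6: orifice diameter
```

With BSS = 0 the quadratic degenerates to a cylinder, which gives a quick check of the formula; the positive root is always the physically meaningful one because the constant term is negative for any realistic volume.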
If you want to change one or more of the values that define the detention, and these changes are to apply to only one tributary area, then give the PARM line followed by one or more of the identifiers given above. All of the identifiers must fit on the PARM line; the line can have 112 characters on it. These parameter values will apply only to the detention reservoir defined on the line above the PARM line. The PARM line must come AFTER the DTEN line.

. . .
BRANCH= 1 FAC=1.
NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
   0    2 0.060 0.000 0.250 0.000 0.090 0.000
DTEN FRACTION=0.7
PARM UAQ=0.05 AVDA= 15. YD=3.0 WS=25. BSS=3. OFWC=0.8
. . .

If you want the changes to the parameters defining a detention pond to apply to all subsequent detention ponds in the input, then put the DEF line, for default, BEFORE the first detention specification to which it is to apply. All detention ponds given after this point will have these changed values.

. . .
BRANCH= 1 FAC=1.
NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
   0    2 0.060 0.000 0.250 0.000 0.090 0.000
DEF UAQ=0.05 AVDA= 15. YD=3.0 WS=25. BSS=3. OFWC=0.8
DTEN FRACTION=0.7
. . .

Note that the key words DEF, DTEN, and PARM must all begin in column 1 of their line.

--FEQ supports linear delay of the flows computed from tributary area. These flows are computed from the product of an average unit-area runoff intensity and the tributary area. Thus, the sequence of average or mean flows is a step function, with the abrupt changes in the step function acting as small but abrupt dam-break flood waves imposed on the flow in the branches of the model. In order to smooth this function and thus improve the computational characteristics, the option to provide a linear delay of flows to a given branch has been developed. This linear routing can approximate the natural delay and attenuation of flows present in the physical system at a level of detail beyond what can be simulated with distinct reservoirs and branches.
The user is referred to a study done at Purdue University and reported in Rao, R.A., Delleur, J.W., and Sarma, B.S.P., 1972, "Conceptual hydrologic models for urbanizing basins," Journal of the Hydraulics Division, American Society of Civil Engineers, vol. 98, no. HY7, pp. 1205-1220, for an example of measured tributary lag times. Values derived from this study, shown below, are available in FEQ, in addition to the option to provide user-determined estimates.

Lag times in hours (derived from Rao and others, 1972)

            Impervious cover as decimal fraction
  Area   --------------------------------------------
 (mi^2)    0.00  0.08  0.25  0.50  0.75  1.00
 ------    ----  ----  ----  ----  ----  ----
  0.10     0.25  0.22  0.18  0.14  0.11  0.09
  0.50     0.56  0.50  0.41  0.31  0.25  0.21
  1.00     0.80  0.72  0.58  0.45  0.36  0.30
  2.50     1.28  1.15  0.93  0.72  0.58  0.48
  5.00     1.83  1.64  1.33  1.02  0.82  0.68

The delay is specified in a manner similar to detention. The delay-specification line begins with DLAY as its first characters, starting in column 1. This is followed either by an explicit specification of the delay time in hours or by requesting an equation by name to compute the delay time from the tributary area and the fraction of impervious cover. The first case is shown here; the identifier to give an explicit delay time is K.

BRANCH= 2 FAC=1.
NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
   0    2 0.150 0.000 0.390 0.000 0.030 0.000
DTEN FRACTION=0.5
DLAY K= 1.0

In this case, the delay time will be applied to BOTH the area subject to detention and the area not subject to detention. The identifier to give an equation name is KEQ.

BRANCH= 2 FAC=1.
NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
   0    2 0.150 0.000 0.390 0.000 0.030 0.000
DTEN FRACTION=0.5
DLAY KEQ= PURDUE

In this case, PURDUE is the name of the equation, derived from Rao and others as discussed above, for computing the delay time in FEQ.
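The smoothing that linear delay provides can be illustrated with a linear-reservoir routing step. This is a conceptual sketch only: FEQ's actual discretization is not documented here, and the function name and sample numbers are assumptions. A linear reservoir satisfies K*dQ/dt = I - Q, where K is the delay time, so a stepwise-constant inflow produces a smooth exponential rise and recession instead of abrupt jumps.

```python
import math

# Conceptual sketch of linear-reservoir delay (NOT FEQ's actual code).
# For inflow I held constant over a step of length dt, the exact update
# of K*dQ/dt = I - Q is  Q(t+dt) = I + (Q(t) - I)*exp(-dt/K).
def delay(inflows, k, dt, q0=0.0):
    """Route stepwise-constant mean inflows through a linear reservoir
    with delay time k (same time units as dt)."""
    decay = math.exp(-dt / k)
    q, out = q0, []
    for inflow in inflows:
        q = inflow + (q - inflow) * decay
        out.append(q)
    return out

# A step-function inflow of 10 units for 5 hours followed by zero becomes
# a smooth rise toward 10 and then a smooth recession:
hydrograph = delay([10.0] * 5 + [0.0] * 5, k=1.0, dt=1.0)
```

The larger K is, the more the step edges are smeared out, which is exactly the behavior wanted to suppress the small dam-break-like waves described above.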
If a different delay time is needed for each of the two areas, detention and non-detention, then the key word for the line differs. DLAY_DTA is the keyword that selects the detention tributary area; DLAY_NDTA selects the non-detention tributary area.

BRANCH= 1 FAC=1.
NODE GAGE IMPRV FGRSS MGRSS SGRSS  FRST AGRIC
   0    2 0.060 0.000 0.250 0.000 0.090 0.000
DTEN FRACTION=0.7
PARM UAQ=0.15 AVDA= 15. YD=3.0 WS=25. BSS=3. OFWC=0.8
DLAY_DTA K= 1.0
DLAY_NDTA K= 2.0

An equation can also be selected using the KEQ identifier for either or both of the areas.

--A new option for controlling a gate in the Operation Control Block has been added: the GATETABL option. This option allows direct control of the opening of the gate as a function of the water-surface elevation at three exterior nodes. The three nodes are as follows:

control node    -- the location giving the water-surface elevation that
                   is being sensed and/or controlled. An example
                   location is downstream of a flood-control reservoir,
                   to sense flood stage in the stream. Another location
                   that could be used as a control would be upstream of
                   the flood-control reservoir, to sense when a large
                   flow has appeared.

upstream node   -- the node that is upstream of the gate or gate set
                   that is being operated.

downstream node -- the node that is downstream of the gate or gate set
                   that is being operated.

The upstream and downstream nodes should be those that define the flow through the 2-node control structure. The control node can be the same as the upstream or downstream node, but it need not be. GATETABL uses one or more 2-D tables of type 10 to define the relative gate opening. A relative gate opening of 0.0 means that the gate is closed, and a value of 1.0 means that the gate is open as wide as physically or administratively allowed. The argument on the rows of the table of type 10 is the head at the control node. The datum for head is defined in the table of type 10 as an optional input.
The argument for columns of the type 10 table is the elevation difference in the sense of elevation at the upstream node less the elevation at the downstream node. The following table is an example illustrating the format of a gate control table. The column with the heading of Salt is the flood stage at the control node. The datum for flood stage is 673.5 ft. This table is part of an application to an offline reservoir. The upstream node in this case is in the stream, and the downstream node is in the reservoir. The goal of operation is to divert water into the reservoir to reduce flood stages and also to drain some water by gravity out of the reservoir when the stream is below flood stage.

The table has four subregions. The line of zero head for flood stage and the line of zero difference of elevation form the boundaries of these subregions. The upper left subregion has the stream below flood stage and the stream below the elevation in the reservoir. This is the gravity drainage part of the table. The upper right subregion has the stream above the reservoir and the stream below flood stage. The gates should be closed because it makes no sense to drain water below flood stage and needlessly add water to the reservoir that must be pumped out later. The lower left subregion has the stream above flood stage and the reservoir above the stream. The gates should be closed because any release could make the flood worse than it would be without the reservoir. The last subregion, the lower right, has the stream above flood stage and the reservoir below the stream. This is the region of gravity inflow, and the gates will generally be open.
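The four subregions amount to a sign test on the two table arguments. The following sketch is illustrative only (the function and its labels are not part of FEQ) and assumes the offline-reservoir orientation described above, with the upstream node in the stream and the downstream node in the reservoir:

```python
def gate_regime(flood_head, elev_diff):
    """Classify the subregion of a GATETABL control table.

    flood_head -- stage at the control node minus flood stage, ft
    elev_diff  -- upstream (stream) elevation minus downstream
                  (reservoir) elevation, ft
    """
    if flood_head < 0.0:
        # Stream below flood stage: drain by gravity only when the
        # reservoir stands higher than the stream.
        return "gravity drainage" if elev_diff < 0.0 else "closed"
    # Stream at or above flood stage: divert by gravity only when the
    # stream stands higher than the reservoir.
    return "gravity inflow" if elev_diff > 0.0 else "closed"
```

The gate is typically open only in the gravity-drainage and gravity-inflow subregions; the other two quadrants keep it closed for the reasons given above.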
TABLE#= 5300
TYPE= 10
; HDATUM gives the datum for the left-hand column: in this case the flood stage
HDATUM= 673.5
LABEL= Dominant table for lower slide gates
; left hand column contains the head above flood stage for Salt Creek
; top row contains the difference: Salt Creek - reservoir
  Salt -20.0  -1.0 -.250 0.250   1.0   2.0  50.0
  -5.0  0.15  0.15   0.0   0.0   0.0   0.0   0.0
  -3.0  0.15  0.15   0.0   0.0   0.0   0.0   0.0
  -2.0  0.10  0.10   0.0   0.0   0.0   0.0   0.0
  -0.5  0.10  0.10   0.0   0.0   0.0   0.0   0.0
 -0.10   0.0   0.0   0.0   0.0   0.0   0.0   0.0
  0.10   0.0   0.0   0.0   0.0   0.0   0.0   0.0
   0.5   0.0   0.0   0.0   0.0  0.10  0.10  0.10
  15.0   0.0   0.0   0.0   0.0  0.10  0.10  0.10
  -1.0

This table has a small region of gate closure near the points of zero flood stage or zero elevation difference. This region can be used to avoid long-duration trickles of water entering or leaving the reservoir by gravity. The region can provide a null zone or dead zone so that changes in water level at the control node resulting from the initiation of pumping do not cause the gate to open.

The values in the current table are not typical of the values in an actual application. These values have been picked to test the basic functioning of the software. However, the table shows the basic pattern that all tables will take. The regions showing the gates closed could instead have the gate open a small amount to simulate the effect of a leaky gate. Gate leakage may add to the burden of pumping the reservoir during long periods of flow below flood stage.

The number of columns in the table is limited at present to a total of 20. One of these columns is for the argument on rows, and the others are for the data. The default format is 6 characters per column. The line width for reading these tables is set to 120 characters. All the values for a row of the table must appear on a single line.

An example input in the Operation Control Block is shown here.

01 BLK=00002
02 BLKTYPE=GATETABL
03 MINDT=3600.
04 PINIT=0.0
05 CTLN UPSN DNSN TABN  DOM  AUX
06  U20  U20  F29 5300  MAX  MAX
07   -1

This input is a simple example of a control block for a gate. Line 2 defines the type of control block. Line 3 gives the minimum time, in seconds, that must elapse before the gate setting will be changed. Line 4 is the initial gate opening at the start of the simulation. This will be overridden by the initial elevations at some future time because we can look up the gate opening in the control table for GATETABL. Line 5 is a line of mandatory headings. Line 6 gives the dominant control table. The first control table given is always taken to be the dominant table. There may be one or more auxiliary tables as well, but this example has none. If present, auxiliary control tables would appear on line 7 and later, and the columns under the headings DOM and AUX would be left blank. The three exterior nodes given for each control table appear in the order of CTLN (control node), UPSN (upstream node), and DNSN (downstream node). The control table number appears next, followed by the action rules of the control tables under the columns headed by DOM and AUX.

The action rule of a table defines how FEQ will use the gate opening found in the table. If the action rule for the dominant table is MAX, then under no conditions will the gate opening ever be larger than that given by the dominant table. If the action rule for the dominant table is MIN, then under no conditions will the gate opening be smaller than that given by the dominant table. The action rule for the auxiliary tables is defined by the rule given for them together with the rule for the dominant table. If the action rule of the auxiliary tables is MAX, then the maximum of all gate settings among the auxiliary tables is selected. On the other hand, if the action rule for the auxiliary tables is MIN, then the minimum setting among all the auxiliary tables is selected. This rule only determines which auxiliary table provides the gate setting.
The setting defined by the operation of the action rule on the auxiliary tables is then tested against the setting for the dominant table to finally select what opening to use for the next time step. In general, the control node for the dominant table will be one of the upstream or downstream nodes for the structure being controlled. The auxiliary tables permit gate operations to be initiated from one or more nodes that are at a greater distance away. In this way, some anticipation of flows that are en route to the dominant control point can be built into the gate operation.

The contents of the control tables must be designed with the particular structure and stream in mind. A range of trial runs may be required to define the table such that the benefit of the flood-reduction structure is made as large as possible. Once such tables are constructed, they could also be used to guide or suggest the actual operation of the gates based on what worked well in the past. This could be a good starting point in operating the gates during an ongoing flood, even if simulation of the flood in real time is not done. The elevations at the three nodes would have to be made available in near real time, and then the tables could be used manually or in a small interactive computer program to yield recommended gate settings.

Version 9.01 October 15, 1997

--Corrected problem with detection of the DEFINE MACROS block to terminate tributary input when one or more branches without tributary area were left out of the input.

--Modified summary output of tributary area so that negative tributary area is not included in the basin totals. Negative area is still listed in the summary for each gage, but the total tributary area given for each gage and for the basin will be the total of all positive area given in the input.

--Removed some overlooked debugging output in processing macro instructions.
--Removed output of detention and delay summary headings when no detention or delay was present.

Version 9.02 October 17, 1997

--Found cases in which the NEW NETWORK option would not properly process the terminating -1 at the end of the Network-Matrix-Control Input. An incorrect count could result in an input format error and generate an error message.

Version 9.03 October 20, 1997

--Changed XSECIN, in COMPROG.FOR, to read both the old and new formats for interpolated cross-section function tables.

--Corrected a problem that occurred when the tributary area was zero in the delay computations.

Version 9.05 December 10, 1997

--Corrected various format statements that contained a backslash instead of a forward slash. Found in compiling FEQ for a UNIX workstation.

--Updated the version string for HECDSS files so that the correct version will be reported in the future when the version changes.

--Added dual-source irrigation control tables. Dual-source irrigation assumes that the rainfall-runoff model providing the unit-area runoff intensity files also supplies a time series of the unit-area intensity of irrigation application. The rainfall-runoff model assumes that water for irrigation is always available to meet the irrigation demands. The name dual-source irrigation denotes that there are two sources for this irrigation water: near-surface and imported water. Near-surface water comes from surface-water bodies or shallow ground water that is hydraulically connected to surface-water bodies. Imported water comes from outside the watershed or from deep ground water that is not hydraulically connected to the near-surface sources. The control in FEQ is used to specify which of the two sources is used. This is accomplished by giving a control table that specifies the proportion of water coming from near-surface sources as a function of the water-surface elevation at an exterior node that reflects the level of the near-surface water source.
If all applied irrigation water comes from the near-surface-water source, then the value of the function is 1.0. If no water comes from the near-surface-water source, then the value of the function is 0.0. Values of the function less than 1.0 but greater than 0.0 specify a blend of the two water sources.

The irrigation input block must appear immediately following the tributary-area input block. A control table may be specified for the area tributary to a level-pool reservoir, a boundary node, or a branch. The control table applies to the entire branch; control tables cannot be given for an element on a branch.

Withdrawal of water is simulated in FEQ by using a negative area to compute a negative lateral inflow to the flow path. One of the land uses in the area tributary to the flow path must contain the area that is receiving the irrigation water. However, this area is used to compute the runoff, not the irrigation. To compute the irrigation withdrawal, FEQ multiplies the irrigated area by the negative of the near-surface fraction taken from the control table for this flow path. This area is treated like another land use, and the user must include this land use in the total land-use count. The unit-area irrigation intensity value from the time series computed in the rainfall-runoff model is then multiplied by this negative area to obtain the withdrawal of water from near-surface sources.

An example input appears below together with the input for tributary area. In this case, all of the tributary area was attached to level-pool reservoirs. The areas are given in tenths of an acre, so the factor of 4356.0 converts them to square feet. The value of SFAC was 1.0, so stations were given in feet. There are seven land uses, including the one for computing irrigation: Irrig. Note that the area for the irrigation computation is given as 0. This area will be supplied by the computations in the IRRIGATION block.
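The withdrawal computation just described reduces to two multiplications; a minimal sketch follows (the function and argument names are illustrative, not FEQ internals):

```python
def near_surface_withdrawal(irrigated_area, near_surface_fraction,
                            irrigation_intensity):
    """Lateral inflow representing an irrigation withdrawal.

    FEQ multiplies the irrigated area by the negative of the
    near-surface fraction from the control table, and the resulting
    negative area by the unit-area irrigation intensity from the
    rainfall-runoff model. A negative result withdraws water from
    the flow path.
    """
    return irrigated_area * (-near_surface_fraction) * irrigation_intensity
```

For example, with all water taken from near-surface sources (fraction 1.0), an irrigated area of 1000 ft^2 and a unit-area intensity of 0.5 give a lateral inflow of -500, that is, a withdrawal; with fraction 0.0 nothing is withdrawn because all water is imported.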
The land use subject to irrigation in this case is Grove. Its land-use index is 3 because it is the third land use given, counting left to right. The land-use index (LUI) for Irrig is 7.

01 BRANCH= 0 FAC=4356.0
02 NODE GAGE UbImp UbPer Grove Pastr Forst WetLd Irrig
03 F200    6  3725  5587     0     0   418   997     0  A
04 F201    6  1202  1802     0     0     7   382     0  B
05 F202    6   692  1037     0 21521  3369 15315     0  C1 C2 C6 C7
06 F232    6     0     0 12679     0     0     0     0  C1 C2 C6 C7
07 F203    5   121   181     0  5585   150  1134     0  C4 C5
08 F233    5     0     0 12538     0     0     0     0  C4 C5
09 F204    5   148   221     0    31   480  1324     0  C3
10 F234    5     0     0 14825     0     0     0     0  C3
11 F205    2  3402  5103     0     0 19039   800     0  D
12 F235    2     0     0  3483     0     0     0     0  D
13 F206    5     0     0  2942    16     0   243     0  E
14 F207    5    17    26  3657     0     0   109     0  F
15 F208    5     0     0     0     0   255  3327     0  G
16 F238    5     0     0  6098     0     0     0     0  G
17 F209    5   288   432     0 34618  9062  4534     0  H1 H2 K3
18 F239    5     0     0  5143     0     0     0     0  H1 H2 K3
   .
   .
   .
19 F270    3    50    75 82363  1807   110  2625     0  NFW6NFW7NFW8NFW9
20 -1
21
22 IRRIGATION
23 Node/  LUI  LUI   H2O  Cntl
24 Bran  Irig Comp  Node  Tab#
25 F232     3    7  D335   100
26 F233     3    7  D335   100
27 F234     3    7  D317   100
28 F235     3    7  D323   105
29 F206     3    7  D320   106
30 F207     3    7  D311   100
31 F238     3    7  D305   100
32 F239     3    7  D305   100
   .
   .
   .
33 F270     3    7  D221   100
34 -1

The irrigation block has a heading of IRRIGATION starting in column 1. This heading is followed by two heading lines defining the contents of the subsequent columns. The first column gives the exterior node id or the branch number that has some or all of its tributary area subject to irrigation. The second column gives the land-use index of the land use that is subject to irrigation. In the current example, this number is always 3. The third column gives the land-use index for the computation of the near-surface withdrawal of irrigation water. This number is always 7 in this example. The fourth column gives the exterior node id that defines the water-surface elevation controlling the withdrawal of near-surface water.
The fifth and last column gives the table number of the function table containing the fraction of the irrigation withdrawal that comes from near-surface sources. Two examples of these tables, taken from the FUNCTION TABLES block, are given here. Table 100 is used for most of the irrigated area. When the water-surface elevation is at 15.0 ft or below, no near-surface water is used for irrigation. All irrigation water then comes from other sources. These other sources represent an addition of water to the basin. At an elevation of 15.25 ft, one-half the water for irrigation comes from near-surface sources and one-half comes from elsewhere.

TABLE#=    100
TYPE=   -2
REFL=0.0
ELEVATION FRACTION Control table for irrigation from project canal.
    -15.0      0.0
     15.0      0.0
     15.5      1.0
     50.0      1.0
     -1.0

TABLE#=    105
TYPE=   -2
REFL=0.0
ELEVATION FRACTION Control table for F205
    -15.0      0.0
     17.0      0.0
     17.5      1.0
     50.0      1.0
     -1.0

Table 105 applies to the area attached to the level-pool reservoir with a downstream node of F205. Near-surface water becomes unavailable at source levels below 17.0 ft. At 17.5 ft all irrigation water is obtained from near-surface sources.

Version 9.05 (Same version as above) January 5, 1998

--Investigated the use of directory or file names longer than the DOS limit of 8 characters. The following behavior was found: Extended-DOS executables fail to interpret long file names properly; the file name and all directory names must not be longer than 8 characters. Windows NT executables do interpret long file names properly. However, these executables continue to run more slowly than extended-DOS executables when both are run under Windows NT. In some cases, the extra run time approaches 50 percent of the extended-DOS run time.

Version 9.06 February 3, 1998

--Added a new option to GATETABL. If the downstream node is left blank in the Operation Control Block, then both arguments in the 2-D table of type 10 are heads relative to the head datum given in the header block for the table.
--An analysis of non-convergence events is now appended to the output. This analysis was added to FEQ in a prior version but was not described in the release notes at that time. FEQ counts the number of times that each node appears as the location of maximum relative error in the last iteration of a time step that fails to converge. Thus, if all time steps converge, no analysis is given. The following example shows what is reported. The location is given for each occurrence as well as the number and fraction of the total number for each location. This example shows that only 6 time steps failed to converge with the selected time step.

          Analysis of Non-Convergence Events

 Exterior nodes appearing as last location of maximum relative correction.

   Node  Count  Fraction

 Branch nodes appearing as last location of maximum relative correction.

 Branch   Node  Count  Fraction
     93   9333      1      0.17
     90   9010      1      0.17
     89   8909      1      0.17
     79   7913      1      0.17
     79   7914      1      0.17
     79   7915      1      0.17

The analysis is useful in troubleshooting large models with hundreds or even thousands of convergence failures. Nodes that account for a larger proportion of a large number of failures should be the focus of troubleshooting efforts, because there is probably something at or near the node in question that requires closer examination.

Version 9.07 February 27, 1998

--Added additional checking for tributary-area input. Heading lines for the definition of the distribution of tributary area must now have as their first non-blank information either NODE or USTAT. Any deviation from these two standard labels will cause an error, and the processing will stop. This has been done because omitting a heading line in past versions did not cause an error. This was true of the reservoir input and the station-range input. FEQ would merely treat the first line of numerical input as a label and continue processing as if everything were in order. The only signal of trouble would be in the results or in a shortfall in the total tributary area.
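The new heading check amounts to testing the first non-blank item on the heading line; a minimal sketch follows (the function name and return convention are assumptions for illustration, not FEQ's input routine):

```python
def valid_trib_heading(line):
    """Return True when the first non-blank item on a tributary-area
    heading line is one of the two standard labels, NODE or USTAT."""
    fields = line.split()
    return bool(fields) and fields[0] in ("NODE", "USTAT")
```

A heading such as "NODE GAGE IMPRV ..." passes; a line of numerical input, which past versions silently treated as a label, fails the check.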
Version 9.08 March 8, 1998

--Warning 22, discontinuous flow at a flow boundary, was issued in error. Changes to flow-boundary handling when the detention and delay reservoirs were added caused this bug. The code has been corrected so that the proper boundary-flow value is used in the checking.

Version 9.09 April 2, 1998

--Added an additional option to the generic underflow gate, Code 5 Type 9, command for the Network Matrix Control Input. The addition is an optional gate-efficiency factor table and the flow node used for the lookup in the table. This was added for the weir-gate on the Elmhurst Quarry diversion weir in order to vary the gate capacity in keeping with the physical-model test results. When water flows over the diversion weir, apparently the approach flow to the gate is distorted such that the flow through the gate is reduced significantly. The efficiency of the gate decreases as the flow over the weir increases. Eventually, at close to maximum flow over the weir, the gate allows water to flow back into Salt Creek. The added input is in integer positions 9 and 10. The fragment below shows the format.

     This is integer position 9 and it contains the table
     number of the gate-efficiency factor function table.
                                 |
 5 9 F56 F95 F56 001ELMG 550 550 40 F96 660.00 7.0
                                     |
     This is integer position 10 and it contains the
     exterior node id that defines the flow used to look
     up the gate-efficiency factor.

A gate-efficiency factor table example is:

TABLE#=     40
TYPE=   -2
REFL=0.0
DivertedQ Gatfactor Gate-efficiency factor for Quarry weir-gate
   -100.0       1.0
      0.0       1.0
    350.0       1.0
    560.0      0.37
   1420.0      0.26
   1490.       0.24
   1660.        0.0
   5000.        0.0
     -1.0

In this case, the argument is the total diverted flow, including both the flow over the weirs and the flow through the gate. The gate can discharge a maximum of about 350 cfs before water flows over the weirs. Notice the sharp decline in gate efficiency when flow over the weirs begins.
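The lookup in a type -2 table such as Table 40 is a piecewise-linear interpolation on the argument. The sketch below hardcodes the breakpoints from the example table; the function name and the behavior beyond the end breakpoints are assumptions for illustration, not FEQ's actual table-lookup routine:

```python
import bisect

# Breakpoints from the example gate-efficiency table (Table 40):
# argument is total diverted flow in cfs, value is the efficiency factor.
FLOWS =   [-100.0, 0.0, 350.0, 560.0, 1420.0, 1490.0, 1660.0, 5000.0]
FACTORS = [   1.0, 1.0,   1.0,  0.37,   0.26,   0.24,    0.0,    0.0]

def gate_efficiency(diverted_q):
    """Linearly interpolate the gate-efficiency factor for a flow."""
    i = max(0, min(bisect.bisect_right(FLOWS, diverted_q) - 1,
                   len(FLOWS) - 2))
    t = (diverted_q - FLOWS[i]) / (FLOWS[i + 1] - FLOWS[i])
    return FACTORS[i] + t * (FACTORS[i + 1] - FACTORS[i])
```

The factor stays at 1.0 up to about 350 cfs and then drops sharply once flow over the weirs begins, reaching zero at 1660 cfs.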
The negative argument is used to make sure that a small negative flow, which could arise via roundoff, does not cause an error condition. The high flow of 5000 cfs is present to prevent table overflow. The gate factor is fixed during a time step at the value of the diverted flow that existed at the start of the time step. This avoids feedback onto the gate during the solution process. Given the time steps normally used and the flow variation normally encountered, this assumption should not distort the results significantly.

Version 9.10 April 15, 1998

--Added an additional check for cross-section table numbers to detect a missing table at the last node on a branch. Existing checks did not flag this as an error, and the program later failed with a subscript-out-of-range system error.

Version 9.11 June 5, 1998

--Added a new item to the Run Control Block, TAUFAC, a tributary-area unit factor, to allow FAC in the Tributary-Area Block to be used for purposes other than unit conversion. Many users have already used it in such cases. TAUFAC follows immediately after SFAC. If not present, FEQ will assign a value of 1.0 to TAUFAC. However, if it is present, it must be spelled exactly as given here. Errors in spelling will result in a failed run or some other incorrect result. An input fragment follows:

 .
 .
 .
MAXIT= 30
SFAC=1.0 TAUFAC=5280.0
QSMALL=10.0
IFRZ=00009
 .
 .
 .

The rule for setting the value of TAUFAC is as follows:

1. Determine the linear factor for tributary area required to convert it to the internal units used by FEQ: square feet if GRAV is near 32.2 and square meters if GRAV is near 9.8. For example, if the tributary area is in square miles, then the linear factor is 5280; 5280^2 then gives the conversion factor to square feet. If the tributary area is in acres, then the linear factor is 208.71033, that is, the square root of 43,560.

2. Divide the linear factor for tributary area by SFAC. This gives the value of TAUFAC.
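The two-step rule, combined with the internal-area formula (user input value)*SFAC^2*TAUFAC^2*FAC stated in the notes, can be sketched as follows (the function names are illustrative, not FEQ code):

```python
def taufac(linear_factor, sfac):
    """Steps 1-2 of the TAUFAC rule: divide the linear conversion
    factor for the tributary-area unit by SFAC."""
    return linear_factor / sfac

def internal_area(user_area, sfac, tau, fac=1.0):
    """Internal tributary area in ft^2 (or m^2):
    (user input value) * SFAC^2 * TAUFAC^2 * FAC."""
    return user_area * sfac**2 * tau**2 * fac
```

For example, with tributary area in square miles and stations in feet (SFAC = 1.0), TAUFAC is 5280, and an input of 1 mi^2 becomes 27,878,400 ft^2 internally.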
Here are some typical examples:

  Trib        Trib Area
  Area Unit   Linear Factor    SFAC     TAUFAC
  ---------   -------------   ------   --------
  mi^2           5280.0          1.0    5280.
  mi^2           5280.0        5280.       1.0
  mi^2           5280.0         100.      52.80
  mi^2           5280.0        1000.      5.280
  acres           208.71+      5280.    0.03953
  acres           208.71+      1000.    0.2087+
  acres           208.71+       100.    2.0871+
  acres           208.71+         1.    208.71+

The above values assume that FAC is 1.0 or has a value unrelated to the conversion of units. The logic for these values is as follows:

1. FEQ ALWAYS multiplies the tributary area values given by the user by the square of SFAC. The default assumption is that the area unit for tributary area is given by a square that is SFAC by SFAC feet (meters) in size. Thus, if SFAC is 10, the area units are assumed to be 100 ft^2.

2. The internal value of tributary area used by FEQ is (user input value)*SFAC^2*TAUFAC^2*FAC. This number should be the area in square feet (square meters).

This addition permits a free choice of the values of FAC, SFAC, and the units for tributary area. This addition gives considerable power but must be used correctly. Be sure you have the correct values for SFAC, TAUFAC, and FAC for your application. IT IS THE USER'S RESPONSIBILITY TO UNDERSTAND WHAT THESE THREE FACTORS DO AND TO PICK THE CORRECT VALUES.

Version 9.15 June 18, 1998

--A number of changes were made in the way the various source files are arranged:

1. The COMPROG directory was changed to SHARE.
2. COMPROG.FOR was broken into many smaller units.
3. ARSIZE.PRM for FEQ and FEQUTL were combined into one file.
4. Several .COM files between FEQ and FEQUTL were the same or nearly so. These files were adjusted to be the same and moved to the SHARE directory to be used by both FEQ and FEQUTL.
5. One bug was found in FEQ that occurred with a DG compiler. No other compiler encountered the problem.

These tasks were done by RS Regan, USGS, Reston. I have made check runs of the modified code and have found no differences. However, there may be options not tested that could cause errors.
Version 9.18 14 July 1998

--The GATETABL option in the Operation Control Block has been modified to permit specification of a control-table selection table. This makes it possible to vary the control table with time to reflect differing flood sizes and characteristics. A selection table is specified whenever the table number in the control-table input location is negative. The selection table must contain the table numbers for the control tables that are to be used. Here is an example of the input and the various tables:

.
.
.
BLK=00003
BLKTYPE=GATETABL
MINDT=60.0
PINIT=0.0
CTLN UPSN DNSN  TABN  DOM  AUX Lower gates for WDIT pumped LPR
 F28  F28  F32 -9600  MAX  MAX
-1
.
.
.

This portion of the Operation Control Block input shows a negative table number, -9600. An example of this table (with added line numbers) is shown below.

01 TABLE#= 9600
02 TYPE= -7
03 REFL=0.0
04 YEAR MN DY     HOUR  Selection table for low-level gates.
05 1925  1  1 0.000000  9601.1
06 1949  6 12 0.000000  9601.1
07 1949  6 12 0.250000  9602.1
08 1950  3  5 0.000000  9602.1
09 1950  3  5 0.250000  9603.1
10 1951  2 15 0.000000  9603.1
11 1951  2 15 0.250000  9604.1
   .
   .
   .

Line 05 is the start time of the dummy event in the TSF. Line 06 indicates that 9601 is the low-level control table for the dummy event. Line 07 indicates that 9602 is the low-level control table for the first real event. In line 09, note that each event except the dummy event uses the control table from the previous event for the first 15 minutes.

   1988  4  1  0.000000  9664.1
   1988  4  1  0.250000  9665.1
   1988  9 11  0.000000  9665.1
   1988  9 11  0.250000  9666.1
   1988  9 15 23.000000  9666.1
   1925  1  1  0.000000     0.0

The reverse time step signals the end of the table. Examples of the control tables are shown below.

TABLE#= 9601
TYPE= -10
'( 13A6)' '(1X, 13A6)' '(F6.0)' '( 13F6.0)' '(1X,F6.1, 12F6.2)'
HDATUM=  673.500
LABEL=Low Level Event Start: 1925  1  1  0.00000
Fstage-9999.  0.00  0.10 50.00
-99.00   0.0   0.0   0.0   0.0
 -4.17   0.0   0.0   0.0   0.0
 -3.17   0.0   0.0   0.0   0.0
 15.00   0.0   0.0   0.0   0.0
 -1.00

TABLE#= 9602
TYPE= -10
'( 13A6)' '(1X, 13A6)' '(F6.0)' '( 13F6.0)' '(1X,F6.1, 12F6.2)'
HDATUM=  673.500
LABEL=Low Level Event Start: 1949  6 12  0.00000
Fstage-9999.  0.00  0.10 50.00
-99.00   0.0   0.0   0.0   0.0
  0.00   0.0   0.0   0.0   0.0
  1.83   0.0   0.0   0.0   0.0
  2.83   0.0   0.0   1.0   1.0
 15.00   0.0   0.0   1.0   1.0
 -1.00

Version 9.19 11 August 1998

--Corrected an error in a GENSCN-related routine that placed the incorrect station and invert elevation for exterior nodes on branches into the FEO file.

Version 9.20 14 November 1998

--Found that the negative constant-flow boundary condition was incorrectly applied as zero flow in versions released after October 1, 1997. This condition is sometimes used to remove a constant positive base flow elsewhere in the system. The necessary code changes in INPUTUF.FOR have been made. Withdrawing flow by a positive constant flow out of a dummy branch works in either case.

Version 9.21 14 November 1998

--Found an undefined variable in TDLK15. The undefined variable was used in the interpolation between tables of type 13 that define the flow for various gate openings in the flow regime between weir and orifice flow. This slightly affects models using underflow gates.

Version 9.22 30 November 1998

--Changed the work space for processing the Network Matrix information so that it is of the proper type. This was needed to run the chk mode of the latest Lahey compiler. Added an equivalence overlay to make this possible.

--Found that SYSGN was undefined in TDTCHK for side-weir tables. This only affected checking for overflow on a 2-D table used in Code 14 at the end of the run. Added the proper sign for this case.

--Argument TIME to SET_INITIAL_OPER_BLK, when called in FEQ, was of the wrong precision. Added SNGL to the TIME argument at this call.

--Found that HECDMY.FOR needed changes to permit null calls. Some program units are called even if the HECDSS is not being used.
Added FULL = 0 to UPDATE_DSSOUT_JTIME in HECDMY.FOR. FULL was undefined if HECDMY.FOR was linked into the files.

Version 9.23 9 December 1998

--Changed the balance computation to provide more detail and to output a balance summary at the end of each event when running with DIFFUS=YES. Two balances are reported: one for the branches and level-pool reservoirs, and the other for the tributary area, which may include detention and delay reservoirs.

--Added output of units to the detention-basin summary output.

--Changed some still-existing carriage-control characters in the output of the DTSF description. Also changed upper-case text to mixed case.

--Placed the cross-section function-table lookup in line in the loop in SETINX. This reduced run times by 2-4 percent in two test cases.

--Changed some carriage-control characters in error and warning statements.

Version 9.24 28 January 1999

--Added code to check the state of 2-D tables used in Code 5 Type 6 instructions: 2-Node Control Structures. The state of the tables refers to the nature of the flow in the table when the upstream or downstream node is at its maximum elevation. The tables of concern are the type 14 tables whose contents are based on WSPRO bridge-analysis results. As outlined in the documentation, such tables will probably not have valid free-flow conditions in them. WSPRO does not compute critical flow through the bridge opening, and it often proves difficult to get the computations in WSPRO to succeed when the Froude number is close to 1.0. As a consequence, type 14 tables with contents defined by WSPRO analyses should never have a lookup in the free-flow state. Such lookup results will be incorrect. FEQUTL has been changed to output a source string following the HDATUM item in all 2-D tables of any type. This string is currently only used by the WSPRO checking code. Thus, only tables of type 14 with contents defined by WSPRO runs need to be changed.
The source string for these tables is WSPRO, and it should appear after the HDATUM entry. An example is:

TYPE=  -14  HDATUM=    685.120 WSPRO

The spaces between the numeric response to HDATUM and the source string must be present. One space is sufficient if the numeric response is at the far right of its set of columns. This will be the case if the table has been created by FEQUTL. These changes can be made manually in order to avoid rerunning WSPROT14 in FEQUTL.

The results of the table-state analysis appear after each event in the FFFILE if diffuse inflow from a DTSF is used. Otherwise they appear in the standard user output after the extreme values are given. The analysis results look like the following:

     Two-D table states for Code 5 Type 6:

 Head-  Tail-
 water  water  Table  Table    Flow    Flow
 node   node     id    Type   state   Ratio
 -----  -----  -----  -----  ------  ------
 D142   U141    1543     14    Sub.
 D32    U31     1704     13    Free
 D54    U53     2503     14    Sub.   0.22+
 D155   U154    2504     14    Free   1.07-
 *ERR:XXX* Table state in preceding line is invalid. Flow exceeds max
           flow in WSPRO computations by more than 5 percent.
 D56    U55     2505     14    Sub.   0.76+
 D59    U58     2506     14    Sub.   0.99+
 D60    U59     2507     14    Sub.   0.74+
 D63    U62     2508     14    Sub.   0.32+
 D67    U66     2511     14    Free   1.04*
 D68    U67     2512     14    Sub.   0.31+

The headwater node is the node from which water flows, and the tailwater node is the node to which water flows. If the flow reverses at the structure, two entries will appear with the headwater and tailwater nodes reversed and perhaps with different table numbers (if different tables were supplied in the instruction). The entries are given in ascending order of table number. If the table type is 13, this information is reported only for user interest. The flow ratio will be blank for all type 6 or 13 tables. If the table is of type 14 and the flow ratio is blank, then the table is treated as NOT having its contents defined by WSPRO. This depends on all the tables with contents defined by WSPRO being marked as outlined above.
Tables of type 14 with information in the flow-ratio column are those tables that were marked as having contents defined by WSPRO. If the ratio is followed by the plus (+) sign, then the table was in the submerged flow state. The flow used in the flow ratio is the flow through the structure when the maximum elevation occurred at the headwater node. This may differ from the maximum flow at the flow node for the structure. The denominator of this ratio is the flow in the table treated as the critical flow for the head at the tailwater node. However, if the table contents are defined by WSPRO, this will NOT be the critical flow, but only the maximum flow computed for the given fixed tailwater head. For example, at headwater node D54, the flow ratio was 0.22, well within the submerged state. If the flow ratio exceeds 1.05, the ratio is marked with a minus (-) and an error message is issued. If the flow ratio is between 1.0 and 1.05, no message is issued but the flow ratio is marked with an asterisk (*).

Version 9.25 February 4, 1999

--Added an internal flag on cross-section function tables to indicate the source of the table contents: read from input or interpolated within FEQ. Needed to support more options for GENSCN output. XTIOFF in arsize.prm increased from 20 to 21.

--GENSCN output options have been extended from the limited options available in past versions. The past version option is retained without change. The new option is selected with a heading of NEW GENSCN OUTPUT.
An example of the new options is:

 1 FREE NODE TABLE
 2 NODE NODEID   DEPTH DISCHARGE ELEVATION SIGN
 3 F1   RESERV    95.4      0.00       0.0   -1
 4 F2   RESERV    95.4      0.00       0.0   +1
 5 F3   OVERFLOW   2.0      0.00      95.0   -1
 6 F4   OVERFLOW   2.0      0.00      95.0   +1
 7 NEW GENSCN OUTPUT
 8 FILE=NEWTEST
 9 ADD=ALL_NODES
10 SUB=LPR_NODES
11 SUB=INTERPOLATED
12 SUB=FREE_NODES
13 SUB=BRANCH_EXN
14 ADD=NONE
15 OPT BRA NODE
16 ADD   0   F1
17 END
18
19 BACKWATER ANALYSIS
20 BRANCH NUMBER= -1
21 DISCHARGE= 54.0

Lines 7 through 17 are the new GENSCN output specification. This block appears after the Free-Node Initial Conditions table and before the specification of BACKWATER ANALYSIS. Lines 8 through 14 each have a keyword followed by an equal sign and a response. The FILE input is required and gives the base filename for the three files created for GENSCN. In this case, the three files will be: NEWTEST.FEO, NEWTEST.FTF, and NEWTEST.TSD. The *.FEO file contains a description of the output so that GENSCN can find the information in the other two files. The *.FTF file contains the function-table storage system from FEQ so that values from the cross section at a node can be plotted. The final file, *.TSD, contains the time-series data, flow, and elevation for each node defined in the input.

Lines 9 through 14 specify the nodes by referring to various groups (classes) of nodes. The keyword ADD means to add the nodes in the class to the list that will be output to GENSCN. The keyword SUB means to subtract the nodes in the class from the list. Requests to subtract nodes not in the list are ignored. Adding the same nodes twice will not create duplicates; only the last request is retained. In the above example, line 9 adds all nodes in the model. Then line 10 subtracts all the level-pool reservoir nodes. Line 11 removes all nodes on branches that have a cross section interpolated within FEQ. Line 12 removes all free nodes, and finally line 13 removes all exterior nodes on branches. The current classes are:

1. ALL_NODES -- all the nodes in the model
2. ALL_BRANCHES -- all the nodes on branches in the model
3. ALL_EXN -- all the exterior nodes (those on branches AND free nodes)
4. INTERPOLATED -- all nodes on branches that have a cross section interpolated by FEQ
5. INPUT_XSEC -- all nodes on branches that have a cross section input to FEQ
6. FREE_NODES -- all free nodes, that is, exterior nodes not on a branch
7. LPR_NODES -- level-pool reservoir nodes, that is, the downstream free node of the two nodes used to represent a level-pool reservoir
8. BRANCH_EXN -- all exterior nodes on branches
9. NONE -- used to terminate specification of the output by class name and revert to detailed specification

These class names can be used with ADD and SUB to specify a wide variety of nodes. However, lines 15-17 show an example of adding an individual node. One can also add an individual branch by specifying the branch number under the BRA column and leaving the NODE column blank.

This new version of the GENSCN output places the nodes in a particular order that is not under the control of the user. This is distinct from the old command. In the old command no class names existed, and one added individual nodes or branches by specifying each one. The order of the information in the time-series file was then the same as the order of specification. This order can no longer be maintained because, with class names, no order of specification exists for the individual nodes. Therefore, with the new command option, the order is exterior nodes first, followed by branch nodes. The exterior nodes are given with the nodes on branches first, followed by the free nodes in ascending order based on their number. The order for the exterior nodes on branches is given by the order of input of the branches. If branch 1 is given before branch 2, then branch 1 will be given first. However, if branch 2 is given first, then branch 2 will be before branch 1 in the output order to GENSCN.
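The ADD/SUB class mechanics amount to set operations on the selected-node list. A minimal sketch, assuming a made-up model with three classes (Python for illustration only; FEQ's internal data structures differ):

```python
# Hypothetical node classes for a tiny model; a real model builds
# these sets from its branch and free-node input.
CLASSES = {
    "ALL_NODES":  {"101", "102", "F1", "F2", "F5000"},
    "LPR_NODES":  {"F5000"},
    "FREE_NODES": {"F1", "F2", "F5000"},
}

def apply_requests(requests):
    """Apply (op, class_name) pairs in order.

    ADD unions the class into the selection; SUB removes it.
    Subtracting nodes that are not selected is silently ignored,
    and adding a node twice cannot create a duplicate, matching
    the behavior described above."""
    selected = set()
    for op, name in requests:
        if op == "ADD":
            selected |= CLASSES[name]
        else:
            selected -= CLASSES[name]
    return selected

# In the spirit of lines 9, 10, and 12 of the example:
kept = apply_requests([("ADD", "ALL_NODES"),
                       ("SUB", "LPR_NODES"),
                       ("SUB", "FREE_NODES")])
```

Because FEQ fixes the output order itself (exterior nodes first, then branch nodes), an unordered set is enough to track membership in this sketch.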
Looking at the *.FEO file will give the order that was used. The branch nodes follow the exterior nodes. Here, the order is also dictated by the order of input of the branches to FEQ. This could become important in GENSCN when plotting a water-surface profile, because profile plots crossing branch boundaries require that the branches be continuous and in the proper order, upstream to downstream, in the GENSCN output.

Version 9.26 April 8, 1999

--Changed FAC to EPSFAC in subroutine INFO. Done to avoid conflict in later extensions.

--Forced OUTPUT to be 0 no matter what the user supplied. Also deleted the input of the optional values STPRNT and EDPRNT that depended on OUTPUT being non-zero.

--Added option to set QCHOP to its default value when its input value is less than zero. Done to prepare for future additions.

Version 9.50 August 6, 1999

--Changed processing of the Run Control Block (RCB) to be namelist-like input. The new form requires that an explicit heading be given, RUN CONTROL BLOCK, starting in column 1 as do the other block headings. The contents of the block then consist of variable names, each followed by an equal sign and the value to be used for the variable. A minimal Run Control Block would be:

RUN CONTROL BLOCK
NBRA= 5
NEX= 10
STIME= 1990/10/20:0.D0
ETIME= 1995/12/02:24.D0

All other variables given in previous versions still exist. However, they all now have default values that are used if they are not found in the input. The default values are given in the table below. The reference number gives the LINE number in the revised input description for the Run Control Block. The rules for giving the variables are as follows:

1. A variable and its response must all appear on one line of input. A line of input can contain up to 120 characters in the Run Control Block.

2. The equal sign must appear after the variable name.

3. The variable name must be given as in the table. In the future, alternatives for names will be given.

4. Some variables have more than one value in a response. These variables include standard date-time sequences and the specification of IFRZ. The date-time sequences need all four values to be present; none can be omitted. IFRZ requires that the values for the number of time steps be present. However, more than the required number of time steps is accepted, and the excess information is ignored. For IFRZ, the number of steps AND the step values must all appear on one line.

5. The spacing between a name and its response is not restricted. However, adjacent name-responses must be separated by one or more spaces. For example, NBRA=5 and NBRA = 5 will both give the same result. However, NBRA=5NEX=10 is invalid.

Table of Variable Names, Default Values, and Description

Reference  Variable      Default
 Number    Name          Value           Brief Description
---------  ------------  --------------  ----------------------------------------------------------
    1      NBRA          1               Number of branches in the model
    2      NEX           2               Number of exterior nodes in the model
    3      SOPER         NO              Indicates presence of Operation of Control Structures Block
    4      POINT         NO              Indicates presence of Point Flows Block
    6      WIND          NO              Indicates presence of Wind Information Block
    5      DIFFUS        NO              Indicates presence of tributary area and diffuse inflows
    5      MINPRT        0               Flag to minimize size of the user output file
    5      LAGTSF        0               Flag to signal lagging of diffuse inflows
    5      DMYEAR        1925            Year of dummy event for diffuse flows
    5      DMMN          1               Month number of dummy event for diffuse flows
    7      UNDERF        NO              Flag for treatment of numerical underflows
    8      ZL            0.0             Depth below which zero-inertia approximation is used
    9      STIME         1901/01/01:0.0  Starting time: year/month/day:hour
   10      ETIME         1900/01/01:0.0  Ending time: year/month/day:hour
   11      GRAV          32.174          Gravitational acceleration
   12      NODEID        YES             Node ids MUST be used for version 9.30 and later
   13      SSEPS         0.1             Convergence tolerance for water ponded above sewers
   14      PAGE          24              Page size for file containing special output
   15      EPSSYS        0.05            Global relative primary convergence tolerance
   15      ABSTOL        0.000005        Global absolute convergence tolerance
   15      EPSFAC        2.0             Factor defining secondary relative convergence tolerance
   16      MKNT          5               Maximum number of iterations per time step
   16      NUMGT         0               Count limit for secondary tolerance
   17      OUTPUT        0               Output level; must be left at 0
   17      PROUT         0               Output level; must be left at 0
   18      PRTINT        1               Printing interval for detailed time-step output
   18      DPTIME        9999/12/31:24.  Start time for debugging print output
   21      GEQOPT        STDX            Global governing-equation option
   22      EPSB          0.0005          Relative convergence tolerance for steady-flow computations
   23      MAXIT         30              Maximum number of iterations per node in steady-flow computations
   24      SFAC          5280.           Stationing factor for unit conversion
   24a     TAUFAC        1.0             Tributary-area factor for unit conversion
   25      QSMALL        1.0             Small flow used in computing relative change in flow
   26      QCHOP         0.001           All flows < |QCHOP| are printed as zero
   27      IFRZ          1 300.          Number of frozen time steps AND the time-step length(s)
   28      MAXDT         1800.           Maximum time step in seconds
   28      MINDT         1.0             Minimum time step in seconds
   28      AUTO          0.7             Weighting factor for computing running sum of iteration count
   28      SITER         2.8             Initial value of running sum of iteration count
   28      HIGH          3.2             Upper value for running sum of iteration count
   28      LOW           2.4             Lower value for running sum of iteration count
   28      HFAC          2.0             Factor for increasing time step
   28      LFAC          0.5             Factor for decreasing time step
   29      MRE           0.20            Maximum relative change in a variable during extrapolation
   29      FAC           0.0             Extrapolation factor
   30      DWT           0.1             Increment to BWT when max. iteration count is exceeded
   31      BWT           0.55            Base value for time-integration weighting factor
   32      BWFDSN                        File name for the initial conditions when DIFFUS=YES
   33      CHKGEO        NO              Indicates checking of hydraulic geometry
   34      ISTYLE        NEW             Style of exterior nodes MUST be NEW in version 9.30 and later
   35      EXTTOL        0.0             Limit for reporting cross-section table overtopping
   36      SQREPS        1E30            Action level for sum of squares of equation residuals
   37      GETIC                         File name for initial conditions
   38      PUTIC                         File name for saving state of the model
           OLD_SUMMARY   YES             Indicates old form of summary output
-------------------------------------------------------------------------------------------------------------

--Changed the descriptive lines at the head of the input file. As many as 50 lines of 120 characters each can be given to describe the run. These lines are given as part of the description of output. An additional line giving the version number, version date, and the date and time of the run is included in the description. The occurrence of the heading for the Run-Control Block terminates the descriptive information. Thus, it is possible to have blank lines in this description. Every line above the line containing "RUN CONTROL BLOCK" is considered part of the descriptive input. The added line giving the version, the date of the version, and the date/time of the run has the following format:

Version: 9.50  Version date: 3 June 1999  Date/time of run: 1999/07/21: 11.14.12.234

The time is given in the sequence of hour, minute, second, and milliseconds. Thus, the time in the example line is 11:14 and 12.234 seconds.

--Function tables are now referenced with a table id using a maximum length of 16 characters. The old table numbers will still work, but they are treated as strings of characters and NOT as numbers. This means that table numbers of 0010 and 10 are now different when they were the same in earlier versions.
The table id can be composed of the digits 0-9, the letters A-Z and a-z, the underbar character (_), and the period. Spaces and other characters will cause problems. Do not make a table id look like a floating-point number. For example, 11D3 will be treated as a double-precision number, 11,000, and not a table id. Table ids such as 1000d, 10D, 234E, or 376e will also cause problems because they look like incomplete specifications of floating-point numbers. In the same way, a table id that looks like a number with a decimal point will not be detected as an id but as a number. To allow these table ids would require reducing the level of error checking on numeric input. The best way to avoid problems like this is to switch to using table ids that start with an alphabetic character or the underbar character. These are clearly not numbers, so no confusion will result.

In addition, the maximum branch number was increased from 999 to 9999. At the same time, support for the old style of exterior-node designation, that is, by numbers, has been deleted. The new style for exterior nodes MUST now be used. Node ids must also be used, and the maximum length of a node id has been increased to 16 characters.

These changes in the size of various input items required changes to the input to FEQ. The Branch-Description Tables Block, the Output-Files Block, the Free-Node Initial Conditions Block, and the Pump option in the Operation of Control Structures Block have been changed. The Special-Output Locations Block has one rarely used option at the end of an input line that was extended in field width. The Network-Matrix Control Input Block (NWMCI) must be invoked with the NEW option so that the input is column independent. This then allows for the extra space required by the longer table ids and the longer exterior-node names. The new format for the NWMCI has been an option for years; therefore, the format is documented and tested.
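The number-versus-id ambiguity can be tested mechanically. The sketch below (Python; the regular expressions are my paraphrase of the rules above, not FEQ's actual scanner) flags ids that would be read as complete or incomplete floating-point numbers:

```python
import re

VALID_CHARS = re.compile(r'^[0-9A-Za-z_.]{1,16}$')   # allowed id characters
INTEGER     = re.compile(r'^\d+$')                   # old-style table numbers
# Complete or incomplete floating-point forms, including the
# Fortran-style exponent tails that cause trouble: 11D3, 1000d, 234E
FLOAT_LIKE  = re.compile(r'^(\d+\.?\d*|\.\d+)([DdEe][+-]?\d*)?$')

def is_safe_tabid(tabid):
    """True if tabid is valid and cannot be mistaken for a float."""
    if not VALID_CHARS.match(tabid):
        return False            # bad character or longer than 16
    if INTEGER.match(tabid):
        return True             # plain integers still work as ids
    return not FLOAT_LIKE.match(tabid)
```

Ids that begin with a letter or underbar, such as WEIR_10, always pass; 11D3, 10D, and 35.5 are rejected, matching the problem cases listed above.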
The changes made to the Branch-Description Tables Block are:

1. The header for each block has been changed to namelist-like input. The header is the line that starts with BNUM and is followed by the branch number and a series of optional input values. All these values must still appear on a single input line, but they are now of the form name=value, just as for the Run-Control Block as outlined above. The variable names are the same as those used in the old form of input, with the addition of BRANCH as an alias for BNUM and ADDNODE as an alias for ADDNOD. The following are valid example inputs for a branch header:

BRANCH= 20 INERTIA=0.9
BNUM= 9000 ADDNODE=-1

2. The body of the branch table has been changed to use what we will call a heading-dependent format. The namelist form of input is order and column independent. Column-independent input is order dependent but column independent (within the line length). Using column-independent input for the body of a branch-description table is difficult because the pattern and number of non-blank items in the input is quite varied. Column-independent input works best if there are only a few variations in the number and meaning of the items of input. A heading-dependent format uses the headings to define the column range for each item in the branch-description table. For example,

BRANCH= 4000
NODE ----------NODEID ------------XSID STATION ELEVATION  KA  KD HTAB AZM
400001 Deming USGS Gage R4_35.9978.93      TAB       TAB
                        R4_35.8081.93      TAB       TAB 0.1 0.3
                        R4_35.6387.93      TAB       TAB 0.1 0.3
    -1

The heading line is the line immediately following the line BRANCH= 4000. The valid range of columns for each item of input starts at the first column after the preceding heading, or at column 1 if there is no preceding heading. For example, the valid range of columns for the node number is from column 1 through the column under E in NODE. The valid range of columns for the NODEID is from the column following the E in NODE to the column below the final D in NODEID.
As in the Run-Control Block, 120 columns are available for each line of input. The dashes prefixing NODEID and XSID are used to show the maximum length of the item and do not delimit the valid column range for those items. The input item may appear anywhere in its valid range of columns. It is not necessary that the number of columns allocated to an item be the full length of a valid item. Thus, if the node ids are all 8 characters or less in length, the valid column range need only be that wide. Old inputs that have used headings in the proper manner for the Branch-Description tables will be processed without change.

The Output Files Block also has been changed to a heading-dependent format. This means that all items must fall below the heading for the column, and the headings should be right justified. In most past inputs, the file-name heading has been left justified. The heading must be right justified, or else extended with other non-blank characters, to be at or to the right of the right-most character in any file name specified in the column.

The Free-Node Initial Conditions Block also has been changed to a heading-dependent format. All items must fall below the heading for the column, and the heading is treated as being right justified. Information extending beyond and to the right of a heading will be considered part of the next column of data.

The Pump option and Gate-table option in the Operation of Control Structures Block have been changed to a heading-dependent format. The same rules apply as in the other blocks changed to allow for the increased lengths of table ids.

The Special-Output Locations Block has been changed so that the input of the output location is heading-dependent. This change was done to permit names longer than four characters for pumps and gates. The names for gates and pumps can now be up to 16 characters long.
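The heading-dependent rule (each item is valid from the column after the preceding heading through the last column of its own heading) can be sketched as follows. This is an illustration only; the helper and the underscore-joined node id are made up, and FEQ's own reader differs in detail:

```python
def column_ranges(heading_line):
    """Derive 0-based (start, end) column ranges from a heading line.

    Each field runs from the column after the preceding heading
    (or column 1, i.e., index 0) through the last column of its
    own heading, per the rule described above.
    """
    ranges = []
    start = pos = 0
    for word in heading_line.split():
        end = heading_line.index(word, pos) + len(word)
        ranges.append((start, end))
        start = pos = end
    return ranges

heading = "  NODE ----------NODEID ------------XSID STATION"
fields = column_ranges(heading)
row = "400001 Deming_Gage      R4_35.9978.93    TAB"
items = [row[s:e].strip() for s, e in fields]
```

Slicing the data row with the derived ranges recovers the node number, node id, cross-section id, and station entry regardless of where each sits within its range.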
The extra width of exterior-node names when the larger branch range is used will fill some of the input formats to their maximum extent. This is true, for example, in the Backwater-Analysis Block. The width assigned to the CODE and EXN# columns is five. However, the input for the preceding column is such that a blank space can always be provided to visually separate the items. They need not be separated because of software requirements. However, the input is clearer if there is an intervening space or two between items on the same line of input.

--Error messages. In processing table ids, FEQ assigns an internal table number to each table. This internal number is used everywhere in place of the id for the table. When an error message is given that contains a table number, FEQ is supposed to convert the internal table number to the external table id in the message. It is probable that some error messages have not been corrected and give the internal table number and not the table id. FEQ reports the internal table number assigned to each table that is read from an input file. If an error message appears with a table number that does not exist or does not make sense, then scan the output file for the section where the function tables are input. The reported number may be an internal number that was not converted. Also, make a note of the error-message number and report the problem so that the message can be corrected.

--Increased space for the node id in the summary output may confuse programs that read this output; therefore, a variable has been added to the Run-Control Block to request that the old form of summary output be used. If any node ids are longer than 8 characters, they will be truncated on the right to 8 characters.

--Signs for free nodes may be omitted. Previously, if the sign was omitted, a warning message would be generated and the correct internal sign would be used.
This was changed such that no warning message is given, and the sign of free nodes is defined by the order of their appearance in the dummy-branch instruction in the Network-Matrix Control Input.

--A utility program, CONVERTFEQ, will convert input to FEQ to the new format. This program assumes that the input is post 7.0 and that headings have been used properly. CONVERTFEQ assumes that all input is in its proper heading range. In testing, one case was found that will require manual correction. The node-id field is preceded and followed by a single column that is not read by FEQ. If a character of the node id falls into one of these columns, FEQ versions before 9.50 will process the input without reporting any errors. However, some node ids will be missing a character. This will probably go unnoticed unless the output is scrutinized with great care. However, when CONVERTFEQ processes such an input, the characters extending beyond the valid node-id columns will appear in the adjacent columns and WILL be treated as part of the table id. The table for this id will not be found, and the run will fail. The input then must be manually scanned for node ids that fall outside the proper column range.

--FEQ can now compute rainfall and evaporation on water surfaces. You must supply a time-series table or a time-series file that contains the rainfall or evaporation intensity. If the time series are given in function tables, these tables appear in the Function Tables Block as do all other function tables. If files are used, they are given in the Input Files Block. Note that if HECDSS is used, the values for both rainfall and evaporation MUST be INST-VAL. This is not the usual data-recording method for these data series. However, the initial support for evaporation and rainfall assumes that the time series are point valued. Currently, the only non-point-valued time series supported by FEQ are those used in computing the diffuse inflows.
FEQ point time-series files are by definition point valued. Thus, FEQ point time-series files can be created from a WDM using WDMUTL, which has been updated to create additional approximations from a mean-valued time-series dataset in the WDM.

The time series are associated with either a branch or a level-pool reservoir by giving the table id or the unit name for the input file after the other integer items for the Branch instruction code, CODE = 1, or the level-pool reservoir instruction code, CODE = 7. Because this version ONLY supports the NEW form of the Network-Matrix Control Input, the values in an instruction need only be separated by one or more blanks. If the time series are in time-series function tables (TYPE=7), we would have, for example,

1 25 RF_Fort_Pierce EV_Fort_Pierce

if we wanted the effect of rainfall and evaporation on branch 25. Notice that the rainfall time series is given first and the evaporation series is given second. If only evaporation is to be computed, give a 0 (zero) as the value for the rainfall time-series designation. A 0 will be taken as indicating that no rainfall time series is given. These fluxes on the surface of a level-pool reservoir are requested by:

7 F5100 Capacity_Table 1 F5000 RF_Fort_Pierce EV_Fort_Pierce

If the FEQ point time-series files are used, the currently recommended usage, then we give the file names in the Input Files Block, each with a unique unit name of 16 characters or less. Then, we give this unit name, prefixed by a minus sign with no intervening spaces, in the same position as the table ids.
Using time-series files, the above examples become:

1 25 -RF_Fort_Pierce -EV_Fort_Pierce

7 F5100 Capacity_Table 1 F5000 -RF_Fort_Pierce -EV_Fort_Pierce

The Input Files Block would then look like

INPUT FILES BLOCK
-------UNIT_NAME FILE_NAME--------------------------------------- FACTOR-----
  RF_Fort_Pierce D:\projects\florida\C24\rainfall\fort_pierce     2.31481E-5
  EV_Fort_Pierce D:\projects\florida\C24\evaporation\fort_pierce  9.64506E-7
-1

The same time series can be reused as many times as needed for different branches or level-pool reservoirs. Each branch or LPR can also have a unique series associated with it.

FEQ computes the flow at the water surface as the product of the rainfall or evaporation intensity (length unit/time unit) and the surface area of the branch or LPR at the start of the time step. FEQ does not attempt to include the effect of the change in surface area on these fluxes. The effect of rainfall and evaporation, in the short term, is small, especially during flood flows. During low flows, when the effect of rainfall and evaporation is larger, at least in a relative sense, the changes in surface area are small. Therefore, the error introduced by using the area at the start of the time step is also small.

The units expected for the intensity are feet/second (meters/second). If the values in the function table are not in this unit, and normally they would not be, the FAC option in the function-table header should be used to convert the values to the desired intensity. For example, if the rainfall is in units of inches per hour, a typical unit, then the value of FAC should be

inch    1 hour      1 foot        1   feet
---- x --------- x --------- = ------ ---- = (approx) 2.31481 x 10^-5
hour   3,600 sec   12 inches   43,200  sec

Evaporation data are sometimes available on a daily basis, in inches per day.
The factor to use in this case is

inch     1 day       1 foot         1
---- x ---------- x --------- = --------- = (approx) 9.64506 x 10^-7
day    86,400 sec   12 inches   1,036,800

For a file, you can add an optional factor following the file name, as shown in the example above. The goal is that after the application of the factors, the intensity is in feet/second (meters/second). However, the values stored in the table or in the file can be in any other convenient intensity units.

Two additional items have been added to the water-balance computations: WSI and WSQ. WSI is the cumulative total volume of inflow to the water surface from the rainfall source, and WSQ is the same for evaporation. These values are printed at the end of each detailed output in the main output file for FEQ. The units are cubic feet (cubic meters).

Version 9.51 August 20, 1999

--Updated various error messages. Converted additional messages to mixed upper and lower case.

--Discovered bug in computation of detention parameters. Apparently, this bug did not reveal itself with the Lahey 4.5 compiler. However, the Lahey/Fujitsu compiler produced code that gave incorrect answers. The program would not properly compute the split of tributary area between non-detention and detention areas. The impervious fraction was always shown as 1.0 when it should have been less than 1.0.

Version 9.52 November 9, 1999

--The factor on values in HECDSS files in the Input Files Block had a default of zero when it should have had a default of 1.0.

Version 9.55 December 14, 1999

--Two new items have been added to the Run Control Block: GISID_TO_NODEID and TABID_TO_NODEID. If GISID_TO_NODEID=YES, then FEQ will place the GISID string from a cross-section function table into the NODEID for any branch-node node id that is blank. If TABID_TO_NODEID=YES, then FEQ will place the TABID string from a cross-section function table into the NODEID for any branch-node node id that is blank. If both items are YES, then GISID_TO_NODEID dominates.
In order for the GISID to be available, the Function Tables Block must be given before the Branch Tables in the input.

--Added bottom-slot depth to cross-section function tables to remember the original invert elevation. The slot depth is used to convert the depth value in the printed results so that the depth will be computed relative to the original invert. This yields some negative depths in the results. A negative value indicates that the water surface is below the original invert level.

--Interpolated sections now include an interpolated value of bottom-slot depth, Easting, and Northing.

--Added additional error-detection code for the beginning node for the formation of the Network Matrix. Invalid beginning nodes were not detected and caused enigmatic errors later in the processing.

Version 9.56 February 1, 2000

--Fixed bug in output of free nodes in the summary of extremes at the end of the run. The last digit of a free-node variable name was truncated if OLD_SUMMARY=YES was used in the Run-Control Block.

Version 9.58 February 16, 2000

--The GATETABL control option in the Operation-Control Block has been extended to permit two tables per control point instead of one. The first table (in left-to-right sequence) is for rising elevation at the control point, and the second is for falling elevation at the control point. An elevation increment is also provided to prevent switching between the two tables too rapidly. If one control point in a control block has two tables, then all control points for that block must have two tables. If only one table is desired, give the same table in both locations with the same selection rules. An example of the new input follows. This is adapted from above, where the original GATETABL option was introduced.

01 BLK=00002
02 BLKTYPE=GATETABL
03 MINDT=3600.
04 PINIT=0.0
05 CTLN UPSN DNSN TABR DOM AUX DIREPS TABF DOM AUX
06 U20  U20  F29  5300 MAX MAX   0.03 5310 MAX MAX
07 -1

The input in lines 5 and 6 is heading dependent.
That is, each heading defines the columns available for the items below it. The headings are treated as being right justified. If only one control table is given, be sure that no extra headings are given. The number of headings found by FEQ determines the input expected.

Version 9.60 February 18, 2000

--Changes were made to the interpolation routines to correct a bug in FEQUTL. FEQ and FEQUTL use common code for interpolation of cross-section function tables. In order to correct the bug in FEQUTL, I had to generate internal table ids for each request for an interpolated table that did not contain a table id from the user. The internal tabids have been generated in a form that is invalid as an external table id. Thus, they will never conflict with a user table id. The internal table ids can appear in error messages about an interpolated table. The internal table ids contain a sequence number that is unique for each id. The interpolation routine adds the starting and ending internal tabid numbers to its output. Thus, it is possible to locate which table is the source of the error even though the table was generated from a request for interpolation with the minus sign in the tabid field.

Version 9.65 March 7, 2000

--Fixed significant errors found in the STDCX governing-equation option. There was an error in the computation of some of the derivatives of the residual function, as well as an error in one part of one function-table lookup routine. The governing-equation options other than STDX have had little use, and, therefore, some errors have been present for some time. Currently, the STDCX option appears to be working consistently with STDX.

Version 9.66 April 14, 2000

--Increased the length of a fully qualified file name in the Function-Tables Block to 128 characters. The previous limit of 64 was too short for large projects with many levels in the directory tree.
--The standard input and output file names given on the command line are now written to the standard output file. This will help in keeping track of the files used to create the standard output file.

Version 9.67 April 20, 2000

--Added the option of another column of input values in a branch. These values follow the standard flood-elevation field and give an adjustment factor on conveyance. For example, a value of 1.1 will multiply the conveyance by 1.1. This implies a REDUCTION in Manning's n of close to 10 percent across the section. That is, all values of Manning's n are reduced by this amount; the Manning's n values are in effect divided by the factor. If Manning's n within a subarea (subsection) of the cross section varies with depth, then the adjustment becomes more complex. Be careful with repeating table ids when using this adjustment factor. FEQ remembers which tables have been adjusted and will adjust each table once only. FEQ outputs a record of each adjustment made and of adjustments requested and not made. If you are adjusting roughness this way, be sure that all tables being adjusted are unique.

Version 9.68 April 26, 2000

--Found a case where only a single node was given on a branch, and FEQ did not issue any warnings; the model appeared to compute without error. Added detection for this case to prevent it from occurring in the future.

Version 9.69 May 18, 2000

--Changed the handling of error reporting for negative depths in the Free-Node Initial Conditions Block. Now a warning is given if the depth is negative and the depth datum is zero. This was done to permit giving an initial elevation for a level-pool reservoir that is negative and still have the depth datum as zero. Having a depth datum of zero for both nodes of a level-pool reservoir has been the signal that the depths are really elevations. However, a negative depth is invalid. At the point of reading the inflow node for an LPR, we do not know that it is an inflow node to an LPR.
Thus, we issue a warning that a negative depth is invalid if the node is not an inflow node to an LPR. --Added two new options to Code 14, Side-Weir Flow, in the Network-Matrix Control Input Block. Two optional time-series table ids can be given following the table ids for the flow tables. The first of these time-series tables gives an adjustment factor on the flow taken from the flow tables as a function of time. This permits adjusting the computed flow, either as a calibration tool or to simulate structures with changes taking place during a flow event. If the table is omitted, the multiplying factor is taken as 1.0. The contents of the time-series table are not checked, so be sure that the multiplying factor is in a reasonable range. Errors in the factor in the table can cause computational failure. The second of these optional time-series table ids references a crest-adjustment factor as a function of time. When this table is present, the third floating-point value should also be given. This value gives the elevation of the toe of a levee that is subject to failure. FEQ computes the current levee-crest elevation using:

  HC = HCREST + P*(HCREST - HTOE)

where

  HCREST = the original elevation of the crest that defines the reference point for heads used in the flow tables. This would be the minimum elevation for the length of levee crest used in defining the flow tables using EMBANKQ or, in some cases, CHANRAT.
  HTOE   = the elevation of the toe of the levee on the protected side of the levee.
  P      = the factor stored in the time-series function table.

The meaning of P is:

  P = 0 -> the levee crest is unchanged.
  P < 0 -> the levee crest is shifted downward by the given decimal fraction of the levee height, where the levee height is defined as HCREST - HTOE. For example, if HCREST = 14.0 and HTOE = 10.0, the levee height is 4.0. If at a given time point P = -0.25, then the new levee crest is 13.0. P = -1.0 indicates that the levee is completely eroded to the toe elevation.
  P > 0 -> the levee crest is shifted upward in the same manner as P < 0 shifts it downward. P > 0 can be used to represent the raising of a levee crest by sandbagging or other means during a flood fight.

Please note that this means of simulating changes to a levee during a flow event is simple and limited in scope. The shift in levee height occurs uniformly over the complete length of levee represented by the flow table. In typical applications the flow table could represent several hundred feet of levee. If only a 50-foot section is to fail, then three courses of action are available. 1. Use a flow table for only a 50-foot length of levee. This leads to many short branches. 2. Modify the longer levee crest for the flow table so that only a 50-foot portion is available for overflow. That is, we assume that the remaining portion of the levee represented by the flow table is not subject to overflow under any circumstances. 3. Use a time-dependent multiplying factor, from a time-series table, to reduce the flow to approximate the 50-foot failure length. Version 9.7 August 2, 2000 --Added an option to create an IntelliCAD script file to draw a geographic schematic. A geographic schematic mimics the true length and location of branches. It is distinguished from a topological schematic, which only shows the connectivity among branches and nodes without regard to their true length or location. In order for this schematic to be drawn, several additional items of information must exist in the input to FEQ: 1. Every cross section must have an EASTING and NORTHING value given that agrees with the actual location of the cross-section invert. These values are used to draw the branch. 2. Level-pool reservoirs must have their location given in the Free-Node Initial Conditions Block. This block has been modified to accept this information. 3.
Dummy branches that connect between junctions that each have at least one branch-exterior node in them do not need to have their (x,y) location given. FEQ uses the branch-exterior node locations to define the location of the dummy-branch ends. 4. Dummy branches that have only one end connected to a branch are reported to the output file in a format that is recognized by a utility program, MODFEQIN. This program will place these values in the proper locations in the Free-Node Initial Conditions Block, with the location of the other end of the dummy branch filled with a standard character string that the user must replace with a valid location description. 5. Dummy branches that connect between junctions with at least one branch or dummy-branch node in the junction with a known location do not need location information in the Free-Node Initial Conditions Block. Here is a fragment from the Free-Node Initial Conditions Block showing how the locations of free nodes may be given. The first two lines of the input show blanks in the three right-most columns of information. Blanks in these columns denote that no location is given. Blanks in the last two columns do not mean zero! The last two lines in the fragment show the two ways of defining a location for an exterior node. The first, for node F1009, has blanks in the column under Base_node and, therefore, the values in the following two columns are the coordinate location of the node. The first value given is the EASTING value and the second is the NORTHING value (in the cases done so far). If the columns under Base_node contain a NODE designator, then the values in the following two columns are offsets from the location of that node. In the example below, the offsets are from F1009, the other end of the dummy branch.
NODE  NodeId---------- Dpth/Elev DISCHARGE DpthDatum Base_node X_or_Xoffset Y_or_Yoffset
F1008 RAAHAG_M.3LU           2.0       0.0       0.0
F1508 RAAHAG_M.3LD           2.0       0.0       0.0
F1009 RAAHAG_M.3RU           2.0       0.0       0.0             1216954.36    675618.77
F1509 RAAHAG_M.3RD           2.0       0.0       0.0     F1009       -400.0         0.00

--Two options have been added to the Run-Control Block that affect the datum for head used in the two-node control-structure instruction (Code 5 Type 6) and in the side-weir instruction (Code 14). All previous versions have always used the datum given in the instruction. It has proved useful to provide the option of always using the datum in the table or tables given in the instruction. If CD14_TAB_DATUM is given the value YES in the Run-Control Block, then the head datum given in the outflow table in the instruction will be used as the head datum in the instruction. The default value is CD14_TAB_DATUM=NO to reflect the behavior of past versions. The other option, CD5T6_TAB_DATUM, is the same for the Code 5 Type 6 instruction. These options were added to make it easier to represent changes in levee-crest elevations for major floods occurring in different years. Repair and enhancement operations could change the levee-crest elevation at many locations. Facilities exist for the semi-automatic recomputation of the flow tables, but any changes in crest level would have to be made manually in previous versions. With CD14_TAB_DATUM=YES, the changed crest levels, if any, in the flow tables for the levee will be used in the instruction. Thus, a manual change in the input is avoided. Version 9.71 August 15, 2000 --Added an optional input field for a free-node station in the Free-Node Initial Conditions Block. The station given to a free node is user defined. The station is printed with the summary output of extremes at the end of the run. --Used a new feature of the latest Fortran compiler from Lahey to check for undefined variables and subroutine interfaces. Found various undefined variables that had not been initialized.
Most of them were values that were not used in the computations but were transferred from one variable set to another at the end of a time step. The internal details of the pattern of storage are often ignored at these points to make the transfer faster and simpler. --Found a subtle problem in the two-D table checking that sometimes omitted a table from the list. Also fixed a bug in that code for multiple tables in one instruction. Version 9.73 September 27, 2000 --FEQ now forces the invert elevation for both nodes on a dummy branch to match the invert elevation of the branch node attached to one end of the dummy branch using the Equal-Elevation instruction in the Network-Matrix Control Input. This will make subsequent checking easier and will suppress many warning messages in models that make extensive use of dummy branches to represent connections between parallel flow paths in a stream system. --A new optional value in the Run Control Block selects the format for the *.FEO file in the NEW GENSCN block. The default value is set to the old format. The value need not be changed unless and until the new *.FEO format is created. Version 9.74 October 12, 2000 --Changed the Network-Matrix Control Input instructions that refer to two-D tables for flow ratings to use TAB or tab for the datum elevation of the table. Thus, there is no need to hard code the datum values. However, for this to work properly, the Function Tables Block must be given before the Branch Description Block so that the function tables will be known. This is a good standard practice for this as well as earlier versions. In order for FEQ to sense the TAB or tab value, it must be preceded with a semicolon. This signals the end of the previous group of input, which is all integers or character strings. TAB or tab is detected as a character string, and FEQ becomes confused when the semicolon is not added. This does not currently work for 1-D tables because we have no standard method for giving the datum for them.
If a datum is not given in the instruction, an error is reported. --Adjusted various aspects of the newest GENSCN output to agree with the latest revision of the output. Pump and gate status reporting is supported in the files for GENSCN. GENSCN modifications to be released near the end of the year may support these as well. --Added checking of side-weir instructions. Most often, side-weir flows should be zero at the initial condition in the model. If they are not, the initial elevation difference across the side weir should be small, or the flow and the difference should be in reasonable agreement. Currently, the check will disable any side weir for which the water-surface elevations across it differ by more than 0.1 foot. All "problem" weirs are reported in a table shortly before the start of unsteady-flow computations. Problem weirs are those with non-zero flow and those with the invert of the source or destination node above the datum for the flow over the weir. The latter indicates some sort of error in the data--one that is often difficult to see when many side-weir instructions are involved, such as when modeling flow over levees or high banks along a river. Problem weirs are disabled in the computations and must be corrected before they will be enabled in the computations. Version 9.75 October 25, 2000 --Changed checking of Code 4 Type 3 to take place in EXIN instead of in CONTRL. This was needed to support changes in handling nodes on dummy branches and level-pool reservoirs. --Initial values for all free nodes on dummy branches are now set by FEQ. Both ends of the dummy branch are set based on connection by any EqZ instruction. In some cases, models may exist in which a dummy branch is not connected to any other node using EqZ. In this case, the dummy branch must be explicitly set in the BACKWATER block because the elevation/depth value given to the nodes in the Free-Node Initial Conditions Block is ignored.
The elevations for level-pool reservoirs must either be given in the Free-Node Initial Conditions Block or defined in the BACKWATER block. All branches must be defined in the BACKWATER block as well. FEQ now checks for missed branches. Version 9.76 November 2, 2000 --Fixed a bug in the TAB option for Code 5 Type 6 when multiple flow paths are involved. Only the TAB option for the first path was applied. --Changed FEQ's response when the TAB option is used in the Network-Matrix Control Input and the function tables are unknown. It is now treated as an error, and a request is made to move the Function Tables Block ahead of the Branch Description Block. --Added additional messages to the output to describe what is being checked. Version 9.78 November 20, 2000 --Fixed a bug in handling the conversion between tabids and internal table numbers for the two default tables used in modeling detention storage. --Added the ability to include a HOME name value in the Function Tables Block. The string in the HOME name is prepended to any file name given after the definition of the HOME-name value. This makes it possible to change to a different drive letter by changing one input value. It also makes it possible to shift to a Linux-style directory structure with a minimum of change to the input file names. The home-name value is defined by the keyword HOME, followed by =, followed by the value. To define a drive letter, the form HOME= d: would work if all the subsequent file names start with a \ or /, depending on the operating system. Note that Lahey Fortran compilers are able to process a slash (/) in Microsoft operating systems (DOS/NT/W2K/XP/95/98/Me). I recommend using / instead of \ if there is a chance that a major input will be transferred to Linux/Unix. --Added Easting/Northing output to the new-format FEO file.
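The HOME-name prepending described above can be sketched as follows. This is a hypothetical minimal reader in Python, not FEQ's Fortran parser; it only shows the prefixing behavior:

```python
def resolve_names(lines):
    """Apply a HOME= prefix to subsequent file names.

    Once a line of the form 'HOME= value' is seen, 'value' is prepended
    to every later file name, so one input value can switch a whole
    model to a different drive or directory root.  A sketch only.
    """
    home, resolved = "", []
    for line in lines:
        stripped = line.strip()
        if stripped.upper().startswith("HOME="):
            home = stripped[5:].strip()
        else:
            resolved.append(home + stripped)
    return resolved

names = resolve_names(["HOME= d:", "/proj/xsect.tab", "/proj/weir.tab"])
# names -> ["d:/proj/xsect.tab", "d:/proj/weir.tab"]
```

Note how the example relies on the subsequent names starting with / (or \), as the text above requires.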
Version 9.79 January 2, 2001 --Found and fixed a bug in the automatic adjustment of the invert elevation of nodes on dummy branches when the node on the dummy branch is in an EqZ instruction with a node on a branch. The depth value was not correctly adjusted at the same time. --Corrected some formats in error messages so that larger numbers can be printed for the interior nodes on a branch. --Added checking for a carriage-return character in each line of input. This makes it possible to copy files from Microsoft operating systems, which end each line of a character file with a carriage return-line feed, to Linux, which ends each line with a line feed, without having to convert the file. Without this change, unconverted files would fail when running in Linux/Unix because the carriage return would be in the data line given to the Fortran program, and FEQ and FEQUTL would not process that character properly. Now, the character is converted to a space before any input processing takes place. Version 9.80 15 March 2001 --Found a problem in interpolation of cross sections when a bottom slot was present. Corrected the problem and introduced the restriction that interpolation can take place only when: 1. the sections bounding the interpolation interval both have a bottom slot, or 2. neither of the sections bounding the interpolation interval has a bottom slot. Interpolation intervals in which only one of the bounding sections has a bottom slot are invalid and will cause an error. --Found an error in the computation of the Hager side-weir correction value. This error, in limited testing, has the greatest effect on flow estimates when the head on the side weir is small, on the order of tenths of a foot. In this range, it appears that the error caused the flows for the weir to be too large by about 25 percent.
As the head on the weir increases to about 1 foot, the overestimate is about 5 percent, and at heads of 3 or more feet on a side weir (not normally expected on most side weirs), the overestimate is about 2 percent. The effect of this error on the results for a model depends on the head on the weir, the time spent at various heads, and the effect of the side weir on the stream system. --Added additional options for debugging models. Three options in the Run-Control Block add new features: 1. DTMIN_OUT gives a minimum time step for detailed output. Whenever the time step is <= DTMIN_OUT, detailed printout for that time step is given no matter what the print interval. Currently, DTMIN_OUT ignores DPTIME. However, that may change. 2. START_EQ and END_EQ give the starting and ending equation numbers in the Network Matrix for detailed printout of the equations, but only if the current time is at or greater than the time given in DPTIME and the current time step is <= DTMIN_OUT. You should be aware that printing the equations in part or in whole can make the output file very large. Adding the equation printout added substantially to the memory requirements for FEQ. If they do not prove to be useful, these options may be partially or completely disabled. --Disabled the shallow-depth correction in the STDCX governing-equation option because it appeared to be counterproductive in applications to cross sections with a bottom slot. More work is needed on how to improve computations with bottom-slotted cross sections. --Problems in Code 13 were revealed in a model with highly detailed cross sections. Situations arose in which there was no inflow or outflow but the solution converged with a noticeable difference in water-surface elevation. This was caused by there being a solution with such a difference due to variation of alpha in the energy equation.
Further, it was found that the rates of change with depth of alpha and beta were often greatly in error, so that the derivatives in the Newton linear system were in error. This caused convergence failure or slow convergence. The following changes were made: 1. Critical flow is now required in the cross-section function table used at the Code 13 junction. Heretofore, only the first moment of area about the water surface was required. It turns out that the required derivatives with respect to depth are simple functions of the Froude number. The critical flow is computed with some effort for consistency within FEQUTL. Therefore, using the critical flow in the Code 13 computations to compute the Froude number avoids the problems of the invalid rate of change of alpha or beta with respect to depth. 2. Heretofore, the case with no inflow or outflow was computed using the same code as for inflow and outflow. This has been changed so that the inflow/outflow must be greater, in absolute value, than (abs(QL) + abs(QR) + QSMALL)*1.E-4, where QL is the flow at the upstream node and QR is the flow at the downstream node, before either the momentum or energy conservation principle is used. If the inflow/outflow is too small, FEQ forces equal water-surface elevations at the junction. So far, this has avoided the problems outlined above. A side benefit is that the computations are slightly faster if there are many cases of potential inflow/outflow in the system. Most of the time there will be no inflow or outflow, and the equal-elevation option avoids any need for table lookup and involves minimal computation. --Problems in running a large model, more than 1,000 branches, with major flows over levees and high ground adjacent to the stream, revealed two errors in the computations of some partial derivatives in FEQ. These derivatives are used in Newton's method for solving a system of non-linear equations.
Convergence was slow or failed at locations that did not have any of the typical signatures of computational problems. The partial derivatives for eddy losses (given by KA and KD in the Branch-Description Block) should have been multiplied by the parameter WT. Because WT is always <= 1, this made these derivatives, on average, too large. However, the eddy losses are generally small, so this effect was small. The error had an effect when there was a larger change in velocity, say from 8 feet per second to 4 feet per second in a short computational element. The computations would fail because the time step became too small. Setting KA and KD to zero at this location would permit successful computation. The second error in partial derivatives was found for the velocity-head derivative in Code 13 for the conservation-of-energy option. If the outflow was large, the velocity head became large, and this error also caused computational failure with too small a time step. Correction of this error allowed the large model to compute at reasonable time steps (1,800 seconds), whereas before the correction any time step much larger than 240 seconds would cause failure. --A significant problem was found when running the model with more than 1,000 branches. There were thousands of lines of warnings about table overflows of various kinds, with overflow amounts so large that the number could not be printed within the format provided for it. The model could not provide a reliable computation of streamflow. The sparse-matrix methods used in FEQ do not provide correct solutions in single precision for very large models (greater than 250-450 branches, or with more than 28,000-80,000 matrix elements). Consequently, a double-precision version of FEQ was compiled and is available to users. The use of double precision has two drawbacks: 1. It increases the runtime storage for the software. Various larger vectors used in the program have doubled in size.
However, the relative increase for the whole program is not large. Typical PCs today should have enough RAM. However, if you try to run a 1,000-branch model on a PC with only 64 MB of RAM, you may find the computation to be too slow. The active memory footprint for the 1,000-branch model appears to be about 26 MB. 2. Runtimes are increased for models that run properly using single precision. The table below gives some statistics on the various models that have been compared. The models above the break in the table run properly using a single-precision solution for the linear system. Those below the break have increasing problems. The last two models in the table could only be run in single precision after the models were refined and tuned using the double-precision version. However, even then the behavior of the computations was erratic, with many spurious variations in the time step (recognized as such only after running with the double-precision version).

Number of  Number of   Number of  Number of  Number of  Element    Single     Double     Approx
branches   Ext. Nodes  Nodes on   Equations  Elements   Count per  Precision  Precision  Percent
                       Branches              in Matrix  Equation   Time       Time       Increase
---------  ----------  ---------  ---------  ---------  ---------  ---------  ---------  --------
       25          82       1741       3546      20104        5.7       97.3      105.7         9
      192         814        711       2282      34403       15.1       20.7       23.8        15
      255         812        781       2166      27469       12.7       17.6       20.1        14
---------
      506        1960       1622       5140      89203       17.4       58.9       70.4        20
      554        2006       1450       4696     131960       28.1        369        227      ----
     1066        3966       3123       9934     223598       22.5        772        397      ----

A comparison of the extreme results for the last model showed that they were essentially identical. This comes about from the nature of the iterative solution of non-linear equations. The error caused by the ill-conditioned matrix when single precision was used affected only the corrections to the unknowns. If these corrections become small enough, then convergence is declared.
Apparently, when the corrections became small, the errors in them became smaller as well, so that the final results of the two runs were essentially the same. However, it probably would have been impossible to run the final model with the single-precision corrections. The first model in the second group did run without apparent problems, even though a detailed analysis of the solution showed that it had some significant errors caused by ill-conditioning in the single-precision solution. It is not clear what characteristic measure of model size should be used to select between the two solutions. Part of the problem is that the behavior of a model with many computational problems in it appears similar to the behavior of a model with an ill-conditioned matrix. If the user finds a large model simulation to have computational difficulties (many warnings, large table overflows, spurious time-step changes, or failure to complete), a definitive test would be to do a test run of the model using the double-precision solution to see if the problems disappear. If they do, then it is probable that ill-conditioning of the matrix at single precision is the source of the problem. Version 9.81 3 April 2001 --Changed the method of applying corrections to depth in Newton's method. In previous versions, only large negative corrections were adjusted, in order to prevent or minimize the occurrence of negative depths in the final value of depth. There was no limit on the increase in depth. Now, however, the change in a depth value caused by the application of the correction from Newton's method is limited to 50 percent of the value of the depth. Currently, this value is hard coded and not accessible from input to FEQ. That will probably change in the near future. --Corrected an error in a format in the end-of-run report on 2-D tables.
If the final extreme flow used as an argument to a 2-D table of type 14 was greater than the maximum flow in the table, then a system message would be issued and the run terminated because an improper format was encountered in the process of printing the end-of-run report. Version 9.83 9 July 2001 --Added a check for the hour of day in time-series tables. The hour must be between 0.0 and 24.0. --Found differences in the treatment of reading an integer value using list format, that is, * for a format, between two Fortran compilers. Programs compiled using the Lahey compilers generated an error if the item being read was not an integer. This functioned the same under both Microsoft Windows and Linux. Programs compiled using the Portland Group, Inc. (PGI) compiler under Linux did not generate an error if the first character encountered in the item was a D. It is assumed this would probably happen if the first character was an E as well, or if the two letters were lower case, but this was not tested. The value returned to the integer was zero. This resulted in FEQ failing to run a model that had a time-series table id at the upstream-most boundary that started with D. This table id was ignored when using PGI, and as a result the run failed when zero flow at the upstream boundary caused shallow depths to occur in the main channel. I outline this in detail to remind users that there are always potential problems when shifting from one compiler to another. The standards cannot specify every detail. I have changed the read statement in question to use an I16 format code. Using this code gives the same behavior for both compilers. Version 9.84 1 October 2001 --Changes have been made to the input for tributary area. Named-item and heading-dependent input has been used to replace the remaining fixed formats. --Compiling with LF90 revealed some unexpected stack requirements in the double-precision version of the linear-system solution in FEQ.
Thus, the handling of the temporary WORK vector in INFO2 was changed. This vector is now a local variable declared in INFO2 only and not part of an overlay on PDAVEC, as it was before the double-precision, linear-system solution was implemented. This still requires a large stack in LF90, but now the messages make sense, whereas before the messages from the compilation of the double-precision version were confusing. LF95 handles local variables in a different manner, and the stack-size issue does not arise. --The Point Flows Block is no longer supported. Point flows should be connected to the model at a junction created for each point flow, with a dummy branch added to provide the boundary node to reference the point flow. Version 9.85 8 October 2001 --Fixed a problem in processing delay and detention with the updated input for the Tributary Area Input Block. --Non-printing characters in table ids are a recurrent problem that can be difficult to find. A new error message has been added to flag that possibility. It does not appear in the documentation at this time: *ERR* 408} FEQ has found nn function tables but nn unique table ids. These numbers differ; thus some table ids with non-printing characters may appear somewhere in the input. One or more tables may also be missing. Note: The number of function tables and the number of table ids found by FEQ should match. When they do not match, a function table may be missing from the input even though you have referred to it at some point in the input. On the other hand, there may be some non-printing characters in a table id so that FEQ sees it as being different even though it will print as being identical to the human eye. If the internal table number assigned to a function table when it is read by FEQ and the internal table number given for the table id when it is reported as missing differ, then one instance of the two occurrences of the table id must contain one or more non-printing characters.
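Table ids that print identically but differ in their raw characters can also be hunted with a short script outside FEQ. This is a hypothetical helper, not part of FEQ:

```python
def suspicious_ids(ids):
    """Group table ids by their visible text.

    Two ids that print the same but differ in raw characters (tabs,
    trailing control characters, and so on) land in the same group with
    more than one distinct raw form -- the situation error 408 warns about.
    """
    groups = {}
    for raw in ids:
        visible = "".join(ch for ch in raw if ch.isprintable()).strip()
        groups.setdefault(visible, set()).add(raw)
    return {k: v for k, v in groups.items() if len(v) > 1}

dups = suspicious_ids(["TAB100", "TAB100\t", "TAB200"])
# dups -> {"TAB100": {"TAB100", "TAB100\t"}}
```

Feeding it every table id that appears anywhere in the input would flag pairs that the eye cannot distinguish.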
Your editor might have the ability to show certain non-printing characters, such as tab characters, and, if so, can be used to find them. Otherwise, delete what look like spaces after the table ids that are suspect and try the run again. Version 9.86 21 November 2001 --Changed code in RDFIN to perhaps give better error messages on user errors in the Input Files Block. Certain user errors would cause a message that did not seem to be correct when, in fact, it was correct. Be sure that there are no spaces, minus or plus signs, or any other special characters in a file name other than the underscore character. Although the operating system may allow these, FEQ does not! Version 9.87 14 December 2001 --Corrected an error in an error message that caused an abnormal termination of the program when using the LF90 compiler. This termination gave no details on the location. Version 9.88 27 December 2001 --Relaxed the requirement on Code 13 so that old models would run. However, they may not give exactly the same answers as before. You should change the cross-section table type to 22 or 25 as soon as possible. In a future version, these cross-section table types will be required. An error/warning message is printed when tables of the undesired type appear. The model may run, but it may fail if the Froude number gets too close to 1.0. Version 9.89 11 January 2002 --Added a new feature to the side-weir instruction. If the table defining the failure/fight option is present, then if F(4), the fourth floating-point entry, is non-zero, FEQ will compute the elevation of the levee toe by subtracting F(4) from F(2), the datum for heads. Any value given for the toe elevation of the levee will be ignored. This makes it possible to vary the effective crest level for side-weir flows in absolute terms. For example, if F(4) is given as 1.0, then FEQ will treat the levee height as being 1.0 because the toe of the levee is then 1.0 below the levee crest.
The value of p in the failure/fight table will then give the change of crest in the same units as the levee height. If the levee height is 1 foot, then p = 0.25 will raise the levee by 0.25 feet. Note that the levee height computed this way is fictitious and is only a convenient way to control the elevation of the surface that was used to compute the values in the flow table. The values in the flow table can be from any source, not flow over a levee at all. The reason for doing this is that the flows over side weirs are sensitive to changes in the elevation of the water surface or the control surface. Thus, a change of only 0.25 feet in the control surface could change the computed overflows by 30 percent or more. When attempting to sort out the uncertainties of a stream system, the ability to vary key items controlling the model results is vital. The elevation of real control surfaces is often uncertain by 0.25 feet or more. --Added QCHOP operations to the output of flow files to avoid problems with flows that result from round-off and truncation errors in the computations. QCHOP has been applied to the printed output since version 7.0! --Added an optional time-series table to Code 6 to allow an adjustment factor as a function of time. Use with care! If the table is not given, the adjustment factor is taken as 1.0 from this source. Version 9.90 4 February 2002 --Changes to the processing of Tributary-Area input in October 2001 introduced problems not found in initial testing. The tributary areas for level-pool reservoirs were processed correctly. However, those for branches were not, and the tributary areas were returned as zero. This resulted in a dramatic reduction in the computed flows in most cases! --Also, make sure that any labels you have added following the required headings in the Tributary-Area Input Block are prefixed with a single quote. This signals that the remainder of the line is a comment.
The tributary-area values are now in heading-dependent format, so that the contents of each heading are significant in defining the range of columns below it.

Version 9.91 17 May 2002
--Modified the side-weir checking to include the effect of any bottom slot computed in FEQUTL. This was added to prevent side-weir flow while the water was still in the slot, which came about because the crest for the side weir was in error. However, in some models having hundreds of side-weir flows, it is not practical to check every crest. Thus, FEQ checks and disables any that do not make sense. The user can then review the list and decide what should be done about each problem.

--Found a subtle bug in the computation of one derivative in the side-weir code. It only appeared if the weight factor on head in the source channel differed from 0.5. Most models have used 0.5, so the error never manifested itself.

--Added an option to check high-water marks against maximum results at the end of a run. FEQ will seek a file with the exact name hwmark.loc in the current directory, the one from which you invoked FEQ, and, if found, will process the file. An example of such a file is:

; Listing of high-water mark locations for Reach 5
; Offset is in the same units as stationing in the model. If
; offset > 0 then it is a distance dns of the given node on a branch.
; If offset < 0, it is a distance ups of the given node. Note that
; offset is ignored if Bran is 0. In this case, the Node must be an
; exterior node either on a branch or a free node.
Location/description-------------------- Bran   Node Offset Elevation
S-1 in southwest Sumas                      0   D650    0.0      39.6
S-2 west of BNRR Sumas                      0  D5554    0.0      43.5
S-3 west of BNRR Sumas                   5562 556201  0.014      43.5
S-4 in south-west Sumas                   651  65106    0.0      40.2
S-5 west of BNRR Sumas                   5564 556401  0.019      43.6
S-6 central Sumas                           0   D641    0.0      40.8
S-7 northwest Sumas                         0   U641    0.0      40.4
S-8 north Sumas                           626  62603    0.0      37.4
S-9 northeast Sumas                       628  62802    0.0      38.5
S-10 east Sumas                             0   D626    0.0      37.8
S-11 southeast Sumas                      643  64306    0.0      38.9
S-12 far southeast Sumas                  653  65302    0.0      38.7
S-13 far southeast Sumas                    0   D652    0.0      39.0
S-14 south Sumas                          652  65206    0.0      40.6
S-15 far northeast Sumas                  630  63005    0.0      36.7
10.84 m B. C.                               0   D636    0.0     35.56
9.37 m B. C.                                0   F202    0.0     30.74
Max stage Huntingdon Gage                   0   D362    0.0      33.1
; Following points are in overflow corridor
Top utility box at Shuksan and EvrGrn    6210 621002 -0.021     80.74
;The following mark appears to be so high relative to others that it is in error
;East edge corridor south of Tom Rd      6230 623002  0.025     80.37
Near barn dns of looong culvert             0  D5074    0.0     70.25
Top step at Jim Glass's house            5118 511802  0.011     68.40
Mud stain in Jim's barn                  5118 511802  0.011     68.54
Near Johnson Creek and Clearbrook Rd     5190 519002    0.0     58.43
North and west of Clrbrk and Nksk Rd        0  F5804    0.0     56.40
Sth Badger Rd in Trib 1 on barn?????     6438 643802    0.0     65.71
END

This format must be followed exactly; columns are important. Use the heading line as the template.
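A reader for this fixed-column format might look like the following sketch. The 40-character width of the description field is inferred from the heading line shown above; the function name and the exact column bound are my assumptions, not FEQ's code:

```python
def parse_hwmark_line(line):
    # Hypothetical parser for one hwmark.loc data line (not FEQ code).
    # The description is assumed to occupy the first 40 columns, matching
    # the 'Location/description----...' heading; the remaining fields are
    # branch number, node, offset, and elevation.
    if line.startswith(';') or line.strip() == 'END':
        return None                      # comment line or terminator
    desc = line[:40].rstrip()
    bran, node, offset, elev = line[40:].split()
    return desc, int(bran), node, float(offset), float(elev)
```

Comment lines and the END terminator return None; data lines return the five fields with numeric fields converted.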
A summary is given as follows:

High-Water Mark Summary

High-Water Mark Location                 Bran   Node Elevation Sim Elev    Diff
S-1 in southwest Sumas                      0   D650    39.600   39.961   0.361
S-2 west of BNRR Sumas                      0  D5554    43.500   43.330  -0.170
S-3 west of BNRR Sumas                   5562 556201    43.500   43.292  -0.208
S-4 in south-west Sumas                   651  65106    40.200   39.719  -0.481
S-5 west of BNRR Sumas                   5564 556401    43.600   43.272  -0.328
S-6 central Sumas                           0   D641    40.800   39.451  -1.349
S-7 northwest Sumas                         0   U641    40.400   40.048  -0.352
S-8 north Sumas                           626  62603    37.400   37.768   0.368
S-9 northeast Sumas                       628  62802    38.500   37.158  -1.342
S-10 east Sumas                             0   D626    37.800   37.367  -0.433
S-11 southeast Sumas                      643  64306    38.900   39.130   0.230
S-12 far southeast Sumas                  653  65302    38.700   39.273   0.573
S-13 far southeast Sumas                    0   D652    39.000   39.280   0.280
S-14 south Sumas                          652  65206    40.600   39.445  -1.155
S-15 far northeast Sumas                  630  63005    36.700   36.127  -0.573
10.84 m B. C.                               0   D636    35.560   35.299  -0.261
9.37 m B. C.                                0   F202    30.740   30.874   0.134
Max stage Huntingdon Gage                   0   D362    33.100   32.972  -0.128
Top utility box at Shuksan and EvrGrn    6210 621002    80.740   80.251  -0.489
Near barn dns of looong culvert             0  D5074    70.250   71.075   0.825
Top step at Jim Glass's house            5118 511802    68.400   68.996   0.596
Mud stain in Jim's barn                  5118 511802    68.540   68.996   0.456
Near Johnson Creek and Clearbrook Rd     5190 519002    58.430   57.899  -0.531
North and west of Clrbrk and Nksk Rd        0  F5804    56.400   56.833   0.433
Sth Badger Rd in Trib 1 on barn?????     6438 643802    65.710   64.973  -0.737

Difference-distribution summary

 Difference range          Count Proportion
-100.00 < Diff <= -1.00        3       0.12
  -1.00 < Diff <= -0.50        3       0.12
  -0.50 < Diff <= -0.25        6       0.24
  -0.25 < Diff <= -0.10        3       0.12
  -0.10 < Diff <=  0.00        0       0.00
   0.00 < Diff <=  0.10        0       0.00
   0.10 < Diff <=  0.25        2       0.08
   0.25 < Diff <=  0.50        5       0.20
   0.50 < Diff <=  1.00        3       0.12
   1.00 < Diff <=100.00        0       0.00

Analysis of 25 high-water marks completed. Currently, the ranges for the distribution of differences are hard-coded and in feet.
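The difference distribution is simple to reproduce outside FEQ, for example to try other bin limits. A sketch, with the bin edges copied from the output above; the function itself is mine, not FEQ's:

```python
def diff_distribution(diffs):
    # Tabulate differences (simulated minus observed elevation) over the
    # same (low, high] ranges as the summary above; edges are in feet.
    edges = [-100.0, -1.0, -0.5, -0.25, -0.1,
             0.0, 0.1, 0.25, 0.5, 1.0, 100.0]
    n = len(diffs)
    rows = []
    for lo, hi in zip(edges, edges[1:]):
        count = sum(1 for d in diffs if lo < d <= hi)
        rows.append((lo, hi, count, round(count / n, 2)))
    return rows
```

Each returned row holds the range limits, the count, and the proportion, mirroring the printed table.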
In the future, they will be controlled by the user.

--An additional item of information is placed at the end of the processing of the Branch-Description Tables. This is a line that gives the total length of all branch flow paths in the model. An example output is:

Length of all branch flow paths=   103.940

The units of the value are the same as given in the branch input. In this example, the units are miles.

--Added options to the Output Files Block to add or subtract values at different locations in the model to create the value to be stored in the file. This is useful when there are multiple flow paths that form and vanish along a stream. In some cases it becomes difficult to debug when the flows change in a flow path because an additional flow path becomes active. Only modest changes were needed to the input to accomplish this. Old inputs should continue to work as before. The following example block illustrates the new features. The comments explain what is happening. Note that the UNIT column, which is no longer required, has taken on a new role: to define the action to be taken with the information on the input line.

OUTPUT FILE SPECIFICATION
ACTN BRA NODE ITEM TYPE NAME----------------------------------------
; The action field is blank so that the given location will be output to
; the named file as in the past.
          U4000 FLOW STAR demingq.sim
; The flow at exterior node D4142 will be output in the named file. OUTA
; requests output AND adding the value to an internal location for use
; in the QUAD action. Using OUT would give the same result.
; The following line requests that the flow at D4142 be integrated (QUAD
; stands for quadrature, an older term for numerical integration) numerically
; to yield a time series of the cumulative flow at D4142.
OUTA    0 D4142 FLOW STAR blweverson.sim
QUAD    0     0 FLOW STAR cumblweverson.sim
; Here, we want to output the sum of flows at D3577, D3177, and D3677 to
; the file given.
; Note that the file field is left blank for the ADD action
; because no output is requested. Again, the QUAD action computes the
; cumulative flow from the sum of these three nodes.
ADD     0 D3577 FLOW STAR
ADD     0 D3177 FLOW STAR
OUTA    0 D3677 FLOW STAR abvgdmrdn.sim
QUAD    0     0 FLOW STAR cumabvgdmrdn.sim
; The first two lines merely output the flow at the given node.
; The ADD action adds the flow at D4541 to an internal location.
; The SUB action subtracts the flow at F4501 from the same internal location.
; The QUAD action computes the cumulative net flow at the two nodes and places
; it in the named file.
          F3006 FLOW STAR stcknyislndrd.sim
          D4541 FLOW STAR mnstrtevrsnq_2002.sim
ADD     0 D4541 FLOW STAR overflow + baseflow
SUB     0 F4501 FLOW STAR take out baseflow
QUAD    0     0 FLOW STAR cumqover.sim
-1

I have a simple plot routine that then reads these files and creates plots of the results.

--Found an error in the forced-boundary instruction, Code 6. The multiplying factor was applied to both the base value and the time-series value. It has been changed so that the multiplying factor applies only to the time-varying value. The base value serves as a constant lower limit that provides a floor on the flow or elevation at that location. If you always left the multiplying factor at 1.0, the usual course of action, then this bug made no difference. This is probably why no one has reported a problem.

--Added an option to permit one file name as a command-line argument to FEQ. If only one name is given, FEQ strips off the final extension of that name, if there is one, and appends .out to it to form the user-output file name.

--Added optional home-directory values in the Run Control Block, the Tributary Area Block, the Special-Output Locations Block, the Input Files Block, and the Output Files Block. In each case the variable is called HOME. The following rules apply to the use of these values:

1.
A value of HOME given in the Run Control Block is considered to be a global value; that is, in the absence of additional information, it applies everywhere that a home directory has meaning. Such a directory has meaning for file names in a special format.

2. If a value of HOME is given in one of the other blocks, then that value applies in that block until another value appears in that block. The global value of HOME is ignored in a block if a local value is given in that block.

3. The home directory is added to a file name only if that file name begins with either a / or a \. Thus, you can still give file names that are local to the directory from which FEQ was invoked. In Microsoft operating systems you can also prevent application of a home directory by using a full path name including the drive letter. However, that will not work for Linux/Unix because they have no drive letters.

4. If HOME is not defined anywhere, then file names are not changed anywhere.

5. HOME must be given in the first four characters of the input line. The following equal sign and value must also occur in that order before the end of the line, which should be considered to be at most 80 characters long.

6. File names, except in the Function Table Input or in FTABIN, are limited to 64 characters in length including drive letter, colon, slashes, or backslashes. The file name can be 96 characters long in the Function Table Input or in FTABIN. This allows for a deeper directory structure. However, using long directory names with a deep structure can exceed these limits. At some point, I will probably increase the file-name length to 128 or more characters.

Here are some examples:

1. Special-Output Locations Block. Notice that here the home value contains all of the path except the final /. The final / must be placed on the file name in order to have FEQ add the home directory to the name. Notice that deleting HOME and deleting the slash on spout yields the same location for the file.
Special OUTPUT LOCATIONS
HOME=D:/nooksack/lower/feq
FILE=/spout
BRA NODE 12345671234567
  0 D4541 EvrsnMnStrt
-1

2. Output files: this is the same as for the Special-Output Locations Block. Again, deleting HOME and the leading slashes on the file names gives the same location for the file.

OUTPUT FILE SPECIFICATION
ACTN BRA NODE ITEM TYPE NAME----------------------------------------
HOME = D:/nooksack/lower/feq
          U4000 FLOW STAR /demingq.sim
OUTA    0 D4080 FLOW STAR /upsoverq.sim
QUAD    0     0 FLOW STAR /cumupsoverq.sim
OUTA    0 D1010 FLOW STAR /ferndaleq.sim
QUAD    0     0 FLOW STAR /cumferndaleq.sim
          F3006 FLOW STAR /stcknyislndrd.sim
          D4541 FLOW STAR /mnstrtevrsnq_2002.sim
ADD     0 D4541 FLOW STAR /overflow + baseflow
ADD     0 F3006 FLOW STAR /flow over Stickney Island Rd.
SUB     0 F4501 FLOW STAR take out baseflow
QUAD    0     0 FLOW STAR /cumqover.sim
ADD     0 F4501 FLOW STAR
QUAD    0     0 FLOW STAR /cumqbase.sim
-1

The purpose of adding these features is to make porting of input files to Linux/Unix easier. First, note that the Lahey Fortran compilers running under Microsoft Windows will properly process slashes, even though Microsoft operating systems use a backslash. I am starting to use slashes in all my inputs so that I have one less thing to change in the input to run under Linux. By careful design of the file layout, I can eventually transfer an entire project and only have to change a global HOME value in the user input to FEQ and FEQUTL in order to run under Linux/Unix. FEQ and FEQUTL already properly process the end-of-line differences in files that come to Linux/Unix from Microsoft Windows. The files are left unchanged; the editors that I use in Linux do not change the line endings, and any added lines have the same line endings. Thus, it is possible to move such a project back to Microsoft Windows again with only a change in a global HOME value in each user-input file. However, if the project is initiated and developed under Linux, the transfer to Microsoft Windows is more complex.
The lack of the "extra" carriage-return character at the end of each line will cause problems for most Microsoft programs.

Version 9.92 12 June 2002
--Corrected improper handling of the two-digit form of the year in the date/time strings in the Extreme-values summary output. The year 2200 was being used for a run and the year was printed as 200 instead of 0.

--Added a Fortran 90 CASE statement to handle output formats in order to test the Intel Fortran compiler. This compiler no longer supports the ASSIGNED GOTO statement.

Version 9.93 14 August 2002
--Added selector variables to the master input file for FEQ. The master input file is the input file given as the first command-line argument when invoking FEQ. An FEQ model may involve hundreds of input files, but only one of these files is the master input file. This file contains the various blocks that describe the model; the other files are referenced in some of these blocks. Thus, the name "master input file" is used because it contains the references to all other files needed to fully specify the model. All the other files will be called slave files because they depend on the master file.

Selector variables are provided to help manage the complexities of applying an unsteady-flow model to a stream system and to bring order to a potentially confusing process. By careful definition of a small set of selector variables, and the inclusion of selection blocks, we can create a single master input file for FEQ that contains the descriptions of all scenarios. Thus, a change to the master input file will affect all scenarios that select that part of the master input file. A master input file will have large sections of its contents outside of any selection block. These are the parts of the input file that describe conditions applying to all scenarios. There will then be one or more selection blocks that contain input specific to only certain scenarios.
The following rules apply to the use of selector variables:

1. A selector variable can be up to 16 characters in length with no spaces. Although not required in this version, starting the selector with an alphabetic character is a good idea to fit with possible future changes.

2. Every line within the selection block is involved in the transfer. Thus, all lines must be valid input to FEQ at that location in the file.

3. FEQ does no checking during the transfer of selector variables to the actual input to FEQ. You must make sure that the final result creates a valid input to FEQ.

4. Currently, FEQ opens and retains a file named f_e_q_i_n_temp.default to save the actual input used by FEQ. This file should be checked because it contains the values of the selectors and selection blocks used. You can give your own name for the file to be created by specifying the name in the Set-Selectors Block, for example:

FILE= myoutputfilename

The file specification must all appear on one line of input. The file name should not contain spaces and should be less than 128 characters long including the drive letter, colon, and / or \.

5. The keywords IF, ELSE, ELSEIF, and ENDIF must be in upper case. The selector variables can use upper and lower case, and are case sensitive. That is, Q1990 and q1990 are not the same selector; they are different!

6. It is an error to have more than one ELSE in a given IF-ENDIF block, and if an ELSE is present with ELSEIFs, it must be the last option. For example,

IF xyz
ELSEIF ABC
ELSEIF CDEE
ELSE
ENDIF

is valid. However,

IF xyz
ELSE
ELSEIF CDEE
ENDIF

is invalid, and FEQ should give an error or warning message.

7. The tilde, ~, is the "not" prefix operator. For example, if G2002 has the value true, then ~G2002 has the value false. This option was added to make the logical operations a bit more complete; the "or" and "and" logical operators may be added if a need develops.
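As an illustration of rules 5 through 7, the transfer can be sketched as a simple line filter. This is a hypothetical reconstruction, not FEQ's Fortran code, and it handles only a flat, non-nested selection block:

```python
def select_lines(lines, selectors):
    # Sketch of the selection-block transfer (illustrative, not FEQ
    # source).  The first branch whose selector is true is copied,
    # '~' negates a selector, and lines outside any block always pass.
    out = []
    taken = False    # has a branch of the current block been selected?
    active = True    # are lines currently being copied?
    for line in lines:
        words = line.split()
        key = words[0] if words else ''
        if key in ('IF', 'ELSEIF'):
            if key == 'IF':
                taken = False
            name = words[1]
            value = (not selectors[name[1:]] if name.startswith('~')
                     else selectors[name])
            active = value and not taken
            taken = taken or active
        elif key == 'ELSE':
            active = not taken
            taken = True
        elif key == 'ENDIF':
            active = True
        elif active:
            out.append(line)
    return out
```

Note how `taken` implements the first-true-wins rule: once a branch has been selected, later ELSEIF and ELSE branches are skipped even if their selectors are also true.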
The following discussion illustrates cases in which the use of selector variables is advantageous. An unsteady-flow model is used to analyze a variety of situations. A model may be calibrated on one or more observed events of interest. These events may be historical floods or may be flow periods during which special measurements were made. Once the model is deemed to be suitably calibrated, it will be used in one or more of the following ways:

1. Approximate the flows and water-surface elevations for a historical flood.

2. Approximate the flows and water-surface elevations when a historical flood is applied to a model modified to represent conditions different from those that existed when the historical flood occurred. These differences could involve, among other things, modified flow paths; changes to bridges, culverts, dams, and levees; changes in levee-failure assumptions; and changes in the operation of gates or pumps. In order to discuss such applications, we will call such a flood a transposed flood. That is, we take the point of view that the configuration of the stream system, which we will call the geometry of the system, is the principal factor being modified. It also means that we use the timing of the historical flood. That is, if the flood occurred in October of 1995, then the date/times in the FEQ model run will be in October of 1995 even though some of the structures in the model or the geometry of some of the flow paths were not present until 1999.

3. Approximate the flows and water-surface elevations when a design flood is applied to the model. A design flood is selected to represent conditions deemed important by a regulatory group. For example, in the USA, the 100-year flood is used to estimate water levels for flood-insurance rate mapping. In this case, the geometry used may not actually exist yet; it may represent a proposed future condition. We must then select some date/time sequence for the flood to be able to run FEQ.
It is often convenient to select a date in the future that relates to the return period assigned to the event. FEQ can process dates at least to the year 9999, so using a year of 2100 for a 100-year flood is convenient. A 50-year flood could be assigned to the year 2050 for computational purposes.

We will call a run of FEQ for one of the above purposes a scenario. It is possible that a project could have 25 or more scenarios that need to be developed and analyzed. In this process, confusion can arise from the many different master input files for FEQ. In most cases, the changes from one scenario to the next are of limited extent in the master input file. Consequently, when a change is required that affects all of the scenarios or a large subset of them, we must make changes to many different files. This is prone to error and oversight. The pressure of deadlines often results in changes being made to the current scenario only. Later, confusion results when other scenarios are run under the assumption that the changes have been made, when they have not been made to that master input file.

These typical applications of an unsteady-flow model show the three major sets of factors that define a given scenario. A scenario for an unsteady-flow model is defined when the geometry, boundary conditions, and system performance are established. We first discuss these sets of factors from the point of view of defining selector-variable names, and then for possible directory-structure implications.

1. The geometry of the system includes any surfaces over which or through which water flows. Thus, each scenario will have a predefined shape and size of flow paths, bridges, culverts, levee crests, dams, gates, and so forth. Failures of levees and dams, as well as flood fighting and changes in gate openings, are included in the system performance discussed below. The geometry will be dated by the year and, if needed, smaller divisions of the year.
A selector variable in FEQ can be up to 16 characters long and should begin with an alphabetic character. We suggest that the selector variables that relate to geometry all start with a G and that the year be given as the full four digits. Thus, the selector variable for geometry unique to a flood in 1995 would be G1995. If the geometry varied within the year and there are two or more floods of interest in the year, a selector variable like G1995.4 might be used to denote April of 1995. It is also possible to use G1995.April, but considerations discussed below suggest that the selector variables be kept short.

2. The boundary conditions describe the nature of the flows and stages that are imposed at the boundary nodes in the model. We will have to break this into two subgroups: hydrology and tides. We do this because there could be variations on tides that have nothing to do with the hydrology; a choice among tidal records may arise when no gaged record is available. The hydrology may be an approximation to the conditions during a particular historical flood, or it may be based on a design flood. The hydrology refers to flows imposed on the model, so we will denote any particular hydrology set using a selector variable that begins with a Q. Thus, the flows at the various boundary points for the 1990 flood would be in the set selected by Q1990. The 1995 flood would be selected by Q1995. Design floods do not have a year associated with them, but we must associate a year to run FEQ. As outlined above, the 100-year flood could be placed at some point in 2100, so that the flows associated with this event would imply a selector variable of Q2100. In some cases, the smaller tributaries may not have the same return-period flows as the main stream. In that case, the date to use must correspond to the return period of the flows applied to the main stream of the stream system.
The tides would be selected by using a selector variable starting with a T. A tide sequence may cover many years, so we may not always define the selector variable on the year of the event. If the tide series is an approximation or a record of a historical series, then the selector variable will have some indicator of the gage used or the means by which the approximation was created. For example, the tide record at Cherry Point could be denoted by Tchrry. The tide period used for a design flood must have the same year as used for the design flood.

3. System performance is a collective term for changes in the geometry that take place during an event. These changes include levee failures, flood fights, gate operations, and so forth. Because these changes take place during a flood, they will be denoted by the year as well. We will use the letter P as the initial character of the selector variable. There is a complication in describing performance: for example, how should the performance be defined when a transposed flood is applied?

Option 1. Shift the transposed flood's time of occurrence to match the approximate time of occurrence of the flood for the geometry. For example, if we want to apply the 1995 flood to the 1990 geometry, we shift the flows in 1995 so that the flood occurs close to the time that the 1990 flood occurred. We might choose to match the time of peak as one way of defining the shift. Then the actual performance of the levee system in 1990 might apply, at least approximately, but only if the floods are similar. This is unlikely to be true.

Option 2. Model the transposed flood at its time of occurrence using the geometry. In this case, the performance of the levee system during the occurrence of the transposed flood might be an approximation to the performance to use. However, as in Option 1, differences in the floods or geometry will make such application suspect.

Option 3.
Assume that no levee fails and no flood fighting occurs whenever a transposed flood is involved. This condition may be unrealistic, but it creates a consistent pattern so that comparisons can be made.

Option 4. Use Option 3 as a basis for comparison, and compare the results for the transposed flood to the actual flood results. This involves reviewing the extent of flows over levees and of levee failures in the actual flood, and then constructing a performance for the transposed flood that appears to make sense. For example, if flow over a levee during an actual flood was prevented by a successful flood fight, and if the levee is overtopped by the transposed flood, then apply flood fighting to that location for the transposed flood as well. Thus, based on past actual floods, we must infer some typical flood response, as well as the success of that response, which we then apply to a transposed flood. This can probably be done in some cases for flood fighting, but with levee failure we encounter increased difficulty. If consistent failures occur along a given levee, we can make some assignment of levee failure for an alien flood. However, most failures do not follow a consistent pattern from flood to flood. The location, timing, and size of a levee failure could be critical in any comparisons using transposed floods.

Clearly, the performance of a stream system for a given flood can be assigned to the year of the flood. For example, we would have G1990, Q1990, and P1990. The no-fail, no-fight performance assumption can be denoted as Pnoff. However, how shall we denote the performance of a system when a transposed flood is applied? The selector variable is limited to 16 characters; moreover, we do not want these variables to become too long, because we will also use the selector-variable name as part of a directory-naming convention to help keep track of the output files from the different scenarios.
I propose the following: give both the date of the flood and the date of the geometry in the performance selector name. For example, if we apply the 1995 flood to the 1990 geometry, and we use other than the no-fail/fight performance, then the performance selector name is P1995on1990. This name contains 11 characters and is not too cryptic; its use in the recommended directory-naming convention is discussed below. There may be a need to define additional selector variables to describe the scenario with sufficient detail. If so, we will select names that make sense in the context being studied. Below is an example of a Set-Selectors Block wherein we define selector variables:

SET SELECTORS
; Devise a set of selector names so that we have one master input
; file for FEQ for R1-R4 for various geometries and flows.
;
; Variable name Meaning
; ------------- ------------------------------------------------------------
; G1990         Hydraulic geometry for 1990 flood
; G1995         Hydraulic geometry for 1995 flood- assumes new bridges at
;               Everson, Lagerway Dike.
; G2002         Hydraulic geometry for 2002- currently taken to be same as
;               for 1995 pending information.
; Q1990         Flow for 1990 flood from USGS as modified and tributaries as
;               computed from rainfall by way of rainfall-runoff modeling.
; Q1995         Flow for 1995 flood from USGS and tributaries as computed
;               from rainfall by way of rainfall-runoff modeling.
; Q2002         Flow for 2002 floods from USGS at gaged locations with
;               factored values applied to other tributaries.
; Q2200         Flow at Deming is a 200-year event, tribs at 1990 estimates
;               with timing as for 1990 (approximately)
; P1990         Levee failure/flood fighting approximately like that in 1990. Has 1990 timing.
; P1995         Levee failure/flood fighting approximately like that in 1995. Has 1995 timing.
; P2200         No levee failures nor flood fighting for 200-year flood with
;               same time of occurrence within the year as 1990. Only
;               the year was changed from 1990 to 2200.
; Tfixed        Tide at Bellingham Bay and Lummi Bay held fixed.
; Tnos          Tide at Bellingham Bay based on recommended source from NOS.
; Tchry         Tide based on Cherry Point gaging.
G1990 = false
G1995 = false
G2002 = true
Q1990 = false
Q1995 = false
Q2002 = false
Q2200 = true
Tfixed = true
Tnos = false
Tchry = false
P1990 = false
P1995 = false
P2200 = true
FILE = feq.in
END SELECTORS

This block appears as the very first set of lines in the master input file to FEQ, ahead of the title lines. In this example, I have defined four groups of selectors that relate to the four sets of variables comprising a scenario. Note that each selector variable must appear on its own line. The equals sign is required. The value for the variables, true or false, can be in all lower or all upper case.

An example title, or Run Description Block, with the first part of the Run Control Block, appears in the following lines:

Model for Lower Nooksack River: Deming to Bellingham Bay
Model for Reach1, Reach 2, Reach 3, and Reach 4  July 23, 2002
IF G1990
Geometry: Flow geometry approx 1990 conditions of importance to the overflow:
Lagerway Dike is not present, and old bridges on Everson Main Street.
ELSEIF G2002
Geometry: Flow geometry approx 2002 conditions of importance to the overflow:
includes Lagerway Dike, and new bridges on Everson Main Street.
ENDIF
IF Q1990
Boundary conditions: Run of 1990 flood hydrograph as revised:
Use 1990 tribs from rainfall-runoff modeling.
ELSEIF Q2200
Boundary conditions: Run of 200-year flood hydrograph: Use 1990 tribs with
1990 timing. Use 200-yr flows at Huntingdon. (Affects flow at Main Street
with 0.03 of Huntingdon allocated for base flow.)
ENDIF
IF P2200
Levee/high ground performance: No levee failures or flood fighting for
200-year event.
ENDIF
IF Tfixed
Tide at Bellingham and Lummi Bays: Tide is fixed.
ENDIF
RUN CONTROL BLOCK
IF G2002
NBRA=1481
NEX= 5362
ELSEIF G1990
NBRA=1481
NEX= 5334
ENDIF
. . .
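A Set-Selectors Block like the one above can be read into a dictionary of selector values with a few lines of code. This is an illustrative sketch under my reading of the rules, not FEQ's actual parser:

```python
def parse_set_selectors(lines):
    # Hypothetical reader for a Set-Selectors Block (not FEQ code):
    # ';' lines are comments, NAME = value pairs set selectors, and the
    # block ends at END SELECTORS.  Selector names keep their case;
    # true/false values are accepted in either case.  FILE= (and the
    # makehomename option) are kept apart from the selectors; this
    # sketch ignores the case significance of the makehomename keyword.
    selectors, options = {}, {}
    for line in lines:
        s = line.strip()
        if not s or s.startswith(';'):
            continue
        if s.upper().startswith('END'):
            break
        if '=' not in s:
            continue            # e.g. the SET SELECTORS header line
        name, _, value = (p.strip() for p in s.partition('='))
        if name.upper() in ('FILE', 'MAKEHOMENAME'):
            options[name.upper()] = value
        else:
            selectors[name] = value.lower() == 'true'
    return selectors, options
```

The returned dictionary preserves the order in which the selectors were given, which matters for the makehomename construction described below.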
The keywords associated with the selectors are IF, ELSE, ELSEIF, and ENDIF. These keywords must be in upper case. They do not have to start in column 1, but each must be on its own input line. Also, the first condition that evaluates to true in an IF-ELSEIF-ELSEIF ... ENDIF sequence will be selected. For example, if by some error both G2002 and G1990 were true in the Run Control Block fragment, the G2002 values would be the ones selected.

The final example is for the Function Tables Block, where one or more slave files are referenced:

. . .
IF G1990
FILE= /nooksack/lower/futl/r4/xsecdn.tab ' Pre-Lagerway Dike
ELSEIF G2002
FILE= /nooksack/lower/futl/r4/xsec/lgrwy/xsecdn.tab 'Post-Lagerway Dike
ENDIF
. . .

In this example, we have changes to some cross-section files that occurred when the Lagerway Dike was built.

--Added an option to the Set-Selectors Block to define a home directory. This home directory can serve as the global home directory if no home directory is given in the Run Control Block. A global home directory given in the Run Control Block will override a home directory given in the Set-Selectors Block. In that case, the home directory in the Set-Selectors Block applies only to the file used for storing the results of the selection process and to the master output file, that is, the output file given as the second command-line argument when invoking FEQ. The option is invoked by:

makehomename= D:/nksk

or

MAKEHOMENAME= D:/nksk

The home name or directory is constructed as follows:

1. The left-most part is taken as the character string given in the option. In these two cases, the left-most part is D:/nksk. Note that I have specified a slash instead of a backslash because the Lahey Fortran compilers are able to process both forms when running under Microsoft operating systems, which typically use the backslash. I use the forward slash because it is also compatible with Unix and Linux.

2.
The remainder of the name is formed by scanning the values given for the selector variables and adding each selector-variable name whose value is true to the base name given by the user. Note that the selector variables are scanned in the order they are given in the Set-Selectors Block.

3. The first selector-variable name added is prefixed with a slash to form a subdirectory under the base name given by the user. All subsequent selector-variable names are prefixed with an underscore as they are added.

For example, if we added:

makehomename= D:/nksk

to the Set-Selectors Block given above, we would get:

D:/nksk/g2002_q2200_tfixed_p2200

as the global home directory. Alternatively, if we added:

MAKEHOMENAME= D:/nksk

to the Set-Selectors Block given above, we would get:

D:/nksk/G2002_Q2200_Tfixed_P2200

The difference is that all alphabetic characters in the first instance are forced to lower case, and in the second instance, the case is left unchanged.

If FEQ were invoked with:

FEQ feqin out

where "feqin" is the master input file and "out" is the master output file; then, if the Set-Selectors Block were present as given above with "makehomename= D:/nksk" added, and a home name was not given in the Run-Control Block, the following applies:

1. The result of the selection-block processing would appear in the file "feq.in", and this file would be under the path name "D:/nksk/g2002_q2200_tfixed_p2200". This file is run by FEQ to compute the results for the given scenario.

2. The master output file, "out", also would appear under this same path name.

3. In order to read the various slave files in their blocks, we would have to give a local home name to override the global home name. Note that the global home name defined in the Set-Selectors Block is for output files and not for input files. The only exception is the file feq.in, which is both an output file, from the Set-Selectors Block, and an input file, for FEQ.

4.
The file names given in the Special-Output Locations Block, the Output Files Block, and the GENSCN blocks that were prefixed with a / or \ will also appear under the path name created by makehomename. If they are not prefixed by a / or \, then they will appear in the directory in which FEQ was invoked, which is the standard default location for all file names with no path name given. I am assuming that a local home name was not given in these blocks. Recall that a local home name always overrides a global home name.

Final notes:

1. FEQ has a limit of 64 characters for file names except in the Function Tables Block, where the limit is 128 characters.

2. The names of the selector variables should be carefully chosen so that one can store each scenario's results in a unique subdirectory for later access.

3. FEQ does NOT create the home-name directory; the user must do this. If the directory does not exist, FEQ will report an error when it tries to create or access the file.

4. THIS IS IMPORTANT: If one of the makehomename options in the Set-Selectors Block is used, then you must look in that directory for the master-output file for the run. A truncated master-output file, created when the Set-Selectors Block was being processed, will appear in the directory from which FEQ was invoked. This truncated file can be used to sort out problems in the Set-Selectors Block and in the processing of the selection blocks. Errors encountered in processing these blocks will appear in the master-output file in the directory from which FEQ was invoked (often called the current directory). The contents of this file, if the selection-block processing goes without error, also appear in the master-output file in the directory created by the makehomename option.

5. Currently, the CAD script file that can be used to create a CAD schematic will be stored in whatever global home directory applies. The script file is created after all of the input for the model has been processed.
Therefore, the location appears as follows:

Set-Selectors Block (SSB)   HOME name given in   Directory in which
with a makehomename         Run Control Block    schematic.scr is placed
is present                  (RCB)
-------------------------   ------------------   -----------------------
no                          no                   current directory
no                          yes                  RCB home directory
yes                         no                   SSB home directory
yes                         yes                  RCB home directory

Remember that the Run Control Block home directory name overrides the Set-Selectors Block home directory name.

--Added automatic counting of the number of branches and the number of exterior nodes in an input file. The input of NBRA and NEX is still allowed; however, the values that FEQ finds in the input will be ignored. This eliminates a source of error in the entry of the NBRA and NEX numbers when modifying a model. However, certain input errors may be more difficult to find now that these numbers are not given. This change has not been thoroughly tested. I have tested the code using a variety of master-input files I have on my system; however, all of these files had the correct number of branches and exterior nodes. Only experience gained from building models with this version will reveal if there are special problems in detecting certain errors when FEQ counts the number of branches and the number of nodes. I have tested it by leaving out a branch, and the messages made sense. I also have tested it by leaving out a simple junction, and the messages made sense. Finally, I also have tested a larger model under Linux with this version, and it worked as expected.

--A reminder: it is recommended that FEQ tables of type 22 or 25 be used at all locations using Code 13: conservation of energy/momentum. FEQ will accept other table types but issues an ERR/WRN message. I have encountered problems with convergence using Code 13, and the code was reorganized to avoid those problems, but doing so made use of the velocity-head correction factor, alpha.
If alpha is not given, FEQ uses 1.0, and in some cases this can result in non-convergence or convergence to an invalid solution.

--Made an internal change that disables additional I/O units based on information found on the Lahey users forum. This change has no effect on the end user. Disabling units 1 through 7 should avoid all "hard-coded" assumptions that may exist in some compilers and operating systems.

--Added reporting of the compiler/operating system used to create the executable. This may be needed to enable handling of files between compilers in the future. It may also make it possible to intercept certain strange errors when incompatible unformatted files are opened.

Version 9.94  23 August 2002

--Added support of the global home directory to miscellaneous slave output files created by FEQ. These files are defined in blocks of input that do not have the option for setting a local home directory. Therefore, if a home directory is invoked by prefixing the file name with a slash or backslash, it will be taken relative to the current global home directory. The files affected are those given in the Run Control Block input options: BWFDSN-file name for storing initial conditions when using a DTSF; GETIC-file name for obtaining initial conditions; and PUTIC-file name for storing initial conditions.

Version 9.95  21 October 2002

--Changed reporting of the executable to include more information. This may be used in the future to tailor other operations to the compiler or operating system. Currently, this is used to inform the user of the compiler, precision of solution, and prefetch options.

--Fixed problems in addressing nodes on branches in the NEW GENSCN OUTPUT block.

--Added reporting of computational element length to the Branch-Description Block output.

Version 9.96  23 October 2002

--Fixed bug in the processing of the OPTIONS line in the Special-Output Locations Block.

--Fixed bug in reporting that an odd number of exterior nodes was found.
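The slash-prefix rule described under Version 9.94 can be sketched as follows. This is an illustrative sketch with an assumed function name; it is not FEQ's actual Fortran code:

```python
def resolve_output_name(name, global_home):
    # Sketch of the rule for BWFDSN, GETIC, and PUTIC file names:
    # a name that begins with / or \ is taken relative to the current
    # global home directory; any other name is left unchanged.
    if global_home and name[:1] in ("/", "\\"):
        return global_home.rstrip("/\\") + "/" + name.lstrip("/\\")
    return name
```

For example, with a global home of D:/nksk, the name /bwf.ic would resolve to D:/nksk/bwf.ic, while the unprefixed name bwf.ic would stay in the current directory.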
Version 9.98  21 April 2003

--A new message is written to the master-output file when a synchronizing time step is computed. Without this message, the output can be confusing because the time step is first set to the maximum, and then it is changed to something smaller. FEQ tests for the need to synchronize the time with the maximum time step, that is, to update the time so that the values are in agreement with the maximum time step. For example, if the maximum time step is 1800 seconds, then the computation points should be shifted so that the results are at every hour and every half hour. The synchronizing time step must be at least as large as the minimum time step before it is used.

--The node field in the Output Files Block was processed for nodes on a branch using only five character positions; therefore, valid node numbers of six characters were not read properly. Currently, the node number for a node on a branch is limited to six characters.

Version 10.0  22 April 2003

--Options are available in the Run-Control Block to control the Newton solution. This is targeted at the problem of having only a few variables fail to meet convergence before the time step is automatically reduced to the minimum step size set by the user. The options are referred to as the High IQ Newton Solution (HI_IQ_NS). The general idea is to identify the internal variable that last appeared in the iteration log at convergence failure. This variable is then added to an action list: a list of variables for which the user can take special action. The user can choose to not use the full correction for that variable in the Newton solution. Instead, a partial correction can be specified, with the partial correction being determined by other options given in the Run-Control Block. In this way FEQ can respond to some computational problems on the fly rather than requiring a manual change of the input-file setup, and the frequency of the "time-step too small" message can be reduced.
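The partial-correction idea can be sketched as follows. The function names and the down/up factors are illustrative assumptions only; FEQ's internal bookkeeping is more involved:

```python
def apply_newton_corrections(x, dx, frac):
    # x: current solution values; dx: full Newton corrections;
    # frac: correction fraction per variable -- 1.0 for variables not
    # on the action list, something smaller for variables on it.
    return [xi - fi * di for xi, di, fi in zip(x, dx, frac)]

def reduce_fraction(frac, dwn=0.5, lmt=0.1):
    # When another reduced time step is attempted, cut the fraction
    # by the down factor, but never below the lower limit.
    return max(frac * dwn, lmt)

def raise_fraction(frac, up=2.0 ** 0.5):
    # When the whole system converges, raise the fraction back toward
    # 1.0; at 1.0 the variable leaves the action list.
    return min(frac * up, 1.0)
```

The factors shown (0.5 down, sqrt(2) up, limit 0.1) are assumed for illustration.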
Note: There are two variables at each node in the model: flow and depth/elevation. Even though only one variable is not converged, both variables are added to the action list by placing the internal id number for the flow variable on the list. The id number for the flow variable is always odd, and the depth/elevation variable is the next following even number. This is needed because a frequent problem near full submergence for a two-dimensional structure is that the depth variable may oscillate within its convergence tolerance but the flow variable will be outside its tolerance. Thus we link the two variables at a node: both must be converged before they can be removed from the action list, but only one being out of convergence forces them to be placed on the action list. The iteration log may show an even number as the internal id number for the out-of-convergence variable, but the message about adding the variable to the list will give an odd number that is one less than the out-of-convergence variable.

The options in the Run-Control Block are:

HI_IQ_NS=NO    This is the default value for this option.

HI_IQ_NS=LEV1  At level 1, variables stay on the list until the entire model converges. At each additional reduced time step attempted in seeking convergence, the Newton correction is reduced by the factor given in HI_IQ_NS_DWN unless the correction is at or below HI_IQ_NS_LMT. When convergence is achieved for the whole system, the correction fraction is increased by HI_IQ_NS_UP until the fraction becomes 1.0 and the variable is removed from the action list.

HI_IQ_NS=LEV2  At level 2, variables stay on the list only until that variable is converged. Thus, even if system-wide convergence is not achieved, a variable on the list will be removed if it has converged. Also, on system convergence, all variables are removed from the list and the correction fraction returns to 1.0.
HI_IQ_NS_NUMGT  The count of variables outside of the convergence tolerance must be less than or equal to this number before the HI_IQ_NS levels become active. The corrections used in the HI_IQ_NS process make sense only if the number of variables out of tolerance is small. The ability to converge is sensitive to this variable, and the user may need to try a number greater or smaller than 7. Note that this count is for variables out of convergence and does not include any that are in convergence. However, as noted above, both variables at a node will be added to the list even though only one is out of convergence.

HI_IQ_NS_DWN  is the reduction factor for the Newton correction. Default is 0.5.

HI_IQ_NS_UP  is the factor to increase the Newton correction up to the limit of 1.0. Default is sqrt(1.0/HI_IQ_NS_DWN). If the value for HI_IQ_NS_UP is < 0, then the value is redefined as (1.0/HI_IQ_NS_DWN)^(1.0/abs(HI_IQ_NS_UP)). For example, assuming HI_IQ_NS_DWN= 0.75, we get:

Input value of    Internal value of
HI_IQ_NS_UP       HI_IQ_NS_UP
--------------    -----------------
 1.25             1.25
-1.0              1.3333
-2.0              1.1547
-3.0              1.1006

This parameter is only used for LEV1. In LEV2 a variable is removed from the action list as soon as that variable satisfies the convergence tolerances.

HI_IQ_NS_LMT  is the lower limit for the fraction defining the partial-Newton correction. Default is 0.1. Users may find values as low as 0.01 useful in some cases.

Here is an example that was tested on a large model with more than 13,000 variables:

HI_IQ_NS=LEV2
HI_IQ_NS_NUMGT= 7
HI_IQ_NS_DWN = 0.5
HI_IQ_NS_UP = -1.0
HI_IQ_NS_LMT = 0.01

In this example, a small value for HI_IQ_NS_LMT was required to get convergence at one point. In this example, the default values for time-step increase and decrease were also modified because the default values moved the time step too rapidly. The values that worked with HI_IQ_NS=LEV2 were:

AUTO=0.67
MAXDT=1800.
MINDT= 1.0
LFAC= 0.8
HFAC= 1.25

--The format of the iteration log has been changed slightly. A new column listing the internal variable number has been added. This makes it possible to connect a given branch/node or free node with an internal variable number. The node column now includes a one-character field that will contain q for flow or y for depth, instead of using a sign to indicate the difference. This may prove helpful in deciding what High IQ Newton Solution option to try.

--Additional output has been added. Brief messages are given in the master-output file when variables are placed on or removed from the action list. Also, the runtime is now given when the run fails due to excessive reduction of the time step.

--Refined upgrades of cross-section table types 20-25 are now available. The new internal cross-section function tables are numbered 30-35. These include the derivatives needed to render conveyance, alpha, beta, Ma, and Mq at least piecewise cubic, with continuous first derivatives everywhere and continuous second derivatives at all but some breakpoints. There is also an extension of type 13, type 43, that also ensures at least piecewise cubic variation with continuous first derivatives everywhere and continuous second derivatives except near regions that had to be adjusted. This option may prove useful for situations where the Newton solution fails to converge because of the simple approximations for the required derivatives, which were not continuous at the breakpoints for conveyance, alpha, beta, Ma, and Mq in the cross-section function tables and for function table types 2, 13, and 14. These tables have been added to yield functional representations that are smoother in the sense of having continuity of the first derivative, and sometimes the second derivative, at the tabulated depth values (breakpoints). It may be that some of the computational problems in an unsteady-flow model originate at the discontinuities in the first derivative at breakpoints.
A review of the convergence theorems for Newton's method shows that they all depend on continuity of the first derivative near the root. If one of the roots is close to a breakpoint, a likely occurrence with a few thousand cross-section function tables and each table with 30-100 breakpoints, then the model may have convergence difficulty. The increased order of interpolation may yield more accurate values of conveyance, for example, but it is not clear that the change is of any significance. None of the effort in including increased smoothness in approximations was motivated by increased accuracy. It was done to increase the robustness of the computations.

To convert all cross-section function tables to types 30-35 for an unsteady-flow model, add UPGRADE_XSEC_TAB=YES to the Run-Control Block. All cross-section function tables found by FEQ will be automatically converted to the corresponding new type. If you use UPGRADE_XSEC_TAB=YESO, then FEQ will dump the new tables as they are created. This results in a large output file. In any case, FEQ currently runs some checks for problem areas on the new tables and outputs what it has found.

An external form for type 43 has not yet been defined. Thus, to get this type, use TY13_TO_TY43=YES in the Run-Control Block. All type 13 tables found will be converted to type 43. Tests will be run on each table and a summary printed. Some tables may need to be recomputed to reduce problems. Again, if TY13_TO_TY43=YESO is used, a complete dump of the table will be done. This results in quite large output files but can be used for testing and checking.

It should be apparent that both the storage space required and the lookup time required for these tables are appreciably larger than for their simpler relatives. Thus, not all models will benefit with respect to computation time by using these options. The user can try these options and compare with the standard default option. Results so far indicate that:

1.
For large models, those with more than 500 branches, the greater smoothness in derivatives provided by these options gives somewhat greater robustness; that is, there seem to be fewer computational failures, and there is a reduced run time because the extra time in lookup is repaid by fewer iterations being required for convergence. It is also possible to use the extrapolation options in the Run-Control Block and still maintain good performance. This alone can reduce runtimes by as much as 25 percent. Trying to use that large an extrapolation without the smoother tables either fails to converge, or the iteration count rises to the point that it takes longer to compute. However, it is not uniformly true that the smoother tables produce better convergence. Therefore, the standard tables option is always available.

2. In rough comparisons so far, the results for extreme values are essentially the same if both approaches result in a completed computation. The maximum elevations differ on the order of a few thousandths of a foot, and the flows may vary in the third or fourth digit, which is the same order of variation that occurs when using executable code compiled by different compilers.

It should be noted that many large models make heavy use of automatic argument selection for type 13 tables wherever FEQUTL supports it. This is important because it means that all 2-D tables of type 13, save those from CULVERT, are computed so that linear interpolation in the table would yield errors of at most 1 to 2 percent. If manual spacing for arguments is used, then the differences may be much larger because the piecewise cubic fit and the piecewise linear fit will differ by much larger amounts, especially at small heads. It is almost always true that manual placement of the upstream heads results in far too many large heads and far too few small heads.
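The kind of smoothness these table types provide can be illustrated with a standard cubic Hermite segment, which matches both the function values and the first derivatives at the two bounding breakpoints, so that adjacent segments join with a continuous first derivative. This is a generic textbook sketch, not FEQ's internal fitting code:

```python
def hermite_segment(x0, x1, f0, f1, d0, d1, x):
    # Cubic Hermite interpolation on [x0, x1] matching the values
    # f0, f1 and the first derivatives d0, d1 at the endpoints.
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1.0 + 2.0 * t) * (1.0 - t) ** 2
    h10 = t * (1.0 - t) ** 2
    h01 = t * t * (3.0 - 2.0 * t)
    h11 = t * t * (t - 1.0)
    return h00 * f0 + h * h10 * d0 + h01 * f1 + h * h11 * d1
```

Because neighboring segments share an endpoint value and derivative, the piecewise curve is C1 everywhere, which is the property the Newton convergence argument discussed above relies on.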
--It was found that FEQ can fail to converge before the minimum time step is reached because of a bug in the code used to synchronize time when the maximum time step is attained. Some years ago, users found it disconcerting that the time would be at some odd interval even though the maximum time step was being used for long periods. To avoid this, FEQ tests for the need to synchronize the time with the maximum time step, that is, to bring the time to a value in agreement with the maximum time step. For example, if the maximum time step were 1800 seconds, then the computation points should be shifted so that there are results at every hour and every half hour. Because the default factors for decreasing and increasing the time step were 0.5 and 2.0, respectively, the roundoff error in the process of changing the time step was small, especially since FEQ maintains the time step in double precision. However, when using the HI_IQ_NS options, the reduction factor was changed from 0.5 to 0.7, which introduced more potential roundoff error. After many runs with no problems, the synchronizing algorithm computed a time step close to zero but larger than the tolerance allowed. Thus FEQ found that the time step was less than the minimum allowed time step and stopped. In fact, the time step was being reset to the maximum when this occurred, and the computed time step needed to synchronize was just roundoff error. Therefore, this bug has been fixed by requiring that the synchronizing time step be at least as large as the minimum time step before it is used. In addition, a new message is written to the master output file when a synchronizing time step is computed. This message is needed because the output can become a bit confusing when the time step is first set to the maximum and then changed to something smaller.

--FEQ now has the ability to use a time-series table that gives the maximum time step as a function of time.
This can prove useful when models require an extensive warmup period before the computations for the period of interest begin. It is also useful for those cases when long periods of low flow must be simulated in order to drain water stored on the flood plain behind levees or behind dams when modeling a sequence of storms. This table is of type 7 and gives the time and the maximum time step at that time. Please note that the time and the time step given in the master output file give the time at the end of the time step reported. The max-time-step table needs the time at the beginning of the time step. To make this task easier, FEQ also has the ability to output a file containing the record of all time steps with the time at the start of each time step given. This file can then be used as a basis for creating the max-time-step table. To do this, make a run with a fixed time step and request that this file be created. Then edit the file to create a valid sequence of maximum time steps and rename the file. Then add the file to the list of files read in the Function Tables Block. The two new Run-Control Block options are:

MAKE_DT_TAB= /dt_tab
This requests that the time-step and time information be written to a file named dt_tab, appearing in the global home-name directory if any exists. If none exists, the file should appear in the current directory. Note that this file does NOT contain a valid type 7 table. You must use the contents of this file to create such a table.

USE_MAXDT_TAB= max_dt_2002
This requests using a table with a table id of max_dt_2002 to define the maximum time step. The maximum time step is reset before the computations are started.

Again, it takes some practice to make good use of this feature. However, by carefully editing the time-step pattern from a successful run, reductions in runtime on the order of 25 percent can be attained. However, again, at least with large models, computational failure can also be created.
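The lookup of a maximum time step from such a table can be sketched as follows, assuming the table behaves as a stepwise function of time. The actual interpolation rule for a type 7 table is defined by FEQ, and the times and steps shown are made-up values, so this is illustrative only:

```python
import bisect

# Hypothetical (time, maximum time step) pairs, both in seconds; the
# tabulated time is the time at the START of a step, as noted above.
TIMES = [0.0, 3600.0, 7200.0]
MAX_DT = [1800.0, 60.0, 1800.0]

def max_time_step(t):
    # Return the maximum time step in force at time t, treating the
    # table as constant between tabulated times.
    i = bisect.bisect_right(TIMES, t) - 1
    return MAX_DT[max(i, 0)]
```

In this made-up table, a large step is allowed during warmup, a small step over the second hour, and a large step again afterward.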
--Changed meaning of the flow-factor adjustment table in the side-weir instruction, code=14, when it appears with a crest-level adjustment. When both optional tables are present, FEQ does the following:

1. Computes the flow at the head at the current crest level = Q1.
2. Computes the flow at the head at the original fixed crest level = Q2.
3. Finds the current flow factor, fac, from the flow-factor table.
4. Computes the flow as: Q = fac*Q1 + (1 - fac)*Q2

This approximates the combination of flow over the portion of the flow surface that is at its original level and the flow over the portion of the flow surface that is at its new level. This computation assumes that the crest is approximately horizontal and that the interaction between the two flows is small. The factor is estimated by the ratio of the length of weir that is being shifted by erosion of the surface to the total length of the overflow weir segment. Previously, this time-series table consisted of factor values determined by the user. See the side-weir description under Version 9.69 in http://il.water.usgs.gov/proj/feq/software/release_feq998.txt.

Version 10.01  17 May 2003

--Increased the title block to 201 lines.

--Skip blank lines at the start of the master-input file.

Version 10.02  19 June 2003

--Fixed bug in reporting the interpolation for high-water marks. The stationing factor was not applied to the output when interpolating. Thus, small errors on the order of a few tenths of a foot were made. This is relevant only for users who utilize a file hwmark.loc containing high-water marks for their model.

--Added code to the GENSCN output to compensate for a bug in GENSCN. GENSCN is unable to plot results at a free node when that node is not a reservoir node. The problem appears to be that GENSCN is seeking a non-zero index to a function table for every node. However, no such table is needed, nor does one exist, for a free node on a dummy branch or for the inflow to a level-pool reservoir.
However, GENSCN will plot the results if a valid ftab index is given for these free nodes. Therefore, this code places a valid ftab index for all free nodes not having any such table. If, under some circumstances, GENSCN should try to look up some values in this function table, we will have problems. So far this has not happened.

Version 10.03  21 July 2003

--Fixed bug in code 4 type 6 output. This feature was coded for one particular model needed by the FEQ developer.

Version 10.04  1 December 2003

--Changed the console output for FEQ to try to get the same behavior in MSW and Linux. The new output prints a horizontal series of periods (dots) for each consecutive time step that is unchanged from the earlier one. A new line of information is printed whenever the time step changes. In GNU/Linux, the whole set of dots is printed at one time.

Version 10.05  22 April 2004

--Changed computation of the minimum time step to exclude time steps that were set in order to synchronize time with even increments of one hour. Sometimes that adjustment produces a time step that is small, and then the minimum time step does not reflect the level of computational difficulty found during the run.

Version 10.06  6 May 2004

--Changed format for Time of max Z, Time of max Q, and Time of min Q to include leading zeros in the year, month, and day fields.

Version 10.10  22 May 2004

--Changed format of debug output of the equations in the Network Matrix to allow 5 digits for the variable number. This was needed to do debugging on a model with more than 14,000 equations.

--Corrected several errors in the handling of eddy losses (KA and KD being the parameters) in branches:

1. In some cases of strong reverse flow and larger values of either KA or KD, an error in the partial derivatives of the eddy-loss term caused either slow convergence or failure of convergence. This did not occur if all the flows in a branch were positive.
This might be part of the reason for slow convergence observed in some cases for branches subject to tidal influence.

2. In some cases of reverse flow, another error in handling the eddy losses could cause the eddy losses to be ignored when they should have been included. This probably only took place when there was reverse flow and the time step converged in only one iteration. This would normally be during periods of essentially constant flow everywhere or at most slowly varying flow. Again, the potential effect is very small and was limited to reverse flows in a branch. A very large reverse flow might have caused the model to fail to converge.

--Corrected an error in the setting of a partial derivative in tables of type 14 when the flow moved from zero to non-zero. This could cause convergence failures when a type 14 table was used for a location where the flow was zero most of the time and became non-zero only during major floods.

--Added additional code to the lookup of tables of type 14 to catch inconsistent heads and flows. Detailed output was added to FEQ to diagnose severe problems for flow in a dry bridge, that is, a bridge through a road fill in the flood plain that has zero flow except during floods. The flows at such a bridge are subject to roundoff and convergence-tolerance noise when the flows are negligible. Thus, FEQ now forces the flow to be zero in the lookup process if the head physically upstream of the claimed non-zero flow is less than zero. Obviously the flow is wrong and is the result of computational roundoff error. This noise sometimes causes the Newton iterations to fail repeatedly until the time-step limit is reached.

--Added output of a summary of the number of equations produced by each of the codes used in the Network Matrix Control input. This may help the user find patterns in how many equations come from various sources. The summary also indicates which of the equations are linear and which are non-linear.
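The zero-flow consistency check described above for type 14 tables can be sketched as follows. The names are assumptions for illustration; this is not the actual lookup code:

```python
def checked_flow(flow, upstream_head):
    # If the table lookup claims a non-zero flow but the head on the
    # physically upstream side is below the datum, the flow is
    # computational noise, so force it to zero.
    if flow != 0.0 and upstream_head < 0.0:
        return 0.0
    return flow
```

A small spurious flow paired with a negative upstream head is clamped to zero, while a flow supported by a positive upstream head passes through unchanged.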
Version 10.11  29 June 2004

--Added code so that FEQ can determine the operating system under which it is running and then make sure that the directory and subdirectory divider in file names is consistent with that OS. Thus it is possible to use either the backslash or the forward slash in a file name, or a mix of the two! Lahey Fortran allows this, but some other compilers may not. Consequently, FEQ now makes the conversion so that an input can be transferred between the two operating systems without requiring changes in the file names.

--Added home-name addition to the macro file name. So far the macro file name has not been used except in limited test cases.

Version 10.12  16 July 2004

--The TAB-defined head datum for code 5 type 6 and code 14 can be modified by specifying an increment/decrement. This feature is quite useful when there are many flow tables with the head datum defined by looking it up from the table. For this case, only the special string TAB or tab appears in the instruction in FEQ. Previously, to change the datum it was necessary to find the datum from that table, apply the shift, and then change the TAB or tab to the correct numerical value. This new feature makes it possible to append the change to TAB or tab with no blanks, and FEQ then computes the shift. For example, to shift the flow-table datum up by 0.25, use:

TAB+0.25

A downward shift is then:

TAB-0.25

The user is, of course, as always, responsible for making sure that such a shift makes sense. FEQ does no checking on the size of the shift.

Version 10.15  25 August 2004

--Expanded the file-name length for all but command-line arguments to 128 characters. The home name can be up to 128 characters as well, so one could end up with a 256-character file name. The user should be aware of the limitations of their operating system when utilizing a very long file name.
This change required many adjustments in many locations so that the increased lengths could be read from the input files and written to the output files. Tests using file names with lengths varying from 80+ to about 124 characters have been run for:

Special Output file name
Output file name for time series
GENSCN output base file name
Function Tables Block with a home name of about 90 characters
Function Tables Block with a file name of about 90 characters

They all were read and output properly. Two existing models without longer file names also ran properly. One of them used HECDSS and the other did not.

Version 10.16  20 October 2004

--Increased node limit in GENSCN to 20,000 nodes. Lahey Fortran compilers appear to support the resulting record lengths.

Version 10.17  5 November 2004

--Increased special output count to 250.

Version 10.20  13 January 2005

--Point time-series files (PTSF's) are now read and written as direct-access unformatted. This change allows these files to move among various operating systems and compiled executables. This has been tested on five different compilers (two under Linux and three under MS Windows). PTSF's created with earlier versions of FEQ must be converted using the PTSFUTL utility, or the models that generated them must be rerun with FEQ version 10.20 or later.

--Diffuse time-series files (DTSF's) are now read by FEQ as direct-access unformatted. This change was made to allow these files to move among the various operating systems and compilers, similarly to the PTSF's. Old files must be converted using the version of TSFUTL included in the FEQ 10.60 release package. HSPF 12.0 allows direct output to the new unformatted DTSF using the PLTGEN routine.

--In the process of making the changes for DTSF handling, it was noticed that the variables used in computing the average runoff in an FEQ time step were declared single precision. All of the variables leading up to that point were double precision.
This appeared to be a long-term oversight. Therefore, the declarations for these variables have been changed to double precision also. Testing on an old version of a DuPage County Trib4 model showed that the changes in flows, if any, were in the last one or two digits printed. The time of extreme values sometimes shifted by an FEQ time step or two as well. All of these changes appear to be inconsequential. --The response of FEQ to a selector variable with an unknown value has been changed from generating an error message and stopping to setting the unknown value to false and continuing after issuing a warning message. This may make using selector variables more natural. Previously, every selector variable in the input had to have a value explicitly set. This required setting an ever-growing number of selector variables to false and changing existing scenario blocks whenever new scenarios were added. With this change, only the selector variables that must be true need be set in any scenario-defining block. All others not set will be given the value false when they are found.

Version 10.21 14 February 2005 --Changed detection tolerance for side weirs with non-zero flow in the initial conditions. This was designed to permit these flow points to become active. Ideally the flow over any side weir should be zero in the initial conditions, but that is not always possible. --The special output block will now write a descriptor file, with a standard extension of *.spi, to describe the special output file itself to feqplot, the GUI result viewer for time series. This allows better control of the working of feqplot while leaving the format of the special output file itself unchanged.

Version 10.22 7 July 2005 --Changed output units for the QUAD option in the Output Files Block to use acre-feet when GRAV > 15.0 and 1,000 cubic meters otherwise.

Version 10.23 2 August 2005 --Home name for output files was truncated to its actual length for output.
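The selector-variable behavior introduced in version 10.20 can be sketched as follows. This is an illustrative sketch in Python, not FEQ source; the function name and dictionary representation are assumptions made for the example.

```python
# Sketch of the version 10.20 rule: a selector variable with no explicit
# value no longer stops the run with an error; it is reported with a
# warning and taken to be false.  (Illustrative code, not FEQ source.)
def selector_value(name, selectors):
    """Return the value of a selector variable, defaulting unknowns to False."""
    if name not in selectors:
        print(f'WARNING: selector variable {name} not set; assuming false')
        selectors[name] = False      # remember the assumed value
    return selectors[name]

selectors = {'Q1990': True}          # only the true selectors need be set
print(selector_value('Q1990', selectors))   # True
print(selector_value('Q2002', selectors))   # warning, then False
```

The practical effect, as described above, is that a scenario-defining block needs to set only the selectors that must be true.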
--Added the "top of stack" line when processing selector blocks in order to help find problems with matching IF with ELSE, ELSEIF, and ENDIF. In some cases the output file appears complete, especially if an ENDIF is missing, and gives no clue as to where the problem might be. The top-of-stack line will usually be the IF statement that has not been matched. If there are multiple IF statements with the same selector variable, a common occurrence, it proves helpful to label each of these identical IF statements with a unique label given after a single quote, for example:

  IF Q1990 '1
  ENDIF
  IF Q1990 '2
  ENDIF

Then, if an ENDIF is missing and FEQ reaches the end of file searching for it, the error message will report the offending IF statement, and the unique label will allow finding that statement without an extensive and sometimes error-prone search.

Version 10.24 29 August 2005 --Added the ability to set a selector variable outside of the SET SELECTORS block. This is done using the keyword SET. This feature was added to make the selection process easier to define. For example:

  IF Q1990
    SET lowflow = false
  ELSEIF Q1995
    SET lowflow = false
  ELSEIF Q2002
    SET lowflow = true
  ELSE
    SET lowflow = true
  ENDIF
  IF lowflow
    Select stuff that is only to be used with the smaller flood events
  ELSE
    Select stuff that is only to be used with the larger flood events
  ENDIF

A selector variable may be defined or redefined by this process. Do not use SET within a SET SELECTORS block.

Version 10.25 27 September 2005 --Corrected an error in time reporting when using a DTSF. The reported time offset depended on the initial hour of each event. If the starting hour was zero, then the offset was zero. Otherwise the offset depended on the starting hour. If the starting hour was 1, then the offset was 1 - 1/24.0 hours, which is slightly less than one hour. Most DTSF's of which I am aware have events which start at hour 1 or sometimes hour zero.
Thus the maximum offset in reported time is one hour or less.

Version 10.30 30 September 2005 --Added support for creating a dual-source forced boundary condition using a new block of input. This block of input creates a PTSF from two data sources given by the user. The sources may be either a PTSF or a time-series table (TSTAB). The purpose of dual sources is to make real-time forecasting using FEQ somewhat smoother. An example block follows:

  DUAL SOURCE
  DEFINE forecast._doc_
    FILE= /forecast/sbrook05/flowhist/busse_flow.ptsf
    TRAN_START=2005/09/01: 0.d0
    TABLE = busse_Q_frm_tab
  END forecast._doc_
  END DUAL SOURCE

This block, if it appears, should be just before the INPUT-FILE-SPECIFICATION block. The name following the DEFINE is the name of the file that will be created by FEQ. The strange extension for this file requests that FEQ delete the file when it is closed (delete on close--doc). The PTSF given in FILE contains the first source in this example. The data in this PTSF is used until the date/time given in TRAN_START. If a time point exists in the file, busse_flow.ptsf, at exactly this time, it will appear in forecast._doc_. The first time point taken from busse_Q_frm_tab will be the first one following the date/time given in TRAN_START. If TRAN_START is not present, then the first source is used until its last time point. Any combination of a PTSF and a TSTAB can be used. The file created, forecast._doc_, is then referenced in the INPUT-FILE-SPECIFICATION block just like any other PTSF. --To better support dual-source creation, the internal storage pattern for time-series tables has been changed to full double precision. In earlier versions, time-series tables of types 7, 8, and 9 existed only until input, at which time they were converted internally to types 2, 3, and 4. This is no longer true. There is now a special internal representation for time-series tables that represents long time spans accurately.
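The DUAL SOURCE splice rule described above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not FEQ source; times are represented as plain numbers rather than date/time strings.

```python
# Sketch of the DUAL SOURCE rule: points from the first source are kept
# up to and including TRAN_START, then points from the second source
# strictly after TRAN_START follow.  (Illustrative code, not FEQ source.)
def splice(source1, source2, tran_start=None):
    """source1, source2: lists of (time, value) pairs in ascending time order."""
    if tran_start is None:                 # no TRAN_START given:
        tran_start = source1[-1][0]        # use source 1 to its last point
    head = [(t, v) for t, v in source1 if t <= tran_start]
    tail = [(t, v) for t, v in source2 if t > tran_start]
    return head + tail

ptsf  = [(0.0, 10.0), (1.0, 12.0), (2.0, 15.0)]   # first source
tstab = [(1.5, 11.0), (2.5, 14.0), (3.5, 13.0)]   # second source
print(splice(ptsf, tstab, tran_start=1.0))
# [(0.0, 10.0), (1.0, 12.0), (1.5, 11.0), (2.5, 14.0), (3.5, 13.0)]
```

Note how a point in the first source that falls exactly on TRAN_START is retained, matching the description above.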
Extensive comparisons have been made between old and new versions on a small collection of models. The effect is small, with most changes being in the 4th significant digit. These changes come about from at least three sources:

1. The time value is now stored in double precision, whereas in earlier versions it was stored in single precision.

2. The values for the flow or elevation are now stored in double precision, yielding a small change in truncation error. However, values returned from the table lookup are still given in single precision.

3. The above two effects can cause either a change in the number of iterations at one or more time steps or even a change in the number and size of the time steps. This comes about because the computer is a "slave to precision". A minute change can sometimes cause a change in the number of iterations, and this can potentially change one or more time-step sizes.

The effect in item 3 is the primary one, and it reflects the inherent uncertainty in using an iterative process to find a solution. --A new Run-Control-Block option has been added to enable forecasting with a DTSF present. The standard behavior when a DTSF is present is to force the start time to match the start time of each event in turn. That was the original purpose of having a series of events placed into a DTSF. To avoid this behavior, include the option FRCST_WTH_DTSF= YES in the Run Control Block. When this option is present, the start time is used to locate the event in the DTSF. The event selected for simulation is the event that contains the start time. Only one event will be run because only one event can contain the start time. This follows because events must be non-overlapping and in ascending order in a DTSF. --The GETIC and PUTIC files have been updated to be written and read using direct-access statements. This means that old forms of these files cannot be used with this version.
The files have been expanded to support additional control structures as well, so old files are obsolete for that reason too. Note that this version requires that all PTSF and DTSF files be in direct-access format also. Any that are not in that format can be updated using a utility program for that purpose. --The total count of iterations to a given time step has been added to the output to enable comparing runs more closely for differences. If the total iteration counts differ, then a possible source of any differences found is in the number of iterations.

Version 10.31 10 November 2005 --Changed formats used in processing heading-dependent input. Some floating-point input fields had been read using list-directed input, that is, with no format given. These have been changed to use f20.0 to allow more comprehensive error checking. With list-directed input, the string '. 662.0' could have been read as zero by at least one compiler's executable of FEQ.

Version 10.32 8 December 2005 --Changed processing of selected input fields to detect spaces interior to the field that result from a character falling outside the assigned columns in a heading-dependent input line. Any spaces within a number are invalid and will produce an error message. There should be no spaces between a leading sign and the number to which it applies. --Fixed a bug in the option to automatically generate the master-output file name from the root of the master-input file name. FEQ failed to add the .out extension to the root name from the master-input file name. --The last extension on the master-output file name is now applied to all output files so that they are uniquely named and not overwritten. This happens automatically, so if you do not want this action, no extension should be used on the master-output file name. The extension from the master-output file is not placed at the end of an output file name unless that name has no extension.
If an extension is already in place, the added extension is placed just before the final extension in the name. This is done because some of the extensions are keyed to software that processes names based on the final extension. The extension placed on the master-output file defines a sub-scenario, so to speak, perhaps involving variations not included in the other components of the scenario; it may also prove simpler to create variations of a scenario without defining a whole new subdirectory for the results. --Home-name processing has been changed. The home name defined in the set-selectors block is now used only for file output. This home name becomes the global home name for output. All other HOME values will refer only to input of files, with one exception: the GETIC and PUTIC files are both treated as being in the directory given by the global home name for output. GETIC refers to a file used for input of the initial conditions, but it is given the output home name because the only way that file can be created is by output from a previous run of FEQ. In that run it will be placed in the directory given by the home name defined in the set-selectors block. It does not make sense to have to copy the file to some other location to use it for input on subsequent runs. Once set, the global home name for output cannot be changed by any other home-name input. The home name given in the run-control block is the global home name for input. Each block of input to FEQ that contains files will also have a local home-name value that can be set by the user. The rule for deciding which home name to use is simple: if the local home name is not given, then the global home name is used; otherwise, the local home name is used in that block.

Version 10.33 6 March 2006 --Changes to home names caused a problem with finding the special file, hwmark.loc. Corrected home-name addition to add the global home name for output to hwmark.loc.
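The extension-insertion rule introduced in version 10.32 can be sketched as follows. This is an illustrative Python sketch, not FEQ source; the function name is an assumption made for the example.

```python
# Sketch of the version 10.32 naming rule: the last extension on the
# master-output file name is inserted just before an existing final
# extension of each output file name, or appended when the name has no
# extension.  (Illustrative code, not FEQ source.)
def add_run_extension(name, run_ext):
    base, dot, final = name.rpartition('.')
    if dot and base:                       # an extension is already in place
        return f'{base}.{run_ext}.{final}' # insert before the final extension
    return f'{name}.{run_ext}'             # no extension: append at the end

print(add_run_extension('special.tsd', 'new'))   # special.new.tsd
print(add_run_extension('ftab_index', 'new'))    # ftab_index.new
```

Keeping the final extension in place preserves the behavior of software that keys on it, as noted above.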
--The information file for the special-output file got two instances of the master-output file extension when only one should have been added.

Version 10.34 17 March 2006 --Removed spaces from blank lines in the schematic script file. These caused a problem with Autocad applications.

Version 10.35 28 March 2006 --Made final changes to GENSCN for direct-access file usage for both the TSD and the FTF file. Users should also request the new format for the FEO file once GENSCN is updated. This has been made the default. Thus if you wish to retain the old format for the FEO file, be sure to add NEW_GENSCN_FE0=NO in the Run-Control Block.

Version 10.4 11 December 2006 --Changed the nature of input for the time-series table giving the structure-capacity fraction as a function of time. This is specified in NC(7). A time-series table id (which includes a table number as a special case) must be given with a minus sign prefixed to it. This makes that input option consistent with, for example, Code 5, Type 7. A positive integer in this field is taken to be an Operation Control Block number for setting the capacity fraction. An additional input is defined at NC(9), giving the name for the control structure so that output on the capacity fraction and flow state can appear in the Special Output Block. A simple example of a valid instruction is:

  5 6 F8983 F1 F1 Seawall_S_boxes Seawall_S_boxes 1 * gate ; TAB

where "gate" gives the name of the structure for reference in the Special Output Block. Notice that a place holder, the asterisk, is required because there is an input value, NC(8), between NC(7) and NC(9) that we wish to skip so that it takes its default value of blank.

Version 10.42 19 December 2006 --Added output of a version/run date-time string to the *.spi file for special output. This is a comment, and any program reading this file must be able to process comment lines as defined for feq/fequtl.
This information will permit tracing the parentage of files produced by FEQ. --Added a record to the header for point time-series files giving the FEQ version number, the version date, and the date/time of the run that created the file. --Added output of the version/run date-time information from input point time-series files. --FEQUTL has also been modified so that each function-table file created by it will have two echoing comment lines recording that the file was created by FEQUTL and giving the version, the version date, and the date and time of the run. These values will appear in the output from FEQ. It is also possible to put echoing comment lines at some point in every *.mtb file so that the name of the creator, or modifier, plus dates can be listed as well. This will make it possible for the master output file to document details such as the sources of information for each file that is used in the input. --Found that HI_IQ_NS_NUMGT was not being output to the master output file. Added it.

Version 10.43 22 January 2007 --Changed precision of the start and end JTIME values in the special-output information file and the special output file to 7 decimals. Thus the resolution of the output is now 5x10^-8 day or so.

Version 10.45 8 March 2007 --Added additional checking of the Network-Matrix input to catch free nodes that are unattached. The problem often arises with nodes on a dummy branch, when the user forgets to add the dummy branch, which results in the message that the number of nodes is odd. It could be true that the number of exterior nodes is even but the nodes are not properly related. Therefore, the check is done before the check for the number of exterior nodes being even. Hopefully this will make finding these model-input errors easier.

Version 10.46 29 March 2007 --Corrected an error in handling the transition from zero flow to non-zero flow in a structure using a table of type 14.
There were conditions in which the flow would have the wrong sign in the structure and the computations would fail when the time step was automatically reduced below the minimum time step. In testing, this correction has increased the robustness of models in which a table of type 14 is used to represent flows that rise from zero or fall to zero.

Version 10.47 7 June 2007 --Changed the detention-storage option from linear iteration to Newton's method for solution. There are cases of time-step and flow-versus-storage variation for which linear iteration will not converge even though its starting value is close to the root. Newton's method is not subject to that limitation and converges more quickly as well. Because Newton's method is much more efficient, the convergence tolerance was reduced by a factor of 100, and changes were made so that every correction from Newton's method was reflected in the final results. These changes reduced the tributary-area relative-balance value from the range of 10^-4 to 10^-7 in the test case used. Therefore, small changes in output from models run with earlier versions may occur. These changes are a correction of the results, with added solution robustness should a truly extreme runoff event be simulated. Also, the chance of non-convergence in solving for the outflow from a detention reservoir has been reduced to essentially zero.

Version 10.50 25 October 2007 --FEQ now has the option to output an index for all the function tables used in a given run. This is requested by adding MAKE_TAB_INDEX=YES to the Run-Control Block. FEQ will then create a file named ftab_index containing the alphabetical list of tables by tabid together with the table type and the fully qualified file name where the table was found. Currently the file name is built in, but you can always add an extension to the output file name on the command line that invokes FEQ, and that extension will be appended to ftab_index.
For example,

  feq feqin out.new

will result in the table index being stored in ftab_index.new. Note that the alphabetical list depends on case: all-lowercase names appear after those that begin with an upper-case letter. --Added handling of the new values used to track datums, unit systems, and the basis for function tables. Optional information in the header block of function tables is now processed and in some cases checked. The information items are (all are descriptions of at most 8 characters):

  HGRID -- name for the horizontal grid being used for eastings and northings. For example, SPCS83, for the state-plane coordinate system defined in 1983 using NAD83.
  ZONE -- the zone designation for the horizontal grid, for example 4601 for the north zone in the state of Washington.
  VDATUM -- the vertical datum, such as NAVD88, NGVD29, NA, ...
  UNITSYS -- unit system: ENGLISH, METRIC, NA, ...
  BASIS -- describes the era, date, or other item that denotes the principal source of the data in the function table.

The following are floating-point numbers:

  EASTING -- the easting or x value for the coordinate system
  NORTHING -- the northing or y value for the coordinate system

Here is an example from a cross-section table:

  TABID= RDXSEW_S.93
  TYPE= -25
  ZONE=4601 HGRID=SPCS83 VDATUM=NAVD88 UNITSYS=ENGLISH BASIS=Pre_05
  STATION= 3.63734E+01 GISID=RDXSEW_S.93
  EASTING= 1310129.680 NORTHING= 662863.810
  ELEVATION= 2.11226E+02 EXT=-99.900000 FAC=1.000 SLOT= 0.0000

Below is an example for a two-dimensional table generated by the CHANRAT command:

  TABID= RDXSZZ_ML
  TYPE= -13 HDATUM= 91.370 CHANRAT zrhufd= 0.0000
  ZONE=4601 HGRID=SPCS83 VDATUM=NAVD88 UNITSYS=ENGLISH BASIS=Pre_05
  EASTING= 1275853.412 NORTHING= 702952.255
  LABEL=Flow from MM at RDXSZZ_M.93 to L1
  NHUP= 31 NPFD= 7

New values have been added to the Run-Control Block to specify the desired values for these new items in the header block of function tables.
Below is an example from a current model:

  G_ZONE= 4601
  G_HGRID= SPCS83
  G_VDATUM= NAVD88
  G_UNITSYS= ENGLISH
  G_BASIS= Pre_05

Note that all of these must be given, if any are. Note that the values given are prescriptive with respect to vertical datum and unit system; that is, the vertical datum and unit system for all function tables for which these values have a meaning MUST agree with the values given in the Run-Control Block. There are cases of function tables for which the vertical datum or the unit system may not apply. The special value "NA" is then used to indicate "not applicable". As an example, a user can specify an adjustment factor as a function of time for a flow at a boundary. Below is an example of such a table:

  TABID= Dogleg_out_fac
  TYPE= -7
  ZONE=4601 HGRID=SPCS83 VDATUM=NA UNITSYS=NA BASIS=Pre_05
  EASTING= -33d6 NORTHING= -33d6
  REFL=0.0 FAC=1.0
  YEAR MN DY HOUR  Flow FAC Adj of Sott Creek connection to Nksk
  1990 01 01  0.0  1.50
  3205 12 31 24.0  1.50
  2001 12 30

In this case the table has a location. Therefore zone, hgrid, easting, and northing are meaningful. However, there is no need of a vertical datum because no elevations are involved. Furthermore, the function value has no units, so the unit system is not involved. Also note the special values for easting and northing, -33d6. This denotes that the location of the function table has not been given yet. The user should be aware that meters, US survey feet, and international feet are all used in the State Plane Coordinate System. The US survey foot differs from the international foot by about 2 parts in a million. The international foot is exactly 0.3048 meter and the US survey foot is 1200/3937 meter. To further add to possible confusion, the official unit for some states is the meter, but surveys are still done in feet--international feet or perhaps survey feet.
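The two foot definitions just mentioned can be compared directly. This short check uses only the exact definitions quoted above; the coordinate magnitude is an arbitrary illustrative value.

```python
# Compare the international foot and the US survey foot using the exact
# definitions given above.
intl_ft   = 0.3048           # international foot, exact, in meters
survey_ft = 1200.0 / 3937.0  # US survey foot, exact ratio, in meters

rel_diff = (survey_ft - intl_ft) / intl_ft
print(rel_diff)              # about 2e-6, i.e. 2 parts in a million

# Over a state-plane coordinate of, say, 1,300,000 ft the discrepancy is:
print(1_300_000 * rel_diff)  # about 2.6 ft
```

This is why the distinction matters for large plane coordinates but is negligible for the elevations handled by FEQ.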
For precise work the difference becomes important, but for most applications of FEQ the difference between survey feet and international feet is too small to matter. The units for elevation may differ from the units for horizontal location. Therefore, FEQ assumes that the units for horizontal location are implicit in the zone/hgrid specification. In practice this may be specific to a given model and may not agree with what others in the same zone do--set a standard and use it uniformly for a given model. Here is another example of a header for a function table:

  TABID=weirc
  TYPE= -2
  ZONE=4601 HGRID=SPCS83 VDATUM=NA UNITSYS=ENGLISH BASIS=Pre_05
  EASTING= -33d6 NORTHING= -33d6
  REFL= 0.0 FAC= 1.0
   Head WeirC-
    0.0   3.0
  200.0   3.0
   -1.0

In this case the units for head are feet, but they are relative to the datum for head and not the datum for elevation. Consequently the vertical datum is not applicable but the unit system is. Again, the location of the table may be meaningful but has not been given. Note that 33d6 in feet is larger than any plane coordinate likely to be found. Furthermore, many plane coordinate systems are devised so that no valid coordinate is ever < 0. Hence the selection of -33d6 as the indicator of an unknown location. FEQ prints a summary of the status of each function table in a given run if MAKE_TAB_INDEX=YES exists in the Run-Control Block and if G_ZONE has a value other than the default value of "NONE". This summary is printed in the master output file and is headed by the line: "List of Function-table status found in FEQ". As the table is constructed, FEQ also supplies the easting and northing for any table lacking those values and for which FEQ is able to deduce a reasonable value. The updated values of easting and northing are shown with a trailing 'u' to denote having been updated.
--FEQ is able to apply a constant shift value to output elevations so that the output is relative to a vertical datum that differs from the datum used in the computations. This shift value is given as DZ_FOR_OUTPUT=-3.926 in the Run-Control Block. In this example a shift is made between computations based on NAVD88 and results reported in NGVD29 for the model of the Nooksack River in northwest Washington. A constant shift is accurate to within about 0.05 foot at every point in the area modeled. By default the shift is 0.0. With the exception of DZ_FOR_OUTPUT, FEQ does not make any conversions between datums. Such conversions can be very complex, and software is generally available to make those conversions to full precision using methods developed by the NGS and others. From the point of view of FEQ/FEQUTL, these values are descriptive and under user control. Currently only the vertical datum and the unit system are checked for consistency.

Version 10.51 30 November 2007 --Many minor fixes to all of the changes required for the support of an explicit vertical datum. --Added a message for code 4 type 3 when the slope is taken from the previous time step's results and the slope goes to zero or negative. Computations must stop at that point.

Version 10.52 27 February 2008 --Increased the number of PTSF's for output to 1200. This is no longer limited by counting the number of files; instead, the number of output locations involved is counted. Using the ADD, SUB, OUT, OUTA, and QUAD options can often involve far more than 120 locations from which flows are taken. The new limit is based on 10 locations per output and 120 output locations.

Version 10.53 3 September 2008 --Corrected errors in the code that checks the status of the vertical datum and unit system for each function table in a run. There were cases of reporting of false errors as well as cases in which errors could slip by without being reported.
Version 10.60 6 October 2008 --Added output of the Subversion version number and URL for the source-code files. This defines the files that created the executable file that produced the output. --Added output of the Subversion version number for the global and local home names. If no home names are used or the working files are not under Subversion, nothing is printed. --Information on the versions is also placed in other output files: special output, point time-series files, and the files for Genscn. This will help in tracing the state of the working files for each output file.

Version 10.61 15 October 2008 --At some point between versions 10.35 and 10.60, the treatment of REFL in tables of types 2, 3, 4, 7, 8, and 9 was changed. Prior to this change, this value was not used by FEQ or FEQUTL. This variable has two possible roles: (1) it could be a shift to be applied to all values of the function, or (2) it could be a reference level that would be used by the software at some point, like a reference level for head. NOTE: Currently no use is made of a reference level for head. If the value is a shift value, prefix it with a lower-case letter 's'; otherwise prefix it with an 'h', or just leave it 0.0 if nothing is to be used. The default treatment for non-zero values of REFL having neither an 's' nor an 'h' prefix proved to be flawed. Many users of the model have placed non-zero values in this field, either as a reminder to themselves of some reference level, or in error, thinking that FEQ/FEQUTL requires a non-zero value. The default treatment had a bug in it so that the function values were shifted if REFL was non-zero. The default behavior is to treat a non-zero value without any prefixing letter as if it had a prefix of 'h'. This means that the value is stored in the function table for future reference but that the function values are unaffected.
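The REFL prefix convention can be sketched as follows. This is an illustrative parser in Python, not FEQUTL source; the function name and return convention are assumptions made for the example.

```python
# Sketch of the REFL convention from version 10.61: an 's' prefix marks
# a shift applied to the function values, an 'h' prefix marks a
# stored-but-unused reference level, and a bare non-zero number is
# treated as if it had the 'h' prefix, with a warning.
# (Illustrative code, not FEQUTL source.)
def interpret_refl(field):
    field = field.strip()
    if field.startswith('s'):
        return 'shift', float(field[1:])      # applied to the function values
    if field.startswith('h'):
        return 'reference', float(field[1:])  # stored for future reference
    value = float(field)
    if value != 0.0:
        print('WARNING: non-zero REFL without s/h prefix; treated as h')
    return 'reference', value

print(interpret_refl('s1.5'))    # ('shift', 1.5)
print(interpret_refl('h641.2'))  # ('reference', 641.2)
print(interpret_refl('0.0'))     # ('reference', 0.0)
```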
A warning message is printed for every table found with a non-zero REFL and no prefixing letter. The output for these tables was also changed to reflect how the REFL is treated.

Descriptions of changes made to FEQUTL
--------------------------------------

Version 4.12 February 23, 1995: Corrected error in format statement for the roadway momentum-flux factor, RMFFAC.

Version 4.14 March 20, 1995: Extracted subprogram units that are common to both FEQ and FEQUTL so that only one copy exists.

Version 4.15 April 21, 1995: Corrected problem with varying lengths of character strings for file names.

Version 4.16 April 28, 1995: Added support of flap-gate losses for submerged flow from a culvert. Needs further refinement for losses when they are increased to account for heavier gates than those used to develop the formula.

Version 4.17 May 10, 1995: Blank lines are now echoed to the output.

Version 4.18 May 16, 1995: Corrected HEC2X so that values left blank on the NC card remained unchanged instead of being set to some large positive value to signal a missing value of Manning's n.

Version 4.30 May 29, 1995: Added support for culverts with risers and for flow through orifices for SFWMD.

Version 4.31 July 3, 1995: Changed manner of supporting flap-gate losses. The previous method created abrupt changes in the submerged flows that did not make physical sense. This abruptness resulted from applying the losses to full-conduit flow only. Moved the losses to the departure reach and applied them to all submerged flows. The part-full conduit losses were estimated using a loss coefficient and a velocity head as if the conduit were flowing full. The head loss was applied to the flow area in the conduit in order to estimate a force in the simple momentum balance used in the departure reach.

Version 4.32 July 11, 1995: Corrected error in controlling input of multiple conduits in MULCON and MULPIPE.
The error caused misreading of pipes when the number of pipes was an integer multiple of 6. Corrected error in computing normal depth in culvert barrels that caused an attempt to find the square root of a negative number when the barrel invert had an adverse slope and the barrel was non-prismatic.

Version 4.33 July 21, 1995: Corrected errors, detected by a Fortran 90 compiler, in declarations in subroutine SCNPRO. Apparently the errors had not caused problems to date.

Version 4.35 August 14, 1995: Corrected problem with SFAC in the CULVERT command when the unit of length for the culvert pipe required an SFAC different from 1.0. This caused the routine that distributes the nodes along the barrel to fail. Corrected error in determining coefficients of discharge. Bell-mouthed or beveled concrete pipe, denoted by culvert class RCPTG, was corrected for projecting entrances when it should not have been. Also, the coefficient of discharge for RCPTG for types 4 and 6 was incorrectly given the value of 0.95. The coefficient should be looked up in table 5 in the USGS TWRI report. The rounding/beveling value is then used based on the nature of the bell-mouthed or beveled end.

Version 4.36 September 26, 1995: Corrected problem in WSPROT14 caused by WSPRO outputting asterisks for Froude number and water-surface elevation for flow over the road when there is no flow over the road. Also made the code slightly more general to allow for the user placing the cross-section labels off register with the field.

Version 4.37 November 14, 1995: Corrected problem caused by not turning off computation of sinuosity elements before a call to FEQX in the main program. This would lead to spurious errors if an FEQX command followed a CHANNEL command.

Version 4.4 January 23, 1996: CHANRAT was found to give a result for free flow that was invalid.
A problem in the SECANT subroutine used to solve the non-linear equation for the critical depth at the end of the prismatic channel would sometimes declare convergence when the residual was still much too large. The result was that the free flow would be too large, and there would be an abrupt decrease in the free-flow value as the upstream head increased. Also there would be an abrupt decrease between the free flow and the first value of submerged flow for the upstream heads that had this problem. The problem was corrected by using a modified regula falsi root-finding algorithm to find the critical depth instead of the SECANT method. The submerged-flow solution was also changed to use modified regula falsi to find the flow rate.

Version 4.42 January 29, 1996: The option for saving a cross-section function table did not set the maximum unextrapolated argument value. This would only affect FEQUTL computations. This value only comes into use if cross sections are interpolated in FEQUTL using one or more cross-section function tables that were SAVED. This can happen in the barrel of a culvert, and it could happen in XSINTERP. The error has appeared in one CULVERT example.

Version 4.45 February 2, 1996: Added estimated relative errors to CHANRAT tables. The relative error is based on using a power function as the fit between two points and estimating the error as the error in linear interpolation in that power function. Recall that a power function plots as a straight line on log-log graph paper. If the local power, also called the local exponent, varies only slightly, then the estimated error is probably quite good. If the power varies significantly, then the estimate is subject to greater error. The submerged-flow estimates appear to have an error of about 20 per cent as full submergence is approached, and the estimates are too small.
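The error estimate described for version 4.45 can be sketched as follows. This is a minimal Python sketch of the idea, assuming the power-function fit described above; it is not the FEQUTL code, and the function name and sampling scheme are illustrative.

```python
from math import log

# Sketch of the version 4.45 estimate: fit a power function through two
# tabulated points, then take the largest relative error of linear
# interpolation in that power function as the local interpolation error.
# (Illustrative code, not FEQUTL source.)
def power_fit_error(x1, y1, x2, y2, n=200):
    p = log(y2 / y1) / log(x2 / x1)   # local power = slope on log-log paper
    c = y1 / x1**p                    # coefficient of the power function
    worst = 0.0
    for i in range(1, n):             # sample the interior of the interval
        x = x1 + (x2 - x1) * i / n
        linear = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
        exact = c * x**p
        worst = max(worst, abs(linear - exact) / exact)
    return worst

# A weir-like rating Q = h**1.5 tabulated at h = 1 and h = 2 gives a
# relative linear-interpolation error of roughly 4 per cent:
print(power_fit_error(1.0, 1.0, 2.0, 2.0**1.5))
```

When the local exponent is nearly constant between tabulated points, this estimate tracks the true interpolation error closely, which is the situation described above.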
Work in progress will provide the option to have FEQUTL compute the distribution of partial free drops so that the interpolation error is more uniform and possibly more precise. Clearly the current distribution for CHANRAT wastes many points since the initial effect of submergence is weak.

Version 4.50 February 13, 1996: Changed CHANRAT and EMBANKQ input to allow optional input of LIPREC and MINPFD to request optimization of the interpolation in 2-D tables of type 6 or 13. LIPREC is a Linear Interpolation PRECision specification in terms of relative error. That is, LIPREC= 0.02 requests a maximum relative error in linear interpolation in the 2-D table of 2 per cent. MINPFD is the MINimum Partial Free Drop to be computed. MINPFD=0.01 states that the minimum value of partial free drop should be 0.01 of the free drop for a given upstream head. These two values follow after the specification of NFRAC and POWER and just before the first upstream head in the input. For EMBANKQ, where NFRAC and POWER are optional, LIPREC and MINPFD may appear without NFRAC and POWER. Here is an example for CHANRAT:

CHANRAT
TABLE#= 528 TYPE= 13 0.01
LABEL= Control for pond 6. U TO D
XSTAB#= 6530 BOTSLP=0.000 LENGTH=000000100. MIDELEV= 643.
HEAD SEQUENCE FOR TABLE
NFRAC= 30 POWER= 1.5 LIPREC= 0.02 MINPFD= 0.01
0.3
12.0
-1.0

Notes:
1. NFRAC should have a value between 30 and 60 and is used to define a series of upstream heads as well as partial free drops to use in defining the final spacing.
2. POWER is not now used if table optimization is requested. However, it may be used in the future and is retained for consistency with past inputs.
3. Only two heads need be given. If more are given, only the first and last are used and all intermediate heads are skipped.
4. CHANRAT and EMBANKQ will try to meet the interpolation request but there may be some regions where it is exceeded. All the techniques used are approximate, including the estimates of the errors.
The techniques assume that the flow varies smoothly with head variations. Discontinuous changes or abrupt small-scale variations will probably not be detected.
5. The default integration error tolerance in CHANRAT is reduced to 0.05 if table optimization is requested. If you explicitly set the integration error tolerance to a value different from the default, that value will be used. The smaller tolerance is used to get greater consistency in error estimates. Some of the failure to meet the error tolerance is a result of the other tolerances in the computations. If they are too loose, erratic results, on the order of a fraction of a per cent, appear. However, this is often enough to be a large part of a small relative error. Requesting relative errors smaller than 1 per cent is not wise and may not work. A 1 per cent error may only work with CHANRAT because most of its computations are double precision.
6. MINPFD should not be too small. 0.01 is probably small enough. This often gives a factor of four or so between the largest and smallest flow tabulated for a given upstream head. A MINPFD of 0.005 or less will probably show many locations with LIPREC exceeded. This is primarily the result of the difficulty of computing reliable flows when the drop is small, sometimes less than one ten-thousandth of a foot!
7. Using small minimum heads can result in there being many upstream heads. This is especially true in CHANRAT because the flow tends to increase with head roughly proportionally to the cube of the head. The same can happen in EMBANKQ if the crest of the overflow section is similar to a triangular or parabolic crest. Again the flow increases close to the third power of the head. This requires close spacing at small heads in order to maintain a small uniform relative error.
8. If the errors reported in the output are larger than LIPREC by significant amounts, try increasing NFRAC to 60 or so. Do not go much larger because internal space may be exhausted.
This can sometimes improve the results.
9. The methods used for EMBANKQ and CHANRAT are not yet available for CULVERT because the patterns in CULVERT are much more complex. However, observing the distribution of heads and partial free drops from CHANRAT and EMBANKQ should permit assigning better values to CULVERT. Currently, the partial free drops in CULVERT can only be controlled via POWER and NFRAC, and that is somewhat limited.
10. The transition between high-head and low-head flows in EMBANKQ is somewhat rough, especially for GRAVEL surfaces. This comes about because the definition makes no mention of what to do when close to the transition. The transition region may show larger errors because it is currently possible to have a discontinuity in flow at the point of transition. This of course assumes that the standard tables are being used.
11. The standard tables in EMBWEIR.TAB and TYPE5.TAB have been refined. The coefficients for embankment-shaped weirs have been estimated with greater precision to avoid unwanted noise in the error computations. Please note that the precision of these numbers is more than 100 times the accuracy of the numbers. However, if they are recorded in the tables according to accuracy, high-order noise is increased in the error evaluation. In the future these tables will be fitted with smooth functions, and then these functions will be evaluated and differentiated to full single precision before being tabulated in tables of type 4. The accuracy will still be the same but this will further lower the noise in error estimation. Keep this in mind when supplying alternative tables for weirs of different shapes. Make sure the coefficient variation is smooth and there are no sharp corners. Tables of type 4, those that have a continuous first derivative as well as the function, will probably work best.

If LIPREC and MINPFD are not given, then the input should be as in previous versions. However, the output has been changed.
CHANRAT and EMBANKQ now compute intermediate values to estimate the maximum relative error as well as the root-mean-square relative error for the 2-D table. You will probably find that the errors in interpolation are larger than expected.

Version 4.51 April 2, 1996: Changed a convergence tolerance and extended the convergence testing when computing the subcritical inverse of a specific energy value in INVSE. Problems had arisen when the flow was close to critical. Also changed the estimate of the momentum and energy flux over the roadway in CULVERT. Problems were found when the roadway was really a weir that experienced submergence when the tailwater was at or near the crest. Totally invalid values of flux over the road would occur. The selection of the depth of flow used to estimate momentum and energy flux has been changed to make sure that the depth used is never less than the depth estimated for this purpose for free flow. We take this course of action because submergence reduces the flow and the flux for submerged flow should never be greater than for free flow. This change may affect the submerged flow fluxes in other cases. However, the region of submerged flow for usual flow over roads is quite limited. Thus only the last few flows might be affected.

Version 4.52 May 18, 1996: Discovered that FLAP_FORCE was not set to zero in all cases in CULVERT computations when no flapgate was present. May have affected some computations of type 7 flow, that is, cases where critical flow occurs at the end of the departure reach. Appears to have been present since version 4.31.

Version 4.60 October 18, 1996: Added command to compute pump rating curves for SFWMD project.

Version 4.65 January 31, 1997: Added command to compute pump loss tables and modified the SFWMDPMP command to include unit choice for flow.

Version 4.66 February 21, 1997:
--Added vertical scale factor, VSCALE, and horizontal shift amount, HSHIFT, to FEQX cross sections.
--Added vertical scale factor, vertical shift, and horizontal scale factor to EMBANKQ.
--Added an argument scale factor to the input of function tables of types 2, 3, and 4. NOTE: The argument scale factor was placed after the function scale factor. This moved the SHIFT input item to the right. The SHIFT item is little used and may be discontinued. In any case, if you are using it you will have to move the SHIFT item to the right by 16 columns to leave space for the reading of the argument scale factor.
--Added two new commands to create a bottom slot in a cross section. The commands are SETSLOT and CLRSLOT. The first command defines a bottom slot, and this slot is added to ALL cross sections that FEQUTL encounters until the command CLRSLOT is found. Thus the addition of a bottom slot to a cross section is like a switch: either on or off. When it is on, the slot will appear in all cross sections processed. The following is an example of these commands:

SETSLOT WSLOT= 2.0 NSLOT= 0.02 ESLOT= 20.0
; Cross sections on the right-hand side of Sumas River upstream
; Jones and Conchman Roads
FEQX
TABLE#= 1000 OUT22 EXTEND NEWBETAM
STATION= 0.
NAVM= 0
NSUB 12 0.040 0.040 0.040 0.040 0.040 0.040 0.040 0.040 0.040 0.040 0.040 0.040
OFFSET ELEVATION SUBS
RSR1
   0.0  37.6  1
 170.   37.4
 350.   37.0
 500.   37.3
 770.   37.8  2
 770.   40.  -1
. . .
CLRSLOT

The SETSLOT command has three associated values: WSLOT gives the width of the slot at the top of the slot. The bottom width of the slot is zero. NSLOT gives the Manning's n for the slot. ESLOT gives the bottom elevation of the slot. The command CLRSLOT has no other values. All sections that occur after it will NOT have a slot added to them. The slot will be added to the cross section at the minimum-elevation point in the cross section. If there is more than one such minimum-elevation point, the slot will be added at the minimum point that has the widest horizontal bottom adjacent to it.
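The slot geometry just described can be sketched as a small transformation of the (offset, elevation) point list. This is only an illustration of the idea behind SETSLOT: the minimum-elevation point is replaced by a triangle of top width WSLOT tapering to zero width at elevation ESLOT. FEQUTL's tie-breaking on the widest adjacent bottom and its subsection bookkeeping are not reproduced here; this sketch simply takes the first minimum point.

```python
def add_bottom_slot(points, wslot, eslot):
    # points: list of (offset, elevation) pairs describing the section.
    zmin = min(z for _, z in points)
    i = next(k for k, (_, z) in enumerate(points) if z == zmin)
    x0 = points[i][0]
    # Replace the minimum point with a V-shaped slot: down one side to
    # the slot invert at ESLOT, then back up the other side.
    slot = [(x0 - wslot / 2.0, zmin), (x0, eslot), (x0 + wslot / 2.0, zmin)]
    return points[:i] + slot + points[i + 1:]

# Cross-section points corresponding to the SETSLOT example above:
section = [(0.0, 37.6), (170.0, 37.4), (350.0, 37.0), (500.0, 37.3), (770.0, 37.8)]
slotted = add_bottom_slot(section, wslot=2.0, eslot=20.0)
# The new invert is at elevation 20.0 (ESLOT) at offset 350.
```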
The slot has its own subsection; that is, an additional subsection is added to the cross section. Experience to date indicates the following:
1. The roughness of the slot should be made small.
2. The slot should be small relative to the width of the section when flows of major interest are present.
3. A constant value of the elevation of the bottom of the slot can be used for several cross sections along a channel. The slot should probably have a constant elevation between points of complete or partial control.
4. Expect additional computational problems as the water rises out of the slot. The change in width at this point is large.
5. Do not make the slot too shallow. Frequent messages about negative depths in a slotted section when the flow is in the slot or just emerging from the slot probably signal that the slot must be made deeper.
6. Expect that the slot option will be modified as experience with it accumulates.
7. FEQ does not know about the slot. FEQ will treat the bottom of the channel as being at the bottom of the slot. This means that the TAB option should be used when the branch descriptions are given. You may have to vary the elevation of the bottom of the slot in the course of model development. Explicit channel invert elevations in the branch description for a branch with a change in the bottom elevation of the slot must be changed manually--an onerous and error-prone task. Using the TAB option for elevation has FEQ do the work of extracting the current invert elevation from the cross sections.
8. Because FEQ treats the slot as part of the cross section, all depth values reported for a slotted section will be measured from the slot bottom. They will not represent the depth of water in the unslotted section.
At a future release of FEQ and FEQUTL this may be changed so that FEQ does know that a slot is present and will remember the invert elevation of the section BEFORE the slot was added and will use that invert elevation in computing depth values reported to the user. Then negative depths in the output would signal that the water is in the slot. Internally FEQ would not use the negative depth in its computations because internally negative depths have no meaning. The negative depths would only appear in the output for the user.

Version 4.67 May 23, 1997
--Changed convergence testing in subroutines SECANT and REGFAL so that argument convergence would be in relative terms. This was done to bring these two root-finding routines into agreement with the others in FEQUTL. This also means that the convergence testing is independent of scale. This may change the output tables from GRITTER, UFGATE, and EXPCON.
--Added use of global EPSF and EPSARG to CRITQ, INVTSE, and GRITTER. These routines had local values of the convergence values. May affect output tables from CRITQ, GRITTER and UFGATE.
--Changed returned value for FISE and FRIT to a residual in relative terms so that the root-finding routines will use a relative criterion for convergence. These changes may change the output tables from CRITQ and GRITTER.
--Added support for metric units to CHANRAT. The default tolerance is now set depending on the value of GRAV, the acceleration due to gravity. The convergence tolerance is retained as an absolute tolerance. Thus the value given by the user, if the default is not used, must be in the units of length, feet or meters, in use. The same is true of the absolute tolerance for detection of normal depth.
--Added another global convergence tolerance to the header block for FEQUTL. In previous versions there were two values: EPSARG, a relative tolerance for changes to the root being sought.
That is, whenever the absolute value of the relative change to the current estimate of the root was less than EPSARG, the routine would declare convergence. EPSF, a relative tolerance for the residual function value. This tolerance was to be used to decide if a residual function was essentially zero in a relative sense. Thus the residual function itself had to return a relative value. In some cases EPSF was used for functions that did not return a relative value. This would cause problems when moving between units of measurement because some uses of EPSF would be independent of these units and others would be dependent on these units. Consequently, EPSF could not describe both. The new convergence tolerance, EPSABS, is to be used in those cases in which the residual function returns a length value. Thus when switching to using meters for the length unit, EPSF will remain unchanged but EPSABS must be changed to reflect the larger length unit. In the US standard unit system, EPSABS has the same numeric value as EPSF but has a different meaning.
--Comparisons were made for results from CULVERT, EMBANKQ, and CHANRAT after the various convergence-tolerance changes were made. Few differences were found and most of them were in the least significant digit of the output value. Thus it appears that the changes had essentially no effect on the results.
--Changed some tolerances at the limit of near-zero depth from EPSF to EPSABS. This may change computational failure conditions slightly but should not affect successful runs. Such small depths in either case clearly indicate an error condition.
--Modified the Preissmann slot, that is, the slot in the top of a closed conduit, to reflect the unit system. The maximum slot level is 150 meters, not quite the same as the 500 feet used in the US unit system. However, a round number is indicative of a value set by fiat and not by measurement or method.
The slot detection code was also modified to find the vertical diameter of closed conduits. The slot width used for detection of closed conduits remains at 0.07 feet or 0.021336 meters. A slot width larger than this will not be detected, and FEQUTL will treat the cross-section function table as being a normal open channel and not a closed conduit in any context in which a closed conduit must be detected.
--Changed the means for eliminating close values of depth in computing cross-section tables. Previous versions had used an absolute tolerance for the minimum difference between adjacent depths. This has been changed to a relative tolerance to be scale independent.
--Added elimination of close values of depth when interpolating cross-section tables using command XSINTERP. Uses the same routine as in computing the cross-section function tables. This avoids having depth entries in the interpolated table that differ from the previous depth by amounts that are often close to the limit of numeric representation in the hardware. Such close values serve no purpose and only confuse review of the output.
--Closed conduits now output the computed value of invert elevation even if it is small. Previous versions set the invert elevation to 0.0 if the elevation was smaller than 0.001 in absolute value. Recall that replacing a closed conduit with a polygon that matches the area of the closed conduit yields small excursions OUTSIDE the closed-conduit boundary at some points. At the invert of the closed conduit this means that the invert of the cross-section function table will be slightly below the true invert. Thus if the true invert was given as elevation 0.0, which is often done since the invert elevation in the cross-section function table is often overridden in the FEQ input, the invert elevation printed in the cross-section function table will be a small negative value. This value is now printed no matter how small it is.
--Modified the computation of the piezometric level at a culvert exit for flows of type 6. The argument in the USGS basis document was scale dependent. Added a factor to get the correct result when the METRIC unit option is used.
--Changed various output formats to gain greater precision in output of values. In some cases the output values will have a precision far greater than the accuracy of the result can support. This is done so that consistency of results can be checked and so that sufficient decimals will be output in both unit systems. The process of changing formats is not complete and will take place over the next few versions as time permits.
--The HEC2X command has been modified to convert units from metric to English or from English to metric under user control. The default action is to NOT convert the elevations and offsets on the cross section. Adding the word CONVERT after the MODE response will cause conversion of units. The conversion of station values is governed by SFAC only and is set by the user.
--Provided additional options following the unit-system selection in the standard header to force FEQUTL to use the more exact value for the factor in Manning's equation. The factor is technically the cube root of the number of feet in a meter, which to single precision in a 32-bit IEEE floating-point representation is about 1.485919. For nearly all practical purposes this can be taken as 1.49. However, for the practical purpose of comparing output using the metric and the US standard sets of units, we get annoying differences when using 1.49 in the US set because the metric set has the exact value of 1.0 in the metric form of the equation. The differences are small but they confuse the search for other causes of differences in results.
--Also provided the option to use an equation to compute the value of g given a latitude and an elevation. This happens whenever the more exact value for the factor in Manning's equation is requested.
Again, for all practical purposes g is 32.2 ft/s^2 or 9.815 m/s^2 with an error less than 0.2 per cent for the US. However, when searching for the reason for differences between results using two sets of units, making sure that the value of GRAV is the same everywhere is helpful.
--Comments on support for metric units: All the commands in the standard example file, FEQUTL.EXM, have been checked to some extent. Not every output value has been compared. Only spot checks have been made. The subdirectory METRIC under TEST under USF\FEQUTL contains the metric version of the standard example file and the metric version of the weir coefficients for embankment-shaped weirs. Subtle differences can appear in the results even though the two sets of units are made as equal as the software and hardware will allow. For example, in FEQX, DZLIM may cause different spacing in a cross-section function table because small differences in internal value can cause one set of units to interpolate an additional point. This effect is more significant with CULVERT. Flow in culverts involves several different flow patterns, not all of which agree in value at their adjacent limits. Thus FEQUTL has many decision rules for detecting the limits and deciding what to do at the limits to make the flow transition smoother. These limits are, from the point of view of the software, exact. Any small deviation beyond a limit invokes a new flow case. Small differences, on the order of 0.001 foot or less, can sometimes change the flow type that is reported. In some cases the flow type will differ but the flows will be essentially the same. There may be cases in which the difference results in a local difference in flows. This just means that comparisons made between flows obtained from an identical structure using two different unit sets may not agree to the desired level of precision at all levels in all cases.
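The two unit-system quantities discussed above are easy to check numerically. The Manning factor is just the cube root of the number of feet in a meter. The exact gravity equation used by FEQUTL is not stated here, so the sketch below assumes the 1980 international gravity formula with a simple free-air elevation correction, purely for illustration:

```python
import math

# "More exact" Manning factor: cube root of the number of feet in a meter.
MANNING_FACTOR = (1.0 / 0.3048) ** (1.0 / 3.0)   # about 1.485919, vs the usual 1.49

def gravity(lat_deg, elev_m):
    # 1980 international gravity formula (an assumption; the equation
    # actually used by FEQUTL is not given in the text above) plus a
    # free-air correction for elevation in meters.  Result in m/s^2.
    phi = math.radians(lat_deg)
    g = 9.780327 * (1.0 + 0.0053024 * math.sin(phi) ** 2
                    - 0.0000058 * math.sin(2.0 * phi) ** 2)
    return g - 3.086e-6 * elev_m

# At latitude 45 degrees and sea level, g is close to 9.806 m/s^2,
# about 32.17 ft/s^2 -- within the 0.2 per cent quoted above for 32.2.
g45 = gravity(45.0, 0.0)
```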
A difference could reflect the presence of a term that is scale dependent that was not detected in my limited testing. On the other hand, it could just reflect the effect of being close to one of the boundaries between flow types. It is likely that some undetected scale-dependent values remain at some points in the commands in FEQUTL. Over time I will be checking additional options, but only usage will detect the remaining scale-dependent features in the software.

Version 4.68 May 29, 1997
--Modified SECANT to have both the relative and absolute test for the changes to the current estimate of the root.
--Modified FHPL in EXPCON to use the total head.

Version 4.70 July 7, 1997
--Added the GIS id string to the values stored with cross-section tables. If no GIS id string is given the value is stored as blanks.
--Added the location of the invert of a cross section in a coordinate system in the plane. One coordinate is called EASTING and the other, at right angles to it, is called NORTHING. If not given, the values are stored as 0.0.
--Changed the processing of the cross-section header for the FEQX and FEQXEXT commands. The header is the information from the line after the FEQX or FEQXEXT command through the roughness values. In previous versions these values were required to fit within prespecified fields on each line. This has been changed to allow greater freedom in entering the values. Here is an example taken from a cross section input to FEQUTL. The line numbers are not part of the input but are used for reference in this discussion.

01 FEQX
02 TABLE#= 153 SAVE22 OUT22 MONOTONE
03 STATION= 1.E4
04 NAVM= 0
05 NSUB 3 0.08 0.055 0.09
06 OFFSET ELEVATION SUBS
07 -750. 720. 1

The header is lines 2 through 5 in this example. As shown, the header is suitable for all versions of FEQUTL. The order and position of the values can be changed to anything the user desires with the following rules:
1. All values must be spelled as shown in the input description.
This was not true in the past. The only requirement was that the values appear in the correct columns on the correct line of input.
2. Any value followed by an equal sign must have its response, that is, the value following the equal sign, appear on the same line. For example,

STATION = 1000.0

is valid. However,

STATION =
1000.0

with the response on the following line, is invalid.
3. The roughness values must always be last in the items entered. The roughness values serve as the end-of-header indicator.
4. The other values in the header can be entered in any order and at any point on the line. The line may be as long as 120 characters.
5. Values needed by FEQUTL but not appearing in the header will be assigned default values. Here are the defaults:

TABLE#= 0
NOSAVE
OUT21
OLDBETA
LEFT = 1.E30
RIGHT = -1.E30
SCALE = 1.0
VSCALE = 1.0
SHIFT = 0.0
HSHIFT = 0.0
NAVM = 0
VARN= NCON
STATION = 0.0
EASTING = 0.D0
NORTHING = 0.D0

Values for the table number, the station, and the roughness options should always be given because their defaults will not be reasonable in most cases.
6. The roughness values must always be the last given in the header for the cross section. The keyword NSUB should appear on a line by itself.
7. A blank will not be taken as zero. If some value is to be zero you must explicitly give the value as 0 or 0.0 depending on the nature of the item.

Here is an example of a header that FEQUTL can now process:

TABLE#= 153 SAVE22 OUT22 MONOTONE STATION= 1.E4 NAVM= 0
NSUB 3 0.08 0.055 0.09

Notice that the various values are separated by one or more spaces. If more than one line of input is required for the roughness values, they should appear in order following the line containing NSUB. Here is an example taken from an existing input:

NSUB 11 0.015 0.030 0.015 0.030 0.015 0.030
0.015 0.030 0.015 0.030 0.015

This could be given as

NSUB 11 0.015 0.030 0.015 0.030 0.015 0.030 0.015 0.030 0.015 0.030 0.015

with all values appearing on one line.
FEQUTL will count the number of subsections if the count value is left out. For example:

NSUB 0.015 0.030 0.015 0.030 0.015 0.030 0.015 0.030 0.015 0.030 0.015

If there are too many values to fit on one line and the count value is left out, then a continuation signal must be given to tell FEQUTL that the values continue on the next line. If the count value is given, this continuation signal is not needed and must not be given. Here is multiline input when the count of subsections is left out:

NSUB 0.015 0.030 0.015 0.030 0.015 0.030 /
0.015 0.030 0.015 0.030 0.015

The continuation signal is a single slash following every line of roughness input but the last.

The GIS id string is given at any point in the header just as any of the other values. It must be identified with the variable GISID in a manner analogous to the other variables assigned values using the equal sign. The same rule applies to the coordinate location of the cross-section invert, given by the optional inputs EASTING and NORTHING. Here is an example with these three values added to an existing FEQX command:

FEQX
TABLE#= 938 EXTEND SAVE22 NEWBETA NOOUT GISID=291APVQ0938
STATION= 11.290 EASTING=1868573.55 NORTHING=569013.79
NAVM= 0
NSUB 5 0.100 0.120 0.030 0.080 0.032
X-SEC 38 60 FT. U/S OF THE BNRR (DEL TO LEFT & RIGHT OF TOP OF BERMS) RERAN W/ DZLIM=.10 FOR CONTRACTION
 -73.00 666.6   1 1868587.65 569085.42 TOB 13A
 -49.10 665.8   2 1868583.04 569061.97 TOB 13
 -25.40 655.0   3 1868578.46 569038.71 WEO 12
 -21.90 654.6   3 1868577.78 569035.28 CHP 11
   0.00 653.9   3 1868573.55 569013.79 CFL 10
   8.20 654.4   3 1868571.96 569005.75 CHP 9
  11.10 655.0   4 1868571.40 569002.90 WED 8
  15.00 658.3   4 1868570.65 568999.08 TOB 7
  33.90 663.63  5 1868567.00 568980.53 CLP 6
  44.60 666.0  -1 1868564.93 568970.03 TOB 5

The GISID may not contain blanks and should not contain any character other than the digits 0 through 9 and the alpha characters A through Z.
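Returning to the NSUB input: the counting and continuation rules above can be condensed into a small parser. This is a sketch of the stated rules in Python, not FEQUTL's actual reader; it assumes, as in the examples, that the continuation slash is a separate token, and it distinguishes a leading count from a roughness value by the absence of a decimal point.

```python
def parse_nsub(lines):
    # lines[0] is the line containing NSUB; later lines hold any
    # roughness values that continue the specification.
    tokens = lines[0].split()
    assert tokens[0] == "NSUB"
    rest = tokens[1:]
    more = iter(lines[1:])
    if rest and "." not in rest[0]:
        # Explicit subsection count: read lines until that many values are seen.
        count = int(rest[0])
        values = [float(t) for t in rest[1:]]
        while len(values) < count:
            values.extend(float(t) for t in next(more).split())
        return values[:count]
    # No count given: a trailing "/" signals continuation on the next
    # line, and the number of subsections is found by counting.
    values, current = [], rest
    while True:
        cont = bool(current) and current[-1] == "/"
        if cont:
            current = current[:-1]
        values.extend(float(t) for t in current)
        if not cont:
            return values
        current = next(more).split()

# The two equivalent forms from the text:
with_count = parse_nsub(["NSUB 11 0.015 0.030 0.015 0.030 0.015 0.030",
                         "0.015 0.030 0.015 0.030 0.015"])
no_count = parse_nsub(["NSUB 0.015 0.030 0.015 0.030 0.015 0.030 /",
                       "0.015 0.030 0.015 0.030 0.015"])
```

Both forms yield the same eleven roughness values.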
Lower case alpha will also be accepted but may not function in other software that analyzes the GISID. FEQUTL only reads the GISID and places it in the resulting cross-section function table. The items need not be in the order shown, so the GISID, EASTING, and NORTHING can appear on any line between the line containing FEQX and the line containing NSUB. Thus, the following order is also valid:

FEQX
TABLE#= 938 EXTEND SAVE22 NEWBETA NOOUT
GISID=291APVQ0938 EASTING=1868573.55 NORTHING=569013.79
STATION= 11.290
NAVM= 0
NSUB 0.100 0.120 0.030 0.080 0.032

This gives considerable flexibility in designing the pattern to follow in the input to make the information clearer or easier to modify.
--The location of the information message for interpolated cross-section function tables has been changed to a comment to avoid interfering with the processing of the options as outlined here. Old interpolated cross sections can be retained if FEQ version 9.03 or later is used because they will be detected. If earlier versions of FEQ are used the interpolation message will cause an error in processing the cross-section function table.

Version 4.72 November 5, 1997
--Discovered problem in CULVERT in computing a type 2 limit when the return flag from one routine was not properly tested in the calling program unit. This created nonsense values that caused a BUG message to be issued.
--Discovered problems in CULVERT in computing the type 2 limit which sometimes exists above the type 1 limit. That is, at low flows the flow type is type 2; as the flow increases the flow type shifts to type 1 and remains there until some upper limit is reached and the flow type again shifts to type 2. The initial estimate of the local elevation at section 1 was set so that the root-finding scheme concluded that no upper type 2 limit existed.
This then caused problems later as the upper limit of type 1 flow was surpassed but the limits for type 5 or type 6 flow had not yet been reached. This change may allow culverts to be computed that failed in previous versions. It may also cause some culverts to fail that computed with previous versions.

Version 4.73 November 10, 1997
--CULVERT got confused in a case of type 1 flow detection. CULVERT finds all matches between the bottom slope of the culvert and the critical slope to seek boundaries for type 1 flow. If there is only one match and the match is greater than the maximum depth allowed at section 2, then type 1 flow is rejected even though a match is found. This is done because the critical slope for a closed conduit approaches infinity as the water level in the conduit approaches the soffit of the conduit. This means that some matches of the bottom slope with the critical slope may occur essentially at the soffit. Thus, it is unlikely that type 1 flow will prevail. The default limit for the depth at section 2 is 0.95 of the vertical diameter of the culvert. There may be more than one match between bottom slope and critical slope in the culvert. This may mean that the low-head flow starts as type 2 and then changes to type 1 as the smallest depth for a match is encountered. The flow type will remain at type 1 until the upper match for type 1 flow is encountered. CULVERT tries to find both limits. If there is more than one match, CULVERT assumed, incorrectly, that the lower match would be less than the depth limit for the culvert. CULVERT now checks to make sure that the smallest depth for a match of slopes is less than the depth limit before it concludes that type 1 flow prevails.

Version 4.75 November 11, 1997
--Problems with CULVERT in computing the submerged-flow part of flow type 51 were uncovered. The submerged-flow computations would fail with the message: FRFT7: Minimum depth= ffff at section 43 found seeking a negative residual.
where ffff was some real number. Flow type 51 is a transitional type between the upper limit of type 1 and the lower limit of type 5. As such various coefficients are set to force close matches at each end of the transition. In order to match at both ends of the transition these coefficients must be given values that sometimes are non-physical, that is, they would never be found from a measurement. The momentum-flux coefficient was not properly computed in the submerged flow computations when the free flow type was 51. Since type 51 is a transition between types 1 and 5 its submerged flow computations must transition between these types. That is, at the lower limit, near type 1, the submergence levels should be close to those for the upper limit of type 1. In the same way, when the flow is close to the lower limit of type 5, the submergence computations should be similar in their result to that obtained from the type 5 computations. The free flow computations must also match at the two limits. The approach taken in CULVERT is to force the high-head free flow equation, in this case type 5, to match the flow conditions at the low-head, in this case type 1, upper limit. This also means that the submergence limit must be matched. In order to do this a special value of the momentum-flux coefficient must be used with the type 5 free-flow equation when it is pushed to the upper limit of type 1 flow equation. The coefficients for intermediate type 51 flow are computed linearly between the two limits of type 1 and type 5. Now when type 5 flow is drowned by tailwater, CULVERT assumes that the submerged flow type becomes type 4. Thus, to be consistent and to produce a smooth transition, the submergence of type 51 over its range must be by type 4 flow. Thus, the exit of the culvert is assumed to be flowing full for all submergence computations involving type 51 flow. 
At the limit of free-flow, that is, at the initiation of submerged flow, CULVERT computes a special value of the momentum-flux coefficient so that the momentum flux exiting the culvert barrel assuming a full barrel will match the value that exited the barrel during the free-flow submergence computations. The barrel may have been part full for the free flow computation and there might have been the complication of a hydraulic jump at the barrel exit as well. This special momentum-flux coefficient only applies at the limit of free flow. Previous versions assumed that the tailwater at the free-flow limit would be below the exit soffit of the culvert barrel. Then, if a special momentum-flux coefficient were present, the momentum-flux coefficient would be interpolated between the special value and the true full-barrel value based on the tailwater level between the free-flow limit tailwater and the soffit of the barrel at the exit. However, submergence of type 1 flow can require submergence of the barrel exit. Type 5 flow may be submerged with the barrel only part full at the exit. Thus, there are cases in which the transitional flow type 51 will have a free-flow limit tailwater above the soffit of the barrel exit. In those cases the special momentum-flux coefficient was not used because the rule for interpolation only applied for tailwater levels below the soffit of the barrel exit. Consequently, the true full-barrel value of the momentum-flux coefficient was used, and the simple momentum balance could not be satisfied because the momentum flux computed for the culvert barrel was incorrect. An additional interpolation rule for special values of the momentum-flux coefficient was added to the submerged flow computations. This rule comes into play if there is a special momentum-flux coefficient from the free-flow limit computations and if the free-flow limit tailwater is above the exit soffit of the culvert. 
The momentum-flux coefficient under this rule is interpolated linearly from its free-flow limit value at the free-flow limit tailwater to the true full-barrel value at 0.25 of the distance to the tailwater that causes zero flow for the culvert. The effect on pre-existing culvert computations is hard to evaluate. The failure to converge occurred at upstream heads close to the type 1 upper limit. If the upstream head were close to the type 5 lower limit, then the computations for submerged flow for type 51 would complete. Version 4.75 will probably give different answers in this region than did earlier versions. However, the head range between the type 1 and type 5 limits is generally a small part of the range of either type. A search of the CULVERT code shows that special values of momentum-flux and related coefficients are used for flow types 5, 51, 52, and 62. These flows could be affected in some cases by the correction made in Version 4.75. Version 4.76 January 15, 1998 --Found an error in the value of submerged orifice flow shown in the user output file. The correct value was placed in the table file. This error in the user output file appears to have entered at version 4.68 when a statement was converted to a comment in error. Apparently usage of the new version and of UFGATE is small, so no one reported the error. Version 4.77 January 28, 1998 --Changed root finding routine RGF5 so that complete suppression of argument collapse convergence is impossible. Forced a local minimum of a relative difference of 0.5*10^-6 for argument collapse convergence. This was done because an instance of failure to converge on critical depth occurred with the residual function nearly converged but the two arguments still differed slightly. This problem can occur in cases where the residual function is increasing rapidly near the root. 
Then roundoff in the computation of the intermediate argument is such that the residual function does not fall within the residual-function convergence tolerance. This change in the computation of critical depth may result in slight differences in flow at some points. However, the differences should be on the order of the cumulative roundoff and truncation error. These errors appear to be on the order of 1 part out of 10,000 to 1 part out of 100,000. Version 4.80 April 27, 1998 --A new command has been added to FEQUTL to assist in developing tables to control underflow gates using the GATETABL option in the Operation Control Block. This command is named INV_GATE because it inverts an underflow gate rating and creates the skeleton of a control table. The basic problem with the GATETABL option is that it is so flexible that it is difficult to establish the values in the control table. INV_GATE will help the process. However, the process still takes considerable patience and skill. The approach taken is as follows: 1. Add a fixed-level overflow weir to the model at the site of the proposed underflow gate. 2. Run this model, writing output files; only the FEQ connection file option is supported at this time. HECDSS may be added later if there is demand for it. These output files should be as follows: 2.1 A file containing the time series of water-surface elevation at the control node for GATETABL. 2.2 A file containing the time series of water-surface elevation at the upstream node of the sluice gate. If this is the same location as the control node, then this file is not needed. 2.3 A file containing the time series of water-surface elevation at the downstream node of the sluice gate. 2.4 A file containing the time series of flow over the weir. 3. These files are then used in the INV_GATE command together with the rating tables for the underflow gate to develop an initial control table for the GATETABL option. 
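Step 3 amounts to inverting the gate rating at each time step: find the opening at which the gate would pass the recorded weir flow under the recorded heads. A hedged sketch of that inversion by bisection, in Python rather than the FEQUTL Fortran; here `rating` stands in for interpolation of the type 15 rating tables and is assumed to be a monotone increasing function of the opening:

```python
def invert_rating(rating, target_flow, max_opening, tol=0.5e-6):
    """Find the gate opening at which rating(opening) == target_flow.

    rating(opening) -> flow is assumed to increase with opening.  When
    even the fully open gate passes less than target_flow, return the
    maximum setting (the "no solution" case noted in the text)."""
    lo, hi = 0.0, max_opening
    if rating(hi) < target_flow:
        return max_opening
    while hi - lo > tol * max_opening:  # relative-width convergence test
        mid = 0.5 * (lo + hi)
        if rating(mid) < target_flow:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the real command the rating also depends on the upstream and downstream elevations read from the time-series files; those arguments are omitted here to keep the sketch small.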
The INV_GATE command does the following: 3.1 Reads the rating tables for the underflow gate and a collection of parameters used to control what is done. 3.2 At each time step in the time series, compute what the gate opening would have had to be in order to match the flow over the weir using the sluice gate with exactly the same upstream and downstream water-surface elevations. In some cases no solution is possible because the weir is not as subject to tailwater as is the sluice gate. In this case, the gate is set to its maximum setting and a flag is set internally to note that no solution is possible. 3.3 Keep a record of the gate opening and the appropriate range of head difference as well as the flood stage at the control node. The input values for INV_GATE are as follows: UD_TABLE table number giving the type 15 rating table for the gate for flow from the upstream node to the downstream node. DU_TABLE table number giving the type 15 rating table for the gate for flow from the downstream node to the upstream node. CPNT_ELEV_TS file name for the time series of elevation at the control point for the structure. The control point is the exterior node whose elevation defines the operation of the gate. UPS_ELEV_TS file name for the time series of elevation at the node upstream of the gate. Upstream is defined by the user with the rule that the flow, given in another time series, must always be positive for flow from the upstream node to the downstream node. Given only when the control-point location is different from the upstream-node location. DNS_ELEV_TS file name for the time series of elevation at the node downstream of the gate. GATE_FLOW_TS file name for the time series of flows through the gate. FLOOD_ELEV the elevation at the control point that defines the zero point on the sequence of arguments for rows of the control table. This is the elevation at the control point that often signals flood hazard at some point downstream. 
FLOOD_FLOW the flow at the control point when the elevation is at FLOOD_ELEV. Used to estimate the gate operation for draining of the reservoir. The difference between the FLOOD_FLOW and the current flow at the control point gives the maximum flow release possible. CONTROL_TAB the table number for the control table computed by the INV_GATE command. COL_BDYS the sequence of boundary values defining the cells for the columns of the type 10 table. The midpoint of the cell will become the argument value in the table. ROW_BDYS the sequence of boundary values defining the cells for the rows of the type 10 table. The midpoint of the cell will become the argument value in the table. See the description of the GATETABL option above for more details on the arguments for the type 10 table. DRAIN_LOC the location defining the node that represents the reservoir to be drained. Has two values: UPS or DNS. MIN_FLOW all flows less than this value are treated as zero flow. OUTPUT_LEVEL user control on output level: MIN gives the minimum level with the summary tables only. MAX gives the results for each flow greater than MIN_FLOW. REVERSE_FLOW if YES, reverse flows are included. If NO, reverse flows are excluded. If YES, MIN_FLOW refers to the absolute value of the flow. An example input follows: INV_GATE CONTROL_TAB= 125 UD_TABLE=530 DU_TABLE= 630 FLOOD_ELEV= 673.50 FLOOD_FLOW = 1234. MIN_FLOW= 1.0 DRAIN_LOC = F13 . . . CPNT_ELEV=F:\WDIT\D19ELEV DNS_ELEV=F:\WDIT\F32ELEV GATE_FLOW=F:\WDIT\F32FLOW . . . COL_BDYS= / -6.0 0.0 0.2 0.4 0.6 0.8 1.0 1.5 2.0 4.0 6.0 50. ROW_BDYS= -5.0 0.0 0.2 0.4 0.6 0.8 1.0 1.25 1.50 1.75 2.0 2.25 2.50 2.75 3.0 3.25 3.5 3.75 4.0 4.25 4.50 4.75 / 5.0 END INV_GATE In this example, all of the input is defined by a keyword followed by an equal sign and followed by one or more items of information. The keywords MUST be spelled as shown. Any deviation will be flagged as an error. Note that the COL_BDYS and ROW_BDYS keywords take more than one item. 
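As noted for COL_BDYS and ROW_BDYS, the midpoint of each cell becomes the argument value in the type 10 table. That reduction is a one-liner; a Python sketch (illustrative, not the FEQUTL code):

```python
def cell_midpoints(boundaries):
    """Midpoints of the cells defined by consecutive boundary values."""
    return [0.5 * (a + b) for a, b in zip(boundaries, boundaries[1:])]
```

For instance, the first ROW_BDYS values in the example, -5.0, 0.0, and 0.2, give midpoints -2.5 and 0.1, matching the first Fstage arguments in the generated table.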
The forward slash, always set off from other items by one or more spaces, is a continuation signal. If a continuation signal is not given, the input processing software expects all responses to the keyword to appear on a single line of input. For this command a line of input is limited to 120 spaces. The command is ended with an explicit END command, followed by the command name. This example, taken from Salt Creek in DuPage County, Illinois, created the following skeleton for the type 10 table for the GATETABL option: TABLE#= 125 TYPE= -10 '( 12A6)' '(1X, 12A6)' '(F6.0)' '( 12F6.0)' '(1X,F6.1, 11F6.2)' HDATUM= 673.500 LABEL= Replace with desired value Fstage -3.00 0.10 0.30 0.50 0.70 0.90 1.25 1.75 3.00 5.00 28.00 -2.50 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.10 0.0 0.091 0.0 0.0 0.0 0.0 0.007 0.0 0.006 0.0 0.006 0.30 0.0 0.168 0.087 0.0 0.0 0.0 0.045 0.032 0.022 0.0 0.021 0.50 0.0 0.289 0.151 0.126 0.108 0.109 0.080 0.059 0.055 0.0 0.047 0.70 0.0 0.390 0.280 0.197 0.137 0.126 0.117 0.115 0.089 0.084 0.080 0.90 0.0 0.451 0.362 0.276 0.265 0.234 0.199 0.159 0.129 0.113 0.119 1.13 0.0 0.531 0.473 0.420 0.353 0.317 0.279 0.239 0.175 0.178 0.178 1.38 0.0 0.608 0.523 0.518 0.495 0.450 0.364 0.306 0.272 0.248 0.246 1.63 0.0 0.604 0.652 0.0 0.543 0.0 0.463 0.417 0.353 0.346 0.339 1.88 0.0 0.638 0.0 0.671 0.0 0.562 0.555 0.530 0.458 0.417 0.449 2.13 0.0 0.692 0.0 0.760 0.691 0.647 0.600 0.588 0.584 0.586 0.570 2.38 0.0 0.730 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.627 0.636 2.63 0.0 0.758 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.88 0.0 0.786 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.13 0.0 0.833 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.38 0.0 0.886 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.63 0.0 0.945 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.88 0.0 0.971 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.13 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.38 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.63 0.0 0.997 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.88 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
0.0 0.0 -1.0 Note the many zero entries. This shows that no water levels occurred in those parts of the table. Since the values in the table are relative gate openings, this table, used as produced, would keep the gate closed in those regions. Also, note that the precision of the output is far greater than the accuracy of the output. Some of the cells in the table only had one observation. The significance of each cell's average value can be judged from the detailed output from the command, which follows this example: Summary of Results Cell Type -6.0 0.0 0.2 0.4 0.6 0.8 1.0 1.5 2.0 4.0 6.0 Bdys 0.0 0.2 0.4 0.6 0.8 1.0 1.5 2.0 4.0 6.0 50.0 -5.00 AvrP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 MinP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 MaxP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 N : 0 0 0 0 0 0 0 0 0 0 0 0.00 AvrP: 0.0 0.091 0.0 0.0 0.0 0.0 0.007 0.0 0.006 0.0 0.006 0.20 MinP: 0.0 0.02 0.0 0.0 0.0 0.0 0.00 0.0 0.00 0.0 0.00 MaxP: 0.0 0.19 0.0 0.0 0.0 0.0 0.01 0.0 0.01 0.0 0.01 N : 0 60 0 0 0 0 35 0 34 0 766 0.20 AvrP: 0.0 0.168 0.087 0.0 0.0 0.0 0.045 0.032 0.022 0.0 0.021 0.40 MinP: 0.0 0.05 0.05 0.0 0.0 0.0 0.02 0.02 0.01 0.0 0.01 MaxP: 0.0 0.27 0.12 0.0 0.0 0.0 0.06 0.05 0.03 0.0 0.03 N : 0 199 40 0 0 0 4 31 12 0 1128 0.40 AvrP: 0.0 0.289 0.151 0.126 0.108 0.109 0.080 0.059 0.055 0.0 0.047 0.60 MinP: 0.0 0.14 0.10 0.09 0.09 0.09 0.05 0.04 0.03 0.0 0.03 MaxP: 0.0 0.39 0.21 0.16 0.13 0.12 0.11 0.09 0.07 0.0 0.06 N : 0 196 21 18 10 10 38 15 28 0 1194 0.60 AvrP: 0.0 0.390 0.280 0.197 0.137 0.126 0.117 0.115 0.089 0.084 0.080 0.80 MinP: 0.0 0.33 0.22 0.15 0.13 0.13 0.11 0.09 0.07 0.06 0.06 MaxP: 0.0 0.46 0.33 0.23 0.14 0.13 0.12 0.13 0.13 0.10 0.10 N : 0 129 14 13 3 1 9 15 42 20 1221 0.80 AvrP: 0.0 0.451 0.362 0.276 0.265 0.234 0.199 0.159 0.129 0.113 0.119 1.00 MinP: 0.0 0.37 0.28 0.24 0.24 0.22 0.18 0.13 0.10 0.10 0.10 MaxP: 0.0 0.49 0.45 0.34 0.28 0.25 0.23 0.19 0.18 0.14 0.14 N : 0 165 17 19 3 4 15 51 55 50 1201 1.00 AvrP: 0.0 0.531 0.473 0.420 0.353 
0.317 0.279 0.239 0.175 0.178 0.178 1.25 MinP: 0.0 0.44 0.40 0.32 0.32 0.27 0.23 0.20 0.14 0.15 0.14 MaxP: 0.0 0.64 0.51 0.49 0.39 0.37 0.34 0.29 0.22 0.21 0.21 N : 0 180 9 20 4 5 11 28 38 75 1243 1.25 AvrP: 0.0 0.608 0.523 0.518 0.495 0.450 0.364 0.306 0.272 0.248 0.246 1.50 MinP: 0.0 0.50 0.50 0.49 0.41 0.39 0.30 0.28 0.24 0.21 0.21 MaxP: 0.0 0.80 0.58 0.53 0.52 0.48 0.45 0.33 0.31 0.29 0.29 N : 0 164 9 4 8 5 19 10 27 16 1320 1.50 AvrP: 0.0 0.604 0.652 0.0 0.543 0.0 0.463 0.417 0.353 0.346 0.339 1.75 MinP: 0.0 0.53 0.56 0.0 0.53 0.0 0.41 0.38 0.29 0.30 0.29 MaxP: 0.0 0.87 0.76 0.0 0.56 0.0 0.51 0.46 0.41 0.40 0.40 N : 0 198 7 0 3 0 5 4 26 45 853 1.75 AvrP: 0.0 0.638 0.0 0.671 0.0 0.562 0.555 0.530 0.458 0.417 0.449 2.00 MinP: 0.0 0.56 0.0 0.64 0.0 0.56 0.53 0.50 0.41 0.40 0.40 MaxP: 0.0 0.90 0.0 0.70 0.0 0.56 0.58 0.57 0.50 0.44 0.52 N : 0 215 0 2 0 2 5 6 23 18 683 2.00 AvrP: 0.0 0.692 0.0 0.760 0.691 0.647 0.600 0.588 0.584 0.586 0.570 2.25 MinP: 0.0 0.59 0.0 0.72 0.66 0.60 0.58 0.57 0.52 0.52 0.51 MaxP: 0.0 0.94 0.0 0.79 0.73 0.69 0.61 0.61 0.62 0.62 0.62 N : 0 182 0 3 4 3 11 9 34 28 556 2.25 AvrP: 0.0 0.729 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.627 0.636 2.50 MinP: 0.0 0.62 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.62 0.62 MaxP: 0.0 0.98 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.63 1.00 N : 0 197 0 0 0 0 0 0 0 6 46 2.50 AvrP: 0.0 0.758 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.75 MinP: 0.0 0.66 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 MaxP: 0.0 1.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 N : 0 140 0 0 0 0 0 0 0 0 0 2.75 AvrP: 0.0 0.786 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.00 MinP: 0.0 0.73 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 MaxP: 0.0 0.99 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 N : 0 119 0 0 0 0 0 0 0 0 0 3.00 AvrP: 0.0 0.834 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.25 MinP: 0.0 0.77 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 MaxP: 0.0 0.99 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 N : 0 126 0 0 0 0 0 0 0 0 0 3.25 AvrP: 0.0 0.887 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.50 MinP: 0.0 0.83 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
MaxP: 0.0 1.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 N : 0 81 0 0 0 0 0 0 0 0 0 3.50 AvrP: 0.0 0.945 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.75 MinP: 0.0 0.86 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 MaxP: 0.0 0.99 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 N : 0 51 0 0 0 0 0 0 0 0 0 3.75 AvrP: 0.0 0.971 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.00 MinP: 0.0 0.80 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 MaxP: 0.0 1.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 N : 0 30 0 0 0 0 0 0 0 0 0 4.00 AvrP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.25 MinP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 MaxP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 N : 0 0 0 0 0 0 0 0 0 0 0 4.25 AvrP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.50 MinP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 MaxP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 N : 0 0 0 0 0 0 0 0 0 0 0 4.50 AvrP: 0.0 0.997 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.75 MinP: 0.0 1.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 MaxP: 0.0 1.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 N : 0 1 0 0 0 0 0 0 0 0 0 4.75 AvrP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 5.00 MinP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 MaxP: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 N : 0 0 0 0 0 0 0 0 0 0 0 Each cell has four values printed per row. The first value is the average value of the relative gate opening, P. The second value, on the second row, is the minimum value of P found. The third value is the maximum value of P found, and the last value is the number of cases occurring in that cell. The cell boundaries for columns are given in two rows at the top of the table. Each column is headed by the cell boundaries for that column. Adjacent columns share a boundary in common. Each row has its cell boundary given as well and again adjacent rows share a common cell boundary. 
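The AvrP/MinP/MaxP/N statistics in the summary can be reproduced by a simple per-cell accumulation over the observations. A Python sketch, under the assumption that each observation carries a flood stage, a head difference, and a relative gate opening P; the function and variable names are illustrative, not from the FEQUTL source:

```python
from bisect import bisect_right

def summarize(observations, row_bdys, col_bdys):
    """Accumulate average, min, max, and count of P for each 2-D cell.

    observations: iterable of (stage, head_diff, p) tuples.
    Returns {(row_index, col_index): (avg, min, max, count)}."""
    cells = {}
    for stage, head_diff, p in observations:
        i = bisect_right(row_bdys, stage) - 1      # row cell index
        j = bisect_right(col_bdys, head_diff) - 1  # column cell index
        if not (0 <= i < len(row_bdys) - 1 and 0 <= j < len(col_bdys) - 1):
            continue  # observation falls outside the table boundaries
        total, pmin, pmax, n = cells.get(
            (i, j), (0.0, float("inf"), float("-inf"), 0))
        cells[(i, j)] = (total + p, min(pmin, p), max(pmax, p), n + 1)
    return {key: (t / n, pmin, pmax, n)
            for key, (t, pmin, pmax, n) in cells.items()}
```

Cells with no observations simply never appear, which corresponds to the zero entries discussed above.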
As an example, if the flood stage at the control point was between 3.75 and 4.00 feet and if the difference: elevation at upstream node minus elevation at downstream node was between 0 and 0.2 feet, then there were 30 cases. The average gate setting was 0.971, the minimum gate setting was 0.80 and the maximum was 1.0. The type 10 table given above was modified manually to read as follows: ; Modification of the table created by the INV_GATE command in FEQUTL. ; Control table for the low-level gates TABLE#= 5300 TYPE= -10 '( 13A6)' '(1X, 13A6)' '(F6.0)' '( 13F6.0)' '(1X,F6.1, 12F6.2)' HDATUM= 673.500 LABEL= Control by Salt Creek Level at the gate. Fstage-9999. 0.00 0.10 0.30 0.50 0.70 0.90 1.25 1.75 3.00 5.00 50.00 -9999. 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.10 0.0 0.0 0.091 0.047 0.040 0.034 0.034 0.007 0.006 0.006 0.006 0.006 0.30 0.0 0.0 0.168 0.087 0.073 0.063 0.063 0.045 0.032 0.022 0.022 0.022 0.50 0.0 0.0 0.289 0.151 0.126 0.108 0.109 0.080 0.059 0.055 0.050 0.050 0.70 0.0 0.0 0.390 0.280 0.197 0.137 0.126 0.117 0.115 0.089 0.084 0.084 0.90 0.0 0.0 0.451 0.362 0.276 0.265 0.234 0.199 0.159 0.129 0.113 0.113 1.13 0.0 0.0 0.531 0.473 0.420 0.353 0.317 0.279 0.239 0.175 0.175 0.175 1.38 0.0 0.0 0.608 0.523 0.518 0.495 0.450 0.364 0.306 0.272 0.248 0.248 1.63 0.0 0.0 0.604 0.652 0.635 0.543 0.532 0.463 0.417 0.353 0.339 0.339 1.88 0.0 0.0 0.638 0.689 0.671 0.574 0.562 0.555 0.530 0.458 0.417 0.417 2.13 0.0 0.0 0.692 0.747 0.760 0.691 0.647 0.600 0.588 0.584 0.584 0.584 2.38 0.0 0.0 0.730 0.788 0.802 0.729 0.683 0.633 0.620 0.616 0.616 0.616 2.63 0.0 0.0 0.758 0.818 0.833 0.757 0.709 0.660 0.644 0.640 0.640 0.640 2.88 0.0 0.0 0.786 0.848 0.864 0.758 0.735 0.684 0.668 0.664 0.664 0.664 3.13 0.0 0.0 0.833 0.899 0.916 0.803 0.779 0.725 0.708 0.704 0.704 0.704 3.38 0.0 0.0 0.886 0.956 0.974 0.854 0.829 0.771 0.753 0.749 0.749 0.749 3.63 0.0 0.0 0.945 1.000 1.000 0.911 0.884 0.822 0.803 0.799 0.799 0.799 
3.88 0.0 0.0 0.971 1.000 1.000 0.936 0.908 0.845 0.825 0.821 0.821 0.821 4.13 0.0 0.0 0.981 1.000 1.000 0.946 0.917 0.854 0.833 0.829 0.829 0.829 4.38 0.0 0.0 0.990 1.000 1.000 0.955 0.925 0.862 0.841 0.837 0.837 0.837 4.63 0.0 0.0 1.000 1.000 1.000 0.965 0.934 0.871 0.849 0.845 0.845 0.845 10.00 0.0 0.0 1.000 1.000 1.000 0.965 0.934 0.871 0.849 0.845 0.845 0.845 -1.0 Note that the region for the gate being closed was enlarged. During test runs the Salt Creek model had some rather erratic values that went below the ranges in the original table. Thus, they were extended to larger negative values. The rough rule of thumb for filling in the values was as follows: 1. In table 125, the one produced by the INV_GATE command, each row appears to decline to a nearly constant value. 2. The zero entries were filled in using the ratio between non-zero values in adjacent rows that were to the left of a missing value or values. For example, the row with argument 0.30 has a string of missing values following 0.087. The row below it, with argument 0.5, has values in each of the columns that contain a zero in the row above. Thus, the first missing value in the row with argument 0.30 was estimated as follows: (0.087/0.151) x 0.126 = 0.073 This pattern was followed in nearly all cases. If the computed ratio was greater than 1.0, it was set to 1.0. Previously computed values were used as needed to fill in missing values as well. In this manner crude estimates of values could be defined for each cell in the table. This process does not define an optimum table for any purpose; all it does is define a valid table. However, the parts of the table based on the performance of the overflow weir will produce results that are similar to those produced by the overflow weir. As test runs are made with the sluice gate, the performance of the operation will probably not be quite what was desired. 
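The fill rule above can be sketched in Python; `fill_row` scales values from the fuller row below by the ratio taken at the last column where both rows are non-zero, capping the result at 1.0 (function and argument names are illustrative, not part of FEQUTL):

```python
def fill_row(sparse_row, full_row):
    """Fill zero entries of sparse_row from full_row by ratio, cap at 1.0."""
    filled = list(sparse_row)
    ratio = None
    for k, (above, below) in enumerate(zip(sparse_row, full_row)):
        if above > 0.0 and below > 0.0:
            ratio = above / below  # latest usable ratio to the left
        elif above == 0.0 and below > 0.0 and ratio is not None:
            filled[k] = min(1.0, ratio * below)
    return filled
```

With the rows for arguments 0.30 and 0.50 this reproduces the worked example: (0.087/0.151) x 0.126 rounds to 0.073.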
One can then find the general range of the deficiency in performance in terms of flood stage and the difference in water-surface elevations upstream and downstream of the gate. Once this is found, appropriate adjustments can be made to the gate setting in that region. Thus, with multiple test runs, an improved gate-operation rule can be developed. Note: The INV_GATE command has only been tested with the inputs shown in the example. Other input options are generally not supported yet. It is possible, however, to have input of all four time series when the control node differs from the upstream node. That option is supported in the code but has not been tested yet. Also, the number of cells for columns has a limit of 19. Increasing that can be done if needed. The number of rows is currently set at 40. Changing the number of rows only requires changing the parameter value MRDT10 in the file arsize.prm. Increasing the limit for columns above 19 has broad implications. Formats need to be changed, and the line-length limit in the input processing of type 10 function tables must be increased. Version 4.81 May 22, 1998 --Modified output of cross-section tables to the standard user output to include the average value of Manning's n at each depth in the cross section. This can be helpful in checking on variation of Manning's n with changes in depth. Version 4.85 June 18, 1998 --A number of changes were made in the way the various source files are arranged: 1. The COMPROG directory was changed to SHARE 2. COMPROG.FOR was broken into many smaller units 3. ARSIZE.PRM for FEQ and FEQUTL were combined into one file 4. Several .COM files between FEQ and FEQUTL were the same or nearly so. These were adjusted to be the same and moved to the SHARE directory to be used by both FEQ and FEQUTL. 5. One bug was found in FEQ that occurred on an SG computer. No other compiler encountered the problem. These tasks were done by RS Regan, USGS, Reston. 
I have made check runs of the modified code and have found no differences. However, there may be options not tested that could cause problems. Version 4.86 July 21, 1998 --Found and corrected error in CHANNEL command when stationing in the sinuosity definition table was decreasing. Apparently no one had ever used this option heretofore. Version 4.90 September 8, 1998 --Modified routines for finding critical and normal depth to improve the search for an interval containing a root. May change results slightly since the improved search now locates a smaller interval before the final solution is sought. --Modified the lower limit for the high-head flows in CULVERT to be (TY1HTD +0.1001)*D. If TY1HTD is kept at its default of 1.4, the results will be the same as in previous versions. --Modified the limit used for calling the low-head routines in CULVERT to use the computed limit if either type 1 or type 2 has a limit set. In previous versions both type 1 and type 2 had to have a limit set before the computed limits were used in deciding whether to call the low-head or high-head routines. Might change some flows close to the boundary between high-head and low-head flows. --Added a new command, UFGCULV, that models a sluice gate on the upstream face of a box culvert. There are many restrictions on this command as it is implemented: 1. The barrel of the box culvert must not have an adverse slope. 2. The sill elevation must match the invert of the box culvert at its upstream end. There is currently no provision for a drop from the sill into the culvert. 3. The barrel of the box culvert must be prismatic: constant shape, roughness, and slope. 4. The departure reach must be horizontal and prismatic. In order to support this new command, the CULVERT command was also modified. Flows through the culvert not in contact with the lip of the sluice gate are computed in a CULVERT command. 
The results of the CULVERT command computations are stored internally to FEQUTL when the user gives two options following the table number for the CULVERT command. An input fragment is: CULVERT TABLE#= 128 PUTQ= 976 PUTY2= 977 TYPE= 13 LABEL=SIMPLE CULVERT EXAMPLE . . . The two options must appear on the same line as TABLE#, but they can be in any order and spacing. The equal signs must also appear, and be sure the names are spelled exactly as shown. PUTQ gives a table number to be used to store the flows for the culvert in a table of type 13, and PUTY2 gives the table number for storing the depth of flow at the entrance to the culvert, section 2, in another table of type 13. These tables do not appear in the function-table file. They are stored in the function-table storage system within FEQUTL. The CULVERT command must appear before the UFGCULV command so that these two tables are known. The CULVERT command must use the same tables for approach and departure sections as the UFGCULV command as well. The barrel description must match, and there can be NO flow over the roadway. Any flow over the roadway must be modeled using a separate EMBANKQ command. The input for UFGCULV is a slightly modified form of the input for UFGATE. An example follows: UFGCULV TABLE#= 580 GETQ= 976 GETY2= 977 LABEL= Sluice gate on culvert APPTAB= 4 DEPTAB= 5 SILLELEV= 50. GATEWIDTH= 30. CD= 0.98 CCTAB=200 FWFOTRAN= 0.1 MAXHEAD= 25.0 MINHEAD= 0.2 PRECISION= 0.02 Opening 2-D Table Cc Value Lip Angle 0.2 5801 0.2645 5802 0.3497 5803 0.4625 5804 0.6116 5805 0.8087 5806 1.0694 5807 1.4142 5808 1.8701 5809 2.4730 5816 3.2702 5819 4.3245 5818 5.7186 5810 7.5621 5817 10.0000 5811 -1.0 SFAC= 1.0 NODE NODEID XNUM STATION ELEVATION 100 TESTCLVU 999 0.00 50.00 TESTCLVD 999 100.00 50.00 -1 Partial free drop parameters MINPFD= 0.01 BRKPFD= 0.5 LIMPFD= 0.99 FINPOW= 2.0 Two new options follow the table number input: GETQ and GETY2. These are the counterparts to PUTQ and PUTY2 in the CULVERT command. 
The same rules apply as in the CULVERT command. The table numbers obviously must match those used in the CULVERT command. The other input is the same. Currently the input for CD is required, but UFGCULV does not use the value. The lip-angle options for tainter gates are not currently supported. They may be added in the future if a tainter gate is placed on a box culvert entrance. The barrel description follows the end of the gate opening table. It follows a format similar to that used in the CULVERT command. However, special losses are not supported at this time. Since the barrel must be prismatic, only two points on the barrel can be given: the barrel entrance and the barrel exit. You must be sure that the length, cross-section description, and the invert elevations match those used in the CULVERT command. The contraction coefficient for the gate can vary with the gate opening relative to the head at section 1. Head is always measured from the entrance invert of the culvert, which must match the sill elevation for the gate. The invert elevation for the approach cross section and the departure cross section comes from the cross section tables. Also, the distance between the approach cross section and the culvert entrance is determined from the difference between the station at the entrance and the station of the approach cross section. Currently the stations along the barrel must increase from entrance to exit. In the above example, the station of the approach cross section would be negative to yield a positive distance when the difference is computed. Some special problems have been encountered in convergence of the iterations for the non-linear system of equations. This appears to arise because the tables for the UFGCULV command are not as smooth as those for the UFGATE command. In the UFGATE command it proved possible to force continuity at all of the transitions between flow types or patterns. 
It has not proved possible to do the same in UFGCULV because the flow in the barrel complicates the flow patterns. Thus, some transitions are rather abrupt, especially in the changes that take place in the first derivatives used in the solution of the non-linear equation system. It has proved necessary to use the SQREPS option. This option is usually set to 1E25 or so, so that it does not have an effect on the solution. The value to set will depend on the nature of the model and the nature of the convergence failure. In the small model used for testing, the failure was clearly located at upstream and downstream exterior nodes involved in the specification of the underflow gate. The table for the UFGCULV command is used just as if it came from the UFGATE command. The pattern in the iteration log was that the maximum relative correction stayed about the same size iteration after iteration. An example is: SIMULATION ending at 1990/ 1/ 2/ 20.501 with time step of 3.5 sec ITER RCORECT BRA NODE MXRES LOC SUMSQR NUMGT 1 1.5E-01 1 117 3.0E+02 35 1.1E+05 3 2 1.4E-01 2 201 3.6E+02 35 1.3E+05 3 3 1.7E-01 1 117 3.2E+02 35 1.0E+05 3 4 1.4E-01 1 117 3.6E+02 35 1.3E+05 3 5 1.7E-01 2 201 3.2E+02 35 1.0E+05 3 The control structure was located between the dns end of branch 1 (node 117) and the ups node of branch 2 (node 201). Note that the relative correction (RCORECT column heading) is nearly constant. The maximum residual (MXRES column heading) was also nearly constant, and the location of the maximum residual was constant. The column headed by SUMSQR is the key to the selection of the SQREPS value to try to force convergence. In the current example a value of 1E4 was used. This forced a few partial Newton corrections, the computations converged quickly, and the time step was only reduced for a short time. The value may be much larger in a larger model. 
If the value for SQREPS is made too small, there will be too frequent partial Newton corrections and the time step may again become small enough to abort the run. If it is picked too large, no partial Newton corrections will be forced, and the run will fail. In the current test model, a value of SQREPS of 1E3 worked, and a value of 5E4 worked. However, a value of 1E5 failed since it is essentially the same as the values reported in the iteration log. There are some special problems that arise when the gate is fully open to the vertical diameter of the box. The default parameters for type 1 flow may cause the flow type in the CULVERT computations to change from type 1 too soon. The default parameters are now set for both pipe and box conduits. For a box culvert the values can be increased so that the depth at section 1 is nearly the same as the vertical diameter before the flow type switches from type 1. This discussion assumes that the barrel of the culvert is steep, so that type 1 flow will exist until the entrance is submerged, leading to type 5 flow in most cases. Optional parameters exist in the CULVERT command to change the default limits for type 1 flow. The input fragment from a CULVERT command example shows the position, names, and approximate values for the parameters that have worked. TY1YTD gives the ratio of depth at section 2 to vertical diameter, D, of the culvert at the type 1 upper limit. TY1HTD gives the ratio of head at section 1 to the vertical diameter of the culvert at the upper limit for type 1. The former limit often determines a limit that is smaller than the latter limit. For example, the initial output from CULVERT showed that the upper limit at section 1 for Type 1 flow was 1.549D as defined by the section 2 value. Making the section 2 value closer to D, that is, 1.00, will yield larger values for the head-to-diameter ratio. However, care must be used. If TY1YTD is made too close to 1.00, we will be computing critical flow in the slot. 
Thus, the value of TY1YTD should be such that the top width at that level from the cross-section table produced by the BOX option in MULCON will still be the width of the box and not the smaller widths as the vertical diameter is approached. The BOX option puts a small slope on the top of the BOX to avoid a discontinuity in the top width. The maximum value for TY1YTD should be at or below this level.

. . .
TYPE 5 PARAMETERS
RBVALUE= 0.00 BVANGLE= 0.00 WWANGLE= 0.00 LPOVERD= 0.00 TYPE5SBF= 0.75
TYPE 1 PARAMETERS
TY1YTD=0.997 TY1HTD=1.59
Roadway Description
. . .

There are also some special output messages that may appear in the tables written to the user output file. In some cases submergence of a FW flow by the tailwater will lead to a submerged orifice (SO) flow based on the relations among the headwater, tailwater, and gate opening. However, computation assuming that the gate is submerged will show that it is not quite submerged. If such is the case, a message "SO FLOW is not SO!" will be given. In some cases a flow value will be followed by an asterisk (*), indicating that the assumption of SO flow led to critical depth at the culvert exit. The former message would normally appear for steep culvert slopes, and the * may appear for mild-slope or horizontal culvert barrels. In the former case, the flow is given the free-flow value and the computations continue. In the latter case the flow computed assuming critical depth at the culvert exit is reported and the computations continue. The UFGCULV command does not include the effect of gravity on the momentum balances used in the sloping culvert barrel. This may distort the results, especially for steep barrels. Experimentation with the adjustments for the effect of barrel slope will indicate whether the results are reasonable. The hydraulic jumps are assumed to have zero length.
In some cases, especially with submerged jumps, the high-velocity jet of water persists for some distance downstream, often exceeding the length of a free jump. This means that the momentum flux in the culvert barrel is not well defined. The error estimation and control for the UFGCULV command is not as well developed as for the UFGATE command. Therefore, the additional points in the computation that are used to estimate the errors are retained in the output of the table, which generally results in error estimates that are too large. The UFGCULV computation takes more run time; therefore, intermediate output is printed to the user terminal during execution.

Version 4.91 October 28, 1998
--Found an error in UFGCULV in which SFAC was not applied to the station found at the approach section. If SFAC > 1, this caused problems.
--Found a case at low heads at which a zero flow was computed when the drop was small. This caused a failure during the computation of the local power, P, involving logarithms. Made a quick fix. Must check later why the flow was computed as zero when it should have been slightly greater than zero. However, this portion of the table is unlikely to be used and the error in any case is small.

Version 4.92 January 27, 1999
--Modified various commands that output 2-D tables to place a source string after the head datum value. This is used to check tables created from WSPRO data in FEQ. Other source values were added in case they prove useful.

Version 4.93 4 February 1999
--Added a source flag for cross-section function tables. If the flag is 0, the table was input to FEQUTL, and if the flag is 1, the table was interpolated within FEQUTL. Caused a change of XTIOFF to 21 in arsize.prm.

Version 5.00 30 July 1999
--The change to using a function-table id rather than a table number requires that many references to tables in the input to FEQUTL be changed. In some cases, no change is needed if the standard headings have been used.
In some cases the standard headings must be modified to give the correct result. FEQ and FEQUTL currently use a variety of input formats. Here is a brief summary of the various formats and their definitions and distinctions.

Format name        Description
------------------ ----------------------------------------------------
fixed format       Response items MUST fall within predefined fixed ranges of columns. This was the only format in the early versions of these programs. This format has a fixed order for the items AND a fixed set of columns for each item. It is the most rigid and has no user control available.

heading-dependent  This is the name given to a format in which items are given in a fixed order and each must appear in a given range of columns, but the user defines the range of columns by giving a heading for each item. The heading must not have any spaces in it. The range of columns for an item is from the last character in its heading to the column following the last character in the heading for the preceding item (using left-to-right order). If there is no preceding item, the first column for the item is column 1 on the line. This option has greater flexibility than the fixed format in that the range of columns for each item is under user control, even though the order of items remains fixed. However, the user must make sure that each item falls inside its valid column range.

column-independent Column-independent input still has a fixed order that cannot be changed, but now the items need only be separated by one or more spaces or by a single comma. This gives greater flexibility, but blank fields do not exist, so no item can be skipped in the sequence. Either a value must be given or a place holder, the asterisk in our case, must be given. In some cases the context cannot make sense of an asterisk, so a value must be given for each item, even if the item is to default to zero.
named-item         Named-item input has the most flexibility because items can be in any order or can even be left out if the default value is suitable. However, the penalty for this additional flexibility is that each item's name, a predefined identifier, must be given with the item. All previous forms use an implied item by virtue of the order of the input. Now that the order is under user control, the name of the item must be given so that the software can associate the value with its proper internal variable.

The following commands have had changes in their input processing. In some cases the old inputs will be processed properly without change. In other cases the old inputs must be changed because they are incompatible with the new method of processing the input.

CULVERT The culvert barrel description must use NODEID=YES. The NO option is no longer supported. In addition, the input of the barrel is now taken to be in heading-dependent format. The standard headings, if used, should be suitable. The DEPTAB specification may have to be changed if you gave a value for BEGTAB or RMFFAC. If either of these two optional values has been given, you must use named-item input. That is, each of the three items must have the standard name followed by an equal sign followed by the value. For example, DEPTAB= 568 BEGTAB=570 RMFFAC= 0.8. The default value for BEGTAB is 0 and for RMFFAC is 1.0.

EXPCON The standard input will work without change, but the left-justified heading for the column that contains the label for each table will result in the label being truncated to five characters. The heading for that column should be extended to the right so that the heading for the label column has 50 or more characters in it. Dashes, underlines, or any other printing character on the keyboard will probably work. Dashes and underlines have been tested. Of course digits and letters will work.

AXIALPUMP and PUMPLOSS These inputs have been changed to heading-dependent format.
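The heading-dependent rule described above (each item's column range runs from the column following the last character of the preceding heading through the last character of its own heading, with the first item starting at column 1) can be sketched as a minimal parser. The function names and the sample heading are hypothetical illustrations, not part of FEQUTL:

```python
def heading_ranges(heading_line):
    """Compute (start, end) column ranges (0-based, end exclusive) for each
    heading. An item's range runs from the column after the end of the
    preceding heading through the end of its own heading; the first item
    starts at column 1 (index 0)."""
    ranges = []
    prev_end = 0
    i = 0
    while i < len(heading_line):
        if heading_line[i] != ' ':
            j = i
            while j < len(heading_line) and heading_line[j] != ' ':
                j += 1                      # scan to the end of this heading
            ranges.append((prev_end, j))    # field ends with its heading
            prev_end = j
            i = j
        else:
            i += 1
    return ranges

def parse_line(data_line, ranges):
    """Slice a data line into items using the heading-derived ranges."""
    return [data_line[s:e].strip() for s, e in ranges]

heading = "  NODE NODEID      TABID"
ranges = heading_ranges(heading)
print(parse_line("   100 EX1    MYTAB.93", ranges))  # ['100', 'EX1', 'MYTAB.93']
```

This also shows why a heading must contain no spaces: a space would split one heading into two fields and shift every range after it.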
These commands have four lines in each heading: three lines of descriptive text with a fourth line of intermittent dashes. The line of dashes above each column of the input is taken to be the heading for that column. Therefore, the standard headings will work without change.

ORIFICE This input has been changed to heading-dependent format. The standard headings will work because none of them contains spaces.

UFGATE The standard headings for the sequence of gate openings DO NOT work because they have spaces in them. The spaces should be replaced by a dash or an underline.

XSINTERP The input for XSINTERP actually uses the barrel input routine for the CULVERT command. Consequently XSINTERP now requires that NODEID=YES; NODEID=NO is not supported. Thus, this input must have a column for the node id added. The contents of this column may be blank.

FEQX, FEQXLIST, FEQXEXT The input for the header block, that is, all input that precedes the specification of the points on the boundary, has been named-item since version 4.70. However, most users have probably not taken advantage of all of the flexibility this offers. Complications have been found in some cases for the STATION and for the information following NSUB. If the STATION response is left blank, interpreted as 0.0 in versions before 4.70, the newer versions will complain that an input item is missing. Recall from the definition above that a blank response does not exist in named-item input because one or more spaces are used to separate the various items and their responses. The default value for STATION is 0.0. However, there are cases, such as the old bridge routine, in the bridge-opening table, where the STATION has been left blank. That will now cause an error, so the STATION may not be able to be left out. I have not tested that option since the old bridge routine is not actively supported at this time. The problem with NSUB occurs when more values of Manning's n are given than the number of subsections.
This will cause a failure with the new code. The correction is to delete the extra values of n, increase the count, or leave out the count of subsections completely. In the latter case, remember to use the slash, /, at the end of each line if additional values of n appear on the following line.

EMBANKQ The newest code will fail in processing the line of input that gives the table number (id) for the command if the optional TYPE, HLCRIT, or HLMAX are given. FEQUTL used to treat these as fixed-format input. FEQUTL now treats the line as being named-item input. The TYPE defaults to 13 and can be omitted if that is your desired type. HLCRIT and HLMAX normally only appear if some non-embankment weir flow is being computed. Any items that appear on this line must be in the named-item format.

A program, CONVERTUTL, is available to convert existing input to the new form. This program reads all the lines of the input, detects the cases that need changing, and makes the changes in its output file. The program assumes that headings have been used and that they are in the proper position on the line. Deviations from this assumption may cause the program to fail or to produce an output that will cause an error when FEQUTL processes the new input. Some problems may have to be fixed manually as the new version of FEQUTL is run.

--Error messages. In processing table ids, FEQUTL assigns an internal table number to each table. This internal number is then used everywhere like the id for the table. When an error message is given that contains a table number, FEQUTL is supposed to convert the internal table number to the external table id in the message. It is probable that some error messages have not been corrected and will give the internal table number and not the table id. FEQUTL reports the internal table number assigned to each table that is read from an input file.
If an error message appears with a table number that does not exist or does not make sense, scan the output file for the section where the function tables are input. The reported number may be an internal number that was not converted. Also, make a note of the error message number and report the problem so that the message can be corrected.

--The first three lines of the header block for an FEQUTL input can be deleted. These lines have not been used for some time and FEQUTL has now been changed to recognize their absence. Again the ability of FEQUTL to do this depends on the standard names being used; that is, the first line MUST start in column 1 with UNITS=.

--The option to specify exact or nearly exact values for the Manning's n factor NFAC, and for GRAV, the acceleration due to gravity, has been enhanced a bit. These specifications occur as options after the UNITS= option. Note that this input looks like named-item input but it is fixed-format input. The input UNITS= ENGLISH will get a value of NFAC=1.49 and GRAV=32.2. If the metric system is requested by the option UNITS= METRIC, FEQUTL will use NFAC= 1.0 and GRAV=9.8146. The value for NFAC for the metric system is exact but its value for the English system is approximate. Thus, if we try to make a comparison between the results obtained with FEQUTL for the same geometry in the English and in the metric system of units, we will find small differences from this source alone. Consequently, the option to get exact or nearly exact values for NFAC and GRAV was implemented. In order to define GRAV, which technically varies from point to point on the surface of the earth, we require an input of the latitude and elevation. These values can be some mean value for the area being modeled. This definition of GRAV is, of course, optional since the variation of gravity at the elevations normally encountered for unsteady-flow simulation is a fraction of one percent.
The latitude is given in degrees and the elevation in feet (meters) above the sea-level datum. To request these values we give the line

UNITS= METRIC EXACT 45.0 0.0

The first number given is the latitude and the last is the elevation. Thus, here we are requesting the value of GRAV at latitude 45 degrees and elevation 0.0 meters. The resulting value is GRAV=9.8062. If we change to the English system we use

UNITS= ENGLISH EXACT 45.0 0.0

and get NFAC=1.485919 and GRAV=32.1726. In order to show the proper location of the responses, the example file for FEQUTL, FEQUTL.EXM, contains the following line

UNITS= ENGLISH NOMINAL 45.0 0.0

which gives the same values as when UNITS= ENGLISH is used alone.

--FEQUTL now reports the version number, the version date, and the date/time of the run at the start of the user output file. The time is given in the format: hour.minutes.seconds.milliseconds.

Version 5.01 17 November 1999
--Found a problem in subroutine RDUP when the range being used included negative values that were small enough that the arithmetic average of the minimum value and maximum value was negative. Changed the code to take the arithmetic average of the absolute values of the minimum and maximum of the range as the basis for comparison. This should not affect the decision for eliminating duplicates from a list of elevations.

Version 5.03 15 December 1999
--Bottom-slot depth is now included in the cross-section function tables so that the depth relative to the original invert can be reported in FEQ. If no slot was added, the bottom-slot depth is 0.0.
--Added interpolation of bottom-slot depth, Easting, and Northing for interpolated cross sections.
--Modified the initial root-interval definition in SBFCHN, the routine used to define the submerged flows for CHANRAT. The previous root-interval definition sometimes caused false root-finding failures. The new method, however, may fail in yet other cases.
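The release notes do not state which formula FEQUTL uses for the latitude- and elevation-dependent GRAV. A common approximation that reproduces the metric value quoted in the UNITS= METRIC EXACT example above is the 1967 International Gravity Formula with a free-air elevation correction; this is an assumption for illustration only, not FEQUTL's documented method:

```python
import math

def gravity(latitude_deg, elevation_m):
    """Approximate acceleration of gravity (m/s^2) from latitude and
    elevation, using the 1967 International Gravity Formula plus a
    free-air correction. FEQUTL's actual formula may differ slightly."""
    phi = math.radians(latitude_deg)
    g_surface = 9.780318 * (1.0
                            + 0.0053024 * math.sin(phi) ** 2
                            - 0.0000058 * math.sin(2.0 * phi) ** 2)
    # Free-air correction: gravity decreases with elevation above the datum.
    return g_surface - 3.086e-6 * elevation_m

# Latitude 45 degrees, elevation 0.0 m, as in the UNITS= METRIC EXACT example.
print(round(gravity(45.0, 0.0), 4))  # 9.8062, matching the GRAV quoted above
```

Because the elevation term is about 3e-6 m/s^2 per meter, gravity varies by only a fraction of one percent over elevations normally encountered, consistent with the text's remark that the EXACT option is optional.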
Decreases in critical flow with increases in maximum flow depth MAY cause failure or strange results.

Version 5.04 5 January 2000
--Modified the CHANNEL command to allow setting and clearing of the bottom slot within the command. The presence of the slot will affect the curvilinear elements slightly, but the effect should be small if the slot is small relative to the cross section.

Version 5.05 13 January 2000
--Found minor bugs:
1. Dummy argument length and actual argument length did not agree in processing input for cross-section headers. Could have caused problems if a cross section had more than 98 subsections.
2. If the interpolation accuracy tables were missing, CHANRAT would enter an endless loop and not report the error. The run is now stopped if the tables are missing.

Version 5.06 18 January 2000
--A roughness factor greater than 0.60 in the CULVERT command will cause failure of a table lookup. A warning message is issued and the factor is limited to 0.60.
--Found a bug in the default values in the EMBANKQ command. CSHIFT was defaulted to 1.0 when it should have been defaulted to 0.0. This resulted in an increase of crest elevation for the embankment of 1.0 feet.
--Changed warning 557 for cross-section computation so that it appears the first time it applies and is then suppressed. Typically a vertical-wall extension that is a single subsection will cause ten or more nearly identical messages to be issued. Now only one will be issued.
--Changed the behavior of EMBANKQ and CHANRAT with automatic assignment of breakpoints to avoid a system error when the maximum number of breakpoints was found. FEQUTL now halts instead of trying to continue.

Version 5.07 4 February 2000
--Found problems with CHANRAT using the modified root-interval finding method introduced in 5.03. Replaced it with the original method. Added a root-interval search to check the results of the old method. Tested on about 75 CHANRAT commands.
With positive slopes there were some cases where the detailed search did not find a root interval but the original method converged to a value that appeared to make sense with regard to the adjacent values. The convergence would be obtained with argument collapse; that is, the arguments at the ends of the interval that is thought to contain a root are so close together that it no longer makes sense to continue the computations. The root-finding routine issues a warning if the residual at argument collapse is more than twice the relative function tolerance, EPSF.

Version 5.09 18 February 2000
--Corrected a bug in tracking internal accounting of function tables. Added internal generation of tabids to correct the problem in FEQUTL. The bug would sometimes cause failure of a run with a spurious duplicate function-table number message. The internal tabids have been generated in a form that is invalid as an external table id. Thus, they will never conflict with a user table id. The internal table ids can appear in error messages about an interpolated table. The internal table ids contain a sequence number that is unique for each id. The interpolation routine adds the starting and ending internal tabid numbers to its output. Thus, it is possible to locate which table is the source of the error even though the table was generated from a request for interpolation with the minus sign in the tabid field.

Version 5.10 13 March 2000
--Added a new command, MKWSPRO, to create a WSPRO cross section from an FEQ cross section. It does not support Manning's n varying with water level. An example command is:

MKWSPRO NAME=EXIT
FEQXEXT
. . .

where the triple dots following FEQXEXT indicate the remainder of the FEQXEXT command input. The NAME field gives a name to the WSPRO cross section. This must be chosen to match the names needed in the WSPRO runs and their subsequent analysis using the WSPRO set of commands in FEQUTL.
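The argument-collapse behavior described under version 5.07 can be sketched with a simple bisection search, under the assumption that collapse amounts to stopping when the bracketing arguments become numerically indistinguishable and warning when the residual still exceeds twice the tolerance. Everything except the EPSF role is a hypothetical illustration, not FEQUTL's actual root finder:

```python
def bisect_with_collapse(f, a, b, epsf, epsarg=1e-12, max_iter=200):
    """Bisection that reports 'argument collapse': the bracket has shrunk
    to nothing, but the residual may still exceed the tolerance.
    Returns (root_estimate, collapsed, warning)."""
    fa, fb = f(a), f(b)
    if fa * fb > 0.0:
        raise ValueError("interval does not bracket a root")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        # Argument collapse: the ends of the interval are so close together
        # that continuing the computations no longer makes sense.
        if abs(b - a) <= epsarg * max(1.0, abs(a), abs(b)):
            warning = abs(fm) > 2.0 * epsf   # residual still too large?
            return m, True, warning
        if abs(fm) <= epsf:
            return m, False, False           # normal convergence
        if fa * fm <= 0.0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b), False, False

root, collapsed, warn = bisect_with_collapse(lambda x: x * x - 2.0, 0.0, 2.0, 1e-10)
print(round(root, 6))  # 1.414214
```

The point of the warning flag is the one made in the text: convergence by collapse alone is suspect when the remaining residual is large relative to EPSF.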
--Fixed a minor bug detected by the LF95 compiler when running with the chk option turned on. This did not affect any results and only appeared because the chk option highlights things that might cause problems.
--Added a command to create the EMBANKQ embankment-crest description from the (x,y,z) trace of the crest and the approach taken from a digital terrain model or from ground survey. See the example input file for the format details.

Version 5.11 14 April 2000
--Increased the length allowed for a fully qualified file name in the FTABIN command to 128 characters. The previous limit was 64 characters.

Version 5.2 11 July 2000
--Added an option to XSINTERP for definition of the coordinate location of the intersection of the cross sections with the river-mile-defining flow line. This means that the river-mile-defining flow line must be given to the XSINTERP command! This flow line must be the exact flow line used in defining the stations for the cross sections used in the XSINTERP command as the basis for the interpolation. Example input with the optional flow-line data provided:

XSINTERP SFAC= 5280.
NODEID=YES
FLNAME=RAFLMM
FLFILE=\nooksack\lower\futl\r1\xsec\flbnk.xy
NODE NODEID -----------TABID STATION ELEVATION
 100             RAXSAB_M.93  4.6189       TAB
 101             -RAABBK_M.1       -
 102             -RAABBK_M.2       -
 103             -RAABBK_M.3       -
 104             RAXSBK_M.93  4.4781       TAB
. . .

Two additional lines are inserted after the NODEID-option input line. The first line gives the six-character name for the flow line. This name must appear in the file given by the following line of input. This file will contain one or more flow lines used by the RMASSIGN program. A fragment of this file is:
. . .
RAFLMM ;This is the main river-mile flow line
10
 RM_ORIGIN  S_ATRMORG  RM_FACTOR    S_BEGIN
    1.3554    5413.95     5280.0        0.0
RAFLMM1000 1215725.3561 653637.4225 0.00 PI
RAFLMM1001 1216036.9169 653932.5854 0.00 PI
RAFLMM1002 1216209.0953 654260.5441 0.00 PI
RAFLMM1003 1216323.8809 654449.1204 0.00 PI
RAFLMM1004 1216479.6613 654596.7019 0.00 PI
RAFLMM1005 1216774.8242 654711.4875 0.00 PI
. . .
END

The river-mile-defining line is used to define the (x,y) location of the interpolated cross sections. These data are needed for drawing a geographic schematic of the model. If these items of information are missing, the schematic cannot be produced.

Version 5.21 5 August 2000
--Found a bug in error detection for non-increasing offsets in the roadway-profile-description input routine for EMBANKQ. Statements existed for checking for non-increasing offsets, but one statement was missing, so non-increasing offsets would not be detected. Such offsets are in error and sometimes cause enigmatic failures later in the computations. The missing statement was added to the code, so EMBANKQ inputs that have non-increasing offsets will no longer complete successfully.

Version 5.23 6 September 2000
--Modified the input to CHANRAT so that a response of TAB or tab for the middle elevation value will find the invert elevation from the cross-section table defining the rating. This option was added to save time in defining the input for the command. Otherwise one has to search for the minimum elevation when the cross-section function table is being computed and does not already exist.

Version 5.25 3 October 2000
--XSINTERP did not output the slot-depth value for interpolated sections even though the depth was calculated. Added output of slot depth based on linear interpolation.

Version 5.30 30 October 2000
--Added a new command, LPRFIT, to compute estimates of surface area for a level-pool reservoir when only the storage capacity is known. The following example outlines the pattern of the input.
The line numbers in the first two columns are NOT part of the input. Column 1 is given by the starting character in LPRFIT.

01 LPRFIT
02 TABID= LPR1
03 FIT_WITH = VLSPLINE
04 CHK_OPTION =NATURAL
05 LEFT_SLOPE= 0.002
06 RIGHT_SLOPE = PARABOLIC
07 INFAC = 27.0    ' Input values are in cubic yards.
08 OUTFAC = 43560. ' We want the output values to be in acre-feet
09
10 Elevation   Storage
11     57.00      0.00
12     58.00     15.96
13     59.00    357.20
14     60.00   2153.56
15     61.00  10858.47
16     62.00  31139.85
17     63.00  65670.90
18     64.00 109513.92
19     65.00 161828.04
20     66.00 223869.39
21     67.00 294804.77
22     68.00 373155.19
23     69.00 457841.99
24     70.00 548079.82
25     -1.0

The items of input in lines 2 through 8 are named-item. TABID gives the table id of the table of type 4 created by LPRFIT. FIT_WITH gives the fitting option used to estimate the missing values of surface area. The options are: CSPLINE-use a cubic-spline fit to the elevation and storage data and compute the slope at each point as an estimate of the surface area; VLSPLINE-use a variation-limited cubic-spline fit in which the slopes computed using a cubic spline are adjusted to force the storage to be monotone between tabulated points; and PCHERMITE-estimate areas by using three-point derivative estimates at each tabulated value, using the central of the three points at locations away from the boundary. The most reliable method appears to be VLSPLINE because it yields at least increasing values of storage if the given data are also increasing. CHK_OPTION gives the checking that is to be done on the results. The options are: NATURAL-assumes a natural LPR in which the storage and surface area always increase with an increase in water-surface elevation; CONSTRUCTED-in which only the storage is known to increase with elevation; and NONE-no checking of either storage or area is done. LEFT_SLOPE gives the initial area at the first argument in the data series.
The options are: LINEAR-compute the slope (area) from the first two points in the data series and impose it at the left end; PARABOLIC-compute the slope from the first three points in the data series and impose it at the left end; and numeric, where the user gives an actual numeric value for the slope. The slope is given in the proper units as they exist in the final table. RIGHT_SLOPE gives the final area at the last argument in the data series and has the same options as LEFT_SLOPE. INFAC is a conversion factor for the values of storage as input. In this example the storage values are in cubic yards. Thus, INFAC will convert the storage to cubic feet on input. OUTFAC is a conversion factor to use on output of the values. In this case we want acre-feet tabulated in the final table, and thus OUTFAC=43560. ARGFAC, not used here, is a conversion factor on output of the argument sequence. It can be used to convert units in the argument sequence.

All of the named-item options have default values. Some are designed so that their use will cause an error. This is to prevent invalid dependence on default values. The defaults are:

TABID= (that is, blanks; using this will cause an error)
FIT_WITH = VLSPLINE
CHK_OPTION = NATURAL
LEFT_SLOPE = LINEAR
RIGHT_SLOPE = LINEAR
ARGFAC = 1.0
INFAC = 27.0
OUTFAC = 43560.

You may have to adjust the LEFT_SLOPE and RIGHT_SLOPE to get a valid variation of area if the CHK_OPTION is NATURAL. It may not be possible to get a valid variation of area in this case, given the data and the tabulation interval. If the decrease in surface area is small relative to the area, then allowing decreases in surface area will probably not distort the results greatly. Generally the best way to get a valid variation of the surface area is to retain the zero-storage point and delete intermediate points that have small storage. These small storage values are often uncertain in any case.
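The estimate underlying the PCHERMITE option (surface area as the derivative of storage with respect to elevation, from three-point derivative estimates that are central away from the boundary) can be sketched as follows. This is a sketch of the general technique, not FEQUTL's code; the function name is hypothetical, and the unit factors mirror the INFAC/OUTFAC roles described above:

```python
def areas_from_storage(elev, storage, infac=1.0, outfac=1.0):
    """Estimate surface area as dS/dz using three-point (parabolic)
    derivative estimates: one-sided at the ends, central elsewhere.
    infac scales storage on input; outfac divides areas on output,
    mirroring the roles of INFAC and OUTFAC described in the text."""
    s = [v * infac for v in storage]
    n = len(elev)
    area = []
    for i in range(n):
        if i == 0:                       # left boundary: forward triple
            j0, j1, j2, x = 0, 1, 2, elev[0]
        elif i == n - 1:                 # right boundary: backward triple
            j0, j1, j2, x = n - 3, n - 2, n - 1, elev[-1]
        else:                            # interior: central triple
            j0, j1, j2, x = i - 1, i, i + 1, elev[i]
        x0, x1, x2 = elev[j0], elev[j1], elev[j2]
        # Derivative of the quadratic through the three points, evaluated at x.
        d = (s[j0] * (2 * x - x1 - x2) / ((x0 - x1) * (x0 - x2))
             + s[j1] * (2 * x - x0 - x2) / ((x1 - x0) * (x1 - x2))
             + s[j2] * (2 * x - x0 - x1) / ((x2 - x0) * (x2 - x1)))
        area.append(d / outfac)
    return area

# For storage growing as z**2 the estimate is exact: area = 2*z.
print(areas_from_storage([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0]))
```

Because the derivative is only as smooth as the fitted slopes, small or uncertain storage values near zero can produce non-monotone areas, which is exactly the situation the CHK_OPTION=NATURAL checking is meant to flag.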
The minimum non-zero storage in the input should be one acre-foot or more for maximum capacities on the order of a few hundred to a few thousand acre-feet.

Version 5.31 21 December 2000
--The low-head weir coefficient in EMBANKQ and in flow over the road in CULVERT was improperly allowed to vary with the submerged total head as an argument in computing submerged weir flow. The argument should have been held fixed at the free-flow total-head value. Limited testing shows that the effect is generally smaller than 0.05 percent of the flow. Some submerged flows less than 10 cfs appear to be affected by less than 0.5 percent. The only values affected are those in which a significant approach velocity head is present. The flows are unaffected if the velocity head is negligible.

Version 5.32 22 January 2001
--An erroneous error message was printed if UFGATE had fewer than two gate openings given. The correct message was added.
--An error existed in computing the LEFT and RIGHT options for cross sections whenever either or both points of intersection did not match an existing point on the cross section. A check for consistency of elements in the cross-section function table before it was output should have caught any of these errors that caused significant differences in the computed elements.
--Additional screening code was added to FEQUTL to make transfer of a project to UNIX/LINUX easier. FEQUTL has been modified to change any end-of-line carriage-return character to a space.
--Relaxed the tolerance in CHKTAJ when checking the consistency of area and top width. Small changes in area would cause a reported bug when the differences were clearly roundoff noise.

Version 5.35 9 February 2001
--Discovered that interpolation for intermediate cross sections when a bottom slot was present did not work well. The slot was sometimes greatly distorted at interpolated sections. The interpolation process was revised to take the bottom slot into account.
In order to make this process reliable, a slot must be present at both sections bounding the interpolation interval if any slot is present at all. That is, two cases are valid for interpolation: no slot in either section, and a slot at both sections. An error is reported and execution halts if only one of the sections bounding the interpolation interval has a bottom slot.
--EMBANKQ now defines the minimum head for automatic argument definition using a MINQ input in the standard header. If omitted, MINQ is set to 0.2 cfs and its equivalent in cms for SI units. You must still give a minimum head, and it is used in defining the new minimum head. This was done because having too large a minimum head when flow over the structure begins has led to computational problems. This is especially true of surfaces that cause the initial flow to increase at approximately the 2.5 power of the head. The global value of MINQ can be overridden in the input to EMBANKQ by giving MINQ=<value>. The angle brackets should be replaced by your desired value. For example, MINQ= 0.5. This must be given on the same line as TABID.

Version 5.36 15 March 2001
--Changed the precision of the alpha and beta printout in Type 25 cross-section function tables. Also, changed the output to standard output for FEQUTL for computation of estimates of critical flow.
--CHANRAT now defines the minimum head for automatic argument definition using a MINQ input in the standard header. If left out, a value of 0.2 cfs or its metric equivalent is assigned. No override of the global value is allowed in CHANRAT.

Version 5.38 3 October 2001
--Added the FHWA option for computing the discharge coefficient for Type 5 flow in culverts.
--Changed the message in LPRFIT when the user declined validity checking.

Version 5.39 4 February 2002
--Added the EQK option to FLOODWAY and updated the input of the table to be heading-dependent so that table ids are supported. Further changes to FLOODWAY for an EQA option are underway.
Note that it appears that the implementation of FLOODWAY requires that the FLOODWAY command be the only command in the input, followed only by the cross-section processing commands required for it.

Version 5.40 8 May 2002
--Added output of GISID to the final summary for the FLOODWAY command. In order for the GISID to be found, you must request that every cross section be saved using one of the SAVE options, such as SAVE20. If this is not done, the GISID column entry will be blank. Note that FLOODWAY should be run as the only command, with only the cross-section processing commands required for it. It may also work if the FLOODWAY command and the related cross-section commands are first in the input.
--Modified the command-line argument processing so that a single argument will be taken to be the user-input file. The remaining two file names will be formed from the user-input file name by stripping off the extension (the last one if there is more than one dot (.) in the name), and appending 'out' for the user-output file and 'tab' for the function-table file. For example, if FEQUTL is invoked as: fequtl culvert.in, the output file name will be: culvert.out, and the function-table file name will be: culvert.tab.

Version 5.41 17 May 2002
--Added additional checking of the command-line arguments. This has been done in the hope of avoiding cryptic operating-system messages.
--Checked for matches among the three command-line arguments. All must be unique to avoid destruction of the user-input file or confusion in the output as the O/S writes to one file as if it were two files.
--The HOME directory option is available in FTABIN. Details for this option are available in the release notes for FEQ. Briefly, HOME allows you to give a character string that is prepended to every file name that follows it in the FTABIN block.
It is assumed that prepending this string to the subsequent names will create a valid file name with either a partial or complete path that is correct for the operating system you are using. This will make it easier to transfer projects from one system to another, but only if one follows a standard pattern in creating the input to FEQ and FEQUTL. For example, if the project files are all located on one drive, say, D, then leaving D: off the names and placing HOME=D: at the start of the FTABIN block will give the correct drive. Later, if the project is transferred to another computer and the project files are now on the G drive, only a few instances of HOME have to be changed to make the transfer.
Version 5.42 3 July 2002
--Found some annoying problems with arithmetic precision when computing cross-section table elements in a slotted section. Differences in optimization or compilers would produce slightly different results. In some cases the checking of the top width (T), area (A), and first moment of area (J) would flag a potential BUG because the check showed differences in the incremental values that exceeded the preset tolerance. The tolerances have been increased and the compilers have had the ap flag added. This forces the compiler to store intermediate results so that we are more likely to get consistent results. This does slow the computations because intermediate results could otherwise be held in registers. Registers are both faster and offer greater precision. However, the order of computations varies, and in some cases the greater precision later caused problems in the slot. Using the ap flag on the compiler also appears to have solved a problem just recently discovered in CULVERT, in which optimization level 1 failed with an error message but optimization level 0 did not. A small difference in the numerical computations can sometimes cause a large difference in the final result.
This can happen when automated decisions must be made to determine the conditions of flow. Such decisions happen frequently in the CULVERT command. The solution to some of these problems will need to wait until I can find the time to change some values to double precision. At least the problems with the slot will be eliminated or reduced with that change.
--Added output of water-surface elevation to the user-output file results for additional 2-D tables. This made it easier to check and compare results and also to prepare manual tables that made use of free-flow values only.
Version 5.43 21 October 2002
--If GISID is blank then FEQUTL will set it to the value of TABID. This proves useful in being able to see the TABID when explicit labels are given to a node on a branch.
Version 5.44 17 March 2003
--Modified FLOODWAY command: 1. Added flow as an optional input following all others on a line. Just add a proper heading and place the flows under the columns defined by the heading, as for any other heading-dependent input. 2. The flow was added to the input in order to produce a velocity in the floodway. Thus, the output table for this command has had the flow area and the velocity added to it. Example floodway file:

Floodway Specification for the Trib 2 Mainstem
Conveyance Loss= 0.05
Elevation Loss= 0.1
TABID OPTN   ELEV  FEQBOT  LEFT  RIGHT  LOSS   Flow
 100  EQK   695.42 692.20                     1200.
 101  EQK   698.68 696.25                     1200.
 102  EQK   701.7  698.68                     1200.
 103  EQK   704.94 699.20                     1200.
 104  EQK   705.13 701.73                     1200.
 105  EQK   706.75 702.58                     1200.
 106  EQK   709.44 702.70                     1200.
 107  EQK   709.59 704.18                     1200.
 108  EQK   709.9  705.28         2.7         1200.
 109  EQK   713.66 707.80                     1200.
 110  EQK   713.85 709.17                     1200.
 111  EQK   716.96 710.34                     1200.
 112  EQK   716.72 711.16                     1200.
 113  EQK   716.75 712.07                     1200.
 114  EQK   719.96 712.35                     1200.
 115  EQK   720.03 717.23                      120.
 116  EQK   722.65 716.63                     1200.
 117  EQK   722.82 719.40                     1200.
 118  EQK   722.67 719.50                     1200.
 119  EQK   722.93 720.30                     1200.
  -1
Note that the flows have been pulled from "thin air", so to speak--they do not relate to any real system. The summary table from this floodway specification file looks like this:

Summary of Floodway Computation results
Table Id-------- GISID----------- FldOpt BF Elevation FEQ Invert Left----- Right---- Loss----- FldwyArea FldwyVel-
100                               EQK        695.42     692.20    -259.8      74.2     0.058     457.2      2.62
101              001EBE20101      EQK        698.68     696.25    -208.9      72.9     0.077     285.2      4.21
102              001EBE20102      EQK        701.70     698.68    -171.9      -1.0     0.073     225.0      5.33
103              001EBE20103      EQK        704.94     699.20    -101.1     439.1     0.039     878.1      1.37
104              001EBE20104      EQK        705.13     701.73     -95.2      10.4     0.039     217.8      5.51
105              001EBE20105      EQK        706.75     702.58     -33.2      24.3     0.037     139.6      8.59
106              001EBE20106      EQK        709.44     702.70     -87.9      49.2     0.025     502.6      2.39
107              001EBE20107      EQK        709.59     704.18     -94.0      17.4     0.030     317.9      3.77
108              001EBE20108      EQK        709.90     705.28     -54.5       2.7     0.009     220.4      5.44
109              001EBE20109      EQK        713.66     707.80     -53.7      34.4     0.024     308.4      3.89
110              001EBE20110      EQK        713.85     709.17     -25.0      27.2     0.021     197.4      6.08
111              001EBE20111      EQK        716.96     710.34     -53.1     101.9     0.022     740.2      1.62
112              001EBE20112      EQK        716.72     711.16     -45.5      93.7     0.027     494.3      2.43
113              001EBE20113      EQK        716.75     712.07     -71.0      20.5     0.030     277.4      4.33
114              001EBE20114      EQK        719.96     712.35    -120.5      17.7     0.015     573.0      2.09
115              001EBE20115      EQK        720.03     717.23     -17.7       7.0     0.052      46.7      2.57
116              001EBE20116      EQK        722.65     716.63     -51.6      50.9     0.045     251.1      4.78
117              001EBE20117      EQK        722.82     719.40     -75.2      45.8     0.040     257.6      4.66
118              001EBE20118      EQK        722.67     719.50    -109.7      26.7     0.057     226.0      5.31
119              001EBE20119      EQK        722.93     720.30    -153.1      25.9     0.056     292.6      4.10

If no flow is given, the floodway velocity will be given as zero.
Version 5.46 21 April 2003
---A change was made in imposing variation-limitation on a fitted cubic spline. Currently this option is used in CULVERT and in LPRFIT. In cases in which the computed derivative was negative but the local trend of the data showed a positive slope, the variation-limitation code would force a zero derivative value.
This has been changed to fit a positive slope based on the local behavior of the function as defined by the data points closest to the point in question. The derivative approximation is weighted so that it yields the correct derivative if the function is a parabola and the points are equally spaced.
---An error in computing the bottom-slot insertion point was discovered. This resulted in a slot one-half the requested width. It was discovered in cross sections having a vertical boundary at the minimum point of the cross section. It may have occurred in other situations as well. After correction, the insertion point was located correctly in all of the test cases used.
Version 5.48 22 April 2003
---Small depths on the order of roundoff error appeared in cross sections having more than one minimum point. These small depths caused subsequent problems in computation. They have been removed. The current tolerance is set at NRZERO/16.0. The typical default value of the near-zero depth, NRZERO, in U.S. units is 0.08 feet. Thus, depths less than 0.005 feet will be ignored in computing the table. One side effect of this change is that FEQ/FEQUTL will sometimes issue a warning that it has found a discrepancy in area or first moment of area greater than the 0.02 tolerance at the initial positive depth in the table. This message can be ignored because the absolute value of that error is quite small. The other option is to modify the cross section so that duplicate equal inverts do not occur. A shift on the order of 0.1 foot in one invert should eliminate the warning message when the cross-section function table is processed.
--A new bottom-slot shape is available. This bottom slot has a top width that varies exponentially from a given base cross section such that the hydraulic depth is constant, that is, A(y)/T(y) = Ao/To for all y > yo, where yo = max depth of the base section and To = top width of the base section. The base section is a triangle.
The new command is SETSLOTE, and with the default settings the input values can be left the same. However, the role of NSLOT can be expanded by adding a sign to the value:
NSLOT > 0.0 -- use NSLOT as the value of Manning's n for the slot boundary.
NSLOT < 0.0 -- compute the average of the Manning's n at the two insertion points on the boundary and multiply this average by the absolute value of NSLOT.
The command SETSLOTE has the following additional options. They should follow YSLOT/ESLOT in the order given here. A parameter may be omitted, but the order of those that remain must be maintained.
RDEPTH - factor on slot depth to define yo; that is, let ym = max depth of the slot, then yo = RDEPTH*ym. The default value of RDEPTH is 0.37937619, and the default value of To is WSLOT*RDEPTH/10.0. If these two defaults are used, then the maximum width of the bottom slot will be WSLOT, and the area of the slot will be nearly RDEPTH times the area of a triangular slot with the same max depth and with WSLOT as its maximum width. Thus, shifting to SETSLOTE from SETSLOT with no change in the parameters yields the same maximum width but a reduced area of flow. Increasing WSLOT from 1.0 to 1/0.37...=2.6359 yields an exponential slot with a maximum top width of 2.6359 and an area equivalent to the area of the triangular slot with top width 1.0 and the same maximum depth.
TZERO - value for the top width of the base section.
EXPFAC - factor on the exponent in the top width for the exponential channel. Default value = 1.0.
Changes to RDEPTH must be made carefully because small changes can yield large changes in the exponential channel. For example, using RDEPTH = 0.30 instead of the default yields a maximum top width of 3.19 instead of the 1.0 given by WSLOT when YSLOT=37. To aid in sorting out the effect of non-default settings, the top-width function for the exponential slot is T(y) = To*exp[ 2.*EXPFAC*(y/yo - 1)].
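The geometry above can be checked numerically. The sketch below (Python, with illustrative variable names; the integration scheme is not FEQUTL's) evaluates the top-width function T(y) = To*exp[2*EXPFAC*(y/yo - 1)] over a triangular base section and confirms two claims made above: with the default RDEPTH and To, the top width at the maximum slot depth comes back to WSLOT, and the hydraulic depth A(y)/T(y) is constant above the base section.

```python
import math

WSLOT = 1.0                  # requested slot width
RDEPTH = 0.37937619          # default factor on slot depth
EXPFAC = 1.0                 # default exponent factor
ym = 1.0                     # maximum slot depth (arbitrary for this check)
yo = RDEPTH * ym             # depth of the triangular base section
To = WSLOT * RDEPTH / 10.0   # default top width of the base section

def top_width(y):
    """Triangular base section below yo, exponential top width above it."""
    if y <= yo:
        return To * y / yo
    return To * math.exp(2.0 * EXPFAC * (y / yo - 1.0))

def area(y, n=20000):
    """Trapezoidal integration of top width from 0 to y."""
    h = y / n
    s = 0.5 * (top_width(0.0) + top_width(y))
    s += sum(top_width(i * h) for i in range(1, n))
    return s * h

# With the defaults, the maximum top width returns to WSLOT.
print(top_width(ym))               # close to 1.0

# Hydraulic depth A/T is constant (= yo/2) for y > yo.
for y in (1.5 * yo, 2.0 * yo, ym):
    print(area(y) / top_width(y))  # close to yo/2 = 0.1897
```

The same run also shows the area statement from the text: the slot area at ym works out to (yo/2)*WSLOT, which is RDEPTH times the area WSLOT*ym/2 of the triangular slot with the same maximum depth and width.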
In limited experiments to date, the exponential bottom slot seems to be beneficial. It also appears that a value of NSLOT of about -0.8, yielding a bottom slot only a bit less rough than the cross section, is also beneficial. However, these observations are based on only one large model.
--The NRZERO depth value is NOT added to cross sections with a bottom slot. This was forced by the addition of the exponential bottom slot, for which the area computed to single precision at the NRZERO depth was 0.0. The purpose of the NRZERO point was to improve interpolation for the square root of conveyance, and cross sections with a bottom slot already have many points that avoid the problem.
--Six new cross-section table types have been added. These are types 30 through 35, and they mimic types 20 through 25 with the addition of derivatives to yield at least piecewise cubic Hermite polynomial interpolation for the square root of conveyance, beta, alpha, and the two curvilinear-axis correction coefficients. Only the derivatives required by the contents of the table are added. These tables have been added to yield functional representations that are smoother in the sense of having continuity of the first derivative, and sometimes the second derivative, at the tabulated depth values (breakpoints). It may be that some of the computational problems in an unsteady-flow model originate at the discontinuities in the first derivative at breakpoints. A review of the convergence theorems for Newton's method shows that they all depend on continuity of the first derivative near the root. If one of the roots is close to a breakpoint, a likely occurrence with a few thousand cross-section function tables and each table with 30-100 breakpoints, then the model may have convergence difficulty. The increased order of interpolation may yield more accurate values of conveyance, for example, but it is not clear that the change is of any significance.
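As a rough illustration of the kind of interpolation these table types enable, here is a sketch of piecewise cubic Hermite interpolation with monotonicity-limited breakpoint derivatives. The weighted harmonic-mean limiter and the simplified endpoint treatment below are the standard Fritsch-Carlson construction and are assumptions on my part; FEQUTL's actual limiter may differ in detail.

```python
def slopes(x, y):
    """Secant slopes on each interval."""
    return [(y[i+1] - y[i]) / (x[i+1] - x[i]) for i in range(len(x) - 1)]

def limited_derivatives(x, y):
    """Breakpoint derivatives limited so the interpolant varies monotonically."""
    d = slopes(x, y)
    m = [0.0] * len(x)
    m[0], m[-1] = d[0], d[-1]          # simplified endpoint treatment
    for i in range(1, len(x) - 1):
        if d[i-1] * d[i] <= 0.0:
            m[i] = 0.0                 # local extremum: force a flat derivative
        else:
            # weighted harmonic mean (Fritsch-Carlson) keeps monotone variation
            w1 = 2.0 * (x[i+1] - x[i]) + (x[i] - x[i-1])
            w2 = (x[i+1] - x[i]) + 2.0 * (x[i] - x[i-1])
            m[i] = (w1 + w2) / (w1 / d[i-1] + w2 / d[i])
    return m

def hermite_eval(x, y, m, xq):
    """Evaluate the piecewise cubic Hermite interpolant at xq."""
    i = max(j for j in range(len(x) - 1) if x[j] <= xq)
    h = x[i+1] - x[i]
    t = (xq - x[i]) / h
    h00 = (1 + 2*t) * (1 - t)**2       # cubic Hermite basis functions
    h10 = t * (1 - t)**2
    h01 = t**2 * (3 - 2*t)
    h11 = t**2 * (t - 1)
    return h00*y[i] + h10*h*m[i] + h01*y[i+1] + h11*h*m[i+1]

# Monotone sample data: the interpolant stays monotone between breakpoints.
x = [0.0, 1.0, 2.0, 4.0]
y = [0.0, 1.0, 1.1, 3.0]
m = limited_derivatives(x, y)
```

Derivatives that the limiter changes correspond loosely to the caret-flagged values in the FEQUTL printout: at those breakpoints the second derivative of the interpolant is discontinuous, but the first derivative remains continuous everywhere.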
None of the effort in including increased smoothness in the approximations was motivated by increased accuracy. It was done to increase the robustness of the computations. As a side effect of these new cross-section table types, FEQUTL will print the derivatives outlined above in its output display. These derivatives are computed by fitting a cubic spline to the tabulated function values and limiting the first derivative at the breakpoints of this spline so that the variation of the function is monotone. The monotone variation is forced to prevent adding potentially spurious maxima or minima between breakpoints. In the printout, each derivative changed to attain monotone variation is denoted by a following caret (^) symbol. If no caret symbol appears for any derivative of an element, then we have continuous first and second derivatives. At any breakpoint having a caret, the second derivative will be discontinuous. The tabulated values of the first derivative of the square root of conveyance (dkh/dy) are useful for checking what happens with non-default values of the parameters for an exponential bottom slot. With the new commands, the sequence of commands in the header block for the master-input file for FEQUTL should be modified to the following:

NCMD= 38
FEQX 1       FLOODWAY 2   BRIDGE 3     CULVERT 4    FINISH 5
SAME 6       FEQXLST 8    ROADFLOW 9   SEWER 10     MULPIPES 11
FTABIN 12    EMBANKQ 13   JUMP 14      CRITQ 15     GRITTER 16
MULCON 18    CHANRAT 19   EXPCON 20    HEC2X 21     QCLIMIT 22
XSINTERP 23  FEQXEXT 25   CHANNEL 26   WSPROX 27    WSPROQZ 28
WSPROT14 29  UFGATE 30    RISERCLV 31  ORIFICE 32   AXIALPMP 33
PUMPLOSS 34  SETSLOT 35   CLRSLOT 36   MKEMBANK 39  MKWSPRO 40
wsprot14 41  LPRFIT 42    SETSLOTE 44

Please note that without the addition of SETSLOTE to the header block, FEQUTL will treat the command as unknown even if the executable supports it.
--Added an additional warning to CULVERT under certain conditions involving types 0, 1, and 2 flow.
These conditions may occur when type 0 flow occurs and the approach section is still restrictive as the flows approach what CULVERT thinks is a type 1 condition. Any type 0 flow indicates potential problems because having a control at section 1 is non-standard, and if this control persists to higher heads, expect computational failure. See the notes on error 683 in the FEQUTL documentation for what might need to be done.
--The format for the cross-section function tables has been changed to increase precision. Various cross-section function-table lookup routines also have been changed.
--To support conversion of type 13 tables to type 43, a value of the free drop at zero upstream head will appear in tables of type 13. This is non-zero only for CHANRAT, and then only if there is a sustaining slope to the plane. In other cases the free drop at zero upstream head should be zero. It proves useful to have a non-zero value of free drop at zero head for CHANRAT to reduce the rate of change of slope near zero flow.
--Increased precision of elevation output in WSPROQZ.
Version 5.50 1 December 2003
--In testing a new compiler, several missing commas were found in format statements. Apparently, previous compilers processed these statements correctly, or these formats have never been accessed.
Version 5.60 8 June 2004
--Added a global home name to the header info so that projects may be shifted from drive to drive or between MS Windows and various Linux/Unix systems. To make this convenient, a command-line option was also added to specify a configuration file that contains the header info. Thus the same header info can be used for all master-input files to FEQUTL. Consequently, only one file needs to be changed to shift to a new location if the global home name and the file references in the FEQUTL input are carefully designed. Any file name that begins with a / or \ is taken relative to any home name that is active.
The only local home name that now exists in FEQUTL is in the FTABIN block. If no home name is given there, then any global home name applies. Otherwise the local home name will apply. If the file name does not start with a / or \, then it is taken to be in the directory of invocation of the command. The global home name is given as:

GHOME=e:\

where GHOME must start in column 1 and must be the last item in the header block. If a configuration file is given on the command line, then the header info in the master-input file is skipped. This information can be removed if a configuration file is supplied. FEQUTL now also reports the names of the master-input file, master-output file, and function-table file, and the source of the configuration data, in the master-output file. FEQUTL will also write the source of the configuration file to the console. This information should help keep clear what information FEQUTL used. The user may wish to modify the batch/script file that invokes fequtl. An example script line for Linux/Unix is

/pj/usf/fequtl/gnulnx/lf95/fequtl95 $1 $2 $3 -conf /pj/usf/fequtl/test/fequtl.conf

An example line for an MS Windows batch file is

f:\usf\fequtl\msw\lf95\fequtl %1 %2 %3 -conf f:\usf\fequtl\test\fequtl.conf

The feature of command-line completion also works in this case; that is, one can type as a valid command: fequtl fequtl.exm and the master-output file will be fequtl.out and the function-table file will be fequtl.tab. The configuration file is found in this case. Please note that there must be at least one space between -conf and the first character of the configuration-file name specification.
--Changed input of TABID and TYPE and control parameters for CHANRAT to named-item input. Two lines can be taken for this input to match what was used in the fixed format. However, it is possible to place all of these items on a single line.
Examples: The old fixed form will still work so long as no changes to default values are made:

CHANRAT TABID= 600 TYPE= 13
LABEL=TEST OF CHANRAT-using auto arguments
XSTAB= 100 BOTSLP=0.003 LENGTH=000000030. MIDELEV= TAB
HEAD SEQUENCE FOR TABLE
NFRAC= 60 POWER= 1.5 LIPREC= 0.02 MINPFD= 0.1
0.25 10.0 -1.0

A new variable to control the table has been added: the target minimum flow, MINQ. This is given as one of the options. The other optional inputs, ERRKND, INTHOW, EPSINT, NDDABS, and NDDREL, are rarely needed, but if given, must follow the named-item pattern. For example, to change the integration convergence tolerance from the default of 0.1 to 0.2 and to set the target minimum flow to 1.5, one could use:

CHANRAT TABID= 600 TYPE= 13 MINQ= 1.5 EPSINT=0.20
LABEL=TEST OF CHANRAT-using auto arguments
XSTAB= 100 BOTSLP=0.003 LENGTH=000000030. MIDELEV= TAB
HEAD SEQUENCE FOR TABLE
NFRAC= 60 POWER= 1.5 LIPREC= 0.02 MINPFD= 0.1
0.25 10.0 -1.0

With this option added to the input, there is better control over MINQ. In prior versions, the only way to set MINQ for CHANRAT was through the global value given in the header block (and now in the configuration file). This sometimes proved messy because not all structures in a single master-input file require the same target minimum flow. In most cases a single value will work, but there were some exceptions that required isolating the structure or structures in their own master-input file.
Version 5.61 29 June 2004
--Added detection of the operating system to FEQUTL so that the proper name divider can be placed in file names. This was included so that inputs can be transferred between MS Windows and Linux/Unix without having to change them.
Version 5.65 25 August 2004
--Changed the length of file names for all but command-line arguments to 128 characters.
--In the process of doing extensive checking of the file-name-length changes, global checking was enabled in the compiler.
This checks many things, including uninitialized variables. Several such variables appeared and have been fixed. The following is a synopsis of what was changed.
1. Sinuosity values were not being set properly for SEWER, MULPIPES, and MULCON. However, that had no effect because even though these values were being moved around, they were not used in the computations. FEQUTL always disables sinuosity corrections in closed conduits. However, the variables involved were initialized to prevent problems during further such checks.
2. A bug was found in setting the line-segment Manning's n values in MULCON. One or two line segments could get a Manning's n at one point, that is, for one line segment, that came from an adjacent conduit. Thus, if the conduits in the system had differing Manning's n, the conveyance values near full flow would be affected. In one test case the changes in conveyance were about 0.4 percent when flowing full. However, this was a test case in which the conduit diameters varied by more than a factor of two. Normally this would not be the case.
3. The CULVERT command had two cases of uninitialized variables. One involved using a type 1 value when type 1 flow was not possible. When corrected, the same results were obtained for the flows in the culvert. However, there might have been some cases where this was not true. The other involved a type 5 flow submergence-limit computation in which the starting value for an iterative solution was not set. However, after setting it to a proper value, the same results were obtained as before.
Version 5.66 8 June 2005
--A bug was detected in an area increment by the output routine in FEQUTL. The problem was eventually traced to the routine that removes duplicate and near-duplicate elevations from the list of breakpoints in top-width variation for a cross section. Detection of near duplicates was not properly scaled when the elevations were in the range of 700 or more.
Thus, one or more breakpoints at the end of boundary line segments that deviated only slightly from horizontal were deleted from the list. This would then skip a breakpoint that should have been included in the table. The top widths and areas given in the final table were correct, but some increments in area and related elements were incorrect because a breakpoint was improperly left out. The scaling has been changed to detect cases of near duplication when the elevations are large. However, there may still be cases where the increments in area or first moment of area will fail the test in the output routine. FEQUTL will stop at this point. To get FEQUTL to compute the table, scan the cross section for nearly horizontal line segments and increase the deviation from horizontal for any found. The tolerance for near duplicates is determined as follows:
1. Compute the average of the maximum and minimum elevation in the table.
2. Subtract the minimum elevation from the average in 1. to yield an "average" depth for the table.
3. For each adjacent pair of elevations in the list of breakpoint elevations, compute the difference in elevation and divide by the average depth computed in 2. This gives the relative difference in depth for that interval.
4. If the absolute value of the relative difference in depth is less than 1 x 10^-5, then retain only one of the two elevations in the breakpoint list.
Example: If the max elevation is 710 and the min elevation is 705, then the average depth is: (710 + 705)/2 - 705 = 2.5. In order for an elevation breakpoint to be included in the final list, its elevation must deviate by more than 2.5 x 1 x 10^-5 = 2.5 x 10^-5. If the line segment ending at a deleted elevation breakpoint is long, then the loss of area might be large enough to be noticed. In this example, a line segment 1000 feet long would yield an error of about (1000 x 2.5 x 10^-5)/2 = 0.0125 ft^2. This is clearly a negligible amount given the size of the cross section.
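The four steps above can be sketched as follows. The function name is illustrative, and the detail of comparing each elevation against the last one retained (rather than strictly against the original adjacent pair) is an assumption, not FEQUTL's actual code.

```python
def prune_near_duplicates(elevations, rel_tol=1.0e-5):
    """Drop breakpoint elevations whose spacing is a negligible
    fraction of the "average" depth of the table."""
    elevations = sorted(elevations)
    avg = 0.5 * (elevations[0] + elevations[-1])   # step 1: mean of min and max
    avg_depth = avg - elevations[0]                # step 2: "average" depth
    kept = [elevations[0]]
    for z in elevations[1:]:
        rel_diff = (z - kept[-1]) / avg_depth      # step 3: relative difference
        if abs(rel_diff) >= rel_tol:               # step 4: keep only if distinct
            kept.append(z)
    return kept

# The example from the text: max 710, min 705 gives an average depth of 2.5,
# so two elevations closer together than 2.5e-5 collapse into one.
elevs = [705.0, 707.0, 707.0 + 1.0e-5, 710.0]
print(prune_near_duplicates(elevs))   # the 707.00001 breakpoint is dropped
```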
Version 5.67 25 October 2005
--Changes to FEQ required some changes to the storage of 1-D tables of types 2, 3, and 4.
Version 5.68 26 October 2006
--Found that GISID was not initialized to blank in subroutines MULCON, PIPES, and SEWER. Currently, FEQUTL does not input a value for GISID for these sections. Therefore, GISID was set to blank in these subroutines. In subroutine TABOUT, a blank value of GISID is then set to the table id for output.
Version 5.70 16 December 2006
--Added output of a version/run date-time string to all of the function-table files. The lines were added as comments, so FEQ or FEQUTL will skip those lines when reading the files. These items of information will prove useful in tracing the parentage of various files, especially if multiple copies of files of the same name exist in different directories or in a version control system.
Version 5.71 30 March 2007
--Modified wsprot14 to check for changes in tailwater elevation made by WSPRO when the flow state at the exit section is deemed to be supercritical. In that case, WSPRO computes the critical depth for the given flow and goes on. What we want is to reduce the flow until the flow state is subcritical at the given water-surface elevation. FEQUTL will NOT output a type 14 table if it finds that any of the tailwater elevations have been changed. An error message is issued for each one found. The flows have to be reduced until WSPRO finds the flow in a subcritical state at the exit section.
Version 5.75 29 October 2007
--Added support for attaching a description to each function table that documents the following information about the table:
1. Define the horizontal datum of any eastings/northings included in the table. These will appear in some tables, depending on the data available for their computation. Easting and northing fields have been added to all tables. Until this version, the only tables that had such options were cross-section function tables.
The horizontal datum is described using two eight-character fields: ZONE and HGRID. The values supplied for these fields are user defined. However, once selected, the values must be used with complete consistency for all tables in a model.
2. Define the vertical datum of the elevations that may be in the table. This is defined by an eight-character field labeled VDATUM. Again, the contents of this field are user defined.
3. The system of units used in the table. This field is labeled UNITSYS, and again its contents are user defined. It also has a maximum of eight characters.
4. The basis or source or era of the data in the table. This field is labeled BASIS, is eight characters in length, and is again user defined.
The reason for adding these labels is that the United States is undergoing a shift in datum, both for horizontal and vertical measurements. The new datums are already in use; for example, recent maps from the USGS already make use of them, as do some GIS databases. However, the transition from the older datums to the newer datums will take place over a protracted period of time and will probably be done model by model. Thus it is possible that both sets of datums will be in use by one organization, and it therefore becomes helpful to have explicit labels attached to function tables to help avoid extra confusion in this process. Here are some examples of using these values in FEQUTL. Only the relevant parts of the input are shown to avoid a huge number of lines. Example using FEQXEXT with the input generated by a utility program that used 3-D sections extracted from a DTM:
FEQXEXT TABID=RDXSBJ_S.93 NOOUT MONOTONE EXTEND NEWBETAM
GISID=RDXSBJ_S.93 STATION= 28.5293
ZONE=4601 HGRID=SPCS83 VDATUM=NAVD88 UNITSYS=ENGLISH BASIS=Pre_05
SHIFT=3.926 EASTING= 1284704.08 NORTHING= 683186.55
VARN=NCON
NSUB 41 0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.086
        0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.085
        0.074 0.074 0.074 0.074 0.074 0.074 0.074 0.074 0.074 0.074
        0.074 0.074 0.074 0.074 0.068 0.057 0.057 0.057 0.057 0.057
        0.057
OFFSET ELEVATION SUBS N0

The new items are in named-item format. Therefore each name and its value must appear on the same line, but the order and the placement are user defined. That is, the input above could be entered as:

FEQXEXT TABID=RDXSBJ_S.93 NOOUT MONOTONE EXTEND NEWBETAM
GISID=RDXSBJ_S.93 STATION= 28.5293
EASTING= 1284704.08 NORTHING= 683186.55
ZONE=4601 HGRID=SPCS83 VDATUM=NAVD88 UNITSYS=ENGLISH BASIS=Pre_05
SHIFT=3.926
VARN=NCON
NSUB 41 0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.086
        0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.086 0.085
        0.074 0.074 0.074 0.074 0.074 0.074 0.074 0.074 0.074 0.074
        0.074 0.074 0.074 0.074 0.068 0.057 0.057 0.057 0.057 0.057
        0.057
OFFSET ELEVATION SUBS N0

and still obtain the same result. Lines below and including NSUB must appear as shown, however. The output from either one of these inputs yields:

TABID= RDXSBJ_S.93
TYPE=  -25
ZONE=4601 HGRID=SPCS83 VDATUM=NAVD88 UNITSYS=ENGLISH BASIS=Pre_05
STATION= 2.85293E+01
GISID=RDXSBJ_S.93
EASTING=  1284704.080 NORTHING=   683186.550
ELEVATION= 1.17356E+02 EXT=-99.900000 FAC=1.000 SLOT= 0.0000
Depth Top_width Area Sqrt(Conv) Beta First_moment Alpha Critq Ma Mq

Notice that the new labels are just echoed to the output. However, in the function table itself, the format is fixed; that is, each label must appear exactly in the position shown. The items are not named-item. Most often these tables are created by a computer program and are read by a computer program, so the fixed formats do not cause major problems.
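A minimal sketch of why the two orderings above are equivalent: named-item input reduces to a set of name/value pairs, so any arrangement of the pairs yields the same set. The parser below is illustrative only (it assumes no blanks around the = sign or inside values, which FEQUTL's reader does allow, as in STATION= 28.5293) and is not FEQUTL's actual input routine.

```python
def parse_named_items(line):
    """Collect name=value pairs and bare flags from a named-item line."""
    items = {}
    for token in line.split():
        if "=" in token:
            name, value = token.split("=", 1)
            items[name] = value
        else:
            items[token] = True          # bare flags such as NOOUT
    return items

# Two different orderings of the same named items...
a = parse_named_items("TABID=T1 NOOUT ZONE=4601 HGRID=SPCS83")
b = parse_named_items("HGRID=SPCS83 ZONE=4601 NOOUT TABID=T1")
print(a == b)                            # True: order does not matter
```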
However, if the function table is modified manually, one must exercise care that the fixed format is followed. Here is an example of a CHANRAT command, again generated from a utility program that uses 3-D sections; this program extracts an appropriate easting and northing:

CHANRAT TABID=RDEGEF_M.1R TYPE= 13
ZONE=4601 HGRID=SPCS83 VDATUM=NAVD88 UNITSYS=ENGLISH BASIS=Pre_05
EASTING= 1304906.856 NORTHING= 667997.465
LABEL=Flow fromMM at RDEGEF_M.1 to R1
XSTAB=RDEGEF_M.1R_XS BOTSLP= 0. LENGTH= 50.0 MIDELEV= TAB
HEAD SEQUENCE FOR TABLE
NFRAC= 60 POWER= 1.5 LIPREC= 0.010 MINPFD= 0.10
0.25 20.0 -1.

The output from this input fragment looks like this:

TABID= RDEGEF_M.1R
TYPE= -13 HDATUM= 201.900 CHANRAT zrhufd= 0.0000
ZONE=4601 HGRID=SPCS83 VDATUM=NAVD88 UNITSYS=ENGLISH BASIS=Pre_05
EASTING= 1304906.856 NORTHING= 667997.465
LABEL=Flow fromMM at RDEGEF_M.1 to R1
NHUP= 35 NPFD= 7
HUP   2605-4 2957-4 3363-4 3747-4 4081-4 4465-4 4932-4 5462-4 6092-4 6616-4
FDROP 1376-4 1538-4 1725-4 1898-4 2021-4 2141-4 2270-4 2408-4 2569-4 2715-4

All the fields are fixed format. FEQUTL does not make any conversions between datums. Such conversions can be very complex, and software is generally available to make them to full precision using methods developed by the NGS and others. From the point of view of FEQ/FEQUTL, these values are descriptive and under user control. Currently, only the vertical datum and the unit system are checked for consistency. Working with varying datums can become confusing, and even a simple typing error in one of the new fields could result in FEQ complaining because it has found an inconsistent vertical datum or unit system (the only two values now checked by FEQ). Consequently, new global values for these new items of information have been put into the FEQUTL configuration file. (See the entry for version 5.60 above for an overview of the configuration file; it is just the header information placed into a file.)
These new values are prescriptive; that is, they are used to force all input values to conform. Thus a typing error, or just a mistake in the datum given, will be detected by FEQUTL. Here is an example configuration file:

UNITS= ENGLISH
NOMINAL 45.0 0.0
NCMD= 35
FEQX 1       FLOODWAY 2   BRIDGE 3     CULVERT 4    FINISH 5
FEQXLST 8    SEWER 10     MULPIPES 11  FTABIN 12    EMBANKQ 13
CRITQ 15     GRITTER 16   MULCON 18    CHANRAT 19   EXPCON 20
HEC2X 21     QCLIMIT 22   XSINTERP 23  FEQXEXT 25   CHANNEL 26
WSPROX 27    WSPROQZ 28   WSPROT14 29  UFGATE 30    RISERCLV 31
ORIFICE 32   AXIALPMP 33  PUMPLOSS 34  SETSLOT 35   CLRSLOT 36
MKEMBANK 39  MKWSPRO 40   wsprot14 41  LPRFIT 42    SETSLOTE 44
DZLIM= 2.55
NRZERO= 0.08
USGSBETA=NO
EPSARG=5.E-5
EPSF= 1.E-4
EPSABS= 1.E-4
EXTEND=NO
MINQ= 1.0
GHOME=/pj/software/usf/fequtl/test
G_ZONE = 4601
G_HGRID = SPCS83
G_VDATUM = NAVD88
G_UNITSYS = ENGLISH
G_BASIS = DTM
; Make sure that the config file contains several blank lines at the end

The new values are:
G_ZONE gives the required zone value for all commands.
G_HGRID gives the required horizontal grid for all commands.
G_VDATUM gives the required vertical datum for all commands.
G_UNITSYS gives the required unit system for all commands.
G_BASIS gives the required basis for all commands.
All must be given or all must be omitted, and they must be in the order shown. If present, FEQUTL will complain with an error message if any command that can have the value is missing a value or has a response that differs from the global value (the G prefix stands for "global"). Currently, this only applies to the vertical datum and the unit system. However, it may be extended to the other values at a later time when more extensive use is made of the easting and northing values. Using the response NA for the vertical datum or the unit system will avoid the error message. Here is a synopsis of what happens under certain combinations of input:
1. Global values not present in the configuration file: no checking for consistency in vertical datum or unit system.
Complete freedom exists for specifying these values for a given function table or for leaving them out. If specified, they will appear in the function table.
2. Global values present in the configuration file: each command that supports the values must have them present, or an error message will abort the run. The vertical datum and the unit system responses must match their global values, but all others are not checked. The response NA for the vertical datum or the unit system will suppress checking of that value. All values will be placed in the computed function table.
Version 5.76 5 September 2008
--Fixed a problem in which NA was not properly recognized as an error-suppressing response for the vertical datum and unit system.
--Fixed a problem in HEC2X so that the location items (zone, hgrid, vdatum, unitsys, basis, easting, and northing) are output to each cross section when the user supplies these values in the input.
Version 5.80 6 October 2008
--Added output of the source-code repository and the Subversion version number. This should define the set of files used to create the executable file that produced the output. This information also appears in the function-table file.
--Added output of the Subversion version if the global home name or a local home name is under Subversion control. Otherwise, nothing is printed; likewise, if the home names are not used, nothing is printed.
23 March 2009
--Found that the selection of variation of Manning's n with depth in FEQXEXT was ignored by the code and was being treated as a case of constant n. This mistake first appeared in version 5.75. It is corrected in this version.