Any flood modeler is well aware of the challenges this particular peril presents, for several reasons.  Ivan Maddox lays this out quite well in his Risks of Hazard blog, where he discusses “The Ingredients of a Good Flood Model.”  The National Research Council raised similar concerns about the need for high-quality elevation data in their 2009 Mapping the Zone report, which is available as a free PDF download here.

In addition to the challenges of obtaining high-quality elevation data, there is also the question of which discharge to model.  To the extent a flood model is based on a historical analysis of annual peaks, using the methods put forth in either Bulletin 17B (four pieces of software that incorporate this method are currently available from four different agencies) or 17C, significant uncertainty eventually arises as one moves further along the curve toward higher-magnitude, lower-frequency events. Another approach to flood modeling is a Monte Carlo analysis with thousands of modeled events, but given how common annual peak analysis is, I am going to stick to that approach here.
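For readers who want to see the shape of that analysis, here is a minimal sketch of fitting a log-Pearson Type III distribution to an annual peak series, the distribution at the heart of Bulletin 17B/17C. It uses only station statistics and the Wilson-Hilferty approximation to the frequency factor; the real Bulletin procedures add outlier tests, regional skew weighting, and historical-record adjustments. The peak values below are made up for illustration, not taken from any actual gage.

```python
import numpy as np
from scipy import stats

def lp3_quantile(peaks_cfs, exceed_prob):
    """Estimate a flood quantile from annual peaks via a log-Pearson
    Type III fit (method of moments on log10 peaks, station skew only
    -- a simplified version of the Bulletin 17B procedure)."""
    logs = np.log10(np.asarray(peaks_cfs, dtype=float))
    mean, std = logs.mean(), logs.std(ddof=1)
    skew = stats.skew(logs, bias=False)

    # Standard normal deviate for the exceedance probability,
    # e.g. p = 0.01 -> z of about 2.326
    z = stats.norm.ppf(1.0 - exceed_prob)

    # Wilson-Hilferty approximation to the Pearson III frequency factor
    if abs(skew) < 1e-6:
        k = z
    else:
        k = (2.0 / skew) * ((1.0 + skew * z / 6.0 - skew**2 / 36.0) ** 3 - 1.0)

    return 10.0 ** (mean + k * std)

# Hypothetical annual peak series (cfs) -- illustration only
peaks = [52000, 61000, 48000, 90000, 71000, 55000, 66000, 120000,
         43000, 58000, 77000, 64000, 85000, 50000, 69000]
print(f"Estimated 1% flood: {lp3_quantile(peaks, 0.01):,.0f} cfs")
```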

The graph below is taken directly from the macro-enabled Excel spreadsheet that the Natural Resources Conservation Service provides on this page. Upon downloading the file and opening the ‘freq curve’ tab, a user will see this graph for the Yellowstone River near Sidney, MT. With over 100 years of annual peak discharge records, it’s a pretty robust empirical data set.  The orange rectangle is what I refer to as the ‘box of uncertainty’; in this case, it is the box for the 1% annual chance flood.
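As a rough illustration of how the empirical points on a curve like this are placed, the sketch below assigns exceedance probabilities to a peak series using the Weibull plotting position, p = m/(n+1). That is just one common convention; I have not verified which plotting position the NRCS spreadsheet itself uses, and the peaks here are again hypothetical.

```python
import numpy as np

def empirical_exceedance(peaks_cfs):
    """Empirical annual exceedance probabilities using the Weibull
    plotting position p = m / (n + 1), where m is the rank of each
    peak sorted from largest to smallest."""
    peaks = np.sort(np.asarray(peaks_cfs, dtype=float))[::-1]
    ranks = np.arange(1, len(peaks) + 1)
    return peaks, ranks / (len(peaks) + 1.0)

# Hypothetical peaks (cfs); the largest of 15 plots at p = 1/16, ~6.3%
peaks, probs = empirical_exceedance([52000, 61000, 48000, 90000, 71000,
                                     55000, 66000, 120000, 43000, 58000,
                                     77000, 64000, 85000, 50000, 69000])
for q, p in zip(peaks[:3], probs[:3]):
    print(f"{q:>9,.0f} cfs  p = {p:.3f}")
```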

It has been pointed out that communicating about floods, and especially flood frequency, to the general public is challenging for agencies, disaster planning teams, and hydrologists.  The NRCS recognizes this, and to their credit, they put exceedance probabilities on the x-axis at the bottom of the graph and also include flood frequency recurrence intervals (RI) in years along the top of the graph (which I clipped out of the image above).

The key point here is to look at the uncertainty that surrounds a given exceedance probability, which is what the orange rectangle highlights for the 1% exceedance probability event.  For starters, if one holds to the 1% event, the typical application in my experience is to read the value off the blue line and report it as 158,000 cfs.  In other words, a colloquial interpretation would be to say, “the 100 year flood is 158,000 cfs.”  A more nuanced, and admittedly more challenging, way to communicate the analysis would be to acknowledge the uncertainty and state that, staying within the 5th and 95th percentile estimates, the 1% exceedance flood could be as low as 140,000 cfs and as high as 184,000 cfs.
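For those curious where limits like these come from, Bulletin 17B (Appendix 9) gives an approximate confidence interval on a log-Pearson III quantile based on the noncentral t-distribution. The sketch below implements that approximation; the log-space statistics and record length are placeholders, not the actual Sidney gage values.

```python
import numpy as np
from scipy import stats

def lp3_confidence_limits(mean_log, std_log, k_p, n, conf=0.95):
    """Approximate confidence limits on a log-Pearson III quantile
    using the noncentral-t approximation from Bulletin 17B, Appendix 9.
    k_p is the frequency factor for the quantile of interest and n is
    the number of annual peaks in the record."""
    z_c = stats.norm.ppf(conf)              # e.g. ~1.645 for 95%
    a = 1.0 - z_c**2 / (2.0 * (n - 1.0))
    b = k_p**2 - z_c**2 / n
    root = np.sqrt(k_p**2 - a * b)
    k_lower = (k_p - root) / a
    k_upper = (k_p + root) / a
    return (10.0 ** (mean_log + k_lower * std_log),
            10.0 ** (mean_log + k_upper * std_log))

# Placeholder log10 statistics and a ~100-year record, for illustration
low, high = lp3_confidence_limits(mean_log=4.75, std_log=0.18,
                                  k_p=2.326, n=100)
print(f"1% flood between {low:,.0f} and {high:,.0f} cfs (approx.)")
```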

In addition, if we hold to the 158,000 cfs estimate, one should also recognize that, within the 5th and 95th percentile estimates, this discharge could have an exceedance probability as high as 3% (e.g. ~33 yr RI) or as low as 0.3% (e.g. ~333 yr RI).
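The conversion between annual exceedance probability and recurrence interval is simply the reciprocal, RI = 1/p, which a quick check confirms for the values above:

```python
def recurrence_interval(exceed_prob):
    """Recurrence interval in years for an annual exceedance probability."""
    return 1.0 / exceed_prob

for p in (0.03, 0.01, 0.003):
    print(f"p = {p:.1%}  ->  RI of about {recurrence_interval(p):.0f} years")
```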

The air photo below is a screenshot from FEMA’s National Flood Hazard Layer (NFHL) Viewer. The USGS Sidney, MT gage is just northeast of the bridge crossing; flow is from bottom to top. The blue-green shading represents the “1% Annual Chance Flood Hazard.” If I happened to own the property with the red roof near the bottom of the image, given all the modeling uncertainty, and assuming the road just west of the property doesn’t provide any flood protection, purchasing flood insurance would certainly seem to be in my best interest.
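One way to make that case concrete: even taking the nominal 1% annual chance at face value, the chance of at least one flood over a longer horizon adds up quickly. A quick calculation, assuming each year is an independent trial:

```python
def chance_of_flooding(annual_prob, years):
    """Probability of at least one exceedance over a span of years,
    assuming independent annual trials."""
    return 1.0 - (1.0 - annual_prob) ** years

# Over a 30-year mortgage, a '1% annual chance' flood is ~26% likely
print(f"{chance_of_flooding(0.01, 30):.0%}")
```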

Communicating flood risk is, and will continue to be, a challenge.  I certainly prefer exceedance probability to recurrence intervals with years as the unit (e.g. the 50 or 100 year flood). In closing, and keeping Ivan’s excellent piece in mind, another ingredient of a good flood model would be a well-written flood model report explaining some of the uncertainty in the model results.

 
