
**Models / Re: Curious case of Flood Frequency Analysis**

« **on:** September 14, 2014, 06:38:46 PM »

First, one has to understand the concept of return period. It is simply the inverse of the exceedance probability (or relative frequency, as you said). A 2-year flood event has an exceedance probability of 1/2, i.e., 0.5, meaning there is a 50% chance that the event is exceeded in any one year. This does not mean that a 2-year flood occurs regularly every 2 years, or exactly once in every 2 years; the concept of time does not really enter at this point. A return period is only a statistic computed from the observed data we have. It gives a ballpark estimate: this is the magnitude observed, on average, once in 2 years. It is the data speaking through the statistic, and it may or may not be right.
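As a quick illustration of the inverse relationship between exceedance probability and return period, here is a minimal Python sketch using the Weibull plotting position. The flow values are made up for the example; a real analysis would use an observed annual-maximum series.

```python
import numpy as np

# Hypothetical annual maximum flows (m^3/s) -- illustrative values only.
annual_max = np.array([120., 95., 210., 160., 140., 300., 110., 180., 130., 250.])

# Rank the annual maxima in descending order (rank 1 = largest) and apply
# the Weibull plotting position: P_exceed = m / (n + 1), m being the rank.
n = len(annual_max)
ranks = np.argsort(np.argsort(-annual_max)) + 1
p_exceed = ranks / (n + 1)

# Return period is simply the inverse of the exceedance probability.
return_period = 1.0 / p_exceed

for q, p, t in sorted(zip(annual_max, p_exceed, return_period), reverse=True):
    print(f"{q:6.1f} m^3/s  P_exceed = {p:.3f}  T = {t:.1f} yr")
```

Note that with n years of record, the largest observed event gets an empirical return period of only n + 1 years, which is exactly why the record alone cannot speak about rarer events.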

Say we are analysing for the 100-year event: the data we have may not even contain an event pertaining to such a calamity. So statistical analysis is a way to get an answer.
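This is the usual fitting-and-extrapolating step. A minimal sketch with SciPy, assuming (purely for illustration) a Gumbel (EV1) distribution and a synthetic 30-year record; the choice of distribution and the data would of course be different in a real study:

```python
import numpy as np
from scipy.stats import gumbel_r

# Hypothetical 30-year annual-maximum record (m^3/s), synthesized here so
# the example is self-contained; a real study would use observed data.
rng = np.random.default_rng(42)
annual_max = gumbel_r.rvs(loc=150, scale=40, size=30, random_state=rng)

# Fit a Gumbel (EV1) distribution to the annual maxima by maximum likelihood.
loc, scale = gumbel_r.fit(annual_max)

# The T-year quantile is the flow with annual exceedance probability 1/T,
# i.e. the (1 - 1/T) non-exceedance quantile of the fitted distribution.
T = 100
q100 = gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
print(f"Estimated {T}-yr flood: {q100:.1f} m^3/s")
```

The fitted distribution lets us ask about a 1% annual exceedance event even though the record itself may contain nothing of that magnitude.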

Referring to your first point, before concluding that FFA doesn't consider the length of record, one should keep asking: what is the need for FFA? When it comes to floods, FFA is necessary to carry out dam break analysis, and associated with that would be reservoir operations (I can't think of anything else!). In such a scenario, with the current statistical analysis setup, the length of record may not matter (for obtaining the 100-year quantile), and I would go so far as to say that having historical dates may not be necessary either.

There are works that conduct FFA in a nonstationary setup. Most consider the moments to be time-varying and determine quantiles as a function of time. The following is a review by Khaliq et al. (2006).

Khaliq, M. N., Ouarda, T. B. M. J., Ondo, J. C., Gachon, P., & Bobée, B. (2006). Frequency analysis of a sequence of dependent and/or non-stationary hydro-meteorological observations: A review. Journal of Hydrology, 329(3), 534–552.
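To see what "time-varying moments" means in the simplest possible terms, here is a sketch that estimates the first two moments over successive 20-year windows of a synthetic trending record. Proper nonstationary FFA fits a distribution with time-dependent parameters; this moving-window view is only a crude illustration of the same idea, and the trend and noise levels are invented for the example.

```python
import numpy as np

# Hypothetical 60-year annual-maximum series with an upward trend in the
# mean, to mimic nonstationarity; real work would test for trends first.
rng = np.random.default_rng(0)
years = np.arange(1950, 2010)
annual_max = 100 + 0.8 * (years - years[0]) + rng.normal(0, 15, size=years.size)

# Estimate mean and standard deviation in successive 20-year windows; a
# quantile for a given era would then come from the era-local moments
# rather than from a single record-wide fit.
window = 20
for start in range(0, years.size - window + 1, window):
    block = annual_max[start:start + window]
    print(f"{years[start]}-{years[start + window - 1]}: "
          f"mean = {block.mean():.1f}, std = {block.std(ddof=1):.1f}")
```

If the windowed moments drift like this, a stationary fit to the whole record will understate recent-era quantiles, which is exactly the concern the nonstationary literature addresses.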

It will be interesting to know how much effect the inclusion of nonstationarity has on the final output. If the result is more or less insensitive to it, I would go with the quicker stationary FFA, add a sufficient factor of safety, and be done with it (remember the question: what is the need for FFA?).
