Wednesday, January 10, 2007

Better risk models

Important advances are being made to understand and predict hurricane activity. On the seasonal time scale, and to first order, we know that a warm ocean fuels storm genesis, a calm atmosphere allows storms to intensify, and the position and strength of the subtropical high pressure region paves the tracks for storms that do form. The next generation of risk modelers should incorporate this science into their assessments.

We at the Hurricane Climate Institute at Florida State University (FSU) have made important contributions to this science. We have developed techniques for predicting seasonal hurricane activity (Elsner et al. 1998; 1999; Jagger et al. 2001; 2002), have quantified the statistical association between the North Atlantic oscillation (NAO) and hurricane activity (Elsner et al. 2001; Elsner 2003; Elsner and Jagger 2006), and have demonstrated the utility of Bayesian methods for handling incomplete and missing data (Elsner and Bossak 2001; Elsner et al. 2004; Elsner and Jagger 2004). Our approach is to build models from the available data.

Data models help us understand and predict relationships beyond those accessible with statistical descriptions because they provide a safeguard against cherry-picking the evidence. Data models help us unravel the nuances of climate's influence on hurricanes. Standard meteorological procedures like filtering, trend lines, and empirical orthogonal functions are not up to this task. Data models provide a context that is consistent with the nature of the underlying climate processes, similar to the way the laws of physics provide a context for studying meteorology. In short, data modeling is a scientific way to understand how the climate works given the available evidence. At issue for risk assessment is how extreme coastal hurricane activity depends on climate patterns.

The next big improvement in risk modeling will likely come with data models that assess regional hurricane activity using historical data and numerical prediction output. Indeed, we now successfully model hurricane counts (Elsner and Jagger 2006) and hurricane intensities (Jagger and Elsner 2006) in regions along the coastal United States. Moreover we demonstrate statistical skill in predicting the expected annual insured loss conditional on the state of the NAO and Atlantic ocean temperatures (Jagger et al. 2007).

With our help this science is incorporated in risk models from Accurate Environmental Forecasting (AEF). However, more work is needed to add spatial information and regional predictors. Global predictors include leading modes of variability such as the NAO as well as variables that track El Niño. Regional predictors like sea temperatures in the Gulf of Mexico and the Caribbean Sea, surface air pressures over Bermuda, and rainfall/soil-moisture indicators over eastern North America and western Europe should also be considered.

As a consequence of incomplete data and the existence of alternative scientific theories (e.g., climate change versus natural variability), probabilistic risk assessment requires some degree of expert judgment. One approach is to use Bayesian statistics; another is to elicit expert opinion through formal methods. Elicitation is practiced in analyzing earthquakes and other geological hazards. Although the physics of climate is better understood than certain geological processes, there remains a sufficient lack of understanding with regard to hurricane risk to cause divergence among researchers.

Formal methods are available for eliciting expert judgment. One method involves a panel of experts who debate and explain the merits of evidence and argument. This approach is based on the assumption that group judgments can improve the validity of forecasts. In any case, the procedures will provide information about the relative risk that is agreeable to the panel. This is done by Risk Management Solutions (RMS), resulting in updates to their hurricane risk assessments that reflect, to some degree, expert opinions about future hurricane activity.

To improve these efforts, evidence models should be used to ensure that the experts give credible witness to the data. For example, it is inconsistent for an expert to believe that the most likely number of U.S. hurricanes over the next 5 years will be 10 while at the same time believing there is a 40% chance that the number will be less than 3. The data simply do not conform to this type of distribution.
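The inconsistency can be made concrete with a minimal sketch, assuming hurricane counts follow a Poisson distribution (a standard starting model for annual counts; the specific numbers below are the hypothetical ones from the example above). If the most likely 5-year total is 10, the implied Poisson mean is about 10, and the probability of fewer than 3 hurricanes is then nowhere near 40%:

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k events under a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# A most-likely (modal) count of 10 over 5 years implies a Poisson mean near 10.
lam = 10.0

# Probability of fewer than 3 hurricanes: P(0) + P(1) + P(2).
p_less_than_3 = sum(poisson_pmf(k, lam) for k in range(3))
print(f"P(X < 3) = {p_less_than_3:.4f}")  # roughly 0.003, far below the stated 40%
```

No single Poisson rate can satisfy both beliefs: a rate high enough to make 10 the modal 5-year count leaves essentially no probability mass below 3.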

Averaging expert opinion will not necessarily give a consistent estimate of the hurricane rate either, and the method does not account for the uncertainty inherent in the numbers provided by the experts. Moreover, there is some agreement on increased hurricane activity over the basin as a whole for the next few years, but much less agreement on what that means for citizens living along the U.S. coast. This differential in uncertainty also needs to be quantified and incorporated.

As mentioned, a data model can help. One model is to assume that each of the N-year totals from the experts is Poisson with mean equal to the annual rate times N. This generates a separate rate estimate for each expert. Another model is to assume that the observed counts follow a negative binomial distribution, which allows for more variability than the Poisson. More work is needed, but future risk models will certainly benefit from utilizing the latest hurricane climate science.
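A minimal sketch of the first model follows. The expert totals here are hypothetical, purely for illustration. Under a Poisson assumption, each expert's implied annual rate is their count divided by their horizon, and a single pooled maximum-likelihood rate is total counts over total years, which differs in general from the simple average of the per-expert rates:

```python
# Hypothetical elicited totals: (hurricane count over N years, N).
experts = [(12, 5), (9, 5), (22, 10)]

# Per-expert maximum-likelihood estimate of the annual rate: count / years.
per_expert_rates = [c / n for c, n in experts]

# Pooled MLE under a common Poisson rate: total counts / total years.
pooled_rate = sum(c for c, _ in experts) / sum(n for _, n in experts)

print(per_expert_rates, round(pooled_rate, 3))
```

Because the pooled estimate weights each expert by the length of their horizon, it need not equal the unweighted average of the individual rates, which illustrates why naive averaging of expert opinion can be inconsistent.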

Disclosure: I acknowledge discussions with Thomas H. Jagger on this topic. My financial support comes from the U.S. National Science Foundation and the Risk Prediction Institute of the Bermuda Institute of Ocean Sciences. These opinions are mine and do not necessarily reflect those of the funding agencies. I worked previously under contract with AEF. Currently I have no financial interest in a risk modeling or insurance company.
