The Computer Modeling Wars

“When you can measure what you are speaking about and express it in numbers you know something about it, but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the state of science, whatever the matter may be.” Lord Kelvin (1883).

In the spring of 1994, I came upon a two-page paper by Dr. James Anderson of the University of Washington that suggested that there were two possible views of the salmon world: one in which in-river survival is very low and ocean survival is high, and one in which in-river survival is higher but ocean survival is lower. The paper suggested that existing data could not exclude either possibility. I thought to myself, here at last is someone with an open mind on these issues.


So I traveled to Seattle, wandered down the halls of the University of Washington's School of Fisheries, and knocked on Professor Anderson's door. He was (and is) eager to explain his salmon work, which consisted of the construction of a huge and complex model of salmon survival called CRiSP (short for Columbia River Salmon Passage).


Most of us recognize that computers are essential to deal with complicated problems facing our society. That is why computers are used in nearly every facet of economic activity. One of the ways computers are useful is in modeling natural phenomena which are too complex for the human mind to assess and predict. Thus, we use computer models to build airplanes. We use computer models to study the weather. And we should use computer models to study salmon.


After all, the greatest challenge faced by salmon managers is figuring out what effect any particular human action will have upon salmon. If you open up a reservoir and increase the flow downstream, then without some sort of quantitative model assessing the relative magnitude of the effects on both juvenile and adult salmon, you have no idea what the net effect on salmon populations will be.


Unfortunately, it is the deliberate policy of Northwest fishery managers to operate in such ignorance. The Bonneville Power Administration, which makes extensive use of computers in its own operation, has long recognized the importance of building computer models that can assess the effects of human actions on natural salmon populations. Beginning in the late 1980s, it funded the development of computer models of salmon migration in the Columbia River and its tributaries. Other federal funds have created complicated models of harvest and models to predict adult returns to the river.


State and tribal fishery managers and the Northwest Power Planning Council pressured BPA to provide funding to develop their own models. The leading state and tribal model of juvenile salmon passage is called FLUSH, standing for "Fish Leaving Under Several Hypotheses". A derivative model developed by staff at the Northwest Power Planning Council is called PAM, for "Passage Analysis Model". Over the years, FLUSH has been invoked to support all kinds of expensive changes to dam operations. Yet no one knows how the FLUSH model works, because there is no manual explaining its operation. Nor is the model code available to anyone.


CRiSP, on the other hand, is available to anyone. You can call up Dr. Anderson and get a copy of it. You can run the model on the Internet. There is a large and comprehensive manual that explains everything the model does, and the formulas that contain its assumptions. The manual is available on the Internet as well. (Dr. Anderson also runs the website with the single best collection of Columbia Basin salmon data, located at http://www.cqs.washington.edu and hosted by the University of Washington.)


To construct a computer model, the model builder relies on data gathered about what he is modeling and on the relationships among the various factors. In the context of the Columbia River salmon migration, there are very detailed data available going back many years for river flows, temperatures, historic dam operations, harvest levels, and many other factors. The model must account for mortality across the concrete at the dams and for predation in the reservoirs. It must account for the percentage of fish that are transported around the dams.
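
To make that structure concrete, here is a minimal sketch, in Python, of a reach-by-reach passage model of the kind described. Every number, and the simple split into dam and reservoir survivals, is a hypothetical placeholder of my own; this is not CRiSP's actual code or parameters.

```python
# A minimal sketch of a reach-by-reach juvenile passage model.
# All numbers below are hypothetical placeholders, not CRiSP's
# calibrated values.

def passage_survival(reaches, transport_fraction, transport_survival):
    """Fraction of smolts surviving to below the last dam.

    reaches: list of (dam_survival, reservoir_survival) pairs, one
             per dam/reservoir that in-river fish must pass.
    transport_fraction: share of fish collected and barged around the dams.
    transport_survival: survival of transported fish to release.
    """
    in_river = 1.0
    for dam_survival, reservoir_survival in reaches:
        in_river *= reservoir_survival  # predation in the reservoir
        in_river *= dam_survival        # mortality "across the concrete"
    return (transport_fraction * transport_survival
            + (1.0 - transport_fraction) * in_river)

# Example: four projects, each with 90% dam passage survival and
# 95% reservoir survival; half the fish transported at 98% survival.
reaches = [(0.90, 0.95)] * 4
print(passage_survival(reaches, transport_fraction=0.5,
                       transport_survival=0.98))
```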


There is such a wealth of data that the construction of models proceeds in two phases. First, the model is calibrated against some of the data. This is, in a sense, the data that goes into building the model. Then, as new data become available or more old data are discovered, the model can be validated against this second set of data: the model built from the first set is used to predict the second set, and the predictions are compared with what actually occurred.


For example, if we have flow and survival data for the 1980s and build a model calibrated against those data, then when new data become available in the 1990s, we can run the model with the 1990s flows and see whether the predicted survivals match the actual survivals measured in the 1990s.
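
The sketch below illustrates that two-phase workflow: a simple saturating flow/survival curve is calibrated against one decade of data and then validated against a second. The functional form and all of the numbers are invented for illustration; they are not the actual Columbia River data or either model's equations.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data (1980s): flow (kcfs) vs. observed survival.
flow_80s = np.array([60, 80, 100, 120, 140], dtype=float)
surv_80s = np.array([0.40, 0.46, 0.50, 0.53, 0.55])

# A saturating curve keeps predicted survival below an asymptote s_max < 1.
def model(flow, s_max, k):
    return s_max * flow / (k + flow)

params, _ = curve_fit(model, flow_80s, surv_80s, p0=[0.7, 50.0])

# Validation: hypothetical 1990s flows and independently measured survivals.
flow_90s = np.array([70, 110, 150], dtype=float)
surv_90s = np.array([0.44, 0.52, 0.56])

predicted = model(flow_90s, *params)
print("predicted:", np.round(predicted, 3))
print("observed: ", surv_90s)
print("mean abs error:", np.round(np.mean(np.abs(predicted - surv_90s)), 3))
```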


In 1993, 1994, and 1995, new data became available from PIT-tags, a new and better method for estimating survival. The CRiSP model predicted survival consistent with the new data. The FLUSH model did not.
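
The logic behind such PIT-tag survival estimates can be sketched with a simple mark-recapture calculation: fish detected at a downstream site were necessarily alive at the upstream site, which lets you estimate the upstream site's detection probability and correct its raw counts into a survival estimate. The counts below are invented for illustration.

```python
# A minimal sketch of the mark-recapture logic behind PIT-tag
# survival estimates (Cormack-Jolly-Seber style). All counts are
# hypothetical.

released = 10000         # tagged smolts released upstream
detected_A = 3600        # detected at downstream dam A
detected_B = 1500        # detected further downstream at dam B
detected_A_and_B = 1200  # detected at both A and B

# Fish seen at B were certainly alive when they passed A, so the
# fraction of them also seen at A estimates A's detection probability.
p_A = detected_A_and_B / detected_B

# Correct the raw count at A for imperfect detection.
survival_to_A = detected_A / (released * p_A)

print(f"detection probability at A: {p_A:.2f}")
print(f"estimated survival to A:    {survival_to_A:.2f}")
```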


I asked Dr. Anderson to review the limited information that was available about the operation of the FLUSH model. He concluded that the assumptions that went into it were wildly unrealistic. For example, one of the chief characteristics of the FLUSH model is that it predicts very great survival increases from fairly small increases in river flow, because it is based, in part, on the long-discredited Sims and Ossiander flow/survival relationships.97 Dr. Anderson discovered that the way this was accomplished was by inserting a relationship under which, as flows increased and travel time decreased, survivals went above 100%—an impossibility.98 The model is also hard-wired to pretend that smolt transportation does not work.
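
Dr. Anderson's point is easy to reproduce in miniature. The toy relationship below is my own illustration of the arithmetic, not FLUSH's actual code: extrapolate survival from travel time with an unbounded (here, linear) relationship, and fast enough fish come out more than 100% alive, whereas a bounded form such as an exponential mortality rate cannot.

```python
# Toy illustration of the flaw: an unbounded survival-vs-travel-time
# relationship (hypothetical coefficients, not FLUSH's actual ones)
# exceeds 100% at short travel times.
from math import exp

def linear_survival(travel_days, a=1.1, b=0.02):
    # Unbounded: below 5 days of travel, this "survival" exceeds 1.0.
    return a - b * travel_days

def bounded_survival(travel_days, daily_mortality=0.03):
    # An exponential mortality-rate form can never exceed 1.0.
    return exp(-daily_mortality * travel_days)

for days in (20, 10, 5, 2):
    print(f"{days:2d} days: linear={linear_survival(days):.3f}  "
          f"bounded={bounded_survival(days):.3f}")
```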


As the National Marine Fisheries Service prepared its biological opinion on hydropower operations for 1995 and future years, we were able to persuade them that the CRiSP model is more accurate. Nevertheless, as a political matter, NMFS declared that because the FLUSH model represented the judgment and skill of the state and tribal fishery managers, it too should be looked at in making management decisions.


I thought that if a federal agency was going to make a decision based, in part, on the results of a computer model, the model ought to be available to the public. It is hard to understand how a government agency like the National Marine Fisheries Service can make a decision based on a model when it does not have the model, cannot run the model, has no idea what assumptions are built into the model, and can rely only upon the assertions of state and tribal managers as to what the model does. Many of the researchers within NMFS have long been frustrated by this state of affairs, but the power of the state and tribal bureaucracy is such that none of them dared challenge it.


After seeing the FLUSH model used to justify actions that we knew made no sense, we asked the state and tribal fish authorities for a copy of it. Specifically, in the Idaho Fish and Game case (discussed in Chapter 9), we made a formal request for production of the model pursuant to the Federal Rules of Civil Procedure, which call for disclosure of any document “reasonably calculated to lead to the discovery of admissible evidence”.


The state and tribal fishery agencies refused to allow us access to the model, so we filed a motion before Judge Marsh for an order compelling production of the model. After we filed the motion to compel discovery, we also brought to the attention of many Northwest decisionmakers the idea that the FLUSH model was a secret model and that it was improper to make salmon decisions based on a secret model.


The State of Oregon, taking the lead in resolving this issue for the state and tribal fishery agencies, finally agreed to produce the model shortly before Judge Marsh was to rule on our motion, but only if we agreed to a rather stringent set of rules governing its use. In particular, we could make no use of the model without promptly informing the state and tribal fishery managers what we were doing. Anxious to get the model, we signed a stipulated order with the limitations in it.


As initially produced, the model would not run; many of the files were missing. After months of foot-dragging, we finally persuaded the state and tribal fishery managers to produce the additional files, and we ran the model. Our first occasion to use it came in 1996, in the context of attempting to resolve the question whether additional spill was appropriate to benefit salmon. The state and tribal fishery managers argued that survival in 1995 was higher than in 1994 because there were higher spills.


I was convinced that the FLUSH model would have predicted much higher survivals in 1995 than actually occurred. Thus, if you took the model as accurate, something had to be killing the fish to bring the survivals down. For reasons explained in Chapter 12, we thought that something was the spill.


I asked Dr. Anderson to run the FLUSH model and forwarded the results to the State of Oregon for review. The State Attorney General's Office then threatened to file a motion to hold us in contempt of court if we showed the results to the state water quality regulators.99 So we told the water quality regulators that the fishery managers wouldn't let them see what their own model predicted. In proceedings detailed in Chapter 12, the Oregon water quality regulators rubber-stamped the spill requests anyway.


The harvest managers know that the lack of complete and comprehensive computer models prevents criticism of their policies. In its recent decision to allow continued heavy harvest on endangered Snake River fall chinook from 1996 to 1998, the National Marine Fisheries Service claimed:


“It has not been possible to distinguish natural mortality from human-induced mortality in any life stage (except perhaps in the harvest sector) or allocate proportions of human-induced mortality between life stages. Without such a model, and without first resolving remaining uncertainties, it is not possible to calculate with any confidence tradeoffs in survival improvements that may be necessary as a result of a change in mortality that may be contemplated in any life stage.”100

This is not true, as computer models can already predict with some confidence the effects of harvest managers’ decisions, not only with respect to harvest, but also with respect to operation of the dams. But so long as NMFS claims it is true, and the states and tribes block the funding of even better computer models, harvest managers can and will continue to pretend that dams cause most of the problems with salmon in the Columbia River Basin.
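
Indeed, the outline of such a calculation is simple arithmetic: survivals at each life stage multiply, so a model that tracks every stage can compare a proposed harvest reduction directly against a proposed dam-passage improvement. The stage survivals below are hypothetical round numbers, not anyone's calibrated estimates.

```python
# Hypothetical life-cycle sketch: overall survival is the product of
# stage survivals, so tradeoffs across stages can be compared directly.

stages = {
    "downstream passage": 0.50,
    "estuary/ocean":      0.04,
    "harvest escape":     0.60,   # i.e., a 40% harvest rate
    "upstream passage":   0.80,
}

def lifecycle_survival(stages):
    total = 1.0
    for s in stages.values():
        total *= s
    return total

base = lifecycle_survival(stages)

# Tradeoff: a 10% relative improvement in downstream passage versus a
# 10% relative cut in harvest mortality (harvest rate 40% -> 36%).
better_dams = dict(stages)
better_dams["downstream passage"] = 0.55
less_harvest = dict(stages)
less_harvest["harvest escape"] = 0.64

print(f"baseline:            {base:.4f}")
print(f"better dam passage:  {lifecycle_survival(better_dams):.4f}")
print(f"reduced harvest:     {lifecycle_survival(less_harvest):.4f}")
```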


97 J. Anderson, “FLUSH and PAM models: A critique of concepts and calibrations”, Oct. 28, 1994.

98 Id. Dr. Anderson informed me early in 1997 that the FLUSH modelers had revised the model so that it no longer predicts greater than 100% survival at high flows.

99 Letter, E. Bloch to J. Buchal, Feb. 13, 1994 (“It is our strong position that, in several respects, Dr. Anderson’s analysis indeed violates the terms of the protective order, and therefore may not be presented to the EQC, NMFS or any other body at any time. . . . I want to be clear that in the event you go forward with your effort to submit this analysis, we will seek a contempt order against your clients.”).

100 NMFS, Biological Opinion, “Impacts on Listed Snake River Salmon by Fisheries Conducted Pursuant to the 1996-1998 Management Agreement for Upper Columbia River Fall Chinook”, July 31, 1996, at 14.
