The computer software that formed the “Stanford Watershed Model” evolved over a period from about 1959 through 1966. Work in 1974 resulted in the widely available codes known as the Hydrologic Simulation Program, Fortran (HSPF), developed for and with the support of the young U.S. Environmental Protection Agency. Uses of, enhancements to, and refinements of the basic continuous moisture accounting scheme of the model were at the core of a productive research program at Stanford University on the application of stored program digital computers in water resources from about 1962 to 1974. The influence of this research program expanded as other universities and research groups used and extended the Stanford work, and as students left Stanford for careers at other institutions. Many talented individuals contributed to this research program and are due credit for its scope and influence.

This paper is a selective history of this research and is a retrospective summary of its influence in water resources.


The Stanford model is a story of the confluence of professional needs, newly emerging computing technology, and the curiosity of Ray Linsley and Norman Crawford. In hindsight the model was the result of time and place and had all the elements associated with inventions – much trial and error and a passion to complete something that would be useful. The time was one of growing emphasis on graduate education in the United States, and the place was one of a concentration of individuals driven by curiosity. The environment that supported this intellectual climate was created by Stanford faculty members Ray K. Linsley, Jr., Joseph B. Franzini, and John K. Vennard, who possessed a rich set of skills in hydrology, fluid mechanics, and hydraulic engineering. The new technology of the stored program digital computer opened new possibilities. Computer science pioneer John McCarthy commented in a seminar that the computation speeds already achieved by computers, four to five orders of magnitude faster than desk calculators, meant that analysis methods could be entirely different and that new analysis methods were needed to take advantage of this new environment.

The need for improved schemes to do hydrologic computations had become clear earlier. Ray Linsley’s first doctoral student, Eugene Richey (late Professor Emeritus, University of Washington), used hand calculations in 1954 to solve a finite difference formulation for flow over a plane surface. Linsley and Ackerman (1942) had approached modeling of runoff production for the Valley River Basin in North Carolina by a procedure of continuous moisture accounting using a daily time step. They examined numerous storms and noted that the storms were sufficiently uniform in intensity that daily totals were an adequate approximation of rainfall. Linsley knew that many situations existed where the runoff response was related to rainfall intensity and that shorter duration data would be needed. Those computational demands could not be met without an improved calculating environment.

The catalysts in 1959 for the first steps toward hydrologic simulation modeling at Stanford were frustration and resignation. The frustration came from unsatisfactory pencil-and-paper predictions of peak flows on small watersheds that were based on watershed characteristics. When co-axial correlations failed to yield a workable method for predicting small watershed peak flows, Linsley suggested that the last dregs of the grant money be spent to see whether the ‘digital computer’ in the Electrical Engineering Department could be used.

The computer was an IBM 650, a machine about the size of a four-drawer filing cabinet. It was one step up from the ‘tabulating machines’ that IBM used to sort punched cards. Tabulating machines for punched cards were an IBM innovation and an established business. The IBM 650 cost $200 per hour to use (about $1,200 per hour in 2003 dollars), but it could be used for free by signing up as part of a grant given to Electrical Engineering.


In the spring of 1959 the buildings and grounds department at Stanford asked Linsley to investigate the benefits for campus irrigation water supplies of excavating trapped sediment at Searsville Dam on San Francisquito Creek. The two small Searsville and Felt Lake Dams supply irrigation water to the campus. Linsley asked Crawford and Dan Sokol (a geology Ph.D. student) to work on that project over the summer. Crawford wrote a crude water balance model for Los Trancos Creek, a San Francisquito Creek tributary, on the IBM 650; the results were later published by the International Association of Scientific Hydrology (IASH). The project included recovery of ground water seepage from Lake Lagunita on campus, so Sokol and Crawford spent a couple of nights sleeping outside monitoring drawdown tests on shallow wells.

The Los Trancos Creek model was much too crude, as it used a daily time step and did not have enough structural elements to follow processes adequately. Crawford chose to follow up on that work for his Ph.D., building a comprehensive structure that was as physically based as possible and operated on a short time step (hourly). Such an approach would be practical only on stored program computers. Linsley had the remarkable skill of letting his younger colleagues have complete freedom to follow their interests, but no detail escaped his attention.

In 1960 Stanford acquired an advanced mainframe computer, a Burroughs 220. It had tape drives to handle input and output (I/O), and it had a modern ALGOL compiler. Crawford worked on writing the software for two and a half years, including the summers of 1960 and 1961, with financial support from Stanford. Funding agencies then, as now, were reluctant to take risks. “Hydrologic Modeling” was probably too speculative to explain to outside sponsors. Many NSF grants followed, but those came only after the initial work on modeling was judged by others to be successful.

The goal was to model hydrologic processes continuously in time (infiltration, soil moisture, actual evapotranspiration, channel flow hydraulics), and to include all of the interacting processes within the same structure. An irreverent schematic of the Stanford Watershed Model by Steven Gorelick and David Storestrom would later find a place on the office wall of the Chief Hydrologist of the U.S. Geological Survey in Reston, Virginia. Linsley knew the broad approach that was under way but did not know the algorithms or model structure the project was developing. Like many programmers of the day, Crawford wrote code first and documentation last. Linsley often wondered in 1961 and early 1962 why months were spent in the computer center creating stacks of green fanfold code printouts without any streamflow results.


It is not possible to say how much of the structure of this model was ‘new.’ Existing concepts were used wherever possible. Overland flow, interflow and ground water or base flow, infiltration and soil moisture, and evaporation and transpiration were all well recognized and were documented by both field studies and analysis before 1960. The integration of all of these processes and their calculation on a short (hourly) time step was new. The only similar integration at the time was Sugawara’s ‘tank model’ (see Sugawara et al., 1984, for an English language description). The tank model, however, did not attempt explicitly to relate parameters to the physical characteristics of watersheds.

In retrospect, the use of “nominal” soil moisture storages and the continuous variability of assignments of water to moisture storages were unique. The use of cumulative frequency distributions for infiltration rates at a point in time to model areal infiltration and evaporation were also unique. Soil water storage was broken into two zones, an upper zone (UZS), which was a relatively shallow zone where water could be removed by gravity drainage or evaporation, and lower zone storage (LZS), where water could be removed by gravity drainage and by transpiration. The upper and lower zone storages, despite their names, were defined by their behavior rather than by their physical location.
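The behavioral definition of the two zones can be sketched in a few lines of modern code. The sketch below is illustrative only: the names UZS, LZS, and LZSN follow the text, but the rate equations are invented placeholders, not the Stanford Model’s actual algorithms.

```python
# Minimal sketch of a two-zone continuous moisture-accounting step.
# UZS/LZS/LZSN follow the paper's naming; the rate equations are
# hypothetical placeholders, not the Stanford Model's algorithms.

def moisture_step(rain, pet, uzs, lzs, lzsn=6.0):
    """Advance upper- and lower-zone storages (inches) by one time step."""
    # Infiltration into the lower zone declines as it fills; the
    # dimensionless wetness index LZS/LZSN controls the rate.
    infil = rain / (1.0 + lzs / lzsn)
    surface = rain - infil          # remainder stays near the surface
    uzs += surface
    lzs += infil

    # Evaporation drains the shallow upper zone first; transpiration
    # then draws on the lower zone at a reduced rate.
    evap = min(pet, uzs)
    uzs -= evap
    lzs -= min(pet - evap, 0.5 * lzs)
    return uzs, lzs

# Three hourly steps of (rain, potential evapotranspiration), in inches.
uzs, lzs = 0.2, 3.0
for rain, pet in [(0.5, 0.1), (0.0, 0.2), (1.2, 0.05)]:
    uzs, lzs = moisture_step(rain, pet, uzs, lzs)
```

The point of the sketch is the bookkeeping, not the rates: every increment of water is assigned continuously to one storage or another, and the zones are distinguished by how water leaves them.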

The ideas of “parameters” as fixed values related to watershed characteristics, and of “calibration,” in which selected parameter values are determined from measured data, were borrowed from hydraulic models and from existing hardwired analog ground water models. These ideas were influenced by papers in a periodical called “Simulation” that included papers on simulation from many different fields – medicine, physics, social sciences, etc. A course in Mechanical Engineering on dimensional analysis, and J.K. Vennard’s interests in dimensionless indices, prompted making the model indices dimensionless, so infiltration rates, for example, were linked to LZS/LZSN, the actual lower zone storage amount divided by a calibration parameter, the nominal lower zone soil moisture storage (LZSN). The choice of variable acronyms was influenced by restrictions on variable name length in the Burroughs ALGOL (BALGOL) language, a restriction that modern compilers do not impose.

The Stanford Watershed Model (Version II) published as Crawford’s Ph.D. thesis in July 1962 attracted some, but not great attention. Linsley recognized immediately the potential for computer based modeling and became a major advocate. He saw how modeling could influence a wide range of water resources activities. Graduate students were encouraged to work on thesis topics that extended the scope of the modeling or used modeling results to investigate water resource issues.

Reactions to simulation modeling beyond Stanford varied; NSF support became generous and many invitations to speak at conferences were received. Still, engineering hydrology was not affected: The Stanford Model was viewed as “good research” without any obvious practical application. A major objection was to ‘calibration.’ A lot of discussion took place about the legitimacy of adjusting parameters to fit observed streamflow, and about the merit (or lack of merit) of writing algorithms for processes that could not be solved mathematically in ‘closed form.’ Model calibration is now routinely done in all fields of hydrologic modeling. Extensions include propagation of model parameter and data uncertainty into prediction of hydrologic states and fluxes. Burges (2003) discusses many aspects of model selection, structure, calibration, and uncertainty propagation.


In the decade following 1962 the “Stanford Hydrology Program” was very active. Topics investigated in Ph.D. theses included transport of radionuclides, sediment erosion and transport, snow accumulation and melt, simulation of water quality, optimization of model parameters, stochastic generation of rainfall, requirements of hydro-meteorological networks, reservoir reliability, full equations routing using finite difference methods, and infiltration analysis. In much of this work, computers were used to solve problems in innovative ways – they were not used to program engineering methods that predated 1960. Instead basic processes were considered and solutions were devised to use the computational power of mainframes. During this period the hydrology group at Stanford was the second-largest user of computer time on campus after high energy physics.


The basic Stanford Watershed Model (version II) was revised in 1966. In keeping with the invention process, Versions III and V were not published. By 1966 most universities had computers. More than 10,000 copies of the Stanford IV report (Crawford and Linsley, 1966) were distributed. In the summer of 1966 a two-week conference was held at Stanford attended by approximately 40 hydrology professors. Many of the attendees later taught courses on the Stanford Model at their own institutions.

In Stanford Watershed Model IV, more attention was given to making model indices nondimensional, to making model parameters as independent as possible, and to reducing the number of parameters found by calibration. Model innovations were driven in part by advancing computer technology. For example, kinematic wave routing for channel flows became feasible in later versions only after direct access disk drives were introduced. Sequential tape drives were too slow, as they could not handle the random requests for data that kinematic wave routing required. At each stage of development the model structure was adjusted to take advantage of computer technology and increasingly improved compilers.


Simple hydrologic models that are not calibrated are often used for engineering design purposes. Alan Lumb and Doug James, then both at Georgia Tech, recognized the need for improved hydrologic design, particularly in urban settings where few data have been collected at the scale of interest, and introduced “runoff files for hydrologic simulation” based on the Stanford Model (Lumb and James, 1976). They performed calibrated simulations for various soil types and land covers and provided continuously simulated hydrographs per unit contributing area for design. All a user had to do was scale the unit area runoff time series by the area. This had the advantage that any regulatory authority could conduct rapid checks on the adequacy of submitted designs for urban drainage facilities. This approach was taken further in King County, Washington, and is the underpinning of the King County Urban Storm Water Design Manual. Jackson et al. (2001) describe the technical and sociopolitical setting of implementing this approach.

A major problem in the practice of hydrologic engineering is determining nonpoint source loads and estimating total maximum daily loads (TMDLs) of pollutants. The successor to the Stanford Model, HSPF, provides the basis for the U.S. EPA’s “BASINS” approach to this vexing problem.


The “Stanford Watershed Model” is still discussed 40 years after its initial publication, if the modern internet search engine Google is any guide. A search for “Stanford Watershed Model” gives 14,500 hits, and a search for “HSPF,” a successor FORTRAN version, gives 15,700 hits. A search for the ubiquitous ground water flow model developed by the U.S. Geological Survey, “MODFLOW,” gives 22,200 hits. A search for the generic term “hydrologic modeling” gives 84,000 hits. These are impressive numbers that reflect the influence of the Stanford Model on hydrologic practice.


Stephen J. Burges
160 Wilcox Hall
University of Washington Seattle, WA 98195-2700
(206) 543-7135 / Fax: (206) 685-3836

Norman H. Crawford, the co-founder of Hydrocomp, is a consulting Hydrologist and a compulsive buyer of the latest and greatest computer gadgetry. He is the recipient of the Ray K. Linsley Award of the American Institute of Hydrology and the Ven Te Chow Award of ASCE.

Stephen J. (Steve) Burges, Professor of Civil and Environmental Engineering, University of Washington, took his first graduate hydrology course with Norm Crawford in 1967. His research interests are eclectic and he has a deep interest in the history of science. He is a past President of the Hydrology Section of AGU. Ray Linsley was his doctoral advisor.