Tenaya Lake, CA
Project: Tuolumne Watershed
Hydrologic software for forecasting reservoir inflows and streamflows, and software for analyzing reservoir inflow and demand probabilities to optimize reservoir operations, have traditionally been distributed by download and installed on local computers.
Maintaining this type of software is challenging as computer operating systems and hardware evolve. Reservoir operators typically do not have the time or qualifications to update operations software.
The first World Wide Web conference was held at CERN in 1994. A worldwide explosion in web capabilities and access followed, and reliable high-speed data communication is now ubiquitous.
It is now feasible to run hydrologic and optimization software for reservoir operations as a "distributed computing" application, with the work shared between virtual servers and local computers.
Local computers (including SCADA systems) maintain meteorological and operational data series (reservoir levels, reservoir demands and constraints, weather forecasts, precipitation, temperatures, streamflows, etc.). Physical characteristics of watersheds and civil works (reservoir elevation-storage data, hydro plant capacities) are stored locally. These data may or may not be in an SQL database.
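As an illustration of the kind of local time-series storage described above, here is a minimal sketch using SQLite. The table and column names are hypothetical, chosen for the example; they are not the HFAM II schema.

```python
import sqlite3

# Minimal sketch of a local hydrologic time-series store.
# Table and column names are hypothetical, not the HFAM II schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observations (
        station_id  TEXT NOT NULL,      -- gauge or sensor identifier
        variable    TEXT NOT NULL,      -- e.g. 'precip_mm', 'flow_cms'
        obs_time    TEXT NOT NULL,      -- ISO-8601 timestamp
        value       REAL,
        PRIMARY KEY (station_id, variable, obs_time)
    )
""")
conn.execute(
    "INSERT INTO observations VALUES (?, ?, ?, ?)",
    ("TUOLUMNE-01", "flow_cms", "2012-05-01T06:00:00", 42.7),
)
rows = conn.execute(
    "SELECT value FROM observations WHERE station_id = 'TUOLUMNE-01'"
).fetchall()
print(rows)  # [(42.7,)]
```

A composite primary key of station, variable, and timestamp keeps each observation unique and makes incremental exchange with a server straightforward.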
Watershed initial conditions (snowpacks, reservoir levels) are modeled and stored on servers. Initial conditions can be updated with observed data stored on local computers.
Server data mirror local data, and data are exchanged frequently.
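The frequent local-server exchange can be pictured as a merge in which the newest record for each quantity wins. A minimal sketch follows; the record layout and merge rule are hypothetical illustrations, not the actual HFAM II exchange protocol.

```python
# Minimal sketch of timestamp-based mirroring between a local store and a
# server store. Record layout and merge rule are hypothetical illustrations,
# not the HFAM II exchange protocol.

def mirror(local, server):
    """Merge two {key: (timestamp, value)} stores; the newest record wins.
    Returns the merged store that both sides would then hold."""
    merged = dict(local)
    for key, (ts, value) in server.items():
        # ISO-8601 timestamps compare correctly as strings.
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

local = {
    "reservoir_level": ("2012-05-01T06:00", 1432.1),  # newer local reading
    "snowpack_swe":    ("2012-04-30T12:00", 0.85),
}
server = {
    "reservoir_level": ("2012-05-01T00:00", 1431.8),  # older server copy
    "snowpack_swe":    ("2012-05-01T06:00", 0.83),    # newer server estimate
}
synced = mirror(local, server)
print(synced["reservoir_level"])  # ('2012-05-01T06:00', 1432.1)
print(synced["snowpack_swe"])     # ('2012-05-01T06:00', 0.83)
```

After the merge, both sides hold the same, most recent view of each quantity, which is the effect the mirroring described above is meant to achieve.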
HFAM II is maintained and updated as necessary by Hydrocomp so the software remains current.
Assistance with software, or with model input and results, can be provided efficiently. When data reside on local computers, remote access to modeling data is limited and erroneous data are not easily found. Obsolescent software on local computers seriously limits assistance, a limitation that is absent when server software is continuously updated.
Local computers transfer data, make model runs, and view results via Remote Desktop Connections to servers. HFAM II runs on high-reliability virtual servers (Hyper-V, Microsoft Windows Server 2008 Web Edition/IIS7 at myhosting.com).
Server data for each watershed are kept in separate, password-protected files.
Virtual server access to HFAM Web for hydrologic analysis, real-time reservoir operations, and engineering applications is now available.
Please request a Hyper-V password by sending a message via our Facebook page, HydrocompIncFacebook. Alternatively, an email request can be sent to firstname.lastname@example.org with HFAM II Web Access in the subject line.
HFAM lists each parameter under the hydrologic process that it primarily affects. Parameters interact, particularly those affecting streamflow volumes; this interaction occurs both in nature and in modeling.