employed during sampling. The authoritative source for this procedure and for most stream
measurements is the U.S.G.S.
Once sampling is completed successfully, the concentration data may be combined with the
flow data to produce flow-weighted quantities, average concentrations, etc. Such metrics for all inputs
can then be combined to predict the loading to a lake. At the same time, outflows treated in the same
manner can be subtracted from the loading estimate. The process is as simple as balancing a bank
account and can be used to infer the loss or accumulation of materials to or from the lake.
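The bank-account arithmetic described above can be sketched in a few lines. This is a minimal illustration, not a prescribed procedure: the station names, units (flow in cubic meters per day, total phosphorus in mg/L), and all numerical values are hypothetical.

```python
# Minimal sketch of flow-weighted concentration and mass-balance
# loading. Flows are in m^3/day, concentrations in mg/L, so
# q * c gives grams per day (1 m^3 = 1000 L); values are illustrative.

def flow_weighted_mean(flows, concs):
    """Flow-weighted mean concentration (mg/L)."""
    return sum(q * c for q, c in zip(flows, concs)) / sum(flows)

def total_load_kg(flows, concs):
    """Total mass load in kg: m^3/day * mg/L = g/day; /1000 -> kg."""
    return sum(q * c for q, c in zip(flows, concs)) / 1000.0

inflow_q = [8.0e4, 1.2e5, 9.0e4]   # m^3/day on three sampling days
inflow_c = [0.05, 0.12, 0.07]      # mg/L total phosphorus
outflow_q = [7.5e4, 1.1e5, 8.5e4]
outflow_c = [0.04, 0.06, 0.05]

load_in = total_load_kg(inflow_q, inflow_c)
load_out = total_load_kg(outflow_q, outflow_c)
net = load_in - load_out  # positive -> the lake is accumulating material
```

Summing such estimates over all inflows, and subtracting all outflows, gives the net balance for the lake as a whole.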
Data Management Considerations
Obviously these assessment efforts proceed best with large quantities of data, whether drawn
from pre-existing sources or collected during the assessment studies themselves. Data management must
contend with a large number of issues, all of which matter to a successful assessment.
Data will be available from different sources, in different forms, and in different quantities.
Continuous thermal monitors may quickly and easily produce megabytes of thermal data in a string
format. At the other extreme, certain biological analyses involving enzymatic reactions or isotopes may
produce just a few estimates, the results of enormous effort. Operational data from a dam inflow may
be available on printouts or in digital spreadsheet format. Decisions on how to manage these diverse
sources and sizes must be made well before the data arrive. The data should be available to all
parties to the studies, which requires either that all parties employ identical hardware and software
or that the data be kept in a form that can be imported universally. Fortunately, there are ways to
accomplish the latter.
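One long-standing way to keep data importable by every party is plain delimited text, which nearly any spreadsheet, database, or statistics package can read. A minimal sketch using Python's standard csv module follows; the column names and values are hypothetical, and a string stands in for what would ordinarily be a file on disk.

```python
# Sketch: exchanging monitoring data as plain comma-separated text.
# File contents and column names are hypothetical.
import csv
import io

# In practice this would be open("temps.csv"); a string stands in here.
raw = io.StringIO(
    "date,depth_m,temp_c\n"
    "1995-07-01,1,24.6\n"
    "1995-07-01,5,18.2\n"
)

rows = list(csv.DictReader(raw))
# Each record is now a plain dictionary keyed by column name.
temps = [float(r["temp_c"]) for r in rows]
```

Because the format carries its own column headers and uses only printable characters, it survives transfer between otherwise incompatible hardware and software.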
Today, data management requires computer assistance, and there are three basic computational
approaches. In the first, all data reside in a comprehensive file managed by a
sophisticated program such as SAS (Statistical Analysis System) or SPSS. These packages
can manage a database, plot the data graphically, and analyze it statistically, all through a
programming language that allows processes needed on a periodic basis to be programmed once
and repeated.
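The value of such scripting is that a routine summary can be rerun unchanged whenever a new batch of data arrives. The sketch below uses Python's standard statistics module as a stand-in for a SAS or SPSS procedure; the function and variable names, and the data, are hypothetical.

```python
# Sketch of a scripted, repeatable summary: the kind of routine one
# would program once and rerun each reporting period. Values are
# illustrative.
import statistics

def batch_summary(values):
    """Return the same summary statistics for any batch of data."""
    return {
        "n": len(values),
        "mean": statistics.mean(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
    }

july = [24.6, 23.9, 25.1, 24.2]    # e.g., surface temperatures, deg C
august = [26.0, 25.4, 26.8]
reports = {month: batch_summary(v)
           for month, v in [("July", july), ("August", august)]}
```

The same function produces identical reporting for every period, which is exactly the programmed repetition the full-featured packages provide.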
The second approach employs a database program designed solely for that purpose, with
few capabilities beyond manipulation of the database itself. The power of such programs lies in their
ability to move, sort, and query large bodies of data; examples include Paradox and Access.
Their advantage is their small size compared with the full-featured packages of the first approach; their
disadvantage is that they can provide little more than rudimentary graphics or analysis.
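The store-sort-query pattern these programs support can be sketched with SQLite, available through Python's built-in sqlite3 module, standing in for a desktop database such as Access. The table name, column names, and values are hypothetical.

```python
# Sketch of the database-only approach: store records, then sort and
# query them. Table, columns, and data are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE samples (station TEXT, date TEXT, tp_mg_l REAL)")
con.executemany(
    "INSERT INTO samples VALUES (?, ?, ?)",
    [("Inlet A", "1995-07-01", 0.05),
     ("Inlet B", "1995-07-01", 0.12),
     ("Outlet",  "1995-07-01", 0.04)],
)

# Query: which stations exceed 0.05 mg/L total phosphorus, highest first?
rows = con.execute(
    "SELECT station, tp_mg_l FROM samples "
    "WHERE tp_mg_l > 0.05 ORDER BY tp_mg_l DESC"
).fetchall()
```

Sorting and filtering of this kind scale to very large bodies of data, but anything beyond it, plotting or statistical analysis, must be done after exporting to another program.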