
2.0.2 Monitoring Program Design: Introduction


Design is a plan for arranging elements in such a way as best to accomplish a particular purpose. — Charles Eames

Your monitoring design should specify where, when, and how you will collect information, as well as how you will analyze it to achieve your goals and objectives. Because developing a clearly articulated and comprehensive monitoring design can be complicated, we have developed a standardized structure within which all elements of the monitoring design can be described and documented.



We use the acronym STRIDE to describe the design structure we use on this website. STRIDE involves the following four design components:

  • Spatial design (how we select monitoring sites)
  • Temporal design (how we select when we monitor)
  • Response design (what and how we measure)
  • Inference design (how we analyze the data)

We chose to develop the STRIDE nomenclature because it is comprehensive and some of its elements are well established by monitoring practitioners in the U.S. Pacific Northwest (e.g., Stevens and Urquhart 2000). Other frameworks also exist, such as the project operational planning approaches used in Alaska (e.g., Bernard et al. 1993). Project operational planning corresponds closely with STRIDE, although that correspondence might be hard to see at first because there is so little overlap in the technical terms and concepts used to describe the planning steps and resulting statistics.

Measurements, Metrics, and Indicators

Another important nomenclature that we use in the design process is derived from Stevens and Urquhart (2000). It gives specific names to numerical quantities at each of three steps as they pass from data collected in the field to final processed parameter estimates.

  • The term measurement describes a value resulting from a field data collection event. Each field measurement is taken at a particular time and place. The field data collection protocols are described in what is called the response design.
  • At the next higher level of organization is the metric, a value resulting from the reduction or processing of measurements at a site or over a unit of time or space (i.e., metrics are site-scale values for the sampling period). The process of developing the metrics is also described in the response design.
  • At the level of organization that reaches up to the original objectives of the study, the indicator is the value resulting from the processing of metrics across sites or across time. Indicators are population-scale values for the sampling period. The methods for calculating the indicators are described in the inference design.

There may be some overlap between these three numerical designations because some quantities called measurements can also serve as metrics or even indicators in the same study. Nonetheless, you will find these terms useful for organizing the process of data collection, initial data reduction, and final statistical reporting.
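The measurement-to-metric-to-indicator progression can be sketched in a few lines of code. This is an illustrative example with hypothetical site names and counts, not data or methods from any actual monitoring program:

```python
# Sketch of the three-step reduction: measurements -> metrics -> indicator.
# Site names and counts below are hypothetical, purely for illustration.
from statistics import mean

# Measurements: raw field values, each tied to a site and a survey visit
# (here, imagined fish counts from repeat visits within one season).
measurements = {
    "site_A": [12, 15, 9],   # counts from three visits to site A
    "site_B": [4, 7],        # counts from two visits to site B
    "site_C": [22, 18, 25],  # counts from three visits to site C
}

# Metric: one value per site for the sampling period (a response design
# step); here we simply take the mean count per visit at each site.
metrics = {site: mean(values) for site, values in measurements.items()}

# Indicator: one population-scale value across sites for the period (an
# inference design step); here, the mean of the site-scale metrics.
indicator = mean(metrics.values())

print(metrics)    # site-scale values
print(indicator)  # population-scale value
```

In a real program the reduction steps would be whatever the response and inference designs specify (e.g., area-under-the-curve expansions or design-weighted estimators rather than simple means), but the bookkeeping structure is the same.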

Click here to see a specific example of how this structure and nomenclature are used in the case of Oregon coastal coho salmon monitoring.

Integrating Design Components

When designing a monitoring program, it is important to consider all design components together, because a fixed total cost must be balanced among the individual components and among potentially competing objectives.


In designing the monitoring program, you will face an allocation problem. You will have a fixed budget for collecting data at each spatial unit (e.g., a site) during each temporal unit (e.g., a year), and an expected budget for the study period. You will know how much it costs to collect the data at a site (including travel costs). Your challenge will be to optimize the allocation of sampling across the potential set of spatial and temporal units during the study period. It might also be important to allocate effort to sampling within the temporal window, both to obtain an adequate estimate of your metric and to evaluate the variation introduced by your response design.
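The budget arithmetic behind this allocation problem can be made concrete with a minimal sketch. All the numbers and the `sites_per_year` helper below are assumptions invented for illustration, not figures from any real program:

```python
# Minimal sketch of the sampling-allocation arithmetic described above.
# All dollar figures and the helper function are hypothetical.
def sites_per_year(annual_budget, cost_per_site_visit, fixed_overhead=0.0):
    """Number of sites that can be sampled in one year within budget."""
    usable = annual_budget - fixed_overhead
    if usable <= 0:
        return 0
    return int(usable // cost_per_site_visit)

# Assumed example numbers:
annual_budget = 120_000.0   # dollars available per year
cost_per_visit = 2_500.0    # field crew + travel per site visit
overhead = 20_000.0         # data management, analysis, reporting
study_years = 5

n_sites = sites_per_year(annual_budget, cost_per_visit, overhead)
total_visits = n_sites * study_years
print(n_sites, total_visits)  # sites per year, site-visits over the study
```

The real optimization is harder than this division: the same total number of site-visits can be spread over many sites visited rarely or few sites visited often, and which split is better depends on the spatial and temporal components of variance discussed below.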

The following considerations are important for optimizing this allocation of sampling effort:

  • Degree of certainty - The level of confidence that you must have in the results of your monitoring program plays a significant role in determining the appropriate design. In general, the degree of certainty in monitoring results is lowest for opportunistic designs, intermediate for model-based and survey designs, and highest for census designs. It is lowest for opportunistic designs because the non-statistical nature of sample site selection makes it difficult or impossible to assess how well the chosen sites represent the domain for which inferences are intended: you cannot determine the precision or bias of inferences to entire populations made from data collected at opportunistic sample sites. The degree of certainty is intermediate for model-based and survey-based spatial designs because they depend on a statistical sample with its associated uncertainty; in addition, model-based designs can be subject to unknown uncertainties associated with model assumptions. The degree of certainty is highest for census designs because all members of the target population are sampled (either via a fixed counting station or by sampling at all sites in the domain), leaving no sampling uncertainty and no assumptions about the representativeness of selected sites.
  • Cost - The cost of a design generally varies directly with its degree of certainty. While the high degree of certainty provided by a complete census may be attractive, in many cases the cost of conducting a census over a large geographic area or for the entire study period will be prohibitive. Because of the myriad factors involved in estimating costs for different types of monitoring designs, we do not attempt to provide explicit cost guidance here. However, it is important to adopt a design that is within the available budget. This may mean taking a hard look at, and potentially revising, your objectives for the degree of certainty you can obtain, given the spatial, temporal, and response designs that fall within your budget.
  • Feasibility - A design that achieves your desired degree of certainty within your budget may still be infeasible due to extenuating circumstances. For example, if you will be denied access to a significant portion of private lands in your study area, you may need to revise your monitoring goals and objectives to recognize that restriction.
  • Existence of a verified model - Choosing a model-based design will obviously not be an option if you lack an appropriate model to guide your site selection process.
  • Flexibility - Over the life of a monitoring program, goals and objectives, monitoring technologies, allocated budgets, or other constraints commonly change. Some designs are more amenable than others to the modifications needed to meet these new challenges. For example, an initial objective that requires an abundance estimate over a prescribed monitoring region might be changed to one that requires abundance estimates for specific populations within that region. A spatial/temporal design that allows you to add or subtract sites without biasing your results is more desirable than one that requires an entirely new design.

A framework for balancing these competing choices in designing a monitoring program consists of:

  • understanding the influence of spatial and temporal components of variability in your data
  • evaluating the accuracy of your estimates
  • evaluating the statistical power of the design (i.e., the chance of correctly detecting some situation)
  • evaluating costs

The following citations provide a good foundation for the concept of power (the chance of detecting an effect if the effect is present) and how it has been applied to natural resource monitoring:

Fairweather, P. G. 1991. Statistical power and design requirements for environmental monitoring. Australian Journal of Marine and Freshwater Research 42:555-567.

Gerow, K. G. 2007. Power and sample size estimation techniques for fisheries management: assessment and a new computational tool. North American Journal of Fisheries Management 27:397-404.

Hatch, S. A. 2003. Statistical power for detecting trends with applications to seabird monitoring. Biological Conservation 111:317-329.

Link, W. A., and J. S. Hatfield. 1990. Power calculations and model selection for trend analysis: a comment. Ecology 71:1217-1220.

Peterman, R. M. 1990. Statistical power analysis can improve fisheries research and management. Canadian Journal of Fisheries and Aquatic Sciences 47:2-15.
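The idea of power for trend detection can be illustrated with a simple Monte Carlo sketch. The approach below (a linear decline in log-abundance, observation noise, and a regression slope test with a large-sample normal approximation) and all of its parameter values are assumptions chosen for illustration, not a procedure from the references above:

```python
import numpy as np

# Monte Carlo sketch of power to detect a trend: simulate annual
# log-abundance with a linear decline plus observation noise, fit a
# regression, and count how often the trend is flagged as significant.
# All parameter values are hypothetical.
rng = np.random.default_rng(42)

def trend_power(n_years=15, slope=-0.05, sigma=0.3, n_sims=2000, z=1.96):
    """Fraction of simulations in which the fitted slope is significant
    (two-sided, normal approximation to the slope t-test)."""
    t = np.arange(n_years)
    detected = 0
    for _ in range(n_sims):
        y = slope * t + rng.normal(0.0, sigma, n_years)  # log-abundance
        b, a = np.polyfit(t, y, 1)                       # OLS slope, intercept
        resid = y - (a + b * t)
        # Standard error of the slope: sqrt(MSE / sum((t - mean(t))^2))
        se = np.sqrt(resid.var(ddof=2) / ((t - t.mean()) ** 2).sum())
        if abs(b / se) > z:
            detected += 1
    return detected / n_sims

power = trend_power()
print(round(power, 2))
```

Varying `n_years`, `slope`, or `sigma` in such a sketch shows the trade-offs the citations above formalize: longer time series and lower observation error both raise the chance of detecting a real decline.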

Spatial Scale Considerations

The spatial scale at which you design your monitoring program can have a significant influence on the inferences you can make from the information you collect. In general, monitoring to determine causal mechanisms related to the impacts of climate change will be more successful if it is conducted across large spatial scales.


Designs of salmon monitoring programs can be applied at two spatial scales, one small and one large. The most common scale is the relatively small one in which agencies and NGOs plan for, implement, and conduct coordinated monitoring programs across relatively confined local or regional areas (e.g., the interior of British Columbia, the coast of Oregon, Puget Sound, or particular reaches of a river such as the Columbia, to name but a few). These programs are coordinated within such spatial domains in the sense that the spatial/temporal sampling locations and frequencies are chosen to produce appropriate replication, representativeness, extent of coverage, and other desirable features of design.

However, questions about climate-driven mechanisms are more likely to be answered by salmon monitoring programs that require coordinated planning and effort at a much larger spatial scale, such as across entire administrative regions like states and provinces or even across countries. One common key monitoring objective for salmon is to learn more about the relative importance of different causal mechanisms behind observed changes in salmon indicators. Such improved knowledge can potentially lead to appropriate management actions that will mitigate or reverse the causes of a deteriorating situation for salmon. Thus, for a monitoring objective that includes learning about mechanisms of change in salmon, some comparison groups of monitored sites will need to be established. Those comparison groups will either have experienced past differences in one or more explanatory variables, such as the extent of detrimental human activities or climatic change, or be expected to show such contrasts in the future (either due to natural differences or deliberate experiments by scientists).

A key question on the west coast of North America is the relative importance of climatic change for salmon populations compared with human activities that influence uses of land and water. Addressing this question will likely require coordinated monitoring efforts across hundreds if not thousands of kilometers.

We discuss designs of monitoring programs on this web site at both small and large spatial scales. However, large-scale designs are only discussed here in relation to answering questions about the mechanisms that cause changes in indicators of salmon populations. Regardless, we emphasize for readers that another benefit of such large-scale coordination of monitoring designs is clearer identification of changing trends in status of salmon populations. This benefit arises because, due to random sampling error and observation error, consistent patterns of change across multiple populations that are caused by a shared mechanism are more likely to become obvious in large-scale data than in small-scale data on one or only a few populations. Observation errors tend to get averaged out across large numbers of sampled populations (Thompson and Page 1989).
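The averaging-out of observation errors can be demonstrated with a short simulation. The numbers below (a shared 5%/yr decline, 50 populations, and the noise level) are assumptions made up for illustration:

```python
import numpy as np

# Sketch: a shared decline is hard to see in any single population's noisy
# series, but emerges when many monitored populations are averaged.
# All parameter values are hypothetical.
rng = np.random.default_rng(1)

n_years, n_pops = 10, 50
t = np.arange(n_years)
true_log_trend = -0.05 * t                      # shared mechanism: 5%/yr decline

# Observed log-abundance: shared trend + independent observation error
obs = true_log_trend + rng.normal(0.0, 0.4, size=(n_pops, n_years))

one_pop_slope = np.polyfit(t, obs[0], 1)[0]     # noisy single-population estimate
mean_series = obs.mean(axis=0)                  # average across populations
pooled_slope = np.polyfit(t, mean_series, 1)[0] # much closer to the true -0.05

print(round(one_pop_slope, 3), round(pooled_slope, 3))
```

The single-population slope estimate can land far from the true trend, while the pooled estimate's standard error shrinks roughly with the square root of the number of populations, which is the intuition behind the Thompson and Page (1989) point above.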

Proceeding with the Design Step

Just as you did when developing the goals and objectives of your monitoring program, you will now need first to decide whether you are primarily interested in gathering information to determine the status and/or trend in salmon indicators, or to understand the causal mechanisms responsible for the observed status and/or trend. Making this distinction is important because, although many of the design steps are similar for these two purposes, designing a monitoring program to provide information on mechanisms involves additional considerations specific to experimental designs (i.e., using contrasting treatments, controls, etc.) or observational designs (i.e., comparing salmon and environmental variables at different times or places).

If you wish to design a program that aims both to estimate status and/or trend and to understand causal mechanisms, then you will need to go through these seven steps twice, once for each objective.

Once you have decided the primary purpose of your monitoring program, click on the appropriate option below.

Next: Status and trend design | Mechanisms design | Go Back
