Comparative Study
Int J Hyg Environ Health. 2018 Jul;221(6):907-920. doi: 10.1016/j.ijheh.2018.05.010. Epub 2018 May 31.

Why do water quality monitoring programs succeed or fail? A qualitative comparative analysis of regulated testing systems in sub-Saharan Africa


Rachel Peletz et al. Int J Hyg Environ Health. 2018 Jul.

Abstract

Background: Water quality testing is critical for guiding water safety management and ensuring public health. In many settings, however, water suppliers and surveillance agencies do not meet regulatory requirements for testing frequencies. This study examines the conditions that promote successful water quality monitoring in Africa, with the goal of providing evidence for strengthening regulated water quality testing programs.

Methods and findings: We compared monitoring programs among 26 regulated water suppliers and surveillance agencies across six African countries. These institutions submitted monthly water quality testing results over 18 months. We also collected qualitative data on the conditions that influenced testing performance via approximately 821 h of semi-structured interviews and observations. Based on our qualitative data, we developed the Water Capacity Rating Diagnostic (WaterCaRD) to establish a scoring framework for evaluating the effects of the following conditions on testing performance: accountability, staffing, program structure, finances, and equipment & services. We summarized the qualitative data into case studies for each of the 26 institutions and then used the case studies to score the institutions against the conditions captured in WaterCaRD. Subsequently, we applied fuzzy-set Qualitative Comparative Analysis (fsQCA) to compare these scores against performance outcomes for water quality testing. We defined the performance outcomes as the proportion of testing Targets Achieved (outcome 1) and Testing Consistency (outcome 2) based on the monthly number of microbial water quality tests conducted by each institution. Our analysis identified motivation & leadership, knowledge, staff retention, and transport as institutional conditions that were necessary for achieving monitoring targets. In addition, equipment, procurement, infrastructure, and enforcement contributed to the pathways that resulted in strong monitoring performance.

Conclusions: Our identification of institutional commitment, comprising motivation & leadership, knowledge, and staff retention, as a key driver of monitoring performance was not surprising: in weak regulatory environments, individuals and their motivations take on greater importance in determining institutional and programmatic outcomes. Nevertheless, efforts to build data collection capacity in low-resource settings largely focus on supply-side interventions: the provision of infrastructure, equipment, and training sessions. Our results indicate that these interventions will continue to have limited long-term impact and sustainability without complementary strategies for motivating or incentivizing water supply and surveillance agency managers to achieve testing goals. More broadly, our research demonstrates both an experimental approach for diagnosing the systems that underlie service provision and an analytical strategy for identifying appropriate interventions.

Keywords: Institutional capacity; Monitoring; Qualitative comparative analysis; Water quality; Water testing.


Figures

Fig. 1
Data collection flow. This figure describes the different steps in data collection and data analysis. The two performance outcomes include (1) Targets Achieved, the overall percent of testing targets met throughout the program duration and (2) Testing Consistency, the percentage of months during which testing targets were met. MfSW = Monitoring for Safe Water; WaterCaRD = Water Capacity Rating Diagnostic; fsQCA = fuzzy-set Qualitative Comparative Analysis.
Fig. 2
Scatterplots of WaterCaRD scores versus water quality monitoring performance. Each dot represents a testing institution. The WaterCaRD score encompasses ratings on the institutional conditions selected for the QCA analysis as a proportion of total available scores. We measured monitoring performance using two indicators: (a) Targets Achieved (outcome 1), and (b) Testing Consistency (outcome 2). These outcomes were calculated as (a) Targets Achieved = total tests conducted / total number of tests required to achieve testing targets since testing commenced; and (b) Testing Consistency = months meeting 100% of targets / total months since formal collaborating agreements signed. We differentiated performance levels into two categories based on the cross-over point (dashed lines), determined by our qualitative knowledge and experience of the MfSW program. For Targets Achieved, 17 institutions were considered high performers (≥70%, the cross-over point); the remaining nine were classified as low performers. For Testing Consistency, 10 institutions were considered high performers (≥50%, the cross-over point); the remaining 16 were classified as low performers.
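The two outcome formulas in this caption can be sketched in a few lines of Python. The monthly figures below are illustrative placeholders, not data from the study; the 70% and 50% cross-over points are the ones the authors report.

```python
# Hypothetical monthly records for one institution (illustrative values only).
conducted = [10, 12, 8, 12, 12, 6]    # microbial tests performed each month
required  = [12, 12, 12, 12, 12, 12]  # monthly testing target

# Outcome 1: Targets Achieved = total tests conducted / total tests required.
targets_achieved = sum(conducted) / sum(required)

# Outcome 2: Testing Consistency = months meeting 100% of the target / total months.
testing_consistency = sum(c >= r for c, r in zip(conducted, required)) / len(required)

# Classify against the study's cross-over points (70% and 50%).
high_on_targets = targets_achieved >= 0.70
high_on_consistency = testing_consistency >= 0.50
```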
Fig. 3
Selection of institutional conditions for fsQCA. Of the 27 institutional parameters, or conditions, that comprise the WaterCaRD scoring framework, we excluded six conditions due to unavailability of data and two due to insufficient variation; four conditions were combined or better represented by other conditions. Finally, we excluded seven conditions due to a low association with the outcomes (Spearman's correlation coefficient rs < 0.2, p > 0.3); these p-values are presented for Targets Achieved (outcome 1) and Testing Consistency (outcome 2).
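The screening step in this caption (excluding conditions weakly associated with the outcomes) can be sketched with a small Spearman rank correlation, written out here without external libraries. The condition and outcome scores below are hypothetical, and the 0.2 threshold is the study's reported cutoff:

```python
def ranks(xs):
    # Average 1-based ranks, handling ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rs = Pearson correlation of the rank-transformed values.
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

condition_scores = [0.2, 0.5, 0.6, 0.9, 0.4]  # hypothetical WaterCaRD ratings
outcome_scores   = [0.3, 0.4, 0.7, 0.8, 0.5]  # hypothetical Targets Achieved values
rs = spearman(condition_scores, outcome_scores)
keep_condition = rs >= 0.2  # conditions below the threshold were excluded
```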
Fig. 4
Graphical displays of water quality monitoring performance. Each graph depicts the monthly monitoring levels achieved by a MfSW collaborating institution between when it entered the MfSW program (July 2013 at the earliest) and December 2014. Months with no data precede the institution's entry into the MfSW program. Performance levels, shown as a solid black line, were calculated as the percentage of microbial testing targets met monthly. Testing targets and program start dates differed among institutions. We also indicate two aggregate performance metrics: Targets Achieved, the overall percent of targets met throughout the program duration (outcome 1, solid bar), and Testing Consistency, the percentage of months during which targets were met (outcome 2, striped bar). Surveillance agencies are indicated with a white background, and piped water suppliers with a gray background. The graphs are ordered according to monitoring performance as measured by Targets Achieved (solid bar), from the lowest performer in the top left position to the highest performer in the top right position.
Fig. 5
Pathways, or combinations of conditions, that promote high monitoring performance, as identified using fsQCA. Monitoring performance was measured using two metrics: Targets Achieved (outcome 1) and Testing Consistency (outcome 2). The pathways present the necessary conditions in shaded boxes. The cases in each pathway are listed in bold to the right of each pathway. Consistency measures the proportion of cases with the condition(s) that exhibit the outcome of interest. Coverage measures the proportion of cases exhibiting the outcome that can be explained by the condition(s) or pathways presented. Unique coverage was 7% for the first two pathways and 8% for the third pathway; nine cases were in multiple pathways.
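The consistency and coverage measures described in this caption have standard fuzzy-set forms (sums of minimum memberships, rather than the crisp case counts of the verbal definition). A minimal sketch, with hypothetical membership scores for five institutions:

```python
# Standard fsQCA measures for a pathway X and outcome Y over fuzzy memberships:
#   consistency(X -> Y) = sum(min(x, y)) / sum(x)
#   coverage(X -> Y)    = sum(min(x, y)) / sum(y)

def consistency(x, y):
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Hypothetical pathway and outcome membership scores for five institutions.
pathway = [0.9, 0.8, 0.3, 0.7, 0.2]
outcome = [1.0, 0.7, 0.4, 0.6, 0.1]

cons = consistency(pathway, outcome)
cov = coverage(pathway, outcome)
```

A high consistency indicates that cases in the pathway reliably exhibit the outcome; a high coverage indicates that the pathway accounts for most cases exhibiting the outcome.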

