
STEP: toward a semantics-aware framework for monitoring community-scale infrastructure

Published online by Cambridge University Press:  20 December 2024

Andrew Chio*
Affiliation:
Department of Computer Science, University of California, Irvine, California, USA
Jian Peng
Affiliation:
Orange County Public Works, Orange, California, USA
Nalini Venkatasubramanian
Affiliation:
Department of Computer Science, University of California, Irvine, California, USA
*
Corresponding author: Andrew Chio; Email: achio@uci.edu

Abstract

Urban communities rely on built utility infrastructures as critical lifelines that provide essential services such as water, gas, and power to sustain modern socioeconomic systems. These infrastructures consist of underground and surface-level assets that are operated and geo-distributed over large regions where continuous monitoring for anomalies is required but challenging to implement. This article addresses the problem of deploying heterogeneous Internet of Things sensors in these networks to support future decision-support tasks, for example, anomaly detection, source identification, and mitigation. We use stormwater as a driving use case; these systems are responsible for drainage and flood control, but act as conduits that can carry contaminants to the receiving waters. Challenges toward effective monitoring include the transient and random nature of the pollution incidents, the scarcity of historical data, the complexity of the system, and technological limitations for real-time monitoring. We design a SemanTics-aware sEnsor Placement framework (STEP) to capture pollution incidents using structural, behavioral, and semantic aspects of the infrastructure. We leverage historical data to inform our system with new, credible instances of potential anomalies. Several key topological and empirical network properties are used in proposing candidate deployments that optimize the balance between multiple objectives. We also explore the quality of anomaly representation in the network through new perspectives, and provide techniques to enhance the realism of the anomalies considered in a network. We evaluate STEP on six real-world stormwater networks in Southern California, USA, demonstrating its efficacy in monitoring areas of interest compared to baseline methods.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-ShareAlike licence (http://creativecommons.org/licenses/by-sa/4.0), which permits re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Impact Statement

Built utility infrastructures are strained and fragile lifelines, and their resilient operation under anomalous events is critical. This article studies the problem of deploying heterogeneous Internet of Things sensors for geo-distributed infrastructures, with stormwater as a driving use case. These networks are responsible for transporting nuisance flows out of urban communities, but may also transport unwanted pollutants and contaminants, leading to negative ecological impacts downstream. This article presents a SemanTics-aware sEnsor Placement (STEP) framework for deploying sensors in a stormwater network to capture potential anomalies (i.e., pollutants, abnormal flows) in the network. We examine techniques to balance several objectives, including network coverage and anomaly traceability, while considering practical limitations. STEP is evaluated on six real-world stormwater networks in Southern California, USA, where we show its efficacy in capturing anomalies of interest compared to other baseline methods.

1. Introduction

Utilities such as water, gas, and power are critical lifelines that help sustain the modern socioeconomic systems on which urban society depends. The built infrastructures that provide these essential services are engineered systems consisting of large, geo-distributed underground and surface-level assets that are managed by municipal agencies and service providers. Factors such as urban growth, climate change, and aging have given rise to multiple modes of failure, which can occur from anomalies of continuous, transient, and/or sporadic nature (ASCE, 2021; Semadeni-Davies et al., 2008; Masoner et al., 2019). As a result, such built infrastructures need to be monitored to understand their daily operations and respond to problems. Smart city initiatives have shown promise in leveraging the Internet of Things (IoT) to enable next-generation smart monitoring solutions. In this article, we aim to design suitable IoT sensor placements to detect and trace anomalies for stormwater community lifelines. Our solution models relevant system dynamics, including the network topology and the physics driving flow propagation, which are used to optimize and refine the proposed sensor deployments.

Stormwater networks, also referred to as municipal separate storm sewer systems (MS4s), are vital infrastructures that transport rainwater and other nuisance flows (e.g., excess irrigation, groundwater seepage) out of urban cities and communities to prevent flooding. These flows are usually first captured by catch basins along roadways before flowing through an underground network of storm drainpipes. They then enter surface channels of increasing sizes through outfalls before reaching receiving waters, such as lakes, bays, and coastal oceans. One major issue faced by these networks is the downstream conveyance of harmful pollutants (e.g., pesticides, oils, and greases), which are oftentimes generated from urban areas through various human activities such as industrial operations, household and commercial application of fertilizer and pesticides, broken pipes, over-irrigation, and outdoor washing. This can result in negative water quality impairments in the receiving waters, such as increased levels of harmful bacteria, algal blooms, and ecotoxicity (Barbosa et al., 2012; Skinner et al., 1999). To address this issue, regulations such as the amendment to the US Clean Water Act in 1987 (Copeland, 1999) restrict the discharge of pollutants into MS4s (e.g., illegal dumping, unauthorized connections to the MS4, etc.) without a stormwater permit. However, effective enforcement remains challenging due to the random nature of these discharges, the sheer number of potential entry points (e.g., thousands of catch basins), and the complex and diffuse nature of networks (Sage et al., 2017). The rapid and efficient detection of anomalies and/or large deviations in water quality is crucial for appropriate prevention, mitigation, and enforcement.

Current state-of-the-art approaches for monitoring stormwater networks consist of citizen reports, manual site visits, and human grab sampling (Barbosa et al., 2012; Bernstein et al., 2009; Smith, 2002). This is generally a very time-consuming and ineffective process that fails to capture potential anomalies spatially or temporally, and relies heavily on inspections occurring at the right location and time. Most jurisdictions are ill-equipped to sufficiently monitor their stormwater networks on a wide scale. For example, Orange County Public Works (OCPW) conducts five annual site visits for 30 selected storm drain outfalls, where staff spend around 1 hour at each site for observation, testing, and sampling. Moreover, this requires measuring the water quality using test kits and laboratory analysis, which typically has a 3–5 week turnaround time, resulting in further delays for corrective actions and treatment. Thus, current methods fail to provide effective coverage and timely detection of potential issues.

With recent advances in communication and sensing capabilities, the IoT has shown promise in enabling new, low-cost solutions for smart monitoring (Al-Hader & Rodzi, 2009). In this context, a fundamental problem is to optimally deploy IoT sensors to quickly and efficiently capture issues that arise. Several challenges exist in designing a suitable IoT deployment for the network. Pollutants and contaminant anomalies introduced into the system are inherently heterogeneous and transient in nature: the chemical properties they exhibit are manifested through various phenomena (e.g., turbidity, pH, temperature, etc.) and the timescales over which detection is possible can be limited (e.g., due to dilution and degradation effects). Moreover, studying the evolution and impact of anomalies and their root causes using grab samples is imprecise and difficult. Water quality sensors can also miss anomalies that produce different phenomena than the one they are designed to observe. Deployment solutions must also consider how captured data can facilitate future decision-support tasks (e.g., the detection of anomalies and the identification of their likely sources), as well as the limitations posed by external factors and environmental conditions (e.g., communication, vandalism, weather).

Despite these challenges, several aspects of the infrastructure can help provide insight into suitable sensor deployments to monitor these anomalies. First, the network structure details physical aspects of the stormwater infrastructure (e.g., catch basins, pipe connections, etc.) and their topology, which can be used to identify candidate deployment locations through properties such as network centrality and distance. Next, the behavior of a network, which captures the response of the system to various stimuli (e.g., flows and contaminants injected at a catch basin) and their propagation through the network, can provide viable locations where measurements can be made. A third factor exploits the observation that specific land uses in a community (e.g., residential areas, commercial spaces, industrial plants, restaurants, etc.) are more likely to generate particular types of anomalies. These anomalies from different land uses can vary in frequency, type, and chemical makeup. In our work, we refer to this as community-level semantics, which provide insight into the impact of community structures and the scale of investments necessary for developing effective sensor deployments for large utility networks.

In this article, we extend the SemanTics-aware sEnsor Placement framework (STEP; Chio et al., 2023) to more effectively capture and represent anomalies contained within real-world stormwater infrastructures. The STEP framework relies on exploiting structural, behavioral, and semantic aspects of an infrastructure to gain insight toward the design of IoT sensor deployments. Our approach first aims to generate a set of realistic anomalies by profiling historical grab sample data and their correlation to nearby community-level semantics. We augment this methodology to additionally leverage the network topology and behavior for simulating new, credible anomalies. Several topological and empirical network properties are subsequently derived using the network structure and running physics-based simulations. Our optimization approach then considers candidate deployments to optimize node coverage and anomaly traceability. To assess the quality of representation for the set of anomalies used in STEP, we propose new metrics that examine this through perspectives of structure, behavior, and semantics. STEP also provides a toolkit for the exploration and refinement of placement solutions to ensure feasibility in real settings. An evaluation of STEP is conducted using six real-world stormwater networks located in Southern California, USA. The key contributions of this article include the following.

  • A novel STEP framework for IoT sensor placement that leverages network structure, behavior, and community semantics. We formulate a model to capture key objectives of node coverage and anomaly traceability in placement solutions.

  • A divide-and-conquer algorithm utilizing graph partitioning and MILP optimization for identifying idealized deployments (types and locations) for heterogeneous sensors.

  • We propose a methodology to generate new, credible instances of anomalies for the network using historical data, network topology and behavior, and the semantics of community land uses. New metrics to assess the quality of anomaly representation, and a new method for augmenting a set of anomalies to be more representative are also presented.

  • We publish the STEP toolkit, which includes a dashboard that domain experts and practitioners can use to visualize a stormwater network model, apply customized constraints, and do what-if analysis to optimize and refine the proposed sensor deployment.

  • We conduct detailed simulations to evaluate the STEP approach using six real-world stormwater networks provided by domain experts, and demonstrate the efficacy of using STEP over other baseline methods for placement.

The rest of this article is organized as follows. In Section 2, we review related work in tracing anomalies, and present the high-level STEP overview. Section 3 then formulates a model for network structure and behavior, and defines the key objectives and network properties considered for the IoT placement. The methodology used to generate credible instances of anomalies is detailed in Section 4, and it is utilized to inform the integrated placement solution in Section 6. We then present new metrics to assess the quality of representation of anomalies in Section 5. Experiments to validate the benefits of the STEP approach are presented in Section 7, and the STEP prototype and its applicability to a real-world deployment are discussed in Section 8. Finally, conclusions and future directions are presented in Section 9.

2. The STEP approach: sensor placement to trace anomalies in infrastructure networks

This section examines state-of-the-art techniques for isolating anomalies in community-scale infrastructures. Our focus use case studies the stormwater setting, where heterogeneous anomalies (i.e., pollution incidents) may occur sporadically in the network. The propagation of these anomalies depends on several physical network attributes (e.g., pipe shape, width) and on the anomaly itself (e.g., flow quantity). This makes anomalies difficult to capture: their transient presence in the network requires using the appropriate types of sensors at optimized locations. The goal of this work is to propose a sensor placement technique that selects the optimal types and locations for sensors to best enable monitoring capabilities.

2.1. Related works and limitations

We first review related literature on effectively tracing anomalies, covering the role of sensor placement, key considerations in designing suitable deployment solutions, and the issue of anomaly detection.

2.1.1. Sensor placement

The sensor placement problem has been studied for various community-scale infrastructures, including smart buildings (Yoganathan et al., 2018), transportation systems (Bagula et al., 2015), and healthcare facilities (Alarifi et al., 2019). The general goal is to find ideal locations to deploy sensors for effective and efficient monitoring. In the drinking water domain, key objectives for placement include: (i) contaminant detection time (Kumar et al., 1999; Hu et al., 2015), which is the time taken for deployed sensors to observe an anomaly; (ii) population impact (Berry et al., 2006; Schwartz et al., 2014b; Venkateswaran et al., 2018), which is the total population affected by an anomaly until its treatment; and (iii) coverage (Krause & Guestrin, 2009; Shahra & Wu, 2023), which is the proportion of the network monitored by sensors. Initial efforts (Berry et al., 2005, 2006) exploited mixed-integer programming (MIP) formulations to minimize expected contaminant impacts on populations using a limited number of sensors. This was extended in Berry et al. (2009) and Watson et al. (2009) to address the issue of robustness using imperfect binary sensors that produce false negatives. However, MIP approaches are often very computationally expensive, and thus tend to scale poorly for larger networks and more complex anomalies.
This led to the rise of techniques using greedy heuristics (Krause et al., 2008; Das & Udgata, 2021), which exploit the submodular nature of the placement objective, and genetic algorithms (Kumar et al., 1999; Hu et al., 2015; Schwartz et al., 2014b; Shahra & Wu, 2023), which are inspired by the process of natural selection. While these approaches are more time and compute efficient, they can fail to provide optimal solutions and have limited applicability for handling complex constraints (e.g., from diverse anomalies). Other efforts have cast the sensor placement problem as a multi-criteria objective (Krause et al., 2008; Preis & Ostfeld, 2008) to find Pareto-optimal solutions. The role of topology, specifically in identifying impacts to community structures (Venkateswaran et al., 2018), and bipartite graph reformulations with instrumented edges (Das & Udgata, 2021) have also been explored. More recently, parallel architectures have been proposed to reduce run times (Krause et al., 2008; Ciaponi et al., 2019), and maximum-cover techniques to improve observations from imperfect sensors are revisited in de Winter et al. (2019).
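The greedy heuristics cited above exploit the diminishing-returns (submodular) structure of coverage-style objectives: each added sensor site helps less once its anomalies are already observed. A minimal sketch of that idea follows; the names (`greedy_placement`, `anomalies_detected_by`, `budget`) and the toy data are illustrative and not taken from the paper or any cited work.

```python
def greedy_placement(candidates, anomalies_detected_by, budget):
    """Pick up to `budget` sites, each step adding the candidate that
    newly detects the most anomalies (classic greedy for submodular cover)."""
    chosen, covered = [], set()
    for _ in range(budget):
        best, best_gain = None, 0
        for site in candidates:
            gain = len(anomalies_detected_by[site] - covered)
            if gain > best_gain:
                best, best_gain = site, gain
        if best is None:  # no remaining site adds any new coverage
            break
        chosen.append(best)
        covered |= anomalies_detected_by[best]
    return chosen, covered

# toy input: which anomaly IDs each candidate site can observe downstream
detects = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {5},
    "D": {1, 2},
}
sites, seen = greedy_placement(list(detects), detects, budget=2)
```

For submodular coverage objectives, this greedy scheme carries the well-known (1 − 1/e) approximation guarantee, which is what makes it an attractive alternative to exact MIP formulations at scale.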

2.1.2. Modeling anomalies and their properties

An important part of designing effective sensor deployments stems from ensuring that the anomalies considered are realistic and/or credible with respect to the infrastructure network. Early work in the drinking water domain mainly considered simple representations for anomalies. Spatially, these anomalies are distributed randomly or uniformly throughout a network, such as in Shastri and Diwekar (2006), Krause and Guestrin (2009), Aral et al. (2010), and Hu et al. (2015); and temporally, they assume continuous discharge (Dorini et al., 2010) or follow parameterized statistical distributions, for example, Poisson (Yan et al., 2021; McKenna et al., 2006). This can lead to unrealistic depictions of the operation of the network: efforts to trace an anomaly may become focused on unlikely scenarios and reduce the overall capability of a monitoring solution. In this regard, historical data has been shown to help inform the understanding of the state and operation of a community-level infrastructure (Mounce et al., 2003; Soldevila et al., 2018). However, such information is very scarce and limited, making it difficult to study at the larger scales typical of built infrastructures. Many existing methods also formulate anomalies as a homogeneous entity that can be perfectly detected as it propagates in the network (Berry et al., 2005; Casillas et al., 2013; Santos-Ruiz et al., 2022).
This abstraction, while providing a simpler problem statement, can disregard many practical concerns. In real-world cases, the types of phenomena produced can vary greatly depending on the type of anomaly, as studied previously with specific water quality parameters (Skinner et al., 1999; Schwartz et al., 2014b; Festus Biosengazeh et al., 2020). The capabilities of sensors are also often assumed to be homogeneous, that is, all sensors can detect all types of contaminants; however, we note that factors such as the type, cost, and accuracy of sensors can affect the quality of a resulting deployment solution. This illustrates the need for credible anomalies in designing effective solutions for tracing anomalies.

2.1.3. Anomaly detection

The main considerations for the detection of anomalies after sensor instrumentation include establishing baselines for “normal” behavior, and examining thresholds between anomalies and noise/errors. Anomaly detection approaches have typically relied upon statistical tests, time-series analysis, and machine-learning (ML) models. Most statistical tests, for example, ANOVA (Festus Biosengazeh et al., 2020), Grubbs’ test (Cohn et al., 2013), and PCA (Branisavljević et al., 2011), provide criteria to score data and determine outliers. For time-series data, methods such as ARIMA (Blanch et al., 2009; Arumugam & Saranya, 2018; Barrientos-Torres et al., 2023) are used, which predict patterns and trends in data using moving averages and forecasting. In general, these techniques rely on certain assumptions (e.g., normality, stationarity) and require careful parameter tuning and human interpretation, which makes them difficult to apply at scale. As a result, ML models have become popular, since they provide a framework that trains on historical data and applies the learned patterns to automate the detection of anomalies. Conventional models leverage decision trees (Jalal & Ezzedine, 2020), support vector machines (Candelieri, 2017), clustering techniques (Li et al., 2021), and Bayesian networks (Ding et al., 2018). However, these models are largely constrained by the need for cleaned/ideal data for training.
In particular, skewed distributions of anomalous and normal data, missing observations, and changes in the underlying distribution of data values (e.g., from environmental factors) can result in high false positive or false negative detection rates. Over the past few years, deep learning approaches (Qian et al., 2020; Li et al., 2019) have gained much interest for anomaly detection, due to their ability to learn very complex anomaly patterns, enabling them to outperform many traditional models. While they remain promising for future efforts, such models are very memory and compute intensive, and are difficult to tune properly.
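To make the threshold-based family of methods above concrete, the following is a crude moving-window residual test in the spirit of the surveyed statistical approaches: a reading is flagged when it deviates from the recent baseline by more than a fixed number of standard deviations. The window size, threshold, and sample readings are arbitrary illustrations, not values from the article or any cited method.

```python
from statistics import mean, stdev

def flag_anomalies(series, window=5, z_thresh=3.0):
    """Flag indices whose value deviates more than z_thresh standard
    deviations from the preceding window's mean (the 'normal' baseline)."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > z_thresh:
            flags.append(i)
    return flags

# toy pH-like readings with one injected spike
readings = [7.0, 7.1, 6.9, 7.0, 7.1, 7.0, 9.5, 7.0, 7.1]
spikes = flag_anomalies(readings)
```

The same structure, with the baseline replaced by an ARIMA forecast or a learned model, underlies most of the time-series detectors cited above; the practical difficulty noted in the text lies in choosing the baseline and threshold robustly under non-stationary field conditions.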

2.2. The STEP approach and architecture

Incorporating appropriate types of sensors at meaningful locations to monitor diverse, realistic anomalies is essential in facilitating next-generation decision-support for domain experts and practitioners. To this end, this article proposes the STEP approach for heterogeneous sensor placement in community-scale infrastructures. Multiple aspects of a network—structural, behavioral, and semantic—are coalesced to provide insight into suitable deployments. Our methodology leverages the network structure to understand the positioning of nodes and their interconnections, which is then used to study properties such as node centrality and distance. The behavior, that is, the operation of the network, is driven by physics-based principles of flow, and examines the response to various stimuli, such as the propagation of contaminants through nodes. We then use “semantics” to associate specific types and patterns of discharge, that is, their frequency, type, and environmental impact, with their likelihood of occurrence when specific community infrastructures lie upstream, for example, an industrial plant. We refer to these semantic land uses as a way to capture the interactions between people within communities and land. In this work, we aim to determine an optimal placement of sensors (including type and location) to perform effective monitoring of the infrastructure. Since budget limits of agencies dictate the extent of sensing and communication, we note that deployment solutions must balance competing tradeoffs between cost and multiple desired objectives, including coverage and traceability.

The high-level architecture of STEP is presented in Figure 1. In this article, we assume that domain experts can provide details of the underlying network structure and associated historical data a priori. Community-level land uses, that is, the manner in which people interact with the land surrounding the network, can be obtained through domain knowledge and/or public datasets, such as OpenStreetMap (OpenStreetMap contributors, 2017). Our approach partitions the sensor placement into three phases. First, an offline community-learning or preprocessing phase extracts the structure and behavior of the network from topological properties of network graph data. We apply learning techniques on the provided historical data, and use both proximity to anomaly sources and similarity of semantic land uses to generate new sets of credible anomalies for an infrastructure network. This refines the set of input anomalies for sensor placement by capturing the most impactful and realistic anomalies, which can help in producing more informed decisions about the location and type of sensors to deploy. STEP leverages topological and empirical network properties to construct an initial candidate sensor deployment. Domain expert-developed simulations are used to explore the propagation of anomalies, and enable interactive refinement of the proposed solution that incorporates feedback from domain experts. The STEP prototype toolkit is available on GitHub (https://github.com/andrewgchio/STEP, 2023).

Figure 1. STEP components and workflow.

3. Modeling infrastructure networks

The primary elements of an infrastructure network comprise its physical components and its associated community features. We look to deploy heterogeneous sensors whose goal is to efficiently detect and trace anomalies introduced into the system. To this end, we utilize network science-based principles to model the structural properties of the underlying infrastructure, and a physics-based approach to capture its behavior in response to artificially generated pollutant incidents or anomalies. The developed models will then be used in the formulation of the heterogeneous sensor placement problem, which determines the optimal type and location of the sensors to deploy.

3.1. Elements of an infrastructure network

We first introduce the key components of an infrastructure network, its community model, and define the heterogeneous sensors and transient anomalies considered.

3.1.1. Infrastructure network model

We model the geo-distributed infrastructure network as a directed acyclic graph $ \mathcal{G}=\left(\mathcal{V},\mathrm{\mathcal{E}}\right) $ , where nodes $ {v}_j\in \mathcal{V} $ are junctions at which sensors could be instrumented, and directed edges $ \left({v}_i,{v}_j\right)\in \mathrm{\mathcal{E}} $ are pipes or conduits through which flow moves from node $ {v}_i $ to node $ {v}_j $ . Fundamentally, each node $ {v}_j $ is located at an xy-coordinate $ \left({x}_j,{y}_j\right) $ , at an elevation $ {z}_j $ , while each edge $ \left({v}_i,{v}_j\right) $ has length $ {L}_{ij} $ , cross-sectional area $ {A}_{ij} $ , and frictional pipe roughness $ {f}_{ij} $ . We note that these physical node and edge attributes will be used later to model the physics driving the propagation of flow and contaminants. We then define $ pa\left({v}_j\right) $ as the direct parents of $ {v}_j $ in the graph $ \mathcal{G} $ , and $ path\left({v}_i,{v}_j\right) $ as the path of nodes through which flow from $ {v}_i $ to $ {v}_j $ would be observed.
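The graph model above can be sketched directly with plain data structures: nodes carry coordinates and elevation, edges carry length, cross-sectional area, and roughness, and $pa(\cdot)$ and $path(\cdot,\cdot)$ follow their textual definitions. The node names and attribute values are invented for illustration.

```python
# Nodes v_j -> (x_j, y_j, z_j); edges (v_i, v_j) -> {L_ij, A_ij, f_ij}
nodes = {
    "v1": (0.0, 0.0, 12.0),
    "v2": (1.0, 0.0, 10.0),
    "v3": (1.0, 1.0, 9.0),
    "v4": (2.0, 0.5, 5.0),
}
edges = {
    ("v1", "v2"): {"L": 120.0, "A": 0.8, "f": 0.013},
    ("v2", "v4"): {"L": 90.0,  "A": 1.1, "f": 0.013},
    ("v3", "v4"): {"L": 75.0,  "A": 0.6, "f": 0.015},
}

def pa(vj):
    """Direct parents of vj in the DAG."""
    return {vi for (vi, vk) in edges if vk == vj}

def path(vi, vj):
    """Nodes along a directed path from vi to vj ([] if none exists)."""
    if vi == vj:
        return [vi]
    for (a, b) in edges:
        if a == vi:
            rest = path(b, vj)
            if rest:
                return [vi] + rest
    return []
```

In a tree-like drainage network each node typically has a single downstream path, so this depth-first search recovers the unique flow route used later when computing propagation times.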

3.1.2. Community model

We introduce the concept of semantic land uses, denoted as $ {u}_m\in \mathcal{U} $ , to describe the types of community structures that surround different regions of the network. A semantic land use $ {u}_m $ expresses intuitively how citizens utilize and interact with the land, and is comprised of labels such as “residential,” “industrial,” and “commercial,” among others. In this work, we rely on the observation that these land uses are correlated with the locations and types of anomalies they tend to introduce into the network, for example, “industrial” areas are more likely to release industrial chemicals than “residential” areas. For each node $ {v}_j $ in the network, let $ Area\left({v}_j,{u}_m\right) $ correspond to the area of the region near $ {v}_j $ with semantic land use $ {u}_m $ . Each land use $ {u}_m $ is also assigned a priority level $ {\lambda}_m $ from a domain expert that reflects the relative importance of monitoring anomalies generated by $ {u}_m $ .
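A toy encoding of these community features: $Area(v_j, u_m)$ as a nested mapping and the expert-assigned priority $\lambda_m$ per land use. The area figures and priority levels are invented for illustration.

```python
# Area(v_j, u_m): area (e.g., square meters) near node v_j with land use u_m
area = {
    "v1": {"industrial": 4200.0, "residential": 800.0},
    "v2": {"residential": 3100.0},
}
# lambda_m: domain-expert priority per land use (higher = more important)
priority = {"industrial": 3, "commercial": 2, "residential": 1}

def dominant_land_use(vj):
    """Land use occupying the largest area near node vj (None if unknown)."""
    uses = area.get(vj, {})
    return max(uses, key=uses.get) if uses else None
```

Weighting anomaly likelihoods by both area share and priority is one natural way such a table could feed the placement objective.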

3.1.3. Sensor model

Let sensor $ {s}_l\in \mathcal{S} $ be defined as a three-tuple $ \left({p}_l,{\varepsilon}_l^{acc},{c}_l\right) $ , which describes a sensor that produces observations with some Gaussian error $ {\varepsilon}_l^{acc} $ in measuring a phenomenon $ {p}_l $ . Its cost $ {c}_l $ includes purchasing the hardware, deploying it in the field, and maintaining it over time. We let $ \mathcal{S}(p) $ denote the set of sensors that measure a phenomenon $ p $ .

3.1.4. Anomaly model

Transient anomalies $ {\alpha}_k\in \mathcal{A} $ are defined using the five-tuple $ \left({v}_k^{\ast },{t}_k^s,{t}_k^e,{\mathcal{P}}_k,{u}_k\right) $ , which denotes an anomaly that originates at node $ {v}_k^{\ast } $ over the time period $ \left({t}_k^s,{t}_k^e\right) $ . The anomaly produces the set of phenomena $ {\mathcal{P}}_k $ , and has correlations to the land use $ {u}_k $ . We let $ \mathcal{A}\left({v}_i\right) $ be the set of anomalies whose origin node is $ {v}_i $ . The propagation time $ time\left({\alpha}_k,{v}_j\right) $ represents the time taken for $ {\alpha}_k $ to reach a downstream node $ {v}_j $ , and is set to $ \infty $ if flow cannot reach $ {v}_j $ .
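The sensor three-tuple and anomaly five-tuple above translate directly into small record types; field names mirror the notation ($p_l$, $\varepsilon_l^{acc}$, $c_l$) and ($v_k^{\ast}$, $t_k^s$, $t_k^e$, $\mathcal{P}_k$, $u_k$). Storing propagation times in a per-anomaly mapping, with $\infty$ for unreachable nodes, is one simple encoding of $time(\alpha_k, v_j)$; all concrete values are illustrative.

```python
from dataclasses import dataclass, field
import math

@dataclass
class Sensor:
    phenomenon: str   # p_l: the phenomenon measured (e.g., "turbidity")
    eps_acc: float    # Gaussian measurement error
    cost: float       # hardware + deployment + maintenance

@dataclass
class Anomaly:
    origin: str       # v_k*: origin node
    t_start: float    # t_k^s
    t_end: float      # t_k^e
    phenomena: set    # P_k: phenomena the anomaly produces
    land_use: str     # u_k: correlated semantic land use
    prop_time: dict = field(default_factory=dict)  # v_j -> time(alpha_k, v_j)

    def time_to(self, vj):
        """Propagation time to vj; infinity if flow cannot reach it."""
        return self.prop_time.get(vj, math.inf)

s = Sensor("turbidity", eps_acc=0.05, cost=1200.0)
a = Anomaly("v1", 0.0, 30.0, {"turbidity", "pH"}, "industrial",
            prop_time={"v2": 8.0, "v4": 21.0})
```

A sensor $s_l$ can observe anomaly $\alpha_k$ only when `s.phenomenon in a.phenomena`, the condition the coverage objective below builds on.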

3.2. Sensor placement objectives and network properties

We next formalize the key objectives in designing a heterogeneous sensor deployment, and define several related topological and empirical network properties for effectively tracing anomalies.

3.2.1. Definition: placement

A candidate placement $ \mathcal{X} $ is a matrix whose entry $ {x}_{lj} $ is $ 1 $ if a sensor $ {s}_l $ is deployed at node $ {v}_j $ , and $ 0 $ otherwise.

3.2.2. Objective: coverage

Typical definitions for network coverage are purely structural, that is, the nodes that fall inside a sensor’s range. In contrast, we argue that coverage should be defined with respect to its ability to capture a set of anomalies in the network. To this end, we formalize the node coverage objective $ COV $ in Equation (1). This objective aims to maximize the total proportion of nodes that can be effectively monitored for anomalies by a placement $ \mathcal{X} $ , as shown in Equation (1a). We define a node $ {v}_i $ to be covered by $ \mathcal{X} $ in Equation (1b), which checks whether at least $ \rho $ % of the anomalies originating at $ {v}_i $ can be quickly detected by the downstream sensors in $ \mathcal{X} $ . The notion of “quickly detecting anomalies” is expressed through $ OB\left(l,k\right) $ and $ PT\left(k,j\right) $ : a downstream sensor $ {s}_l $ must observe anomaly $ {\alpha}_k $ (i.e., $ {p}_l\in {\mathcal{P}}_k $ ), and must do so within $ \tau $ time (i.e., $ time\left({\alpha}_k,{v}_k^{\ast },{v}_j\right)\le \tau $ ). Note that node $ {v}_j $ must lie downstream of $ {v}_k^{\ast } $ in $ \mathcal{G} $ for $ time\left({\alpha}_k,{v}_k^{\ast },{v}_j\right) $ to be bounded by $ \tau $ . The indicator function “ $ 1\left[ stmt\right] $ ” denotes an expression that is $ 1 $ if the statement $ stmt $ is true, and $ 0 $ otherwise.

(1a) $$ COV\left(\mathcal{X},\mathcal{A},\mathcal{G}\right)=\frac{1}{\mid \mathcal{V}\mid}\sum \limits_{v_i\in \mathcal{V}} covered\left({v}_i,\mathcal{X},\mathcal{A}\left({v}_i\right)\right) $$
(1b) $$ covered\left({v}_i,\mathcal{X},\mathcal{A}\left({v}_i\right)\right)=1\left[\sum \limits_{\alpha_k\in \mathcal{A}\left({v}_i\right)}\sum \limits_{v_j\in \mathcal{V}}\sum \limits_{s_l\in \mathcal{S}}{x}_{lj} OB\left(l,k\right) PT\left(k,j\right)\ge \rho \left|\mathcal{A}\left({v}_i\right)\right|\right] $$
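A direct reading of Equation (1) can be sketched as follows, counting each anomaly once if any deployed downstream sensor observes it in time (a slight simplification of the triple sum in Equation (1b)); all function and argument names are illustrative.

```python
def covered(v_i, placement, anoms, prop_time, tau, rho):
    """Eq (1b), counting each anomaly once if any deployed downstream
    sensor observes it within tau. placement[v_j] -- the set of
    phenomena sensed at node v_j."""
    if not anoms:
        return 1  # vacuously covered: no anomalies originate at v_i
    detected = sum(
        any(p in a["phenomena"] and prop_time(a, v_j) <= tau
            for v_j, phens in placement.items() for p in phens)
        for a in anoms
    )
    return 1 if detected >= rho * len(anoms) else 0

def coverage(nodes, placement, anoms_by_node, prop_time, tau, rho):
    """Eq (1a): the fraction of nodes covered by the placement."""
    return sum(
        covered(v, placement, anoms_by_node.get(v, []), prop_time, tau, rho)
        for v in nodes
    ) / len(nodes)
```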

3.2.3. Objective: traceability

We next define the concept of traceability to describe the degree to which sensor observations can help track the origin of an anomaly. We rely on the intuition that when a sensor captures an anomaly, its origin must lie upstream. Let $ {\mathcal{V}}_{\mathcal{X}} $ denote the set of nodes at which the placement $ \mathcal{X} $ deploys at least one sensor. Then, define $ {\mathcal{V}}_{v_j,\mathcal{X}}^{up} $ as the subset of nodes that lie upstream of $ {v}_j\in {\mathcal{V}}_{\mathcal{X}} $ and hold no sensor of their own, so that an anomaly originating at one of them would first be observed at $ {v}_j $ . The traceability $ TR $ of a placement $ \mathcal{X} $ is then the proportion of network nodes that fall in $ {\mathcal{V}}_{v_j,\mathcal{X}}^{up} $ , averaged over the nodes $ {v}_j\in {\mathcal{V}}_{\mathcal{X}} $ . We formalize this objective in Equation (2) as follows.

(2) $$ {\displaystyle \begin{array}{c} TR\left(\mathcal{X},\mathcal{G}\right)=\frac{1}{\left|{\mathcal{V}}_{\mathcal{X}}\right|}\cdot \sum \limits_{v_j\in {\mathcal{V}}_{\mathcal{X}}}\left|{\mathcal{V}}_{v_j,\mathcal{X}}^{up}\right|/\left|\mathcal{V}\right|\\ {}{\mathcal{V}}_{\mathcal{X}}=\left\{{v}_j\in \mathcal{V}:{\sum}_{s_l\in \mathcal{S}}{x}_{lj}\ge 1\right\}\\ {}{\mathcal{V}}_{v_j,\mathcal{X}}^{up}=\left\{{v}_i\in \mathcal{V}:\exists \hskip0.3em path\left({v}_i,{v}_j\right)\wedge {\sum}_{s_l\in \mathcal{S}}{x}_{li}=0\right\}\end{array}} $$
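Equation (2) can be computed directly from precomputed upstream sets; the sketch below follows the set definitions literally and uses illustrative names.

```python
def traceability(nodes, upstream, sensor_nodes):
    """Eq (2): for each sensorized node v_j, the fraction of network
    nodes that lie upstream of v_j and hold no sensor themselves,
    averaged over all sensorized nodes V_X.
    upstream[v] -- set of nodes with a path into v (excluding v)."""
    if not sensor_nodes:
        return 0.0
    fracs = [
        len([v for v in upstream[v_j] if v not in sensor_nodes]) / len(nodes)
        for v_j in sensor_nodes
    ]
    return sum(fracs) / len(sensor_nodes)
```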

3.2.4. Property: betweenness centrality

Nodes that lie on the path taken by an anomaly as it propagates downstream are natural deployment candidates. Let the betweenness centrality of a node $ {v}_j $ be empirically derived by counting the number of anomalies that can be observed at $ {v}_j $ before a time threshold $ \tau $ , as detailed in Equation (3). Note that intuitively, instrumenting at nodes with high betweenness centrality values will provide better coverage in a network.

(3) $$ \mathcal{BTN}\left({v}_j,\mathcal{A}\right)=\sum \limits_{\alpha_k\in \mathcal{A}}\unicode{x1D7D9}\left[ time\left({\alpha}_k,{v}_k^{\ast },{v}_j\right)\le \tau \right] $$

3.2.5. Property: branching complexity

We quantify the complexity of branching in a network using the number of merges and splits at nodes. Networks with high branching have more junctions through which flows may combine, and thus may require more sensors for adequate monitoring. Intuitively, this property aims to capture the notion that an anomaly from an upstream “chain” of nodes is easier to trace than a more complex “tree” structure. We define the branching complexity $ \mathcal{B}\mathcal{C} $ recursively in Equation (4). Here, if a node $ {v}_j $ has no predecessors (i.e., a “root” node), then we set $ {v}_j $ ’s branching complexity to $ 1 $ . Otherwise, we find the maximum branching complexity of node $ {v}_j $ ’s parents $ pa\left({v}_j\right) $ , denoted as $ {\mathcal{BC}}_{pa\left({v}_j\right)}^{max} $ , and set the branching complexity of $ {v}_j $ to the sum of: (i) $ {\mathcal{BC}}_{pa\left({v}_j\right)}^{max} $ ; and (ii) the relative increase of the sum of the remaining (non-maximum) branching complexities as compared to $ {\mathcal{BC}}_{pa\left({v}_j\right)}^{max} $ . Thus, our definition associates lower branching complexities with simpler upstream network structures.

(4) $$ {\displaystyle \begin{array}{l}\mathcal{B}\mathcal{C}\left({v}_j\right)=\left\{\begin{array}{ll}1& \mathrm{if}\;{v}_j\;\mathrm{is}\;\mathrm{a}\;\mathrm{root}\ \mathrm{node}\\ {}{\mathcal{BC}}_{pa\left({v}_j\right)}^{max}+\sum \limits_{v_i\in pa\left({v}_j\right)}\frac{\mathcal{BC}\left({v}_i\right)}{{\mathcal{BC}}_{pa\left({v}_j\right)}^{max}}-1& \mathrm{else}\end{array}\right.\\ {}{\mathcal{BC}}_{pa\left({v}_j\right)}^{max}=\underset{v_i\in pa\left({v}_j\right)}{\max}\mathcal{B}\mathcal{C}\left({v}_i\right)\end{array}} $$
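The recursion in Equation (4) can be sketched as follows; we read the trailing $ -1 $ as cancelling the maximal parent's own ratio, consistent with the prose above. Names are illustrative.

```python
def branching_complexity(parents):
    """Eq (4): BC(root) = 1, and otherwise
    BC(v) = BC_max + sum_{p in pa(v)} BC(p)/BC_max - 1,
    where the -1 cancels the maximal parent's own ratio.
    parents[v] -- list of direct parents of node v (DAG)."""
    memo = {}
    def bc(v):
        if v not in memo:
            pa = parents.get(v, [])
            if not pa:
                memo[v] = 1.0  # root node
            else:
                vals = [bc(p) for p in pa]
                m = max(vals)
                memo[v] = m + sum(x / m for x in vals) - 1.0
        return memo[v]
    return bc
```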

3.3. A physics-based approach to model infrastructure behavior

The propagation of an anomaly through infrastructure networks is fundamentally governed by the physical laws for the conservation of mass (Equation 5a) and momentum of flow (Equation 5b). These equations involve the distance $ x $ , time $ t $ , flow area $ A $ , flow rate $ Q $ , hydraulic head $ H $ , friction slope $ {S}_f $ , and gravitational acceleration $ g $ . The EPA’s Storm Water Management Model (SWMM) (EPA, 2023) is a simulator developed by domain experts that models these equations for an entire stormwater network, and solves them to accurately portray the expected operation of the system. This is done using the dynamic wave analysis process described for EPA SWMM (EPA, 2023), which accounts for various features, including channel storage, backwater effects, entrance/exit losses, flow reversal, and pressurized flow. More details on the specific methodology can be found in EPA (2023).

(5a) $$ \frac{\partial A}{\partial t}+\frac{\partial Q}{\partial x}=0 $$
(5b) $$ \frac{\partial Q}{\partial t}+\frac{\partial \left({Q}^2/A\right)}{\partial x}+ gA\frac{\partial H}{\partial x}+{gAS}_f=0 $$
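SWMM solves the full dynamic wave equations; purely as a toy illustration of the continuity equation (5a) alone, a first-order upwind step with a kinematic closure $ Q= cA $ (an assumption made only for this sketch, not part of SWMM's method) conserves mass on a periodic channel.

```python
def step_continuity(A, c, dt, dx):
    """One explicit first-order upwind step of the continuity equation
    dA/dt + dQ/dx = 0 with the kinematic closure Q = c*A (c > 0) on a
    periodic 1-D channel. Stable for c*dt/dx <= 1 (CFL condition)."""
    Q = [c * a for a in A]
    n = len(A)
    return [A[i] - (dt / dx) * (Q[i] - Q[i - 1]) for i in range(n)]
```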

4. Modeling and generating credible anomalies

We present a novel methodology to generate new, credible instances of anomalies for an infrastructure network. By capturing the most impactful and realistic anomalies that could occur for the network, more informed deployment decisions on the location and type of sensors can be made. In our approach, we leverage historical data collected by domain experts, the topology of the network, and the semantics of land uses in the community to gain insight into potential anomalies.

We define the historical water quality grab sample dataset $ \mathcal{D} $ as a series of five-tuples $ \left(t,v,q,o,p\right) $ , each representing a flow $ q $ that propagates pollutants and contaminants downstream. Water quality parameters are measured as observations $ o $ of a phenomenon $ p $ , at a node $ v $ at time $ t $ . Examples of commonly observed phenomena include turbidity, temperature, flow rate, and pH (Bertrand-Krajewski et al., Reference Bertrand-Krajewski, Chebbo and Saget1998). We assume such data is normally captured by water agencies for regulatory compliance.

4.0.1. Extracting anomalies from water quality data

We begin by describing the process of extracting physical features of anomalies from the historical water quality grab sample dataset. Our approach first constructs a uniformly distributed set of anomalies for the nodes in a given network, and records their expected behavior using a physics-based simulator such as EPA SWMM (EPA, 2023). Then, profiles of anomalies with similar impacts are created by applying agglomerative clustering techniques (Müllner, Reference Müllner2013). The records of the dataset $ \mathcal{D} $ are used to weight anomaly profiles by their likelihood of occurrence: a best-effort matching is done from each grab sample in $ \mathcal{D} $ to an anomaly profile. We note that this has the added benefit of allowing historical grab sample data to be associated with a set of potential origin nodes, and their respective ranges in start/end times. The phenomena produced by each anomaly are labeled by domain experts using simple thresholds based on environmental regulations.

The semantic land use detailing the potential cause of an anomaly is identified probabilistically with a semantic map, which provides a correlation between semantic land uses and sets of phenomena, depending on an anomaly’s origin node and the semantic land uses that lie upstream. Let $ {\mathcal{M}}_v^{up} $ denote the total area of each semantic land use $ u $ that lies upstream of a node $ v $ ; the semantic map consists of the sum of these areas across nodes for each semantic land use. WLOG, we assume that the values in the semantic map are normalized. Extracted anomalies can then be probabilistically assigned a semantic land use “cause,” based on their sets of potential origin nodes: land uses covering larger areas near the origin are more likely to be selected as the cause. Next, we present two methods of generating new, credible instances of anomalies based on community land use semantics and network topology.

4.0.2. Generating new anomalies using semantics

Using the semantic map $ \mathcal{M} $ constructed earlier for associating land uses with different sets of phenomena, we next generate a new set of anomalies for the infrastructure network. Algorithm 1 details this process. Let $ {\alpha}_k $ denote a new anomaly to be created using semantics. First, we randomly choose a candidate land use $ {u}_k $ as the “cause” of the anomaly with weight equal to the number of historical anomalies that correlated with $ {u}_k $ in $ \mathcal{M} $ . We note that the selected land use dictates all other physical aspects of the new anomaly. In particular, the candidate origin node $ {v}_k^{\ast } $ for $ {\alpha}_k $ is selected with weight equal to the area of the region near $ {v}_k^{\ast } $ with semantic land use $ {u}_k $ , that is, $ Area\left({v}_k^{\ast },{u}_k\right) $ . The time period $ \left({t}_k^s,{t}_k^e\right) $ is selected by sampling a normal distribution on the start and end times of the historical anomalies caused by $ {u}_k $ . The set of phenomena $ {\mathcal{P}}_k $ that the anomaly produces is then chosen based on the correlation to $ {u}_k $ in the semantic map $ \mathcal{M} $ . Intuitively, this method aims to capture the notion that if an anomaly occurs in one region due to a specific land use (e.g., industrial sector), then its likelihood of occurring in other regions with the same land use is higher.

Algorithm 1 Generate Semantic Anomalies
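Algorithm 1's listing is not reproduced here; the following sketch is consistent with the generation steps described above, with one simplification: the generated phenomena are taken as the union of the historical phenomena for the chosen land use. All names are illustrative.

```python
import random
import statistics

def generate_semantic_anomaly(hist, area, rng=random):
    """Sketch of semantic anomaly generation. hist[u] -- historical
    anomalies (dicts with 't_s', 't_e', 'phenomena') attributed to land
    use u; area[(v, u)] -- Area(v, u) from the semantic map."""
    # 1. Choose the causal land use, weighted by historical frequency.
    uses = list(hist)
    u = rng.choices(uses, weights=[len(hist[x]) for x in uses])[0]
    # 2. Choose the origin node, weighted by Area(v, u).
    nodes = [v for (v, use) in area if use == u and area[(v, use)] > 0]
    v = rng.choices(nodes, weights=[area[(x, u)] for x in nodes])[0]
    # 3. Sample start/end times from normal fits to historical records.
    starts = [a["t_s"] for a in hist[u]]
    ends = [a["t_e"] for a in hist[u]]
    t_s = rng.gauss(statistics.mean(starts), statistics.pstdev(starts) or 1.0)
    t_e = max(t_s, rng.gauss(statistics.mean(ends), statistics.pstdev(ends) or 1.0))
    # 4. Inherit the phenomena correlated with u (simplified: union).
    phen = set().union(*(a["phenomena"] for a in hist[u]))
    return {"origin": v, "t_s": t_s, "t_e": t_e,
            "phenomena": phen, "land_use": u}
```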

4.0.3. Generating new anomalies using network topology

Leveraging the structure derived from the network topology can also provide useful insight into how anomalies can be generated in a community-scale infrastructure. We look to capture the intuition that an anomaly generated in an urban area can impact more than just its associated origin node, but that this impact is limited by the distance from its source, that is, the “reach” of an anomaly is greatest near its source. Our methodology is described in Algorithm 2.

To quantify the “reach” of an anomaly, we apply agglomerative clustering (Müllner, Reference Müllner2013) to the nodes of the graph, subject to the connectivity of the infrastructure graph. We use ward linkage (Nielsen & Nielsen, Reference Nielsen and Nielsen2016) to provide stability in cluster generation. This creates clusters of nodes representing subsets of the graph that are physically located near each other. For each node $ v $ in a cluster $ C $ , we then compute the reach of the anomalies $ {\alpha}_k\in \mathcal{A}(v) $ , which weights the nodes in a cluster by the number of anomalies they observe. This is used to create new anomalies in the network: we take the anomalies constructed using the semantics of land uses, and identify new origin nodes based on the computed “reach” of each anomaly.

Algorithm 2 Generating Anomalies using Network Topology

5. Exploring the quality of anomaly representation

We look to assess the quality of representation for anomalies. We advocate for three perspectives centered around examining the structural, behavioral, and semantic aspects of anomalies. Then, we discuss how these metrics can be used to improve the quality of representation for a sample set of anomalies.

5.1. Metrics for the representation of anomalies

Consider two sets of anomalies: $ \mathcal{A} $ , a sample set of anomalies to assess; and $ {\mathcal{A}}^{ref} $ , a reference set of anomalies. Note that $ {\mathcal{A}}^{ref} $ can consist of real-world anomalies (e.g., from historical data), or a comprehensive set of uniformly generated anomalies. Each set of anomalies is defined as in Section 3. We assume that both sets of anomalies apply to the same network, denoted $ \mathcal{G}=\left(\mathcal{V},\mathrm{\mathcal{E}}\right) $ .

5.1.1. Structural representation of anomalies

The structural perspective of anomaly representation evaluates $ \mathcal{A} $ on its spatial distribution in the network, relative to $ {\mathcal{A}}^{ref} $ . We cluster the nodes in the network into $ {n}_{cluster} $ clusters, subject to the connectivity implied by its topology. While any clustering technique can be used, we advocate for agglomerative clustering (Müllner, Reference Müllner2013) with ward linkage (Nielsen & Nielsen, Reference Nielsen and Nielsen2016). This creates groups of nodes based on their distance, and captures the intuition that nodes located near each other can structurally be represented by the same anomaly. For each anomaly $ {\alpha}_k $ with origin node $ {v}_k^{\ast } $ , let $ cl\left({v}_k^{\ast}\right) $ denote the cluster to which $ {v}_k^{\ast } $ belongs. Equation (6) presents the structural representation metric, which computes the proportion of clusters that contain origin nodes for anomalies in $ \mathcal{A} $ , compared to $ {\mathcal{A}}^{ref} $ . The metric $ StrRepr\left(\cdot \right) $ ranges from $ 0 $ , where $ \mathcal{A} $ has poor spatial distribution in $ \mathcal{G} $ , to $ 1 $ , where $ \mathcal{A} $ covers all clusters represented by $ {\mathcal{A}}^{ref} $ . We note that if the reference set does not itself contain anomalies throughout the entire network, this metric can exceed $ 1 $ .

(6) $$ StrRepr\left(\mathcal{A},{\mathcal{A}}^{ref},\mathcal{G}\right)=\left|\underset{v_k^{\ast}\in {\mathcal{V}}^{\ast }}{\cup } cl\left({v}_k^{\ast}\right)\right|/\left|\underset{v_k^{\ast}\in {\mathcal{V}}_{ref}^{\ast }}{\cup } cl\left({v}_k^{\ast}\right)\right| $$
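Given a cluster assignment, Equation (6) reduces to a ratio of set sizes; a minimal sketch with illustrative names:

```python
def str_repr(sample_origins, ref_origins, cluster_of):
    """Eq (6): the number of clusters containing origin nodes of the
    sample set, relative to those of the reference set.
    cluster_of[v] -- cluster id of node v."""
    sample_hit = {cluster_of[v] for v in sample_origins}
    ref_hit = {cluster_of[v] for v in ref_origins}
    return len(sample_hit) / len(ref_hit)
```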

5.1.2. Behavioral representation of anomalies

Understanding the different ranges of impacts that anomalies can have on the network is another important aspect to explore. Let $ I\left({\alpha}_k\right) $ denote an anomaly impact function that computes the number of nodes in the network that an anomaly $ {\alpha}_k $ reaches. Then, we define two frequency distributions $ \Pi $ and $ {\Pi}^{ref} $ that record the sizes of the impacts of anomalies for each set $ \mathcal{A} $ and $ {\mathcal{A}}^{ref} $ , respectively. WLOG, we assume that these distributions are normalized. To compare the similarity of anomaly impacts in the sample set $ \mathcal{A} $ and reference set $ {\mathcal{A}}^{ref} $ , we propose an adapted version of the earth mover’s distance (EMD) metric (Pele & Werman, Reference Pele and Werman2009) to quantify the “direction” of difference between $ \Pi $ and $ {\Pi}^{ref} $ . Specifically, we adapt this metric into a directed variant of EMD with normalization, where the “direction” of change is considered, that is, “negative” work represents a need to shift the distribution leftward. The behavioral representation metric is computed in Equation (7). In general, when the value of this metric is close to $ 0 $ , the two sets of anomalies $ \mathcal{A} $ and $ {\mathcal{A}}^{ref} $ have similar behavior.

(7) $$ BehRepr\left(\mathcal{A},{\mathcal{A}}^{ref},\mathcal{G}\right)=\sum \limits_{i=1}^{\left|\mathcal{V}\right|}\sum \limits_{j=1}^i\left(\Pi (j)-{\Pi}^{ref}(j)\right) $$
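The directed EMD described above reduces, for one-dimensional distributions, to a sum of cumulative differences; a minimal sketch (names are our own):

```python
def beh_repr(pi, pi_ref):
    """Directed EMD between two normalized impact-size distributions:
    the sum of cumulative differences. A negative value means the
    sample set under-represents small-impact anomalies relative to
    the reference set."""
    total = cum = 0.0
    for p, q in zip(pi, pi_ref):
        cum += p - q
        total += cum
    return total
```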

5.1.3. Semantic representation of anomalies

Finally, we discuss the semantic perspective on assessing the quality of representation in sets of anomalies. Here, we intuitively measure the distribution of “causes” of anomalies. This primarily entails understanding the land uses from which an anomaly is generated. For each land use $ {u}_m\in \mathcal{U} $ , we count the number of anomalies in $ \mathcal{A} $ and $ {\mathcal{A}}^{ref} $ that are generated by $ {u}_m $ . Then, we take their difference, and weight it by the priority $ {\lambda}_m $ associated with $ {u}_m $ . This helps to reflect the importance of representing certain causes of anomalies. Equation (8) presents the semantic representation metric, which computes the normalized weighted difference between causes of anomalies in $ \mathcal{A} $ and $ {\mathcal{A}}^{ref} $ . Note that when the value of this metric is close to $ 0 $ , the causes present in the two sets of anomalies $ \mathcal{A} $ and $ {\mathcal{A}}^{ref} $ are similar.

(8) $$ SemRepr\left(\mathcal{A},{\mathcal{A}}^{ref},\mathcal{G}\right)=\frac{1}{\left|\mathcal{U}\right|\cdot \left|{\mathcal{A}}^{ref}\right|}\sum \limits_{u_m\in \mathcal{U}}{\lambda}_m\cdot \left(\sum \limits_{\alpha_k\in \mathcal{A}}\unicode{x1D7D9}\left[{u}_k={u}_m\right]-\sum \limits_{\alpha_k^{\prime}\in {\mathcal{A}}^{ref}}\unicode{x1D7D9}\left[{u}_k^{\prime }={u}_m\right]\right) $$
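Equation (8) can be computed directly from per-anomaly cause labels; a minimal sketch with illustrative names:

```python
def sem_repr(sample_causes, ref_causes, priorities):
    """Eq (8): priority-weighted, normalized difference in per-land-use
    anomaly counts. sample_causes/ref_causes -- one land-use label per
    anomaly; priorities[u] -- lambda_m for land use u."""
    norm = len(priorities) * len(ref_causes)
    return sum(
        lam * (sample_causes.count(u) - ref_causes.count(u))
        for u, lam in priorities.items()
    ) / norm
```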

5.2. Enhancing the quality of anomaly representation

Utilizing sets of anomalies that are well represented for the given infrastructure network can help improve the suitability of placement solutions in practice. Our methodology described in Algorithm 3 looks to improve the quality of a sample set of anomalies with respect to the metrics introduced in Section 5.1.

Let $ {\mathcal{A}}^{comp} $ denote a comprehensively constructed set of anomalies with which a sample set of anomalies $ \mathcal{A} $ can be augmented. Our approach constructs two subsets of anomalies: the add set, which contains anomalies from $ {\mathcal{A}}^{comp} $ to add to $ \mathcal{A} $ ; and the remove set, which contains anomalies from $ \mathcal{A} $ to remove. The procedure in Algorithm 3 maintains a total of $ N $ anomalies by iteratively replacing anomalies in the remove set with anomalies from the add set whenever $ \mathcal{A} $ does not satisfy certain thresholds for representation.

Algorithm 3 Enhancing Anomaly Representation

From the structural perspective, improving this metric requires that more of the constructed clusters be represented with an anomaly. Thus, the add set must consist of anomalies in $ {\mathcal{A}}^{comp} $ whose origin is in a cluster not currently represented. On the other hand, the remove set may only contain anomalies from overrepresented clusters, that is, clusters from which multiple anomalies originate (lines 3–8). Improving the behavioral representation of $ \mathcal{A} $ requires interpreting the value of the metric itself. In particular, if this value is largely negative, then $ \mathcal{A} $ contains far fewer small-impact anomalies (e.g., very transient anomalies) than the reference set $ {\mathcal{A}}^{ref} $ . Inversely, if this value is largely positive, then $ \mathcal{A} $ needs to contain more persistent (i.e., continuous) anomalies. The add set and remove set can thus be built accordingly using this intuition (lines 9–13). Finally, addressing the semantic representation of anomalies requires examining the number of anomalies caused by each semantic land use, along with its priority. Both the add set and remove set must maintain an order in which anomalies are considered. In particular, the add set should consist of anomalies caused by land uses that are under-represented, in order of highest to lowest priority; and the remove set should contain anomalies whose causes are already sufficiently represented, in order of lowest to highest priority (lines 14–20).

6. The integrated STEP solution and placement algorithm

In this section, we detail the STEP approach to construct a heterogeneous sensor placement solution leveraging structural, behavioral, and semantic properties of the community infrastructure network. The core of our approach uses a mixed integer linear program (MILP), which can produce optimal sensor deployments for a given network graph. However, due to the heavy computation required to solve large MILPs, we utilize a graph partitioning strategy to first split the infrastructure graph into smaller subgraphs. To this end, we first introduce the notion of semantic entropy, a novel information-theoretic quantity, and apply it alongside the network properties defined in Section 3.

6.1. Definition: semantic entropy

The potential causes of an anomaly can be varied in an infrastructure network. To gain insight, we look to quantify the skewness in the distribution of upstream semantic land uses. In particular, if a region in the network consists primarily of a specific land use $ {u}_m $ , then we can more confidently attribute anomalies that occur to $ {u}_m $ . These correlations can then be collected into a knowledge base for identifying potential causes of new anomalies. Denote $ {\mathcal{G}}_{v_j}^{up} $ as the subgraph induced from nodes that lie upstream of node $ {v}_j $ . We then define the semantic entropy index $ \mathcal{S}\mathrm{\mathcal{E}} $ for a given set of semantic land uses $ \mathcal{U} $ and the upstream subgraph $ {\mathcal{G}}_{v_j}^{up} $ in Equation (9).

(9) $$ {\displaystyle \begin{array}{c}\mathcal{S}\mathrm{\mathcal{E}}\left(\mathcal{U},{\mathcal{G}}_{v_j}^{up}\right)=\sum \limits_{u_m\in \mathcal{U}\left({v}_j\right)}{\lambda}_m\cdot \left(-P\left({u}_m\right)\log P\left({u}_m\right)\right)\\ {}P\left({u}_m\right)=\sum \limits_{v_i\in {\mathcal{V}}_{v_j}^{up}}\left(\frac{Area\left({v}_i,{u}_m\right)}{\sum \limits_{u_n\in \mathcal{U}} Area\left({v}_i,{u}_n\right)}\right)\end{array}} $$

This definition is adapted from the well-established information-theoretic definition of entropy in Shannon (Reference Shannon1948). We let $ P\left({u}_m\right) $ be the proportion of the total area allocated to the land use $ {u}_m\in \mathcal{U} $ in the nodes of $ {\mathcal{G}}_{v_j}^{up} $ . Then, by taking the sum of the terms $ -P\left({u}_m\right)\log P\left({u}_m\right) $ over all semantic land uses $ {u}_m\in \mathcal{U} $ and weighting them by the priority $ {\lambda}_m $ of $ {u}_m $ , we compute the semantic entropy of land uses in the upstream subgraph. Intuitively, this value represents the weighted average “information” (i.e., bits) needed to describe the skewness in the upstream distribution of semantic land uses.
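A simplified sketch of the semantic entropy computation, assuming $ P\left({u}_m\right) $ is taken as the land use's share of the total upstream area (a simplification of the per-node sum in Equation (9)); names are illustrative:

```python
import math

def semantic_entropy(area_up, priorities):
    """Priority-weighted Shannon entropy of the upstream land-use area
    distribution. area_up[u] -- total upstream area of land use u;
    priorities[u] -- lambda_m for land use u."""
    total = sum(area_up.values())
    se = 0.0
    for u, a in area_up.items():
        if a > 0:
            p = a / total
            se -= priorities.get(u, 1.0) * p * math.log2(p)
    return se
```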

6.2. Placement optimization

The STEP placement strategy operates in two phases. First, we look to split the network into smaller subgraphs based on the values of the network properties defined earlier. Then, we leverage an MILP formulation to produce optimal sensor deployments for each subgraph, and merge partial solutions through heuristics and feedback from domain experts.

6.2.1. Graph partitioning

For tractability of the MILP, we partition the infrastructure graph $ \mathcal{G} $ into multiple smaller subgraphs. Here, key components of the network are grouped so that optima can be found within each partition. The first half of Algorithm 4 describes our methodology.

We exploit the relationship between the network properties defined earlier and the set of optimization objectives for our placement. The betweenness centrality and branching complexity of nodes in the network influence the quality of coverage and traceability of a proposed placement solution, respectively. Let $ \Delta \mathrm{\mathcal{B}}\mathcal{TN}\left({v}_j\right) $ , $ \Delta \mathrm{\mathcal{B}}\mathcal{C}\left({v}_j\right) $ , and $ \Delta \mathcal{S}\mathrm{\mathcal{E}}\left({v}_j\right) $ represent the total change in the respective metrics between node $ {v}_j $ and its direct parents $ pa\left({v}_j\right) $ . These quantities are used to greedily partition the graph $ \mathcal{G} $ : we select nodes to instrument based on the extent to which making a partition would yield the largest reduction in these values. We accommodate tradeoffs between the objectives using a weight on each delta term, that is, $ {w}_{cov} $ and $ {w}_{tr} $ . This process is repeated for the total number of partitions desired, $ {N}_{part} $ . We note that placing sensors at these nodes will help to minimize worst-case coverage and traceability.

6.2.2. Formulating an MILP and merging the solution

After $ \mathcal{G} $ is partitioned, we formulate and solve the MILP in Equation (10) for each subgraph. The objective in Equation (10a) takes a weighted sum over the coverage and traceability of the placement, which captures the placement solution’s capacity to detect anomalies and trace them to potential sources. Two constraints are considered: a budget constraint (Constraint 10b) which limits the total budget allowed; and a heterogeneity constraint (Constraint 10c) which limits the total number of sensors measuring a specific phenomenon at a node.

(10a) $$ \max \hskip2.50em \sum \limits_{\alpha_k\in \mathcal{A}}\hskip0.5em \sum \limits_{x_{lj}\in \mathcal{X}}{x}_{lj}\left({w}_{cov} COV\left({x}_{lj},{\alpha}_k\right)+{w}_{tr} TR\left({x}_{lj},{\alpha}_k\right)\right) $$
(10b) $$ {\displaystyle \begin{array}{ll}\mathrm{s}.\mathrm{t}.\hskip2em & \sum \limits_{s_l\in \mathcal{S}}\sum \limits_{v_j\in \mathcal{V}}{x}_{lj}{c}_l\le {B}^c\end{array}} $$
(10c) $$ \hskip8pc \sum \limits_{s_l\in \mathcal{S}(p)}{x}_{lj}\le 1\hskip6em \forall {v}_j\in \mathcal{V},\forall p\in \mathcal{P} $$

Finally, the placement solutions obtained by solving the MILP on each subgraph are then merged. We use simple migration heuristics to adjust the overall placement, that is, if moving a deployed sensor to an adjacent node can improve the weighted objectives for coverage and traceability. The second half of Algorithm 4 shows the process of solving the MILP on the graph partitions and merging the solution.

Algorithm 4 Sensor Placement
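For a tiny subgraph, the optimization in Equation (10) can be checked by exhaustive search over the binary variables; the sketch below is illustrative only (names are our own), and real instances would be handed to an MILP solver.

```python
from itertools import product

def best_placement(phenom, cost, nodes, score, budget):
    """Brute-force Eq (10) on a tiny subgraph: maximize the sum of
    x_lj * score[(l, j)] subject to the budget (10b) and at most one
    sensor per phenomenon per node (10c).
    phenom[l] -- phenomenon measured by sensor l."""
    pairs = [(l, v) for l in phenom for v in nodes]
    best, best_val = frozenset(), float("-inf")
    for bits in product((0, 1), repeat=len(pairs)):
        chosen = [p for p, b in zip(pairs, bits) if b]
        if sum(cost[l] for l, _ in chosen) > budget:
            continue  # violates budget constraint (10b)
        keys = [(phenom[l], v) for l, v in chosen]
        if len(keys) != len(set(keys)):
            continue  # violates heterogeneity constraint (10c)
        val = sum(score[p] for p in chosen)
        if val > best_val:
            best, best_val = frozenset(chosen), val
    return best, best_val
```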

6.2.3. Placement refinement

In many real-world settings, the idealized deployment solutions proposed by our algorithm may be infeasible or ill-advised due to external factors, such as communication issues, potential for vandalism, physical barriers preventing easy human access, and environmental conditions. As a result, changes to the proposed placement might be required. To aid domain experts in the construction and refinement of a sensor placement solution, the STEP interactive toolkit was developed. This tool provides user-level visualization of the different steps of our approach, and allows a human-in-the-loop (i.e., domain expert or practitioner) to insert regional infrastructure networks, generate ideal placements, and alter the suggested placement. We utilize domain expert feedback from the field and results of physics-based simulations to conduct relevant what-if analyses that provide effective community-scale deployments. We provide additional details on the STEP toolkit in Section 8.

7. Experiments

In this section, several experiments are conducted to show the utility of integrating structural, behavioral, and semantic aspects of networks. STEP is compared against multiple baseline techniques for sensor placement, and we analyze the number of anomalies detected, their traceability, and coverage of nodes. We finally explore the credibility and realism of each considered set of generated anomalies.

7.1. Experimental setup

7.1.1. Real-world networks

We evaluate STEP on six real-world stormwater networks covering cities in Southern California in the United States. The networks are defined using EPA SWMM (EPA, 2023), and were provided to us by the public water utility agency, Orange County Public Works (OCPW). The structure and size of each network are visualized in Figure 2. Regions surrounding nodes in the networks are specified using the definition of subcatchments as provided within the EPA SWMM models. Our work considers three categories of semantic land uses: (i) high-priority land uses with priority $ \lambda =3 $ : agriculture, commercial service, and industrial; (ii) medium-priority land uses with priority $ \lambda =2 $ : mixed commercial and mixed urban; and (iii) low-priority land uses with priority $ \lambda =1 $ : hi-density residential and lo-density residential.

Figure 2. EPA SWMM networks used for evaluation.

7.1.2. Historical data

We utilize historical grab sample data collected by OCPW during site visits in which instances of anomalous behavior were reported. This dataset consists of 1292 historical grab samples measuring several water quality metrics between 2006 and 2022, from 30 different outfalls in Southern California. This dataset is used later to inform the set of credible anomalies in the evaluation, using the method described in Section 4.

7.1.3. Sensors

Multiple heterogeneous water quality sensors are considered for deployment, as shown in Table 1. Sensor specifications, including the measured phenomenon and accuracy, are obtained from Shi et al. (Reference Shi, Catsamas, Kolotelo, Wang, Lintern, Jovanovic, Bach, Deletic and McCarthy2021), Catsamas et al. (Reference Catsamas, Shi, Wang and McCarthy2022), and Wang et al. (Reference Wang, Shi, Catsamas and M2021). Sensor costs vary from $100 to $150 for hardware and deployment, and recurrent costs for continued operation and maintenance (e.g., cellular dataplan, battery replacements) range from $300 to $350 per year based on the rates at which the sensors log data. We note that the accuracy of the sensor is an empirically derived constant or percentage of the quantity of the measured phenomenon. In our experiments, we assume that sensors can only observe an anomaly if the percent difference between its observed value and its simulated value is under 30%.

Table 1. Sensors considered in placement

7.1.4. Anomalies

Two sets of anomalies are used to obtain empirical measurements for the metrics defined in Section 6. For each node in the evaluated networks, five anomalies were uniformly constructed with a random duration of $ 30\pm 5 $ minutes and flow rate of $ 0.2\pm 0.2 $ cfs. The set of phenomena produced by each anomaly is randomly sampled from the phenomena captured by the sensors in Table 1. For evaluating proposed placements, we construct a more realistic set of anomalies based on historical data and semantic land uses using the methodology in Section 4. In particular, extracting anomalies from historical data utilized agglomerative clustering to group anomalies into profiles based on their associated land uses; we set a distance threshold of $ 0.2 $ for the absolute difference between percentages of land uses. Best effort matches between historical grab samples and the constructed profiles were aided by considering factors such as distance to nodes and observed phenomena.

7.1.5. Comparison algorithms

Two common baseline algorithms for sensor placement optimization are leveraged in our evaluation. The Greedy Heuristic (Greedy) (Kansal et al., 2012; Rathi & Gupta, 2014; Schwartz et al., 2014a) selects deployment locations for sensors based on their ability to minimize or maximize a given objective. Two different classes of greedy baseline algorithms are considered. The first class leverages only structural network properties to optimize a sensor placement. These algorithms, Naive-COV and Naive-BTN, aim to deploy sensors to maximize the radius-based definition of coverage and the global betweenness centrality in the network, respectively. The second class of greedy algorithms leverages both topological and empirical network properties to maximize the coverage (COV) and traceability (TR) objectives, as defined previously in Section 3. We also consider another common sensor deployment strategy: the genetic algorithm (Genetic) (Ehsani & Afshar, 2010; Krause et al., 2009; Preis & Ostfeld, 2008). This technique searches for deployment solutions by simulating the process of natural selection and evolution. In this comparison baseline, we set a population size of $ 1000 $ , a crossover rate of $ 0.8 $ , and a mutation rate of $ 0.01 $ . As with the greedy heuristic, coverage and traceability are used as objectives. As mentioned previously, each comparison baseline uses the uniform distribution of anomalies for optimization, but is evaluated on the more realistic and credible distribution of anomalies.
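A minimal sketch of the greedy selection loop shared by these baselines follows; the callbacks `coverage_of` and `cost_of` are placeholders standing in for the paper's objective and cost models, not actual STEP APIs:

```python
def greedy_placement(candidates, coverage_of, cost_of, budget):
    """Repeatedly add the affordable sensor location with the best marginal
    gain in the objective (here, the number of newly covered nodes) until
    no affordable location improves the objective or the budget is spent."""
    placed, covered, spent = [], set(), 0.0
    while True:
        best, best_gain = None, 0
        for loc in candidates:
            if loc in placed or spent + cost_of(loc) > budget:
                continue
            gain = len(coverage_of(loc) - covered)  # marginal coverage gain
            if gain > best_gain:
                best, best_gain = loc, gain
        if best is None:
            return placed
        placed.append(best)
        covered |= coverage_of(best)
        spent += cost_of(best)
```

The same loop serves the traceability objective by swapping the marginal-gain computation.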

7.1.6. Performance metrics

The efficacy of our approach is first evaluated on the number of anomalies detected by each proposed placement. We then compare the coverage and traceability provided by the placements, as defined in Equations (1) and (2). For each comparison, the range of percent differences between our approach and each baseline is reported. This is computed as $ |v(\mathrm{STEP})-v(\mathrm{CMP})|/v(\mathrm{STEP}) $, where $ v(\mathrm{STEP}) $ and $ v(\mathrm{CMP}) $ are the metric values from STEP and a comparison algorithm, respectively.
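For reference, this comparison metric is simply:

```python
def percent_difference(v_step: float, v_cmp: float) -> float:
    """|v(STEP) - v(CMP)| / v(STEP), expressed as a percentage; larger
    values indicate a larger gap between STEP and the baseline."""
    return 100.0 * abs(v_step - v_cmp) / v_step
```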

7.2. Experimental results

In our experiments, we evaluate the sensor placements produced by STEP on the number of anomalies detected, their traceability, and the number of nodes covered. Our results show how the STEP approach coalesces knowledge of the network topology and behavior with the embedded community land-use semantics to propose effective and efficient sensor deployments, as compared to multiple baseline techniques.

7.2.1. Detected anomalies

In Figure 3, we first present the average number of anomalies captured by the proposed sensor deployments for each network, over varying budget levels. Our results indicate that the STEP approach generally outperformed the other baselines with respect to the total proportion of anomalies detected. For the small, medium, and large networks, the average percent difference between STEP and the baseline approaches at the highest budget limit evaluated ranged from 35.66% to 528.13%, 32.08% to 309.10%, and 0.74% to 206.90%, respectively. One of the largest factors influencing the actual proportion of anomalies captured is the number of nodes in the network. We observed that smaller networks (i.e., Coyote Creek Upstream, Santa Ana Upstream) have the highest proportion of anomalies captured, while larger networks (i.e., Newport Beach) perform worse. Since the budget range used in our evaluation is similar for each size of network, the resulting proposed placement for the largest network was difficult to distinguish from other solutions. Thus, we show that our approach was able to propose a sensor deployment that can effectively monitor the network for anomalies.

Figure 3. Number of anomalies detected in evaluation networks.

7.2.2. Traceability

Next, we explore how effectively anomalies can be traced back to a potential source, that is, their traceability. In Figure 4, we report the average proportion of nodes that were eliminated as potential sources for each anomaly. Note that when an anomaly goes undetected, no nodes can be eliminated. Our experiment shows that the average percent difference between the traceability enabled by STEP and the baselines at the largest budget limit ranged from 30.30% to 671.65%, 43.12% to 400.36%, and 2.95% to 272.75% across the small, medium, and large networks, respectively. As in our previous experiment, the size of the network is a primary factor that impacts traceability: since smaller networks detected more anomalies with the proposed deployment, more potential source nodes could be eliminated. These results show the efficacy of the sensor deployments proposed by STEP, which balanced the tradeoff between the detection of anomalies and the ability to trace them back to potential origins.
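The elimination logic can be sketched as a reverse-reachability computation over the infrastructure graph; the dictionary-based graph encoding below is our own simplification for illustration:

```python
from collections import deque

def upstream_of(parents, node):
    """All nodes from which flow can reach `node` (including itself),
    found by BFS over the reverse (child -> parents) adjacency map."""
    seen, queue = {node}, deque([node])
    while queue:
        for p in parents.get(queue.popleft(), ()):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

def traceability(parents, all_nodes, detected_at):
    """Proportion of nodes eliminated as potential sources. Each detecting
    sensor restricts the source to its upstream set; an undetected anomaly
    (empty `detected_at`) eliminates nothing, matching the text."""
    if not detected_at:
        return 0.0
    suspects = set(all_nodes)
    for s in detected_at:
        suspects &= upstream_of(parents, s)
    return 1.0 - len(suspects) / len(all_nodes)
```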

Figure 4. Evaluation of network node traceability versus budget.

7.2.3. Node coverage

Our next experiment studies how the number of nodes monitored by the proposed sensor deployments varies with the budget. To this end, we present the node coverage for each of the small-, medium-, and large-sized networks in Figure 5. Our results indicate that the small- and medium-sized networks tend to have higher node coverage using the STEP approach as compared to the baselines. In particular, the average percent differences for these networks range from 27.45% to 376.67% and from 43.23% to 300.00%, respectively. However, for the large network, this percent difference drops to between 2.67% and 140.65%; the limited budget explored for deployment causes all placement algorithms to perform suboptimally with respect to the number of nodes covered. We note that this also yields a loss in traceability.

Figure 5. Evaluation of nodal coverage versus budget.

7.3. Assessing the credibility and realism of generated anomalies

Finally, we evaluate the quality of the anomalies that could be generated for each of the real-world stormwater networks. To this end, we consider the structural, behavioral, and semantic perspectives on the representation of anomalies, as defined earlier in Section 5. Four different sets of $ N $ anomalies were studied and compared: (i) a uniform (UNIF) set which was generated over each of the nodes in the infrastructure graph; (ii) a random (RAND) set whose parameters were all randomly selected, following estimated values proposed by domain experts; (iii) a historical (HIST) set based on the historical anomaly extraction process described in Section 4; and (iv) a credible (CRED) set consisting of anomalies constructed using semantics and network topology, as described in Algorithms 1, 2, and 3. Our experiment studies the effect of considering varying numbers of anomalies, ranging from 10 to 2000, on the overall representation of anomalies. For each considered anomaly set size, three random samples were taken and their results were averaged.

7.3.1. Evaluating the structural representation of anomalies

The structural representation of anomalies compares the degree to which a sample set of anomalies is spatially distributed in the infrastructure network. In this experiment, we use the uniform set of anomalies as the reference set, since it contains anomalies at all nodes of the network. Our results are presented in Figure 6. The average percent change between the considered set of credible anomalies and the best baseline set ranged from 1.69% to 63.04% for the small networks, 0.0% to 61.14% for the medium networks, and 1.71% to 45.68% for the large network. A common (and expected) trend, regardless of network size, is that higher numbers of anomalies yield better structural representation of the network: considering more anomalies provides more opportunities for representing clusters. An interesting observation in this regard lies in the location of the cutoff between where structural representation is poor (when the number of anomalies is small) and where it is marginally improving, if at all (when the number of anomalies is large). Using the Elbow method heuristic (Syakur et al., 2018), our methodology for constructing credible anomalies requires roughly 150% to 400% fewer anomalies to yield the same structural representation as other baseline methods, across all sizes of networks. In each of the networks, we also found that historical anomalies generally have very poor structural representation; this is due to the limited resources available for water agencies to conduct site visits. This also demonstrates the need to rely on more than just historical data when considering the potential anomalies in an infrastructure network. Thus, our results show that the STEP approach for generating credible anomalies is structurally representative of the network.
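One simple instantiation of the Elbow-method heuristic used to locate this cutoff is a second-difference knee finder; this sketch is an assumed variant, as the exact procedure follows Syakur et al. (2018):

```python
def elbow_point(ns, scores):
    """Pick the anomaly-set size after which the marginal gain in
    representation drops off most sharply (largest second difference).
    `ns` are candidate set sizes; `scores` the representation achieved."""
    gains = [scores[i + 1] - scores[i] for i in range(len(scores) - 1)]
    drops = [gains[i] - gains[i + 1] for i in range(len(gains) - 1)]
    return ns[drops.index(max(drops)) + 1]
```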

Figure 6. Comparison of structural representation of anomalies.

7.3.2. Evaluating the behavioral representation of anomalies

Next, we examine the range of anomaly behaviors and their representation in the infrastructure network. We use the set of historical anomalies as the reference against which all other anomaly sets are compared in this experiment. Figure 7 depicts the varying levels of behavioral representation: there is an 11.42% to 212.87%, a 1.49% to 208.48%, and a 14.50% to 122.23% average percent difference between the representation provided by the set of credible anomalies and that of the baselines, for the small, medium, and large networks, respectively. Note that values closer to 0 are considered more representative, as they better match the reference set, that is, the historical data. Our results reveal the general trend that as the size of the network grows, closely matching the reference set becomes more difficult. This insight highlights the need to consider credible anomalies for a given infrastructure network, especially when larger, more geo-distributed networks are studied.
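Behavioral comparisons of this kind can be scored with the Earth Mover's Distance (see footnote 1). For equal-size one-dimensional samples (e.g., anomaly flow rates, an assumed feature here), the EMD reduces to the mean absolute gap between sorted values:

```python
def emd_1d(sample_a, sample_b):
    """1-D Earth Mover's Distance between two equal-size empirical samples:
    sort both and average the element-wise gaps. Values near 0 mean the
    candidate anomaly set closely matches the reference set."""
    assert len(sample_a) == len(sample_b)
    pairs = zip(sorted(sample_a), sorted(sample_b))
    return sum(abs(x - y) for x, y in pairs) / len(sample_a)
```

For example, a credible set whose flow rates track the historical ones scores lower (better) than a uniformly scattered set.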

Figure 7. Comparison of behavioral representation of anomalies.

7.3.3. Evaluating the semantic representation of anomalies

We finally study the representation of anomalies from a semantic perspective, which examines the distribution of causes that create anomalies within the network. We plot the results in Figure 8 for each of the real-world stormwater networks used for the evaluation. In our comparison, we found that the average percent differences for the small, medium, and large networks ranged from 19.80% to 146.56%, from 15.78% to 96.23%, and from 41.00% to 94.84%, respectively. We observe that when the number of anomalies considered is small, there tend to be significantly larger deviations in the semantic representation. This is likely due to the lack of diverse causes to which anomalies can be attributed. However, on average, the STEP approach is effective at selecting anomalies of varying size to minimize the difference in semantic land-use causes while balancing a tradeoff with representation in structure and behavior.

Figure 8. Comparison of semantic representation of anomalies.

8. Toward a STEP prototype

We developed a prototype system to serve as a visualization and refinement tool to enable interactions with domain experts and practitioners. Figure 9 illustrates the architecture and prototype system. The general workflow fundamentally leverages the infrastructure graph, an existing set of historical data, and community-level semantics data as described in Section 3. This is used to derive several topological and empirical network properties, which then inform the STEP approach for sensor deployment. A preview of our dashboard is presented in Figure 10, which shows the sample Newport Beach network and a few network properties.

Figure 9. The STEP prototype architecture.

Figure 10. The STEP interactive dashboard.

In general, we envision the role of our prototype system as an interface that allows domain experts to explore different sensor deployments, especially in the case where certain nodes are found to be unsuitable for instrumentation due to external factors (e.g., communication, vandalism, physical barriers to human access, environmental conditions, etc.). We provide network analytics and support for conducting what-if analysis to facilitate exploration of the network and suggested deployments. The STEP prototype toolkit and dashboard are published on GitHub (https://github.com/andrewgchio/STEP, 2023).

8.0.1. Experiences toward a practical, real-world deployment

Initial efforts with collaborators from OCPW to deploy the STEP framework in practice have yielded a small handful of sensor deployments within a real-world stormwater network. Figure 11 depicts a few of these sensors. Our experiences revealed several new practical challenges for an operational deployment. One key observation was the need to better understand failures in sensing due to issues such as communication, power, and sensor drift. This naturally points to interest in determining how sensor deployments can be optimized to additionally provide a notion of verifiability of information, that is, the ability to use multiple sensors to study and corroborate the state of the network. Enabling effective retrofit of deployments, for example, after sensors are partially instrumented in the network, was another key challenge that presented itself during this task. Throughout this process, we note that domain experts remain essential to the operation of the STEP system; their knowledge of hard-to-capture external factors (e.g., the potential for vandalism) helps to reduce the number of unsuitable locations chosen for instrumentation. However, we leave these computational aspects as future work.

Figure 11. Real-world deployment of sensors in a storm drain.

9. Conclusions and future work

The STEP approach is a framework that aims to solve the heterogeneous sensor placement problem by exploiting structural, behavioral, and semantic aspects of a community infrastructure. We relied on historical grab sample data, combined with community-level semantics, to learn and construct new, credible anomalies for a given infrastructure network. Key topological and empirical network properties were also identified, which played a large role in determining an optimal sensor deployment and its refinement. Six real-world stormwater infrastructure networks were explored in our evaluations, which illustrate the efficacy of STEP in balancing the tradeoffs between the core objectives of coverage, traceability, and verifiability. We developed an interactive dashboard to help domain experts and practitioners use STEP. Future work will entail realizing deployments for the real-world stormwater networks, and analyzing the observations they provide toward facilitating informed decision-support tools. Specifically, we plan to examine how anomaly detection methods and source identification inference can be enabled through the proposed deployments. We also aim to identify other elements for automation based on real-world experiences to reduce the effort required by domain experts and practitioners.

Author contribution

Conceptualization: J.P., N.V., A.C.; Resources: J.P.; Supervision: J.P., N.V.; Writing – original draft: J.P., N.V., A.C.; Writing – review & editing: J.P., N.V., A.C.; Validation: N.V., A.C.; Investigation: A.C.; Methodology: A.C.; Software: A.C.; Visualization: A.C.

Data availability statement

Code can be accessed on GitHub: https://github.com/andrewgchio/STEP.

Funding statement

This research was supported by the UC National Laboratory Fees Research Program Grant No. L22GF4561, and NSF Grants No. 1952247 and 2008993. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Competing interest

The authors declare no competing interests exist.

Ethical standard

The research meets all ethical guidelines, including adherence to the legal requirements of the study country.

Footnotes

1 The Earth Mover’s Distance (EMD) metric, also known as the Wasserstein metric, is a measure of similarity between distributions. It is often described using the analogy of moving piles of earth: it computes the minimum amount of work needed to transform one pile of earth into another.

References

Al-Hader, M. and Rodzi, A. (2009). The smart city infrastructure development & monitoring. Theoretical and Empirical Researches in Urban Management, 4(2(11)):87–94.
Alarifi, A., AlZubi, A. A., Al-Maitah, M., and Al-Kasasbeh, B. (2019). An optimal sensor placement algorithm (O-SPA) for improving tracking precision of human activity in real-world healthcare systems. Computer Communications, 148:9–16.
Aral, M. M., Guan, J., and Maslia, M. L. (2010). Optimal design of sensor placement in water distribution networks. Journal of Water Resources Planning and Management, 136(1):5–18.
Arumugam, P. and Saranya, R. (2018). Outlier detection and missing value in seasonal ARIMA model using rainfall data. Materials Today: Proceedings, 5(1):1791–1799.
ASCE (2021). Stormwater.
Bagula, A., Castelli, L., and Zennaro, M. (2015). On the design of smart parking networks in the smart cities: An optimal sensor placement model. Sensors, 15(7):15443–15467.
Barbosa, A. E., Fernandes, J. N., and David, L. M. (2012). Key issues for sustainable urban stormwater management. Water Research, 46(20):6787–6798.
Barrientos-Torres, D., Martinez-Ros, E. A., Navarro-Tuch, S. A., Pablos-Hach, J. L., and Bustamante-Bello, R. (2023). Water flow modeling and forecast in a water branch of Mexico City through ARIMA and transfer function models for anomaly detection. Water, 15(15):2792.
Bernstein, B., Moore, B., Sharp, G., and Smith, R. (2009). Assessing urban runoff program progress through a dry weather hybrid reconnaissance monitoring design. Environmental Monitoring and Assessment, 157(1):287–304.
Berry, J., Carr, R. D., Hart, W. E., Leung, V. J., Phillips, C. A., and Watson, J.-P. (2009). Designing contamination warning systems for municipal water networks using imperfect sensors. Journal of Water Resources Planning and Management, 135(4):253–263.
Berry, J., Hart, W. E., Phillips, C. A., Uber, J. G., and Watson, J.-P. (2006). Sensor placement in municipal water networks with temporal integer programming models. Journal of Water Resources Planning and Management, 132(4):218–224.
Berry, J. W., Fleischer, L., Hart, W. E., Phillips, C. A., and Watson, J.-P. (2005). Sensor placement in municipal water networks. Journal of Water Resources Planning and Management, 131(3):237–243.
Bertrand-Krajewski, J.-L., Chebbo, G., and Saget, A. (1998). Distribution of pollutant mass vs volume in stormwater discharges and the first flush phenomenon. Water Research, 32(8):2341–2356.
Blanch, J., Puig, V., Saludes, J., and Quevedo, J. (2009). ARIMA models for data consistency of flowmeters in water distribution networks. IFAC Proceedings Volumes, 42(8):480–485.
Branisavljević, N., Kapelan, Z., and Prodanović, D. (2011). Improved real-time data anomaly detection using context classification. Journal of Hydroinformatics, 13(3):307–323.
Candelieri, A. (2017). Clustering and support vector regression for water demand forecasting and anomaly detection. Water, 9(3):224.
Casillas, M. V., Puig, V., Garza-Castanón, L. E., and Rosich, A. (2013). Optimal sensor placement for leak location in water distribution networks using genetic algorithms. Sensors, 13(11):14984–15005.
Catsamas, S., Shi, B., Wang, M., and McCarthy, D. (2022). Characterisation and development of a novel low-cost radar velocity and depth sensor. In 10th International Conference on Sewer Processes and Networks.
Chio, A., Peng, J., and Venkatasubramanian, N. (2023). STEP: Semantics-aware sensor placement for monitoring community-scale infrastructure. In Proceedings of the 10th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation, pp. 189–197.
Ciaponi, C., Creaco, E., Di Nardo, A., Di Natale, M., Giudicianni, C., Musmarra, D., and Santonastaso, G. F. (2019). Reducing impacts of contamination in water distribution networks: A combined strategy based on network partitioning and installation of water quality sensors. Water, 11(6):1315.
Cohn, T. A., England, J., Berenbrock, C., Mason, R., Stedinger, J., and Lamontagne, J. (2013). A generalized Grubbs–Beck test statistic for detecting multiple potentially influential low outliers in flood series. Water Resources Research, 49(8):5047–5058.
Copeland, C. (1999). Clean Water Act: A summary of the law. Congressional Research Service, Library of Congress, Washington, DC.
Das, S. and Udgata, S. K. (2021). Sensor placement for contamination source detection in water channel networks. In ICC 2021 - IEEE International Conference on Communications, pp. 1–6. IEEE.
de Winter, C., Palleti, V. R., Worm, D., and Kooij, R. (2019). Optimal placement of imperfect water quality sensors in water distribution networks. Computers & Chemical Engineering, 121:200–211.
Ding, N., Gao, H., Bu, H., Ma, H., and Si, H. (2018). Multivariate-time-series-driven real-time anomaly detection based on Bayesian network. Sensors, 18(10):3367.
Dorini, G., Jonkergouw, P., Kapelan, Z., and Savic, D. (2010). SLOTS: Effective algorithm for sensor placement in water distribution systems. Journal of Water Resources Planning and Management, 136(6):620–628.
Ehsani, N. and Afshar, A. (2010). Optimization of contaminant sensor placement in water distribution networks: Multi-objective approach. In Water Distribution Systems Analysis 2010, pp. 338–346. American Society of Civil Engineers.
EPA (2023). EPA Stormwater Management Model (SWMM).
Festus Biosengazeh, N., Estella Buleng Tamungang, N., Nelson Alakeh, M., and Antoine David, M.-z. (2020). Analysis and water quality control of alternative sources in Bangolan, Northwest Cameroon. Journal of Chemistry, 2020:1–13.
https://github.com/andrewgchio/STEP (2023). GitHub Repository.
Hu, C., Zhao, J., Yan, X., Zeng, D., and Guo, S. (2015). A MapReduce based parallel niche genetic algorithm for contaminant source identification in water distribution network. Ad Hoc Networks, 35:116–126.
Jalal, D. and Ezzedine, T. (2020). Decision tree and support vector machine for anomaly detection in water distribution networks. In 2020 International Wireless Communications and Mobile Computing (IWCMC), pp. 1320–1323. IEEE.
Kansal, M. L., Dorji, T., Chandniha, S. K., and Tyagi, A. (2012). Identification of optimal monitoring locations to detect accidental contaminations. In World Environmental and Water Resources Congress 2012: Crossing Boundaries, pp. 758–776.
Krause, A. and Guestrin, C. (2009). Optimizing sensing: From water to the web. Computer, 42(8):38–45.
Krause, A., Leskovec, J., Guestrin, C., VanBriesen, J., and Faloutsos, C. (2008). Efficient sensor placement optimization for securing large water distribution networks. Journal of Water Resources Planning and Management, 134(6):516–526.
Krause, A., Rajagopal, R., Gupta, A., and Guestrin, C. (2009). Simultaneous placement and scheduling of sensors. In 2009 International Conference on Information Processing in Sensor Networks, pp. 181–192. IEEE.
Kumar, A., Kansal, M. L., Arora, G., Ostfeld, A., and Kessler, A. (1999). Detecting accidental contaminations in municipal water networks. Journal of Water Resources Planning and Management, 125(5):308–310.
Li, D., Chen, D., Jin, B., Shi, L., Goh, J., and Ng, S.-K. (2019). MAD-GAN: Multivariate anomaly detection for time series data with generative adversarial networks. In International Conference on Artificial Neural Networks, pp. 703–716. Springer.
Li, J., Izakian, H., Pedrycz, W., and Jamal, I. (2021). Clustering-based anomaly detection in multivariate time series data. Applied Soft Computing, 100:106919.
Masoner, J. R., Kolpin, D. W., Cozzarelli, I. M., Barber, L. B., Burden, D. S., Foreman, W. T., Forshay, K. J., Furlong, E. T., Groves, J. F., Hladik, M. L., et al. (2019). Urban stormwater: An overlooked pathway of extensive mixed contaminants to surface and groundwaters in the United States. Environmental Science & Technology, 53(17):10070–10081.
McKenna, S. A., Hart, D. B., and Yarrington, L. (2006). Impact of sensor detection limits on protecting water distribution systems from contamination events. Journal of Water Resources Planning and Management, 132(4):305–309.
Mounce, S. R., Khan, A., Wood, A. S., Day, A. J., Widdop, P. D., and Machell, J. (2003). Sensor-fusion of hydraulic data for burst detection and location in a treated water distribution system. Information Fusion, 4(3):217–229.
Müllner, D. (2013). fastcluster: Fast hierarchical, agglomerative clustering routines for R and Python. Journal of Statistical Software, 53:1–18.
Nielsen, F. (2016). Hierarchical clustering. In Introduction to HPC with MPI for Data Science, pp. 195–211.
OpenStreetMap contributors (2017). Planet dump retrieved from https://planet.osm.org. https://www.openstreetmap.org.
Pele, O. and Werman, M. (2009). Fast and robust Earth Mover's Distances. In 2009 IEEE 12th International Conference on Computer Vision, pp. 460–467. IEEE.
Preis, A. and Ostfeld, A. (2008). Multiobjective contaminant sensor network design for water distribution systems. Journal of Water Resources Planning and Management, 134(4):366–377.
Qian, K., Jiang, J., Ding, Y., and Yang, S. (2020). Deep learning based anomaly detection in water distribution systems. In 2020 IEEE International Conference on Networking, Sensing and Control (ICNSC), pp. 1–6. IEEE.
Rathi, S. and Gupta, R. (2014). Sensor placement methods for contamination detection in water distribution networks: A review. Procedia Engineering, 89:181–188.
Sage, J., Bonhomme, C., Berthier, E., and Gromaire, M.-C. (2017). Assessing the effect of uncertainties in pollutant wash-off dynamics in stormwater source-control systems modeling: Consequences of using an inappropriate error model. Journal of Environmental Engineering, 143(2):04016077.
Santos-Ruiz, I., López-Estrada, F.-R., Puig, V., Valencia-Palomo, G., and Hernández, H.-R. (2022). Pressure sensor placement for leak localization in water distribution networks using information theory. Sensors, 22(2):443.
Schwartz, R., Lahav, O., and Ostfeld, A. (2014a). Integrated hydraulic and organophosphate pesticide injection simulations for enhancing event detection in water distribution systems. Water Research, 63:271–284.
Schwartz, R., Lahav, O., and Ostfeld, A. (2014b). Optimal sensor placement in water distribution systems for injection of chlorpyrifos. In World Environmental and Water Resources Congress 2014, pp. 485–494.
Semadeni-Davies, A., Hernebring, C., Svensson, G., and Gustafsson, L.-G. (2008). The impacts of climate change and urbanisation on drainage in Helsingborg, Sweden: Suburban stormwater. Journal of Hydrology, 350(1–2):114–125.
Shahra, E. Q. and Wu, W. (2023). Water contaminants detection using sensor placement approach in smart water networks. Journal of Ambient Intelligence and Humanized Computing, 14(5):4971–4986.
Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3):379–423.
Shastri, Y. and Diwekar, U. (2006). Sensor placement in water networks: A stochastic programming approach. Journal of Water Resources Planning and Management, 132(3):192–203.
Shi, B., Catsamas, S., Kolotelo, P., Wang, M., Lintern, A., Jovanovic, D., Bach, P. M., Deletic, A., and McCarthy, D. T. (2021). A low-cost water depth and electrical conductivity sensor for detecting inputs into urban stormwater networks. Sensors, 21(9):3056.
Skinner, L., De Peyster, A., and Schiff, K. (1999). Developmental effects of urban storm water in medaka (Oryzias latipes) and inland silverside (Menidia beryllina). Archives of Environmental Contamination and Toxicology, 37:227–235.
Smith, R. W. (2002). The use of random-model tolerance intervals in environmental monitoring and regulation. Journal of Agricultural, Biological, and Environmental Statistics, 7:74–94.
Soldevila, A., Blesa, J., Tornil-Sin, S., Fernandez-Canti, R. M., and Puig, V. (2018). Sensor placement for classifier-based leak localization in water distribution networks using hybrid feature selection. Computers & Chemical Engineering, 108:152–162.
Syakur, M., Khotimah, B. K., Rochman, E., and Satoto, B. D. (2018). Integration K-means clustering method and Elbow method for identification of the best customer profile cluster. In IOP Conference Series: Materials Science and Engineering, volume 336, p. 012017. IOP Publishing.
Venkateswaran, P., Han, Q., Eguchi, R. T., and Venkatasubramanian, N. (2018). Impact driven sensor placement for leak detection in community water networks. In 2018 ACM/IEEE 9th International Conference on Cyber-Physical Systems (ICCPS), pp. 77–87. IEEE.
Wang, M., Shi, B., Catsamas, S., and McCarthy, D. T. (2021). An innovative low-cost turbidity sensor for long-term turbidity monitoring in the urban water system. In International Conference on Urban Drainage, Melbourne.
Watson, J.-P., Murray, R., and Hart, W. E. (2009). Formulation and optimization of robust sensor placement problems for drinking water contamination warning systems. Journal of Infrastructure Systems, 15(4):330–339.
Yan, X., Gong, J., and Wu, Q. (2021). Pollution source intelligent location algorithm in water quality sensor networks. Neural Computing and Applications, 33:209–222.
Yoganathan, D., Kondepudi, S., Kalluri, B., and Manthapuri, S. (2018). Optimal sensor placement strategy for office buildings using clustering algorithms. Energy and Buildings, 158:1206–1225.
